Copilot
GitHub Copilot is a daily part of my development workflow. Here is how I use it, and where I draw the lines.
Copilot is genuinely useful for a specific class of work — high-pattern, low-ambiguity code where the shape of the output is predictable. Where it creates problems is when you stop reading what it writes. The mental model shift isn't about trusting it more; it's about being precise about when the suggestion is in a domain it handles well versus when it's confident and wrong.

My heaviest Copilot usage is in two situations: boilerplate that I'd otherwise write by rote (route handlers, DTO classes, migration files, test scaffolding), and syntactic heavy lifting when I'm working in a language I don't have muscle memory for. In both cases, I'm driving with comments and type signatures — the prompt is the code structure I've already committed to, and Copilot fills in the implementation I'd write anyway. For anything involving non-obvious invariants, security-sensitive logic, or business rules with edge cases I care about, I write it myself.

The failure mode I've noticed most often isn't the obviously wrong suggestion — it's the plausible one that's subtly off in a way that only surfaces under load or in an edge case. My rule is that any Copilot-generated block gets the same read-through I'd give an unreviewed PR. That discipline is what makes it net-positive.
Mental Models for Effective Use
The patterns I've found reliable: lead with comments and type signatures so the completion has enough context to be idiomatic, accept completions for structural boilerplate, and discard or heavily edit completions for logic with invariants. The split between "accept" and "steer" is something you calibrate over time, and it's worth making explicit up front rather than leaving to trial and error.
Copilot for Business Configuration
Content exclusions and telemetry settings matter operationally. Excluding sensitive files and internal credential patterns from the context window is the first configuration step. Usage telemetry is worth turning on early — not for surveillance, but because actual acceptance rate data is more useful than impressions when deciding whether a tooling investment is paying off.
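As one concrete sketch of the exclusion step: at the organization level, content exclusions are expressed as path patterns keyed by repository. Treat the exact keys below as an assumption to verify against GitHub's current documentation, and every repository and path name here is invented:

```yaml
# Assumed shape of an org-level Copilot content-exclusion config:
# repository references map to lists of path patterns Copilot should
# never read as context. "*" applies to every repository.
"*":
  - "**/.env"
  - "**/secrets/**"
octo-org/internal-billing:
  - "/config/credentials.yaml"
```

The principle matters more than the syntax: decide which paths must never enter the context window before rolling the tool out, not after.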
Measuring Actual Impact
The honest way to measure Copilot's impact is to track acceptance rate alongside code churn on accepted completions. High acceptance rate with high churn on those specific lines means you're accepting suggestions and then fixing them — which is net-negative. Low acceptance rate on boilerplate-heavy work means the context isn't set up right. Both are more useful signals than time-saved estimates.
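A sketch of how I'd compute the two signals from an export of completion events. The field names are illustrative, not Copilot's actual telemetry schema; the shape of the calculation is the point:

```python
# Two signals per the argument above: how often completions are
# accepted, and how much of the accepted code gets rewritten later.

def completion_signals(events: list[dict]) -> dict[str, float]:
    accepted = [e for e in events if e["accepted"]]
    acceptance_rate = len(accepted) / len(events) if events else 0.0
    # Churn: fraction of accepted lines edited or deleted within
    # whatever review window your diff tooling gives you.
    lines_accepted = sum(e["lines"] for e in accepted)
    lines_churned = sum(e["lines_changed_after"] for e in accepted)
    churn = lines_churned / lines_accepted if lines_accepted else 0.0
    return {"acceptance_rate": acceptance_rate, "churn_on_accepted": churn}
```

High acceptance with high churn on those same lines is the "accepting and then fixing" pattern the paragraph above flags as net-negative.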
Boilerplate Elimination
Route handlers, DTO classes, migration files — I let Copilot write these because the pattern is well-defined and I'm going to read every line anyway. The time savings are real on this class of work, and the cognitive overhead of reviewing is low because the correct output is structurally obvious.
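A hedged example of the DTO class case, in Python (the field names are invented for illustration). The correct output is obvious at a glance, which is what keeps the review cost low:

```python
from dataclasses import dataclass, asdict

# Typical DTO boilerplate I hand off to a completion: the pattern is
# well-defined, so reading every line is a fast structural check.

@dataclass(frozen=True)
class UserDTO:
    id: int
    email: str
    display_name: str

    def to_json(self) -> dict:
        return asdict(self)
```

There's no invariant hiding in a class like this, so the only review question is whether the fields and types match the schema you already decided on.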
Unfamiliar Stack Exploration
When I'm writing Rust for the first time or translating a bash script to Python, Copilot handles the syntactic overhead while I focus on the logic. I verify idioms against documentation before accepting anything non-trivial, but the completion rate for syntactic boilerplate in unfamiliar languages is high enough to be genuinely useful.
Documentation as Code
I generate inline docstrings and JSDoc comments from the implementation, then review them against the actual code path. The model handles structure and prose well; I correct anything that describes the interface incorrectly, omits meaningful edge-case behavior, or drifts from what the function actually does.
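As an illustration of what reviewing against the actual code path means, a sketch in Python (the function is invented for illustration). The edge-case sentences in the docstring are exactly the part a generated draft tends to omit, and the part I check against the implementation:

```python
def chunk(seq: list, size: int) -> list[list]:
    """Split seq into consecutive chunks of up to `size` elements.

    The final chunk may be shorter than `size`. An empty seq yields
    an empty list. Raises ValueError if size < 1.
    """
    # The two edge-case sentences above are the ones to verify: they
    # describe behavior the code actually has, not behavior a plausible
    # draft might claim.
    if size < 1:
        raise ValueError("size must be >= 1")
    return [seq[i : i + size] for i in range(0, len(seq), size)]
```

A generated docstring that said only "splits a list into chunks" would be accurate but incomplete; the review step is what turns it into documentation worth keeping.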