Codex
AI code generation as a disciplined part of the development workflow.
Codex-class models are good at high-pattern, low-ambiguity code: test scaffolding, mechanical migrations, boilerplate, documentation derived from existing implementation. The real leverage is knowing exactly where that boundary is and treating model output with the same skepticism you'd apply to any unreviewed diff.

I use AI code generation for a specific category of work: code where the pattern is well-established, the correctness criteria are testable, and writing it by hand would be time spent on mechanics rather than design. That covers test bootstrapping for untested modules, mechanical version migrations, route and DTO scaffolding, and docstring generation from live code.

The discipline I've built around it is treating every output as a first draft from a fast junior dev: I read it like a code review, run it through the same lint and test gates as anything else, and reject or rewrite anything that doesn't meet the bar. Prompt patterns matter: I give the model explicit type signatures, an example of idiomatic output, and the constraints it needs to stay in scope. That produces consistent, reviewable results instead of creative surprises.
What Goes Through AI Generation
I keep a clear mental boundary: AI generation is for code where correctness is structurally verifiable — things with a test, a type, or a schema to check against. Business logic, security-sensitive paths, and anything with subtle invariants I write by hand.
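A minimal sketch of what "structurally verifiable" means in practice, assuming a hypothetical `UserDTO` and `parse_user` (these names are mine, not from the text): the generated parser is checked field-by-field against a declared type, so review doesn't depend on guessing the model's intent.

```python
from dataclasses import dataclass

# Hypothetical DTO: the field names and types are the verification
# target for any generated constructor or parser.
@dataclass(frozen=True)
class UserDTO:
    id: int
    email: str
    active: bool

def parse_user(raw: dict) -> UserDTO:
    """Generated parser: correctness is checkable against the
    dataclass shape, not by reading the model's prose."""
    return UserDTO(
        id=int(raw["id"]),
        email=str(raw["email"]),
        active=bool(raw.get("active", True)),
    )

# Structural check: the type either holds or the assertion fails.
user = parse_user({"id": "7", "email": "a@b.example"})
assert isinstance(user.id, int) and user.active is True
```

Code with this shape passes or fails a check mechanically; business logic with subtle invariants has no equivalent oracle, which is why it stays hand-written.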
Prompt Patterns for Idiomatic Output
Prompts that produce usable code are specific: I include the target type signatures, a concrete example of the style I want, and explicit constraints (no third-party deps, match the existing error-handling pattern). Vague prompts produce vague code.
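One way to make that specificity repeatable is to assemble prompts from the three parts named above. This is an illustrative sketch, not a prescribed template; `build_codegen_prompt` and the example inputs are hypothetical.

```python
def build_codegen_prompt(signature: str, example: str,
                         constraints: list[str]) -> str:
    """Assemble a constrained prompt: target signature, one
    idiomatic style example, and explicit scope limits."""
    rules = "\n".join(f"- {c}" for c in constraints)
    return (
        f"Implement exactly this signature:\n{signature}\n\n"
        f"Match the style of this example:\n{example}\n\n"
        f"Constraints:\n{rules}\n"
        "Output only the function body, no commentary."
    )

prompt = build_codegen_prompt(
    signature="def retry(fn: Callable[[], T], attempts: int) -> T: ...",
    example="def once(fn):\n    return fn()",
    constraints=[
        "no third-party dependencies",
        "match the existing error-handling pattern",
    ],
)
```

Pinning the signature and constraints up front is what turns the model's output into something diffable against an expectation rather than a freeform guess.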
CI Gates for AI-Generated Code
AI-generated code goes through the same pipeline as everything else — lint, type check, test suite, and where applicable a semgrep pass for common security antipatterns. The gate doesn't care how the code was written.
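A sketch of that origin-blind gate, assuming a simple runner script (the gate names and the choice of tools in the comment are mine; the text only requires lint, type check, tests, and optionally semgrep):

```python
import subprocess
import sys

def run_gates(gates: list[tuple[str, list[str]]]) -> list[str]:
    """Run each gate command; return the names of the ones that
    failed. The gate never inspects how the code was written."""
    failed = []
    for name, argv in gates:
        if subprocess.run(argv, capture_output=True).returncode != 0:
            failed.append(name)
    return failed

# Demo with trivial stand-in commands; a real pipeline would invoke
# e.g. ruff, mypy, pytest, and semgrep here (tool choice assumed).
gates = [
    ("lint", [sys.executable, "-c", "pass"]),
    ("tests", [sys.executable, "-c", "raise SystemExit(0)"]),
]
assert run_gates(gates) == []
```

The point of the uniform pipeline is that "written by a model" never becomes a reason to skip a check or to add a weaker, parallel one.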
Test Generation
For modules with no existing coverage, I prompt for a test suite given the function signatures and a description of expected behavior. I treat the output as a starting scaffold — read every assertion, delete the ones that are tautological, and fill in the edge cases the model missed.
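What that review pass looks like on a concrete scaffold, using a hypothetical `slugify` module as the code under test (the function and test names are illustrative):

```python
import re

def slugify(title: str) -> str:
    """Module under test (hypothetical)."""
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

# Generated scaffold, post-review. A tautological assertion like
# `assert slugify("abc") == slugify("abc")` was deleted; the
# empty-string and punctuation-only cases below were edge cases
# the model missed and were filled in by hand.
def test_basic():
    assert slugify("Hello World") == "hello-world"

def test_edge_cases():
    assert slugify("") == ""
    assert slugify("!!!") == ""
    assert slugify("  Spaces  ") == "spaces"

test_basic()
test_edge_cases()
```

Reading every assertion matters because a generated suite can pass at 100% while asserting nothing that would catch a regression.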
Code Modernization
Mechanical migrations — Python 2 to 3, class components to hooks, CommonJS to ESM — are a good fit for AI generation because the transformation rules are well-defined. I generate in chunks, diff against the original, and verify behavior with the existing test suite.
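A small sketch of the chunk-and-diff workflow for a Python 2 to 3 migration, with a hypothetical `totals` function (the legacy snippet is kept as a string purely to stand in for the pre-migration side of the diff):

```python
# Original Python 2 chunk, retained for the diff step:
legacy = '''
def totals(d):
    out = {}
    for k, v in d.iteritems():
        out[k] = v / 2
    return out
'''

# Migrated chunk: iteritems() -> items(), and the division semantics
# made explicit, since / silently changed meaning between versions.
def totals(d: dict[str, int]) -> dict[str, float]:
    out = {}
    for k, v in d.items():  # iteritems() was removed in Python 3
        out[k] = v / 2      # true division; use // only if the
                            # legacy code relied on floor division
    return out

# Behavior verified against the existing expectation, not eyeballed.
assert totals({"a": 5}) == {"a": 2.5}
```

The division comment is the kind of detail the existing test suite catches: the transformation rule is well-defined, but the old code's intent (floor vs. true division) still has to be confirmed per call site.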
Documentation Generation
I generate docstrings and API docs from existing implementation, then review them for accuracy against the actual code path. The model is good at structure and prose; I correct anything that describes the interface incorrectly or omits meaningful edge-case behavior.
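One way to keep generated docs honest against the actual code path is to make the examples in them executable. A sketch with a hypothetical `clamp` function, using Python's doctest module so the docstring is checked rather than trusted:

```python
import doctest

def clamp(x: float, lo: float, hi: float) -> float:
    """Clamp x to the closed interval [lo, hi].

    The generated draft claimed x is returned unchanged when out of
    range; that was wrong and was corrected during review. The
    examples below execute, so the doc can't silently drift.

    >>> clamp(5, 0, 10)
    5
    >>> clamp(-1, 0, 10)
    0
    """
    return max(lo, min(hi, x))

# Run every docstring example in this module as a check.
assert doctest.testmod().failed == 0
```

This doesn't replace the manual accuracy review, but it mechanically catches the most common failure mode: prose that describes an interface the code no longer has.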
Let's talk Codex.
No pitch. Just a technical conversation about the problem you're working on.