
By Roland Cadavos · AI & developer tools

LLM Coding Assistants: From Novelty to Daily Workflow (2023)

ChatGPT and IDE plugins changed how we write boilerplate, tests, and docs. The best teams treated AI as an accelerator—not a substitute for judgment.

2023 was the year large language models landed inside editors and terminals. Suddenly everyone had a tireless pair programmer for scaffolding, regex, SQL, and first-draft documentation. Velocity went up for rote tasks; code review had to evolve to catch subtle wrong-but-plausible suggestions. The failure modes changed: fewer syntax errors, more confident-looking mistakes in edge cases.

Mature teams set guardrails: no merging AI-generated security-sensitive code without human verification, pinned prompts for repetitive internal patterns, and a culture that still valued system design and testing strategy over raw line count. Security reviews began asking not only “is this code correct?” but “how do we know the model had the right context?” because context windows and training cutoffs mattered.
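A guardrail like "no merging AI-generated security-sensitive code without human verification" can be enforced mechanically. Here is a minimal sketch of such a pre-merge check; the path patterns and the `requires_human_review` helper are illustrative, not any particular team's policy:

```python
import re

# Hypothetical list of security-sensitive path patterns; a real policy
# would live in repo config and be maintained by the security team.
SECURITY_SENSITIVE = [
    re.compile(r"^auth/"),
    re.compile(r"^crypto/"),
    re.compile(r"(?i)secrets?\."),
]

def requires_human_review(changed_paths, ai_assisted):
    """Return True when an AI-assisted change touches a sensitive path."""
    if not ai_assisted:
        return False
    return any(
        pattern.search(path)
        for path in changed_paths
        for pattern in SECURITY_SENSITIVE
    )
```

A CI job could call this on the diff's file list and block the merge until a designated reviewer signs off.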

Education shifted. Junior developers could move faster with assistance, which made mentorship more about taste, architecture, and verification than about memorizing APIs. Seniors spent more time on reviews, standards, and examples in the repo—because those became the steering wheel for generated code. Teams that invested in crisp patterns and test harnesses saw better model output; teams with messy codebases sometimes amplified inconsistency.

Intellectual property and licensing questions surfaced for enterprises. Legal and engineering had to collaborate on what could be pasted into tools, what could leave the network, and how to handle generated snippets that resembled public code. Policies varied, but the trend was clear: treat assistants like any other dependency—with boundaries, logging, and accountability.

Documentation improved in places because drafting became cheaper. The hard part remained maintenance: auto-generated docs rot quickly if nobody owns updates. Successful teams wired docs into CI, linked them to code owners, and used assistants to suggest diffs when APIs changed—still with human sign-off.
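"Wiring docs into CI" can be as simple as failing the build when a public API goes unmentioned. A toy sketch, assuming Python source and markdown docs (the sample module and doc strings here are invented for illustration):

```python
import ast

def public_functions(source: str) -> set:
    """Collect top-level public function names from Python source."""
    tree = ast.parse(source)
    return {
        node.name
        for node in tree.body
        if isinstance(node, ast.FunctionDef) and not node.name.startswith("_")
    }

def stale_doc_entries(source: str, doc_text: str) -> set:
    """Public functions the docs never mention — candidates for a doc diff."""
    return {name for name in public_functions(source) if name not in doc_text}

# Illustrative inputs; a real CI step would read these from the repo.
module = "def connect(url): ...\ndef _helper(): ...\ndef close(conn): ..."
docs = "## API\n`connect(url)` opens a session."
```

On these inputs, `stale_doc_entries` flags `close` as undocumented; an assistant could then be asked to draft the missing section, with a human approving the diff.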

Testing strategies evolved. Property-based tests, contract tests, and snapshot discipline helped catch regressions when AI refactored large areas. Teams also leaned on staging environments and feature flags to reduce blast radius when experiments moved faster than human review could comfortably absorb.
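The property-based idea can be shown without any framework: generate random inputs and assert invariants that must survive a refactor. A hand-rolled sketch (in practice a library like Hypothesis does this with shrinking and better generators; `refactored_sort` stands in for AI-refactored code under test):

```python
import random

def refactored_sort(xs):
    # Stand-in for an AI-refactored implementation being validated.
    return sorted(xs)

def check_sort_properties(trials=200, seed=0):
    """Property-style check: output is ordered and a permutation of the input."""
    rng = random.Random(seed)
    for _ in range(trials):
        xs = [rng.randint(-100, 100) for _ in range(rng.randint(0, 20))]
        out = refactored_sort(xs)
        assert all(a <= b for a, b in zip(out, out[1:])), xs
        assert sorted(xs) == out and len(out) == len(xs), xs
    return True
```

Properties like these catch the "confident-looking mistake in an edge case" far more reliably than a handful of example-based tests.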

Observability for AI-assisted workflows was tricky: logging raw prompts risked leaking secrets, yet teams needed signal to debug bad outputs. Practical compromises included redaction, short retention, internal-only tooling, and hashed fingerprints for correlating incidents without storing full text.
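Redaction plus hashed fingerprints might look like the following sketch. The secret patterns are deliberately simplistic examples; a real deployment would use a vetted secret scanner:

```python
import hashlib
import re

# Illustrative secret patterns only — not a complete scanner.
SECRET_PATTERNS = [
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"),
    re.compile(r"(?i)bearer\s+[A-Za-z0-9._-]+"),
]

def redact(prompt: str) -> str:
    """Replace likely secrets before a prompt is logged."""
    for pattern in SECRET_PATTERNS:
        prompt = pattern.sub("[REDACTED]", prompt)
    return prompt

def fingerprint(prompt: str) -> str:
    """Stable hash for correlating incidents without storing full text."""
    return hashlib.sha256(prompt.encode("utf-8")).hexdigest()[:16]

log_entry = {
    "fingerprint": fingerprint("summarize api_key=sk-123 usage"),
    "prompt": redact("summarize api_key=sk-123 usage"),
}
```

The fingerprint lets two incidents involving the same prompt be correlated later even after the redacted text, under a short retention window, has been deleted.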

The niche insight for senior developers: AI amplifies your taste. It rewards clear specs, good examples in the repo, and strong test suites—because that context steers output toward something shippable. In 2023, the developers who treated LLMs as amplifiers rather than oracles gained leverage without surrendering responsibility. The keyboard was still theirs; the judgment had to be, too.