The following is Part 3 of 3 from Addy Osmani's original post "Context Engineering: Bringing Engineering Discipline to Prompts." Part 1 can be found here and Part 2 here.
Context engineering is critical, but it's only one component of a larger stack needed to build full-fledged LLM applications, alongside things like control flow, model orchestration, tool integration, and guardrails.
In Andrej Karpathy's words, context engineering is "one small piece of an emerging thick layer of non-trivial software" that powers real LLM apps. So while we've focused on how to craft good context, it's important to see where that fits in the overall architecture.
A production-grade LLM system typically has to handle many concerns beyond just prompting, for example:
- Problem decomposition and control flow: Instead of treating a user query as one monolithic prompt, robust systems often break the problem down into subtasks or multistep workflows. For instance, an AI agent might first be prompted to outline a plan, then in subsequent steps be prompted to execute each step. Designing this flow (which prompts to call in what order, how to decide branching or looping) is a classic programming task, except the "functions" are LLM calls with context. Context engineering fits in here by making sure each step's prompt has the information it needs, but the decision to have steps at all is a higher-level design choice. This is why you see frameworks where you essentially write a script that coordinates multiple LLM calls and tool uses.
- Model selection and routing: You might use different AI models for different jobs: perhaps a lightweight model for simple tasks or initial answers, and a heavyweight model for final solutions. Or a code-specialized model for coding tasks versus a general model for conversational tasks. The system needs logic to route requests to the appropriate model. Each model might have different context length limits or formatting requirements, which the context engineering must account for (e.g., truncating context more aggressively for a smaller model). This aspect is more engineering than prompting: think of it as matching the tool to the job. (A minimal routing sketch follows this list.)
- Tool integrations and external actions: If your AI can perform actions (like calling an API, running database queries, opening a web page, executing code), your software needs to manage those capabilities. That includes providing the AI with a list of available tools and instructions on usage, as well as actually executing those tool calls and capturing the results. As we discussed, the results then become new context for further model calls. Architecturally, this means your app often has a loop: prompt model → if model output indicates a tool to use → execute tool → incorporate result → prompt model again. Designing that loop reliably is a challenge (see the loop sketch after this list).
- User interaction and UX flows: Many LLM applications involve the user in the loop. For example, a coding assistant might propose changes and then ask the user to confirm applying them. Or a writing assistant might offer a few draft options for the user to pick from. These UX choices affect context too. If the user says "Option 2 looks good but shorten it," you need to carry that feedback into the next prompt (e.g., "The user chose draft 2 and asked to shorten it."). Designing a smooth human-AI interaction flow is part of the app, though not directly about prompts. Still, context engineering supports it by ensuring each turn's prompt accurately reflects the state of the interaction (like remembering which option was chosen or what the user edited manually).
- Guardrails and safety: In production, you have to consider misuse and errors. This might include content filters (to prevent toxic or sensitive outputs), authentication and permission checks for tools (so the AI doesn't, say, delete a database just because it was in the instructions), and validation of outputs. Some setups use a second model or rules to double-check the first model's output. For example, after the main model generates an answer, you might run another check: "Does this answer contain any sensitive information? If so, redact it." Those checks can themselves be implemented as prompts or as code. Either way, they often add extra instructions into the context (a system message like "If the user asks for disallowed content, refuse." is part of many deployed prompts). So the context may always include some safety boilerplate. Balancing that (ensuring the model follows policy without compromising helpfulness) is yet another piece of the puzzle.
- Evaluation and monitoring: Suffice it to say, you need to constantly monitor how the AI is performing. Logging every request and response (with user consent and privacy in mind) lets you analyze failures and outliers. You might incorporate real-time evals, e.g., scoring the model's answers on certain criteria and, if the score is low, automatically having the model try again or routing to a human fallback. While evaluation isn't part of generating a single prompt's content, it feeds back into improving prompts and context strategies over time. Essentially, you treat the prompt and context assembly as something that can be debugged and optimized using data from production.
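To make the routing point concrete, here is a minimal sketch. The model names, window sizes, and the characters-per-token heuristic are assumptions invented for illustration; they don't come from the original post or any particular provider's API:

```python
# Minimal routing sketch. Model names, window sizes, and the 4-characters-per-
# token heuristic are illustrative assumptions only.

def estimate_tokens(text: str) -> int:
    # Very rough heuristic: roughly 4 characters per token for English text.
    return len(text) // 4

def route_request(task_type: str, context: str) -> dict:
    """Pick a model for the task and trim the context to fit its window."""
    if task_type == "code":
        model, window = "code-model-large", 32_000      # hypothetical model
    elif estimate_tokens(context) < 2_000:
        model, window = "small-fast-model", 8_000       # hypothetical model
    else:
        model, window = "general-model-large", 128_000  # hypothetical model

    # Smaller windows force more aggressive truncation; keep the tail of the
    # context on the assumption that the most recent material matters most.
    max_chars = window * 4
    return {"model": model, "context": context[-max_chars:]}
```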
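And here is a bare-bones version of the prompt → tool → prompt loop described under "Tool integrations and external actions." The call_model placeholder, the JSON tool-call convention, and the search_docs tool are all assumptions made for the sake of the sketch; real agent frameworks and model SDKs each define their own interfaces:

```python
import json

# Hypothetical tool registry: tool name -> Python callable.
TOOLS = {
    "search_docs": lambda query: f"(top documentation hits for {query!r})",
}

def call_model(messages: list[dict]) -> str:
    """Placeholder for a real LLM API call; returns the model's text reply."""
    raise NotImplementedError  # wire up your provider's SDK here

def run_agent(user_request: str, max_steps: int = 5) -> str:
    messages = [
        {"role": "system", "content":
            "Answer directly, or reply with JSON like "
            '{"tool": "<name>", "input": "<string>"} to use a tool. '
            f"Available tools: {list(TOOLS)}"},
        {"role": "user", "content": user_request},
    ]
    for _ in range(max_steps):
        reply = call_model(messages)
        try:
            call = json.loads(reply)                     # tool request?
            result = TOOLS[call["tool"]](call["input"])  # execute the tool
        except (ValueError, KeyError, TypeError):
            return reply                                 # plain answer: done
        # Incorporate the tool result as new context for the next model call.
        messages.append({"role": "assistant", "content": reply})
        messages.append({"role": "user", "content": f"Tool result: {result}"})
    return "Stopped: too many tool steps."
```

The important part is the shape of the loop: every tool result is folded back into the message list, so the next model call sees it as context.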
We're really talking about a new kind of application architecture. It's one where the core logic involves managing information (context) and adapting it through a series of AI interactions, rather than just running deterministic functions. Karpathy listed elements like control flows, model dispatch, memory management, tool use, and verification steps on top of context filling. All together, they form what he jokingly calls "an emerging thick layer" for AI apps: thick because it's doing a lot! When we build these systems, we're essentially writing metaprograms: programs that choreograph another "program" (the AI's output) to solve a task.
For us software engineers, this is both exciting and challenging. It's exciting because it opens up capabilities we didn't have before, such as building an assistant that can handle natural language, code, and external actions seamlessly. It's challenging because many of the techniques are new and still in flux. We have to think about things like prompt versioning, AI reliability, and ethical output filtering, which weren't standard parts of app development before. Within this larger stack, context engineering lies at the heart of the system: If you can't get the right information into the model at the right time, nothing else will save your app. But as we've seen, even perfect context alone isn't enough; you need all of the supporting structure around it.
The takeaway is that we're moving from prompt design to system design. Context engineering is a core part of that system design, but it lives alongside many other components.
Conclusion
Key takeaway: By mastering the assembly of full context (and coupling it with solid testing), we can improve the chances of getting the best output from AI models.
For experienced engineers, much of this paradigm is familiar at its core (it's about good software practices), just applied in a new domain. Think about it:
- We always knew garbage in, garbage out. Now that principle manifests as "bad context in, bad answer out." So we put more work into ensuring quality input (context) rather than hoping the model will figure it out.
- We value modularity and abstraction in code. Now we're effectively abstracting tasks to a high level (describe the task, give examples, let the AI implement it) and building modular pipelines of AI + tools. We're orchestrating components (some deterministic, some AI) rather than writing all the logic ourselves.
- We practice testing and iteration in traditional development. Now we're applying the same rigor to AI behaviors, writing evals and refining prompts as one would refine code after profiling.
In embracing context engineering, you're essentially saying, "I, the developer, am responsible for what the AI does." It's not a mysterious oracle; it's a component I need to configure and drive with the right data and rules.
This mindset shift is empowering. It means we don't have to treat the AI as unpredictable magic; we can tame it with solid engineering techniques (plus a bit of creative prompt artistry).
Practically, how can you adopt this context-centric approach in your own work?
- Invest in data and knowledge pipelines. A big part of context engineering is having the data to inject. So build that vector search index of your documentation, or set up that database query your agent can use. Treat knowledge sources as core features in development. For example, if your AI assistant is for coding, make sure it can pull in code from the repo or reference the style guide. A lot of the value you'll get from an AI comes from the external knowledge you supply to it. (A toy retrieval sketch follows this list.)
- Develop prompt templates and libraries. Rather than ad hoc prompts, start creating structured templates for your needs. You might have a template for "answer with citation" or "generate code diff given error." These become like functions you reuse. Keep them in version control. Document their expected behavior. This is how you build up a toolkit of proven context setups. Over time, your team can share and iterate on these, just as they would on shared code libraries. (See the template sketch after this list.)
- Use tools and frameworks that give you control. Avoid "just give us a prompt, we do the rest" solutions if you need reliability. Opt for frameworks that let you peek under the hood and tweak things, whether that's a lower-level library like LangChain or a custom orchestration you build. The more visibility and control you have over context assembly, the easier it is to debug when something goes wrong.
- Monitor and instrument everything. In production, log the inputs and outputs (within privacy limits) so you can analyze them later. Use observability tools (like LangSmith, etc.) to trace how context was assembled for each request. When an output is bad, trace back and see what the model saw: Was something missing? Was something formatted poorly? This will guide your fixes. Essentially, treat your AI system as a somewhat unpredictable service that you need to monitor like any other, with dashboards for prompt usage, success rates, and so on.
- Keep the user in the loop. Context engineering isn't just about machine-to-machine information; it's ultimately about solving a user's problem. Often, the user can provide context if asked the right way. Think about UX designs where the AI asks clarifying questions or where the user can provide extra details to refine the context (like attaching a file, or selecting which part of the codebase is relevant). The term "AI-assisted" goes both ways: AI assists the user, but the user can assist the AI by supplying context. A well-designed system facilitates that. For example, if an AI answer is wrong, let the user correct it and feed that correction back into the context for next time.
- Train your team (and yourself). Make context engineering a shared discipline. In code reviews, start reviewing prompts and context logic too. ("Is this retrieval grabbing the right docs? Is this prompt section clear and unambiguous?") If you're a tech lead, encourage team members to surface issues with AI outputs and brainstorm how tweaking the context might fix them. Knowledge sharing is key because the field is new; a clever prompt trick or formatting insight one person discovers can benefit others. I've personally learned a ton just from reading others' prompt examples and postmortems of AI failures.
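As a concrete (if deliberately toy) illustration of the "invest in data pipelines" point above, the sketch below fakes a vector index with bag-of-words vectors and cosine similarity. A real system would use an embedding model and a vector store, but the shape is the same: retrieve the most relevant snippets, then paste them into the prompt. The documents and helper names here are invented for the example:

```python
import math
import re
from collections import Counter

# Toy stand-in for a vector index: bag-of-words counts plus cosine similarity
# play the role of real embeddings and a vector database. Illustrative only.

def vectorize(text: str) -> Counter:
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k docs most similar to the query, ready to paste into a prompt."""
    query_vec = vectorize(query)
    return sorted(docs, key=lambda d: cosine(query_vec, vectorize(d)), reverse=True)[:k]

project_notes = [
    "Style guide: prefer descriptive variable names over abbreviations.",
    "Deployments: services ship on the weekly release train.",
    "Error handling: wrap external calls in retries with backoff.",
]
question = "What does the style guide say about variable names?"
context = "\n".join(retrieve(question, project_notes))
prompt = f"Use these project notes to answer.\n{context}\n\nQuestion: {question}"
```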
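And here is one way the prompt-template idea can look in code: a named, versioned template that behaves like a reusable function. The template text and the answer_with_citation name are hypothetical; the point is that templates live in version control and are documented like any other shared utility:

```python
from string import Template

# A tiny prompt-template "library": a named, versioned constant plus a helper
# that fills it in. Wording and names are invented for illustration.

ANSWER_WITH_CITATION_V1 = Template(
    "Answer the question using only the sources below.\n"
    "Cite the source id in brackets after each claim.\n\n"
    "Sources:\n$sources\n\nQuestion: $question\nAnswer:"
)

def answer_with_citation(question: str, sources: dict[str, str]) -> str:
    """Assemble the prompt text; the caller sends it to whatever model it uses."""
    formatted = "\n".join(f"[{sid}] {text}" for sid, text in sources.items())
    return ANSWER_WITH_CITATION_V1.substitute(sources=formatted, question=question)

print(answer_with_citation(
    "When did the service launch?",
    {"doc1": "The service launched in March 2021.",
     "doc2": "It was rewritten in Go in 2023."},
))
```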
As we move forward, I expect context engineering to become second nature, much like writing an API call or a SQL query is today. It will be part of the standard repertoire of software development. Already, many of us don't think twice about doing a quick vector similarity search to grab context for a question; it's just part of the flow. In a few years, "Have you set up the context properly?" will be as common a code review question as "Have you handled that API response properly?"
In embracing this new paradigm, we don't abandon the old engineering principles; we reapply them in new ways. If you've spent years honing your software craft, that experience is incredibly valuable now: It's what allows you to design sensible flows, spot edge cases, and ensure correctness. AI hasn't made those skills obsolete; it's amplified their importance in guiding AI. The role of the software engineer is not diminishing. It's evolving. We're becoming directors and editors of AI, not just writers of code. And context engineering is the process by which we direct the AI effectively.
Start thinking in terms of what information you provide to the model, not just what question you ask. Experiment with it, iterate on it, and share your findings. By doing so, you'll not only get better results from today's AI but also be preparing yourself for the even more powerful AI systems on the horizon. Those who understand how to feed the AI will always have the advantage.
Happy context-coding!
I'm excited to share that I've written a new AI-assisted engineering book with O'Reilly. If you've enjoyed my writing here, you may be interested in checking it out.
AI tools are quickly moving beyond chat UX to sophisticated agent interactions. Our upcoming AI Codecon event, Coding for the Agentic World, will highlight how developers are already using agents to build innovative and effective AI-powered experiences. We hope you'll join us on September 9 to explore the tools, workflows, and architectures defining the next era of programming. It's free to attend. Register now to save your seat.