Talk of AI is all around us, but in my experience, practical guidance rooted in specific use cases is surprisingly rare. After spending months deep in the weeds of a massive documentation migration with AI as my assistant, I've learned some hard-won lessons that I think others might benefit from.
If you work in content engineering, technical documentation, or are simply curious about how AI holds up in a complex, real-world project, here's my take on what worked and what didn't.
Project Context
I'm a DITA Information Architect on the Information Experience team at Splunk. DITA, short for Darwin Information Typing Architecture, is an open, XML-based standard for structuring and managing technical content.
We recently wrapped up the migration of three large documentation sites into a single help portal, powered by a DITA-based component content management system (CCMS). The timeline was tight, and nearly all of the resources were internal. The migrations were complex and critical to the business, requiring careful planning and execution.
I initially planned only to support the migration of the smaller, unversioned site. When that went well, I was asked to lead the much larger second migration. (The third site was handled by another team.) Together, these two migrations meant grappling with roughly 30,000 HTML files, two very different site architectures, and the challenge of customizing an existing Python migration script to fit the content at hand, while also putting processes in place for writers to review and clean up their content.
I want to be clear that AI didn't complete this project for me. It enabled me to work faster and more efficiently, though only while I did the planning, architecting, and troubleshooting. Used effectively, AI became a power tool that dramatically sped up delivery, but it never replaced the need for expertise or oversight.
Throughout this project, I used the then-current GPT-4 models through an internal Cisco chat-based deployment. These days, I work more in editor-based tools such as GitHub Copilot. Still, the lessons I learned should apply to the current (mid-2025) state of the art, with a few caveats that I mention where relevant.
How I used AI effectively
Prompting
One lesson I learned early on was to treat prompts the way I approach technical documentation: clear, consistent, and comprehensive. Before consulting the AI, I'd sketch out what needed to happen, then break it down into granular steps and write a prompt that left as little to the imagination as possible.
If I wasn't sure about the solution, I'd use the AI as a brainstorming partner first, then follow up with a precise prompt for implementation.
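To make that concrete, here's a condensed, hypothetical example of the kind of implementation prompt I mean. The file, function, and path names are invented for illustration:

```
You are updating migrate.py (attached). In the function convert_page(),
after the HTML is parsed but before the DITA topic is written:

1. Collect every <img> element in the parsed tree.
2. Rewrite each src attribute from "/assets/..." to "../images/...".
3. Log a warning for any image whose extension is not .png or .svg.

Do not change any other function. Keep the existing logging format.
```

The point is that each step is small, ordered, and scoped, with explicit boundaries on what the AI may and may not touch.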
Iterative development
The migration automation wasn't a single script but grew into a series of Python tools that crawl navigation trees, fetch HTML, convert it to DITA XML, split topics into smaller pieces, map content, and handle version diffs. Each script started small, then grew as I layered in features.
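As a rough illustration of one such step, here's a minimal sketch of a fetch-and-convert stage. This is not the actual migration script; the names are mine, and a real converter would map each HTML element to its DITA equivalent rather than passing the markup through:

```python
# Illustrative fetch-and-convert step: download one page and wrap it
# in a bare-bones DITA topic shell.
import requests
from bs4 import BeautifulSoup

def html_to_dita_topic(url: str, topic_id: str) -> str:
    """Fetch one documentation page and return a minimal DITA <topic> string."""
    response = requests.get(url, timeout=30)
    response.raise_for_status()

    soup = BeautifulSoup(response.text, "html.parser")
    title = soup.title.get_text(strip=True) if soup.title else topic_id
    main = soup.find("main") or soup.body  # fall back to <body> if no <main>

    return (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<!DOCTYPE topic PUBLIC "-//OASIS//DTD DITA Topic//EN" "topic.dtd">\n'
        f'<topic id="{topic_id}">\n'
        f'  <title>{title}</title>\n'
        '  <body>\n'
        '    <!-- TODO: map the captured HTML to valid DITA elements -->\n'
        f'    {main.decode_contents() if main else ""}\n'
        '  </body>\n'
        '</topic>\n'
    )
```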
I quickly learned that asking AI to rewrite a large script all at once was a recipe for bugs and confusion. Instead, I added functionality in small, well-defined increments. Each feature or fix got its own prompt and its own GitLab commit. This made it easy to roll back when something went sideways and to track exactly what each change accomplished.
Debugging
Even with good prompts, AI-generated code rarely worked perfectly on the first try, especially as the scripts grew in size. My most effective debugging tool was print statements. When the output wasn't what I expected, I'd sprinkle print statements throughout the logic to trace what was happening. Sometimes I'd ask the AI to re-explain the code line by line, which often revealed subtle logical errors or edge cases I hadn't considered.
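Here's a small sketch of what that instrumentation looked like in practice, applied to a hypothetical topic-splitting function (the function and element names are illustrative, not from the real scripts):

```python
# Print-statement tracing in a hypothetical topic-splitting step.
from xml.etree import ElementTree as ET

def split_topic(dita_xml: str) -> list[str]:
    """Split a DITA topic on its <section> elements, tracing each step."""
    root = ET.fromstring(dita_xml)
    sections = root.findall(".//section")
    print(f"[split_topic] id={root.get('id')!r}, found {len(sections)} sections")

    pieces = []
    for i, section in enumerate(sections):
        title = section.findtext("title", default="(untitled)")
        print(f"[split_topic]   section {i}: title={title!r}")
        pieces.append(ET.tostring(section, encoding="unicode"))

    print(f"[split_topic] produced {len(pieces)} pieces")
    return pieces
```

Crude, but comparing the trace against what I expected at each step almost always localized the fault faster than staring at the generated code.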
Importantly, this wasn't just about fixing bugs; it was also about learning. My Python skills grew immensely through this process, as I forced myself to really understand every line the AI generated. If I didn't, I'd inevitably pay the price later when a small tweak broke something downstream.
These days, I lean on an AI-powered integrated development environment (IDE) to accelerate debugging. But the principle is unchanged: don't skip instrumentation and verification. If the AI can't debug for you, fall back on print statements and your own ability to trace the problem to its source. And always double-check any AI-generated code.
AI as an implementer, not an inventor
This project taught me that AI is fantastic at taking a well-defined idea and turning it into working code. But if you ask it to design an architecture or invent a migration strategy from scratch, it will probably let you down. My most effective workflow was to (1) design the approach myself, (2) describe it in detail, (3) let the AI handle the implementation and boilerplate, and (4) review, test, and refine the AI's output.
Version control
I can't stress enough the importance of version control, even for simple scripts. Every time I added a feature or fixed a bug, I made a commit. When a bug surfaced days later, I could walk back through my history and pinpoint where things broke. Sure, this is basic software engineering, but when you're working with AI, it's even more important. The velocity of change increases, and your own memory of each modification is inevitably less exhaustive.
The net effect of these practices was speed without chaos. We delivered far faster than we otherwise could have, and the quality of the output significantly reduced post-migration cleanup.
Where AI fell short
As helpful as AI was, it had its shortcomings. The cracks started to show as the scripts grew in size and complexity:
- Context limits: When scripts got longer, the AI lost track of earlier code sections. It could add new standalone features, but integrating new logic into existing, interdependent code? That often failed unless I spelled out exactly where and how to make changes. I should note that today's newer models with larger context windows might reduce some of the issues I ran into with the migration scripts. But I suspect it's still important to be as specific as possible about which sections need to be updated and with what logic.
- Failure to find a working implementation: I found that sometimes the AI simply couldn't solve the problem as outlined in the prompt. If I asked for a change and it failed three or four times, that was usually a signal to step back and try something different, whether that meant prompting for an alternative approach or writing the code myself.
- System understanding: Certain bugs or edge cases required a solid understanding of our systems, like how the CCMS handles ID values, or how competing case-sensitivity rules across systems could trip things up. This is a crucial area where AI couldn't help me.
What I'd do differently next time
Here's my advice, if I had to do it all over again:
- Plan core libraries and conventions early: Decide on your stack, naming schemes, and file structure at the outset and include them in every prompt. Inconsistencies here led to time wasted refactoring scripts midstream. That said, working in an editor-based tool that's aware of your entire pipeline will help keep your libraries consistent from the outset.
- Sanitize everything: File names, IDs, casing, and other seemingly minor details can cause major downstream problems. Include this guidance in your prompting boilerplate. (There's a sketch of what I mean after this list.)
- Account for custom content: Don't assume all docs follow the same patterns, and certainly don't assume the AI understands the nuances of your content. Find out early where the outliers are. This upfront work will save you time in the long run.
- Document the complex stuff: For any logic that takes more than a few minutes to understand, write down a thorough explanation you can refer back to later. There were times I had to re-analyze complicated parts of the scripts weeks later, when a detailed note would have set me back on track.
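As promised above, here's a minimal sketch of the kind of sanitizer I mean. The specific rules are assumptions for illustration; your CCMS may impose different ones:

```python
# Hypothetical sanitizer: lowercase, ASCII-only, hyphen-separated names/IDs.
import re
import unicodedata

def sanitize_id(raw: str) -> str:
    """Normalize a file name or ID: ASCII, lowercase, hyphen-separated."""
    # Strip accents, then drop any remaining non-ASCII characters.
    ascii_text = (
        unicodedata.normalize("NFKD", raw)
        .encode("ascii", "ignore")
        .decode("ascii")
    )
    # Collapse runs of whitespace, underscores, and punctuation into hyphens.
    slug = re.sub(r"[^a-z0-9]+", "-", ascii_text.lower()).strip("-")
    # XML IDs must not start with a digit; prefix if needed.
    return slug if not slug[:1].isdigit() else f"id-{slug}"

print(sanitize_id("Über Config_v2.html"))  # -> "uber-config-v2-html"
```

Running every generated file name and ID through one function like this, from day one, heads off the casing and character mismatches that otherwise surface weeks later.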
One non-AI tip: keep copies of your source and converted markup in a repository even after importing the converted content into your production tooling. I promise you'll need to refer back to them.
AI as a partner, not a replacement
Reflecting on the project, I can emphatically say that AI didn't replace my critical thinking. Instead, it amplified my skills, helping me work at a speed and scale that would have been difficult to achieve alone, while streamlining the post-migration cleanup. But anytime I leaned too heavily on AI without careful planning, I wasted time and had to backtrack.
The real value came from pairing my domain knowledge and critical thinking with AI's ability to iterate quickly and implement. Used thoughtfully, AI helped me deliver a project that became a career milestone.
If you're facing your own daunting migration, or just want to get more out of AI in your workflow, I hope these lessons save you some pain, and maybe even inspire you to take on a challenge you might have thought was too big to tackle.