- cbardwelldoughty
- Jan 15
- 2 min read
In today’s business environment, internal teams need answers instantly, largely because that's what customers expect. If you're in L&D, technical writing or any other business-facing team, your customer is the company.
The quote that comes to mind on this topic is from Neil Gaiman:
“Google can bring you back 100,000 answers. A librarian can bring you back the right one.”
The irony in this is that I couldn't remember the quote, so I asked Copilot.

I also checked with ChatGPT.

This is great. I don't need to espouse the well-founded benefits of chatbots here, but what if you're trying to use one for business? It still doesn't do what a librarian can do. The AI I've used in this example is searching the web and using an LLM to provide its responses. In a business environment, though, much of the proprietary information can't be found by this kind of AI.
Here's a totally made-up but legitimate example of the kind of question you might ask an AI when trying to solve a product problem.

And, in the interest of balance, here's the Copilot response.

I've had a lot of success implementing custom Copilot agents powered by Retrieval-Augmented Generation (RAG) and trained on structured content from MadCap Flare. By combining AI intelligence with single-sourced documentation, you can deliver accurate, context-rich responses to support teams, trainers and product specialists in seconds. This speed is great but it doesn't end there. Getting this right can change the way knowledge is shared within a business.
Why MadCap Flare Is the Perfect Foundation
MadCap Flare and other similar tools are built for single-source, multi-channel publishing. Instead of scattered documents, you have a structured content repository with topics, snippets, variables and conditions.
This structure is ideal for AI integration because:
Content is modular and reusable.
Metadata and hierarchy make retrieval precise.
Updates cascade across outputs, making AI training part of an existing process.
When you feed this into a Copilot agent, you’re giving it context-rich, well-organised knowledge. If you set it up properly, updating the knowledge source is a by-product of the existing process. It's another channel in multi-channel publication.
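To make that concrete, here's a minimal sketch of how structured Flare content could be turned into retrievable chunks with metadata. This is an illustration, not Copilot's actual ingestion pipeline: the folder layout, file names and simple tag-stripping are all assumptions, and a production setup would use Flare's published output and a proper HTML parser.

```python
import re
from pathlib import Path

def index_flare_topics(project_dir):
    """Walk a folder of Flare topic files (.htm) and build a list of
    retrievable chunks. Each chunk carries metadata (source path, topic
    title) so retrieval can be precise and answers can cite their topic."""
    chunks = []
    for topic in Path(project_dir).rglob("*.htm"):
        html = topic.read_text(encoding="utf-8")
        # Use the first <h1> as the topic title, falling back to the file name
        m = re.search(r"<h1[^>]*>(.*?)</h1>", html, re.S)
        title = re.sub(r"<[^>]+>", "", m.group(1)).strip() if m else topic.stem
        body = re.sub(r"<[^>]+>", " ", html)      # strip tags (crude, for the sketch)
        body = re.sub(r"\s+", " ", body).strip()  # collapse whitespace
        chunks.append({"source": str(topic), "title": title, "text": body})
    return chunks
```

Because the chunks are generated straight from the topic files, re-running the indexer after a Flare update is all it takes to refresh the agent's knowledge, which is exactly the "another channel" idea above.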
How RAG Makes It Smart
RAG works in two steps:
Retrieve: When someone asks a question, the Copilot agent searches your indexed Flare content for the most relevant sections.
Generate: It then crafts a clear, natural language answer using that retrieved content.
This means answers aren’t based on the web; they're based on the information you and your teams need to know.
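The two steps above can be sketched in a few lines. This is a toy version: real agents rank chunks with vector embeddings rather than keyword overlap, and the final prompt would be sent to an LLM, which I've left out. The chunk format (`title`/`text` dictionaries) is an assumption for the example.

```python
def retrieve(question, chunks, k=2):
    """Step 1 (Retrieve): rank indexed chunks by keyword overlap with
    the question. A real agent would use embeddings; plain overlap
    keeps the sketch dependency-free."""
    q_terms = set(question.lower().split())
    scored = sorted(
        chunks,
        key=lambda c: len(q_terms & set(c["text"].lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(question, retrieved):
    """Step 2 (Generate): ground the model in retrieved content only.
    The assembled prompt instructs the LLM to answer from the supplied
    topics, not the open web."""
    context = "\n\n".join(f"[{c['title']}] {c['text']}" for c in retrieved)
    return (
        "Answer using ONLY the documentation below.\n\n"
        f"{context}\n\nQuestion: {question}"
    )
```

The key design point is in `build_prompt`: the model never sees anything except the retrieved Flare content, which is why the answers stay grounded in your documentation.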
What does that look like in practice?
Support Teams: Instead of scrolling through PDFs, they ask Copilot, “What’s the latest wiring guide for Model X?” and get the exact snippet from Flare.
Training Teams: Need to prep for a new product launch? Copilot pulls structured topics and even suggests learning paths.
Operations: When a part number changes, Copilot knows instantly because Flare was updated once and that update flows everywhere.
