Writing Smarter: The Role of Artificial Intelligence (AI) in Winning Proposals



The use of generative AI tools is becoming almost commonplace in our daily lives, much in the same way that we use cell phones and the internet without a second thought. And why not—it’s so easy to enter a prompt into ChatGPT and use the response to start an email, thank you letter, or social media post. So, it’s not surprising that the explosion of generative AI tools like ChatGPT, Claude, and others has sparked major interest—and debate—among those in the proposal community. 

AI promises many benefits for our proposal process: it can save time and increase productivity, help brainstorm win themes and messaging, improve readability and clarity, and serve as a knowledge assistant. It can also help reduce burnout on proposal teams, since AI handles the “grunt work” and frees proposal team members to focus on higher-value tasks.

However, AI use does not come without potential risks, including confidentiality and data security risks. Inputting sensitive content—like solution content, past performance content, pricing, client names, etc.—into public AI tools can breach Non-Disclosure Agreements (NDAs) or contracts, expose proprietary or competitive information to third parties, or violate privacy regulations. Knowing the potential benefits, as well as the risks, you and your teams may be asking how you can responsibly use AI to draft sections, develop win themes, or tailor boilerplate content. And how can you do so without introducing AI-generated errors? Let’s dig into it!

How You Can Responsibly Use AI to Draft Sections, Develop Win Themes, or Tailor Boilerplate Content

If used responsibly, you can use AI prompts to suggest creative ways to frame your team’s differentiators and benefits, help align your solution with the customer’s priorities and hot buttons, and provide alternative ways to express key messages to keep language compelling. You can also use AI to help rewrite dense, jargon-heavy technical text into clear, persuasive, customer-focused content. Here are some ways you can minimize risk while using AI to support proposal development:

  • Set a policy and train your team
  • Never input confidential or proprietary information into public AI tools
  • Always treat AI output as a draft, never a final deliverable

Set a policy and train your team 

Before you allow teams to start using AI to support proposal development, you should create a written guideline or policy. The policy should define when and how team members may use AI, which tools are approved, and how to mitigate risks while using AI. The policy should define clear boundaries for AI use, considering where it adds value and where it doesn’t. Remember, AI is good for things like:

  • Drafting boilerplate content, like company overviews or standard capabilities statements (after it is fed with source materials)
  • Generating ideas for win themes based on key differentiators and customer hot buttons you provide (note that capture still has to provide this information!)
  • Helping tailor boilerplate to meet solicitation requirements
  • Checking readability and suggesting stylistic improvements
  • Summarizing content

AI is not good for:

  • Making compliance decisions: it may miss mandatory requirements
  • Factual accuracy: it can invent or misstate facts
  • Interpreting ambiguous solicitation language: human judgment is still needed for this!

Once you set the guidelines, train your team! Make sure everyone knows and understands the guidance. Be sure to highlight the data privacy risks and concerns. For many companies, entering proprietary data into a public AI tool may be grounds for termination. 

Never input confidential or proprietary information into public AI tools

Although we have already touched on this point, it deserves some additional attention. The bottom line is that you should never paste client-sensitive content, proprietary solution information, or internal pricing into tools like ChatGPT unless your company has a private, secure instance. Many public AI tools store user inputs and use them to further train their models. That means your sensitive information could remain on their servers indefinitely—and possibly reappear in responses to other users. Even with private, secure instances, some companies may be concerned about data breaches or cyber-attacks. If this is the case, consider developing a policy to redact or anonymize sensitive names and figures before asking AI to help tailor content, even in your private instance of the tool.
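To make the redaction idea concrete, here is a minimal sketch of what an automated redaction pass might look like before text is sent to an AI tool. This is an illustration only: the client name, point-of-contact name, and placeholder labels are hypothetical, and a real policy would maintain a much fuller list of sensitive terms.

```python
import re

# Hypothetical map of sensitive terms to neutral placeholders.
# In practice, this list would come from your company's AI use policy.
REDACTIONS = {
    "Acme Corp": "[CLIENT]",            # client name (hypothetical)
    "Jane Smith": "[POC_NAME]",         # point of contact (hypothetical)
    r"\$[\d,]+(?:\.\d{2})?": "[PRICE]", # dollar figures
}

def redact(text: str) -> str:
    """Replace sensitive names and figures before sending text to an AI tool."""
    for pattern, placeholder in REDACTIONS.items():
        text = re.sub(pattern, placeholder, text)
    return text

draft = "Acme Corp paid $1,250,000 under the contract managed by Jane Smith."
print(redact(draft))
# [CLIENT] paid [PRICE] under the contract managed by [POC_NAME].
```

Even a simple pass like this catches the obvious leaks; a human should still scan the redacted text before it leaves your environment.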

Always treat AI output as a draft, never a final deliverable

While I was at the APMP Bid and Proposal Conference in Nashville earlier this year, I heard a story about a team that was thrown out of a competition because they used AI to write their proposal and then didn’t tailor it. The customer told them that another team had submitted the exact same response. I was genuinely shocked that a team would submit content without adjusting it, but after hearing that story, this really needs to be said: for so many reasons, you should always review, revise, and tailor the content you receive from your AI tool. The best advice I have heard is to treat AI like a junior writer or assistant—its suggestions still need review, fact-checking, and editing by your experienced proposal team. Just like we have always done as part of our proposal best practices, have a human team member review all your content for compliance, accuracy, your specific proposal style guide, and tone—especially the content developed initially by AI.

How Can You Avoid Introducing Errors or “Hallucinations” When Using AI?

In the context of generative AI tools, a hallucination happens when the AI generates output that is factually incorrect or fabricated, but it still presents the information confidently, as though it were true. Examples of common AI hallucinations include:

  • Inventing a certification your company doesn’t hold
  • Citing a law, regulation, or standard that doesn’t exist
  • Referencing past performance examples or customer names that aren’t real
  • Providing made-up statistics or figures

Hallucinations happen because AI models don’t actually know or understand facts—they predict likely sequences of words based on patterns they’ve seen during training. When they can’t find the answer in the data they were trained on or in what you provided in your prompt, they sometimes simply generate something that sounds plausible. 

This is why, especially when teams are using AI, it is critical for proposal managers and Subject Matter Experts (SMEs) to actively stay involved throughout the process. Here are some steps you can follow to help avoid hallucinations and errors when using AI:

Always start with a compliance matrix

Get back to the basics: build your compliance matrix first and track every requirement explicitly. You can have AI generate a first cut, but then you need to go back and add in everything that the tool may have missed. Next, just as best practice has always told us to do, have a human peer review and validate the matrix. Then use the matrix as the source of truth, and verify that every section written, whether by AI or by a human, maps back to the correct requirement. Remember, AI can help you phrase responses, but it doesn’t reliably recognize all mandatory instructions, page limits, or formatting rules.
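The "matrix as the source of truth" step can even be checked mechanically. Here is a minimal sketch of a coverage check, with entirely hypothetical requirement IDs and section names, showing how a human reviewer might flag requirements that no section has claimed yet:

```python
# Hypothetical compliance matrix: requirement -> proposal section covering it.
# None marks a requirement that no section has been mapped to yet
# (e.g., something the AI-generated first cut missed).
matrix = {
    "L.4.2 Page limit (30 pages)": "Volume I formatting note",
    "M.2.1 Past performance (3 examples)": "Section 3.2",
    "C.5.7 Key personnel resumes": None,
}

uncovered = [req for req, section in matrix.items() if section is None]
for req in uncovered:
    print(f"NOT COVERED: {req}")
# NOT COVERED: C.5.7 Key personnel resumes
```

The tooling only surfaces gaps; a human still has to validate that each mapped section genuinely satisfies its requirement.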

Use AI as a support tool, not a decision-maker

Remember that AI can draft, suggest, and rephrase, but it doesn’t understand the legal or contractual weight of a solicitation. We’ve mentioned this already, but always have an SME and/or compliance lead review every section of the response. Don’t throw out your best practice review cycles!

Fact-check everything

Again—another “old school” best practice that has become ever more important in the age of AI. Because AI is prone to hallucinations, it may confidently invent product specifications, certifications, client names, or achievements. To catch these errors, first make sure you provide the AI with accurate source material. Then as part of your review process, require the reviewers to check all the information, including data points, dates, names of agencies or organizations, and references to laws, standards, or regulations.

Feed AI verified content: don’t let it guess

Don’t ask AI open-ended questions, such as “What are the key benefits of our solution?” unless you also give it source content to work with. Instead, first give the tool your actual product or solution specifications, differentiators, and past performance examples and ask it to organize, rephrase, or summarize those into proposal language. Remember, only do this if you are using a paid/private version of the tool. 
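One way to keep the tool grounded is to assemble the prompt from your verified source material rather than asking open-ended questions. Below is a minimal sketch of that pattern; the differentiators and prompt wording are illustrative examples, not a recommended template:

```python
# Verified source material pulled from approved content libraries,
# not generated by the AI. (These differentiators are hypothetical.)
differentiators = [
    "ISO 9001-certified quality management process",
    "Incumbent team with five years on the predecessor contract",
]

source_block = "\n".join(f"- {d}" for d in differentiators)
prompt = (
    "Using ONLY the differentiators listed below, draft two sentences of "
    "proposal language for the executive summary. Do not add facts that "
    "are not in the list.\n\n"
    f"Differentiators:\n{source_block}"
)
print(prompt)
```

Constraining the prompt this way ("use ONLY the material below") narrows the room the model has to guess, though the output still needs the same fact-check and review as any other draft.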

Leverage secure, organization-approved AI tools

If you are going to introduce AI tools into your proposal process, it’s best to use private or enterprise-grade AI systems that can be fine-tuned on your approved boilerplate and style guides. Public tools are trained on general data and will be less aligned with your standards—and they also pose serious confidentiality and data security risks if not used carefully. 

Final Thoughts

As generative AI tools continue to evolve and become more embedded in our workflows, proposal teams have an opportunity to harness them responsibly and effectively to produce efficiencies in our winning proposal processes. By understanding both the benefits and the risks, and by establishing clear policies, training, and review processes, you can use AI to enhance productivity without compromising compliance, accuracy, or confidentiality. Ultimately, AI should serve as a supportive assistant, not a substitute for human judgment, expertise, and quality control. Remember:

  • Always fact-check AI output against trusted sources
  • Provide the AI tool with accurate, complete input material (don’t let it guess)
  • Have SMEs review content for accuracy
  • Don’t let AI generate sections from scratch without oversight

With the right balance, AI can help your team work smarter, reduce burnout, and deliver stronger, more competitive proposals!


Written by Ashley (Kayes) Floro, CPP APMP

Senior Consultant and President

Proptimal Solutions, LLC

proptimalsolutions.com


