Jasper Cooper (AutoRFP.ai) on the Future of Proposal Content, Libraries, and “Agents”
When a library helps vs hurts (and why “light libraries” win)
How to cut hallucinations: context, categorization, and source priority
What to connect first (and what to ignore) when linking CRMs, help docs, and drives
Agents vs deterministic workflows—when to use each (and when not to)
Why longer, proof-rich answers are lifting shortlist rates
Christina Carter (00:00) Hey Jasper, welcome. Thank you for joining—there are so many myths around content knowledge bases and libraries. I’d love your take on what’s real. To start, how familiar were you with RFPs before founding AutoRFP.ai?
Jasper (00:29) I led a global enterprise sales team. At 19, I pushed for an RFP my boss called “rigged and a waste of time.” I worked nights and weekends on it, not realizing we’d be invited against nine other vendors. We won—likely more due to the product than my writing at that time. That success proved RFPs could be meaningful. We scaled from none to global across the UK, US, and Australia. RFPs drove ~30% of company revenue; we hired hundreds and closed big accounts like Starbucks. Still, the worst part of enterprise sales remained the RFP process. As the team lead, I became the de facto RFP tool owner—chose a vendor, implemented it, and ended up managing it. We split responsibilities: security handled security content and SMEs, product did the same there. We never carved out a dedicated proposal function—it was self-service into the platform plus department heads managing their content.
Christina Carter (02:02) Many proposal folks forget that in teams without proposal functions, sales leaders end up doing RFPs. Before AutoRFP.ai, when did you realize “this isn’t the way” and there’s a better approach?
Jasper (02:33) We built internal sales tools—AI search, chatbots—before it was cool, so I knew the tech existed. Meanwhile my inbox filled with “review this content” requests. We poured effort into maintaining the library, then the tool proudly auto-filled 4% of an RFP when I knew 80% of answers were in there. Another moment: 12 consecutive questions about our mobile app were auto-answered with the same generic mobile overview. Clearly broken.
Christina Carter (03:33) We’ve all been there. Switching from old patterns to better ones is hard. What’s the biggest belief proposal/content managers hold that’s no longer true?
Jasper (03:54) That a library is the best idea for most content; that SMEs will log in and keep it updated; and that it should be your primary approach. A library is part of a healthy system—but no longer the only part. Many organizations still rely solely on a library.
Christina Carter (04:18) When you get pushback on AutoRFP.ai’s approach, how do you help people see what’s changed?
Jasper (04:33) We show results in a test. Even for us—coming from other systems—our initial assumptions were wrong. We budgeted four weeks to build a new categorization system; it took 12 and required more people. Talking to users revealed how it actually needed to work, which differed from our initial concept. Broadly: keep a library, yes—but also pull directly from SME systems where they already work, and integrate other sources as needed.
Christina Carter (05:22) It’s tough to grasp without seeing it. I’ve been a knowledge manager for years—shifting to “this can be different and better” took time.
Jasper (05:30) Making it feel familiar matters—the change management is intense, and that’s valid.
Christina Carter (05:51) I see two groups: people desperate for a better way who’ve been grinding nights/weekends, and those convinced change isn’t possible.
Jasper (06:12) We’re in the gray area. Twelve months ago, this was hard with available tech. Today, it’s getting better fast.
Christina Carter (06:20) People worry about AI hallucinations. RFP responses carry reputation and potential legal risk. Is hallucination a real risk in most content libraries?
Jasper (06:51) Newer systems generate more responses with better match rates. The underrated risk is actually context loss: the AI picks the wrong product, or pulls German content for a US answer because it lacks situational context. Organizing and presenting the context of each source to the generator is crucial. Our benchmarks used to show ~83% factual accuracy, then ~99% six months ago, and now we're adding nines, at roughly 99.95%. Google in particular is close to knocking hallucination out, as long as you provide relevant content. Ask it to write without context and the risk goes up.
Christina Carter (08:10) When shopping for tools that minimize hallucinations, what should teams look for?
Jasper (08:24) For small, simple offerings (e.g., local council lawn services), a simple setup—ChatGPT plus Google Drive—won’t get confused. Complexity changes things: multiple products, markets, and regions demand strong categorization and storage structure. When integrating, the system must preserve folder/location context and respect it during retrieval, instead of dumping everything into a big, flat pit. Tools that keep your SharePoint/Drive structure meaningful will generate better answers.
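To make the "keep your SharePoint/Drive structure meaningful" point concrete, here's a minimal sketch of context-preserving retrieval in Python. Everything here is illustrative rather than AutoRFP.ai's implementation: the field names are assumptions, and keyword overlap is a toy stand-in for real semantic search. The point it demonstrates is that region/product filtering happens before any relevance ranking, which is what prevents the "German content for a US answer" failure Jasper described.

```python
from dataclasses import dataclass

@dataclass
class Chunk:
    text: str
    source_path: str  # folder lineage carried over from SharePoint/Drive
    region: str       # e.g. "US", "DE"
    product: str      # e.g. "mobile_app"

def retrieve(chunks: list[Chunk], query: str, *, region: str, product: str) -> list[Chunk]:
    """Filter on situational context first, then rank the survivors."""
    in_context = [c for c in chunks if c.region == region and c.product == product]
    terms = set(query.lower().split())
    # Naive keyword overlap stands in for semantic search here.
    return sorted(in_context,
                  key=lambda c: len(terms & set(c.text.lower().split())),
                  reverse=True)

corpus = [
    Chunk("Die mobile App unterstützt SSO.", "Drive/DE/mobile.md", "DE", "mobile_app"),
    Chunk("The mobile app supports SSO via SAML 2.0.", "Drive/US/mobile.md", "US", "mobile_app"),
]
hits = retrieve(corpus, "Does the mobile app support SSO?", region="US", product="mobile_app")
print(hits[0].text)  # only the US answer survives the context filter
```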
Christina Carter (09:24) Let’s tackle common beliefs. First myth: you must maintain a library of responses—always.
Jasper (09:46) Not always. Many customers run without a traditional library—relying on the last three months of projects plus integrated org sources—especially when they lack dedicated content managers. Over 75% of teams moving off older systems don’t migrate their libraries, which suggests those libraries weren’t maintainable or valuable enough to carry forward.
Christina Carter (10:27) Are there cases where a library is necessary?
Jasper (10:35) Yes—financial services, for example. There are regulated values and nuanced numbers where a maintained source of truth is critical to avoid fines or worse. For software companies, it’s case-by-case, and smaller teams often shouldn’t maintain large libraries.
Christina Carter (11:02) Next myth: more content equals better results.
Jasper (11:20) False for libraries. Duplicates and bloat multiply maintenance. We advocate light libraries: cover the ~80% of common questions and skip the long-tail 20%, which would demand roughly 5× more content to maintain. Keep a centralized archive of past projects instead; it's super valuable for pulling niche, real-world answers ("Gary's telco wisdom") without stuffing the library.
Christina Carter (12:27) New tools connect to knowledge bases, CRMs, the web, marketing content—everything. Is connecting to “all the things” good or bad?
Jasper (12:58) Connecting systems is now industry standard and powerful. Decide what is source of truth vs fallback. For tech companies, help docs might be primary for many responses, with CRM or web as backup. Prioritization and ranking matter.
Christina Carter (13:44) If I can connect everything, what should I connect first?
Jasper (14:03) Don't dump everything in. Products change and companies merge, so old docs can mislead the AI. Choose a recency cutoff (e.g., when you hired a content manager two years ago). Prioritize quality over quantity: most systems are pre-trained, so what they need is correct context, not volume. Start with high-signal sources (Confluence/Zendesk/Intercom). Use SharePoint/Drive later to fill gaps. It's far easier to add sources than to prune them back out.
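A rough sketch of the selectivity Jasper describes: a hand-picked source list with a priority order and a recency cutoff, applied before anything reaches the generator. The source names, field names, and cutoff value are hypothetical placeholders, not any vendor's configuration.

```python
from datetime import date, timedelta

# Ordered, hand-picked sources; sharepoint is added later for gap-filling only.
SOURCE_PRIORITY = ["zendesk", "confluence", "sharepoint"]

# Recency cutoff, e.g. "when we hired a content manager two years ago".
CUTOFF = date.today() - timedelta(days=2 * 365)

def usable(docs: list[dict]) -> list[dict]:
    """Drop stale or unregistered documents, then rank by source priority."""
    kept = [d for d in docs
            if d["source"] in SOURCE_PRIORITY and d["updated"] >= CUTOFF]
    return sorted(kept, key=lambda d: SOURCE_PRIORITY.index(d["source"]))

docs = [
    {"source": "sharepoint", "updated": date(2018, 1, 1), "title": "Old product sheet"},
    {"source": "zendesk",    "updated": date(2025, 3, 1), "title": "SSO help article"},
]
print([d["title"] for d in usable(docs)])  # ['SSO help article']
```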
Christina Carter (15:12) So be selective to get better responses.
Jasper (15:18) Exactly—connect one or two core sources, test, identify gaps, then add. Avoid indiscriminate ingestion.
Christina Carter (15:58) Next myth: SMEs must feed a central RFP system. Any exceptions?
Jasper (16:26) Plenty. Some orgs keep excellent documentation so proposal teams can pull directly without chasing SMEs. You’ll still loop SMEs in for odd, one-off questions during live RFPs, but not for routine content maintenance.
Christina Carter (17:04) Final myth in this set: you need a single source of truth.
Jasper (17:10) “Single source of truth” makes sense for an employee (update your address once). For organizational knowledge, truth changes constantly across domains. Expecting one perfectly up-to-date source for products, services, and policies isn’t realistic. The practical model: keep a small library for fixed facts (e.g., employee count), then define domain sources of truth with contextual preferences (e.g., UK prefers Source A, US prefers Source B). Sometimes a “secondary” source is more current, so it’s preferred in practice. That’s reality.
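One way to picture "domain sources of truth with contextual preferences" is a small lookup table: per domain and region, an ordered list of preferred sources with a fallback. The sketch below is a guess at the shape, with made-up source names, not a description of any particular system.

```python
# Hypothetical preference table: per (domain, region), an ordered source list.
PREFERENCES = {
    ("security", "UK"): ["uk_compliance_wiki", "global_security_library"],
    ("security", "US"): ["us_trust_center", "global_security_library"],
    ("product", "*"):   ["help_docs", "light_library"],
}

def sources_for(domain: str, region: str) -> list[str]:
    """Resolve the ordered source list, falling back to the domain-wide default."""
    return PREFERENCES.get((domain, region)) or PREFERENCES.get((domain, "*"), ["light_library"])

print(sources_for("security", "UK"))  # ['uk_compliance_wiki', 'global_security_library']
print(sources_for("product", "DE"))   # ['help_docs', 'light_library']
```

Note the "secondary is sometimes preferred" reality Jasper mentions: the ordering per context is just data, so a more current source can be promoted for one region without touching the others.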
Christina Carter (18:52) I like that—and it pushes teams to collaborate across internal knowledge owners.
Jasper (18:54) Also, stop forcing double entry. Support teams already build high-quality help guides; investor relations builds great decks. Asking them to retype templated answers inside your RFP tool yields worse content. It’s their last priority.
Christina Carter (20:07) At AWS, every RFP had sustainability questions. The sustainability team maintained excellent Q&A docs on an internal site, but I had to copy/paste updates into our knowledge base because we couldn’t connect systems then. It was painful.
Jasper (20:37) And it burns goodwill. We ask for the site, then ask again per RFP, and again to “add it to the library.” Reducing back-and-forth with SMEs is key—for their sanity and ours. Save your “social credit” for live deadlines.
Christina Carter (21:07) Agreed—SMEs face burnout too. Anything that lessens their load while improving content is the right direction.
Jasper (21:23) Exactly. Spend SME time where it matters most—on active opportunities.
Christina Carter (21:30) Any other myths you hear often?
Jasper (21:50) Less a myth, more a marketing fog: the constant escalation from chatbots to “gen-AI agents.” Labels can obscure what tools can actually do. What matters is outcome.
Christina Carter (22:22) Agentic AI is sold heavily—agents for each micro-task in the proposal process sounds compelling. How do buyers know they’ll get the outcomes they need?
Jasper (22:47) First, test for results—no smoke and mirrors, no optimism bias. It won’t cure everything, but it should do what it claims. Second, on architecture: agents choose tools (web, library, etc.), making them non-deterministic. That’s fine for research, but not for a repeatable answer-generation pipeline. You want deterministic workflows you can optimize: “check website → library → prior responses → SharePoint” in a fixed order. Agents make different choices on different days; workflows are tunable and stable. It’s easier to build a free-running agent than a highly optimized chain-of-thought pipeline—but for many RFP tasks, determinism wins. Agents shine in open-ended research; they’re the wrong tool for steps that need consistent outputs. Even leading agent vendors caution buyers here.
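A minimal sketch of the deterministic workflow Jasper contrasts with agents, using the fixed order he names (website → library → prior responses → SharePoint). The step functions are hypothetical stubs; the one toy lookup just makes the example runnable.

```python
from typing import Callable, Optional

def check_website(q: str) -> Optional[str]:
    return None  # stub: a real step would search published web pages

def check_library(q: str) -> Optional[str]:
    # Toy stand-in for the curated light library.
    faq = {"do you support sso": "Yes, via SAML 2.0 and OIDC."}
    return faq.get(q.lower().rstrip("?"))

def check_prior_responses(q: str) -> Optional[str]:
    return None  # stub: search answers from recent projects

def check_sharepoint(q: str) -> Optional[str]:
    return None  # stub: last-resort gap filler

# The fixed order is the point: every question takes the same path on every
# run, so each step can be measured and tuned, unlike an agent picking tools.
PIPELINE: list[Callable[[str], Optional[str]]] = [
    check_website, check_library, check_prior_responses, check_sharepoint,
]

def answer(question: str) -> Optional[str]:
    for step in PIPELINE:
        result = step(question)
        if result is not None:
            return result
    return None  # nothing found: escalate to an SME instead of guessing

print(answer("Do you support SSO?"))  # Yes, via SAML 2.0 and OIDC.
```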
Christina Carter (25:35) So: use agents for irregular, exploratory tasks; use deterministic workflows for high-stakes, repeatable steps.
Jasper (25:48) That’s the gist. For anything you want to hyper-optimize, turn the agent off and build a clear workflow. It’s far easier to tune.
Christina Carter (26:17) You’re sitting on a lot of data. Any stat or trend that surprised you?
Jasper (26:36) Three things. First, content from projects often scores higher than library content, because it's specific and less generic; consider pulling from recent projects more. Second, many teams don't migrate their legacy libraries. A useful thought exercise: would you take yours with you? If not, what should you prune now? Third, response lengths are increasing. Not fluff, but specificity and proof: "Yes, via X; here's a customer using it; here's a screenshot; link to more info." High-win teams re-export Excel RFPs into Word to enable richer responses. If your win rate is slipping and your answers aren't getting more detailed, investigate.
Christina Carter (28:40) Why do longer, more detailed answers help?
Jasper (28:52) They make sense to evaluators. A tailored, thoughtful response improves your odds of making the shortlist. It may not win the whole deal, but it moves you forward.
Christina Carter (29:14) Proposal managers will agree. Pre-sales may not. Any nuance by question type?
Jasper (29:19) Security and DDQs: keep it short to avoid follow-ups and speed reviews. For technical/solution responses: consider more depth and proof—unless your brand or category (e.g., FinTech) demands a very conservative tone.
Christina Carter (30:10) If you know the stakeholders, is it worth experimenting—especially in conservative industries?
Jasper (30:50) Try a side-by-side test on a safe, mid-market opportunity. In my previous role, moving from generic to fleshed-out responses triggered immediate feedback: “This feels bespoke; we learned from your answer.” That perceived partnership moved us to the front. Many Fortune 500s still send light edits on library content—there’s differentiation in doing more.
Christina Carter (31:51) Outside of security/DDQ, more customized, thought-leadership-style responses are usually good.
Jasper (32:06) Include specific numbers and, controversially, use customer names in responses. The admin is hard, but the credibility is worth it.
Christina Carter (32:20) I'm with you; bring the pitchforks if you disagree.
Jasper (32:34) Please, no five-year predictions. Let's keep it to one year.
Christina Carter (32:52) What will the content management role look like in a year? Who owns the process?
Jasper (33:00) I’m bullish on content management. I don’t see AI replacing it; I see the opposite: teams growing and being rebranded as Knowledge Teams. An organization’s intellectual property is its largest asset. Central teams will connect information streams, manage high-value content, work with SMEs, and make knowledge accessible across Slack/Teams and RFP tooling. Expect leverage: fewer sellers manually searching, more investment in the knowledge function. It’ll be better than ever—if teams get on the wave, accept some learning pain, and test for ROI with a skeptical eye.
Christina Carter (35:11) What skills should people build to be part of that team?
Jasper (35:17) Don’t over-rotate on “prompt engineering courses.” Useful, but not the specialization. Instead, explore how tools actually work and how they might fit your org. Even simple tests (e.g., ChatGPT + Google Drive integrations) help you feel out basic versions of these systems. Also, improve what you already have: stabilize your content library; pull from SME sources directly instead of demanding contributions.
Christina Carter (36:21) Should content teams engage sales/knowledge leaders proactively?
Jasper (36:36) Yes. Budget cuts to content/RFP teams often don’t make sense. We see mindsets flip after a single CRO/CEO session. If I were on a content team, I’d craft a vision for where we’re heading and communicate it—in slides if needed. Leaders are searching for meaningful AI ROI; content management is one of the few places AI works right now. Even if you lack budget, show that you’re future-minded and engaged.
Christina Carter (38:25) Where can people learn more about AutoRFP.ai and your approach across the full response process?
Jasper (38:37) Visit autoRFP.ai. We focus on specific areas—great fit for software and finance, less so for highly bespoke construction use cases. We’re transparent; reach out if you disagree or want to add perspective. I welcome the debate.
Christina Carter (39:15) Jasper is a joy to talk to—smart, generous, approachable. Don’t let the title scare you; connect with him and follow his content. Thanks for joining us!
Q1: Do I still need a giant content library? Often no. A small, well-kept “light library” plus recent project answers and connected sources outperforms bloated libraries.
Q2: How do I reduce hallucinations? Preserve context (folder/source lineage), use strong categorization, and prioritize sources. Models perform when fed the right context.
Q3: What should I connect first? Start with high-signal sources (Confluence/Zendesk/Intercom). Fill gaps with Drive/SharePoint later. Choose a recency cutoff.
Q4: Agents or deterministic workflows? Agents = research/exploration. Deterministic workflows = repeatable, high-stakes generation (easier to optimize, consistent outputs).
Q5: Are longer answers better? When they’re specific and proof-rich—often yes. Exception: keep security/DDQ concise to avoid rework.
Q6: Do SMEs have to populate the RFP system? Not if product/help documentation is excellent. Loop SMEs in for one-off edge cases during live RFPs.
Q7: Is a single source of truth realistic? Treat it as domain-based sources of truth with contextual preferences (e.g., region-specific) and fallbacks—not one perfect source.