

Trust Over Speed: Why Your AI Confidence Score Means Nothing, with Manisha Raisinghani

In this episode of The Stargazy Brief, Christina Carter and Manisha Raisinghani cover:

  • Why percentage confidence scores (78%, 85%) give reviewers nothing actionable, and how SiftHub's red/yellow/green system guides attention to the 40 answers that matter on a 500-question RFP

  • The three requirements for trustworthy proposal AI: provenance, boundary awareness, and freshness

  • How AI changes the SME bottleneck by turning subject matter experts from authors into reviewers

  • Why the best teams track win rate and pipeline progression as AI metrics, not response speed

  • The persistent deal graph: why AI should span the full deal cycle, not end at the questionnaire

  • How the proposal leader role is evolving from content manager to deal strategist and agent manager

  • Why CROs are increasingly trusting pre-sales leaders to evaluate AI tools for revenue teams

Manisha Raisinghani is Co-founder and CEO of SiftHub, an AI-native platform for pre-sales and proposal teams. SiftHub is built on trust-first design principles developed through 200+ practitioner interviews, with a focus on provenance, boundary signalling, and model-agnostic architecture.

Transcript

Christina Carter (00:02) Hey, Manisha, thank you so much for being on the Stargazy Brief. I'm incredibly excited to have you here.

Manisha Raisinghani (00:08) Thank you for having me, Christina. I have a lot of respect for what you have been doing at Stargazy, building a genuine community around proposal work. It takes someone who truly understands the craft and really cares about the people doing it, and I think that really comes through in everything you're building here.

Christina Carter (00:28) That's so kind of you. No, but it's the same right back at you and SiftHub. You're the first female co-founder and CEO that I've spoken to running a proposal management tech company, which is huge. So I'm really excited to talk to you because your background is so interesting and so fun. And I know it's going to be very useful to everybody who listens.

For the background of people who are getting to know you and getting to know SiftHub, I think what's really impressive about you is that you spoke to well over 200 sales and pre-sales leaders before you even built SiftHub. So it was already built on a bunch of knowledge and a bunch of conversations that you had with people. So my question really is when you started to add AI into an RFP process, and you wanted to speak to people about that, what fear did you hear most often? And do you think that they were worried about the right thing or were they worried about the wrong thing?

Manisha Raisinghani (01:34) That's a very interesting question. I think this was almost two years back. So the fear wasn't really "AI will take my job." That's what makes headlines, right? But that's not what kept people up at night.

The real fear was, what if it's confidently wrong and I don't catch it? And it costs us the deal. That was the real fear. And that in itself is an incredibly sophisticated fear, I would say. Because these are people who have spent years building institutional knowledge, who understand that when a prospect asks about data residency, there are three layers beneath that question that matter.

They weren't really afraid of being replaced. They were afraid of being blamed when AI hallucinated something that went to a Fortune 500 evaluation committee.

I mean, if you ask me, were they worried about the right thing? Absolutely. They were worried about the thing most AI vendors were ignoring then, right? The early wave of AI in RFP was all about speed. Hey, generate a draft in minutes. Speed matters. But if you're an SE who's been burned even once by an inaccurate response going to a customer or a prospect, you know that speed without trust is more dangerous than the old manual way of doing things. And that insight shaped everything about SiftHub. We didn't start with how do we generate responses faster? We started with how do we make AI trustworthy enough that a seasoned SE would stake their credibility on it? That's a fundamentally different design point.

And I think they were absolutely right to have this fear of, can I trust my AI?

Christina Carter (03:36) Yeah, and I think that shows that you were ahead of the curve in this because I feel like it's only now that a lot of sales AI companies in general, instead of talking about speed, they're talking about accuracy and governance. So you were well ahead of the curve already with that. And I think that probably comes from the fact that you spoke to so many people before you built SiftHub. So I love that.

But because you have looked at so many RFPs, in an enterprise RFP, being almost right is not enough. You have to be completely accurate, especially in things like compliance, security, data residency. So what do you think the difference is between AI that is helpful, going pretty fast, versus the kind that is really trustworthy, especially in a deal critical workflow, like the ones we work in?

Manisha Raisinghani (04:31) That's a very good question, Christina, and I think mid-market and enterprise proposal folks and SEs will love this. I think this is the question that separates toy demos from production systems. Helpful is what you get when you paste a question into a general purpose LLM and get a plausible paragraph back.

And for lots of use cases, plausible is fine. But in a deal workflow, when legal is reviewing your security claims, when a CISO is reading your data residency answer, when procurement is comparing your compliance response against three other competitors, plausible can be career ending. Losing a deal versus ending a career are two very different things.

I think trustworthy AI needs three things, and that's what our platform is rooted in. First is provenance. Every generated answer should trace back to a specific verified source, and you should be able to see that source. Not a vague "trained on your data" claim. You should be able to click through and see where this answer came from. So that's the number one most important point.

Second is it needs to know its boundaries and communicate those boundaries in a way that's useful to the users. We have seen a lot of tools in this space show you confidence scores: 78%, 85%, 60% and what are you supposed to do with that? As a human who is reviewing 500 responses, does 78% mean this is mostly right or this could be dangerously wrong? The number feels precise, but it's meaningless in practice.

We took a much more human approach here. For example, if the system has no verified information for a question, it flags it as red. It will not give you a random answer. It will not say, "here's an answer, but I'm 30% sure it's right." I don't know what that 30% means, right? So if we flag it as red, this needs a human to write the answer. It's a very clear indication. And if it has partial information, we flag it as yellow. Here's what we found, but this needs a thorough human review and likely more facts added to the answer.

So red, yellow or good to go are the kind of themes we use. And an SE who is looking at a 500 question RFP, they know in seconds exactly where to spend their time. That's the difference between AI that generates a number and AI that respects how humans make decisions under pressure. Because submitting RFP proposals is not an easy job. It's a high pressure job.
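The traffic-light idea Manisha describes could be sketched roughly like this. This is purely an illustration: the names, fields, and thresholds below are assumptions for the sake of the sketch, not SiftHub's actual implementation.

```python
# Illustrative traffic-light flagging for AI-drafted RFP answers.
# All names and thresholds here are hypothetical.
from dataclasses import dataclass
from enum import Enum

class Flag(Enum):
    RED = "needs a human-written answer"    # no verified source found
    YELLOW = "needs thorough human review"  # partial source coverage
    GREEN = "good to go"                    # fully backed by verified sources

@dataclass
class DraftAnswer:
    question: str
    sources: list    # verified source documents the draft traces back to
    coverage: float  # fraction of the question answered from those sources

def flag_answer(draft: DraftAnswer) -> Flag:
    if not draft.sources:
        return Flag.RED       # refuse to guess: route to a human author
    if draft.coverage < 0.9:  # hypothetical review threshold
        return Flag.YELLOW    # partial info: a human adds facts and reviews
    return Flag.GREEN

# A reviewer facing 500 questions sorts by flag and spends their
# time on the RED and YELLOW answers first.
```

The point of the design is that a categorical flag maps directly to an action (write, review, or ship), whereas a raw percentage leaves the reviewer to invent their own threshold.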

The third one is it needs to stay current. Your company's security posture changes, your compliance certifications are getting renewed, your products are shipping new capabilities all the time. If your AI is working off a stale knowledge base, or worse, Q&A pairs someone loaded 18 months ago and forgot about, you're building on sand.

And the other dimension I would say people underestimate is consistency. When you build on a general purpose AI, even the greatest one, your output quality depends entirely on who is prompting it and how. One SE gets a great answer. Another SE asks the same question differently and gets something mediocre. Multiply that across a team of 100 proposal managers and SEs and you have widely inconsistent responses going to the market under the same brand. So trustworthy doesn't mean accurate only. It means standardized and reliable across your entire team every single time.

Christina Carter (09:24) I very much agree with everything you said. Whenever I see confidence scores in proposal management tools, that means nothing to me. You're having AI grade itself. Yeah. It doesn't mean a whole lot. And when it does come to consistency, you're absolutely right. You can get people using the same tool, but if they have different prompts, you're right, they're going to get something different and then might not know if it's good or not. I'm with you.

That brings me to my next question, which is really all about human in the loop. What does that really mean in proposal work from your perspective? What should a strong human review process really look like when AI has done that first draft? And is there a difference between a real quality check and a quick rubber stamping check because you're in a rush?

Manisha Raisinghani (10:19) I love this question. Human in the loop has become a catchphrase that people throw around without really interrogating what it means in practice. I think in most setups, what it means is AI generates a draft, someone skims it under a deadline, and hits approve.

That's not human in the loop. That's rubber stamping with extra steps. I think a strong review process is about layered judgment. And a lot of people talk about this, that agency and taste are the two things which will differentiate you as a human being. You have to really excel at it in the world of AI. Because the AI handles the factual assembly, pulling the right technical specs, the right compliance language, the right reference architecture. That's the heavy lifting that used to eat 60 to 70% of an SE's time or a proposal consultant's time.

But then the human brings what AI genuinely cannot, which is deal context. This prospect has been burned by a failed migration before, so we need to over-index on implementation support in our answer. This particular evaluator is technical, I know because I've been on calls with them, and will see through marketing fluff instantly. That question about SSO is a trap question, because they are trying to disqualify vendors who don't support this specific IDP.

So human review has to be strategic, not editorial. You shouldn't be spending time fixing grammar, which I think nobody is doing these days, or looking up whether you're SOC 2 Type 2 certified. AI should get that right, with proper verified sources. And the system has to actively guide that review.

This is something we feel really strongly about. When you open a 300-question questionnaire, the system should tell you these are the 40 answers you should really spend time on. If you change these answers, if you add more deal context, if you think more strategically, maybe about which competitor is involved in shaping this RFP. Spend time on those 40 questions and it can make or break your deal.

Christina Carter (13:03) I completely agree with you. Taste and judgment. Those are the truly human things right now that are going to help you win. I mean with anything, but obviously in sales, like what we're talking about. So if you do review it from that lens, that is what's going to set you apart. And although I think AI can help you with that, it's not going to be that final checkbox for all those things. So completely agree with you there.

That makes me think of so many RFPs where I've sent the proposal response to the CRO or the VP of sales to review it. Like, hey, the executive summary for example. And I'll be like, hey, you've had multiple conversations with this customer. You know everything about them. Can you read through this and make sure that's going to really resonate with the main stakeholder there? And all they do is change some sentences around because they like the way it sounds. But that is not what we need them for.

We need them in there for judgment and for that taste of what the stakeholder cares about. So I feel like that's always been a problem. That's what we've always needed humans for. But for a lot of the teams who are responding to RFPs, pre-sales teams and even quite a few proposal teams nowadays, that RFP is a small part of this larger sales process.

So what do you think about AI extending beyond that response management to support the broader orchestration of these larger deals?

Manisha Raisinghani (14:43) Honestly, this is one thing I'm most passionate about. And it's the insight that led us to build SiftHub the way we are building it. If you look at how most tools in this space were built, and I'm talking about legacy platforms, the new AI native point solutions alike, they see the world as a questionnaire comes in, you answer it, you send it out. That's the job.

And look, that job matters. That job is really important. But it's maybe 20-25% of what a pre-sales or proposal team does in a deal cycle. Think about what happens around that RFP, before it even arrives. There's discovery, understanding the prospect's environment, their pain points, their evaluation criteria. There are security questionnaires. There's a demo which needs to be tailored. There's a proof of concept. Obviously, the longer the sales cycle, the more stages keep getting added.

After the RFP goes out, there are follow-up questions, a technical deep dive with the security team, a pricing negotiation where your SE really needs to articulate value to procurement.

Every one of these moments requires institutional knowledge, deal context, and accurate information about your product and company, which is sitting somewhere in your organization. Either it is in one of your tools, it could be there in your Zoom, Gong, Google Drive, SharePoint, or it is sitting in someone's head or buried in some Slack messages.

So the question becomes, why would you have one system for questionnaires and duct tape together five other tools and tribal knowledge for everything else? That's the question which most forward looking SE leaders ask us.

And this is where the concept of a persistent deal graph matters. Most approaches to AI in this space are stateless, whether it's a legacy tool doing keyword matching or a general purpose LLM making tool calls into your systems. Every interaction is independent. There's no accumulated understanding of the deal. But a deal is a living thing that evolves over weeks or months. The answer you give on day one and the answer you give on day 100 will be very different. The depth will be very different.

For example, the RFP response should inform the security review. The security review should inform what you emphasize in the executive presentation. When your AI layer maintains a persistent deal graph across all these moments and across all of your connected systems, every touch point gets smarter because it has full context. That's deal orchestration.

I didn't use the term deal orchestration at the beginning, because it can sound like a buzzword. But when you go step by step, you realize you really do have to orchestrate the deal.

And critically, the most important thing I would say is this should be model agnostic. Everyone was excited about OpenAI models, then Gemini models, and now Claude models. The LLM landscape is evolving fast, and you don't want your entire deal execution workflow locked into one model provider, where switching means rebuilding everything from scratch. The intelligence layer should let you swap or mix models behind the scenes, because one model might be better at one thing and another model better at something else, while your workflows and your team's experience stay the same. And that's a product architecture decision, one that a narrow RFP tool or a DIY build cannot match.

Christina Carter (19:16) No, I really agree with that last part, especially with being able to use multiple models within the products. For all the reasons you said, but I mean, we've been seeing this in other software, like Notion for example, you can use multiple different models. And I feel like it makes such a big difference in how you use the software and how useful that software is. So I'm all about that. I love it.

And I think along with that deal orchestration, obviously there are multiple steps in the sales cycle, whatever your sales process looks like, and you want it to work across it, but there's also in every single step in that sales process, you're working with a different subset of people, stakeholders within your company to get that response across or whatever it is, like finance, legal, SMEs, product, whoever. So there's a lot of follow-up. There's a lot of orchestration internally for an RFP itself.

So in your view, how does AI change that cross-functional workflow and collaboration and where do teams still tend to get stuck because things are different now?

Manisha Raisinghani (20:39) Yeah, very interesting. I think the larger the ACVs, the longer the sales cycle, and the more team members are involved within the company and at the customer or prospect's side.

Cross-functional collaboration is where most deals slow down and it's the part nobody talks about. Everyone's focused on how fast can I generate a response. Meanwhile, the real bottleneck is the three days you're waiting for the security team to validate an answer or the product manager who's in back-to-back sprint reviews and can't respond to your Slack.

What AI changes is the default. You're not starting from a clean slate. So instead of the SE being the person who has to go track down every SME for every question, the AI provides a high confidence first answer sourced from your actual verified content, your security documentation, your product specs, your Zoom transcripts.

Now when you route something to legal you're not saying "can you write our GDPR answer from scratch." What you're saying is "here's an answer sourced from the approved DPA we used last month. Can you confirm this is still current for this deal?"

That's a fundamentally different task. Think of it like reading your emails. When you can respond to an email with a yes or no, you do it immediately. But when you have to think about the task, you keep putting that email off. Maybe by one week, two weeks, three weeks, four, and it never ends.

That's a fundamentally different way of working. It takes the SME from author to reviewer. It goes from "I need an hour of your time" to "can you give me two minutes of your time and I can close this deal."

And what also matters is because now the responses can be standardized. When five different SEs are using a general purpose AI, they are not getting standardized responses. But with a vertical focused platform, they are getting standardized responses and the reviewing time reduces, the follow-ups reduce, and the collaboration and internal meeting blocks on everyone's calendars reduce.

Christina Carter (23:31) Yeah.

Manisha Raisinghani (23:33) I think you also asked, it all sounds rosy, right? Of course there are always places where teams still get stuck, and I think that is change management. The tool can be perfect, but if your security team doesn't trust the AI answers enough to review and approve them, or your legal team insists on rewriting from scratch because that's what they've always done, then you haven't solved the workflow problem, you have moved it.

And this is where I think it's very important that companies selling AI platforms really think about change management, so that the product speaks to the customer and says, hey, trust me, I'm here to help you.

Christina Carter (24:27) Yeah, absolutely. I think that's a huge step. I think the SME bottleneck problem is probably the most frustrating thing. At least I'm sure it's the same for presales, but especially for proposal managers, they hate it. It is one of the things I probably complain about most.

And I think you're right, going to them, as often as possible, with that first draft to review is definitely the way to get over that first hurdle. And especially, a lot of them don't want to write. They don't like writing. Maybe they don't feel confident in writing. Maybe it's not their first language. It's helping them out in so many ways. So that's one of my favorite things about AI, giving them that first draft to review.

But you spoke earlier about how speed, response speed is the baseline. It's really not that interesting, especially when you need it to be accurate, right? You need the content to be accurate more than to be fast. So what do you think pre-sales and proposal leaders should be looking at in terms of KPIs for the outcomes? How do they know the AI that they're using is doing its job other than looking at the speed? Is there a way for them to test it, I guess is my question.

Manisha Raisinghani (25:51) Yeah, that's a very hard question, Christina. It's really difficult for proposal leaders to think of different metrics.

Speed is the first thing everyone measures and it's the easiest to show on a slide. Hey, we reduced response time by 60%. Great. But if all you got from AI is speed, you probably overpaid for it. And the metrics that we care about and the ones I see the best proposal leaders track fall into three buckets.

First is quality and win rate. Are your responses getting better? Are you winning more competitive deals? The best teams are seeing this because when AI handles the factual assembly, humans spend their time on strategic narrative, the customization, the competitive positioning. That helps with evaluations. So you end up with a response that's both more accurate and more compelling.

I'll give you a real example. One of our customers, a lab company, saw the share of opportunities advancing to the next stage jump from 65% to over 90%. That's not a speed metric. That's a deal quality metric. It means the responses they are putting out are so much more complete, more accurate, more strategically positioned that buyers are saying, yes, move forward, at dramatically higher rates.

So when your AI layer makes the substance of your response better, not faster, it shows up directly in pipeline progression. And that's the metric that gets a CRO's attention. So that's one.

The second is coverage. Before AI, most teams were triaging ruthlessly. They couldn't respond to every RFP, so there would be bid/no-bid, go/no-go decisions, and some percentage of revenue opportunities were getting a no-bid or a half-hearted response. When the heavy lifting is automated, teams can cover more ground without hiring. That's revenue you were previously leaving on the table, and now you have the opportunity for much broader coverage. So that's the second one.

The third, and this is the hardest to measure, but matters most, is team capacity and retention. Pre-sales burnout is real. Proposal consultant and proposal manager burnout is real. Your best SEs don't leave because the work isn't interesting. They leave because they are spending 70% of their time on repetitive knowledge work instead of doing the strategic work they were hired for.

When AI shifts that ratio, you keep your best people longer and they do higher impact work. In fact, in a couple of customers, one proposal manager is now an AE plus proposal manager. He upskilled himself. He was like, hey, my work has become fast now. Let me also take up another role. And he's an amazing AE because as a proposal manager, you know the technical details so well that you're not depending on anyone else to give you those answers.

And in the second customer, the senior proposal manager, he's now handling RevOps for the company as well. So I think we have seen a variety of things where people are also moving up the value chain, expanding their jobs. These are good changes to see when it is impacted by your product.

Christina Carter (29:58) Yeah, that's amazing. I mean, I'm curious, because you have already seen people evolve in their roles, like you said, what do you think, obviously AI is changing most sales roles in general, or most business roles in some way. But how would you say it's changing the proposal leader's role, especially in the next year or two? How is that evolving? And what skills and instincts are going to be more valuable now than they were before?

Manisha Raisinghani (30:35) Very interesting question. I'll take the same theme of agency and taste. I think the proposal leader of two, three years from now looks more like a deal strategist than a content manager. Think about what the role historically has been, right? Managing timelines, tracking who owes what answer, maintaining a content library, ensuring formatting consistency, doing quality checks on 300 questions. It's incredibly important high pressure work and most of it is operational.

The strategic potential of the role has always been there, but people never had the bandwidth to do it. When AI reliably handles your operational layer, the assembly, first drafts, knowledge retrieval, formatting, everything, what's left is pure strategy. Which deals should we bid on and why? How do we position against the incumbent? What's the win theme for this particular proposal? What are the gaps in our story that the evaluator is going to probe?

The skills that become most valuable are the ones that are hardest to automate. All of us know this. Before ChatGPT launched, everyone thought manual, labor-type skills would be automated first. But we were surprised: design, development, marketing, all these knowledge skills and jobs got automated first.

So I think deal judgment is important. This comes back to taste. Knowing what the buyer is really asking, reading between the lines. Competitive instinct, understanding where you're strong, where you're exposed, and how to frame both effectively in the context of the deal.

And narrative craft. The ability to tell a cohesive story across 200 questions that doesn't feel like 200 disconnected answers.

And I think this is something we have seen happening. I don't think it's two, three years down the line. What will happen, which people are not talking about enough yet, is the proposal leader of the future is also an agent manager. Think about it. You will have AI agents that handle first draft generation, agents that are doing compliance checks, agents that pull competitive intelligence, agents that keep your knowledge base fresh.

The proposal leader's job isn't just to use these agents. It's to train them, tune them, teach them the company's voice, the vertical-specific nuances, the things that make your responses sound like your company and not generic AI output.

The job is like managing a team of incredibly fast junior analysts. The value isn't in doing the work yourself. It's in setting the standard, giving the feedback loops and knowing when the agent output is good enough versus when it needs human elevation.

So I think the best proposal leaders will be the ones who get exceptional output from the agents because they have invested in training them well.

Christina Carter (34:13) Yeah, who better to do that than the people who know the best practices and first principles for getting really great outputs, and how to win? If you know that and you can set up an agent for it, you are the person to do it. Some random person off the street can't do that.

Manisha Raisinghani (34:21) Yeah. And also proposal leaders who resist this evolution are going to struggle not because AI replaces them, but because their peers who embrace it will cover more deals, they will win at higher rates, they will operate at a level of strategic impact that makes the old operational model look unsustainable. So the teams that figure this out first get a genuine competitive advantage. And it compounds. It compounds every single day. So the best time to start was yesterday. The second best time is right now.

Christina Carter (35:16) I completely agree with you. I mean, the vast majority of people I think will be listening to this are already like, they get it, they're on AI. But if you're listening to this and you have a proposal or solution consultant friend who's an AI skeptic, I think this is the episode to send them. Yeah, I'm with you.

I have one last question for you though. And I ask it to every co-founder, every single CEO that I talk to. And that is, you speak with a bunch of CEOs yourself, every single day, CROs, revenue leaders. And so you're hearing them talk about their pre-sales team, you're hearing them talk about their proposal teams. And so you're hearing them say probably both good things and maybe not so good things.

So I'm curious, the teams who are getting talked about in a really positive way, what are they doing differently from the teams who are not getting that positive reaction from the revenue leaders?

Manisha Raisinghani (36:16) Yeah, that's a great question. I think there are a variety of thoughts here. Different sizes of companies are thinking differently.

Mostly with CROs specifically, what I have seen is they are trusting their technical counterparts a lot more. What I mean by this: when it comes to evaluating AI tools, even for their sales teams, they're relying a lot more on pre-sales leaders, because pre-sales leaders are technical. They are trusting them and saying, hey, you guys are technical.

I rely on you for revenue generation and technical wins, so I trust you with our internal AI ops as well. Earlier we had all these titles, like digital transformation leads and CIOs, and people tried forming AI councils in 2024 and 2025.

But we saw that fail miserably. Now every department leader owns their own AI tools, and CROs are relying on pre-sales leaders to help them decide where to use AI, not just for productivity gains, but mainly to improve win rate and grow the top line.

Christina Carter (37:59) That makes a lot of sense. Let's say people are listening to this, they're hearing you speak, they're really interested in SiftHub. Where can they find you? Where can they reach out to you or to SiftHub? Where are the best places?

Manisha Raisinghani (38:14) I think you can find us on LinkedIn, on Twitter, and please feel free to drop me an email directly. I'm at manisha@sifthub.io. Would love to hear your thoughts, how you are thinking about AI, what your teams are doing, and see if we can help you anywhere.

Christina Carter (38:33) Yeah. And we'll put all those links in the show notes so people can reach out to you really easily and find SiftHub really easily. But yeah, thank you so much. I loved this conversation and I know it's going to be incredibly helpful for the proposal and pre-sales people out there who are trying to get their stuff together and figure out what's next for them. So thank you.

Manisha Raisinghani (38:52) No, thanks again, Christina, for having me. As I said, you have built something really special with Stargazy. There aren't many spaces where proposal and pre-sales leaders can have honest, practitioner-level conversations, and I really love what you're doing. Thank you for having me.

Christina Carter (39:12) Yeah, thank you so much.