In this episode of The Stargazy Brief, Chris sits down with George, founder and CEO of 1up, to unpack how RFPs and security questionnaires are actually being evaluated in 2025.
Drawing on his background as a former cybersecurity founder, George explains why AI-driven review is changing qualification, why “confidence scoring” can hurt proposal credibility, and how automation enables teams to respond to more opportunities without burning proposal resources.
The conversation also explores the rise of the “knowledge engineer,” the future of self-service Trust Hubs, and what proposal teams need to do now to stay relevant as buyers and procurement teams rely more heavily on AI.
This episode is a must-listen for proposal managers, sales engineers, and GTM leaders navigating RFP automation, AI-assisted responses, and evolving best practices.
Chris @ Stargazy (00:08): Hey, welcome to the Stargazy Brief. Today we are talking to George, who is the founder and CEO of 1up. He has a background in cybersecurity. He has already founded and sold a cybersecurity company, where they responded to RFPs, security questionnaires, DDQs, you name it, on a daily basis. They understood how tough it was to respond to RFPs, and on the back of that built 1up.
If you don't already follow them for the memes alone, go do it. The main thing you're going to get out of this conversation today is, number one: how your RFPs are genuinely being evaluated nowadays because of their software—they have the data around this—so it's important to listen.
Number two is the proposal tech feature that you're probably seeing all the time, but it's actually worse for your responses. So pay attention.
And number three: his take on qualification is nuanced. I don't agree with him, but it's worth listening to because he makes some really valid points about how we should be changing our qualification process.
I hope you enjoy listening to this as much as I enjoyed speaking with George, one of the coolest people in our industry.
Chris @ Stargazy (01:14): Hey George, how are you? Thank you so much for being a part of The Stargazy Brief. I'm really excited to have you here.
George (01:19): I am so happy to start my day with this.
Chris @ Stargazy (01:21): Yeah, of course. I've been wanting to talk to you because, of course, 1up is—it's almost impossible to not see 1up on LinkedIn just because of your branding. It's incredibly fun. You have memes and blog posts that are exceptionally relevant to the space. But obviously, you came from a very different background from proposals. You came from cybersecurity.
Of course, you have to then respond to a bunch of security questionnaires. And I'm assuming that's your background and how you got into this in the first place.
George (01:50): Yeah, you nailed it. So before 1up, I was the founder and CEO of Hyper. It's a cybersecurity company where we raised $100 million. We deployed passwordless authentication worldwide. It was a really fun space. The problem with that industry is you get a ton of RFPs, DDQs, questionnaires for every little thing. And they're a nightmare to fill out. As you know, they take forever and they're very technical.
And the worst experience I ever had was we spent two weeks filling one out for like a Fortune 100 company. And on the last day, we opened the answers tab that was hidden and we found that our competitor's answers were pre-filled. The customer had forgotten to remove them. So this was a mind-blowing experience for us. We didn't even wanna submit a proposal after that. And that was the genesis for me thinking, wow, wouldn't it be great if this was just automated?
Chris @ Stargazy (02:39): Yeah, and I know that the vast majority of people listening to this have had a similar experience to that, where they've found out quite literally while reading the proposal that it was baked for somebody else.
That kind of leads me to a hot-topic question, and that is: do you really think that should have been a response that was automated, or one that you just shouldn't have responded to in the first place? Or is there some third option that you think would have been the best way to go?
George (03:07): So I'm gonna—I have a hot take on this. I don't think there are any opportunities you should not respond to. We ended up winning that account.
Chris @ Stargazy (03:14): Whoa, take me down that story.
George (03:16): We were not the preferred vendor. We lost that RFP cycle. It was never meant for us to win. But engaging with the customer, we started cultivating a champion there so that when that competitor came up for renewal, we were fast-tracked into displacing them.
They had never fully deployed. People hated using the tool. It was way overpriced. And because we had built that relationship, we were just slotted in. We had deployed within weeks after that.
We ended up winning that deal. They're still a customer to this day of Hyper. And what I would say is the reason for that is because we didn't just sit there and go, we're not responding to this. We did respond to it.
And my argument for automation is if you're in a world where these responses are automated, you can take on more of these opportunities that you feel are really poorly qualified or you're not going to win them. I mean, what's the worst that could happen? They're going to remember you a year from now.
I really don't believe in the decline to respond kind of experience. Yeah, that's my hot take.
Chris @ Stargazy (04:12): Do you think that is true for most of these RFP responses, most of these proposals, or do you think it's for a particular industry?
George (04:20): I think it's for a particular sales org. If you have a sales org that's front and center cultivating champions, they're out there, they're building relationships and they're going to get an RFP now and again, that's not for them. But you can maneuver around that. You can win the deal later. You can make friends at this company. They'll call you when stuff breaks, right?
So the common experience is: I'm on a proposal team, I got something in.
Chris @ Stargazy (04:22): Okay, talk to me about that one.
George (04:47): The SE, the AE may not have ever even interfaced with that prospect. You're 100% right that you shouldn't respond to that. You're just due-diligence fodder at that point, right?
But if you have an account executive engaged in the deal, why not? I really don't see the downside of it if there's automation.
Certainly if you have to stop what you're doing and do this manually and you're definitely gonna lose, that's a different story altogether, which is why I go back to my point of if it's automated, if you've got a sales team that's qualifying these things, you should not decline to respond. There's no downside.
Chris @ Stargazy (05:18): Yeah, do you think there's anything else besides just responding to an RFP that an AE has some relationship with? Is it just the response or is there something else that they should be doing like follow-ups? What do you think the best practice is here?
George (05:31): Well, any good AE knows to make a friend, not a customer. A good, great AE, the best ones I've seen, they're friends with the customers throughout the cycle. They stay friends with them through the lost deal.
A great account executive cultivates that relationship, loses the deal, still hangs out with that person. They still stay in touch. That's how they win the deal later.
So I think you almost have to remove the proposal from that process, and my one-liner would be: proposal teams should start thinking a little more like an AE, and AEs should start building those relationships with their proposal teams. Because right now things are kind of on an island.
So when I say this to a proposal person, they're like, I don't even know my AEs, like why would I believe that they can maneuver this? And I think that's a shame.
Chris @ Stargazy (06:15): I always have enjoyed my role so much better when I'm closely embedded within the sales team personally, and I feel part of the team.
So as somebody who obviously sells quite a bit, has been a CEO more than once, what do you think proposal people can be doing to make friends? How can we make friends with our sales team?
George (06:39): How can you make friends with the sales team?
Chris @ Stargazy (06:41): Yeah, we'll be buddies.
George (06:43): Bring them leads.
Chris @ Stargazy (06:45): Okay, fair.
George (06:45): No, I'm serious.
You know how sales teams make fun of marketing? It's like a meme.
I think if the proposal team helped with sourcing a bit, that doesn't mean go out there and bring an opportunity. I don't think most proposal teams are set up for that, but I think if the proposal team was a little bit on the standups, asking the reps, can I look through your pipeline? Can I see what you're working on? Maybe I know someone in that account.
I think if you just build that casual relationship, you'll make a friend on the sales org, right?
And then when the time comes up and the sales rep says, hey, I'm working Bank of America, right? Do you know anyone there? You never know. A proposal person might know a procurement person they've worked with in the past, right?
So I think communication is key and a lot of it's lacking because what you have with these RFP teams is they're on an island somewhere and then the sales team is out in the field and you got to bridge that gap. And it's actually very easy to do.
You ever been to a sales kickoff?
Chris @ Stargazy (07:41): Yeah, I'm an old lady. Yeah, I've been to a lot.
George (07:41): Did you find them valuable? Did you find them a valuable use of your time?
Chris @ Stargazy (07:45): No, not at all.
So I'll be quite honest with you. I was always friends with my sales team anyway, but it really was all about finding how they can find leads and the email flow they need to have. And it was almost like SDR level stuff. And so to me, that was not of interest.
George (08:05): So I think what I've seen in a lot of accounts is when I talk to people, I'm like, your sales kickoff's coming up. Are you all going out there? And the proposal team has nothing to do with it.
And it's almost like, wait a minute, you're helping them win these deals. Sometimes you're helping them source these deals. Why aren't you in Miami with your sales team or wherever, right?
Because in my experience, some of the best relationships are built at these events. That's where you pull in marketing. That's where you pull in people who aren't necessarily closing deals and it would be awesome to see some proposal managers at these events.
So if you're listening to this and you have an SKO coming up, go to your boss right now and demand that you be included and maybe bring one of your proposal people with you.
Chris @ Stargazy (08:42): No, I completely agree with you on that. That is a place to build relationships.
But I love the idea of bringing leads to people because I've never thought about that before. That's really clever.
George (09:02): Yeah, why are proposal people just sitting there waiting for stuff to come in? Go out there and help the sales team.
I think everybody in the company should be a sales person at the end of the day.
Chris @ Stargazy (09:20): Yeah, I'm with you, George. I love that.
And I'm going to go back up to 1up because obviously you built this tool because you thought there was a problem.
There are other proposal tech tools out there. You're not the first one out there. But you did see that there was a need, obviously, when you were responding to these RFPs that you weren't going to win. You thought I need to automate this.
But once again, there are other ones that do automate it. So I'm wondering, what was the bet that you were making on 1up to be the true differentiator in a market?
George (09:29): Well, I didn't build it myself. Some really smart people built it.
Chris @ Stargazy (09:43): You're not the first one out there.
George (09:58): Well, I had a tool when I was at Hyper. I'm not going to say who it was. It was one of the big RFP tools and we ended up churning off of it. And I also had a sales enablement tool.
So I had an RFP tool, a sales enablement tool and a knowledge management tool. I think we were spending like $100k plus on all three annually. And at the end of this one-to-two-year cycle, my head of sales engineering was like, dude, we haven't automated anything.
We're still getting questions from the AEs that we can't answer. We're still filling out RFPs and questionnaires manually. And I spend all day maintaining the knowledge base. When I get an RFP, I have to go in and review the Q&A from past RFPs. And I spend more time doing that than filling out the new one. How do I rationalize this?
So the idea with 1up was that it's not that the RFP process is broken.
I think the RFP process from some of these tools is fantastic. They've done a really good job of project managing it. I didn't think that that's what was missing.
I thought what was missing was a knowledge automation tool for the sales org.
I think that you can buy all of these tools in the sales stack—the knowledge management stuff, the enablement stuff, the prospecting stuff, the RFP management stuff—and you still end up with reps asking questions in Slack every day.
So the vision with 1up wasn't just let's automate RFPs.
It was let's automate knowledge for the GTM org, make every product question answerable and a workflow you build on top of that is RFP automation.
So I hope that gives you an idea of how we were thinking about it differently because to your point, not only weren't we the first, we weren't the fifth, sixth, seventh, or even the 10th. There were so many tools out there.
Chris @ Stargazy (11:36): What's the biggest bet your product makes about how proposals should work and how is that different from how most teams operate?
George (11:42): Self-service.
I think that proposals in the future—at least these questionnaires—are headed that way, and you see this with our Trust Hub product at 1up. I'm not trying to plug it. I'm just saying the vision for Trust Hub was: can you self-serve some of these answers?
So when a customer comes into your pipeline and you know they're going to send you a questionnaire, why can they not just upload that questionnaire to a customer facing page, like the 1up Trust Hub?
And you get it, it's pre-analyzed for you. You can take a look at it. You can decide if you want to respond or not. But potentially, you could just say automate, and it should automate the answers for you.
I think there's a huge opportunity for inverting this process and making it self-service.
Because we have that with a lot of other aspects of the GTM experience, right? Customers can ask support questions all day. They're very rarely talking to a human. Customers can read your website and get all kinds of marketing and docs about your products and stuff.
But when it comes to filling out a questionnaire, they have to send it to an AE. The AE has to stop what they're doing, read it, go get a proposal person, a security person, maybe someone else, get a green light to fill it out. That whole thing is crazy. That's a legacy process.
Why can't we let the customer upload that questionnaire and maybe just say, okay, answer this and get answers.
So I think that's one behavior that's gonna change over the next months.
Chris @ Stargazy (13:06): Mm-hmm, I think you're right.
I'm seeing this quite a bit now, but I'm wondering what types of customers you're seeing this with. Is it a specific industry, a size of company, or is it across the board?
George (13:20): I don't think it's a type of customer, it's a type of proposal.
Functional requirements, structured questionnaires, stuff that's not free form narrative—that's a great candidate for this use case.
Stuff that's like a hundred-page PDF that you have to absorb and then build your own book out of—that's not a good candidate.
Chris @ Stargazy (13:38): Who do you think should be owning these trust hubs?
George (13:41): The sales org.
Chris @ Stargazy (13:42): When—go ahead.
George (13:42): I think the sales org lacks a customer facing automation experience.
Chris @ Stargazy (13:48): But within the internal sales team, who should be the person or process to make sure that the trust hub stays updated? What does that look like?
George (13:56): So I think we're actually writing a post about this. I'll send it to you in a few days. It's called the rise of knowledge engineering.
And the knowledge engineer is an interesting role that we've started seeing following the rise of RAG based systems, knowledge automation systems, AI systems.
If you talk to all of our customers, they have someone internally who's handling the AI.
They might be a different person on different teams. There's an HR department, someone there uses their HR AI. There's a marketing team. They've got 10 different AI tools, but somebody is delegated to be the AI person, right?
So I think the knowledge engineer in the sales context tends to be solution people. They're very technical. They want that job. They love working with these systems.
I've got some scenarios where it's the AEs. And here's what's crazy about it. 10 years ago, if you said, hey, account execs are going to be maintaining AI knowledge systems, people would have laughed at you. They would have been like, what are you talking about? I can't get my AE to update Salesforce.
But we actually have that today in some of our accounts.
So I hope that answers your question. I think 90% are solutions teams, maybe 10% are AEs.
And let's not discount the RFP folks. Some of them do maintain the AI, but I do think the source of truth in a lot of these orgs is the sales engineering and solutions teams.
Chris @ Stargazy (15:13): Yeah, let's say we have a proposal person who's like, that's me. I want to do this.
What do you think they need to be doing to maybe get that role if it doesn't exist yet within their company, or to move on and get that role somewhere else? Is there a specific skillset they should be working on?
George (15:28): Yeah, it's the person who understands the product best.
I've met proposal teams who don't understand the product at all. They're glorified copy-paste people.
I've met sales teams who don't understand the product at all.
You want someone who understands the product inside out.
Because if you're gonna designate a knowledge engineer in your org, they have to know where the knowledge is, they have to know how to pull it together, and product knowledge is that big gap.
We've had knowledge management for years. It's not cool, it's nothing new.
But product knowledge is a huge gap, and you probably know this, because anytime you have a product question, you go ask a human.
Can we automate that?
So proposal people, if they wanna fill that role, they just have to write it on a whiteboard: who on our team knows the product best, and that's your person.
Chris @ Stargazy (16:14): Yeah, I know a lot of proposal people who are obsessed with the product probably do know it backwards and forwards. They just don't demo it.
So in terms of RFP habits, you're already thinking of changing up how proposals should be.
Do you think there is a specific workflow or habit that needs to end—something they need to stop focusing on—because it isn't important or effective anymore?
George (16:48): I'm not a big believer in the go/no-go part of the process.
I think any good account executive knows within five minutes of receiving an RFP if it's qualified for them, if it's due-diligence fodder, if it was made by a competitor. I don't think you need a tool to do that.
I don't think you need a process to do that. There shouldn't even be a stage.
Ultimately you get it. Let the AE use their gut instinct. And if they can't, I'm sorry, you need a different AE on that account.
So hot take, 'cause I know it's built into tools. I know our customers have this process, but I think the go/no-go has got to go. Pun intended.
Chris @ Stargazy (17:25): Okay, so you already know that I was gonna push back on this.
The reason is: usually there's a qualification process because the AE doesn't have great intuition, or they have bad pipeline, or they're not the best at their job.
We all know AEs who aren't that great and you almost need that to help them make good decisions.
It's literally there for them. It's there for the low bar AEs.
George (17:57): But this goes—so in a perfect world, I agree with you.
It's there for the low bar AEs and you want to be the arbiter of is this qualified or not.
Remember my thesis statement is that we should respond to everything and we should respond to everything because it's an AI reading it. It should be an AI writing it. And our job is to review and make sure that it's true and it's accurate.
So if it's an AI reading it and an AI writing it, why are we doing all this hand-wringing around responding?
That stems from a place of not wanting to burn proposal people's time. And that's a good place to be. You don't want to waste people's time.
But if we get to this automated state, I think that goes away.
I think you can let these AEs make their own decisions.
I actually predict the world where customers will upload RFPs, they will fill themselves out. And you as a proposal leader simply decide—you have final cut—you decide if that's ready to go to the user.
I think that's where we want to get to. And I think the go/no-go part of that just goes out the window at that point.
Chris @ Stargazy (18:54): I think you might be right in terms of the more straightforward technical requirement portion of RFPs or if you're in that sort of industry.
But at the same time, even you don't think you should necessarily go for everything. Because before, you were saying that if the AE doesn't have any sort of relationship there, then you are just that fodder for the procurement team.
And I've worked with mostly amazing sales teams, but there are still some AEs who would want to respond to everything—large, complex, difficult RFPs that would take proposal team time, solution consulting time, product time, finance time, legal time.
So we did have to have someone to say no.
And it usually was that data-backed go/no-go process, and without that, they would have gone for everything, no matter what.
George (19:48): So I guess that's where we disagree. I want them to go for everything.
If I could wave a magic wand, every questionnaire would be responded to, every opportunity would be taken.
Even if there's a 5% lift in pipeline, I think that's worth it.
But that's predicated on what you said: the type of AE, the type of RFP.
If it's a free form narrative driven question kind of proposal, you're not automating that. You're not automating that today. You're not even automating that five years from now, in my opinion.
Chris @ Stargazy (20:15): Then what would you suggest to the proposal team that wants to be useful? They want to help with pipeline, but they have a few AEs who want to respond to everything, and they are those low-bar AEs.
Let’s say you do have some sort of tool that automates the response maybe to 80%, but you still need legal, finance, solution consulting, and your own time to respond.
What would you suggest to those people who work with a sales team who does want to respond to everything and doesn't really use their intuition as well?
George (20:52): I'll tell you what we see in the field.
A lot of our customers, when they start with 1up, they have a very rigid workflow that's built around manual process.
Then we talk to them six months in and we're like, what has changed about your workflow? And they're like, I let the AE log in and upload the thing. And then I look at it, I assign questions out. I get legal to look at it. I might get security to look at it. I have final cut.
The admin has final cut, but the AE has just uploaded it and the AE gets something back.
So the difference there is you can give these AEs self-service power now. They didn't have that before.
Before, they had to send it to some RFP team, send it to some SE, hope that it gets done.
I've been in scenarios where the AE is standing over the shoulder of the solutions consultant and making them fill it out.
We got to get away from that.
So you gotta be able to upload the questionnaire, get a draft back, but someone internally—like I said earlier, it's that knowledge engineer—they have to have final cut.
You don't want to give the junior BDR final cut over an RFP. They're just gonna say yes to everything.
Chris @ Stargazy (21:59): Cool. No, that answers my question.
George (22:01): We don't have to agree.
I walk around in a founder reality distortion field where some of this stuff I say sounds like complete BS. So I have to be aware of that and you got to keep me aware of it.
Chris @ Stargazy (22:12): No, I love to fight. So especially about qualification.
Because you talk with so many people who are trying to purchase these tools, do you get any pushback about AI? Do you see anything, even after they purchase, where they have a hard time adopting the tool?
George (22:34): I actually would flip that and say, we push back on AI as we sell to them.
There are cues and steps in our product where we educate them on stuff that AI is good for and not.
I'll give you some examples.
We have customers who do the 1up trial and they come and they buy. They say, I was blown away at how good the answers were. But I noticed that when it answers things, it takes them in small chunks at a time.
And I said, well, what's different about that from what you've been doing? They're like, well, I upload a file to Claude and I get back a bunch of answers, but they're not great. They're very thin.
And I'm like, well, here's the thing. You're uploading a lot of information. You're throwing it all into one prompt. And then you're getting back something you're unhappy with.
These systems degrade with size. The broader the use case, the bigger the context window, the more information you're trying to shove in there, the worse they get.
But the user doesn't know this. They're getting conditioned by the model providers to think that a bigger context window is better—the more stuff you can load in, the longer the answers, the longer it's thinking, obviously my output's gonna be better.
So we have to re-educate them: you want small information to go in, you want small information to come out, you just want to do this a thousand times, fast.
That's a small example of re-educating people who've been conditioned.
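[Editor's note] George's "small in, small out, a thousand times" approach can be sketched roughly like this. It's a minimal illustration, not 1up's implementation: `answer_one` and `retrieve` are hypothetical stand-ins for a single tight-context model call and a retrieval step.

```python
def answer_one(question: str, snippets: list[str]) -> str:
    # Placeholder for a real model call: small question in, small answer out.
    return f"[answer to {question!r} from {len(snippets)} snippets]"

def answer_questionnaire(questions: list[str], retrieve) -> list[str]:
    # One tight prompt per question, instead of one giant prompt for the file.
    answers = []
    for q in questions:
        snippets = retrieve(q)[:5]  # cap context: only the most relevant chunks
        answers.append(answer_one(q, snippets))
    return answers

# Usage with a stub retriever that returns ten candidate snippets per question.
qs = ["Do you support SSO?", "Where is customer data stored?"]
out = answer_questionnaire(qs, lambda q: ["kb snippet"] * 10)
print(len(out))  # → 2, one answer per question
```

The point of the loop is exactly what George describes: many small, fast calls with capped context, rather than shoving the whole questionnaire into one degrading mega-prompt.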
Chris @ Stargazy (24:11): I love that you are pushing back and you are re-educating people because I think a lot of times people are purchasing these tools and they know that they need to adopt AI within their go-to-market processes.
But that doesn't mean they know everything.
I think they want to be able to rely on you for that knowledge and that help along the way.
George (24:29): That's what we hear after they start using our tool.
Chris @ Stargazy (24:32): Yeah, I think people want that. I don't think people want to just be left to their own devices because, like you said, most people aren't—they don't have PhDs in machine learning.
George (24:42): Some pushback we hear is people are still unsure of how answers come together.
We're very transparent about how answers get generated. We show them the exact sources. We don't go beyond that.
People still call us sometimes and they say, hey, I got a bad answer. I want you to look at it.
And we look at it and we're like, hey, this is your answer that you saved.
And they're like, what do you mean?
I'm like, well, here, look.
And they're like, yeah, but we don't do that.
I'm like, yeah, but this person said you do. So you've taught the AI that you do this.
And now you're complaining that it's spitting it back out to you.
So I think a lot of people push back on answers that they're getting back, but then they're realizing how these answers are coming together.
And I think that's simultaneously pushback and also an education.
Some of this stuff isn't hallucination, it's just me teaching it bad answers.
Chris @ Stargazy (25:45): Yeah, so what would you say to go-to-market people who are trialing a tool like 1up and getting some crappy answers back? How do they know if it's the tool that sucks or the content they're putting in that sucks?
George (25:58): First, ask the tool: are you going beyond my knowledge base?

There are tools out there that, when they can't give you an answer from their knowledge base, go to the broader web, go to the model. That's where you have a high risk of hallucination.
So if that's not happening, open the answer, look at the sources, read where they're coming from.
You'll see that there's something in there that's influencing the answer.
I had a customer who said that they did a certain feature that not only do they not do, they actually advise against doing—but someone at some point had uploaded information straight-up lying and said that they did.
And you don't want to judge them. You don't want to tell them, hey, you kind of fudge that answer.
Now it's coming back.
You just got to tell them to be mindful of this.
And I think they're getting better at it.
Because now they're thinking, it's not what the AI is hallucinating, it's what did I lie about six months ago?
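[Editor's note] The first check George mentions—does the tool go beyond my knowledge base?—can be expressed as a simple guardrail. This is an illustrative sketch with a toy keyword retriever, not how any particular product implements it:

```python
# Sketch: refuse to generate when retrieval finds nothing in the knowledge
# base, instead of silently falling back to the model's general training
# data (the high-hallucination path described above).

def retrieve(question: str, kb: dict[str, str]) -> list[str]:
    # Toy keyword retriever: return KB entries sharing any word with the question.
    words = set(question.lower().split())
    return [text for key, text in kb.items() if words & set(key.lower().split())]

def answer(question: str, kb: dict[str, str]) -> str:
    sources = retrieve(question, kb)
    if not sources:
        return "NO ANSWER: not in knowledge base"  # surface the gap, don't invent
    return " / ".join(sources)  # answer built strictly from cited sources

kb = {"sso support": "We support SAML and OIDC SSO."}
print(answer("Do you have SSO support?", kb))   # → answered from the KB
print(answer("Do you run on mainframes?", kb))  # → refused, not hallucinated
```

A real system would use semantic retrieval rather than keyword overlap, but the guardrail is the same: if the sources are empty, return "unanswered" instead of letting the model improvise.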
Chris @ Stargazy (26:51): Yeah, don't lie in your proposals, folks.
George (26:53): I'll give you another one. This is real bad.
Have you ever asked or been told that they're pulling knowledge from Slack into these?
We tested this. We ran it with a few customers and they came back to us.
Our thesis was that it was going to be junk.
And they're like, it's not only junk, it's lethal. Get it out of our system.
When customers ask us, hey, can I pull information from my Slack logs? I'm like, not only will we not pull it, you couldn't pay us enough to do that.
Chris @ Stargazy (27:09): Yeah, I believe that.
Obviously we're seeing a bunch of different features in these tools and I'm wondering if there's one that you think is just bad or made up to make people feel better.
George (27:30): Bro, you know what I'm gonna say.
How do I do this without hurting people's confidence?
Confidence scoring.
It's neither confident nor is it a score.
This is a really common feature.
You see it in old and new tools and it demos really well.
When people see it, they're like, whoa, it's a confidence score on my answer. That means high is good and low is bad.
It's a great demo.
Every customer we've talked to who has implemented it, the first time that it scores something high that's actually bad, they lose all credibility in the tool.
And the technical reason for this is simple: you should never have an AI grade itself. You should never have it grade its own answers.
With a lot of these confidence scores, that's really what's happening. Here's an answer, how did I do?
Or there's some other methodology behind it, but you're asking it to grade itself.
The only person who should be grading this is a human.
Every junior ML engineer knows this. Every data scientist knows this.
And I think with some of these tools, what you're seeing are features that demo really well and probably get customers excited.
But then when they start using it, it leads to stress and it leads to a complete lack of credibility.
So not only do I push back on confidence scoring, I actively advocate and say, you've got to stop putting this stuff in products.
Even to our competitors, it's not serving you.
They start using it and they blame you for what's really the model's fault.
You can't have the model grading itself.
What do you think of that?
Chris @ Stargazy (28:59): I've always hated it for the exact same reason that you just mentioned. You're asking someone to grade themselves.
We're not accurate judges of ourselves and AI is the same way.
George (29:17): If you generated an answer and then I asked you to grade it and you said it was a poor answer, why'd you generate it in the first place?
Chris @ Stargazy (29:23): Yeah. But then also, why do you think that they're doing this? Why do you think tools in general are putting in confidence scores?
George (29:30): Confidence scoring stems from a search-based system, an era of search relevancy.
Relevancy scoring at the search level is normal. You had this for years.
You're doing sparse or dense search, you've got scores, you're ranking them.
And confidence scoring comes from an era where a lot of these tools were search based and scoring the results was totally normal.
There's technically nothing invalid about that.
Now you've entered an era where the answers are conversational. You've gone beyond just search and you're still applying confidence scoring from that era.
But you're positioning it as applying to the quality of the output.
That's a big mistake.
I think people are asking for it. So competitors are delivering it.
But you could just ditch it and nothing would change. It would actually improve the experience.
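George's distinction between retrieval-era relevance scores and answer quality can be sketched with a toy example. This is purely illustrative code, not any vendor's actual implementation: a simple cosine-similarity score is a perfectly valid way to rank documents against a query, but it says nothing about whether a generated answer is correct.

```python
import math
from collections import Counter


def relevance_score(query: str, doc: str) -> float:
    """Term-overlap cosine similarity: a classic search-era relevance score.

    It measures how well a document matches the query terms. It does NOT
    measure whether a generated answer built from that document is correct,
    which is why repackaging it as answer 'confidence' is misleading.
    """
    q, d = Counter(query.lower().split()), Counter(doc.lower().split())
    dot = sum(q[t] * d[t] for t in q)
    norm = math.sqrt(sum(v * v for v in q.values())) * math.sqrt(
        sum(v * v for v in d.values())
    )
    return dot / norm if norm else 0.0


docs = [
    "We encrypt customer data at rest with AES-256.",
    "Our office hours are nine to five, Monday through Friday.",
]
query = "data encryption at rest"

# Rank documents by relevance, highest first -- valid at the search layer.
ranked = sorted(docs, key=lambda d: relevance_score(query, d), reverse=True)
```

Scoring and ranking at this layer is, as George says, technically valid; the mistake is surfacing the retrieval-layer number to the user as a grade on the conversational answer's quality.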
Chris @ Stargazy (30:34): Yeah, we're going to get over the fact that you're ruining the environment, George, with your car.
But we're going to go right into it: what do you think people need to be using to give a confidence score on their responses?
George (30:47): I'll tell you what we do at 1up. Do you want to hear that?
Chris @ Stargazy (30:49): Yeah, I'm curious.
George (30:51): 1up has a completeness rating: after conducting its final self-assessment, 1up determines whether the question being asked is getting a full answer, a partial answer, or no answer at all.
That's a very simple, categorical system.
Something either is or is not an answer, or maybe it's partially an answer.
That's a totally different way of classifying answers.
That's not on a scale, it's not on a scoring system, it doesn't rely on a legacy method, and it also doesn't assess the quality of an answer.
It simply says, the thing that I gave you might not be great at all, but the content of it at least addresses all parts of this question.
We've had a positive experience with it, and our customers have told us the completeness rating is useful because it tells them where they need to spend a little more time.
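The full / partial / unanswered idea can be illustrated with a naive keyword heuristic. This is a hypothetical sketch for illustration only; 1up's actual method is not described in the episode.

```python
def completeness_rating(question_parts, answer):
    """Classify an answer as 'full', 'partial', or 'unanswered' based on
    which parts of a multi-part question it addresses.

    question_parts: a list of keyword lists, one list per sub-question.
    A toy keyword check stands in for the real self-assessment here.
    """
    text = answer.lower()
    covered = sum(
        1 for keywords in question_parts if any(k in text for k in keywords)
    )
    if covered == len(question_parts):
        return "full"
    return "partial" if covered else "unanswered"


# A made-up three-part security question: encryption, backups, audits.
parts = [["encrypt"], ["backup", "back up"], ["audit"]]
```

Note that the classifier never claims the answer is good, only that every part of the question was at least addressed, which is exactly why it avoids the self-grading trap.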
Chris @ Stargazy (31:42): Yeah, it's more objective.
And I'm moving away from just features in general.
A lot of the people who are listening to this are working in proposal teams.
Every single go-to-market role is changing.
But I'm wondering if you're seeing a certain skill set that's helping them stay relevant now, or things they should be working on today to stay relevant in marketing and sales teams in the future.
George (32:09): The people who are good Googlers are running the show now.
If you can Google well, you are now the AI person, or knowledge engineer, or whatever you want to call it.
Because I think these folks had a really good understanding of search.
And when you're talking to a chatbot, or inputting an RFP for automation, thinking of it as "I'm about to Google a thousand things at once" is the mindset that gives them an advantage.
I don't think it's something technical. I don't think it's industry related.
People who were good Googlers are getting a ton out of these systems.
And the people who ask others to Google for them, you know where they end up. They're dissatisfied, they're confused, they don't understand the results they got, and they defer to someone else.
Chris @ Stargazy (32:53): So do you think then it's about curiosity and being able to just find things and figure things out?
George (32:59): Yeah, that's a good way of putting it.
Good Googlers, they're skeptical of the results. They look at the sources, they look at multiple sources. They know how to research.
And this is like a personification of that in the AI.
Let me ask you a question. Are the best proposal writers the best writers? Are they the best researchers? What do you think?
Chris @ Stargazy (33:17): No, I think it's the people who are really good at seeing what the customer needs to hear and wants to hear, specifically the champion, to be able to sell to whoever the executive buyer is, that economic buyer.
If you can get what that is and give them the leverage to give to that person, that to me is the best one.
But that comes with research. You're not going to know that unless you've done the Googling or whatever it is that you need to know within and outside the company.
Obviously you have already founded a company. 1up is not your first.
So what I'm curious about is what you've learned from your last company. What's something you learned not to do?
George (33:58): I would say a majority of my learnings were things not to do.
I don't know how much time you have. Contextualize it for me: from a founder's perspective, from a product perspective, what are you curious about?
Chris @ Stargazy (34:02): That's fair. Let's do it from a go-to-market perspective.
George (34:12): Yeah, sure. Nobody cares about your product.
Stop talking about your product.
The people listening to this, you get pitched RFP tools. Since the time I started talking, you've probably received three pitches in your inbox.
Do you care about any of those products? The answer is probably no.
Nobody cares about your product.
The American consumer, and more broadly, the global consumer, they have an affinity for branding, messaging, feeling, and they like when people understand their problem.
So the real GTM aha is if you spend more time talking about the problem and you spend more time messaging to these people as human beings and less time pitching your new feature.
No one cares that you added another button that does an export in three formats.
The people you're trying to GTM to, they're people and people are attracted to branding, feeling, messaging and relatability around their problems.
The more time you spend talking about the problem without positioning your own product, the better.
One small example: you read blog posts from a lot of these companies that talk about the problem space.
Most of them pitch their own product in the blog post.
Do not pitch your own product in the blog post.
Is that the point?
Why is that the point?
If the point of every blog post is to pitch your own product, you're going to lose credibility at some point while the user's reading.
So talk about the problem.
Cause if they resonate with you on the problem, they're actually going to give you more credibility.
And that's going to pull credibility and mind share away from other brands that are flooding their inbox and their LinkedIn with new features.
Chris @ Stargazy (36:09): Do you think that makes it so that the branding is more important in these responses? Or do you think that just doesn't really matter? And it's only with the background—
George (36:17): Let's pretend for a moment that when you submit your proposal, it's an AI reading it. Does the branding matter?
Chris @ Stargazy (36:22): No.
George (36:24): Does the template, the design matter?
Chris @ Stargazy (36:25): Well, maybe it depends on what you put in there that the AI can read. If it's an image...
George (36:30): Yeah, of course. It's got to be legible for a machine.
But does it matter if you're putting nice images and branding there?
It's that Matrix moment. Like, do you think a human is reading the 300-page proposal you just wrote?
What we're seeing, and we see this more and more in our tool: a lot of these RFPs being generated and sent to you to fill out are already AI slop. They're AI generated.
So the assumption should be that you're getting something made by an AI. It's probably going to be reviewed by an AI.
You can imagine like 50 of these things coming in at once to the customer.
Why wouldn't they automate the review process?
So to the proposal person, you're putting all this time and effort into making something beautiful.
But spend more time focusing on the problem that's trying to be solved and showing proficiency around that problem. You want to get high scores from a machine.
You don't necessarily want the machine to be like, wow, this is colorful.
Chris @ Stargazy (37:38): So I have two questions that come off of that.
When we're writing, and we know that procurement is using AI to review, what do you think we need to be doing to make sure the AI is getting the right material to evaluate us highly?
George (37:55): I think these systems have a similarity between them.
You can imagine 10 proposals that you and I submit tomorrow, 10 AI systems on the other end are going to be grading them.
There's going to be a similarity between the way they read, interpret, even among the different models.
Even the questions the customer is asking are similar: how well does this meet our requirements? What are some gaps? Where did they fail?
So I think in this era, that grading is going to be really distilled down to feature gaps.
I hate to say it, but feature gaps are going to make a comeback because it's no longer a human reading and getting a gut feeling.
Now it's an AI going, they don't do this. You told me to check if they do this and they don't.
So feature gaps are back and you're gonna have to do a better job massaging it.
If you've got 10 AIs grading this thing, they're gonna really just check what your gaps are.
They're not gonna grade you on vibes or branding.
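The kind of automated review George describes, an AI checking a proposal against a requirements list and flagging gaps, can be sketched as follows. The requirement names and keywords are made up for illustration, and a keyword heuristic stands in for the AI reviewer.

```python
def find_feature_gaps(requirements, proposal_text):
    """Return the requirements the proposal never mentions.

    requirements: dict mapping a requirement name to keywords that would
    satisfy it. A simple keyword check stands in for the AI reviewer here;
    the requirement names below are hypothetical.
    """
    text = proposal_text.lower()
    return [
        name
        for name, keywords in requirements.items()
        if not any(k in text for k in keywords)
    ]


requirements = {
    "SSO support": ["sso", "single sign-on", "saml"],
    "Data residency": ["data residency", "eu region"],
    "Audit logging": ["audit log"],
}
proposal = "We support SAML-based single sign-on and full audit logs."
gaps = find_feature_gaps(requirements, proposal)
```

A reviewer built this way grades on explicit coverage, not vibes, which is why any requirement the response never addresses surfaces immediately as a gap.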
Chris @ Stargazy (39:08): No, I think it's better to be honest.
A lot of us are still going by best practice from five years ago, which is not going to serve us going forward.
For tech specifically, if branding isn't important anymore, do you think anybody is seeing that? Or do you think it's still important when you get to the shortlist, like when they do check the executive summary and the pricing pages? What is your suggestion there?
George (39:37): You know me, I love branding. I'm a big proponent of it.
I got this guy with me everywhere I go.
I love branded content. Even if no one reads it, it's not for them, it's for you.
You feel good sending it. Your AE feels good sending an exec summary with your logo and your colors and your font.
If it's in Calibri, throw it in the trash.
Put the right font on there.
Even if a human's not reading that, you're not doing the branding for them. It's about consistency for yourself and for your company. That's the face of your company. And if you do win, somebody's going to read it.
So two pieces of advice:
One, respond in a manner that's legible for machines, not humans. Assume it's machines reading it.
But two, include branded elements in that response. It's not a plain text file you're sending. Give it to them in the format they ask for, add a spin to it, add videos if you can, add some images and screenshots.
Branded cover sheets are one thing, but I encourage people: go put videos in your responses.
We're adding image answers. That's something I'm really excited about. Put images in your responses.
There's a lot there that can improve it.
Colors and fonts go a long way.
Chris @ Stargazy (40:55): Then let's say, George, that people are interested in learning more about you, interested in learning more about 1up.
Maybe they just want to see some nice memes, some nice branding.
Where can they find you? Where can they learn more about you?
George (41:05): Yeah, don't go to our website. Don't look at our product.
Go to our LinkedIn and you will get a new meme every day and you will be happy. It will give you something to waste time on.
I don't need you signing up for our trial. If you want dank memes delivered for GTM teams, go follow 1up on LinkedIn right now.
We also have an Instagram, Crushing Quota. You're going to get a new meme every day. You're going to love it. And you're going to forget about the misery that is selling in 2025 or 26 or whatever it is that you do.
Chris @ Stargazy (41:38): I will link all those below, including your Instagram, but I would suggest for everybody to check out their website, check out their free trial because you're gonna know instantly if you like it or not, if it works for you or not.
But also you have a really good blog post that is just full of—
George, it was lovely to speak with you today and we'll talk soon.
George (41:56): Yes, I had a lot of fun. Thank you for this. And I hope you do this again. This is a lot of fun.
🎧 Listen to the full episode: https://www.youtube.com/watch?v=mfzfsjjxbgI
💬 Join the conversation: Share your take on qualification, AI review, and proposal automation in the stargazy community at https://stargazy.circle.so/feed
🌟 FIND ON STARGAZY 🌟
✸ Community: https://stargazy.circle.so/join?invitation_token=0856b517503bca21eecbae1d058313543675481b-28d54c10-c886-4708-8b3b-ffd62cd3c935
✸ Newsletter: https://the-stargazy-brief.beehiiv.com/subscribe
✸ Website: https://stargazy.io/
✸ LinkedIn: https://www.linkedin.com/company/stargazyproposals/
🌙 GUEST LINKS 🌙
✸ 1up Website: https://www.1up.ai/
✸ 1up on LinkedIn: https://www.linkedin.com/in/george-avetisov/
✸ George on LinkedIn: https://www.linkedin.com/in/george-avetisov/
✸ 1up’s Stargazy Page: https://stargazy.io/proposal-tech/1up