How I use AI as a Sales Engineer in 2025
It’s 2025, and I am a reluctant AI user. I have long believed my edge as a sales engineer comes from being a genuine human expert in DevOps and being completely accessible to my customers. I don’t rely on gimmicks or cut corners to win: I know my domain deeply, and I share that knowledge openly with my prospects. When they are successful, I am successful. For that reason I value study and thoughtfulness, which are not qualities I personally associate with AI.
But I’ve decided that’s no longer enough, and that maybe I’m carrying unfounded prejudice. Despite my aesthetic preference for “carbon intelligence”, I must begrudgingly find more ways to use AI or risk becoming an unemployed grump. As a first step, I decided to catalog how I’m currently using AI, and I was surprised to find it has already worked its way into many aspects of my daily work.
Demos
I don’t really use AI here. I can do these in my sleep. Years ago I built out my example projects, and they continue to serve me just fine. What more could I do? There is probably a case to be made for using AI to build customized demo scenarios that more closely mirror each customer’s environment. I’m not sure it’s worth it. My buyers are smart people who can relate a general-purpose example to their own situation.
POC Support
Sometimes, in order to get a tech win on a POC, a customer will want me to write a custom integration for them. For example, years ago I wrote a script that uploaded an app executable into Microsoft Intune. I learned PowerShell for it (it turns out I like PowerShell!), and it probably took me two to three days.
I’ve started shaving down the time investment for these situations with chat-based LLMs. I have the LLM write code snippets for me (e.g., I never remember how to write an if-else or a for loop in bash). This is mostly boilerplate, so the LLM has seen more than enough training data to handle it.
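To give a sense of scale, this is the level of snippet I mean; nothing clever, just syntax I would otherwise have to look up (the paths here are made up):

```bash
# Loop over build artifacts and branch on whether any exist.
for f in ./artifacts/*.ipa; do
  if [ -e "$f" ]; then
    echo "found artifact: $f"
  else
    echo "no artifacts to upload"
  fi
done
```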
So I’ve concluded that greenfield work, when small in scope and simple, is a sweet spot for tools like Claude CLI. In a matter of moments, Claude could write the same PowerShell script that took me days, and do so with remarkable quality.
I recommend approaching agent-assisted coding just like normal software development: iterate bit by bit. To avoid dead ends and unmaintainable code, do not ask Claude (or whatever tool you use) for the finished product up front. If you tell it “write me a Flutter app for fantasy football,” you’ll get spaghetti. Instead, prompt incrementally: “scaffold the project”, commit, “build this module”, commit, “add tests”, commit, “build that module”, commit, and so on until you have the product you want.
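To make that rhythm concrete, here is a rough sketch of what the loop looks like from a terminal. The prompts are illustrative, and I’m assuming the claude CLI’s non-interactive -p mode; the same cadence works just as well in an interactive session.

```bash
# One small prompt, review the diff, commit, repeat.
claude -p "Scaffold an empty Flutter project with a standard lint config"
git add -A && git commit -m "Scaffold project"

claude -p "Add a roster screen that lists players from a hard-coded JSON file"
git add -A && git commit -m "Add roster screen"

claude -p "Write widget tests for the roster screen"
git add -A && git commit -m "Add roster screen tests"
# ...one module at a time, until the app does what you want.
```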
Customer Support
A managed CI service is basically just generic compute sold by the minute. The difference is the customer’s expectation that the person on the other side of the Slack window understands the customer’s build tools and scripts, even when they are bespoke. My software development career gave me breadth across multiple domains, so I pride myself on my ability to jump into any type of project and figure things out. But I will never match the breadth of an LLM.
For example, I have long forgotten the many ways that CocoaPods can fail to install dependencies, but LLMs haven’t. Often, pasting failure logs directly into the LLM is enough to produce the solution. Only when that fails will I invest the time to re-educate myself on whichever obscure tool is failing. Because LLMs save me from this in most cases, I am able to support more customers while simultaneously speeding up resolution time. I sometimes do this live, on calls.
By the way, we’ve experimented with putting failure-log recommendations directly in the product, but the results are taking some iteration to get right.
I have mixed feelings about this, because it often feels like I’m turning my brain off.
In other cases, the build failure comes with a wall of logs (sometimes tens of megabytes), and the best clue is hidden somewhere in the middle. An LLM with a large context window is useful here: I can paste in long passages where I think something relevant will be found, and the LLM will indeed find it. It’s all the more helpful when Cmd-F for “error” or “warning” turns up 12,833 results. It’s probably important to note that my employer uses a commercial license that prohibits training on chat logs.
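When a log is too big to paste whole, a quick slice is usually enough. A minimal sketch of how I might carve one out (the file name and line numbers are made up):

```bash
# How noisy is this log, really?
grep -c -i -E 'error|warning' build.log

# Pull just the passage I suspect matters and paste that into the LLM.
sed -n '41000,47000p' build.log > slice.log
```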
Prospecting Support
Bitrise is a rather technical product in a niche space. For example, I have been encouraging our marketing and sales teams to focus on selling to Kotlin Multiplatform users^. It’s not easy for non-technical people to correctly explain, in marketing assets or prospecting messages, why Bitrise is great for KMP. But when I feed the LLM the technical concepts, it can convert them into customer messaging that is fit for purpose.
Questionnaires (Infosec, RFPs, RFIs)
At Bitrise, we use a commercial tool called Skypher, which essentially provides a database of “approved” responses to questionnaires. This is a huge time saver. Feed it an XLS file with questions and you’ll get one back with answers filled in. It also exposes a Slackbot for more back and forth prompting (kind of a natural language search interface for acceptable external statements). A questionnaire that would have taken me four hours might now take one or less.
“Garbage in, garbage out” applies here. Tools like this require continuous upkeep to ensure the “approved” responses remain accurate and to add new information as it evolves. Otherwise you will give incorrect responses to sensitive questions. It’s also important to note that in many contexts, questionnaires are not box-checking exercises; they are opportunities to sell and to challenge the prospect. No AI will think critically for you (yet) about how to use the questionnaire to box out your competition or reframe the customer’s perspective.
Turns out I’m not an AI skeptic after all
I guess I’ve already integrated AI quite deeply into my workflows! But I know I have more work to do. A lot of what I’ve mentioned above is garden-variety stuff: primarily productivity boosts, not value-add. But as with my KMP example at Bitrise, it’s the niche overlaps that are transformational. Keeping an eye out for these possibilities by staying educated on the newest developments is my next step.
^Bitrise is perfect for KMP because it is uniquely positioned to improve build performance. KMP’s linker phase is painfully slow for iOS apps. One recent customer moved to Bitrise because they were hitting the GitHub Actions maximum timeout before their app would finish compiling. For now, the only way to speed up the linker is faster hardware. On top of that, KMP uses Gradle as its build tool, and Gradle supports build caching to speed up build and test execution. Bitrise is the only CI that provides both the most modern (fastest) Mac hardware AND a managed Gradle build cache. That combination makes a huge difference to a very specific niche.
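For the curious, turning on Gradle’s build cache in a CI step is a small change. This is a generic sketch using standard Gradle options; the Bitrise-managed remote cache has its own configuration, which I’m not reproducing here.

```bash
# Either the property or the command-line flag enables Gradle's build cache;
# both are shown here for clarity.
echo "org.gradle.caching=true" >> gradle.properties
./gradlew build --build-cache
```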