AI Is Not A Software Project

By Leo Rota, Sr. Program Director
Published December 9, 2025

Every week in service, manufacturing, and field operations, leaders say some version of the same thing:

  • “We need an AI strategy.”
  • “Which AI platform should we buy?”
  • “Can we bolt AI onto what we already have?”

The instinct is always the same. Treat AI like any other software decision. Pick a platform, plug it in, run a project plan, roll it out.

That mindset is exactly how you get an AI program that never makes it out of proof of concept.

AI is not a typical software problem, at least not primarily. It is a business problem, a data problem, and a trust problem that happens to use software.

Why Treating AI Like an App Sets You Up to Fail

Traditional software projects assume that requirements are clear and therefore the output is predictable. They follow this pattern:

  1. Define requirements
  2. Design screens and workflows
  3. Integrate systems
  4. Configure, test, deploy
  5. Treat bugs as defects with knowable fixes
  6. Train users to click the right things

You iterate these steps year after year as your business needs and strategy change.

An AI implementation is very different. AI is probabilistic and deals with uncertainty, not guarantees. AI works in a spectrum of outcomes, not in a binary state.

Roughly speaking:

  • Maybe 20% is software configuration

The other 80% is:

  • Choosing the right business problem
  • Defining measurable success
  • Ensuring you actually have the data
  • Designing the model and feedback loop
  • Building enough trust so people act on what the AI suggests

If you treat AI like a standard app build, you set the project up to fail before you even start.

Most AI programs that “fail” did not fail because the model was bad. They failed because the organization framed the problem like a software feature instead of an experiment that needed iteration, learning, change management, and governance.

Bottom line: you don’t “launch” AI, you operationalize it.

Pick a tractable problem, not a shiny one

I've discussed how to take a crawl, walk, run approach when getting started with an AI initiative. The crawl phase starts with a tractable use case, a term you’ll hear often in the AI world.

That just means:

  • There is a real, painful problem
  • You can measure it
  • You know it costs you time, money, or risk
  • You have, or can realistically get, the data needed
  • There is a clear “better” outcome if you improve it

A quick example from the oil and gas industry: some refinery pumps run for 15-20 years or more with almost zero failures. Could you build an AI model to predict issues with those pumps? Sure. Is that the best use of your time and budget? Probably not. Who wants an AI model that predicts a failure in 20 years?

There is no ROI in improving something that is already highly optimized. That is an example of the wrong problem.

Where it gets interesting is when you look at preventive or condition-based maintenance along with the interactions of the systems around those pumps:

  • Enabling proactive maintenance schedules during optimal production windows
  • Detecting subtle anomalies that signal a need for maintenance based on real-time conditions
  • Identifying upstream equipment that fails more often
  • Spotting downstream processes that create bottlenecks
  • Flagging conditions that contribute to rare but very costly events

This is where AI can surface patterns humans can’t easily see, resulting in cost savings, longer equipment lifespan, less unplanned downtime, and, most importantly, better safety and compliance when dealing with hazardous conditions.
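
One way to make “detecting subtle anomalies” concrete: a rolling z-score over a sensor stream flags readings that deviate sharply from the recent baseline. This is a minimal illustrative sketch with made-up vibration values, not a production monitoring approach:

```python
from collections import deque
from statistics import mean, stdev

def rolling_zscore_alerts(readings, window=20, threshold=3.0):
    """Flag readings more than `threshold` standard deviations
    from the mean of the previous `window` readings."""
    recent = deque(maxlen=window)
    alerts = []
    for i, value in enumerate(readings):
        if len(recent) == recent.maxlen:
            mu, sigma = mean(recent), stdev(recent)
            if sigma > 0 and abs(value - mu) / sigma > threshold:
                alerts.append((i, value))
        recent.append(value)
    return alerts

# Hypothetical vibration data: a stable baseline with one sudden spike.
baseline = [1.0 + 0.01 * (i % 5) for i in range(40)]
readings = baseline[:30] + [2.5] + baseline[30:]
print(rolling_zscore_alerts(readings))  # flags the spike at index 30
```

Real condition-based maintenance combines many signals and domain-specific thresholds; the point is that even the simplest statistical baseline turns raw readings into a reviewable alert.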

So the right first question is not “Which AI product should we buy?” The right question to start with is: “Where do we bleed money, time, or customer trust in ways that we can actually measure?”

Only then does it make sense to ask whether AI has teeth there.

Why AI Projects Fail at the “Cool Demo” Stage

There’s a familiar pattern that often occurs:

  1. A team builds a promising model.
  2. It works well in a lab or as a dashboard.
  3. Everyone says “This is very cool.”
  4. It never becomes part of daily operations.

Why? Usually one of three reasons:

  • The problem wasn’t tractable
  • You picked a reasonable problem, but you do not have the right data
  • The model worked, but no one trusted it enough to act on it

That last one is the silent killer.

The Trust Gap: AI Must Earn What Humans Already Have

In many plants there are already teams who behave like “manual AI”. They sit between all the sensors and the physical equipment. They watch:

  • Temperatures
  • Pressures
  • Vibration patterns
  • Flow rates

When something looks off, they pick up the phone and call the plant engineers.

“Hey, line X, pump A and B look hot. You might want to check that.”

The plant team responds:

“Yeah, we know, we are cleaning a tank so that is expected.”

or

“Good catch, we will run an inspection test and take a look.”

There’s a relationship and trust between those humans. The data team understands the process. The plant team respects their judgment.

There’s context.

There’s history.

There’s trust.

Now replace that phone call with a model output: “Anomaly detected in vibration pattern on pump B. Predicted failure in 4 weeks.”

If the engineers in the field do not trust the signal, they will ignore it. It does not matter how accurate the math is. 

The human in the loop will always be necessary (at least for now 😉) to evaluate and validate the AI prediction, which in turn builds trust.

If You’re Behind on AI, Here’s the Right Way to Start

If you are a service or operations leader who feels behind on AI, it can feel overwhelming. The good news is you do not need a moonshot.

Here is a practical way to begin.

1. Find one tractable use case

Pick a problem that:

  • Has real business impact
  • Is specific and measurable
  • Has a clear owner
  • Affects a process you deeply understand

Examples:

“Reduce unplanned downtime for this one class of equipment in one region”

“Cut truck rolls for remote water quality alarms by 15 percent”

“Improve first time fix on a single high value product line”

The best AI use cases are rarely glamorous. They’re boring yet profitable.

2. Match the AI to the Problem, Not the Other Way Around

One of the biggest mistakes teams make is assuming all AI is the same. Saying “I want to put AI in it” is like saying “I want transportation.” The same goes for “we want to use LLMs,” or a chatbot, or GenAI. Different problems require very different forms of AI. By the way, the same is true for transportation.

Here are some examples of how to pick the right AI type:

If the problem requires classifying, predicting, or scoring, then use traditional machine learning. This is best for:

  • Predictive Maintenance
  • Failure probability
  • Ticket / Issue / Task Classifications
  • Risk Scoring
  • Forecasting

These problems have structure and measurable outcomes.
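
To make the classify/score pattern concrete, here is a toy risk-scoring sketch. The nearest-centroid model, the (temperature, vibration) features, and the historical labels are all hypothetical; a real project would use an ML library and far more data:

```python
from math import dist

# Hypothetical history: (temperature, vibration) -> failed within 30 days?
history = [
    ((60.0, 1.0), False),
    ((62.0, 1.1), False),
    ((61.0, 0.9), False),
    ((85.0, 3.0), True),
    ((88.0, 2.8), True),
    ((84.0, 3.2), True),
]

def centroid(points):
    """Average each feature across a set of examples."""
    return tuple(sum(axis) / len(points) for axis in zip(*points))

healthy = centroid([x for x, failed in history if not failed])
failing = centroid([x for x, failed in history if failed])

def risk_score(features):
    """Score 0..1: closer to the failing centroid means higher risk."""
    d_h, d_f = dist(features, healthy), dist(features, failing)
    return d_h / (d_h + d_f)

print(round(risk_score((63.0, 1.2)), 2))  # near healthy readings -> low score
print(round(risk_score((86.0, 2.9)), 2))  # near failing readings -> high score
```

The output is a score with a measurable definition of “better,” which is exactly what makes this class of problem tractable.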

If the problem requires summarizing, generating, or reasoning over text, then use Large Language Models. This is best for:

  • Summaries of long reports
  • Drafting technician notes
  • Knowledge retrieval (“What procedure applies here?”)
  • Conversational assistants

LLMs are terrible at deterministic math but excellent at language and reasoning over unstructured information.
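
The knowledge-retrieval bullet can be sketched in a few lines: before asking an LLM “what procedure applies here?”, a retrieval step picks the most relevant document. This toy version uses word overlap; the procedure names and text are hypothetical, and real systems typically use embeddings and vector search:

```python
import re

# Hypothetical maintenance procedures a technician might ask about.
procedures = {
    "pump-seal-replacement": "Isolate the pump, relieve pressure, replace the mechanical seal.",
    "vibration-inspection": "Mount the portable sensor, record vibration, compare against baseline.",
    "tank-cleaning": "Drain the tank, ventilate, confirm gas levels before entry.",
}

def tokens(text):
    """Lowercase word set with punctuation stripped."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(question, docs):
    """Return the name of the doc sharing the most words with the question."""
    q = tokens(question)
    return max(docs, key=lambda name: len(q & tokens(docs[name])))

best = retrieve("How do I record vibration readings on this line?", procedures)
print(best)  # -> vibration-inspection
```

The retrieved text would then be handed to the LLM as context, so the model reasons over your procedures instead of guessing.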

If the problem requires taking actions across systems, then use AI agents or orchestration models. This is best for:

  • Looking up data
  • Calling APIs
  • Triggering workflows
  • Following step-by-step logic

This scenario means you’re building an AI Agent. Agents are used in operations where the model must do more than answer; it must act.
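
A minimal sketch of that act-not-just-answer loop, with stubbed tools standing in for real data lookups and API calls (the asset IDs, thresholds, and tool functions are all hypothetical):

```python
def lookup_sensor_reading(asset_id):
    # Stand-in for a query against a historian or IoT platform.
    return {"pump-B": 3.4}.get(asset_id)

def create_work_order(asset_id, reason):
    # Stand-in for an API call into a system like ServiceNow.
    return {"asset": asset_id, "reason": reason, "status": "open"}

def triage_asset(asset_id, vibration_limit=3.0):
    """Follow a fixed plan: read data, decide, act, report."""
    actions = []
    reading = lookup_sensor_reading(asset_id)
    actions.append(f"read vibration={reading}")
    if reading is not None and reading > vibration_limit:
        order = create_work_order(asset_id, f"vibration {reading} above limit")
        actions.append(f"opened work order: {order['status']}")
    else:
        actions.append("no action needed")
    return actions

print(triage_asset("pump-B"))
```

A production agent would add an LLM to choose which tool to call, plus guardrails and audit logging, but the shape is the same: observe, decide, act.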

Note that AI capabilities are evolving at an exponential pace; these categories will eventually merge as multimodal AI becomes available.

3. Inventory the data you actually have

Ask simple, honest questions:

  • What sensors exist on these assets today?
  • How often do they send data?
  • Where does it land?
  • Can we link events to work orders, parts, or outcomes in ServiceNow or our CRM?

You may discover you first need to:

  • Add or upgrade IoT devices
  • Stream data into a lake or platform
  • Clean up core master data

That is not wasted work. It is the foundation.

4. Design an experiment, not a grand program

Frame the first AI initiative as:

  • A focused experiment with clear success criteria
  • A demonstration of value for a narrow slice of the business
  • A learning exercise where pivoting is expected

Build the project around:

  • Replaying historical data to validate the model
  • Involving the people who will validate and use the output
  • Iterating on the model and thresholds based on feedback
  • Clear success criteria (for example, reduce false alarms by 20 percent in 90 days)

AI rewards teams who experiment fast, not those who plan perfectly.

5. Invest early in trust and adoption

Adoption is not the last step of AI. It is the multiplier.

Build change management into the core of the project from the start:

  • Identify champions in the field and in the plant
  • Give them early access and ask for unfiltered feedback
  • Set up an AI help desk inside the business
  • Share early wins with context, not just dashboards (explainability)

You are not just deploying a model. You are onboarding a new kind of teammate to work alongside you.

The future is experimental, not magical

We are still early in this journey. It feels a lot like the early internet days. Everyone knows this is going to change everything, but it is hard to picture exactly how. We will have a series of ups and downs as we come out of the hype cycles.

The temptation is to look for the perfect AI product.

In reality, success looks more like:

  • A series of small, well chosen experiments
  • A growing foundation of IoT and operational data
  • A platform like ServiceNow that can act on signals within existing workflows
  • A culture willing to learn and adapt to their new AI teammates

AI doesn’t reward the companies with the biggest budgets. It rewards the companies that are willing to experiment the fastest.

You do not have to solve everything at once. You just have to pick the right first problem and start.

If you want help exploring tractable AI use cases in IoT, predictive maintenance, or field service scenarios, and how to act on them in your CRM, the team at Bolt Data lives in this world every day. We are happy to walk through your ideas, your data reality, and what a tractable first step could look like.