Litmus at ProveIt! Conference 2026. Where Industrial AI Starts

I sat through a Litmus session at the ProveIt! Conference in Dallas and left with one thought in my head. Most conversations about AI in manufacturing are backwards.

People argue about models, copilots, vision, GenAI, and all that. Meanwhile, the real reason AI projects stall is way less exciting. The data is a mess. It is inconsistent. It is not contextual. It is not governed. So the AI team spends months cleaning data instead of building value. Then everyone wonders why the AI did not deliver.

Litmus leaned into that reality. Their pitch was simple. They are not trying to be your AI. They are trying to be the industrial data foundation that makes AI possible.

And that message landed.

AI does not fail because AI is weak. AI fails because the data architecture is weak.

That is uncomfortable, because it means you cannot pilot your way out of it. You eventually have to do the boring work. Standardize connectivity. Define context. Put governance around the namespace and the model. Make it repeatable across sites.

I have seen this firsthand. You can have great data scientists and strong use cases. But if Plant A calls it “Line1.Temp” and Plant B calls it “L01_Temperature,” you are not building an enterprise capability. You are building a science project that dies quietly after the demo.
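
To make that concrete, here is a minimal sketch in Python of the translation layer teams end up maintaining when naming is never standardized. The mapping and the canonical names are hypothetical, for illustration only:

```python
# A minimal sketch of the problem: the same physical signal hiding
# behind plant-specific tag names. All names here are hypothetical.
CANONICAL_MAP = {
    "plant_a": {"Line1.Temp": "plant_a/packaging/line_01/filler/temperature"},
    "plant_b": {"L01_Temperature": "plant_b/packaging/line_01/filler/temperature"},
}

def canonical_name(plant: str, raw_tag: str) -> str:
    """Translate a plant-local tag name into the shared enterprise model."""
    try:
        return CANONICAL_MAP[plant][raw_tag]
    except KeyError:
        raise ValueError(f"Unmapped tag {raw_tag!r} at site {plant!r}") from None

print(canonical_name("plant_a", "Line1.Temp"))
print(canonical_name("plant_b", "L01_Temperature"))
```

Every new site, every new tag, every renamed PLC program grows that map. That is the science project part.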

What They Actually Showed

The storyline was not just connectivity. It was repeatability.

They walked through a maturity ladder.

  1. Connectivity.
  2. Edge analytics.
  3. Contextualization and digital twins.
  4. Containerized workloads at the edge.
  5. AI hooks.
  6. Enterprise orchestration.

The hard part is not reading tags from a PLC. The hard part is doing it 50 times, across 50 sites, with the same model, the same security posture, and the same naming rules.

They claimed that effort drops significantly after the first site. The first plant is hard. The second is less hard. After that, replication accelerates. The exact numbers always depend. But the pattern is real if you invest early in templates, standards, and governance.

Edge Layer

At the edge, the focus was on connecting to PLCs and systems, normalizing and enriching tags, running lightweight analytics close to machines, and keeping some local time-series data.

They mentioned local data retention. That is useful for troubleshooting and short-loop analysis. It is not a historian replacement. That distinction matters. Having time-series storage at the edge is not the same as meeting enterprise retention, audit, and compliance requirements.
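
As a rough sketch of what edge-local analytics with short retention can look like, assuming a fixed-size rolling window. The window size and threshold below are my own illustrative choices, not anything shown in the session:

```python
from collections import deque
from statistics import mean

# Short-loop local history for troubleshooting, not compliance storage.
# Window size and threshold are illustrative assumptions.
window = deque(maxlen=500)

def on_sample(value: float, high_limit: float = 85.0) -> bool:
    """Keep a short local history and flag when the rolling mean drifts high."""
    window.append(value)
    return mean(window) > high_limit

for reading in (82.0, 84.5, 87.2, 88.9):
    if on_sample(reading):
        print(f"rolling mean high after reading {reading}")
```
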

Enterprise Management Layer

Then comes centralized management. Templates. Bulk deployment. Policy enforcement. The ability to push consistent configurations across multiple plants.

This is where scale either happens or dies.

If every site configures its own tags, its own MQTT topics, and its own edge logic, you do not have a platform. You have 20 variations of the same problem.
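
The template idea fits in a few lines. A minimal sketch, assuming the central layer stamps out per-site configurations from one source of truth. The field names are hypothetical, not a Litmus schema:

```python
# One central template, rendered per site. Hypothetical fields.
SITE_TEMPLATE = {
    "mqtt_topic": "{site}/{area}/{line}/{machine}/{signal}",
    "sample_rate_ms": 1000,
    "tls_required": True,
}

def render_config(site: str, area: str, line: str, machine: str, signal: str) -> dict:
    """Produce one consistent site config instead of letting each plant improvise."""
    cfg = dict(SITE_TEMPLATE)
    cfg["mqtt_topic"] = cfg["mqtt_topic"].format(
        site=site, area=area, line=line, machine=machine, signal=signal
    )
    return cfg

print(render_config("dallas", "packaging", "line_01", "filler", "temperature"))
```

The point is not the code. The point is that the template lives in one place and plants consume it, not the other way around.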

Governed MQTT and UNS

They also showed governance around MQTT and Unified Namespace validation. Validated entities turn green. Unvalidated ones stay orange.

It sounds simple. It is powerful.
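
To show how little machinery the check itself needs, here is a minimal sketch of topic validation against a Site/Area/Line/Machine/Tag convention. The regex and the green/orange labels are my own illustration, not how Litmus implements it:

```python
import re

# Five lowercase segments separated by slashes: Site/Area/Line/Machine/Tag.
# The convention itself is the decision that matters; the regex is trivial.
UNS_SEGMENT = r"[a-z0-9_]+"
UNS_PATTERN = re.compile(rf"^{UNS_SEGMENT}(/{UNS_SEGMENT}){{4}}$")

def validate_topic(topic: str) -> str:
    """Return 'green' for topics that follow the standard, 'orange' otherwise."""
    return "green" if UNS_PATTERN.fullmatch(topic) else "orange"

print(validate_topic("dallas/packaging/line_01/filler/temperature"))  # green
print(validate_topic("Plant A/Line1.Temp"))                           # orange
```
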

Most teams do not fail because they lack technology. They fail because nobody wants to be the data standards person. Governance is not glamorous. It slows people down at first. It creates friction.

But without enforcement, bad data spreads quietly. Then scaling becomes a permanent cleanup exercise.

If you are serious about multi-site rollout, governance must be clear enough to follow, strict enough to protect standards, and practical enough that plants do not work around it.

Otherwise you do not have a platform. You have a shared dumpster.

UNS Context vs AI-Ready Context

One concept I liked was the distinction between UNS-style context and AI-ready context.

UNS context lives in the topic structure. Something like:

Site/Area/Line/Machine/Tag

That is great for pub-sub plumbing and human readability.

But AI-ready context means embedding context directly inside the payload as structured JSON. The payload becomes self-describing. Fields are explicit. Meaning is not hidden inside a string.

This is not a small detail.

Most downstream analytics and AI workflows perform better when they receive clean, enriched JSON. Not when they have to reverse-engineer meaning from topic strings and inconsistent tag names.

If context lives only in topics, every consumer ends up rebuilding the same parsing logic. That is not scale. That is repetition disguised as architecture.
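
Here is that contrast in miniature. The enriched payload fields are assumptions for illustration, not a schema anyone presented:

```python
import json

# Topic-only context: the consumer must reverse-engineer meaning
# from the string and whatever naming habits the site has.
topic = "dallas/packaging/line_01/filler/temperature"
bare_payload = "72.4"

# AI-ready context: the payload describes itself. Hypothetical fields.
rich_payload = json.dumps({
    "site": "dallas",
    "area": "packaging",
    "line": "line_01",
    "machine": "filler",
    "signal": "temperature",
    "value": 72.4,
    "unit": "degC",
    "timestamp": "2026-01-15T14:02:31Z",
    "quality": "good",
})

print(rich_payload)
```

A consumer of the second payload needs no knowledge of topic conventions at all. That is what makes it portable across sites and tools.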

AI at the Edge. Useful, Not Magic

They demoed AI being used inside the architecture. Not as the architecture.

Examples included image-to-JSON extraction, troubleshooting network captures, and generating simulations for testing.

That is the right positioning.

AI as a tool for engineers. Not AI as a replacement for engineering.

In plants, AI is usually a wrench you use inside a structured system. It is not the system itself. When vendors promise AI will run your factory, trust evaporates fast. When they show AI supporting operators and engineers in specific workflows, adoption becomes realistic.

Litmus kept reinforcing that they are not an AI company. They are trying to make structured, contextualized industrial data available so AI has something solid to work with.

That framing is healthy.

My Takeaway

If I had to translate that session into practical guidance for a real program, it would look like this.

  1. Decide your standard models and naming conventions early.
  2. Make governance real, not just a slide.
  3. Optimize for replication across sites, not for the first demo.
  4. Treat AI as an add-on tool. Not a substitute for architecture.

And here is one unpopular opinion.

Most companies do not have an AI problem. They have a decision problem.

They do not want to choose a standard, because choosing means saying no to someone. So every site keeps its own version of truth. Every plant defends its naming logic. Every project negotiates exceptions.

Then leadership wonders why enterprise AI feels slow and expensive.

The unsexy truth is this.

If you want AI in manufacturing, you probably need to fund data plumbing first. Not as a science project. Not forever. But as a real product mindset. Standard models. Repeatable deployment. Governance people actually follow.

It is not flashy.

It is also the only path I have seen actually scale.


This post reflects my personal opinions from sessions I attended at the ProveIt! Conference 2026.
I am not affiliated with the companies mentioned, and this content is not sponsored. Company names and trademarks belong to their respective owners.
Company referenced: litmus.io
