For a long time, Kepware has been the go-to tool for connecting machines and pulling data from PLCs, DCS, and all sorts of shop-floor equipment. I’ve used it at dozens of sites — from auto parts to steel mills, pharma, and food plants. It’s reliable, familiar, and, for many years, it was “enough.” But as manufacturing has changed — more data, more sites, more cloud, more analytics, more everything — I’ve seen the cracks appear. This is the story of why I (and many teams I’ve worked with) had to rethink the edge connectivity stack, what actually happened when Kepware wasn’t enough, and what’s worked better in practice.
Where Kepware Shines — and Where It Hits a Wall
Kepware is great at what it was built for: translating industrial protocols (OPC, Modbus, Ethernet/IP, and so on) into something IT systems can use. It’s stable, supports a ton of drivers, and you can get it running quickly for a single plant or pilot project. For years, I recommended it as the default “connectivity glue” for MES and historian integrations.
But the game has changed. Here are some of the real issues I’ve hit, time and again:
- Throughput and Scalability: Modern projects aren’t about streaming 500 tags at 5-second intervals. I’ve had requirements like 10k+ tags every second, or 100k+ tags every 1 minute, across multiple lines or sites. Kepware’s performance starts to choke at these scales, especially when you try to push everything through a single node or use REST APIs for ingestion. We saw data loss and bottlenecks that just couldn’t be tuned away.
- Store-and-Forward Weakness: If you lose cloud or network connectivity, you need edge buffering that’s bulletproof. Kepware’s local buffering is basic and not designed for long outages, especially if you want to keep data contextualized and in order for later replay.
- Cloud-Native Integration: Getting data to modern cloud platforms (AWS SiteWise, Azure, Snowflake, etc.) often means ugly workarounds — using REST APIs, custom scripts, or extra middleware. This adds complexity, and performance takes a hit.
- OPC UA and Proprietary Protocols: Kepware’s OPC UA server is solid, but not all vendors’ implementations are equal. Some PLCs and OEM equipment use proprietary or poorly documented protocols. We often needed additional gateways or custom drivers, and bi-directional OPC UA “methods” rarely worked as advertised.
- Security and Compliance: Meeting strict cybersecurity and GxP requirements (especially in food and pharma) means you need granular access control, audit trails, and encrypted transport everywhere. Kepware can be locked down, but it’s not built for zero-trust, multi-tenant, or containerized deployments.
- Licensing and Management at Scale: Rolling out 20+ Kepware servers across global sites means lots of license management, patching, and manual configuration. Centralized management is possible, but it’s clunky compared to modern containerized solutions.
Real-World Example: When Kepware Wasn’t Enough
At one large manufacturing site, we needed to onboard 100k+ tags from dozens of production lines, with relatively low latency, and stream everything to both a cloud data lake and a local MES for analytics and compliance. We started with Kepware, but hit a wall:
- Performance: Data loss began at around 1k tags/second. REST API ingestion couldn’t keep up. We tried patching and tuning, but the architecture just wasn’t built for this scale.
- Downtime and Buffering: Any network hiccup meant lost data — not acceptable for compliance or batch traceability.
- Cloud Integration: Building and maintaining custom connectors for each cloud service was a maintenance nightmare.
What Actually Works Better (And Why)
Modular, Containerized Edge Gateways
Modern edge stacks are modular and container-friendly. I’ve had the best results with a combination of tools that:
- Handle protocol conversion and local buffering, and run on anything (Windows, Linux, IPCs, VMs, containers), with native support for MQTT/Sparkplug B plus scripting and data modeling. Deploying multiple instances per site for resilience and scale is easy.
- Add contextualization, asset modeling, and data transformation at the edge, before streaming to the cloud or a UNS (Unified Namespace).
- Use a publish/subscribe model for real-time, scalable, decoupled data flows that work across sites, clouds, and consumers. Store-and-forward is built in, and you can replay missed data after outages.
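To make the store-and-forward behavior concrete, here is a minimal Python sketch. The class and method names are illustrative, the queue lives in memory, and the `publish_fn` stands in for a real MQTT client (e.g. paho-mqtt); a production gateway would persist the queue to disk so data survives a restart.

```python
import time
from collections import deque

class StoreAndForwardBuffer:
    """Sketch of edge store-and-forward: queue messages while the broker
    is unreachable, then replay them in original order on reconnect."""

    def __init__(self, publish_fn, max_size=100_000):
        self._publish = publish_fn        # stand-in for an MQTT client's publish()
        self._queue = deque(maxlen=max_size)
        self.connected = False

    def send(self, topic, payload):
        # Timestamp at acquisition so ordering and context survive an outage.
        msg = (time.time(), topic, payload)
        if self.connected:
            self._publish(topic, payload)
        else:
            self._queue.append(msg)

    def on_reconnect(self):
        self.connected = True
        # Replay oldest-first so downstream consumers see data in order.
        while self._queue:
            _, topic, payload = self._queue.popleft()
            self._publish(topic, payload)

sent = []
buf = StoreAndForwardBuffer(lambda topic, payload: sent.append((topic, payload)))
buf.send("acme/plant1/line3/temp", 21.5)  # broker down: message is buffered
buf.on_reconnect()                         # buffered message is replayed in order
```

Note the bounded `maxlen`: in a long outage you have to decide whether to drop the oldest or the newest data, and for batch traceability that choice needs to be deliberate, not an accident of the buffer implementation.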
Unified Namespace (UNS)
Instead of point-to-point integrations, UNS creates a single, structured source of truth for all shop-floor data. I’ve standardized on ISA-95 models, using MQTT topics with Sparkplug B payloads. This makes onboarding new equipment, analytics, or apps as simple as subscribing to the right topic. No more “where’s the real data?” headaches — everyone gets the same version of the truth.
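A small sketch of what an ISA-95-style topic path can look like in practice. The six hierarchy levels here are one common choice, not a fixed standard; adapt them to your own equipment model.

```python
def uns_topic(enterprise, site, area, line, cell, tag):
    """Build an ISA-95-style UNS topic path such as
    enterprise/site/area/line/cell/tag. Levels are illustrative."""
    parts = [enterprise, site, area, line, cell, tag]
    # MQTT uses '/' as the level separator, so levels must not contain it.
    if any(not p or "/" in p for p in parts):
        raise ValueError("levels must be non-empty and must not contain '/'")
    return "/".join(parts)

topic = uns_topic("acme", "plant1", "packaging", "line3", "filler", "temperature")
# -> "acme/plant1/packaging/line3/filler/temperature"
# An analytics app can then subscribe to "acme/plant1/packaging/#"
# to receive everything under the packaging area.
```

This is what makes onboarding "as simple as subscribing to the right topic": the hierarchy is agreed on once, and every producer and consumer navigates it the same way.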
Cloud-Native and Edge-First
I recommend a hybrid model: edge gateways for local acquisition and buffering, MQTT for streaming, and a cloud data platform as the “historian/analytics” layer. This lets you scale globally, maintain compliance, and keep local operations running if the cloud is down.
Lessons Learned and Best Practices
- Start with the Use Case, Not the Tool: Kepware is still great for legacy PLCs or small projects. But if you need high throughput, cloud integration, or standardized data models, plan for something more modern.
- Buffer Locally, Stream Globally: Always have robust edge buffering (store-and-forward), especially for regulated industries. Don’t rely on “just in time” data flows.
- Standardize Data Models: Use UNS and ISA-95 wherever possible. It pays off in analytics, troubleshooting, and onboarding new sites or apps.
- Prefer Containerized Deployments: Containers (Docker, Podman, etc.) make it easy to deploy, patch, and scale gateways — especially across multiple sites.
- Security and Compliance First: Go beyond “firewall and antivirus.” Use encrypted protocols (TLS everywhere), role-based access, and audit logs. Involve your cybersecurity team early.
- Automate Device Onboarding: Use spreadsheet-driven definitions or JSON configs for new equipment. This minimizes errors and makes scaling easier.
- Keep Legacy Tools for Legacy Needs: Sometimes only Kepware (or even older OPC DA/HDA servers) can talk to certain equipment. Use them for those cases, but don’t make them the backbone of your future architecture.
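For the onboarding point above, here is one way to sketch JSON-driven device definitions in Python. The schema (name/protocol/address/tags) is invented for illustration; in practice you would match whatever your gateway expects to import.

```python
import json

def load_device_configs(json_text):
    """Parse a JSON device list into flat, validated tag definitions.
    Schema is a hypothetical example, not a specific product's format."""
    required = {"name", "protocol", "address"}
    devices = json.loads(json_text)
    tag_defs = []
    for dev in devices:
        missing = required - dev.keys()
        if missing:
            raise ValueError(f"device {dev.get('name', '?')} missing {sorted(missing)}")
        for tag in dev.get("tags", []):
            tag_defs.append({
                "device": dev["name"],
                "tag": tag["id"],
                "source": f"{dev['protocol']}://{dev['address']}/{tag['id']}",
                # Default poll interval unless the tag overrides it.
                "interval_ms": tag.get("interval_ms", 1000),
            })
    return tag_defs

sample = '''[{"name": "plc1", "protocol": "opcua", "address": "10.0.0.5:4840",
              "tags": [{"id": "temp"}, {"id": "pressure", "interval_ms": 500}]}]'''
defs = load_device_configs(sample)  # two tag definitions, validated up front
```

Validating up front (instead of failing at runtime on the gateway) is the whole point: a bad row in the spreadsheet or JSON gets caught before it ever touches a production line.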
Honest Opinion
Here’s the unpopular truth: If you’re still using Kepware as the main edge gateway for a modern, multi-site, cloud-integrated manufacturing network, you’re setting yourself up for pain. It’s not about the tool being “bad” — it’s just not built for the scale, speed, and flexibility that today’s smart manufacturing needs. The shift to MQTT, Sparkplug B, UNS, and containerized gateways isn’t just “the next buzzword” — it’s the only way I’ve seen teams actually solve the problems of scale, reliability, and future-proofing. Kepware still has its place, but it’s a supporting actor, not the star.
Final Thoughts
Every plant is different, and there’s no silver bullet. But if you’re hitting performance walls, struggling with cloud integration, or drowning in point-to-point spaghetti, it’s time to rethink the stack. Start small, pilot the new approach, and scale out as you build confidence. And if you’re not sure where to start, reach out — I’ve made enough mistakes for both of us.