Coordinate convergence and calm complexity

HighByte uses AI and AWS to help glassmaker streamline data infrastructure
April 15, 2026
8 min read

Key Highlights

  • Digital tools can analyze large data volumes to identify actionable insights.
  • IT/OT convergence and contextualized data platforms enable real-time decision-making and AI-driven automation in manufacturing.
  • Vivix Vidros implemented a scalable data architecture with AWS and HighByte.

Beyond its obvious flexibility, one of digitalization’s main benefits is that its software tools can sort through huge volumes of data, and find useful nuggets and threads. These details can simplify complex process applications, and enable improvements that might have seemed unapproachable before, including artificial intelligence (AI). But why stop there?

For instance, HighByte approaches digitalization, virtualization and AI by focusing on IT/OT convergence, on how data is used in processes and plants, and on contextualizing that data, so outside users and AI can take advantage of it, according to Aron Semle, CTO at HighByte. “If a facility has AGVs running all around, its users need an app to detect when they’re stuck or stalled, so they can free or fix them more quickly,” he says. “This means identifying relevant data, shipping it back to the plant, and moving it up and down to users as needed.”

Semle states that developers and users are realizing that AI can accelerate these efforts, and consume and integrate data from outside the OT sphere. “Over the next five years, the place where AI can really add value is addressing workforce issues,” explains Semle. “It will let operations personnel ask questions about unusual machine conditions, and get answers and confirmations based on previously archived information and other tribal knowledge. This makes AI perfect for ‘if you see this, then do this’ situations.”

Glassmaker builds data foundation

For example, Vivix Vidros Planos produces about 900 tons of float glass per day at its 90,000-square-meter facility in Goiana, Pernambuco, Brazil, and has a unique position as the only Brazilian glass manufacturer with its own solar plant that supplies power for production, as well as its own raw material processing plant.

In conjunction with its “mine-to-line” policy for ensuring consistent, high-quality production, Vivix established its Industrial Transformation department to cut costs, increase its industrial teams’ productivity, and improve resilience to global changes by becoming more data-driven. The department’s committee documented initial use cases, including digitalizing engineering checklists, real-time temperature regulation and asset maintenance, and began evaluating the people, processes and technology required to achieve its digital transformation goals. Its three main goals were to:

  1. Merge, normalize, standardize and contextualize operations data to better predict, schedule and complete asset maintenance;
  2. Overcome challenges related to scalability, data integration and data product development within the company’s Mendix application-development platform; and
  3. Balance cost efficiency and performance, while restructuring Vivix’s data architecture for future growth.

To scale its projects, the glassmaker’s committee focused on building a solid foundation for its data architecture and process information. Vivix enlisted Amazon Web Services (AWS) and HighByte in 2022 to build its data foundation using an industrial data fabric design. Their approach included:

  • Deploying HighByte’s Intelligence Hub software in the corporate data center to curate, orchestrate and model data from OPC-networked servers, SQL servers and other industrial sources at the edge, before publishing payloads into the AWS ecosystem (a sketch of this pattern follows the list).
  • Building a scalable, industrial data fabric using Intelligence Hub as the data-ops layer and Amazon S3 as the centralized cloud data store.
  • Hosting Intelligence Hub in AWS’ cloud to streamline integrations and transformations between various cloud-based services, including Amazon Bedrock, Snowflake and application-development platforms like Mendix (Figure 1).
  • Using Amazon Bedrock and Mendix to build generative AI (genAI) agents for maintenance teams.
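
To make the edge-to-cloud handoff concrete, here’s a minimal sketch of the last hop in that pipeline, assuming boto3 and hypothetical bucket, site and tag names. Intelligence Hub performs this publishing natively; the sketch only illustrates the shape of a contextualized payload landing in the centralized S3 store.

```python
import json
from datetime import datetime, timezone

import boto3  # AWS SDK for Python

# Hypothetical example of the pattern described above: a payload that has
# already been modeled and contextualized at the edge is published into a
# centralized S3 data store.
payload = {
    "asset": "float-line-1/annealing-lehr",  # plain-English asset path
    "site": "goiana-pe",
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "values": {"zone3_temp_c": 512.4, "belt_speed_m_min": 14.2},
}

s3 = boto3.client("s3")
s3.put_object(
    Bucket="vivix-industrial-data",  # hypothetical bucket name
    Key=f"raw/annealing-lehr/{payload['timestamp']}.json",
    Body=json.dumps(payload).encode("utf-8"),
)
```

In this design, downstream services such as Amazon Bedrock and Snowflake can consume the same objects from S3 without any further plant-side plumbing.
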
“Intelligence Hub provides an agile solution for industrial data integration. It’s fundamentally changed how we architect and use data in our plant,” says Aristóteles Terceiro Neto, industrial transformation manager at Vivix. “We built a digital infrastructure with Intelligence Hub that lets us improve operational efficiency, and make data-driven decisions for consistent production of high-quality glass. The low-code approach is key.”

Neto and Vivix report that the primary benefits of implementing the new data foundation with help from HighByte and AWS include:

  • less unscheduled downtime;
  • increased asset lifespans and reduced maintenance costs;
  • reduced operating expenses;
  • improved collaboration between people, departments and systems;
  • faster time-to-experience for new data products and apps;
  • reduced development and maintenance costs; and
  • the ability to address larger organizational goals without sacrificing material quality or production efficiency.

In the future, Vivix plans to accelerate adoption of machine learning (ML) and genAI to rapidly develop AI products with Mendix and Snowflake.

It also expects to employ a loop of “autonomous agents” that allows its model to decide in real time which data sources to consult, which tools to use to process the information, and how to perform the tasks in an orchestrated manner. Finally, the company wants to build a “Vivix Virtual Engineer” to reduce customer response times from days or weeks to minutes when quality incidents are reported, and cut the time spent searching for and interpreting information by 90%.
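
The “autonomous agent” loop Vivix describes follows a now-familiar pattern: a model repeatedly chooses a tool or data source, observes the result, and decides what to do next. Here’s a minimal sketch of that control loop; the tool functions and the choose_action() stand-in are hypothetical, and a production version would route the decision through a model such as those hosted on Amazon Bedrock.

```python
# Minimal sketch of an agentic loop: at each step the "model" picks which
# tool or data source to consult, observes the result, and continues until
# it can answer. All tool functions below are hypothetical stand-ins.

def query_historian(question: str) -> str:
    """Stand-in for a process-historian lookup."""
    return "lehr zone 3 ran 15 degC hot during the incident window"

def query_quality_db(question: str) -> str:
    """Stand-in for a quality-records SQL query."""
    return "3 optical-distortion defects logged on float-line-1"

TOOLS = {"historian": query_historian, "quality_db": query_quality_db}

def choose_action(question: str, observations: list[str]):
    """Stand-in for the model call. A real agent would ask an LLM to pick
    the next tool; this stub consults the historian once, then answers."""
    if not observations:
        return ("use_tool", "historian")
    return ("answer", f"Likely cause, given: {observations[-1]}")

def run_agent(question: str, max_steps: int = 5) -> str:
    observations: list[str] = []
    for _ in range(max_steps):
        kind, value = choose_action(question, observations)
        if kind == "answer":
            return value
        observations.append(TOOLS[value](question))  # run the chosen tool
    return "No conclusive answer within the step budget."

print(run_agent("Why did optical distortion spike on float-line-1?"))
```
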


MCP makes information understandable

To assist Vivix and other initiatives, Semle adds that Intelligence Hub organizes process and batch data, process variables and units of measure from multiple sources, such as PLCs and SQL databases. Situated on top of the user’s plant or operations network, it’s a passthrough application layer, not a historian or analyzer. This lets it add industry-specific context and names that humans can understand. For AI agents, this is typically done with one centralized strategy, such as model context protocol (MCP), an open-source standard developed by Anthropic for agentic AI tools like its Claude Code command-line interface (CLI). MCP exposes tools that AI can use to reach out for data, such as running SQL queries, and even to perform actions such as changing setpoints.
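
To make that concrete, here’s a minimal sketch of an MCP server exposing both kinds of tools, written with the FastMCP helper from the official MCP Python SDK. The tool names, database schema and setpoint logic are hypothetical, and a production server would sit behind authentication and strict write-access controls.

```python
import sqlite3

# Minimal sketch of the MCP pattern Semle describes, using the official
# MCP Python SDK. The database, table and setpoint logic are hypothetical.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("plant-data")

@mcp.tool()
def query_downtime(machine_id: str) -> list[str]:
    """Run a read-only SQL query for recent downtime events on a machine."""
    with sqlite3.connect("plant.db") as db:
        rows = db.execute(
            "SELECT started_at, reason FROM downtime WHERE machine_id = ?",
            (machine_id,),
        ).fetchall()
    return [f"{ts}: {reason}" for ts, reason in rows]

@mcp.tool()
def set_setpoint(loop_id: str, value: float) -> str:
    """Request a setpoint change (stubbed; real writes need interlocks)."""
    return f"Requested setpoint {value} on loop {loop_id}"

if __name__ == "__main__":
    mcp.run()
```

An agent connected to this server discovers and calls query_downtime or set_setpoint the same way it would any other MCP tool, without knowing anything about the PLCs or databases behind them.
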

“As a data passthrough, Intelligence Hub transforms data, adds context, and shapes it for consumers like MCP for AI, Snowflake, Microsoft Fabric or AWS,” adds Semle. “This is similar to the way that Inductive Automation’s Ignition software creates HMI templates for machines and processes, but those are just for interfaces. Industrial data and operating systems require this context, so everyone can use them.”

To determine which type of system virtualization, AI or other form of digitalization will be the most effective, Semle reports that several basic questions must be asked about users’ individual data operations, including:

  • What components, software and networks are already in place?
  • What’s connected, and what’s still separated into islands?
  • What baseline connectivity exists for HMI/SCADA systems to take data from sensors, PLCs and other devices, as well as business systems?
  • What other information do users wish they had access to?

“We pull in lots of raw files, spreadsheets and other information, and work with many different data batches and systems that aren’t linked, which means there are lots of hurdles to overcome,” says Semle. “For instance, we recently helped FedEx maintain its conveyors, which have vibration sensors, and are networked via MQTT to SQL databases and Google’s cloud-computing service. This lets the company monitor the state of its conveyors, and send out work orders as needed. It also uses Intelligence Hub to automate away many former hurdles, including running Docker containers in about 100 facilities, where it can test and drop in replacement components and software in one day.”
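
The conveyor-monitoring flow Semle describes, sensor readings arriving over MQTT and landing in SQL for analysis and work orders, can be pictured in a few lines. This sketch assumes paho-mqtt 2.x and a local SQLite table; the broker address, topic hierarchy and payload fields are hypothetical stand-ins for FedEx’s actual systems.

```python
import json
import sqlite3

import paho.mqtt.client as mqtt  # assumes paho-mqtt 2.x

# Land incoming vibration readings in a SQL table for later analysis and
# work-order triggers. All names here are hypothetical.
db = sqlite3.connect("conveyors.db")
db.execute(
    "CREATE TABLE IF NOT EXISTS vibration "
    "(conveyor_id TEXT, ts TEXT, rms_mm_s REAL)"
)

def on_message(client, userdata, msg):
    reading = json.loads(msg.payload)
    db.execute(
        "INSERT INTO vibration VALUES (?, ?, ?)",
        (reading["conveyor_id"], reading["ts"], reading["rms_mm_s"]),
    )
    db.commit()

client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
client.on_message = on_message
client.connect("broker.example.com", 1883)
client.subscribe("site/+/conveyor/+/vibration")  # one topic per conveyor
client.loop_forever()
```
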

Beyond considering what additional information would be useful to them, Semle adds that potential AI and digitalization users should also determine what data is presently a pain to reach, and how AI might help with training. Identifying and answering these questions forms the use cases that guide and serve as entry points for industrial data operations, as well as the connectivity and context they’ll require.

Intelligence Hub uses OT-friendly tag browsing and other tools that make it easier to deploy, and simpler for IT to support. This capability lets it connect to files, whether they’re on PLCs, HMIs, disks or elsewhere. HighByte also employs a plain-English data model for machines and other physical equipment. These models consist of the definitions and software representations of corresponding physical devices, which are instantiated with links to subsequent production and performance data from related sensors, PLCs, databases and other sources. A common model also gives users a standardized version that can be reused and adjusted as needed to match changes in its physical counterpart. The models typically run on onsite servers networked via OPC UA, in HMI applications, or in Docker or Portainer containers that can run locally as pared-down virtual machines (VMs). The advantage of software containers is that they’re simple and stateless, which makes them easy to replicate at the volumes required for storing and sharing information.
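
To illustrate the modeling concept, here’s a hedged sketch, not HighByte’s actual schema, of a plain-English equipment model and an instance bound to live data sources:

```python
from dataclasses import dataclass, field

# Hypothetical illustration of the modeling idea described above: a model
# defines the plain-English shape of an asset once, and each instance binds
# those attributes to live sources such as OPC UA tags or SQL columns.

@dataclass
class AttributeBinding:
    name: str    # plain-English attribute, e.g. "ZoneTemperature"
    unit: str    # engineering unit, e.g. "degC"
    source: str  # address of the live value, e.g. an OPC UA node ID

@dataclass
class EquipmentModel:
    model_name: str
    attributes: list[str]

@dataclass
class EquipmentInstance:
    model: EquipmentModel
    asset_id: str
    bindings: list[AttributeBinding] = field(default_factory=list)

lehr = EquipmentModel("AnnealingLehr", ["ZoneTemperature", "BeltSpeed"])
lehr_1 = EquipmentInstance(
    model=lehr,
    asset_id="float-line-1/lehr-1",
    bindings=[
        AttributeBinding("ZoneTemperature", "degC",
                         "opc.tcp://plc1/ns=2;s=Lehr1.Zone3.PV"),
        AttributeBinding("BeltSpeed", "m/min",
                         "opc.tcp://plc1/ns=2;s=Lehr1.Belt.Speed"),
    ],
)
```

Because the model is defined once and instantiated per asset, a change to the physical equipment only has to be reflected in one place, which is the reuse benefit described above.
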

“Once software containers and models are connected and contextualized, they can start to perform industrial data operations, and serve their use cases,” explains Semle. “For example, a unified namespace (UNS) is a destination for context that uses an MQTT broker to publish definitions of models, so users can see the states of their data. As new information comes in, UNS builds layers, which enables it to take previously messy data, and make it available to anyone with appropriate access. This process also lets UNS construct and put application programming interfaces (APIs) into operations and plants, which standardizes production input, makes it look more like regular IT data, and enables it to be consumed and analyzed by other users and levels in an overall enterprise’s fleet.

“Likewise, HighByte lets users build APIs for their processes in conjunction with the Clean Energy Smart Manufacturing Innovation Institute’s (CESMII) Industrial Information Interoperability eXchange (i3X) standard, which defines what APIs should include, and how they should interact with contextualized production data to allow better interoperability on both the OT and IT sides.”
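
The UNS pattern in the preceding quote is also easy to picture in code. This minimal sketch, again assuming paho-mqtt 2.x with hypothetical broker and topic names, publishes a retained model definition so any authorized consumer that subscribes later still sees the current state:

```python
import json

import paho.mqtt.client as mqtt  # assumes paho-mqtt 2.x

# Minimal sketch of the UNS pattern: publish a model definition as a
# retained MQTT message under a hierarchical topic. Broker address and
# topic hierarchy are hypothetical.
client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
client.connect("uns-broker.example.com", 1883)

definition = {
    "model": "AnnealingLehr",
    "attributes": {"ZoneTemperature": "degC", "BeltSpeed": "m/min"},
}

# retain=True makes this the broker-held "current state" for the topic,
# so late subscribers see the namespace immediately.
client.publish(
    "site/goiana/area/float-line-1/lehr-1/_model",
    json.dumps(definition),
    retain=True,
)
client.disconnect()
```

The retained flag is what gives a UNS the always-current behavior Semle describes: the broker holds the last message per topic, so the namespace is browsable at any time without replaying history.
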

About the Author

Jim Montague

Executive Editor

Jim Montague is executive editor of Control. 
