Manufacturing · Energy · IIoT

You Have the Sensor Data.
Your Analytics Run on Last Night's Export.

CrateDB connects to your existing OT stack and runs OEE, predictive maintenance, and cross-plant queries on live sensor data. Standard SQL. No historian replacement required.

The data is there. The analytics are not.

Your historians and PLCs capture data without issue. The problem shows up when you need answers.

Traditional time-series databases handle simple metric ingestion at low cardinality well. They degrade when you add dimensions — device types, plant locations, shift codes, production runs. Every new series becomes a separate storage structure. The more context you add, the slower the queries get.

The result: your operations team waits for reports updated at midnight. Your data engineers maintain two systems instead of one.

Where traditional systems fall short

  • Slow calculations: OEE that takes minutes instead of milliseconds. Your shift ends before the report loads.
  • Cross-system joins: Root-cause queries that require joining sensor data with ERP downtime codes across a second system — manually.
  • Schema fragility: Every new sensor type triggers a pipeline migration. Adding context to your data makes everything slower.
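As a sketch of what a cross-system root-cause query can look like once sensor data and ERP downtime codes live in one SQL engine (all table and column names here are hypothetical, purely for illustration):

```sql
/* Hypothetical example: rank last week's downtime causes by hours lost,
   joining raw machine-stop events with ERP downtime codes in one query.
   Table and column names are illustrative. */
SELECT d.description               AS downtime_cause,
       count(*)                    AS stop_events,
       sum(s.duration_s) / 3600.0  AS hours_lost
FROM machine_stops AS s
JOIN erp_downtime_codes AS d ON d.code = s.stop_code
WHERE s.ts >= CURRENT_TIMESTAMP - INTERVAL '7 days'
GROUP BY d.description
ORDER BY hours_lost DESC
LIMIT 5;
```

In a batch setup this question typically means exporting from the historian, exporting downtime codes from the ERP, and joining them by hand; here it is a single join.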


Connect to your OT stack. Query the moment data arrives.

Connect without replacing

CrateDB connects to your existing industrial sources via Telegraf — OPC-UA, MQTT, SCADA, and historian outputs. Your PLCs, historians, and SCADA infrastructure stay in place.
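A minimal Telegraf configuration along these lines might look as follows; broker addresses, topics, and the connection string are placeholders to adapt to your environment:

```toml
# Sketch of a Telegraf pipeline: consume MQTT sensor messages
# and write them to CrateDB. Hostnames and topics are placeholders.
[[inputs.mqtt_consumer]]
  servers     = ["tcp://mqtt-broker.local:1883"]   # your broker
  topics      = ["plant/+/sensors/#"]              # your topic layout
  data_format = "json"

[[outputs.cratedb]]
  # CrateDB speaks the PostgreSQL wire protocol
  url          = "postgres://crate@cratedb.local:5432/doc?sslmode=disable"
  table        = "metrics"
  table_create = true   # let Telegraf create the table if missing
```

The same pattern applies to the other inputs Telegraf offers, such as its OPC-UA plugin; only the `[[inputs.*]]` section changes.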

Index everything on ingestion

CrateDB indexes every field automatically on ingestion. No manual index management, no batch loading step, no DBA required.
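For instance, a plain table definition is all that is needed; there are no CREATE INDEX statements because every column is indexed as data arrives (the table layout here is illustrative):

```sql
/* Illustrative table: every column is queryable immediately after
   ingestion, with no CREATE INDEX and no batch-loading step. */
CREATE TABLE sensor_readings (
    ts        TIMESTAMP WITH TIME ZONE,
    device_id TEXT,
    plant     TEXT,
    voltage   DOUBLE PRECISION,
    payload   OBJECT(DYNAMIC)   -- semi-structured sensor payloads
);
```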


Standard SQL, existing tools

Write SQL with joins, CTEs, window functions, and aggregations. Your BI tools connect via the PostgreSQL wire protocol — no new query language.

Dynamic columns

Add a new sensor type: CrateDB handles it with dynamic columns. No schema migration, no downtime, no pipeline rewrite.
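As an illustrative sketch (table and column names are hypothetical): with a dynamic column policy, a reading from a new sensor type creates its column on first insert:

```sql
/* Hypothetical table with a dynamic column policy. */
CREATE TABLE readings (
    ts        TIMESTAMP WITH TIME ZONE,
    device_id TEXT
) WITH (column_policy = 'dynamic');

/* A new sensor type reports vibration; the column is created on the
   fly, with no ALTER TABLE and no pipeline change. */
INSERT INTO readings (ts, device_id, vibration_mm_s)
VALUES (CURRENT_TIMESTAMP, 'press-07', 2.4);
```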

OEE root-cause analysis.
On data that just arrived.

Standard SQL. No proprietary functions. No pre-aggregation. The data is live.

```sql
/* Based on IoT device reports, this query returns the voltage
   variation over time for a given meter_id */
WITH avg_voltage_all AS (
    SELECT meter_id,
           avg("Voltage") AS avg_voltage,
           date_bin('1 hour'::INTERVAL, ts, 0) AS time
    FROM iot.power_consumption
    WHERE meter_id = '840072572S'
    GROUP BY 1, 3
    ORDER BY 3
)
SELECT time,
       (avg_voltage - lag(avg_voltage)
           OVER (PARTITION BY meter_id ORDER BY time)) AS var_voltage
FROM avg_voltage_all
LIMIT 10;
```

+---------------+-----------------------+
|          time |           var_voltage |
+---------------+-----------------------+
| 1166338800000 | NULL                  |
| 1166479200000 |   -2.30999755859375   |
| 1166529600000 |    4.17999267578125   |
| 1166576400000 |   -0.3699951171875    |
| 1166734800000 |   -3.7100067138671875 |
| 1166785200000 |   -1.5399932861328125 |
| 1166893200000 |   -3.839996337890625  |
| 1166997600000 |    9.25               |
| 1167044400000 |    0.4499969482421875 |
| 1167174000000 |    3.220001220703125  |
+---------------+-----------------------+

```sql
/* Based on IoT device reports, this query returns the voltage
   corresponding to the maximum global active power for each meter_id */
SELECT meter_id,
       max_by("Voltage", "Global_active_power") AS voltage_max_global_power
FROM iot.power_consumption
GROUP BY 1
ORDER BY 2 DESC
LIMIT 10;
```

+------------+--------------------------+
| meter_id   | voltage_max_global_power |
+------------+--------------------------+
| 840070437W |                   246.77 |
| 840073628P |                   246.69 |
| 840074265G |                   246.54 |
| 840070238E |                   246.35 |
| 840070335K |                   246.34 |
| 840075190M |                   245.15 |
| 840072876X |                   244.81 |
| 840070636M |                   242.98 |
| 84007B113A |                   242.93 |
| 840073250D |                   242.28 |
+------------+--------------------------+

See it for yourself in under 30 minutes

Examples of industrial workloads in production

ABB's OPTIMAX® Cloud for Smart Charging is a state-of-the-art power management system designed for EV charging stations and heavy-vehicle depots in the logistics and bus industries. It provides smart load management for ABB and non-ABB EV chargers, and integrates with external assets such as battery storage, PV systems, and grid operators' systems.

"CrateDB is a critical piece of our OPTIMAX® Cloud platform. Its ability to handle vast amounts of time-series data from diverse sources, while delivering real-time insights, has allowed us to scale our operations seamlessly. With CrateDB, we’ve empowered our customers with smarter energy management, reduced costs, and supported a more sustainable future."

Christian Kohlmeyer
Product Owner Mobility & Sites
ABB

TGW Logistics Group is one of the leading international suppliers of material handling solutions. As a systems integrator, TGW plans, produces and implements complex logistics centres, from mechatronic products and robots to control systems and software.

Using CrateDB, TGW accelerates data aggregation and access from warehouse systems worldwide, resulting in increased database performance. The system can handle over 100,000 messages every few seconds.

"CrateDB is a highly scalable database for time series and event data with a very fast query engine using standard SQL".

Alexander Mann
Owner Connected Warehouse Architecture
TGW Logistics Group

SPGo! is part of PETROMIN, which has more than 23 years of experience in the mining and oil industries. They build applications that monitor every material conveyor-belt idler, minute by minute, 24 hours a day, through online sensors. They use CrateDB as a central database to capture and query data from 30,000 sensors per mine, representing 760 million records a day.

"With CrateDB, we can continue designing products that add value to our customers. We will continue to rely on CrateDB when we need a database that offers great scalability, reliability and speed."

 

Nixon Monge Calle
Head of IT Development and Projects
SPGo! Business Intelligence


Resources

Webinar
From Sensors to Dashboards: Building Real-Time Analytics Pipelines That Actually Work

The promise of real-time data is everywhere, but for most engineering teams, the reality is a nightmare of stale data, system bottlenecks, and lagging dashboards that are useless for operational decisions. If your team is spending more time tuning infrastructure and reprocessing data than delivering new value, you are not alone. Watch this recording to learn the three principles for building real-time analytics that work.

Talk
ABB: AI and Analytics applied to Industrial Data

In this talk, Marko Sommarberg, Lead Digital Strategy and Business Development at ABB, explains how ABB Ability™ Genix applies AI and analytics to unlock the value of industrial data using CrateDB.

Talk
TGW Logistics: Not All Time-Series are Equal

This talk at the IoT Tech Expo 2023 explores the complexities of industrial big data, characterized by its high variety, unstructured features, and different data frequencies. It also analyzes how these attributes influence data storage, retention, and integration when dealing with an IoT database.

Tutorial
Set up HiveMQ with CrateDB as a consumer

This blog post gives you an overview of how to set up HiveMQ using CrateDB as a consumer.

Video
Unstoppable Insights: Resilient Data Streaming with CrateDB

Explore the resilience of CrateDB for real-time data streaming in a demonstration of node failure scenarios. This video illustrates the capabilities of a distributed CrateDB cluster to maintain operational continuity during both a graceful shutdown and an abrupt unavailability of a database node within a 3-node configuration. Witness firsthand how continuous data ingestion is sustained, underscoring CrateDB's inherent high availability.

Webinar
IIoT World Manufacturing & Supply Chain Day: From Data Overload to Actionable Insights: Mastering Data Management in Manufacturing

This session delves into the challenges of managing vast amounts of data and demonstrates how real-time analytics can transform raw data into valuable insights. Learn how modern data management solutions enable manufacturers to optimize production, improve operational efficiency, and drive innovation. From harnessing data streams in real-time to leveraging AI-powered analytics, this session will equip you with the tools to master data management and make faster, data-driven decisions.

Additional resources

See it for yourself in under 30 minutes

FAQ

What is streaming analytics?

Streaming analytics refers to real-time processing and analysis of data as it flows in (e.g. from sensors, logs, events). It allows organizations to detect anomalies, trigger actions, and make decisions immediately, rather than relying on batch processing that introduces latency.

How does CrateDB handle high-volume streaming ingestion?

CrateDB is engineered for scalable ingestion from sources such as Kafka, MQTT, CDC streams, and logs. It features automatic indexing and a distributed architecture that let it scale horizontally and sustain large throughput while serving sub-second queries.

Can I query streaming data with standard SQL?

Yes, CrateDB supports native SQL. You can query streaming and time-series data using familiar SQL semantics (aggregations, windowing, joins) without needing to learn a proprietary query language.

How does CrateDB handle changing schemas and new data sources?

CrateDB supports flexible schemas, allowing you to ingest semi-structured data (e.g. JSON fields) and evolve your schema over time. This makes it easier to adapt as new sources and fields emerge.

What use cases is CrateDB best suited for?

It's particularly effective in scenarios such as IoT (sensor monitoring), log and event analytics, anomaly detection, fleet and transport monitoring, real-time dashboards, AI/ML feature pipelines, and any system requiring near-instant insight from high-volume continuous data.