The Value of Real-Time Data

Real-time data poses a unique challenge for process-oriented operations, and conveying its function to team members less familiar with data management can be a daunting task. To bridge this gap, we've categorized the use of real-time data into four distinct phases:

  1. Real-Time: This is the realm of streaming data, where information is visualized and promptly acted upon by the system. Think of it as the immediate-response phase. Real-time data drives the human-machine interface (HMI), letting the operator monitor and control operations as they happen. While this phase doesn't typically record data, certain logs may be created for legal or development purposes. It's the heartbeat of machine control and feedback loops, demanding high-quality data for effective operational control.
  2. Forensic Analysis and Lessons Learned: Here, we delve into captured data (and event logs to a lesser extent) to investigate specific performance metrics and operational issues. This phase involves archiving, condensing, and filtering data for historical reference. Forensic analysis represents post-operational analytics, critical for preparing operators for similar processes in the future. It's a bridge between past experiences and future operations.
  3. Data Mining: Data mining is all about exploring past operational events to uncover trends, areas for improvement, and insights to inform upcoming operations. It helps identify bottlenecks, problem areas, and correlations that may not be immediately apparent. It's the phase where data is consolidated, hashed, and clustered to reveal valuable patterns.
  4. Proactive / Predictive Analytics: In this forward-looking phase, we leverage both present and historical data streams to predict the immediate or distant future. This requires a foundation of historical data, insights from data mining, and the application of learned correlations. Proactive and predictive analytics enable us to convert correlations into actionable performance and operational changes, a realm closely related to but distinct from artificial intelligence (AI).

A more detailed explanation of each phase follows:

Control and Supervision (Real-Time): Real-time data provides a direct human-machine interface (HMI) and permits the human computer to monitor and control operations from the console. The control and supervision phase of real-time data does not, as part of its function, record the data. (However, certain data logs may be created for legal or application-development purposes.) Machine control and control feedback loops require, at a minimum, real-time data of sufficient quality to provide steady operational control.
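
To make the control-and-supervision idea concrete, here is a minimal Python sketch of a proportional feedback loop holding a pressure setpoint. The setpoint, gain, and the stand-in simulate_plant() function are illustrative assumptions taking the place of real sensor and actuator I/O; the only record kept is an optional audit log, mirroring the point that recording is not this phase's job.

```python
import logging

# Minimal sketch of the control-and-supervision phase: a proportional
# feedback loop holding a pressure setpoint. The "plant" is a toy
# simulation standing in for real sensor reads and actuator commands.
logging.basicConfig(filename="control_audit.log", level=logging.INFO)

SETPOINT_PSI = 2500.0
GAIN = 0.0004            # proportional gain (illustrative)

def simulate_plant(choke_position: float) -> float:
    """Stand-in for a real-time sensor read: pressure falls as the choke opens."""
    return 3000.0 - 1000.0 * choke_position

def control_loop(cycles: int = 20) -> None:
    position = 0.0
    for _ in range(cycles):
        pressure = simulate_plant(position)            # real-time read
        error = pressure - SETPOINT_PSI
        position = min(1.0, max(0.0, position + GAIN * error))
        # Optional audit trail, kept for legal / development purposes only.
        logging.info("pressure=%.1f psi choke=%.2f", pressure, position)
    print(f"settled at {simulate_plant(position):.0f} psi")

control_loop()
```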

Forensic Analysis and Lessons Learned: Captured data (and, to a lesser extent, data and event logs) are used to investigate specific performance metrics and operational issues. Generally, this data is kept in some form for posterity, but it may be filtered, processed, or purged. Nevertheless, forensic use represents post-operational analytics. Forensic analysis is also critical to preparing an operator for an upcoming similar process: similar in function, geography, or sequence.
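
As a rough illustration of post-operational analytics, the sketch below assumes an archived capture file (a hypothetical run_042_capture.csv with rop and standpipe_pressure columns), condenses it to hourly statistics for historical reference, and filters out the pressure excursions worth reviewing in a lessons-learned session.

```python
import pandas as pd

# Forensic / lessons-learned sketch over a hypothetical archived capture.
captured = pd.read_csv("run_042_capture.csv", parse_dates=["timestamp"])
captured = captured.set_index("timestamp").sort_index()

# Condense: hourly summary kept for historical reference.
hourly = captured[["rop", "standpipe_pressure"]].resample("1h").agg(["mean", "max"])
hourly.columns = ["_".join(col) for col in hourly.columns]   # e.g. "rop_mean"

# Filter: isolate pressure excursions for the post-job review.
limit = captured["standpipe_pressure"].quantile(0.99)
excursions = captured[captured["standpipe_pressure"] > limit]

hourly.to_parquet("run_042_hourly.parquet")    # archived, condensed record
print(f"{len(excursions)} samples exceeded the 99th-percentile pressure")
```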

Data Mining: Data mining researches previous operational events to locate trends, identify areas for improvement, and prepare for upcoming operations. It is used to identify bottlenecks and problem areas, as well as to correlate events that are less than obvious.
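
The following sketch shows one simple data-mining pass, assuming a folder of archived, condensed run files (hypothetical runs/*.parquet produced by the forensic step): the runs are consolidated and every numeric channel is correlated against every other, surfacing candidate relationships that are less than obvious.

```python
import glob
import numpy as np
import pandas as pd

# Consolidate the archived run summaries into one historical table.
frames = [pd.read_parquet(path) for path in glob.glob("runs/*.parquet")]
history = pd.concat(frames, ignore_index=True)

# Correlate every numeric channel against every other.
corr = history.select_dtypes("number").corr()

# Keep only the upper triangle so each pair appears once, then rank by strength.
mask = np.triu(np.ones(corr.shape, dtype=bool), k=1)
pairs = corr.where(mask).stack().sort_values(key=abs, ascending=False)
print(pairs.head(10))   # strongest, possibly non-obvious, correlations
```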

Proactive / Predictive Analytics: Using data streams, both present and past, to predict the immediate (or distant) future requires historical data, data mining, and the application of learned correlations. Data mining may provide correlated events and properties, but predictive analytics converts those correlations into positive, immediate performance and operational changes. (This use is not, explicitly, artificial intelligence (AI), but the two are closely related.)
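
A bare-bones example of converting a learned correlation into a proactive change might look like the sketch below: it fits a trend to the recent history of a hypothetical motor-temperature channel and projects it forward, so that a pattern learned from past runs (rising temperature precedes a trip) prompts an operational change before the trip occurs. The trip limit, horizon, and sample values are illustrative assumptions.

```python
from typing import Optional

import numpy as np

TRIP_LIMIT_C = 95.0      # illustrative trip threshold
HORIZON_MIN = 30         # how far ahead to act on a projection

def minutes_to_limit(history_c: np.ndarray, sample_minutes: float = 1.0) -> Optional[float]:
    """Fit a linear trend to recent samples and project minutes until the limit."""
    t = np.arange(len(history_c)) * sample_minutes
    slope, _intercept = np.polyfit(t, history_c, deg=1)
    if slope <= 0:
        return None                      # cooling or flat: no action needed
    return (TRIP_LIMIT_C - history_c[-1]) / slope

recent = np.array([78.0, 79.2, 80.1, 81.5, 82.9, 84.0])   # simulated samples
eta = minutes_to_limit(recent)
if eta is not None and eta < HORIZON_MIN:
    print(f"Predicted trip in ~{eta:.0f} min: reduce load now")
```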

The operational and process environment is distinct from administrative and financial contexts, as it revolves around getting the job done efficiently. Consequently, the demands placed on computers, information systems, instrumentation, controls, and data differ significantly. In operations, data is never "in balance," always carries an element of uncertainty, and the process can't afford to halt. Operations teams have learned to adapt and perform their tasks while waiting for systems to come online, upgrade, or even be invented.

Once online, systems must operate flawlessly despite challenges. They must handle data from diverse sources, often characterized by intermittent or sporadic data flow. This makes processing, utilization, storage, and analysis of real-time data a formidable challenge, distinct from the environments seen in administrative or financial operations.
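
One common defensive tactic for such intermittent feeds is sketched below, assuming a raw stream with hypothetical timestamp and value columns: the data is resampled onto a fixed grid, the last reading is carried forward only briefly, and anything older than a staleness limit is flagged rather than silently filled.

```python
import pandas as pd

# Sketch: tolerate sporadic data flow without pretending the feed is healthy.
raw = pd.read_csv("sensor_feed.csv", parse_dates=["timestamp"]).set_index("timestamp")

grid = raw["value"].resample("5s").last()   # fixed 5-second grid
filled = grid.ffill(limit=6)                # carry forward at most 30 s (6 x 5 s)
stale = filled.isna()                       # longer gaps stay visible as missing

print(f"{stale.mean():.1%} of the timeline had no usable data")
```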

The data-information-knowledge-understanding-wisdom paradigm: Within the data-to-wisdom paradigm, real-time data is just that: data. The entire tree breaks out as follows, with a short sketch after the list tying the levels together:

  • Data: This is raw, untempered data from the operations environment. While it's elemental, some data filtering and quality checks are necessary.
  • Information: This represents the presentation of data in formats comprehensible to humans, akin to the control and supervision phase of real-time data.
  • Knowledge: Knowledge emerges through forensic analytics, data mining, and correlation analysis. It's about extracting valuable insights from data.
  • Understanding: The proactive and forward-looking changes in behavior, characteristic of the proactive/predictive analytics phase, contribute to understanding.
  • Wisdom: The domain of wisdom remains firmly in the hands of the human computer, where human judgment and decision-making come into play.

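As a toy illustration, the sketch below walks a single hypothetical pressure reading up the hierarchy, with each step labeled by the level it represents; the thresholds and wording are assumptions, and the final decision is left to the operator.

```python
from statistics import mean

# A toy walk up the data-to-wisdom hierarchy for one hypothetical channel.
def advise(raw_psi: float, history_psi: list[float]) -> str:
    # Data: the raw reading, subjected to a basic quality check.
    if not 0.0 <= raw_psi <= 10_000.0:
        return "reading rejected by quality check"
    # Information: the reading presented in human-readable form.
    info = f"pressure {raw_psi:.0f} psi"
    # Knowledge: the reading set against mined history.
    drift = raw_psi - mean(history_psi)
    # Understanding: a forward-looking, proactive suggestion.
    if drift > 200.0:
        return f"{info}, {drift:+.0f} psi above baseline; consider easing the pump rate"
    return f"{info}, within the normal band"

# Wisdom: whether to act on the suggestion remains the operator's call.
print(advise(2740.0, [2480.0, 2510.0, 2495.0, 2520.0]))
```
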
In essence, this Data-Hierarchy paradigm serves as a foundational framework for technologists, guiding them through the progressive stages of data transformation and interpretation, ultimately culminating in the wisdom to apply that knowledge effectively.
