Data Observability. What is it?
Max Lukichev

The data management landscape has changed dramatically over the last decade with the evolution and massive adoption of Big Data and ML/AI, and with an ever-increasing number of data sources and ever-growing data volumes. Ensuring high quality and completeness of data is critical for driving valuable business decisions in enterprises. Yet statistics show that 87% of machine learning projects don't make it into production, and data engineers spend 80% of their time cleaning data.


Around five to eight years ago, enterprises started seeing similar patterns around cloud infrastructure. Organizations were investing heavily in cloud architecture with low returns on their investments. There was a lack of monitoring and little ability to predict anomalies and service failures. This need gave birth to Cloud Infrastructure Observability.


Running SaaS operations at ever-growing scale, and the need for efficiency and reliability, put the focus on observability products; today there are several major players in this space, such as Splunk, Datadog, New Relic, and Dynatrace. I have noticed various interpretations of what observability is, but recently it has converged on three key pillars. In the world of Cloud Infrastructure Observability, these pillars are metrics, traces, and logs, and they try to answer the following questions:

  1. Do I have a problem, and how bad is it?
  2. Where is my problem and what is the impact?
  3. What went wrong?

Initially, the first pillar was addressed by metrics monitoring tools, the second one by tracing, and the third one by logs. The observability area is very dynamic and experiencing explosive growth, so we see many new tools emerging and addressing the needs of each pillar.

However, data problems are hidden from these tools: even when all the metrics, traces, and logs look normal for a data pipeline, it can still produce garbage data. This is a significant problem for businesses, leading not only to constant escalation mode and burnout among data engineers troubleshooting discrepancies in reports, but also to serious damage to the business itself.


Hence a similar concept of observability has now emerged in data management for data quality use cases. More and more companies realize they need to focus on addressing the data issues or what is sometimes referred to as “data downtime.” It is not surprising to see the growing interest in Data Observability.


Just like Cloud Observability, Data Observability suites try to answer the same three questions. However, there is no established consensus on naming. Let me offer my take on the Data Observability pillars:

  • Data Quality Monitoring detects a variety of problems or anomalies in the data. There are numerous things that could go wrong, but we can break them down into three high-level categories: missing/incomplete data, incorrect data, and stale data (a minimal sketch of such checks follows this list)
  • Data Lineage helps understand the impact and source of a data anomaly. You need to know how various sources of data relate to each other and how they contribute to downstream systems and reports
  • Data Troubleshooting helps find the root cause of an issue. In App Observability, this is the job of logs. In the case of Data Observability, logs are of minimal help, since the pipeline or application is still operating as expected; it is simply processing the wrong data and ultimately driving wrong decisions
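As a minimal illustration of the monitoring pillar, here is a sketch of the three anomaly categories expressed as checks over a pandas DataFrame. The column names, thresholds, and table layout are hypothetical assumptions for this example; an observability platform would learn most of this from the data itself rather than rely on hand-written checks.

```python
# Minimal sketch of the three anomaly categories: missing/incomplete data,
# incorrect data, and stale data. Column names and thresholds are hypothetical.
import pandas as pd


def check_completeness(df: pd.DataFrame, column: str, max_null_ratio: float = 0.01) -> bool:
    """Missing/incomplete data: fail if too many values in the column are null."""
    return df[column].isna().mean() <= max_null_ratio


def check_validity(df: pd.DataFrame) -> bool:
    """Incorrect data: fail if any order amount is negative."""
    return bool((df["amount"] >= 0).all())


def check_freshness(df: pd.DataFrame, max_age: pd.Timedelta = pd.Timedelta(hours=24)) -> bool:
    """Stale data: fail if the most recent record is older than expected."""
    newest = pd.to_datetime(df["updated_at"], utc=True).max()
    return bool(pd.Timestamp.now(tz="UTC") - newest <= max_age)


orders = pd.DataFrame({
    "order_id": [1, 2, None],
    "amount": [25.0, -5.0, 40.0],
    "updated_at": ["2023-06-01T10:00:00Z"] * 3,
})
print(check_completeness(orders, "order_id"),  # False: a third of the IDs are missing
      check_validity(orders),                  # False: one amount is negative
      check_freshness(orders))                 # depends on when the check runs
```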


Given the complexity of the domain, I anticipate a wide range of tools being introduced for each pillar, far greater than what we saw for Cloud Observability.


So what is Data Observability in the end? In short, it is a new discipline that fills the gaps where traditional data management approaches such as data quality, data profiling, and lineage fall short, helping data engineers achieve operational excellence and deliver business results.


#dataquality #dataobservability #dataops

Data profiling helps organizations understand their data, identify issues and discrepancies, and improve data quality. It is an essential part of any data-related project; without it, poor data quality could undermine critical business decisions, customer trust, and sales and financial opportunities.

To get started, there are four main steps in building a complete and ongoing data profiling process:

  1. Data Collection
  2. Discovery & Analysis
  3. Documenting the Findings
  4. Data Quality Monitoring

We'll explore each of these steps in detail and discuss how they contribute to the overall goal of ensuring accurate and reliable data. Before we get started, let's remind ourselves what data profiling is.

What are the different kinds of data profiling?

Data profiling falls into three major categories: structure discovery, content discovery, and relationship discovery. While they all help in gaining a better understanding of the data, the types of insights they provide are different:

 

Structure discovery analyzes whether data is consistent, correctly formatted, and well structured. For example, if you have a ‘Date’ field, structure discovery helps you see the various patterns of dates (e.g., YYYY-MM-DD or YYYY/DD/MM) so you can standardize your data into one format.

 

Structure discovery also examines simple and basic statistics in the data, for example, minimum and maximum values, means, medians, and standard deviations.
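To make structure discovery a little more concrete, the sketch below reduces each value of a hypothetical ‘date’ column to a format pattern and tallies how often each pattern occurs, alongside the basic statistics mentioned above. The column names and sample values are made up for illustration.

```python
# Rough sketch of structure discovery: collapse each raw value into a pattern
# (digits -> 9, letters -> A), count pattern frequencies, and compute basic
# statistics for a numeric column. Column names and data are hypothetical.
import re
from collections import Counter

import pandas as pd


def value_pattern(value) -> str:
    """Map '2023-06-01' to '9999-99-99' so differing date formats stand out."""
    pattern = re.sub(r"[0-9]", "9", str(value))
    return re.sub(r"[A-Za-z]", "A", pattern)


df = pd.DataFrame({"date": ["2023-06-01", "2023/01/06", "2023-06-02"],
                   "amount": [10.0, 25.5, 17.0]})

pattern_counts = Counter(value_pattern(v) for v in df["date"].dropna())
print(pattern_counts)                                      # Counter({'9999-99-99': 2, '9999/99/99': 1})
print(df["amount"].agg(["min", "max", "mean", "median", "std"]))
```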

 

Content discovery looks more closely into the individual attributes and data values to check for data quality issues. This can help you find null values, empty fields, duplicates, incomplete values, outliers, and anomalies.

 

For example, if you are profiling address information, content discovery helps you see whether your ‘State’ field contains the two-letter abbreviation, the fully spelled out state names, both, or potentially some typos.

 

Content discovery can also be a way to validate databases against predefined rules, improving data quality by identifying instances where the data does not conform to those rules. For example, a transaction amount should never be less than $0.
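As a rough sketch of content discovery over a hypothetical transactions table, the checks below surface nulls, duplicate keys, mixed ‘State’ formats, and violations of the rule above. The table and the specific checks are illustrative assumptions, not any particular product's behavior.

```python
# Rough sketch of content discovery on a hypothetical transactions table:
# nulls, duplicate keys, mixed formats, and violations of a predefined rule
# (a transaction amount should never be less than $0).
import pandas as pd

transactions = pd.DataFrame({
    "txn_id": [1, 2, 2, 4],
    "state": ["CA", "California", None, "NY"],
    "amount": [19.99, -5.00, 42.00, 0.00],
})

null_counts = transactions.isna().sum()                                  # empty/missing fields per column
duplicate_keys = transactions.duplicated(subset="txn_id").sum()          # repeated primary keys
rule_violations = transactions[transactions["amount"] < 0]               # rows breaking the $0 rule
state_formats = transactions["state"].dropna().str.len().value_counts()  # 2-letter codes vs. full names

print(null_counts, duplicate_keys, rule_violations, state_formats, sep="\n\n")
```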

 

Relationship discovery identifies how different datasets are related to each other: for example, key relationships between database tables, or lookup cells in a spreadsheet. Understanding relationships is most critical when designing a new database schema, a data warehouse, or an ETL flow that joins tables and datasets based on those key relationships.
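As a small illustration of relationship discovery, the sketch below measures how completely a candidate foreign key in one hypothetical table is covered by the key of another, which is exactly the information you need before designing joins in an ETL flow. Table and column names are assumptions for this example.

```python
# Rough sketch of relationship discovery: check how completely a candidate
# foreign key in 'orders' is covered by the primary key of 'customers'.
import pandas as pd

customers = pd.DataFrame({"customer_id": [1, 2, 3]})
orders = pd.DataFrame({"order_id": [10, 11, 12, 13],
                       "customer_id": [1, 2, 2, 5]})

matched = orders["customer_id"].isin(customers["customer_id"])
coverage = matched.mean()              # share of order rows with a matching customer
orphans = orders[~matched]             # rows that an inner join would silently drop

print(f"key coverage: {coverage:.0%}")  # 75%: customer_id looks like a foreign key, with gaps
print(orphans)
```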

Data Observability vs. Data Quality

  • Data Observability leverages ML and statistical analysis to learn from the data and identify potential issues, and can also validate data against predefined rules. Data Quality uses predefined metrics from a known set of policies to understand the health of the data.
  • Data Observability detects issues, investigates their root cause, and helps remediate. Data Quality detects and helps remediate.
  • Data Observability examples: continuous monitoring, alerting on anomalies or drifts, and operationalizing the findings into data flows. Data Quality examples: data validation, data cleansing, data standardization.
  • Data Observability is low-code / no-code to accelerate time to value and lower cost. With Data Quality, ongoing maintenance, tweaking, and testing of data quality rules add to its cost.
  • Data Observability enables both business and technical teams to participate in data quality and monitoring initiatives. Data Quality is designed mainly for technical teams who can implement ETL workflows or open source data validation software.
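To make the contrast above concrete, here is a minimal, hypothetical sketch of a fixed data quality rule versus a baseline learned from history. The metric values and the 3-sigma threshold are illustrative assumptions, not a description of how any specific product works.

```python
# Minimal sketch contrasting the two approaches on a daily row-count metric.
# The history and thresholds are hypothetical; an observability platform would
# learn such baselines automatically across many metrics and columns.
import statistics

daily_row_counts = [10_120, 9_980, 10_250, 10_070, 9_910, 10_180, 9_200]  # today's load is unusually low

# Data quality style: a predefined, hand-maintained rule.
RULE_MIN_ROWS = 9_000
rule_alert = daily_row_counts[-1] < RULE_MIN_ROWS           # False: 9,200 still passes the fixed rule

# Data observability style: learn a baseline from history and alert on drift.
history, latest = daily_row_counts[:-1], daily_row_counts[-1]
mean, stdev = statistics.mean(history), statistics.stdev(history)
drift_alert = abs(latest - mean) > 3 * stdev                # True: 9,200 is far outside the learned range

print(f"rule alert: {rule_alert}, drift alert: {drift_alert}")
```

In practice the two approaches complement each other: learned baselines catch issues nobody thought to write a rule for, while predefined rules encode requirements the business already knows.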
