Announcing Telmai's Data Observability for your Delta Lake
Mona Rakibe (CEO & Co-Founder) and Chandru Hebbasooru (Engineer)

Overview

We are super excited to announce Telmai's native support for Delta Lake. With this new integration, Telmai users get end-to-end data observability across the entire data pipeline: Data Lake and Lakehouse environments, Data Warehouses, Delta Lake, and even streaming sources.

What is Delta Lake?

Open-sourced in April 2019, Delta Lake is a Databricks project that brings reliability, performance, and lifecycle management to data lakes. 

Delta Lake provides ACID transactions, scalable metadata handling, and unifies streaming and batch data processing. Delta Lake runs on top of your existing data lake and is fully compatible with Apache Spark APIs.
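
For a sense of that Spark compatibility, here is a minimal sketch, assuming a Databricks notebook (or any Spark session with delta-spark configured) where `spark` is already available; the table name is a placeholder, not something from this post:

```python
# Assumes a Databricks notebook or a Spark session with delta-spark configured,
# so `spark` already exists. The table name is illustrative only.
events = spark.createDataFrame(
    [(1, "signup"), (2, "login")], ["user_id", "event"]
)

# Write an ACID-compliant, versioned Delta table.
events.write.format("delta").mode("overwrite").saveAsTable("events_demo")

# Read it back with the same Spark APIs used for any other source.
spark.table("events_demo").show()

# Every write is recorded in the Delta transaction log, which is what enables
# ACID guarantees and time travel.
spark.sql("DESCRIBE HISTORY events_demo").show(truncate=False)
```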

Designed to solve the data reliability gaps in data lake architectures, Delta Lake has gained rapid adoption since its launch in 2019. At Telmai, we designed this integration for an existing Delta Lake and Unity Catalog customer looking to further enhance their data reliability.

Telmai integration with Delta Lake

So how is Telmai enabling data reliability and data quality for Delta Lake?

As a central data observability tool for your entire data pipeline, Telmai can now easily integrate with Delta Lake to analyze data inside Delta Lake for anomalies like outliers and drifts.  

Our no-code integration enables Delta Lake users to automatically monitor close to 40 data metrics for their Delta tables and views within hours of getting started.

Some of these metrics include:

  • Schema drifts: Schema changes like new attributes added or removed
  • Record count: The volume of received data, calculated as row counts
  • Completeness: Incomplete data received like null values, empty strings, NA, etc.
  • Uniqueness: Count of unique values to track duplicates
  • Distribution: Distribution drift for categorical data
  • Pattern drifts: Unexpected syntax patterns, useful for well-formatted attributes like codes, phone numbers, SSN, Zip Code, etc.
  • Controlled lists of values: Deviations from a controlled list of values (LOV) like ISO codes, ICD codes, Gender, Address_Type, etc.
  • Accuracy: Data accuracy is calculated based on multiple metrics like numeric values, is_email, is_URL, length of strings, tokens, etc. Telmai can flag outliers based on these metrics.
  • Business metrics: Track specific metrics derived from data. For example, taking an average of all values from an attribute like a credit_score and tracking sudden changes in the aggregated value over time.
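
To make a few of these metrics concrete, here is a rough PySpark sketch, not Telmai's implementation, that approximates record count, completeness, and uniqueness for a hypothetical Delta table (the table and column names are placeholders):

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Hypothetical Delta table and column names, used purely for illustration.
df = spark.table("events")

# Record count: the volume of received data as row counts.
record_count = df.count()

# Completeness: share of non-null, non-empty values in one attribute.
non_empty = df.filter(
    F.col("email").isNotNull() & (F.trim(F.col("email")) != "")
).count()
completeness = non_empty / record_count if record_count else 0.0

# Uniqueness: distinct values vs. total rows, to surface duplicates.
uniqueness = (
    df.select("email").distinct().count() / record_count if record_count else 0.0
)

print(f"rows={record_count} completeness={completeness:.2%} uniqueness={uniqueness:.2%}")
```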

With Telmai's notifications, your team will get alerted on unexpected drifts in these metrics. Additionally, users can set expectations/rules using our UI to fine-tune these metrics and thresholds for specific business needs.  

Telmai will also automatically classify these metrics into Data Quality KPIs like freshness, completeness, accuracy, validity, uniqueness, etc. 

Our Delta Lake integration is designed to natively process and monitor changed records and analyze only those. This differs from other integrations like BigQuery and Snowflake, which don't natively track changed data; in those sources, Telmai leverages a timestamp-based column to identify and track the changed records.
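
Conceptually, the two approaches to picking up changed records look something like the sketch below. The first half uses Delta Lake's Change Data Feed, which is one way to read changed records natively (the post does not spell out Telmai's exact mechanism), and the second half filters on a user-chosen timestamp column, as described for warehouse sources. Table, column, and watermark values are placeholders:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Delta-native change tracking: read only the records that changed since a
# given table version via Delta's Change Data Feed (requires the table
# property delta.enableChangeDataFeed = true). One possible mechanism only.
changed = (
    spark.read.format("delta")
    .option("readChangeFeed", "true")
    .option("startingVersion", 42)        # placeholder version
    .table("events")
)

# Timestamp-based change tracking, as described for sources like BigQuery and
# Snowflake: filter on a user-chosen "last modified" column.
last_run = "2023-06-01 00:00:00"          # placeholder watermark
changed_by_ts = spark.table("events").where(
    F.col("updated_at") > F.lit(last_run).cast("timestamp")
)
```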

Moreover, we have made all of this super easy, so Delta Lake users can focus on building great data products rather than burning out chasing pipeline health issues.

How does our Delta integration work? 

It is a simple five-step process that's documented here:

  1. Collect JDBC connection information from your Databricks cluster.
  2. Create an API token that allows Telmai to connect to your cluster.
  3. Create a source in Telmai to connect to your Delta table.
  4. Enable the delta flag** on the Telmai source/connection to allow monitoring of changed data.
  5. Enable the schedule on the Telmai source to run jobs on a scheduled period.

** Telmai's delta flag works across all sources (not just Delta Lake); the naming is coincidental. The flag enables a change data capture mode that monitors and observes only the changed records.
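
If you want to sanity-check the connection details and API token from steps 1 and 2 before creating the Telmai source, a quick test with the Databricks SQL connector for Python (which uses the same hostname, HTTP path, and token as the JDBC driver) might look like this; all values shown are placeholders:

```python
# pip install databricks-sql-connector
from databricks import sql

# All values below are placeholders; use the hostname, HTTP path, and API
# token collected in steps 1 and 2.
connection = sql.connect(
    server_hostname="adb-1234567890123456.7.azuredatabricks.net",
    http_path="/sql/1.0/warehouses/abc123def456",
    access_token="dapiXXXXXXXXXXXXXXXXXXXX",
)

cursor = connection.cursor()
# A trivial query against the Delta table you plan to monitor.
cursor.execute("SELECT COUNT(*) FROM my_catalog.my_schema.my_delta_table")
print(cursor.fetchone())
cursor.close()
connection.close()
```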

Additionally, Telmai's REST-based integrations enable Databricks users to enrich their Unity Catalog functionality with data reliability insights such as open alerts on tables and data quality scores for freshness, completeness, accuracy, and more, giving a full 360-degree view of overall data health.

We are excited about this highly requested feature, which enables our customers to accelerate data reliability on their Delta Lakes, and we hope you find it exciting as well!

Reach out to us if you want to read the case study, have any questions, or would like to schedule a demo here.

Data profiling helps organizations understand their data, identify issues and discrepancies, and improve data quality. It is an essential part of any data-related project, and without it, poor data quality could impact critical business decisions, customer trust, sales, and financial opportunities.

To get started, there are four main steps in building a complete and ongoing data profiling process:

  1. Data Collection
  2. Discovery & Analysis
  3. Documenting the Findings
  4. Data Quality Monitoring

We'll explore each of these steps in detail and discuss how they contribute to the overall goal of ensuring accurate and reliable data. Before we get started, let's remind ourselves of what data profiling is.

What are the different kinds of data profiling?

Data profiling falls into three major categories: structure discovery, content discovery, and relationship discovery. While they all help in gaining more understanding of the data, the types of insights they provide are different:

 

Structure discovery checks that data is consistent, correctly formatted, and well structured. For example, if you have a ‘Date’ field, structure discovery helps you see the various date patterns (e.g., YYYY-MM-DD or YYYY/DD/MM) so you can standardize your data into one format.

 

Structure discovery also examines basic statistics in the data, for example, minimum and maximum values, means, medians, and standard deviations.
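
As a rough illustration of structure discovery, the following pandas sketch (with made-up sample data) surfaces the distinct date patterns in a column and the basic statistics of a numeric column:

```python
import pandas as pd

# Hypothetical sample data used only for illustration.
df = pd.DataFrame({
    "signup_date": ["2023-01-05", "2023/14/02", "2023-02-20", "03-03-2023"],
    "order_total": [25.0, 40.5, 13.25, 99.99],
})

# Reduce each date string to a syntax pattern (digits -> 'N') so mixed
# formats like YYYY-MM-DD vs. YYYY/DD/MM stand out immediately.
patterns = (
    df["signup_date"]
    .str.replace(r"\d", "N", regex=True)
    .value_counts()
)
print(patterns)
# NNNN-NN-NN    2
# NNNN/NN/NN    1
# NN-NN-NNNN    1

# Basic statistics: min, max, mean, median (50%), and standard deviation.
print(df["order_total"].describe())
```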

 

Content discovery looks more closely into the individual attributes and data values to check for data quality issues. This can help you find null values, empty fields, duplicates, incomplete values, outliers, and anomalies.

For example, if you are profiling address information, content discovery helps you see whether your ‘State’ field contains two-letter abbreviations, fully spelled out state names, both, or potentially some typos.

 

Content discovery can also be a way to validate databases with predefined rules. This process helps find ways to improve data quality by identifying instances where the data does not conform to predefined rules. For example, a transaction amount should never be less than $0.
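
A small pandas sketch of content discovery along these lines (again with made-up sample data) might check nulls, duplicates, the values present in a ‘State’ field, and the transaction-amount rule:

```python
import pandas as pd

# Hypothetical transactions data, for illustration only.
tx = pd.DataFrame({
    "state": ["CA", "California", "NY", None, "CA"],
    "amount": [120.0, -5.0, 30.0, 42.0, 120.0],
})

# Null values and empty fields per column.
print(tx.isna().sum())

# Duplicate rows.
print("duplicates:", tx.duplicated().sum())

# Content of the 'state' field: abbreviations, full names, and typos show up here.
print(tx["state"].value_counts(dropna=False))

# Rule validation: a transaction amount should never be less than $0.
violations = tx[tx["amount"] < 0]
print(f"{len(violations)} rows violate the amount >= 0 rule")
```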

Relationship discovery identifies how different datasets are related to each other, for example, key relationships between database tables or lookup cells in a spreadsheet. Understanding relationships is most critical when designing a new database schema, a data warehouse, or an ETL flow that joins tables and data sets based on those key relationships.
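
A minimal sketch of relationship discovery, assuming hypothetical customers and orders tables, is simply a referential-integrity check before relying on that key for joins:

```python
import pandas as pd

# Hypothetical parent/child tables, for illustration only.
customers = pd.DataFrame({"customer_id": [1, 2, 3]})
orders = pd.DataFrame({"order_id": [10, 11, 12], "customer_id": [1, 2, 99]})

# Check that every order references an existing customer before designing a
# join or a foreign-key constraint around that relationship.
orphans = orders[~orders["customer_id"].isin(customers["customer_id"])]
print(f"{len(orphans)} orders reference a customer_id that does not exist")
print(orphans)
```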

Data Observability vs. Data Quality

  • Data observability leverages ML and statistical analysis to learn from the data and identify potential issues, and can also validate data against predefined rules; data quality uses predefined metrics from a known set of policies to understand the health of the data.
  • Data observability detects issues, investigates their root cause, and helps remediate; data quality detects and helps remediate.
  • Data observability examples: continuous monitoring, alerting on anomalies or drifts, and operationalizing the findings into data flows. Data quality examples: data validation, data cleansing, data standardization.
  • Data observability is low-code / no-code to accelerate time to value and lower cost; ongoing maintenance, tweaking, and testing of data quality rules adds to data quality's cost.
  • Data observability enables both business and technical teams to participate in data quality and monitoring initiatives; data quality is designed mainly for technical teams who can implement ETL workflows or open source data validation software.

