What's New at Telmai

The last few months have been super exciting at Telmai. We have onboarded new customers, added product capabilities, announced new partnerships, and earned some industry accolades.
Big thank you to our amazing customers for trusting in us!
We have earned a 5-star review on G2, and these high marks qualified us to debut on G2's Data Quality Grid® Report as a High Performer. This recognition has been the most rewarding part of the past few months. Our commitment to our customers remains our top priority as we continue to grow.
Other key highlights:
- New Data Health Dashboard. Telmai users now have a bird's eye view of real-time health metrics across all monitored data systems in their data pipelines. You can monitor all batch and streaming workflows and see how your data quality metrics progress or regress over time. Get dynamic updates in the dashboard as Telmai monitors data in the background. For more information, see our docs.
- Custom SQL Queries. This superpower feature allows you to customize your data monitors without creating custom views in your database. This is valuable when you don't have permission to create those views or simply want to avoid polluting your database with additional schemas. With custom queries, you can monitor derivative attributes (e.g., Transaction Age computed as Today's Date (dynamic) minus Transaction Date), monitor correlated attributes (e.g., Transaction Amount divided by Cost), or include or exclude records via a custom condition (e.g., only monitor records where Transaction Amount > $100,000); see the illustrative sketch after this list.
- Business Metrics Monitoring. We have expanded our metrics monitoring beyond core data quality to support business metrics. Examples include monitoring drifts in the sum, average, and count of numeric attributes such as payments, transaction amount, or total sales, grouped by anywhere from one to millions of dimensions.
- Databricks Partnership. In March, we also announced our partnership with Databricks. We have certified our platform integrations with Delta Lake and Unity Catalog, giving Databricks users continuous reliability for their batch and streaming pipelines and real-time data quality monitoring of their Data Lakehouses and cataloged data sets. To learn more about our capabilities, read our blog here.
- DataStax Case Study. We are proud to support DataStax as one of our top customers. Using Telmai, DataStax has built automated data quality and observability for its product usage monitoring of over 36,000 clusters. You can read the case study with DataStax here.
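To make the custom query idea concrete, here is a minimal, hypothetical sketch of the kind of query such a monitor could be built on. The table and column names are placeholders, and the SQL runs through Python's sqlite3 purely for illustration; it is not Telmai's own configuration syntax.

```python
import sqlite3

# Hypothetical transactions table used only for illustration.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE transactions (txn_id INTEGER, txn_date TEXT, amount REAL, cost REAL);
    INSERT INTO transactions VALUES
        (1, '2023-01-15', 250000.0, 190000.0),
        (2, '2023-03-02',  80000.0,  65000.0);
""")

# A derived attribute (transaction age in days), a correlated attribute
# (amount / cost), and a record filter: the three patterns described above.
query = """
    SELECT
        txn_id,
        julianday('now') - julianday(txn_date) AS transaction_age_days,
        amount / cost                          AS amount_to_cost_ratio
    FROM transactions
    WHERE amount > 100000
"""
for row in conn.execute(query):
    print(row)
```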
As we continue to add new capabilities and update our platform with your requests, please let us know what is top of mind for you.
To learn more about Telmai, contact me or request a demo.

Data profiling helps organizations understand their data, identify issues and discrepancies, and improve data quality. It is an essential part of any data-related project; without it, poor data quality can undermine critical business decisions, customer trust, sales, and financial opportunities.
To get started, there are four main steps in building a complete and ongoing data profiling process: data collection, discovery and analysis, documenting the findings, and data quality monitoring.
We'll explore each of these steps in detail and discuss how they contribute to the overall goal of ensuring accurate and reliable data. Before we get started, let's remind ourselves what data profiling is.
1. Data Collection
Start with data collection. Gather data from various sources and extract it into a single location for analysis. If you have multiple sources, choose a centralized data profiling tool (see our recommendation in the conclusion) that can easily connect to and analyze all your data without requiring any prep work.
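As a rough sketch of this step, assume you have a CSV export and a table in a local database; the snippet below pulls both into a single pandas DataFrame for profiling. The file name, table name, and database path are hypothetical placeholders.

```python
import sqlite3
import pandas as pd

# Hypothetical sources: a CSV export and a table in a local SQLite database.
csv_frame = pd.read_csv("crm_export.csv")             # placeholder file name
with sqlite3.connect("warehouse.db") as conn:          # placeholder database
    db_frame = pd.read_sql_query("SELECT * FROM transactions", conn)

# Consolidate everything in one place and tag each row's origin so issues
# found later can be traced back to their source.
csv_frame["source"] = "crm_export"
db_frame["source"] = "warehouse.transactions"
combined = pd.concat([csv_frame, db_frame], ignore_index=True)
```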
2. Discovery & Analysis
Now that you have collected your data for analysis, it's time to investigate it. Depending on your use case, you may need structure discovery, content discovery, relationship discovery, or all three. If content or structure discovery is important for your use case, make sure you collect and profile your data in its entirety; do not use samples, as sampling will skew your results.
Use visualizations to make your discovery and analysis more understandable. It is much easier to see outliers and anomalies in your data using graphs than in a table format.
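To illustrate what basic content discovery can look like, here is a minimal pandas sketch that profiles every column of a dataset in full (no sampling): data type, null rate, distinct count, and numeric range. The small example DataFrame at the bottom is made up for demonstration.

```python
import pandas as pd

def profile(df: pd.DataFrame) -> pd.DataFrame:
    """Compute simple per-column profile statistics for content discovery."""
    rows = []
    for col in df.columns:
        series = df[col]
        numeric = pd.api.types.is_numeric_dtype(series)
        rows.append({
            "column": col,
            "dtype": str(series.dtype),
            "null_rate": series.isna().mean(),
            "distinct_values": series.nunique(dropna=True),
            "min": series.min() if numeric else None,
            "max": series.max() if numeric else None,
        })
    return pd.DataFrame(rows)

# Example usage on a small, made-up dataset.
df = pd.DataFrame({"amount": [250000.0, 80000.0, None], "zip": ["94061", "94 061", "10001"]})
print(profile(df))
```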
3. Documenting the Findings
Create a report or documentation outlining the results of the data profiling process, including any issues or discrepancies found.
Use this step to establish data quality rules that you may not have been aware of. For example, a United States ZIP code of 94061 could have accidentally been typed in as 94 061 with a space in the middle. Documenting this issue could help you establish new rules for the next time you profile the data.
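A documented finding like the ZIP code example above can be captured as a reusable rule for the next profiling pass. The sketch below is one possible way to express that rule, assuming US five-digit (or ZIP+4) codes; the column name is a placeholder.

```python
import pandas as pd

ZIP_RULE = r"\d{5}(?:-\d{4})?"   # US ZIP or ZIP+4; no embedded spaces allowed

def invalid_zip_codes(df: pd.DataFrame, column: str = "zip") -> pd.DataFrame:
    """Return the rows whose ZIP code violates the documented format rule."""
    valid = df[column].astype(str).str.fullmatch(ZIP_RULE)
    return df[~valid]

df = pd.DataFrame({"zip": ["94061", "94 061", "10001"]})
print(invalid_zip_codes(df))   # flags "94 061", the value with the stray space
```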
4. Data Quality Monitoring
Now that you know what you have, the next step is to make sure you correct these issues. This may be something that you can correct or something that you need to flag for upstream data owners to fix.
After your data profiling is done and the system goes live, your data quality assurance work is not done – in fact, it's just getting started.
Data constantly changes. Left unchecked, data quality defects will continue to occur as a result of both system changes and shifts in user behavior.
Build a platform that can measure and monitor data quality on an ongoing basis.
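One bare-bones way to frame ongoing monitoring is to compare each fresh profile against a stored baseline and alert on metrics that drift beyond a tolerance. The sketch below illustrates that idea with made-up metric names and an arbitrary 20% threshold; a production platform would track far more signals than this.

```python
def detect_drift(baseline: dict, current: dict, tolerance: float = 0.20) -> list:
    """Flag metrics whose relative change from the baseline exceeds the tolerance."""
    alerts = []
    for metric, expected in baseline.items():
        observed = current.get(metric)
        if observed is None:
            alerts.append(f"{metric}: missing from the latest profile")
        elif expected and abs(observed - expected) / abs(expected) > tolerance:
            alerts.append(f"{metric}: baseline {expected}, observed {observed}")
    return alerts

# Hypothetical metrics captured during profiling vs. the latest pipeline run.
baseline = {"row_count": 1_000_000, "amount_null_rate": 0.01, "zip_invalid_rate": 0.002}
latest   = {"row_count":   620_000, "amount_null_rate": 0.01, "zip_invalid_rate": 0.015}
print(detect_drift(baseline, latest))
```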
Take Advantage of Data Observability Tools
Automated tools can help you save time and resources and ensure accuracy in the process.
Unfortunately, traditional data profiling tools offered by legacy ETL and database vendors are complex and require data engineering and technical skills. They also only handle data that is structured and ready for analysis. Semi-structured data sets, nested data formats, blob storage types, or streaming data do not have a place in those solutions.
Today, organizations that deal with complex data types or large volumes of data are looking for a newer, more scalable solution.
That's where a data observability tool like Telmai comes in. Telmai is built to handle the complexity that data profiling projects face today. Some advantages include centralized profiling for all data types, a low-code/no-code interface, ML-driven insights, easy integration, and scalability and performance.
Start your data observability today
Connect your data and start generating a baseline in less than 10 minutes.
No sales call needed