Telmai Announces $2.8 Million Seed From .406 Ventures, Zetta Venture Partners, Y Combinator & others
We have some exciting news to share - we’ve closed our Seed round with .406 Ventures, Zetta Venture Partners, Y Combinator, and some exceptional angel investors such as Manish Sood (Founder and CTO, Reltio), Sam Ramji (Chief Strategy Officer, DataStax), and Venkat Varadachary (CEO, Zenon | former EVP/CDAO at Amex).
We feel fortunate to partner with Graham Brooks and Jocelyn Goldfein on our journey. They are the best partners we could have asked for as a data company with an ML-first approach.
This is a big milestone for our team. We wanted to take this moment to share our thoughts on Telmai’s opportunity, and our plans for the future.
Telmai is a no-code, real-time data quality analysis and monitoring platform. Our product automatically and proactively detects data quality issues as data is being ingested. Our human-in-the-loop machine learning engine enables Telmai users to define correct versus incorrect data; these definitions in turn enable proactive and autonomous monitoring and alerting of data quality issues.
Since inception in November 2020, we’ve been working hard to build visionary products and spread the word about our unique approach to data reliability. We have been fortunate to work with early design partners like Dun & Bradstreet and Myers-Holum, who share our vision for the future of data quality. Thanks to partners like them, and the thousands of prospective users we’ve been fortunate to interview, Telmai launched its MVP this month (October 2021).
More important than our category-leading product is our world-class team. We have been fortunate to recruit a lean but exceptional technical group who share a deep belief in Telmai’s vision of the future.
The next chapter
This seed funding round has enabled us to grow our team and to continue building Telmai’s leading data-quality platform for the modern data stack.
We are excited to continue this journey with an incredible group of institutional and angel investors who believe in our mission.
"As an investor focused on data and analytics for the past 15 years, our portfolio and our network of Chief Data Officers have continually highlighted data quality and trust as key challenges to realizing the promise of big data analytics and machine learning.
The biggest challenge around data quality is the historic gap between engineering-focused data quality tools and business users, who understand the data, how it is being used and the expectations around data reliability. Telmai is not only eliminating that gap but also addressing both IT and business' most pressing needs in an infinitely-scalable, integrated data observability tool that establishes trust in data.”
- Graham Brooks, Partner, .406 Ventures
“The AI and machine learning platform shift is well under way, and ensuring data quality - efficiently, in production, and at scale - will be a prerequisite for every product and every business decision in the decades to come. We're delighted to partner with Telm.ai's founders, who acutely understand the challenge of maintaining high quality data and have built an industry leading product, itself powered by data and machine learning, to help their customers achieve it.”
- Jocelyn Goldfein, Managing Director, Zetta Venture Partners
We want to thank our customers and all the people who made it possible for us to reach this point – we are beyond excited to continue Telmai’s journey.
– Mona & Max
Data profiling helps organizations understand their data, identify issues and discrepancies, and improve data quality. It is an essential part of any data-related project; without it, data quality problems can impact critical business decisions, customer trust, sales, and financial opportunities.
To get started, there are four main steps in building a complete and ongoing data profiling process:
We'll explore each of these steps in detail and discuss how they contribute to the overall goal of ensuring accurate and reliable data.
1. Data Collection
Start with data collection. Gather data from your various sources and extract it into a single location for analysis. If you have multiple sources, choose a centralized data profiling tool (see our recommendation in the conclusion) that can easily connect to and analyze all your data without requiring any prep work.
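As a minimal sketch of this collection step, the snippet below pulls rows from two hypothetical sources (a CSV export and a SQLite table; both names and schemas are illustrative, not from the original post) into one list of records ready for profiling:

```python
import csv
import io
import sqlite3

# Hypothetical source 1: a CSV export from some upstream system.
csv_export = "id,zip\n1,94061\n2,94 061\n"

# Hypothetical source 2: a SQLite table standing in for a warehouse.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER, zip TEXT)")
conn.execute("INSERT INTO customers VALUES (3, '10001')")

def collect() -> list:
    """Pull rows from every source into one list of dicts for analysis."""
    rows = list(csv.DictReader(io.StringIO(csv_export)))
    rows += [
        {"id": str(i), "zip": z}
        for i, z in conn.execute("SELECT id, zip FROM customers")
    ]
    return rows

records = collect()
print(len(records))  # 3 rows gathered from two sources
```

A dedicated tool handles the connectors and scheduling for you, but the shape of the task is the same: land everything in one place before profiling.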
2. Discovery & Analysis
Now that you have collected your data for analysis, it's time to investigate it. Depending on your use case, you may need structure discovery, content discovery, relationship discovery, or all three. If content or structure discovery is important for your use case, make sure that you collect and profile your data in its entirety; do not rely on samples, as sampling will skew your results.
Use visualizations to make your discovery and analysis more understandable. It is much easier to see outliers and anomalies in your data using graphs than in a table format.
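To make content discovery concrete, here is a small sketch of the kind of per-column statistics a profiling pass computes (the records and column name are illustrative):

```python
from collections import Counter

# Toy records; in practice these come from the collection step.
records = [
    {"zip": "94061"}, {"zip": "94 061"}, {"zip": None}, {"zip": "94061"},
]

def profile_column(rows, col):
    """Basic content discovery: completeness, cardinality, frequencies."""
    values = [r[col] for r in rows]
    non_null = [v for v in values if v is not None]
    return {
        "count": len(values),
        "null_rate": 1 - len(non_null) / len(values),
        "distinct": len(set(non_null)),
        "top_values": Counter(non_null).most_common(3),
    }

stats = profile_column(records, "zip")
print(stats["null_rate"])  # 0.25
print(stats["distinct"])   # 2
```

Feeding metrics like these into charts (value-frequency bars, null-rate trends) is what makes the outliers mentioned above easy to spot.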
3. Documenting the Findings
Create a report or documentation outlining the results of the data profiling process, including any issues or discrepancies found.
Use this step to establish data quality rules that you may not have been aware of. For example, a United States ZIP code of 94061 could have accidentally been typed in as 94 061 with a space in the middle. Documenting this issue could help you establish new rules for the next time you profile the data.
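A documented finding like the ZIP-code issue above translates naturally into an executable rule. This is a sketch, assuming US ZIPs must be five digits (optionally ZIP+4):

```python
import re

# Rule discovered during profiling: US ZIP codes are exactly five
# digits, optionally followed by a ZIP+4 suffix. A value like
# "94 061" with an embedded space should fail validation.
ZIP_RULE = re.compile(r"^\d{5}(-\d{4})?$")

def check_zip(value: str) -> bool:
    return bool(ZIP_RULE.fullmatch(value))

print(check_zip("94061"))   # True
print(check_zip("94 061"))  # False
```

Capturing rules in this form means the next profiling run can apply them automatically instead of relying on someone remembering the report.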
4. Data Quality Monitoring
Now that you know what you have, the next step is to address the issues you found. Some you can correct yourself; others you will need to flag for upstream data owners to fix.
After your data profiling is done and the system goes live, your data quality assurance work is not done – in fact, it's just getting started.
Data constantly changes. Left unchecked, data quality defects will continue to occur as a result of both system and user behavior changes.
Build a platform that can measure and monitor data quality on an ongoing basis.
Take Advantage of Data Observability Tools
Automated tools can help you save time and resources and ensure accuracy in the process.
Unfortunately, traditional data profiling tools offered by legacy ETL and database vendors are complex and require data engineering and technical skills. They also only handle data that is structured and ready for analysis. Semi-structured data sets, nested data formats, blob storage types, or streaming data do not have a place in those solutions.
Today organizations that deal with complex data types or large amounts of data are looking for a newer, more scalable solution.
That’s where a data observability tool like Telmai comes in. Telmai is built to handle the complexity that data profiling projects are faced with today. Some advantages include centralized profiling for all data types, a low-code no-code interface, ML insights, easy integration, and scale and performance.
Data observability tools like Telmai:
- Leverage ML and statistical analysis to learn from the data and identify potential issues, and can also validate data against predefined rules
- Detect issues, investigate their root cause, and help remediate
- Examples: continuous monitoring, alerting on anomalies or drifts, and operationalizing the findings into data flows
- Low-code / no-code to accelerate time to value and lower cost
- Enable both business and technical teams to participate in data quality and monitoring initiatives

Traditional data profiling tools:
- Use predefined metrics from a known set of policies to understand the health of the data
- Detect and help remediate
- Examples: data validation, data cleansing, data standardization
- Ongoing maintenance, tweaking, and testing of data quality rules add to their cost
- Designed mainly for technical teams who can implement ETL workflows or open-source data validation software
Start your data observability today
Connect your data and start generating a baseline in less than 10 minutes.
No sales call needed