Both Max and I have been very clear that we want to build Telmai for DataOps teams, specifically data engineers. Our objective is to empower these highly skilled data engineers with the right set of tools for data observability.
Once we were clear on the user persona and the problem we wanted to solve, our next set of decisions were around the technology foundation for Telmai. We went through a series of discussions around topics like open source, open core, the SaaS model, and the criticality of seamless integrations into data pipelines.
Here is a peek into some of our key decisions.
Software as a Service (SaaS)
Monitoring software is typically very resource-intensive, especially when ingestion rates reach millions of data points or records per second and volumes spike unpredictably. Handling enterprise-grade security, fast auto-scaling, throttling, and retries is additional overhead for data engineers who are already dealing with highly complex data systems. Our experience designing such highly secure systems efficiently lets us take this overhead off the data engineering team.
The SaaS model also gives us an opportunity for continuous improvement of our AI models.
To understand whether you have problems with your data, you need superior monitoring to detect outliers. Data quality systems have traditionally addressed this with rules; however, rules are fragile, hard to develop, and can only discover what you already know. We want to tell you what you don't know and should know.
Advanced ML models significantly reduce the time and effort to get value and also adapt to constantly evolving data. At the same time, you might have validation logic that relies on business rules. Such rules are typically well understood and robust. In such cases augmenting your rules with ML makes our system even more powerful.
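The contrast can be sketched in a few lines. The example below is a simplified illustration, not Telmai's models: a hand-written business rule catches a known violation (a negative price), while a basic statistical outlier check flags a spike the rule author never anticipated. The record shape, threshold, and function names are our own assumptions.

```python
# Hypothetical sketch: augmenting a business rule with a simple
# statistical outlier check (stand-in for real ML-based detection).
import statistics

def rule_check(record):
    # Business rule: price must be positive (well understood, robust).
    return record["price"] > 0

def zscore_outliers(values, threshold=2.0):
    # Statistical check: flag values far from the mean. Unlike a fixed
    # rule, this adapts as the data distribution evolves.
    mean = statistics.mean(values)
    stdev = statistics.stdev(values)
    return [v for v in values if stdev and abs(v - mean) / stdev > threshold]

records = [{"price": p} for p in [10, 12, 11, 9, 10, 13, 500, -5]]
rule_violations = [r for r in records if not rule_check(r)]
outliers = zscore_outliers([r["price"] for r in records])
print(rule_violations)  # [{'price': -5}] -- caught by the known rule
print(outliers)         # [500] -- the unexpected spike, caught statistically
```

Real anomaly detection models go far beyond a z-score, but the division of labor is the same: rules encode what you already know, and learned models surface what you don't.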
Simplicity of integration
Last but not least, whether your pipeline reads files from GCS or S3, queries a data warehouse like BigQuery, Redshift, or Snowflake, or processes records with Spark or Dataflow, we want your integration with Telmai to be as seamless as possible. We will provide both client libraries and REST APIs to support any type of integration, all without adding latency to your pipeline.
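As a rough illustration of what REST-based integration can look like, here is a sketch that batches records into a single JSON request. The endpoint URL, token, and payload shape are purely hypothetical placeholders, not Telmai's actual API.

```python
# Hypothetical sketch: batching records into one REST call to a data
# observability service. Endpoint, token, and payload shape are
# illustrative assumptions only.
import json
import urllib.request

def build_request(records, endpoint="https://api.example.com/v1/ingest",
                  token="YOUR_API_TOKEN"):
    # Batch records into one JSON payload to keep per-record overhead low.
    payload = json.dumps({"records": records}).encode("utf-8")
    return urllib.request.Request(
        endpoint,
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",
        },
        method="POST",
    )

req = build_request([{"id": 1, "price": 10.0}, {"id": 2, "price": 12.5}])
# urllib.request.urlopen(req)  # send asynchronously (e.g. from a
#                              # background thread) so pipeline latency
#                              # is unaffected
```

Sending such batches off the critical path, rather than inline per record, is what keeps the observability hook from adding latency to the pipeline itself.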
You will notice that the primary design principle for Telmai’s architecture and technology is to provide the best developer experience possible.
To summarize, we will have:
A secured cloud platform to reduce infrastructure planning and maintenance overhead
Advanced ML models for robust and scalable anomaly detection
Simple integration to reduce time to value
Now how will we do all of this? That is the magic of “Telmai”.
#dataobservability #dataquality #dataops