What’s new at Telmai in 2025: Key product feature updates so far

Reliable data is the foundation for every modern enterprise, from powering AI models to ensuring trusted reporting and customer experiences. But as architectures evolve toward distributed open lakehouse and hybrid multi-cloud data environments, ensuring data quality at the source becomes more critical than ever. That’s why, in the first half of 2025, Telmai continued to double down on enabling observability where the data lives—in the lake itself. From native Iceberg support that eliminates warehouse dependencies to enhanced rule logic and smarter alerting, our latest updates are built to help data teams monitor, validate, and remediate data issues earlier in the pipeline.

The result: greater trust in data-driven initiatives through automated resolution, allowing data reliability to scale alongside your growing ecosystem without adding operational overhead.

Data Quality Made Simple for Business and Engineering Teams

Enhanced DQ Rule Engine to scale and unify validation across your stack

As data pipelines become increasingly distributed and complex, centrally managing data quality rules and validation workflows is no longer optional—it’s essential. Telmai’s enhanced rule engine offers a unified interface for creating, editing, and deploying validation rules across all systems, eliminating fragmented and siloed operations. With reusable templates and JSON-based rule definitions, teams can standardize validations, ensure version control, and integrate directly into CI/CD workflows. Decoupled from warehouse compute, Telmai executes validations in its own engine, delivering performance and scalability without additional cost or operational overhead.
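
To make the idea concrete, here is a minimal sketch of keeping a JSON-based rule definition under version control and sanity-checking it in a CI step. The field names and the validation helper are hypothetical illustrations, not Telmai’s actual rule schema or API.

```python
# Hypothetical illustration only: field names and structure are made up,
# not Telmai's actual rule schema or deployment API.
import json

rule = {
    "name": "orders_amount_positive",
    "dataset": "warehouse.orders",
    "expression": "amount > 0",
    "severity": "high",
    "owner": "data-platform@example.com",
}

REQUIRED_FIELDS = {"name", "dataset", "expression", "severity"}


def validate_rule(definition: dict) -> None:
    """Fail fast in CI if a rule definition is missing required fields."""
    missing = REQUIRED_FIELDS - definition.keys()
    if missing:
        raise ValueError(f"rule '{definition.get('name', '?')}' is missing: {sorted(missing)}")


if __name__ == "__main__":
    validate_rule(rule)
    # Emit the JSON that a deployment step would hand to the rule engine.
    print(json.dumps(rule, indent=2))
```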

Custom free-form SQL metrics and advanced rule logic

Telmai now enables users to define custom metrics using SQL and implement complex validation logic that reflects their unique data domain, all through an intuitive interface. Whether it’s tracking nuanced business KPIs or applying layered rule conditions, teams can build data quality rules that align with real-world expectations, without engineering overhead. This empowers users across technical levels to define what “data quality” means for their organization and catch edge-case anomalies that generic rules often miss.
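
As a rough sketch of the pattern (Telmai’s actual metric-definition interface may look different), a free-form SQL metric could capture a business-specific ratio and pair it with a simple threshold check. The table, columns, and threshold below are illustrative placeholders.

```python
# Illustrative only: table, column names, and threshold are placeholders,
# and the exact SQL dialect and metric syntax may differ in Telmai.

# A business-specific KPI: share of recent orders with a discount above 50%.
DISCOUNT_OUTLIER_RATE_SQL = """
SELECT COUNT_IF(discount_pct > 0.5) / NULLIF(COUNT(*), 0) AS discount_outlier_rate
FROM warehouse.orders
WHERE order_date >= CURRENT_DATE - 1
"""

# Layered rule logic: alert only when the metric breaches its threshold.
THRESHOLD = 0.02  # flag if more than 2% of recent orders look anomalous


def breaches_threshold(metric_value: float, threshold: float = THRESHOLD) -> bool:
    """Return True when the custom metric should raise an anomaly."""
    return metric_value > threshold
```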

Monitor data quality trends with custom metric dashboards


Telmai’s Metric Inspector equips teams with interactive, time-series dashboards that reveal how key data quality metrics behave over time. Users can drill into KPIs such as null rates, freshness lag, or custom-defined metrics and correlate them with anomaly triggers. This level of transparency helps data teams identify patterns, validate rule thresholds, and continuously refine their data quality strategy.
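
To illustrate what these KPIs measure, the short sketch below computes a daily null rate and a freshness lag over a toy dataset with pandas. The column names are made up, and this is not how Telmai computes its metrics internally.

```python
# Toy illustration of two of the KPIs mentioned above; not Telmai internals.
import pandas as pd

events = pd.DataFrame({
    "event_time": pd.to_datetime([
        "2025-06-01 09:00", "2025-06-01 10:30", "2025-06-02 08:15", "2025-06-02 23:50",
    ]),
    "customer_id": ["c1", None, "c3", "c4"],
})

# Null rate per day for a key column (a sudden jump often signals an upstream issue).
null_rate_by_day = (
    events.set_index("event_time")["customer_id"]
    .resample("D")
    .apply(lambda s: s.isna().mean())
)
print(null_rate_by_day)

# Freshness lag: how stale is the newest record right now?
lag_hours = (pd.Timestamp.now() - events["event_time"].max()).total_seconds() / 3600
print(f"freshness lag: {lag_hours:.1f} hours")
```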

Observability at the Source for Open Table Formats

As AI applications and advanced analytics become table stakes for modern enterprises, organizations are increasingly adopting open lakehouse architectures powered by table formats like Apache Iceberg, Delta Lake, and Hudi. While these formats offer the flexibility of object storage with the governance and performance of traditional warehouses, they lack native mechanisms for ensuring data quality and reliability. 

Without observability at the source, organizations are forced to validate data downstream, driving up cloud costs, delaying issue detection, and undermining the reliability of critical analytics and AI initiatives. Telmai solves this by delivering native, source-level observability for Apache Iceberg on GCP and GCS, with Delta and Hudi support on the roadmap.

Through partition-level profiling and metadata pushdown, Telmai enables full-fidelity validation without scanning entire datasets or triggering warehouse compute. This empowers teams to detect schema drift, freshness issues, and value anomalies early in the pipeline, ensuring AI models and analytical workloads operate on trusted, timely data, at scale and without architectural compromise.
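
As a rough illustration of what metadata-level access to Iceberg looks like (this is not Telmai’s implementation), the sketch below uses PyIceberg to read schema, snapshot, and partition statistics without touching the data files. The catalog settings and table name are placeholders, and the inspect API assumes a recent PyIceberg release.

```python
# Illustration only, not Telmai's implementation. Catalog settings and the
# table identifier are placeholders; assumes a recent PyIceberg release.
from pyiceberg.catalog import load_catalog

catalog = load_catalog(
    "lakehouse",
    **{"type": "rest", "uri": "https://iceberg-catalog.example.com"},
)
table = catalog.load_table("analytics.orders")

# Schema drift: the current schema is available from metadata alone.
print(table.schema())

# Freshness: the latest snapshot records when the table last changed.
snapshot = table.current_snapshot()
if snapshot is not None:
    print("last commit (ms since epoch):", snapshot.timestamp_ms)

# Partition-level row and file counts come from metadata, so anomalies can be
# localized to a partition without scanning the underlying data files.
print(table.inspect.partitions().to_pandas())
```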

Operational Efficiency Through Smarter Workflows and UX

Streamlined interface for defining and managing DQ policies

Telmai’s updated policy management UI makes it easier for users to define, configure, and review data quality policies at scale. The refreshed layout improves visibility into rule thresholds, affected datasets, and policy logic—reducing the time it takes to author or adjust checks. Designed to support both technical and operational users, this interface lowers the barrier to entry and enables more teams to take ownership of data quality without relying on engineering.

Smarter alerting and workflows for faster resolution

As data ecosystems scale, managing noise becomes just as important as detecting issues. Telmai now includes enhanced alert routing logic that ensures notifications are delivered based on policy type, team ownership, and relevance—so the right people are alerted at the right time. By aligning alerts with organizational context, teams can reduce alert fatigue, prioritize what matters, and accelerate resolution. This makes it easier to operationalize data quality across distributed teams and complex pipelines.
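
Purely as a hypothetical sketch of the idea (Telmai’s actual alerting configuration is not shown here), routing alerts by policy type and owning team might look like this:

```python
# Purely hypothetical sketch of routing alerts by policy type and owning team;
# Telmai's actual alerting configuration will differ.
ROUTES = [
    {"policy_type": "freshness", "owner": "ingestion-team", "channel": "#ingestion-alerts"},
    {"policy_type": "schema_drift", "owner": "platform-team", "channel": "#platform-alerts"},
]
DEFAULT_CHANNEL = "#data-quality"


def route(alert: dict) -> str:
    """Pick a destination channel based on policy type and team ownership."""
    for r in ROUTES:
        if alert.get("policy_type") == r["policy_type"] and alert.get("owner") == r["owner"]:
            return r["channel"]
    return DEFAULT_CHANNEL


print(route({"policy_type": "freshness", "owner": "ingestion-team"}))  # -> #ingestion-alerts
```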

Simplified connection and asset management for faster onboarding

To streamline onboarding and reduce repetitive setup, Telmai has introduced a modular approach to managing connections and assets. Previously, connections were tied directly to asset creation, requiring full configuration for each new asset.

Now, source connections are created once through a centralized interface and can be reused across multiple assets. This separation simplifies setup, reduces duplication, and gives teams better control over how data sources are organized and maintained—leading to faster deployment and easier scaling across environments.

Closing Thoughts

As modern data ecosystems scale to support AI, open table formats, and real-time analytics, ensuring data quality at the source has become a strategic priority. Our latest updates reflect Telmai’s continued commitment to enabling proactive, high-fidelity, AI-augmented data quality monitoring, tailored to the needs of both business and data teams. We’re excited for you to explore these new capabilities and welcome your feedback as we continue to innovate.

Want to learn how Telmai can accelerate your AI initiatives with reliable and trusted data? Click here to connect with our team for a personalized demo.

Want to stay ahead on best practices and product insights? Click here to subscribe to our newsletter for expert guidance on building reliable, AI-ready data pipelines.

Snowflake Summit 2025 Recap: What It Means for Data Reliability and AI Readiness

Introduction: Data quality is no longer an afterthought

“There is no AI strategy without a data strategy.” This statement from Snowflake CEO Sridhar Ramaswamy wasn’t just a soundbite; it was the central theme of Snowflake Summit 2025. From Cortex AI SQL to Openflow and the Postgres acquisition, one principle became clear: the future of AI and enterprise applications is grounded in the quality, reliability, and observability of data.

In this article, we look at some key product announcements that stood out from Snowflake Summit 2025.

1. Easy, Connected, Trusted: Snowflake’s AI Data Cloud in three words

Snowflake Co-founder and Head of Product Benoit Dageville opened the Summit by outlining how AI is becoming embedded across all domains and functions within the enterprise. He emphasized Snowflake’s transformation into a unified platform for intelligent data operations and distilled the company’s AI vision into three foundational principles: easy, connected, and trusted.

  • Easy: AI development should be frictionless. A unified data platform must reduce complexity so teams can build and deploy faster.
  • Connected: AI systems can’t operate in silos. Data and applications must move freely across organizational boundaries.
  • Trusted: Governance isn’t an afterthought. Trust must be built into the platform through end-to-end visibility, control, and accountability.

This framework wasn’t just theoretical; it laid the groundwork for many of the product announcements that followed.

2. Open table formats are now first-class citizens

One of the clearest trends from the Summit was Snowflake’s deeper alignment with the open data ecosystem, especially Apache Iceberg. With support for native Iceberg tables and federated catalog access, Snowflake positioned itself as a format-agnostic, interoperable layer, regardless of whether your architecture follows a lakehouse, data mesh, traditional warehouse, or hybrid model.

This move underscores the growing need to unlock data access and analysis across open and managed environments, enabling teams to build, scale, and share advanced insights and AI-powered applications faster. Snowflake’s commitment to open interoperability was also reflected in its expanded contributions to the open-source ecosystem, including Apache Iceberg, Apache NiFi, Modin, Streamlit, and the incubation of Apache Polaris.

“We want to enable you to choose a data architecture that evolves with your business needs,” said Christian Kleinerman, EVP of Product at Snowflake.

To support this vision in practice, Snowflake also announced deeper interoperability with external Iceberg catalogs such as AWS Glue and Hive Metastore, allowing teams to query data where it lives without moving it.
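
As a rough sketch of what this looks like in practice, the statements below follow Snowflake’s documented catalog-integration syntax for AWS Glue; the names, ARN, and external volume are placeholders, and exact parameters depend on your account and AWS setup.

```python
# Placeholder names and ARN; exact parameters depend on your Snowflake and AWS setup.
# These statements can be run in a Snowflake worksheet or via the Python connector.
CREATE_CATALOG_INTEGRATION = """
CREATE CATALOG INTEGRATION glue_catalog
  CATALOG_SOURCE = GLUE
  CATALOG_NAMESPACE = 'analytics'
  TABLE_FORMAT = ICEBERG
  GLUE_AWS_ROLE_ARN = 'arn:aws:iam::123456789012:role/snowflake-glue-access'
  GLUE_CATALOG_ID = '123456789012'
  GLUE_REGION = 'us-east-1'
  ENABLED = TRUE
"""

CREATE_ICEBERG_TABLE = """
CREATE ICEBERG TABLE orders
  EXTERNAL_VOLUME = 'iceberg_volume'
  CATALOG = 'glue_catalog'
  CATALOG_TABLE_NAME = 'orders'
"""

if __name__ == "__main__":
    for statement in (CREATE_CATALOG_INTEGRATION, CREATE_ICEBERG_TABLE):
        print(statement.strip(), end=";\n\n")
```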

Enhanced compatibility with Unity Catalog further reflects a broader trend: governance and lineage must now extend across formats, clouds, and tooling ecosystems, not just within a single vendor stack. These updates position Snowflake not only as a data platform but as a flexible control plane for AI-ready architectures—one where open data, external catalogs, and trusted analytics can operate in sync.

3. Eliminating silos with Openflow’s unified and autonomous ingestion

Snowflake Openflow marks a significant step in simplifying data ingestion across structured and unstructured sources. Built on Apache NiFi, it offers an open, extensible interface that supports batch, stream, and multimodal pipelines within a unified framework.

Users can choose between a Snowflake-managed deployment or a bring-your-own-cloud setup, offering flexibility for hybrid and decentralized teams. Crucially, Openflow applies the same engineering and governance standards to unstructured data pipelines as it does to structured ones, enabling teams to build reliable data products regardless of source format.

During the keynote, EVP of Product Christian Kleinerman also previewed Snowpipe Streaming, a high-throughput ingestion engine (up to 10 GB/s) with multi-client support and immediately queryable data.

Together, these advancements aim to eliminate siloed ingestion workflows and reduce operational friction without compromising reliability at the point of ingestion.

4. Metadata governance for the AI era: Horizon Catalog and Copilots


Snowflake unveiled Horizon Catalog, a federated catalog designed to unify metadata across diverse sources, including Iceberg tables, dbt models, and BI tools like Power BI. This consolidated view provides both lineage and context across structured and semi-structured datasets, which is critical for organizations embracing decentralized data ownership models or a data mesh architecture.

In addition, the new Horizon Copilot brings natural language search, usage analysis, and lineage insights to the forefront, making it easier for teams to discover, understand, and validate data across their stack.

As enterprises shift to more decentralized models of data ownership, this level of federated visibility and governance becomes essential to ensuring reliability at scale, especially when data flows across pipelines, clouds, and tools.

5. Semantic Views and context-aware AI signals


Snowflake’s introduction of Semantic Views and Cortex Knowledge Extensions marks a strategic shift toward embedding domain logic directly into the data platform. Semantic Views provide a standardized layer for business logic, enabling consistent metrics, definitions, and calculations across tools. This is especially critical when powering AI models that rely on aligned semantics for trustworthy insights.


Cortex Knowledge Extensions allow teams to inject metadata, rules, and domain-specific guidance into their LLMs and copilots, improving accuracy and reducing hallucinations. For data teams building AI-native pipelines, this means more context-aware signal processing, less noise in anomaly detection, and alerts that reflect business impact.

6. Accelerating AI and DataOps without compromising trust


Snowflake doubled down on operationalizing AI across the enterprise with product updates aimed at trust, speed, and precision. Cortex AI SQL brings LLM capabilities to familiar SQL workflows, allowing users to build natural language-driven queries while maintaining governance. Paired with Snowflake Intelligence and Document AI, these tools reflect a growing push toward embedded agents and copilots that enhance productivity without compromising oversight.
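
As one simplified example of the pattern, an LLM call can sit directly inside a SQL query via Snowflake Cortex. The sketch below uses the existing SNOWFLAKE.CORTEX.COMPLETE function against a hypothetical support_tickets table; the newer AISQL functions previewed at the Summit follow the same idea. Connection details are placeholders.

```python
# Simplified sketch: SNOWFLAKE.CORTEX.COMPLETE is an existing Cortex function;
# the support_tickets table and connection details are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="YOUR_ACCOUNT", user="YOUR_USER", password="YOUR_PASSWORD",
    database="DEMO", schema="PUBLIC", warehouse="COMPUTE_WH",
)

sql = """
SELECT
    ticket_id,
    SNOWFLAKE.CORTEX.COMPLETE(
        'mistral-large2',
        'Classify the sentiment of this support ticket as positive, neutral, or negative: ' || body
    ) AS sentiment
FROM support_tickets
LIMIT 10
"""

for ticket_id, sentiment in conn.cursor().execute(sql):
    print(ticket_id, sentiment)
```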

These updates underscore a broader trend: enabling faster AI development cycles while preserving the reliability, auditability, and explainability of decisions made downstream. For data teams, this means aligning DataOps with MLOps and building safeguards that scale with velocity.

Final Thoughts: What This Means for Data Teams

Snowflake Summit 2025 went far beyond feature releases, reflecting a deeper shift in how enterprise data architectures and governance strategies are designed:

  • Open formats like Apache Iceberg and Delta Lake are not just supported; they’re foundational to modern, flexible architectures.
  • Ingestion at scale is now coupled with expectations of real-time validation and trust at the entry point.
  • Governance is moving from static policies to intelligent automation and embedded lineage.
  • AI precision demands semantic alignment and metadata context from the start.

From Horizon Catalog to Cortex AI SQL and Openflow, Snowflake is designing for a world where AI-powered insights must be fast and dependable. For data teams, this means architecting systems where reliability, explainability, and agility are not trade-offs but baseline requirements.

As Snowflake doubles down on support for open formats and distributed pipelines, AI-powered data observability tools like Telmai ensure that your data quality scales with your architecture. Whether you’re onboarding Iceberg tables, streaming data through Openflow, or aligning KPIs via semantic layers, Telmai integrates natively into your existing data architecture to proactively monitor your data for inconsistencies and validate every record before it impacts AI and analytics outcomes.

Are you looking to make your Snowflake pipelines AI-ready? Click here to talk to our team of experts to learn how Telmai can accelerate access to trusted and reliable data.

Passionate about data quality? Get expert insights and guides delivered straight to your inbox – click here to subscribe to our newsletter now.