What’s new at Telmai in 2025: Key product feature updates so far

Reliable data is the foundation for every modern enterprise, from powering AI models to ensuring trusted reporting and customer experiences. But as architectures evolve toward distributed open lakehouses and hybrid, multi-cloud data environments, ensuring data quality at the source becomes more critical than ever. That’s why, in the first half of 2025, Telmai continued to double down on enabling observability where the data lives—in the lake itself. From native Iceberg support that eliminates warehouse dependencies to enhanced rule logic and smarter alerting, our latest updates are built to help data teams monitor, validate, and remediate data issues earlier in the pipeline.

The result: greater trust in data-driven initiatives through automated resolution, allowing data reliability to scale alongside your growing ecosystem without adding operational overhead.

Making data quality simple for business and engineering teams

Enhanced DQ Rule Engine to scale and unify validation across your stack

As data pipelines become increasingly distributed and complex, centrally managing the various data quality rules and validation workflows is no longer optional—it’s essential. Telmai’s enhanced rule engine offers a unified interface for creating, editing, and deploying validation rules across all systems, eliminating fragmented and siloed operations. With reusable templates and JSON-based rule definitions, teams can standardize validations, ensure version control, and integrate further into CI/CD workflows. Decoupled from warehouse compute, Telmai executes validations in its own engine, delivering performance and scalability without additional cost or operational overhead.
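
To make that concrete, here is a hedged sketch of what a version-controlled, JSON-based rule definition and a CI/CD deployment step might look like. The field names, endpoint, and token are illustrative placeholders, not Telmai’s published schema or API:

```python
import json
import requests  # assumed HTTP client for the hypothetical REST endpoint below

# Hypothetical JSON rule definition; the exact Telmai schema may differ.
rule = {
    "name": "orders_amount_non_negative",
    "dataset": "sales.orders",
    "expression": "amount >= 0",
    "severity": "high",
    "version": "1.2.0",  # versioned so the rule can live in Git with pipeline code
}

# Keep the definition in version control...
with open("rules/orders_amount_non_negative.json", "w") as f:
    json.dump(rule, f, indent=2)

# ...then push it from a CI/CD job. Endpoint and auth are placeholders.
resp = requests.post(
    "https://telmai.example.com/v1/rules",
    headers={"Authorization": "Bearer <API_TOKEN>"},
    json=rule,
)
resp.raise_for_status()
```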

Custom free-form SQL metrics and advanced rule logic

Telmai now enables users to define custom metrics using SQL and implement complex validation logic that reflects their unique data domain, all through an intuitive interface. Whether it’s tracking nuanced business KPIs or applying layered rule conditions, teams can build data quality rules that align with real-world expectations, without engineering overhead. This empowers users across technical levels to define what “data quality” means for their organization and catch edge-case anomalies that generic rules often miss.
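
For example, a team might express a domain-specific check as free-form SQL. The query and threshold below are an illustrative sketch; the table, columns, and registration format are hypothetical, not Telmai’s actual schema:

```python
# A custom metric expressed as free-form SQL; names are illustrative.
metric_sql = """
SELECT SUM(CASE WHEN discount > price THEN 1 ELSE 0 END) * 100.0 / COUNT(*)
    AS pct_invalid_discounts
FROM sales.orders
WHERE order_date >= CURRENT_DATE - INTERVAL '1' DAY
"""

metric = {
    "name": "pct_invalid_discounts",
    "sql": metric_sql.strip(),
    # Alert when more than 0.5% of yesterday's orders carry a discount
    # larger than the item price -- an edge case generic rules often miss.
    "threshold": {"max": 0.5},
}
```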

Monitor data quality trends with custom metric dashboards

Telmai’s Metric Inspector equips teams with interactive, time-series dashboards that reveal how key data quality metrics behave over time. Users can drill into indicators such as null rates, freshness lag, or custom-defined KPIs and correlate them with anomaly triggers. This level of transparency helps data teams identify patterns, validate rule thresholds, and continuously refine their data quality strategy.

Observability at the Source for Open Table Formats

As AI applications and advanced analytics become table stakes for modern enterprises, organizations are increasingly adopting open lakehouse architectures powered by table formats like Apache Iceberg, Delta Lake, and Hudi. While these formats offer the flexibility of object storage with the governance and performance of traditional warehouses, they lack native mechanisms for ensuring data quality and reliability. 

Without observability at the source, organizations are forced to validate data downstream, driving up cloud costs, delaying issue detection, and undermining the reliability of critical analytics and AI initiatives. Telmai solves this by delivering native, source-level observability for Apache Iceberg on GCP and GCS, with Delta and Hudi support on the roadmap.

Through partition-level profiling and metadata pushdown, Telmai enables full-fidelity validation without scanning entire datasets or triggering warehouse compute. This empowers teams to detect schema drift, freshness issues, and value anomalies early in the pipeline, ensuring AI models and analytical workloads operate on trusted, timely data, at scale and without architectural compromise.
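
The underlying idea can be illustrated with the open-source PyIceberg library (a sketch of the general technique, not Telmai’s internal engine): Iceberg maintains per-partition statistics in its metadata layer, so row counts, file counts, and last-update times can be read without touching the data files. The catalog endpoint and table name below are placeholders:

```python
from pyiceberg.catalog import load_catalog

# Placeholder catalog configuration; point this at your own Iceberg catalog.
catalog = load_catalog(
    "default",
    **{
        "type": "rest",
        "uri": "https://catalog.example.com",
        "warehouse": "gs://example-bucket/warehouse",
    },
)
table = catalog.load_table("analytics.events")

# Partition-level stats come straight from Iceberg metadata:
# no warehouse compute, no full table scan.
for row in table.inspect.partitions().to_pylist():
    print(row["partition"], row["record_count"], row["file_count"])
```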

Operational Efficiency Through Smarter Workflows and UX

Streamlined interface for defining and managing DQ policies

Telmai’s updated policy management UI makes it easier for users to define, configure, and review data quality policies at scale. The refreshed layout improves visibility into rule thresholds, affected datasets, and policy logic—reducing the time it takes to author or adjust checks. Designed to support both technical and operational users, this interface lowers the barrier to entry and enables more teams to take ownership of data quality without relying on engineering.

Smarter alerting and workflows for faster resolution

As data ecosystems scale, managing noise becomes just as important as detecting issues. Telmai now includes enhanced alert routing logic that ensures notifications are delivered based on policy type, team ownership, and relevance—so the right people are alerted at the right time. By aligning alerts with organizational context, teams can reduce alert fatigue, prioritize what matters, and accelerate resolution. This makes it easier to operationalize data quality across distributed teams and complex pipelines.
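
The gist of context-aware routing can be sketched in a few lines. Everything here, including the field names, channels, and routing table, is hypothetical, not Telmai’s actual configuration model:

```python
# Hypothetical mapping from (policy type, owning team) to a destination.
ROUTES = {
    ("freshness", "marketing"): "#marketing-data-alerts",
    ("schema_drift", "platform"): "#data-platform-oncall",
}
DEFAULT_CHANNEL = "#data-quality"

def route_alert(alert: dict) -> str:
    """Pick a notification channel from the alert's policy type and owner."""
    key = (alert.get("policy_type"), alert.get("owner_team"))
    return ROUTES.get(key, DEFAULT_CHANNEL)

print(route_alert({"policy_type": "freshness", "owner_team": "marketing"}))
# -> #marketing-data-alerts
```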

Simplified connection and asset management for faster onboarding

To streamline onboarding and reduce repetitive setup, Telmai has introduced a modular approach to managing connections and assets. Previously, connections were tied directly to asset creation, requiring full configuration for each new asset.

Now, source connections are created once through a centralized interface and can be reused across multiple assets. This separation simplifies setup, reduces duplication, and gives teams better control over how data sources are organized and maintained—leading to faster deployment and easier scaling across environments.
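
Conceptually, the change looks like this (a minimal sketch with hypothetical field names, not Telmai’s actual configuration format): the connection is declared once, and each asset simply references it by name:

```python
# One shared connection definition, configured centrally.
connections = {
    "prod-snowflake": {
        "type": "snowflake",
        "credentials_secret": "vault://secrets/prod-snowflake",  # placeholder
    }
}

# Assets reference the connection by name instead of re-declaring
# credentials and endpoints for every table that gets onboarded.
assets = [
    {"connection": "prod-snowflake", "table": "sales.orders"},
    {"connection": "prod-snowflake", "table": "analytics.events"},
]
```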

Closing Thoughts

As modern data ecosystems scale to support AI, open table formats, and real-time analytics, ensuring data quality at the source has become a strategic priority. Our latest updates reflect Telmai’s continued commitment to enabling proactive, high-fidelity, AI-augmented data quality monitoring, tailored to the needs of both business and data teams. We’re excited for you to explore these new capabilities and welcome your feedback as we continue to innovate.

Want to learn how Telmai can accelerate your AI initiatives with reliable and trusted data? Click here to connect with our team for a personalized demo.

Want to stay ahead on best practices and product insights? Click here to subscribe to our newsletter for expert guidance on building reliable, AI-ready data pipelines.

Why is Model Context Protocol a game-changer for Enterprise AI?

The artificial intelligence landscape is fundamentally shifting how enterprises deploy, manage, and govern AI systems. While enthusiasm around large language models and autonomous agents is high, organizations continue to face a persistent challenge in operationalizing AI: enabling models to access high-quality, real-time enterprise data without relying on brittle, hard-coded integrations. Whether it’s a chatbot referencing outdated policies or a model hallucinating due to missing business context, the disconnect between AI and enterprise systems continues to limit trust and effectiveness.

Driving this transformation forward is a new open standard known as the Model Context Protocol (MCP), first launched by Anthropic in November 2024. Just as APIs standardized how applications communicate, MCP is quickly becoming the common protocol for enabling AI to access external tools, structured data, and dynamic enterprise context in a secure and scalable manner.

What exactly is MCP, why is it generating such unprecedented buzz in the AI community, and how does it strengthen enterprise observability and governance by making model behavior more transparent, auditable, and grounded in reliable data context?

What is Model Context Protocol (MCP)?

The Model Context Protocol (MCP) is an open, model-agnostic standard designed to help AI systems securely and consistently retrieve structured context from external tools and data sources. Think of it as a shared interface that allows AI agents and enterprise systems to communicate without the need for custom integrations or code.

Instead of building brittle, hardcoded connectors for every new tool or service, developers can implement MCP once and reuse it across environments. This design solves the classic M×N integration problem: rather than M applications each needing bespoke connectors to N tools (M×N integrations), each side implements the protocol once, reducing the work to roughly M+N.
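
As a concrete illustration, here is a minimal MCP server sketch using the official open-source Python SDK (the mcp package). The customer-lookup tool and its stubbed response are hypothetical:

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("crm-demo")

@mcp.tool()
def lookup_customer(customer_id: str) -> dict:
    """Return basic details for a customer record."""
    # A real server would query the CRM here; this is a stub.
    return {"id": customer_id, "name": "Ada Lovelace", "tier": "enterprise"}

if __name__ == "__main__":
    mcp.run()  # serves over STDIO by default
```

Once this server exists, any MCP-capable client, whether Claude Desktop, an IDE assistant, or a custom agent, can discover and call lookup_customer without bespoke integration code.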

Major AI players like OpenAI, Microsoft, and Google have already committed to supporting MCP, and the open-source ecosystem around it is growing rapidly.

How does Model Context Protocol (MCP) work?

Model Context Protocol Architecture

MCP uses a client-server architecture built on JSON-RPC 2.0, allowing any AI client to communicate with any MCP-enabled service. The AI system (client) connects to MCP servers, each of which wraps around a backend system (CRM, file store, database, etc.) and exposes its capabilities in a structured, machine-readable schema. Whether you’re calling a cloud-based CRM or reading a local CSV file, the interface remains consistent.

This architecture consists of four primary components that work together to create a unified AI ecosystem:

  • Host Applications: These are AI-powered applications like Claude Desktop, AI-enhanced IDEs, or custom enterprise AI agents that users interact with directly
  • MCP Clients: Integrated within host applications, these components manage connections to MCP servers and maintain secure, one-to-one relationships with each server
  • MCP Servers: Lightweight wrappers around systems like GitHub, PostgreSQL, or internal APIs. These expose structured functionality to the AI using the MCP schema
  • Transport Layer: Supports both local (STDIO) and remote (HTTP + Server-Sent Events) communication, allowing MCP to run across cloud and on-prem environments with minimal overhead
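
On the wire, these components exchange plain JSON-RPC 2.0 messages. The snippet below shows the shape of a standard tools/call request; the tool name and arguments are illustrative:

```python
import json

request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",            # standard MCP method name
    "params": {
        "name": "lookup_customer",     # a tool exposed by some MCP server
        "arguments": {"customer_id": "42"},
    },
}
print(json.dumps(request, indent=2))
```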

Using this architecture, MCP enables AI systems to perform three core operations:

  • Use tools: Trigger functions or workflows that perform actions with side effects (e.g., look up customer data, run a SQL query)
  • Fetch resources: Retrieve read-only context such as documents, database entries, or configuration files, which provide information without significant computation or side effects, similar to GET endpoints in REST APIs
  • Invoke prompts: Execute pre-defined prompt templates that guide multi-step interactions
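
A client-side sketch of all three operations, using the official MCP Python SDK, might look like the following; the server command, tool, resource URI, and prompt name are examples, not fixed parts of the protocol:

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main():
    # Launch a local MCP server over STDIO (the command is an example).
    server = StdioServerParameters(command="python", args=["crm_server.py"])
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # 1. Use a tool.
            result = await session.call_tool("lookup_customer", {"customer_id": "42"})
            # 2. Fetch a resource.
            doc = await session.read_resource("file:///policies/returns.md")
            # 3. Invoke a prompt template.
            prompt = await session.get_prompt("summarize_account", {"customer_id": "42"})
            print(result, doc, prompt)

asyncio.run(main())
```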

How MCP boosts AI capabilities and performance

Before MCP, most LLMs operated in silos, unable to access external tools or live business data without one-off manual integrations. This limited their usefulness in enterprise environments. MCP changes that by offering a consistent protocol for contextual retrieval and action execution. The performance improvements are measurable across multiple dimensions:

Enhanced Context Awareness: AI models can query up-to-date business data and respond with grounded, relevant insights, reducing hallucinations and stale information.

Dynamic Tool Discovery: AI systems can discover available tools at runtime and adapt to new workflows without hardcoding, accelerating use-case development.

Reduced Integration Complexity: MCP encourages modularity. With one standard protocol, engineering teams no longer need to maintain multiple connectors, cutting integration time from weeks to hours. Existing MCP servers can be reused across applications, minimizing duplication and improving maintainability.

Together, these capabilities unlock scalable, connected, and more reliable AI systems that can operate securely across diverse enterprise environments.

Reliable AI starts with MCP paired with data observability

Giving AI access to real-time data is powerful but risky without guardrails. AI systems are only as reliable as the data they consume, and without proper monitoring, they may access broken pipelines, outdated schemas, or inconsistent information. Here’s where data observability plays a critical role.

Pairing MCP with a data observability platform like Telmai ensures:

  • Trusted context: Data observability platforms continuously monitor data quality metrics, ensuring that AI models receive accurate and reliable information
  • Real-time monitoring: All data interactions via MCP can be proactively monitored to detect anomalies and issues in data pipelines before they impact AI outputs
  • Policy enforcement: Telmai detects when sensitive data, such as PII, is exposed to AI models via MCP, or when data values accessed in real time drift in ways that violate business rules—enabling proactive safeguards and responsible AI behavior
  • Faster root cause analysis: Telmai’s data quality binning and circuit breaker features automatically isolate issues and trigger orchestration-level interventions—preventing pipeline failures without requiring heavy engineering effort

MCP simplifies connectivity between AI and enterprise systems. Data Observability ensures those connections are reliable, secure, and explainable, a critical foundation for scaling trustworthy AI.

Conclusion: Data trust is the real AI multiplier

As AI becomes core to enterprise workflows, success will depend not just on access to real-time context but on ensuring that context is reliable, relevant, and governed.

MCP delivers a powerful, standardized access layer for AI, but access without validation can introduce risk. That’s where data observability comes in: it acts as the guardrail layer that continuously monitors and validates that the data accessed by models is accurate, timely, and policy-compliant.

By pairing MCP with data observability tools like Telmai:

  • You ensure AI models are grounded in accurate, up-to-date, and compliant data.
  • You gain end-to-end visibility across your data pipelines — from source to model input.
  • You automate detection and intervention using advanced techniques like data quality binning and circuit breakers.

Together, MCP and observability form the foundation for scalable, secure, and trustworthy AI.

Want to learn how Telmai can accelerate your AI initiatives with reliable and trusted data? Click here to connect with our team for a personalized demo.

Passionate about data quality? Get expert insights and guides delivered straight to your inbox – click here to subscribe

Snowflake Summit 2025 Recap: What It Means for Data Reliability and AI Readiness

Introduction: Data quality is no longer an afterthought

“There is no AI strategy without a data strategy.” This statement from Snowflake CEO Sridhar Ramaswamy wasn’t just a soundbite; it was the central theme of Snowflake Summit 2025. From Cortex AI SQL to Openflow and the Postgres acquisition, one principle became clear: the future of AI and enterprise applications is grounded in the quality, reliability, and observability of data.

In this article, we look at some key product announcements that stood out from the Snowflake Summit 2025.

1. Easy, Connected, Trusted: Snowflake’s AI Data Cloud in three words

Snowflake Co-founder and Head of Product Benoit Dageville opened the Summit by outlining how AI is becoming embedded across all domains and functions within the enterprise. He emphasized Snowflake’s transformation into a unified platform for intelligent data operations and distilled the company’s AI vision into three foundational principles: easy, connected, and trusted.

  • Easy: AI development should be frictionless. A unified data platform must reduce complexity so teams can build and deploy faster.
  • Connected: AI systems can’t operate in silos. Data and applications must move freely across organizational boundaries.
  • Trusted: Governance isn’t an afterthought. Trust must be built into the platform through end-to-end visibility, control, and accountability.

This framework wasn’t just theoretical; it laid the groundwork for many of the product announcements that followed.

2. Open table formats are now first-class citizens

One of the clearest trends from the Summit was Snowflake’s deeper alignment with the open data ecosystem, especially Apache Iceberg. With support for native Iceberg tables and federated catalog access, Snowflake positioned itself as a format-agnostic, interoperable layer, regardless of whether your architecture follows a lakehouse, data mesh, traditional warehouse, or hybrid model.

This move underscores the growing need to unlock data access and analysis across open and managed environments, enabling teams to build, scale, and share advanced insights and AI-powered applications faster. Snowflake’s commitment to open interoperability was also reflected in its expanded contributions to the open-source ecosystem, including Apache Iceberg, Apache NiFi, Modin, Streamlit, and the incubation of Apache Polaris.

“We want to enable you to choose a data architecture that evolves with your business needs,” said Christian Kleinerman, EVP of Product at Snowflake.

To support this vision in practice, Snowflake also announced deeper interoperability with external Iceberg catalogs such as AWS Glue and Hive Metastore, allowing teams to query data where it lives without moving it.

Enhanced compatibility with Unity Catalog further reflects a broader trend: governance and lineage must now extend across formats, clouds, and tooling ecosystems, not just within a single vendor stack. These updates position Snowflake not only as a data platform but as a flexible control plane for AI-ready architectures—one where open data, external catalogs, and trusted analytics can operate in sync.

3. Eliminating silos with Openflow’s unified and autonomous ingestion

Snowflake Openflow marks a significant step in simplifying data ingestion across structured and unstructured sources. Built on Apache NiFi, it offers an open, extensible interface that supports batch, stream, and multimodal pipelines within a unified framework.

Users can choose between a Snowflake-managed deployment or a bring-your-own-cloud setup, offering flexibility for hybrid and decentralized teams. Crucially, Openflow applies the same engineering and governance standards to unstructured data pipelines as it does to structured ones, enabling teams to build reliable data products regardless of source format.

During the keynote, EVP of Product Christian Kleinerman also previewed Snowpipe Streaming, a high-throughput ingestion engine (up to 10 GB/s) with multi-client support and data that is queryable the moment it lands.

Together, these advancements aim to eliminate siloed ingestion workflows and reduce operational friction without compromising reliability at the point of ingestion.

4. Metadata governance for the AI era: Horizon Catalog and Copilots

Snowflake unveiled Horizon Catalog, a federated catalog designed to unify metadata across diverse sources, including Iceberg tables, dbt models, and BI tools like Power BI. This consolidated view provides both lineage and context across structured and semi-structured datasets, which is critical for organizations embracing decentralized data ownership models or a data mesh architecture.

In addition, the new Horizon Copilot brings natural language search, usage analysis, and lineage insights to the forefront, making it easier for teams to discover, understand, and validate data across their stack.

As enterprises shift to more decentralized models of data ownership, this level of federated visibility and governance becomes essential to ensuring reliability at scale, especially when data flows across pipelines, clouds, and tools.

5. Semantic views and context-aware AI signals

Snowflake’s introduction of Semantic Views and Cortex Knowledge Extensions marks a strategic shift toward embedding domain logic directly into the data platform. Semantic Views provide a standardized layer for business logic, enabling consistent metrics, definitions, and calculations across tools. This is especially critical when powering AI models that rely on aligned semantics for trustworthy insights.

Cortex Knowledge Extensions allow teams to inject metadata, rules, and domain-specific guidance into their LLMs and copilots, improving accuracy and reducing hallucinations. For data teams building AI-native pipelines, this means more context-aware signal processing, less noise in anomaly detection, and alerts that reflect business impact.

6. Accelerating AI and DataOps without compromising trust

Snowflake doubled down on operationalizing AI across the enterprise with product updates aimed at trust, speed, and precision. Cortex AI SQL brings LLM capabilities to familiar SQL workflows, allowing users to build natural language-driven queries while maintaining governance. Paired with Snowflake Intelligence and Document AI, these tools reflect a growing push toward embedded agents and copilots that enhance productivity without compromising oversight.
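
To give a flavor of LLM-in-SQL workflows, the sketch below calls the pre-existing SNOWFLAKE.CORTEX.COMPLETE function from Python (Cortex AISQL builds on this pattern with dedicated AI functions); the connection parameters, table, and column names are placeholders:

```python
import snowflake.connector

# Placeholder credentials; use your own account configuration.
conn = snowflake.connector.connect(
    account="<ACCOUNT>", user="<USER>", password="<PASSWORD>",
    warehouse="ANALYTICS_WH", database="SALES", schema="PUBLIC",
)
cur = conn.cursor()
cur.execute("""
    SELECT
        ticket_id,
        SNOWFLAKE.CORTEX.COMPLETE(
            'mistral-large',
            'Classify the sentiment of this support ticket as positive, '
            || 'neutral, or negative: ' || ticket_text
        ) AS sentiment
    FROM support_tickets
    LIMIT 10
""")
for row in cur.fetchall():
    print(row)
```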

These updates underscore a broader trend: enabling faster AI development cycles while preserving the reliability, auditability, and explainability of decisions made downstream. For data teams, this means aligning DataOps with MLOps and building safeguards that scale with velocity.

Final Thoughts: What This Means for Data Teams

Snowflake Summit 2025 went far beyond feature releases, reflecting a deeper shift in how enterprise data architectures and governance strategies are designed:

  • Open formats like Apache Iceberg and Delta Lake are not just supported; they’re foundational to modern, flexible architectures.
  • Ingestion at scale is now coupled with expectations of real-time validation and trust at the entry point.
  • Governance is moving from static policies to intelligent automation and embedded lineage.
  • AI precision demands semantic alignment and metadata context from the start.

From Horizon Catalog to Cortex AI SQL and Openflow, Snowflake is designing for a world where AI-powered insights must be both fast and dependable. For data teams, this means architecting systems where reliability, explainability, and agility are not trade-offs but baseline requirements.

As Snowflake doubles down on support for open formats and distributed pipelines, AI-powered data observability tools like Telmai ensure that your data quality scales with your architecture. Whether you’re onboarding Iceberg tables, streaming data through Openflow, or aligning KPIs via semantic layers, Telmai integrates natively into your existing data architecture to proactively monitor your data for inconsistencies and validate every record before it impacts AI and analytics outcomes.

Are you looking to make your Snowflake pipelines AI-ready? Click here to talk to our team of experts to learn how Telmai can accelerate access to trusted and reliable data.

Passionate about data quality? Get expert insights and guides delivered straight to your inbox – click here to subscribe to our newsletter now.