Who's responsible for the Data and Data quality?

Harsha Bipin and Susan Austin

In pursuit of identifying the team that is accountable for data quality, we interviewed people across multiple companies, asking how their organizations are structured, who knows the data best, who is most impacted by it, and how the value of data can be harnessed to support the business. Summarized in one sentence, our study found that data quality ownership cannot be siloed; it needs to be democratized through a collaborative data culture. Adopting a culture around data creates a solid foundation and enforces a paradigm shift in which core business decisions are led by data.

With a data culture, people, tools, and processes are structured around the sourcing, access, quality, and consumption needs of the business and its operations, bringing together business owners and data engineers. Extending core data ownership roles to business units allows for more domain-informed decision making, with the engineering team supporting the larger vision.

This segues naturally into how such roles can coexist to form a solid data team.

The last few years have seen an explosion of frameworks and architectural suggestions for structuring ingestion, consumption, storage, and analytics, all striving to support informed, quick, data-driven decisions that generate business value. Depending on an organization's needs, size, and use cases, a few different team structures and layouts can be implemented. An interesting, in-depth read on this topic is a blog written by Zhamak Dehghani, where she breaks down models that data-driven organizations could consider to create a solid foundation for all data management.

Domain-Agnostic Data Ownership Model

More traditional data architectures follow a linear, centralized, domain-agnostic data ownership approach: a central data team manages the ingestion of data from various sources, processes the data, and then provides access to different consumers such as data scientists, analytics, and BI teams.

This may prove to be a good solution for smaller organizations with simpler domains and consumption use cases, where a central team is able to serve a handful of distinct domains using data warehouse and data lake architectures. The catch is centralized data management, which can become a hurdle for a growing and expansive data vision: the central data team may be overwhelmed serving many different needs, fighting fires to produce correct, timely data for business-critical decisions.

Domain-Oriented Data Ownership Model

For a more complex infrastructure, with many different sources of data and equally diverse sets of consumers with different needs and use cases, the centralized ownership model fails to keep up. Data Mesh is a strong contender here, creating a design shift toward domain-oriented ownership in how data is managed organizationally.

While data pipelines, ingestion, and storage can still be maintained by a centralized, self-serve data infrastructure platform team, letting product domains own their data locally allows for a more independent and targeted data lifecycle: each domain dictates its quality measures, metadata structures, and consumer needs, and defines business KPIs for its product data. This reduces both the turnaround time in responding to customers and the load on the otherwise centralized data team.

Such teams would continue to include product data owners as well as data engineers who support the needs of the product, bringing together a mix of skills that fulfills the organization's wider data-driven vision.

Data Quality Owners

Business and data analysts are more accustomed than engineering teams to identifying issues with data in the context of its use case; however, as described above, both are an essential part of data ownership. While the operations teams have varied consumption- and domain-centered quality needs, the domain data teams can define their correctness Service Level Indicators (SLIs) on data. Here is a high-level breakdown of the needs of various operational units in a typical organization:


It's very evident from our conversations that data quality cannot be achieved by technology alone. It needs the right combination of process, culture, and tools, especially tools that will empower both technical and non-technical teams.


To summarize:

  1. Data quality needs to be democratized across business and IT, and that democratization can be achieved when organizations focus on building a collaborative data culture.
  2. More and more technologies need to focus on empowering entire data teams, both technical and non-technical, to improve data quality.


#dataObservability #dataquality #democratizeDataQuality #dataanalytics #dataengineer

Data profiling helps organizations understand their data, identify issues and discrepancies, and improve data quality. It is an essential part of any data-related project; without it, poor data quality could impact critical business decisions, customer trust, sales, and financial opportunities.

To get started, there are four main steps in building a complete and ongoing data profiling process:

  1. Data Collection
  2. Discovery & Analysis
  3. Documenting the Findings
  4. Data Quality Monitoring
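
Taken together, these four steps might look roughly like the sketch below, assuming a pandas-based stack; the file names, columns, and drift threshold are illustrative assumptions rather than prescriptions.

# A minimal, hypothetical sketch of the four profiling steps using pandas.
import json
import pandas as pd

# 1. Data Collection: pull a sample of the data to be profiled.
df = pd.read_csv("customers.csv")  # assumed input file

# 2. Discovery & Analysis: compute basic structure and content metrics per column.
profile = {
    "row_count": len(df),
    "columns": {
        col: {
            "dtype": str(df[col].dtype),
            "null_ratio": float(df[col].isna().mean()),
            "distinct_values": int(df[col].nunique()),
        }
        for col in df.columns
    },
}

# 3. Documenting the Findings: persist the profile so it can be shared and compared.
with open("profile_baseline.json", "w") as f:
    json.dump(profile, f, indent=2)

# 4. Data Quality Monitoring: re-profile later and flag drift from the baseline.
def drifted_columns(baseline, current, threshold=0.05):
    """Columns whose null ratio moved more than `threshold` since the baseline."""
    return [
        col
        for col, stats in current["columns"].items()
        if col in baseline["columns"]
        and abs(stats["null_ratio"] - baseline["columns"][col]["null_ratio"]) > threshold
    ]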

We'll explore each of these steps in detail and discuss how they contribute to the overall goal of ensuring accurate and reliable data. Before we get started, let's remind ourselves what data profiling is.

What are the different kinds of data profiling?

Data profiling falls into three major categories: structure discovery, content discovery, and relationship discovery. While they all help in gaining a better understanding of the data, the types of insights they provide are different:

 

Structure discovery checks whether data is consistent, correctly formatted, and well structured. For example, if you have a ‘Date’ field, structure discovery helps you see the various patterns of dates (e.g., YYYY-MM-DD or YYYY/DD/MM) so you can standardize your data into one format.
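
As a minimal illustration of this kind of pattern discovery, assuming pandas and a made-up sample of date strings, each value can be reduced to a generic character pattern and counted:

import pandas as pd

# Illustrative sample; in practice this would be a column from your own dataset.
dates = pd.Series(["2023-01-15", "2023/15/01", "2023-02-28", "15-01-2023"])

# Reduce each value to a generic pattern: digits become 9, letters become A.
patterns = (
    dates.str.replace(r"\d", "9", regex=True)
         .str.replace(r"[A-Za-z]", "A", regex=True)
)

# The frequency of each pattern shows which formats are present and how common they are.
print(patterns.value_counts())
# e.g. 9999-99-99    2
#      9999/99/99    1
#      99-99-9999    1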

 

Structure discovery also examines simple and basic statistics in the data, for example, minimum and maximum values, means, medians, and standard deviations.
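
A quick sketch of those summary statistics, again assuming pandas and a hypothetical numeric column:

import pandas as pd

# Hypothetical numeric column; replace with a column from your own data.
amounts = pd.Series([10.0, 12.5, 11.8, 250.0, 9.9])

# describe() reports count, mean, std, min, the quartiles (including the median), and max.
print(amounts.describe())

# Individual statistics are also available directly.
print(amounts.min(), amounts.max(), amounts.mean(), amounts.median(), amounts.std())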

 

Content discovery looks more closely into the individual attributes and data values to check for data quality issues. This can help you find null values, empty fields, duplicates, incomplete values, outliers, and anomalies.

 

For example, if you are profiling address information, content discovery helps you see whether your ‘State’ field contains the two-letter abbreviation or the fully spelled-out state names, both, or potentially some typos.
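
A minimal sketch of these content checks, assuming pandas and a made-up ‘State’ column:

import pandas as pd

# Hypothetical address data with a 'State' column.
df = pd.DataFrame({"State": ["CA", "California", "NY", None, "NY", "N.Y."]})

# Null or empty values and duplicates.
print("nulls:", df["State"].isna().sum())
print("duplicates:", df["State"].duplicated().sum())

# The value distribution reveals mixed representations: abbreviations, full names, typos.
print(df["State"].value_counts(dropna=False))

# Flag values that are not a two-letter uppercase abbreviation.
valid = df["State"].str.fullmatch(r"[A-Z]{2}", na=False)
print(df.loc[~valid, "State"])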

 

Content discovery can also be a way to validate databases with predefined rules. This process helps find ways to improve data quality by identifying instances where the data does not conform to predefined rules. For example, a transaction amount should never be less than $0.
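
Expressed as code, such a rule check might look like the following sketch; the table and column names are hypothetical:

import pandas as pd

# Hypothetical transactions; the rule is that an amount should never be less than $0.
transactions = pd.DataFrame({"id": [1, 2, 3], "amount": [25.00, -4.99, 310.50]})

# Validate the rule and surface any violations for investigation.
violations = transactions[transactions["amount"] < 0]
if not violations.empty:
    print(f"{len(violations)} transaction(s) violate the amount >= 0 rule:")
    print(violations)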

 

Relationship discovery identifies how different datasets are related to each other, for example, key relationships between database tables, or lookup cells in a spreadsheet. Understanding relationships is most critical when designing a new database schema, a data warehouse, or an ETL flow that requires joining tables and datasets based on those key relationships.
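
A minimal sketch of a relationship check, assuming pandas and two hypothetical tables linked by a customer_id key:

import pandas as pd

# Hypothetical parent and child tables; customer_id is the assumed key relationship.
customers = pd.DataFrame({"customer_id": [1, 2, 3]})
orders = pd.DataFrame({"order_id": [10, 11, 12], "customer_id": [1, 2, 9]})

# Orphaned orders reference a customer_id that does not exist in the customers table,
# which would break joins in a warehouse schema or an ETL flow.
orphans = orders[~orders["customer_id"].isin(customers["customer_id"])]
print(orphans)  # order_id 12 points at the missing customer_id 9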

| Data Observability | Data Quality |
| --- | --- |
| Leverages ML and statistical analysis to learn from the data and identify potential issues, and can also validate data against predefined rules | Uses predefined metrics from a known set of policies to understand the health of the data |
| Detects, investigates the root cause of issues, and helps remediate | Detects and helps remediate |
| Examples: continuous monitoring, alerting on anomalies or drifts, and operationalizing the findings into data flows | Examples: data validation, data cleansing, data standardization |
| Low-code / no-code to accelerate time to value and lower cost | Ongoing maintenance, tweaking, and testing data quality rules adds to its costs |
| Enables both business and technical teams to participate in data quality and monitoring initiatives | Designed mainly for technical teams who can implement ETL workflows or open source data validation software |
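
To make the contrast above concrete, here is a deliberately simplified sketch in Python: a fixed threshold stands in for a predefined data quality rule, while a z-score over historical values stands in for the kind of statistical baseline an observability tool might learn. Real products are far more sophisticated, and the numbers here are made up.

import pandas as pd

# Hypothetical daily row counts for a table load; the last value is today's load.
daily_row_counts = pd.Series([10_120, 9_980, 10_250, 10_040, 3_100])

# Data quality style: a predefined rule with a fixed threshold.
rule_violated = daily_row_counts.iloc[-1] < 5_000
print("rule violated:", rule_violated)

# Data observability style: learn what "normal" looks like from history and flag drift.
history, latest = daily_row_counts.iloc[:-1], daily_row_counts.iloc[-1]
z_score = (latest - history.mean()) / history.std()
print("anomaly:", abs(z_score) > 3)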
