Although often considered an obstacle by companies, data quality occupies an important place in strategic thinking and is today the subject of numerous projects and initiatives. Implementing a data quality strategy is becoming essential in business, yet it is still far from being integrated into everyday practice. To transform data effectively and become a data-driven organization, companies need a good understanding of this process.
Goal: data quality
The first step towards higher data quality is business collaboration, i.e. managing data quality as a team, combining business users’ understanding of goals with IT teams’ expertise in data use, governance and control.
From this point of view, the level of integration and communication of the Google Suite or Office 365 model, built on a large number of connectors and easy for business users to adopt, represents the goal to aim for.
Self-service solutions have made it possible to standardize and industrialize data quality and to significantly improve collaboration within the company. Solutions such as data management or data preparation put users in control of the data they need, enforce the necessary rules and ensure data availability, while IT teams handle governance and data access. However, some data tools are difficult to use, prompting users to fall back on familiar tools such as the Office suite. Silos then appear, fed by this lack of data understanding and by the absence of initiatives to build greater “data literacy”.
Data changes with context
Bringing together business users and the employees in charge of data processing is not, in reality, so easy. On one side, data specialists speak “the language of data”; on the other, business employees do not master its subtleties and confine themselves to processes that are much simpler to understand. This gap can be addressed by establishing a data culture, so that data is treated as well-defined information: while tools occupy a fundamental place in data quality projects, it is just as essential to ensure that all employees share the same understanding of the information.
One important thing to know: data quality varies by context. The condition and quality of data are measured by several factors, such as reliability and accuracy, but this is rarely done in-house. Take completeness: a piece of information may or may not exist, but is a missing value necessarily a problem? Consider the following situation: the customer database contains “opt-in” and “opt-out” fields. If the customer has opted in, his mobile number, for example, should be available; if he has opted out, no personal information about him may be held. In the latter case, it is precisely the absence of the data that makes the record “valid” in the eyes of data protection regulations. Context therefore determines how completeness should be interpreted.
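The opt-in/opt-out situation above can be sketched as a context-aware completeness rule. This is a minimal illustration only: the field names (`consent`, `mobile_number`) and the messages are hypothetical, not taken from any real schema or tool.

```python
def completeness_issues(record: dict) -> list[str]:
    """Return completeness problems for a customer record.

    A missing mobile number is only a defect when the customer has
    opted in; for an opted-out customer, the *absence* of personal
    data is exactly what makes the record compliant.
    """
    issues = []
    consent = record.get("consent")  # expected: "opt-in" or "opt-out"
    if consent == "opt-in":
        if not record.get("mobile_number"):
            issues.append("opt-in customer is missing a mobile number")
    elif consent == "opt-out":
        if record.get("mobile_number"):
            issues.append("opt-out customer should not carry personal data")
    else:
        issues.append("unknown consent status")
    return issues

# Same missing field, two different verdicts depending on context:
flagged = completeness_issues({"consent": "opt-in"})   # incomplete
valid = completeness_issues({"consent": "opt-out"})    # compliant as-is
```

The point is that the check cannot be written as a blanket “field must not be null” rule; the verdict flips with the consent context.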
Some tools and technologies are fundamental for contextualizing data. Metadata, used through data inventory and data cataloging tools, allows users to find data, to know that it exists and to understand it; the richer the metadata, the better the data will be understood. These are not the only tools that help put data in context: rules repositories, data preparation and data management technologies make it possible to apply rules and turn “raw” data into contextualized information for the business user.
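The idea of a rules repository turning raw values into contextualized information can be sketched as follows. Everything here is illustrative: the rule names, fields and mappings are assumptions, not the API of any actual data preparation product.

```python
# Hypothetical rules repository: one transformation per field name.
RULES = {
    # Expand cryptic export codes into business-readable labels.
    "country": lambda v: {"FR": "France", "DE": "Germany"}.get(v, v),
    # Raw CSV exports often store numbers as text; cast them once here.
    "revenue": lambda v: float(v),
}

def contextualize(raw: dict) -> dict:
    """Apply every applicable rule from the repository to a raw record.

    Fields without a registered rule pass through unchanged.
    """
    return {k: RULES.get(k, lambda v: v)(v) for k, v in raw.items()}

row = contextualize({"country": "FR", "revenue": "1250.5", "name": "ACME"})
```

Because the rules live in one shared repository rather than in each user's spreadsheet, business users and IT teams apply the same definitions, which is the collaboration the article argues for.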
In other words, it is essential to understand the data first in order to be able to treat it as a business asset in its own right.
(Published op-ed columns are the responsibility of their authors and do not commit CB News.)