
D3.1 Energy efficient AI-based toolset for improving data quality. First version

SEDIMARK · December 19, 2023

This document is the first deliverable from WP3, providing a first draft of the SEDIMARK data quality pipeline. It details how the pipeline aims to improve the quality of datasets shared through the marketplace while also addressing energy efficiency in the data value chain. As the first version of the deliverable, it presents the initial ideas and initial implementations of the respective tools and techniques. An updated version with the final data quality pipeline will be delivered in M34 (July 2025). The document should therefore be considered a “live” document that will be continuously updated and improved as the technical development of the data quality tools evolves.

The main goal of this document is to discuss how data quality is seen in SEDIMARK, which metrics are defined to assess the quality of data generated by data providers, and which techniques will be provided to help providers improve the quality of their data before sharing it on the data marketplace. This will help data providers both to optimise the decision-making systems built on the Machine Learning (ML) models they train using their datasets, and to increase their revenues by selling datasets of higher quality and thus higher value. Regarding the first point, it is well documented that low-quality data has a significant impact on business, with reports estimating a yearly cost of around 3 trillion USD and showing that knowledge workers waste 50% of their time searching for and correcting dirty data [1]. It is evident that data providers will benefit greatly from automated tools that help them improve their data quality, either without any human involvement or with minimal human intervention and configuration.

The document presents high-level descriptions of the concepts and tools developed for the data quality pipeline and of the energy efficiency methods for reducing its environmental cost, as well as concrete technical details about the implementation of those tools. It is therefore both a high-level and a technical document, targeting a wide audience. Primarily, it targets the SEDIMARK consortium, discussing the technical implementations and the initial ideas behind them, so that the remaining technical tasks can draw on them when integrating all the components into a single SEDIMARK platform. Beyond that, the document also targets the scientific and research community, since it presents new ideas about data quality and shows how the developed tools can help researchers and scientists improve the quality of the data they use in their research or applications. Similarly, the industrial community can leverage the project tools to improve the quality of their datasets, and can assess how to exploit the energy efficiency results to reduce the energy consumption of their data processing pipelines. Moreover, EU initiatives and other research projects are encouraged to consider the contents of the deliverable in order to derive common concepts for data quality and for reducing energy consumption in data pipelines.

This document, along with all the public deliverables and documents produced by SEDIMARK, can be found in the Publications & Resources section.
