Data has become a new currency in the data-driven economy. The EU has launched the EU Data Market monitoring tool, which continuously monitors the impact of the data economy in the member states. The tool identified that in 2021 there were more than 190,000 data supplier companies and more than 560,000 data user companies in the EU. Revenues of data companies stood at 84 billion euros in 2022, with forecasts of 114 billion in 2025 and 137 billion in 2030.
Data quality is a persistent issue in the ongoing digital transformation of European businesses. Some studies estimate that curating and cleaning data can take up as much as 80% of data professionals' time, which is time not spent on producing insights or actual products and services. The quality of the data used within a business is of utmost importance: according to reports, bad or "dirty" data costs the US 3 trillion USD per year. For instance, many business datasets contain a high number of anomalies, duplicates, errors and missing values, which degrade the value of the data for making business decisions. Unhandled bias in the data, such as when a dataset fails to properly account for minority labels (e.g. gender, age, origin, or any other label in the data), can result in machine learning models that are skewed towards unfair outcomes. Additionally, much of the data in business silos lacks the documentation and annotation that would allow professionals to properly leverage it in downstream decision-making tasks.
One of the main pillars of the SEDIMARK project is to promote data quality for the data that will be shared on the marketplace. SEDIMARK will build a complete data curation and quality improvement pipeline, provided to data providers so that they can assess the quality of their data and clean it. This pipeline will require minimal intervention from domain experts to produce good results, yet will be fully customisable so that experts can unlock maximum performance. This will be achieved by exploring state-of-the-art techniques in Auto-ML (automated machine learning) and Meta-ML, both of which can be applied to transform the data with minimal human supervision by learning from previous tasks.
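The concrete pipeline is still under design. As a minimal sketch of the kind of automated cleaning step it could chain together, assuming tabular sensor data in a pandas DataFrame (the column names here are hypothetical), duplicates can be dropped and missing values imputed without any manual tuning:

```python
import pandas as pd
from sklearn.impute import SimpleImputer

def autoclean(df: pd.DataFrame) -> pd.DataFrame:
    """Drop exact duplicate rows and impute missing numeric values."""
    df = df.drop_duplicates()
    numeric_cols = df.select_dtypes(include="number").columns
    # Median imputation is a robust default when no domain expert is in the loop.
    df[numeric_cols] = SimpleImputer(strategy="median").fit_transform(df[numeric_cols])
    return df

readings = pd.DataFrame({"temperature": [21.5, None, 21.5, 19.8],
                         "humidity": [40.0, 42.0, 40.0, None]})
print(autoclean(readings))
```

An Auto-ML layer would go one step further and select the imputation or outlier-handling strategy per dataset automatically, based on what worked on similar previous tasks.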
Additionally, the SEDIMARK marketplace will prioritise and promote data providers who make the effort to curate their data before sharing it widely. SEDIMARK will implement a range of transparent data quality metrics showing statistics about the data, displayed side by side with the data offerings on the marketplace. This will help consumers find high quality data and minimise the time they spend preprocessing it for their services. Moreover, an efficient recommender system within SEDIMARK will also help consumers easily find high quality, highly rated offerings within their domain.
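The final SEDIMARK metric set is still to be defined; as an illustration of the kind of simple, transparent statistics that could be displayed next to an offering (the metric names below are ours, not the project's), consider:

```python
import pandas as pd

def quality_metrics(df: pd.DataFrame) -> dict:
    """Compute a few transparent quality indicators for a tabular dataset."""
    return {
        "rows": len(df),
        # Fraction of non-missing cells across the whole dataset.
        "completeness": float(df.notna().to_numpy().mean()),
        # Fraction of rows that exactly duplicate an earlier row.
        "duplicate_ratio": float(df.duplicated().mean()),
    }
```

Consumers could then compare offerings at a glance, for example preferring a dataset with 0.99 completeness over one with 0.70.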
In conclusion, SEDIMARK aims to provide tools for improving data quality for both providers and consumers, boosting the data economy while saving data scientists significant time, allowing them to focus on extracting value from high quality data and producing new products and services instead of spending the majority of their time cleaning data themselves. The SEDIMARK team at the Insight Centre for Data Analytics at University College Dublin (UCD) builds on Insight's data expertise and is leading the activities to define efficient data quality metrics and to develop an automated, simplified data curation and quality improvement pipeline for data providers and users to check and improve their datasets.
Data interoperability refers to the ability of information systems to exchange data and enable information sharing. More specifically, it is defined as the ability of systems and services that create, exchange, and consume data to have clear, shared expectations for the format, contents, context, and meaning of that data. It thus makes it possible to access and process data from multiple sources in diverse formats without losing meaning, and then to integrate that data for mapping, visualization, and other forms of representation and analysis. Data interoperability enables people to find, explore, and understand the structure and content of heterogeneous data.
In this context, SEDIMARK aims to provide an enriched, secure, decentralized data and services marketplace where scattered data from various domains and geographical locations within the EU can be easily generated, cleaned, protected, discovered, enriched with metadata, AI and analytics, and exploited for diverse business and research scenarios. SEDIMARK involves a combination of heterogeneous data, and achieving data interoperability will allow it to maximize the value of that data and overcome the significant challenges posed by distributed assets (heterogeneity, data formats, sources, etc.). To this end, SEDIMARK will reuse the semantic models developed in previous and ongoing EU initiatives, such as Gaia-X, IDS and NGSI-LD, and propose extensions to them to create one generic semantic model able to semantically annotate and enrich heterogeneous data from multiple domains.
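For a flavour of what such semantic annotation looks like in practice, the sketch below builds an NGSI-LD entity for a hypothetical water quality measurement and posts it to a generic NGSI-LD context broker (the broker URL and entity id are placeholders, not SEDIMARK endpoints):

```python
import json
import requests

# A hypothetical observation annotated with the NGSI-LD core context.
entity = {
    "id": "urn:ngsi-ld:WaterQualityObserved:site-001",
    "type": "WaterQualityObserved",
    "temperature": {"type": "Property", "value": 21.3, "unitCode": "CEL"},
    "@context": ["https://uri.etsi.org/ngsi-ld/v1/ngsi-ld-core-context.jsonld"],
}

# Standard NGSI-LD entity creation endpoint; the host is a placeholder.
resp = requests.post(
    "http://broker.example.org/ngsi-ld/v1/entities",
    data=json.dumps(entity),
    headers={"Content-Type": "application/ld+json"},
)
resp.raise_for_status()
```

Because the format, context and meaning travel with the data, any NGSI-LD-aware consumer can interpret the measurement without bilateral agreements.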
Besides data, interoperability between the AI models that emerge from this data is of great interest. In the decentralized environment of SEDIMARK, decentralized training requires that users train their models locally and then exchange model weights to jointly learn a global model. Expecting all SEDIMARK users to use exactly the same machine learning platform and exactly the same machines for training is unrealistic. SEDIMARK models will therefore be agnostic to the underlying platform, and SEDIMARK will provide tools to convert models to various formats so that they can run on machines of various capabilities and on various platforms.
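SEDIMARK's conversion tooling is not yet fixed; ONNX (Open Neural Network Exchange) is one established option for such platform-agnostic exchange. As a minimal sketch (the model, feature dimensions and file name are illustrative), a scikit-learn model can be exported with the skl2onnx converter:

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from skl2onnx import convert_sklearn
from skl2onnx.common.data_types import FloatTensorType

# Train any local model, then export it to the platform-neutral ONNX format.
X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=200).fit(X, y)

onnx_model = convert_sklearn(
    model, initial_types=[("input", FloatTensorType([None, 4]))]
)
with open("model.onnx", "wb") as f:
    f.write(onnx_model.SerializeToString())
```

The resulting file can then be executed on a different machine and framework than the one used for training.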
ARTEMIS is a WINGS product oriented to the proactive management of water, energy and gas infrastructures.
Based on the WINGS approach, it combines advanced technologies (IoT, AI, advanced networks and visualizations) with domain knowledge to address diverse use cases. As a management system, it delivers the following functionalities:
- Efficient metering: optimized information flow and cost with 24/7 capability, and prediction of demand and capacity;
- Fault management: faulty meter detection, predictive maintenance, outage handling (energy), and leakage or flood avoidance (water);
- Performance optimization: optimization of water quality, maximization of revenue water, optimized deployment of renewables and storage components, and optimization for residences, businesses and factories;
- Configuration and security aspects.
Commercial traction has already been achieved, and further interest is being stimulated in various areas and with a number of prospective partners.
In parallel, WINGS strives to develop and integrate further advances, and a wave of new projects related to ARTEMIS activities is being implemented. Among them, SEDIMARK aims to create a secure decentralised data marketplace based on distributed ledger technology and AI. Under this new approach:
- Data will no longer be stored only in the "core cloud" but also on "edge systems", close to where it is generated, thus alleviating security concerns.
- According to diverse strategies, data will be "cleaned", labelled and classified, in accordance with legal and ethical frameworks and the FAIR (findable, accessible, interoperable and reusable) principles, enabling easy linkage and efficient utilization.
- Diverse analysis mechanisms can then be powered by these data.
Within SEDIMARK, WINGS contributes to the marketplace (leveraging its experience in other vertical sectors, such as food security and safety) and to the AI strategies.
SEDIMARK will empower European stakeholders to set the proper foundation for the energy market, expand their competences, and compete and scale at a global level.
This document is a deliverable of the SEDIMARK project, funded by the European Commission under its Horizon Europe Framework Programme. It presents the deliverable "D6.2 Dissemination and exploitation plan", including the expected impact of the ongoing and planned activities, the target audience, milestones, and mechanisms to assess the dissemination and exploitation activities carried out throughout the project execution.
Dissemination activities are any actions related to the public disclosure of project results by any appropriate means, including scientific publications. Communication activities, on the other hand, also include the promotion of the project itself to multiple audiences, including the media and the public. Separating the concepts and goals of the dissemination and communication plans is important: the communication plan is about the project and its results, whilst the dissemination plan is only about the results.
Moreover, exploitation activities have a broader scope than communication and dissemination. They can include actions such as using the project results in further research activities beyond those covered by the project, developing, creating and marketing a product or process, creating and providing a service, or feeding into standardisation activities.
SEDIMARK recognises the importance of regulating data management issues in a context such as the one posed by the project. A solution will be considered whereby consortium partners deposit, clearly and transparently, all underlying information on the data-related business processes (data storage, data provisioning, processing, etc.) of the SEDIMARK solution.
The purpose of the Data Management Action Plan (DMAP) is to identify the main data management elements that apply to the SEDIMARK project and the consortium. This document is the first version of the DMAP and will be reviewed as soon as there is a clearer understanding of the types of data that will be collected.
Given the wide range of sources from which data will be collected or become available within the project, this document states that the consortium partners will consider embracing and applying the Guidelines on FAIR Data Management in Horizon 2020 and Horizon Europe (HE): "In general terms your data should be 'FAIR', that is Findable, Accessible, Interoperable and Re-usable", as information about the data to be collected becomes clearer.
As the name suggests, SEDIMARK will be a data and services marketplace. But SEDIMARK's focus is not only on data and service assets: decentralisation also plays a key role…
D as in Decentralisation
Decentralisation allows a system to stay away from a single, central authority for control and decision-making; instead, it enables direct interactions among multiple independent parties.
There are several perks to a decentralised system:
- Reduced fragility: relying too much on one entity can lead to systemic failures; multiple entities shield the system from such single points of failure.
- Optimization of resources: in a decentralized system, the available resources can be spread among multiple entities to provide better services.
- Security and trust: in a decentralized network, security and trust are essential preconditions.
The SEDIMARK marketplace achieves security and trust thanks to Distributed Ledger Technologies (DLT).
D as in DLT
A DLT is a network composed of several nodes that independently replicate, share, and synchronize the same data spread across many different physical locations without a central administrator.
The most famous example is the blockchain, today largely employed for financial transactions with the Bitcoin cryptocurrency. However, the SEDIMARK decentralised architecture will be based on a different DLT: the IOTA Tangle, designed and deployed by the IOTA Foundation. The IOTA Tangle is an open, feeless and highly scalable distributed ledger, designed to support both data and value transfers in a green fashion.
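To give a feel for the tamper-evidence idea shared by blockchains and other DLTs (a toy illustration only, not IOTA's actual data structures), each record below commits to the hash of its predecessor, so altering any past record invalidates every record after it:

```python
import hashlib
import json

def add_record(chain: list, payload: dict) -> None:
    """Append a record that embeds the hash of the previous record."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"payload": payload, "prev_hash": prev_hash}
    body["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    chain.append(body)

ledger: list = []
add_record(ledger, {"offer": "air-quality-dataset", "price": 10})
add_record(ledger, {"offer": "traffic-dataset", "price": 15})
# Tampering with the first record now breaks the hash link to the second.
```

In a real DLT, many independent nodes hold copies of such a structure and agree on new records through a consensus mechanism, which is what removes the need for a central administrator.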
Machine Learning introduction
Machine Learning (ML) is a modern and efficient branch of AI (Artificial Intelligence), specialised in recognising patterns within data streams. It can provide precise, statistics-based analysis to extract insights from large datasets, loosely inspired by the neural networks of the human brain. Every ML system must learn and discover patterns from historical data and compare its predictions against real data before it can provide reliable information. That is why AI systems are trained with as much data as possible.
ML algorithms can be more efficient than traditional modelling methods and, in narrow domains, can even surpass human performance thanks to their computational power. For instance, image recognition and time series analysis are well-known and widespread ML application domains for real-world cases, such as the EU-funded SEDIMARK project. SEDIMARK aims at building a secure, trusted and intelligent decentralised data and services marketplace over several years, using ML to automate data quality management. Over time, the project's results will provide ever-increasing accuracy and precision as its data sources grow.
ML can be used directly on edge systems to ensure data quality. Some algorithms are specialised for this purpose, with low power consumption and a modest memory footprint. For instance, EdgeML is an open-source library designed with these constraints in mind, and TinyML refers to a broader ecosystem of open-source tools for ML on microcontrollers.
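As an illustration of how small such a model can be (a sketch on synthetic data, not SEDIMARK's actual pipeline), an Isolation Forest with a few dozen trees fits comfortably on constrained hardware and can flag anomalous sensor readings:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(loc=20.0, scale=0.5, size=(500, 1))  # typical sensor values

# A compact anomaly detector, cheap enough for many edge devices.
model = IsolationForest(n_estimators=50, random_state=0).fit(normal)

new_readings = np.array([[20.1], [19.7], [35.0]])  # the last one is a spike
print(model.predict(new_readings))  # 1 = normal, -1 = anomaly
```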
ML embedded on edge systems
The IoT platform from EGM (the EdgeSpot) is compatible with both approaches and could manage and distribute FAIR data in an energy-efficient way. ONNX (Open Neural Network Exchange), an open format for representing ML models, may be a solution for selecting the right combinations of tools. And finally, with the help of the use cases provided within SEDIMARK, the project might elaborate a concrete strategy to automate and manage data quality.
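As a sketch of this kind of deployment (the file and input names are illustrative), a model exported once to ONNX can be executed on an edge device with the lightweight ONNX Runtime, independently of the framework that trained it:

```python
import numpy as np
import onnxruntime as ort

# Load a previously exported ONNX model, e.g. produced by a converter like skl2onnx.
session = ort.InferenceSession("model.onnx")
input_name = session.get_inputs()[0].name

sample = np.array([[5.1, 3.5, 1.4, 0.2]], dtype=np.float32)
outputs = session.run(None, {input_name: sample})
print(outputs[0])  # prediction, computed without the original training stack
```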
SEDIMARK plans to build a distributed registry of resources stored on edge systems, close to where data is generated. The purpose is to clean, label, validate and anonymise data.
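Anonymisation techniques for SEDIMARK are still to be selected; as a minimal sketch of one common building block (key management is simplified here), raw device identifiers can be pseudonymised with a keyed hash before data leaves the edge:

```python
import hashlib
import hmac

SECRET_KEY = b"replace-with-a-properly-managed-secret"  # illustrative only

def pseudonymise(device_id: str) -> str:
    """Replace a raw identifier with a stable, non-reversible keyed hash."""
    return hmac.new(SECRET_KEY, device_id.encode(), hashlib.sha256).hexdigest()

print(pseudonymise("sensor-42"))  # same input always yields the same token
```

The token stays stable, so records from the same device can still be linked, while the original identifier never leaves the edge system.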
A digital twin is, at its highest level, an architectural construct enabled by a combination of technology streams such as IoT (Internet of Things), Cloud Computing, Edge Computing, Fog Computing, Artificial Intelligence, Robotics, Machine Learning and Big Data Analytics.
The digital twin concept is based on the idea that every physical part has a virtual counterpart that is conceptually, structurally and functionally the same as the physical part. The concept of Digital Twins dates back to the 1970s, when NASA used it in the Apollo 13 mission. Nowadays Digital Twins are used in various industries, being a key concept in realizing the communication mechanism between the physical and virtual worlds using data.
The primary use case for Digital Twins is asset performance, utilization, and optimization. A Digital Twin enables monitoring, diagnostics and forecasting capabilities for a specific use case.
Examples of Digital Twin application scenarios are described below:
- Digital Twins for creating 3D digital models of physical objects. This use case is a critical success factor for smart manufacturing initiatives.
- Digital Twins are used in factories to identify symptoms through constant monitoring and to find the root causes of production problems.
- In healthcare, Digital Twins are used for simulation purposes, so that doctors can perform risky operations first in a simulated environment before operating on a real patient.
- Urban planners use Digital Twin initiatives with virtual models to improve city conditions in a proactive manner. This approach can reduce complexity and simplify processes for planners.
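As a minimal sketch of that physical-to-virtual communication mechanism (every name and threshold here is hypothetical), a twin can simply mirror the latest telemetry of its physical counterpart and expose diagnostic checks on top of it:

```python
from dataclasses import dataclass, field

@dataclass
class PumpTwin:
    """A toy digital twin mirroring a physical pump's telemetry."""
    asset_id: str
    state: dict = field(default_factory=dict)

    def ingest(self, telemetry: dict) -> None:
        # The virtual state tracks whatever the physical asset reports.
        self.state.update(telemetry)

    def needs_maintenance(self) -> bool:
        # Illustrative prognostic rule on the mirrored state.
        return self.state.get("vibration_mm_s", 0.0) > 7.1

twin = PumpTwin("pump-17")
twin.ingest({"vibration_mm_s": 8.3, "temperature_c": 65.0})
print(twin.needs_maintenance())  # True -> flag the asset for inspection
```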
Digital Twins and Data Spaces
Digital twins must be considered in relation to data spaces. A broader view is therefore required, including systemic oversight and supporting infrastructure.
International Data Spaces (IDS) provides data space technologies and concepts for various application domains that enable standardized data exchange and integration in a trusted environment. The International Data Spaces Association (IDSA) is a non-profit organization that promotes the IDS architecture as an international standard in a variety of fields, including healthcare, mobility, agriculture, and more.
It is expected that, in the medium term and in close relation to specific requirements, collaboration solutions combining centralized data storage in one or more clouds with distributed data storage and efficient data processing will be realized by combining Digital Twins with data spaces.
With the help of the use cases provided within SEDIMARK, the project could elaborate a concrete strategy in which this relationship between Digital Twins and data spaces can prove of real value.