
This report presents the first version of SEDIMARK's approach to its data sharing platform, the main entry point to the system from the outside world. Hence, it covers not only the front-end that users will interact with, but also added features such as the Recommender system and the Open Data enabler, which are at the essence of the solution. Given the current stage of project execution, the contents presented here will evolve, and a new version of the SEDIMARK data sharing platform will be provided in Month 34 (July 2025) in Deliverable 4.6 (Data sharing platform and incentives. Final version). Therefore, this document does not offer a fully functional depiction of the platform, but rather a high-level presentation of its constituent components. The Marketplace front-end is described at a functional level, whereas the Recommender system and the Open Data enabler are also described from a backend perspective. The document is thus intended for a specific audience, mainly members of the project consortium, who can use it as a template to drive technical activities in other SEDIMARK work packages.

This document, along with all the public deliverables and documents produced by SEDIMARK, can be found in the Publications & Resources section.

In response to the growing demand for secure and transparent data exchange, the SEDIMARK Marketplace infrastructure leverages cutting-edge technologies to establish a resilient network.

This deliverable presents the first version of the decentralised infrastructure and access management mechanisms implemented in the SEDIMARK Marketplace. As the landscape of data exchange evolves, the decentralisation approach ensures increased security, transparency, and user-centric control over both data assets and user identity information. Data providers in the SEDIMARK Marketplace can also provide additional types of assets related to their data, such as Machine Learning (ML) models, data processing pipelines and tools.

Operating within the principles of decentralisation, this project addresses the growing need for secure and transparent data exchange in a globalised digital economy. The SEDIMARK Marketplace leverages distributed ledger technologies to establish a resilient and scalable infrastructure. The decentralised architecture of the marketplace is built on a robust distributed ledger employed for user identity management, as well as on a blockchain foundation fostering tamper-resistant contracts. By utilising a distributed network, the infrastructure eliminates single points of failure, enhancing reliability and ensuring the continued availability of the assets to be exchanged. The decentralised infrastructure supports standardised protocols for data exchange, enabling collaboration and data sharing across various platforms and participants.

This deliverable is the first milestone in the SEDIMARK project towards realising the underlying infrastructure and mechanisms that allow the fulfilment of the functionalities defined for the Marketplace. An updated version of this deliverable will be provided in Deliverable SEDIMARK_D4.2 in July 2025.

This document, along with all the public deliverables and documents produced by SEDIMARK, can be found in the Publications & Resources section.

This document is the first deliverable from WP3, aiming to provide a first draft of the SEDIMARK data quality pipeline. It details how the pipeline aims to improve the quality of datasets shared through the marketplace, while also addressing the problem of energy efficiency in the data value chain. As the first version of the deliverable, it presents the initial ideas and the initial implementation of the respective tools and techniques. An updated version with the final data quality pipeline will be delivered in M34 (July 2025). The document should therefore be considered a “live” document that will be continuously updated and improved as the technical development of the data quality tools evolves.

The main goal of this document is to discuss how data quality is seen in SEDIMARK, which metrics are defined to assess the quality of the data generated by data providers, and which techniques will be provided to them for improving the quality of their data before sharing it on the data marketplace. This will help data providers both to optimise the decision-making systems built on the Machine Learning (ML) models they train using their datasets, and to increase their revenues by selling datasets of higher quality and thus higher value. Regarding the former, it is well documented that low-quality data has a significant impact on business, with reports estimating a yearly cost of around 3 trillion USD and showing that knowledge workers waste 50% of their time searching for and correcting dirty data [1]. Data providers will therefore hugely benefit from automated tools that help them improve their data quality with little or no human intervention and configuration.

The document presents high-level descriptions of the concepts and tools developed for the data quality pipeline and of the energy efficiency methods for reducing its environmental cost, as well as concrete technical details about the implementation of those tools. It is therefore both a high-level and a technical document, targeting a wide audience. Primarily, it targets the SEDIMARK consortium, discussing the technical implementations and the initial ideas behind them, so that the rest of the technical tasks can draw on them for the integration of all the components into a single SEDIMARK platform. Beyond that, the document also targets the scientific and research community, since it presents new ideas about data quality and shows how the developed tools can help researchers and scientists improve the quality of the data they use in their research or applications. Similarly, the industrial community can leverage the project tools to improve the quality of their datasets and assess how the results on energy efficiency can reduce the energy consumption of their data processing pipelines. Moreover, EU initiatives and other research projects should consider the contents of the deliverable in order to derive common concepts about data quality and about reducing energy consumption in data pipelines.

This document, along with all the public deliverables and documents produced by SEDIMARK, can be found in the Publications & Resources section.

Machine Learning (ML) algorithms have demonstrated remarkable advancements across diverse fields, evolving to become more intricate and data-intensive. This evolution is particularly driven by the expanding size of datasets and the unbounded, ever-growing nature of data streams. However, this substantial progress has come at the cost of intensified energy consumption, emphasizing the urgent requirement for resource-efficient methodologies. It is thus crucial to balance computational demands and model performance to mitigate the escalating environmental impact associated with the energy-intensive nature of machine learning processes.

Enhancing data efficiency stands as a central strategy in SEDIMARK to manage the considerable energy needs inherent in machine learning algorithms. SEDIMARK aims to achieve resource and energy efficiency during the training of ML models by reducing the quantity of data needed without compromising performance. To accomplish this, SEDIMARK will use summarization techniques in conjunction with ML algorithms, including but not limited to dimension reduction, sampling, and other reduction strategies.


In the SEDIMARK AI pipeline, dimension reduction techniques play a crucial role in mitigating resource consumption. By reducing the number of features, both computational complexity and memory requirements can be substantially lowered. Furthermore, removing irrelevant features through this process can enhance overall model quality. Two main strategies exist within dimension reduction: feature selection and feature extraction. The former involves selecting a subset of the input features, while the latter entails constructing a new set of features in a lower-dimensional space from a given set of input features. This dual approach ensures a nuanced and effective reduction in the data footprint, contributing significantly to the overall goal of resource and energy efficiency in SEDIMARK.
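To make the contrast concrete, the following minimal sketch shows both strategies side by side. It uses scikit-learn purely as an illustration; the deliverable does not prescribe a specific library, and the dataset, feature counts and scoring function are arbitrary choices for the example.

```python
# Illustrative sketch only: scikit-learn as a stand-in to contrast the two
# dimension reduction strategies described above.
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest, f_classif

# Synthetic dataset: 1000 samples, 50 features, only 10 of them informative.
X, y = make_classification(n_samples=1000, n_features=50,
                           n_informative=10, random_state=0)

# Feature selection: keep a subset of the original input features.
X_selected = SelectKBest(score_func=f_classif, k=10).fit_transform(X, y)

# Feature extraction: construct new features in a lower-dimensional space.
X_extracted = PCA(n_components=10).fit_transform(X)

print(X.shape, X_selected.shape, X_extracted.shape)
# (1000, 50) (1000, 10) (1000, 10) -- a 5x smaller footprint either way
```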

Sampling is another effective strategy for resource-efficient machine learning. Instead of analyzing the entire dataset or maintaining a whole data stream, algorithms operate on a representative subset (or a sliding window for data streams). This approach is particularly useful for large datasets where processing the entire set is impractical.
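As a concrete illustration of sampling on a stream, the sketch below implements classic reservoir sampling, which keeps a uniform random sample of fixed size from a stream of unknown length in O(k) memory. This is one possible realisation of the strategy described above, not SEDIMARK's specific implementation.

```python
import random

def reservoir_sample(stream, k, seed=None):
    """Keep a uniform random sample of k items from a stream of unknown
    length, using O(k) memory instead of materialising the whole stream."""
    rng = random.Random(seed)
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            reservoir.append(item)   # fill the reservoir first
        else:
            j = rng.randint(0, i)    # keeps each item with probability k/(i+1)
            if j < k:
                reservoir[j] = item
    return reservoir

# A model can then be trained on the 1000-item sample instead of 1M items.
sample = reservoir_sample(range(1_000_000), k=1000, seed=42)
```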

Resource-Efficient Machine Learning

Resource-efficient machine learning is not just a practical necessity but a crucial avenue for sustainable and scalable model development. By strategically employing dimension reduction, sampling, coresets, data distillation and other summarization techniques, SEDIMARK can make its ML models computationally frugal, rendering them particularly suitable for deployment on devices with limited processing capabilities, such as edge and IoT devices, while striking a balance between computational efficiency and model accuracy. As machine learning evolves, these optimization strategies will play an increasingly vital role in ensuring that advanced algorithms remain accessible and practical in real-world applications.

The document is the first deliverable of WP5 and reports the results of the T5.1 activities, which aim to recommend an evaluation methodology, performance metrics, and a timetable for the integration of the SEDIMARK platform according to the principles of decentralization, trustworthiness, intelligence, data quality, and interoperability. This deliverable is important because it defines the evaluation methodology, the monitoring approach, and the efficiency of what is being built, as well as the system validation through real pilot demonstrations. In order to assess the framework's capabilities from various user perspectives, the developed methodology adopts multiple quality factors implemented through technical metrics.

Before delving into the core of the deliverable, the document briefly describes the vision of the SEDIMARK marketplace, in which participants will exchange assets in a secure decentralized manner. In SEDIMARK D2.2, the architecture’s components were thoroughly examined. To create the overall decentralized solution, the integration activities are based on those components and tools under a standard development framework.

All technology providers are accountable for the various modules to which they are assigned, based on a top-down integration plan that is outlined in this document. Some architecture components are not included in the first version of the platform because they are part of the platform's second and final releases. The initial release focuses on delivering the minimum functionalities required to provide a minimum viable product. The integration plan is built upon the use case scenarios defined in T2.1 and SEDIMARK D2.1 and the timeline for the execution of the scenarios. The components are integrated using Virtual Machines (VMs), Docker containers, and other orchestration tools.

This deliverable also specifies a customized evaluation process, as well as numerous criteria to be employed in this evaluation. These comprise technical criteria tailored to each technique/module evaluated, general criteria/KPIs tailored to each use case, and a metrics framework based on ISO/IEC established methods for system and product quality assessment. Building on standards gives the procedures security and compatibility. The framework will begin with the establishment of a comprehensive and meaningful set of performance metrics based on the requirements and use cases of the stakeholders. As a reminder, SEDIMARK encompasses four main use cases at different sites: Mobility Digital Twin (Finland), Urban Bike Mobility Planning (Spain), Valorisation of Energy Consumption and Customer Reactions/Complaints (Greece), and Valuation and Commercialization of Water Data (France).

This document, along with all the public deliverables and documents produced by SEDIMARK, can be found in the Publications & Resources section.

Nowadays, users register to a service and, usually, the service itself stores the users' data, i.e., their identity. Today the majority of online services are centralized and rely, in some form, on a single authority for identity management. SEDIMARK instead aims to be a fully decentralized data Marketplace.

This architectural choice also has consequences for the management of the users belonging to the system. With decentralization in mind, SEDIMARK adopts a new model for identity: Self-Sovereign Identity (SSI).

SSI

SSI is a digital identity model that gives the user who creates it full control over his or her identity and the information to be shared.

The SSI model is rooted in the Decentralized Identity paradigm: it is the users themselves – the Holders of the identity – who own a unique identity composed of a set of attributes.

The attributes are released and associated to the identity by other entities – the Issuers of such claims.

These claims can be checked by other entities – called Verifiers. As an example, imagine a new graduate from a university. His/her digital identity may contain a claim “Graduated” issued by the university. A future employer who wants to check this information acts as the Verifier.
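This trust triangle can be sketched in a few lines of code. The example below only illustrates the three roles using plain Ed25519 signatures (via the third-party cryptography package); real SSI stacks add DIDs, verifiable credential formats, revocation and selective disclosure on top of this idea.

```python
# Minimal sketch of the SSI trust triangle: the Issuer signs a claim, the
# Holder stores and presents it, the Verifier checks the signature.
# Illustrative only -- not the SEDIMARK SSI framework.
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Issuer (the university) signs a claim about the Holder (the graduate).
issuer_key = Ed25519PrivateKey.generate()
claim = json.dumps({"subject": "did:example:graduate", "claim": "Graduated"}).encode()
signature = issuer_key.sign(claim)

# The Holder keeps (claim, signature) in a wallet and presents both on request.

# Verifier (the employer) checks the claim against the Issuer's public key.
issuer_key.public_key().verify(signature, claim)  # raises InvalidSignature if tampered
print("claim verified")
```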

SSI in practice

SSI? Never heard of it!

Yes, SSI is a relatively new concept in the field of digital identity. It is an emerging technology relying on blockchain and other distributed ledgers, which are in turn still evolving. Embracing and implementing these new identity systems is a process that requires time…

…But things are moving forward!

Microsoft has recently released a new product called Microsoft Entra Verified ID that employs decentralized identity.

The European Union is also steering EU citizen identities towards a model where users have full control of their data, with the European Digital Identity Wallet.

SSI in SEDIMARK

SEDIMARK will deploy its own custom SSI framework relying on the IOTA Tangle DLT.

The users of the marketplace will have full control over their digital identity, allowing them to preserve and maintain their privacy. Users can create and manage their own identities without relying on a central authority.

Moreover, thanks to SSI, authentication and authorization policies can be enforced with more granular control. For example, a data provider can verify who is authorized to receive its data, limiting access to only a certain group.

Do you want to know more? Stay tuned for the next blog posts by signing up to our newsletter below.

Follow us on LinkedIn and Twitter / X!

Source image from Shutterstock.

Data and AI in action! On the 25th-27th of October the European Big Data Value Forum #EBDVF, organised by #BDVA - Big Data Value Association, took place in Valencia, Spain. We had an insightful time exploring the most recent developments and reflections in Data and AI alongside professionals, researchers, policymakers and other entities across Europe.

SEDIMARK, which is always on the cutting edge of innovation, was introduced to the community, and ideas from similar research projects were exchanged to create synergies. WINGS ICT Solutions presented the scope of the project, which is to design and prototype a secure, decentralised and intelligent #data and services #marketplace that bridges remote data platforms and allows the efficient and privacy-preserving sharing of vast amounts of heterogeneous, high-quality, certified data and services supporting the common #EU #data #spaces.

Several crucial insights emerged from the discussion, highlighting the pressing requirement for standardized data, the significance of responsible data #governance, and the importance of aligning technological endeavors with well-defined business goals to ensure a meaningful impact. Additionally, the enormous potential of #data #spaces in promoting #data #sharing, catalyzing business expansion, and generating tangible value was underscored.

#EBDVF is over, but SEDIMARK’s work on #data and #AI and future realities continues!

In the context of climate change, water is a critical resource that must be managed very carefully. The water management ecosystem is full of actors, each having a different responsibility and their own datasets, which may be of value to other stakeholders. Currently, these datasets are either not shared or shared poorly. To tackle this issue, EGM has developed a Water Data Valorization Platform that will enrich the SEDIMARK Marketplace ecosystem. It will be deployed in the municipality of Les Orres, where there is a need for an optimal way of handling all the data related to water, especially that related to the Lac de Serre-Ponçon.

The platform revolves around the Stellio Context Information Broker, which allows connection and information sharing between all types of data and use cases. It is based on the European FIWARE open-source ecosystem, which uses the NGSI-LD specification produced by ETSI. The platform provides powerful visualization and business intelligence to view real-time data or perform analysis on historical data, and includes a set of modules specifically designed to address the needs of actors in the water domain. Each module and its purpose are presented hereafter, following a short illustration of how data reaches the broker.
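As an illustration, a water measurement enters the platform as an NGSI-LD entity pushed to the broker's standard entity endpoint (defined by the ETSI NGSI-LD API). The broker URL, entity type and attribute names below are placeholders, not values from the actual SEDIMARK or Les Orres deployment.

```python
# Hedged example: creating a water observation through the standard NGSI-LD
# entity endpoint. URL, entity type and attributes are assumed placeholders.
import requests

entity = {
    "id": "urn:ngsi-ld:WaterObserved:lac-serre-poncon:001",
    "type": "WaterObserved",
    "waterLevel": {"type": "Property", "value": 778.4, "unitCode": "MTR"},
    "@context": "https://uri.etsi.org/ngsi-ld/v1/ngsi-ld-core-context.jsonld",
}

resp = requests.post(
    "http://localhost:8080/ngsi-ld/v1/entities",  # assumed local Stellio instance
    json=entity,
    headers={"Content-Type": "application/ld+json"},
)
resp.raise_for_status()  # 201 Created on success
```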

Data collection module: In many cases, data is collected fully automatically; the workflow is set up once and then nothing else needs to be done by the user. However, in some cases the dataflow can only be automated up to a certain point and still needs some input from a user. For instance, a dataflow could be automated but still need the user to give an input file path or a list of attributes to select. This module allows the user to perform the required actions from a user-friendly interface directly in the platform, without having to interact with the backend interface.

Data validation module: In the field of water data collection, validating the measurements is a very important part of the process. Indeed, most sensing devices are outdoors, subject to many potential disturbances, which need to be addressed. The data can be automatically pre-validated, but most actors in the water domain will want to perform a final manual validation of the data to ensure optimal data quality. This module provides a user-friendly interface to perform this manual validation as easily and quickly as possible. It includes optimized table and graph displays of the attributes to be validated and allows the user to perform multiple actions (selecting multiple lines/columns, filtering, performing basic mathematical operations, …) on the data to validate or invalidate them.

Calculation tool module: This module allows the user to launch all kinds of models available in the platform, such as Machine Learning and AI models, hydrological models, or any kind of algorithm that takes one or many inputs to calculate one or many outputs.

Data Export module: After retrieving and processing their data on the platform, users might want to export the data in different formats. This module allows the user to export their data in CSV format. Optionally, the user can perform some temporal and/or geographical aggregation before exporting the data, as sketched below. In addition, the module offers the possibility to export the data into a report or summary sheet, based on pre-registered report templates: the user selects the template and the data to export to automatically generate an Excel or Word file.
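A minimal sketch of the described behaviour (optional temporal aggregation followed by a CSV export), using pandas; the data, column names and aggregation step are invented for the example and do not come from the platform itself:

```python
# Illustrative sketch of "aggregate, then export to CSV" with pandas.
import pandas as pd

# Invented example data: one water-level reading every 15 minutes for a day.
df = pd.DataFrame(
    {"observedAt": pd.date_range("2024-05-01", periods=96, freq="15min"),
     "waterLevel": 778.0}
).set_index("observedAt")

# Optional temporal aggregation before export (here: daily mean).
daily = df.resample("1D").mean()
daily.to_csv("water_level_daily.csv")
```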

Risk and Event management module: A common need in the water sector is to set up threshold breach detection to monitor the behavior of the data. This module allows the user to create Events that detect and historize occurrences of threshold breaches and missing data (a minimal sketch follows below). In addition, it includes a risk management system, allowing the user to create a risk, i.e., to define conditions for different severities based on multiple measured attributes, in order to monitor the risk and potentially be alerted whenever it occurs.
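The core of such event detection can be sketched in plain Python. This is only an illustration of the described behaviour (threshold breaches and missing data are detected and recorded), not the module's actual implementation:

```python
from datetime import datetime

def detect_events(readings, threshold):
    """readings: list of (timestamp, value-or-None). Returns event records
    for threshold breaches and missing data, to be historized."""
    events = []
    for ts, value in readings:
        if value is None:
            events.append({"time": ts, "event": "missing-data"})
        elif value > threshold:
            events.append({"time": ts, "event": "threshold-breach", "value": value})
    return events

events = detect_events(
    [(datetime(2024, 5, 1, 8), 778.2),
     (datetime(2024, 5, 1, 9), None),      # missing measurement
     (datetime(2024, 5, 1, 10), 780.9)],   # above the 780.0 threshold
    threshold=780.0,
)
```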

Alert creation module: Based on the risks and events defined in the previous module, the user can define alerts that will trigger whenever the risk or event occurs. This module allows the user to set the condition on which the alert should trigger (e.g., risk ‘A’ occurred with severity ‘high’, or event ‘B’ occurred, …), the message to be delivered by the alert, and the mailing list that should receive it.

Since the platform was designed to answer specifically the needs of the actors of the water domain, it includes a fine-grained rights and authorization management system, allowing each specific actor (developer, data validator, data scientist, political decision-maker, citizen, …) to be given the rights to access only the modules and/or dataset visualizations that are relevant to them. The Water Data Valorization Platform will be deployed as a part of the SEDIMARK Marketplace. It will enrich the ecosystem with its modules, its services, and its datasets, to be used by many actors of the water domain but also by any user with some data to be processed.

Image credit: Rémi Morel - OT Serre-Ponçon

This document serves as the first version of the SEDIMARK architecture, aiming to provide an in-depth description of the various architectural views and the roadmap by which the views were derived. The main goal is to provide an architecture that supports the main concepts of SEDIMARK: full decentralisation, trustworthiness, intelligence, data quality and interoperability. This document is considered one of the main deliverables of the project, because the main technical and evaluation activities will be based on the description of the functional components of the architecture and their interactions. SEDIMARK follows an agile, innovation-driven methodology for the development of the decentralised marketplace. This means that the architecture document should be considered a “live” document that will be continuously updated and improved as the technical development and testing activities of the rest of the work packages evolve, aiming to identify omissions or issues in the initial architecture draft so that these can be fixed by adapting components, adding missing components, and removing components that are either not useful or duplicated. A new version of the SEDIMARK architecture will be provided in Month 24 (September 2024) in Deliverable D2.3.
Considering that this document does not provide a fully functional architectural framework, but rather a high-level presentation of initial concepts and ideas (not yet tested), it is intended for a limited audience, primarily the project consortium, to drive the technical activities in the rest of the work packages. Additionally, other researchers and developers in the areas of interest of the project will find interesting ideas about developing decentralised data and services marketplaces. Moreover, EU initiatives and other research projects should consider the contents of the deliverable to help derive common architectures and concepts for creating data spaces and building marketplaces on top of them, focusing on improved trustworthiness, data quality and intelligence.

This document, along with all the public deliverables and documents produced by SEDIMARK, can be found in the Publications & Resources section.
