
Last Thursday, we had the pleasure of hosting Javier Valiño, Program Manager of the Data Space Working Group (Data Space WG) from the Eclipse Foundation, at Universidad de Cantabria.

During the meeting, Javier presented the latest advancements made by the Data Space WG. Their mission is to promote the global use of dataspace technologies, supporting the development and maintenance of secure data-sharing ecosystems. We also discussed SEDIMARK's initiatives in creating technologies for decentralized and secure data exchange.

The Data Space WG's goals are closely aligned with SEDIMARK's innovative proposals for a decentralized marketplace, so we will keep exploring future collaborations in the data space world.

In the modern era of big data, the challenge of integrating and analyzing data from various sources has become increasingly complex. Different data providers often use diverse formats and structures, leading to significant challenges in achieving data interoperability. This complexity necessitates robust mechanisms to convert and harmonize data, ensuring they can be effectively used for analysis and decision-making. SEDIMARK has identified two critical components in this process and is actively working on them: the data formatter and the data mapper.

A data formatter is designed to convert data from various providers, each using different formats, into the NGSI-LD standardized format. This standardization is crucial because it allows data from disparate sources to be compared, combined, and analyzed in a consistent manner. Without a data formatter, the heterogeneity of data formats would pose a significant barrier to interoperability. For example, data from one provider might arrive in XLSX format, from another in JSON, and from yet another in CSV. A data formatter processes these different formats, transforming them into a unified format that can be easily managed and analyzed by SEDIMARK tools.
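To make this concrete, here is a minimal, illustrative sketch of what such a formatter could look like in Python. The file layouts, property names, and the simplified NGSI-LD entity structure are assumptions made for the example, not the actual SEDIMARK implementation.

```python
# Minimal sketch of a formatter that normalizes heterogeneous inputs
# (CSV, JSON) into simplified NGSI-LD entities. File names, property
# names, and the entity type "Observation" are illustrative only.
import csv
import json
from datetime import datetime, timezone

def to_ngsild_entity(record: dict, entity_id: str) -> dict:
    """Wrap a flat record into a (simplified) NGSI-LD entity."""
    entity = {
        "id": f"urn:ngsi-ld:Observation:{entity_id}",
        "type": "Observation",
        "@context": "https://uri.etsi.org/ngsi-ld/v1/ngsi-ld-core-context.jsonld",
    }
    observed_at = datetime.now(timezone.utc).isoformat()
    for key, value in record.items():
        entity[key] = {"type": "Property", "value": value, "observedAt": observed_at}
    return entity

def format_csv(path: str) -> list[dict]:
    with open(path, newline="") as f:
        return [to_ngsild_entity(row, str(i)) for i, row in enumerate(csv.DictReader(f))]

def format_json(path: str) -> list[dict]:
    with open(path) as f:
        records = json.load(f)  # assumed to be a list of flat objects
    return [to_ngsild_entity(rec, str(i)) for i, rec in enumerate(records)]

# XLSX inputs could be handled the same way, e.g. via pandas.read_excel,
# before being passed through to_ngsild_entity.
```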

A data mapper comes into play after data processing, storing the data and mapping it to a specific data model. This process involves not only aligning the data with the model but also enriching it with quality metrics and metadata. During this stage, the data mapper adds valuable information about data quality obtained during the data processing step, such as identified outliers and their corresponding anomaly scores, as well as missing and redundant data.

This enriched data model becomes a powerful asset for future analyses, giving a complete picture of the data. By converting various data formats into a standard format and then mapping and enriching the data, SEDIMARK achieves a higher level of data integration. This process ensures that data from multiple sources can be used together seamlessly, facilitating more accurate and comprehensive analyses. Moreover, the inclusion of data quality metrics during the mapping process adds a layer of reliability and trustworthiness to the data. Information about outliers, missing data, and redundancy is crucial for data scientists and analysts, as it allows them to make informed decisions and apply appropriate processing techniques.
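As an illustration of the enrichment step, the following sketch attaches hypothetical quality metadata (an anomaly score, missing-value and duplicate ratios) to a formatted entity. The field names are invented for the example and do not reflect a normative SEDIMARK data model.

```python
# Illustrative sketch of a mapper step that attaches quality metadata
# produced during processing (outlier flags, anomaly scores, missing /
# duplicate counts) to an already formatted entity. Field names are
# hypothetical, not part of an official SEDIMARK schema.
def enrich_with_quality(entity: dict, anomaly_score: float,
                        missing_ratio: float, duplicate_ratio: float) -> dict:
    entity["dataQuality"] = {
        "type": "Property",
        "value": {
            "isOutlier": anomaly_score > 0.8,   # example threshold
            "anomalyScore": round(anomaly_score, 3),
            "missingValuesRatio": round(missing_ratio, 3),
            "duplicateRecordsRatio": round(duplicate_ratio, 3),
        },
    }
    return entity
```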

The first letter in SEDIMARK stands for Secure. How is security involved in SEDIMARK? In this blog post we present an overview of the Security and Trust domain within SEDIMARK!

Nowadays, the proliferation of large amounts of data makes it essential to ensure the security and integrity of the information exchanged. Traditional, centralized data marketplaces face security challenges such as data manipulation, unauthorized access, and lack of transparency.

In response to these challenges, Distributed Ledger Technology (DLT) has emerged as an alternative solution, offering decentralized (see "The letter D in SEDIMARK"), immutable, and transparent data exchange mechanisms.

Enhancing Security in Data Exchange

Centralized data marketplaces are susceptible to various security vulnerabilities, including single points of failure and data breaches.

Using DLT mitigates these risks: control is decentralized, and cryptographic mechanisms provide the security.

In the SEDIMARK Marketplace, participants can securely exchange data without relying on third parties (intermediaries), reducing the risk of unwanted data manipulation or unauthorized access to their data (or, more generally, their assets).

Security Features

SEDIMARK will employ (... or does it already?!) key features enabled by DLT, such as smart contracts, Self-Sovereign Identity (SSI), and cryptographic primitives, to enhance the security and transparency of the Marketplace.

Smart contracts automate the execution of agreements between parties, ensuring trustless and tamper-proof transactions.

SSI allows users of the Marketplace to retain full control over their own identity, without relying on centralized authorities (see A Matter of Identities).

Finally, cryptographic primitives are the underlying functions that ensure data security and integrity.

Ensuring Data Origin

Cryptographic functions, such as a digest (hash), create a practically unique fingerprint for a given asset.

Recording (or "anchoring") this value onto the DLT creates an immutable data trail.

So, every user can be certain of the origin of the asset they are purchasing.
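As a minimal sketch, the snippet below computes a SHA-256 digest for an asset file and checks it against a previously anchored value. The anchoring transaction itself is omitted, since it depends on the specific ledger being used.

```python
# Minimal sketch: computing a SHA-256 digest of an asset file and later
# verifying it against the value anchored on the DLT. The anchoring call
# itself is out of scope here, as it is ledger-specific.
import hashlib

def asset_digest(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_asset(path: str, anchored_digest: str) -> bool:
    """Compare a freshly computed digest with the one recorded on the ledger."""
    return asset_digest(path) == anchored_digest
```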

This also adds transparency, enhancing trust in the distributed marketplace.

SEDIMARK exploits traditional cryptographic mechanisms as well as DLT to freshen up data (asset!) exchange mechanisms and to secure the Marketplace.

Do you want to know more? Stay tuned for next blog posts by signing up to our newsletter below.

Follow us on @Twitter / X and LinkedIn.

* Source image: shutterstock

On Wednesday, June 5th, the SEDIMARK Midterm review with the European Commission took place. The meeting highlighted the project's achievements so far, including a presentation of the architecture supporting the decentralized SEDIMARK Marketplace. Additionally, the progress of the four use cases was presented, along with plans for the upcoming period.

We received initial positive feedback and are now awaiting the final remarks from the project reviewers. In the meantime, we continue to work on the future of data exchange through the decentralized and secure SEDIMARK Marketplace.

In April, all twelve partners of the SEDIMARK consortium convened in Helsinki for a productive General Assembly meeting, marking a significant milestone in our journey to establish a decentralized marketplace. With the first half of the project completed, we are now preparing for the next phase, which promises to be filled with significant milestones.

Our discussions focused on five key scenarios crucial to SEDIMARK's development: Onboarding, Offering Lifecycle, Data Exchange, Data Processing Pipeline, and Open Data. Through our collective efforts, we are advancing these scenarios, moving closer to their integration and the realization of the SEDIMARK Marketplace.

Special thanks to our hosts, Forum Virium Helsinki, for their exceptional organization, which made our time in Helsinki both productive and memorable. The city's enchanting charm provided the perfect backdrop for our collaborative endeavors. Here's to the progress we've made and the exciting journey ahead!

Helsinki Meeting

The deliverable “D6.3 Dissemination and Impact creation activities. First version” presents the activities carried out and in progress, the communication and dissemination material, and the current status of the Key Performance Indicators (KPIs) for these activities. It also covers ongoing cooperation efforts with other projects and associations.
During the first half of the project lifetime, a number of dissemination and communication activities have been carried out, reaching a large and varied audience, including users, citizens, other research projects, and the scientific community.
The content of this document will be continued in deliverable SEDIMARK_D6.4 Dissemination and Impact creation activities, which is due in M36 (September 2025).

This document, along with all the public deliverables and documents produced by SEDIMARK, can be found in the Publications & Resources section.

In recent years of IoT expansion, data has become a business of the utmost importance, driving crucial decision-making systems at the EU and global levels and impacting all domains: industry, politics and the economy, society and individuals, as well as the environment.

As the volume of incoming data being collected, stored and processed constantly expands, most systems and techniques for absorbing such data efficiently, appropriately and at scale are either lacking or rapidly overwhelmed by technological change. Of equal concern is the quantity of circulating private and sensitive information linked to individuals and organizations. As a consequence, data is often insufficiently managed and maintained, misunderstood due to its complexity, lacking in quality, and ill-adapted to large-scale AI analytics, which in turn leads to inappropriate handling, sharing and misuse of data across borders and domains, even when it formally complies with the European GDPR and the FAIR* principles!

For this reason, SEDIMARK uses a data orchestrator called “Mage.ai” to: (i) better organize the integration of multiple data sources, applications, toolboxes, services and systems; (ii) make data workflows scalable, improving performance and reducing bottlenecks; (iii) ensure data consistency, harmony and the highest quality; (iv) guarantee data privacy and security compliant with EU regulations through anonymization and decentralized systems; and (v) minimize and mitigate potential risks by automating schedules for data and system maintenance, monitoring and alerting procedures. On top of this, the orchestrator enables all data actors to easily manage, adapt and visualize the state of their data.

(*) FAIR: Findable, Accessible, Interoperable, Reusable
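As an illustration of the staged workflow described above, here is a generic Python sketch of the load, anonymize and validate stages that an orchestrator coordinates. The function names and toy records are invented for the example and are not Mage.ai's actual block API.

```python
# Generic sketch of the staged workflow a data orchestrator coordinates:
# load -> anonymize -> validate. The functions and records are toy
# examples and do not reflect Mage.ai's actual block API.
import hashlib

def load(records: list[dict]) -> list[dict]:
    # In practice this stage would pull from files, APIs, or message brokers.
    return records

def anonymize(records: list[dict]) -> list[dict]:
    # Replace direct identifiers with irreversible pseudonyms (privacy-friendly).
    return [
        {**r, "user_id": hashlib.sha256(str(r["user_id"]).encode()).hexdigest()[:12]}
        for r in records
    ]

def validate(records: list[dict]) -> list[dict]:
    # Drop records with missing measurements before they reach consumers.
    return [r for r in records if r.get("value") is not None]

def run_pipeline(records: list[dict]) -> list[dict]:
    # A real orchestrator schedules, retries and monitors these stages;
    # chaining them by hand here only shows the data flow.
    return validate(anonymize(load(records)))

if __name__ == "__main__":
    sample = [{"user_id": 42, "value": 3.1}, {"user_id": 7, "value": None}]
    print(run_pipeline(sample))
```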

In today's world, challenges such as the management of water resources are at the forefront of global concerns. The integration of geosciences and machine learning is emerging as a new and innovative way to address these problems.

Geosciences provide a fundamental understanding of water systems. By analyzing geological data, scientists can understand the impact of environmental factors on water systems and assess risks related to human settlement, the environment, or water scarcity.

Machine learning brings predictive analytics into this matter, offering the ability to forecast future trends based on historical data. In water management, ML algorithms can predict usage patterns, potential pollution incidents, and the impact of climate change on water resources. This predictive capability is invaluable in planning and implementing strategies for sustainable water usage and conservation.
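As a toy illustration of this predictive capability, the sketch below fits a simple regression model to synthetic historical consumption data and forecasts the next months. A real system would use richer features and more sophisticated models.

```python
# Toy sketch of the predictive side: fitting a simple regression model to
# synthetic historical water-consumption data and forecasting the next
# periods. All values are invented for illustration.
import numpy as np
from sklearn.linear_model import LinearRegression

months = np.arange(36).reshape(-1, 1)                # three years of monthly data
seasonal = 10 * np.sin(2 * np.pi * months.ravel() / 12)
consumption = 500 + 2.5 * months.ravel() + seasonal  # synthetic usage in m^3

model = LinearRegression().fit(months, consumption)
future_months = np.arange(36, 42).reshape(-1, 1)     # the next six months
print(model.predict(future_months))
```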

Case Studies and Applications

Using geological data and historical consumption patterns, machine learning models can predict areas at risk of water scarcity or flood, allowing for early intervention.

Machine learning algorithms can analyze data from various sources to detect and predict pollution levels in water bodies, enabling timely measures to protect water quality.

By combining geological data with climate models, machine learning can forecast the long-term impacts of climate change on water resources, guiding various adaptation strategies.
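For instance, a simple anomaly detector could flag suspicious water-quality readings, as in the illustrative sketch below. The sensor features, values and contamination threshold are invented for the example.

```python
# Illustrative sketch of pollution-level anomaly detection: an
# IsolationForest trained on synthetic water-quality readings flags
# unusual samples. Features and values are made up for the example.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(loc=[7.0, 0.5], scale=[0.2, 0.1], size=(200, 2))  # pH, nitrate mg/L
spill = np.array([[5.8, 2.4]])                                        # suspicious reading
readings = np.vstack([normal, spill])

detector = IsolationForest(contamination=0.01, random_state=0).fit(readings)
flags = detector.predict(readings)   # -1 marks likely anomalies
print(readings[flags == -1])
```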

This interdisciplinary approach not only enhances our understanding of water systems but also equips us with the tools to make informed and sustainable decisions.

Geosciences provide the foundational 'what' and 'why', while machine learning offers the 'when' and 'how'. This combination can underpin a strategy for creating efficient, intelligent, and sustainable solutions for urban environments and industrial applications, which can be of interest to large companies like Siemens.

In the dynamic landscape of technological progress, Lidar (Light Detection and Ranging) sensors have emerged as pivotal tools in the development of smart cities. These sophisticated sensors utilize laser light to gauge distances and construct intricate three-dimensional maps, providing a trove of data that is reshaping urban planning, traffic management, and public safety on a global scale.

Lidar in Autonomous Vehicles

Among the many applications of Lidar sensors in smart cities, their integration into autonomous vehicles stands out as particularly impactful. Self-driving cars heavily rely on Lidar technology to navigate their surroundings with precision and safety. By emitting laser pulses and measuring the time it takes for the light to return, Lidar sensors generate real-time data crucial for autonomous vehicles to navigate obstacles, maintain safe distances, and adhere to traffic regulations. As cities worldwide embrace autonomous vehicles in their public transportation systems, Lidar technology is poised to play a pivotal role in ensuring the safety and efficiency of these fleets.
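The underlying time-of-flight principle is simple: the range to a target is half the round-trip travel time of the pulse multiplied by the speed of light, as the small sketch below illustrates (the pulse time is a made-up value).

```python
# Small illustration of the time-of-flight principle: range equals half
# the round-trip travel time multiplied by the speed of light.
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def range_from_time_of_flight(round_trip_seconds: float) -> float:
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

print(range_from_time_of_flight(200e-9))  # a 200 ns round trip is roughly 30 m
```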

Case Study: Helsinki's Innovative Approach

In a notable example, the Helsinki pilot, in collaboration with the companies involved, has implemented Lidar sensors in the city center, specifically along the Esplanadi streets and pathways. Three Lidar sensors have been strategically placed to collect data on moving vehicles, bicycles, electric scooters, and pedestrians. This real-time information is gathered and analyzed with the utmost respect for privacy. The Lidar data analysis in this initiative aims to explore the potential of extracting detailed information about movement patterns in the area. This data, derived from advanced Lidar sensors, could be instrumental in enhancing the safety and appeal of the area.

Creating Detailed 3D Models

The Lidar sensors strategically placed in the area combine their data to generate precise representations of factors such as traffic flow, potential hazards, and the volume of pedestrians and light traffic. These sensors create detailed three-dimensional models of the environment, offering a comprehensive understanding of how different factors, including seasons, events, and traffic arrangements, influence pedestrian and light traffic patterns.
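As a simplified illustration, each return (range, azimuth, elevation) can be converted into a 3D point that the sensors contribute to a shared model. The sample measurements below are invented for the example.

```python
# Sketch of how individual Lidar returns (range, azimuth, elevation) can
# be converted into 3D points for a combined model. Sample values are
# invented for illustration.
import numpy as np

def to_cartesian(ranges, azimuths, elevations):
    """Convert spherical Lidar returns (metres, radians) to x/y/z points."""
    x = ranges * np.cos(elevations) * np.cos(azimuths)
    y = ranges * np.cos(elevations) * np.sin(azimuths)
    z = ranges * np.sin(elevations)
    return np.column_stack([x, y, z])

ranges = np.array([12.4, 8.7, 30.1])
azimuths = np.radians([10.0, 45.0, 90.0])
elevations = np.radians([-2.0, 0.0, 1.5])
print(to_cartesian(ranges, azimuths, elevations))
```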

Future Implications and Collaborative Initiatives

The data collected through Lidar sensors not only aids in understanding current scenarios but also opens doors for future improvements. The Helsinki project aims to explore whether analyzing this data can enhance the safety of commuters, improve cycling efficiency, and boost the overall attractiveness of the area for pedestrians. Importantly, the project focuses on anonymous behavior analysis rather than individual tracking, respecting privacy concerns.

References

  • Photographer: Forum Virium Helsinki. The images may be used free of charge when promoting Helsinki.