
Have you ever wondered how a smart city manages to keep everything from urban planning to environmental monitoring running smoothly? The answer lies in something called Spatial Data Infrastructure (SDI). While it might sound technical, the SDI framework plays a crucial role in making geographic information accessible and integrated, benefiting everyone.

Imagine a world where data about locations – from urban planning maps to environmental monitoring systems – is at your fingertips. SDI turns this vision into reality. By connecting data, technology, and people, SDI helps improve decision-making and efficiency in numerous areas of our lives.

Smart City: SEDIMARK Helsinki Pilot and Spatial Data

The SEDIMARK Helsinki pilot aims to demonstrate how Digital Twin technology can revolutionize urban mobility, with spatial data as the backbone. SEDIMARK's context broker (NGSI-LD) handles linked data, property graphs, and semantics using three main constructs: Entities, Properties, and Relationships. This integration opens up opportunities for new services and the development of a functional city, enhancing how geospatial data is brought into urban digital twins. In Helsinki, the work focuses on transitioning from a monolithic architecture to a modular, API-driven approach, developing Digital Twin viewers and tools, and collaborating on city-wide geospatial data.
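To make the three NGSI-LD constructs concrete, here is a minimal sketch of what such an entity can look like. The identifiers, attribute names, and values are illustrative assumptions, not taken from the Helsinki pilot.

```python
# A minimal NGSI-LD-style entity illustrating the three constructs:
# an Entity (the object itself), a Property (a value it carries),
# and a Relationship (a link to another entity).
# All identifiers and attribute names here are hypothetical examples.
traffic_sensor = {
    "id": "urn:ngsi-ld:TrafficSensor:001",          # Entity: unique URN
    "type": "TrafficSensor",
    "vehicleCount": {                               # Property: a measured value
        "type": "Property",
        "value": 42,
        "observedAt": "2024-05-01T08:00:00Z",
    },
    "locatedIn": {                                  # Relationship: a typed link
        "type": "Relationship",
        "object": "urn:ngsi-ld:Road:Esplanadi",
    },
}

def entity_relationships(entity):
    """Return the URNs this entity links to via Relationship attributes."""
    return [
        attr["object"]
        for attr in entity.values()
        if isinstance(attr, dict) and attr.get("type") == "Relationship"
    ]

print(entity_relationships(traffic_sensor))  # → ['urn:ngsi-ld:Road:Esplanadi']
```

Because Relationships point at other entities by URN, a broker can traverse them to build the property graph the text mentions.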

Join us on this journey as we dive into the world of Spatial Data Infrastructure and see how it's making our city smarter, more efficient, and better prepared for the future.

Photo credit: https://materialbank.myhelsinki.fi/images/attraction?sort=popularity&openMediaId=6614

When we think of data, especially from diverse traffic sources, beauty isn't typically the first thing that comes to mind. Instead, we imagine numbers, graphs, and charts, all designed to convey information quickly and efficiently. However, what if we could see data not just as a tool for analysis, but as a source of inspiration, capable of producing visuals as captivating as a masterpiece by Vincent van Gogh? Just like van Gogh's "Starry Night" finds beauty in complexity and chaos, we can render data into beautiful, meaningful visualizations.

The Complexity of Traffic Data

Traffic data is inherently complex. It comes from a variety of interoperable systems and devices. Each source provides a different perspective, capturing the flow of vehicles, the density of traffic, and the speed of travel at any given time. When combined, these data points create a comprehensive picture of urban movement.

From Chaos to Clarity

Much like the seemingly chaotic yet harmonious art, raw traffic data can appear overwhelming. However, through careful visualization and simulation, patterns and insights emerge. Advanced algorithms process the data, identifying trends and correlations that aren't immediately apparent. For instance, heat maps can show areas of high congestion, while flow diagrams can illustrate the movement of vehicles through a city over time.
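As a minimal sketch of how such a heat map is built, raw vehicle positions can be binned into a grid whose cell counts mark congestion hot spots. The coordinates below are synthetic, generated only for illustration.

```python
import numpy as np

# Sketch: bin raw vehicle positions into a 2D grid whose per-cell counts
# form a congestion heat map. The positions are synthetic sample data.
rng = np.random.default_rng(0)
positions = rng.uniform(0.0, 1.0, size=(1000, 2))   # (x, y) in a unit "city"

grid, x_edges, y_edges = np.histogram2d(
    positions[:, 0], positions[:, 1], bins=10, range=[[0, 1], [0, 1]]
)

# Each cell now holds a vehicle count; the largest value is the hot spot.
hottest = np.unravel_index(np.argmax(grid), grid.shape)
print(f"busiest cell: {hottest}, count: {int(grid[hottest])}")
```

In practice the same grid, rendered with a colour scale, is exactly the congestion heat map described above.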

The Beauty of Data

Data visualization is an art form in its own right. The choice of colors, shapes, and lines can transform a simple graph into a work of art. For example, a time-lapse visualization of traffic flow can resemble the dynamic motion in an urban city with streams of vehicles.

Helsinki mobility digital twin

The Helsinki mobility digital twin paves the way for a future where cities leverage data. This data-driven revolution, fueled by powerful data visualization, holds immense potential for creating a more efficient, sustainable, and safer urban transportation landscape.

So, can traffic data be beautiful? Absolutely. All it takes is the right perspective and a touch of creativity to turn numbers into a work of art.


In the modern era of big data, the challenge of integrating and analyzing data from various sources has become increasingly complex. Different data providers often use diverse formats and structures, leading to significant challenges in achieving data interoperability. This complexity necessitates robust mechanisms to convert and harmonize data, ensuring it can be used effectively for analysis and decision-making. SEDIMARK has identified two critical components in this process and is actively working on them: the data formatter and the data mapper.

A data formatter is designed to convert data from various providers, each using different formats, into the standardized NGSI-LD format. This standardization is crucial because it allows data from disparate sources to be compared, combined, and analyzed in a consistent manner. Without a data formatter, the heterogeneity of data formats would pose a significant barrier to interoperability. For example, data from one provider might be in XLSX format, another's in JSON, and yet another's in CSV. A data formatter processes these different formats, transforming them into a unified format that can be easily managed and analyzed by SEDIMARK tools.
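The idea can be sketched as follows: rows from a CSV source and a JSON source are normalised into one NGSI-LD-style entity shape. The field names and the `to_entity` helper are illustrative assumptions, not SEDIMARK's actual formatter.

```python
import csv
import io
import json

# Sketch of a data formatter: normalise rows from a CSV source and a JSON
# source into one NGSI-LD-style entity shape. Field names are hypothetical.
CSV_DATA = "sensor_id,count\nA1,10\nA2,7\n"
JSON_DATA = '[{"id": "B9", "vehicles": 12}]'

def to_entity(sensor_id, count):
    """Build one unified entity from a source-specific row."""
    return {
        "id": f"urn:ngsi-ld:TrafficSensor:{sensor_id}",
        "type": "TrafficSensor",
        "vehicleCount": {"type": "Property", "value": int(count)},
    }

def format_csv(text):
    return [to_entity(row["sensor_id"], row["count"])
            for row in csv.DictReader(io.StringIO(text))]

def format_json(text):
    return [to_entity(row["id"], row["vehicles"]) for row in json.loads(text)]

entities = format_csv(CSV_DATA) + format_json(JSON_DATA)
print(len(entities))  # → 3
```

Once both sources share the same shape, downstream tools no longer need to know which provider a record came from.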

A data mapper comes into play after data processing, mapping the stored data to a specific data model. This involves not only aligning the data with the model but also enriching it with quality metrics and metadata. During this stage, the data mapper adds valuable information obtained during the data processing step, such as outliers and their corresponding anomaly scores, and the identification of missing and redundant data. This enriched data model becomes a powerful asset for future analyses, giving a complete picture of the data.

By converting various data formats into a standard format and then mapping and enriching the data, SEDIMARK achieves a higher level of data integration. This ensures that data from multiple sources can be used together seamlessly, facilitating more accurate and comprehensive analyses. Moreover, the inclusion of data quality metrics during the mapping process adds a layer of reliability and trustworthiness to the data. Information about outliers, missing data, and redundancy is crucial for data scientists and analysts, as it allows them to make informed decisions and apply appropriate processing techniques.
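A minimal sketch of the enrichment step might look like this. The metric names, threshold, and record layout are illustrative assumptions, not SEDIMARK's actual schema.

```python
# Sketch of a data mapper's enrichment step: attach quality metadata
# produced during processing (anomaly scores, missing-value flags) to each
# record. Metric names and the threshold are hypothetical examples.
def map_with_quality(records, anomaly_scores, outlier_threshold=0.8):
    mapped = []
    for record, score in zip(records, anomaly_scores):
        mapped.append({
            **record,
            "quality": {
                "anomalyScore": score,
                "isOutlier": score >= outlier_threshold,
                "hasMissingValues": any(v is None for v in record.values()),
            },
        })
    return mapped

records = [
    {"sensorId": "A1", "value": 10},
    {"sensorId": "A2", "value": None},   # missing measurement
]
enriched = map_with_quality(records, anomaly_scores=[0.1, 0.95])
print(enriched[1]["quality"])
```

An analyst reading the enriched records can immediately see which values to trust and which need further treatment.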

The first letter in SEDIMARK stands for Secure. How is security involved in SEDIMARK? In this blog post we present an overview of the Security and Trust domain within SEDIMARK!

Nowadays, the proliferation of large amounts of data makes it essential to ensure the security and integrity of the information exchanged. Traditionally, centralized data marketplaces face security challenges such as data manipulation, unauthorized access, and lack of transparency.

In response to these challenges, Distributed Ledger Technology (DLT) has emerged as an alternative solution, offering decentralized (see "The letter D in SEDIMARK"), immutable, and transparent data exchange mechanisms.

Enhancing Security in Data Exchange

Centralized data marketplaces are susceptible to various security vulnerabilities, including single points of failure and data breaches.

Using DLT mitigates these risks: control is decentralized, and cryptographic mechanisms ensure security.

In the SEDIMARK Marketplace, participants can securely exchange data without relying on third parties (intermediaries), reducing the risk of unwanted data manipulation or unauthorized access to their data (or, more generally, their assets).

Security Features

SEDIMARK will employ (... or is it already?!) key features enabled by DLT, such as smart contracts, Self-Sovereign Identity (SSI), and cryptographic primitives, to enhance the security and transparency of the Marketplace.

Smart contracts automate the execution of agreements between parties, ensuring trustless and tamper-proof transactions.

SSI allows users of the Marketplace to retain full control over their own identity, without relying on centralized authorities (see A Matter of Identities).

Finally, cryptographic primitives are the underlying functions that ensure data security and integrity.

Ensuring Data Origin

Cryptographic functions, such as digests, create a mathematically unique fingerprint for a given asset.

Recording (or "anchoring") this value on the DLT creates an immutable data trail.

So, every user can be certain of the origin of the asset they are purchasing.
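The steps above can be sketched in a few lines: compute a digest of the asset, treat that value as the one anchored on the ledger, and let a buyer recompute and compare. SHA-256 is used here as one common choice of digest function; the asset bytes are made up for the example.

```python
import hashlib

# Sketch: a digest gives any asset a mathematically unique fingerprint.
# Anchoring this value on a DLT lets a buyer later verify the asset's origin.
def fingerprint(asset_bytes: bytes) -> str:
    return hashlib.sha256(asset_bytes).hexdigest()

asset = b"temperature,12.3\ntemperature,12.7\n"   # example asset content
anchored = fingerprint(asset)                     # value recorded on the ledger

# Later, a buyer recomputes the digest and compares it to the anchored one.
assert fingerprint(asset) == anchored             # asset is authentic
assert fingerprint(asset + b"x") != anchored      # any change is detected
print(anchored[:16], "...")
```

Because even a one-byte change yields a completely different digest, the anchored value makes tampering immediately visible.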

This also adds transparency, enhancing trust in this distributed marketplace.

SEDIMARK exploits traditional cryptographic mechanisms as well as DLT to freshen up data (asset!) exchange mechanisms and to secure the Marketplace.

Do you want to know more? Stay tuned for next blog posts by signing up to our newsletter below.

Follow us on @Twitter / X and LinkedIn.

* Source image: shutterstock

The “D6.3 Dissemination and Impact creation activities. First version” deliverable presents the ongoing and completed activities, communication and dissemination material, along with the current status of the Key Performance Indicators (KPIs) for these activities. It also covers the current cooperation efforts with other projects and associations.
During the first half of the project lifetime, a number of dissemination and communication activities have been carried out, reaching a large and varied audience, including users, citizens, other research projects, and the scientific community.
The content of the current document will be continued in the deliverable SEDIMARK_D6.4 Dissemination and Impact creation activities, which is due in M36 (September 2025).

This document, along with all the public deliverables and documents produced by SEDIMARK, can be found in the Publications & Resources section.

In recent years of IoT technological expansion, data has become a business of the utmost importance, driving crucial decision-making systems at the EU and global level and impacting all domains: industry, politics and economy, society and individuals, as well as the environment.

As the volume of incoming data being collected, stored, and processed constantly expands, most systems and techniques struggle to absorb it efficiently, appropriately, and at scale, or are rapidly overwhelmed by technological change. Of equal concern is the quantity of circulating private and sensitive information linked to individuals and organizations. As a consequence, data is often insufficiently managed and maintained, misunderstood due to its complexity, lacking in quality, and ill-adapted to large-scale AI analytics. This in turn leads to inappropriate handling, sharing, and misuse of data across borders and domains, even when it formally conforms to the European GDPR and the FAIR* principles!

For this reason, SEDIMARK uses a data orchestrator called “Mage.ai” to: (i) better organize the integration of multiple data sources, applications, toolboxes, services, and systems, (ii) make data workflows scalable to improve performance and reduce bottlenecks, (iii) ensure data consistency, harmony, and the highest quality, (iv) guarantee data privacy and security compliant with EU regulations through anonymization and decentralized systems, and finally (v) minimize and mitigate potential risks by automating schedules for data and system maintenance, monitoring, and alerting procedures. On top of that, the orchestrator enables all data actors to easily manage, adapt, and visualize the data situation.
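To illustrate the orchestration idea in the abstract, the sketch below wires independent steps (load, clean, export) into one workflow that an orchestrator can schedule and monitor. This is a generic toy, not Mage.ai's actual API, and all step names and data are invented for the example.

```python
# Generic sketch of the orchestration concept (NOT Mage.ai's actual API):
# an orchestrator chains independent steps into a workflow, so sources,
# transforms, and exports stay modular, schedulable, and easy to monitor.
def load(_):
    # Pretend source: two sensor readings, one with a "missing" sentinel.
    return [{"sensor": "A1", "value": 10}, {"sensor": "A2", "value": -999}]

def clean(rows):
    # Drop the sentinel value this source uses for missing data.
    return [r for r in rows if r["value"] != -999]

def export(rows):
    # Stand-in for writing to a sink; report how many rows were exported.
    return {"exported": len(rows)}

PIPELINE = [load, clean, export]

def run(pipeline):
    data = None
    for step in pipeline:
        data = step(data)
    return data

result = run(PIPELINE)
print(result)  # → {'exported': 1}
```

Real orchestrators add scheduling, retries, and observability around exactly this kind of step chain.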

(*) Findable, Accessible, Interoperable, Reusable

The synergy between geosciences and machine learning is at the forefront of today's global concerns. For example, the management of water resources has become a critical issue, and the integration of the two fields is emerging as an innovative solution to such problems.

Geosciences provide a fundamental understanding of water systems. By analyzing geological data, scientists can understand the impact of environmental factors on water systems and assess risks to human settlements, environmental hazards, or water scarcity.

Machine learning brings predictive analytics into this matter, offering the ability to forecast future trends based on historical data. In water management, ML algorithms can predict usage patterns, potential pollution incidents, and the impact of climate change on water resources. This predictive capability is invaluable in planning and implementing strategies for sustainable water usage and conservation.

Case Studies and Applications

Using geological data and historical consumption patterns, machine learning models can predict areas at risk of water scarcity or flood, allowing for early intervention.

Machine learning algorithms can analyze data from various sources to detect and predict pollution levels in water bodies, enabling timely measures to protect water quality.

By combining geological data with climate models, machine learning can forecast the long-term impacts of climate change on water resources, guiding various adaptation strategies.
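The forecasting idea behind these case studies can be sketched with a simple trend fit: learn a relationship from historical consumption and extrapolate it forward. The data below is synthetic; real models would use far richer features (geology, climate, demographics) as the text describes.

```python
import numpy as np

# Sketch: fit a linear trend to historical water consumption and forecast
# a future year. The series is synthetic, made up for illustration only.
years = np.arange(2010, 2024, dtype=float)
consumption = 100.0 + 2.5 * (years - 2010)          # synthetic upward trend

# Ordinary least squares: consumption ≈ a * year + b
A = np.vstack([years, np.ones_like(years)]).T
(a, b), *_ = np.linalg.lstsq(A, consumption, rcond=None)

forecast_2030 = a * 2030 + b
print(f"slope: {a:.2f} units/year, 2030 forecast: {forecast_2030:.1f}")
```

On this exact synthetic trend the fit recovers the slope of 2.5 units per year; with noisy real data the same machinery yields the risk forecasts the case studies mention.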

This interdisciplinary approach not only enhances our understanding of water systems but also equips us with the tools to make informed and sustainable decisions.

Geosciences provide the foundational 'what' and 'why', while machine learning offers the 'when' and 'how'. This combination can provide the strategy of creating efficient, intelligent, and sustainable solutions for urban environments and industrial applications, which can be of interest for large companies like Siemens.

Amid rapid technological progress, Lidar (Light Detection and Ranging) sensors have emerged as pivotal tools in the development of smart cities. These sophisticated sensors utilize laser light to gauge distances and construct intricate three-dimensional maps, providing a trove of data that is reshaping urban planning, traffic management, and public safety on a global scale.

LIDAR in Autonomous Vehicles

Among the many applications of Lidar sensors in smart cities, their integration into autonomous vehicles stands out as particularly impactful. Self-driving cars heavily rely on Lidar technology to navigate their surroundings with precision and safety. By emitting laser pulses and measuring the time it takes for the light to return, Lidar sensors generate real-time data crucial for autonomous vehicles to navigate obstacles, maintain safe distances, and adhere to traffic regulations. As cities worldwide embrace autonomous vehicles in their public transportation systems, Lidar technology is poised to play a pivotal role in ensuring the safety and efficiency of these fleets.
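The time-of-flight principle described above reduces to one formula: the pulse travels to the object and back, so the distance is half the round trip at the speed of light. The 200-nanosecond return time below is an example value.

```python
# Sketch of the time-of-flight principle: a Lidar pulse travels to the
# object and back, so distance is half the round trip at light speed.
SPEED_OF_LIGHT = 299_792_458.0   # metres per second

def distance_from_round_trip(t_seconds: float) -> float:
    return SPEED_OF_LIGHT * t_seconds / 2.0

# Example: a return after 200 nanoseconds corresponds to roughly 30 metres.
d = distance_from_round_trip(200e-9)
print(f"{d:.2f} m")  # → 29.98 m
```

Repeating this measurement millions of times per second across many beam angles is what produces the 3D point clouds the rest of the post describes.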

Case Study: Helsinki's Innovative Approach

In a notable example, the Helsinki pilot, in collaboration with partner companies, has implemented Lidar sensors in the city center, specifically along the Esplanadi streets and pathways. Three Lidar sensors have been strategically placed to collect data on moving vehicles, bicycles, electric scooters, and pedestrians. This real-time information is gathered and analyzed with utmost respect for privacy. The Lidar data analysis in this initiative aims to explore the potential of extracting detailed information about movement patterns in the area. This data, derived from advanced Lidar sensors, could be instrumental in enhancing the safety and appeal of the region.

Creating Detailed 3D Models

The Lidar sensors strategically placed in the area combine their data to generate precise representations of factors such as traffic flow, potential hazards, and the volume of pedestrians and light traffic. These sensors create detailed three-dimensional models of the environment, offering a comprehensive understanding of how different factors, including seasons, events, and traffic arrangements, influence pedestrian and light traffic patterns.

Future Implications and Collaborative Initiatives

The data collected through Lidar sensors not only aids in understanding current scenarios but also opens doors for future improvements. The Helsinki project aims to explore whether analyzing this data can enhance the safety of commuters, improve cycling efficiency, and boost the overall attractiveness of the area for pedestrians. Importantly, the project focuses on anonymous behavior analysis rather than individual tracking, respecting privacy concerns.

References

  • Photographer: Forum Virium Helsinki. The images may be used free of charge when promoting Helsinki.

The recently concluded ecosystem workshops at Helsinki Mobility Lab have provided invaluable insights into the world of digital twins and data marketplaces. In particular, discussions surrounding the Mobility Data Marketplace have shed light on crucial aspects that will shape the future of this evolving landscape.

Key Insights on Mobility Data Marketplace Workshop

Strategic Conceptual Focus

Maintaining a strategic focus at the conceptual level has proven instrumental in comprehending the dynamics of mobility data marketplaces. By centering discussions on needs and concerns, we move beyond mere technical solutions to delve into fundamental questions regarding data acquisition, sharing, and utilization in the realm of mobility. This is the space where the mobility digital twin demonstrates its value, revealing new opportunities for urban development.

In the context of value creation and processes within the digital twins, data becomes a catalyst for process optimization. By leveraging information from other functions more broadly, we can gain a better understanding of our operations' role as part of the whole and anticipate future needs.

Trust in Mobility Data Sharing

Trust emerges as a central theme, especially in a market focused on mobility data. Building mechanisms for trust from the ground up and sustaining it as the ecosystem expands is not only a recommendation but a necessity for the success of mobility data marketplaces.

Benchmarking and Roadmap Initiation in Mobility Data

The recommendation to initiate roadmap work and benchmark solutions from other industries is particularly relevant to the dynamic landscape of mobility. Learning from successful models in related fields can accelerate the development of effective mobility data marketplaces.

Incremental Progress and Pilots in Mobility Data Marketplace

Aligning with the workshop approach, incremental progress through pilots is crucial for the mobility data marketplace. The tangible actions resulting from such an approach validate decisions made during conceptual discussions, ensuring that the marketplace evolves in a way that best serves the needs of its participants. Constructing a roadmap for a data marketplace specific to mobility requires a nuanced understanding of the unique challenges and opportunities within this sector.

As we delve deeper into the complexity of the Mobility Data Marketplace, it becomes evident that the roadmap to a successful mobility data marketplace involves a combination of strategic discussions, practical considerations, trust-building measures, and iterative progress. The future roadmap positions us well for the challenges and opportunities that lie ahead, steering us towards a future where the effective utilization of mobility data contributes significantly to innovation and sustainable development.

References

Mobility Lab - Digital Twin, Data Marketplace workshop
