Hypermaps | Closing the complexity gap in robotic mapping

4-year research fellowship on multi-layer and semantic spatial representations for robotics

abstract

Environmental awareness is a crucial skill for robotic systems intended to autonomously navigate and interact with their surroundings.

Robots access knowledge about their environment through maps. However, robotic mapping currently suffers from a significant “complexity gap”: while recent advances in computer vision allow machines to perceive their surroundings like never before, through object detection and people tracking, robots still rely on maps that contain just enough information to navigate, but too little for many tasks required by advanced autonomy. For example, most maps contain no semantic or dynamic information about the environment, which is needed for any application involving interaction with people or specific objects. Until this gap is bridged, mobile robots will not be able to operate autonomously in dynamic environments.

Hypermaps lays the groundwork for the next level of interaction between robots and their environment by closing the complexity gap. In this project, we propose to go beyond today’s multi-layer maps with a new formalism, called hypermaps, in which spatio-temporal knowledge (e.g., occupancy, semantics from deep object recognition, people movement in the environment) is stored and processed through advanced artificial intelligence, offering the robot task-specific maps to complete its missions. The core hypothesis of the project is that such a formalism will leverage the interplay between different maps to extract more information and enable deeper reasoning: anomalies in one map will be detected and corrected by exploiting its correlation with the other maps, and information not visible in any single map will become visible when the layers are combined.

Closing the complexity gap is a fundamental step towards general, fully autonomous robots, able to execute high-level tasks and interact with us and their environment.

journal articles

  1. RA-L
    Event-Grounding Graph: Unified Spatio-Temporal Scene Graph from Robotic Observations
    Phuoc Nguyen, Francesco Verdoja, and Ville Kyrki
    IEEE Robotics and Automation Letters, May 2026

conference articles

  1. ICRA
    QuASH: Using Natural-Language Heuristics to Query Visual-Language Robotic Maps
    Matti Pekkanen, Francesco Verdoja, and Ville Kyrki
    In 2026 IEEE Int. Conf. on Robotics and Automation (ICRA), Jun 2026
  2. IROS
    REACT: Real-time Efficient Attribute Clustering and Transfer for Updatable 3D Scene Graph
    Phuoc Nguyen, Francesco Verdoja, and Ville Kyrki
    In 2025 IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS), Oct 2025
  3. IROS
    Do Visual-Language Grid Maps Capture Latent Semantics?
    Matti Pekkanen, Tsvetomila Mihaylova, Francesco Verdoja, and Ville Kyrki
    In 2025 IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS), Oct 2025
  4. IROS
    Bayesian Floor Field: Transferring people flow predictions across environments
    Francesco Verdoja, Tomasz Piotr Kucner, and Ville Kyrki
    In 2024 IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS), Oct 2024

workshop articles

  1. ICRA
    Evaluating the quality of robotic visual-language maps
    Matti Pekkanen, Tsvetomila Mihaylova, Francesco Verdoja, and Ville Kyrki
    May 2024
    Presented at the “Vision-Language Models for Navigation and Manipulation (VLMNM)” workshop at the IEEE Int. Conf. on Robotics and Automation (ICRA)
  2. ICRA
    Using occupancy priors to generalize people flow predictions
    Francesco Verdoja, Tomasz Piotr Kucner, and Ville Kyrki
    May 2024
    Presented at the “Long-term Human Motion Prediction” workshop at the IEEE Int. Conf. on Robotics and Automation (ICRA)

preprints

  1. IROS
    Rheos: Modelling Continuous Motion Dynamics in Hierarchical 3D Scene Graphs
    Iacopo Catalano, Francesco Verdoja, Javier Civera, Jorge Peña-Queralta, and Julio A. Placed
    Mar 2026
    2026 IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS) (submitted)
  2. RO-MAN
    Relational Scene Graphs for Object Grounding of Natural Language Commands
    Julia Kuhn, Francesco Verdoja, Tsvetomila Mihaylova, and Ville Kyrki
    Mar 2026
    2026 IEEE Int. Conf. on Robot and Human Interactive Communication (RO-MAN) (submitted)