Shared control combines human intention with autonomous decision-making, from low-level safety overrides to high-level task guidance, enabling systems that adapt to users while ensuring safety and performance. This enhances task effectiveness and user experience across domains such as assistive robotics, teleoperation, and autonomous driving. However, existing shared control methods, based on, e.g., Model Predictive Control, Control Barrier Functions, or learning-based control, struggle with feasibility, scalability, or safety guarantees, particularly since user input is unpredictable. To address these challenges, we propose an assistive controller framework based on a Constrained Optimal Control Problem that incorporates an offline-computed Control Invariant Set, enabling online computation of control actions that ensure feasibility, strict constraint satisfaction, and minimal override of user intent. Moreover, the framework can accommodate a structured class of non-convex constraints, which are common in real-world scenarios. We validate the approach through a large-scale user study with 66 participants, one of the most extensive in shared control research, using a computer game environment to assess task load, trust, and perceived control, in addition to performance. The results show consistent improvements across all these aspects without compromising safety or user intent.
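In its convex special case, the minimal-intervention idea reduces to a small quadratic program: find the admissible input closest to the user's command such that the next state remains inside the precomputed Control Invariant Set. The sketch below is illustrative only, not the paper's implementation: it assumes discrete-time linear dynamics, a hypothetical polytopic invariant set (the H, h values are placeholders, not a verified invariant set), and box input bounds, and it omits the structured non-convex constraints the paper handles.

```python
# Minimal-intervention safety filter: a hedged sketch, not the authors' code.
# Assumes x_next = A x + B u and an offline-computed polytopic set {x : H x <= h}.
import numpy as np
import cvxpy as cp

def safe_control(A, B, H, h, x, u_user, u_min, u_max):
    """Return the feasible input closest to the user's command such that
    the next state stays inside the (assumed) control invariant set."""
    u = cp.Variable(B.shape[1])
    x_next = A @ x + B @ u
    objective = cp.Minimize(cp.sum_squares(u - u_user))   # minimal override
    constraints = [H @ x_next <= h,                       # remain in the set
                   u >= u_min, u <= u_max]                # actuator limits
    cp.Problem(objective, constraints).solve()
    return u.value

# Toy example: 1-D double integrator near the set boundary. The user pushes
# toward the boundary (u_user = 1); the filter returns the closest safe input.
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
H = np.array([[1.0, 0.5], [-1.0, -0.5]])  # hypothetical half-space description
h = np.array([1.0, 1.0])
x = np.array([0.8, 0.6])
print(safe_control(A, B, H, h, x, u_user=np.array([1.0]), u_min=-5.0, u_max=5.0))
```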
@inproceedings{202606_chaubey_misc,
  address   = {Vienna, Austria},
  title     = {{Minimal} {Intervention} {Shared} {Control} with {Guaranteed} {Safety} under {Non}-{Convex} {Constraints}},
  booktitle = {2026 {IEEE} {Int.} {Conf.} on {Robotics} and {Automation} ({ICRA})},
  publisher = {IEEE},
  author    = {Chaubey, Shivam and Verdoja, Francesco and Deka, Shankar and Kyrki, Ville},
  url       = {https://arxiv.org/abs/2507.02438},
  month     = jun,
  year      = {2026},
}
ICRA
QuASH: Using Natural-Language Heuristics to Query Visual-Language Robotic Maps
Matti Pekkanen, Francesco Verdoja, and Ville Kyrki
In 2026 IEEE Int. Conf. on Robotics and Automation (ICRA), Jun 2026
Embeddings from Visual-Language Models are increasingly utilized to represent semantics in robotic maps, offering open-vocabulary scene understanding that surpasses traditional, limited label sets. Embeddings enable on-demand querying by comparing embedded user text prompts to map embeddings via a similarity metric. The key challenge in performing the task indicated in a query is that the robot must determine which parts of the environment are relevant to the query. This paper proposes a solution to this challenge. We leverage natural-language synonyms and antonyms associated with the query within the embedding space, apply heuristics to estimate the region of language space relevant to the query, and use that estimate to train a classifier that partitions the environment into matches and non-matches. We evaluate our method through extensive experiments, querying both maps and standard image benchmarks. The results demonstrate increased queryability of maps and images. Our querying technique is agnostic to the representation and encoder used, and requires only limited training.
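As a rough illustration of the querying scheme, the sketch below embeds hypothetical synonym and antonym lists with a placeholder text encoder, trains a small classifier on those embeddings, and partitions map-cell embeddings into matches and non-matches. Everything here is assumed for illustration: `encode_text` stands in for whatever encoder produced the map (the method is stated to be encoder-agnostic), and the word lists, classifier choice, and heuristics are not the paper's.

```python
# Hedged sketch of embedding-space querying, not the authors' implementation.
import numpy as np
from sklearn.linear_model import LogisticRegression

def encode_text(texts):
    # Placeholder: substitute the text encoder matching your map embeddings,
    # e.g. a CLIP text tower. Random features keep this sketch runnable.
    rng = np.random.default_rng(abs(hash(tuple(texts))) % 2**32)
    return rng.normal(size=(len(texts), 512))

def query_map(map_embeddings, synonyms, antonyms):
    """Train a small classifier in embedding space from language heuristics,
    then partition map cells into matches / non-matches."""
    X = np.vstack([encode_text(synonyms), encode_text(antonyms)])
    y = np.concatenate([np.ones(len(synonyms)), np.zeros(len(antonyms))])
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    return clf.predict(map_embeddings).astype(bool)  # True = match

# Example: query a toy map (random stand-in embeddings) for "chair"-like cells.
map_embeddings = np.random.default_rng(0).normal(size=(100, 512))
matches = query_map(map_embeddings,
                    synonyms=["chair", "seat", "stool", "armchair"],
                    antonyms=["floor", "wall", "ceiling", "table"])
print(matches.sum(), "of", len(matches), "map cells match")
```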
@inproceedings{202606_pekkanen_quash,
  address    = {Vienna, Austria},
  title      = {{QuASH}: {Using} {Natural}-{Language} {Heuristics} to {Query} {Visual}-{Language} {Robotic} {Maps}},
  shorttitle = {{QuASH}},
  booktitle  = {2026 {IEEE} {Int.} {Conf.} on {Robotics} and {Automation} ({ICRA})},
  publisher  = {IEEE},
  author     = {Pekkanen, Matti and Verdoja, Francesco and Kyrki, Ville},
  url        = {http://arxiv.org/abs/2510.14546},
  month      = jun,
  year       = {2026},
}