ICARSC2025

IEEE International Conference on Autonomous Robot Systems and Competitions

April 2nd and 3rd, 2025

PROGRAM

Each paper is allowed a maximum of 12 minutes for presentation and 3 minutes for questions and answers.

Special Sessions ICARSC 2025

IEEE Education Society

Robotics in Education: Transforming Learning Through Innovation

Chairs: Anikó Costa, Paulo Ferreira, Rui Lopes

We invite researchers, educators, and practitioners to submit papers exploring the integration of robotics in education. Topics of interest include innovative teaching methods, robotics in STEM curricula, educational robotics systems, AI and robots in education, hands-on applications, teacher training, and the social and ethical impacts of robotics in education. Join us in discussing how to shape the future of education through robotics!

IEEE Robotics and Automation Society

Extended Reality Applied to Human-Robot Interaction

Chairs: Paulo Menezes, Ana Lopes, Paulo Gonçalves and João Bimbo 

Extended reality blends real and virtual worlds, transforming how humans and robots collaborate. By sharing a spatial understanding of the environment, extended reality enables seamless cooperation on tasks through intuitive interaction. Like robots, extended reality devices perform spatial perception tasks (e.g., visual SLAM), facilitating co-localization and enhancing human-robot interaction.

Extended Reality interfaces include Augmented Reality (AR), overlaying virtual elements on the real world; Virtual Reality (VR), immersing users in digital environments; and Mixed Reality (MR), merging real and virtual content interactively. AR and MR are particularly vital for enabling shared spatial awareness and collaboration.

This special session highlights advances in Extended Reality for human-robot collaboration across industries like healthcare, manufacturing, and education, showcasing new developments and applications.

KEYNOTE SPEAKERS

Ana C. Murillo

Beyond Handcrafted Rules: AI-Driven Drone Swarms

This talk will cover recent advancements in automating drone swarm formations using foundation models. The first work, CLIPSwarm, leverages vision-language foundation models to generate drone formations that match natural language descriptions. CLIPSwarm refines simple robot formations based on CLIP similarity, enabling the generation of 2D shapes from single-word inputs. The second work, Gen-Swarms, introduces a novel framework that combines deep generative models with reactive navigation algorithms to automate the creation of 3D drone shows from text categories. By adapting flow matching models and incorporating motion constraints, Gen-Swarms generates smooth, collision-free trajectories for drones to form desired shapes. Together, these works demonstrate the power of foundation models to automate the artistic design and control of drone swarms, addressing challenges in high-level semantic control, trajectory generation, and collision avoidance.

Ana is an associate professor at the University of Zaragoza and one of the two coordinators of the Robotics, Computer Vision and Artificial Intelligence Lab (https://ropert.i3a.es/), where she leads a team of researchers in computer vision and machine learning. She obtained her PhD in Computer Science at the University of Zaragoza, with a thesis on visual localization for robotics, and since then she has been faculty at the University of Zaragoza, visiting faculty at UC San Diego, and a scientist at several companies. She has contributed to numerous publications, research projects, and industry consulting projects in the fields of computer vision, machine learning, and robotics. Her current research topics include visual recognition and scene understanding for robotics and healthcare applications, which present data- and resource-constrained environments.

Paulo Menezes

Beyond Emotions: Affective Computing, VR, and Robotics in Mutual Awareness and Safe Interaction

This talk explores the evolution of human-machine interaction, emphasizing affective computing for enhanced user experiences. Beyond basic emotion detection, it introduces “emotional activations” as crucial for intuitive, engaging systems. Humans naturally respond with attraction or rejection to stimuli based on emotions, guiding interaction design toward simplicity and fidelity to foster trust and reduce frustration. This principle has influenced game-based approaches in VR/AR and robotics, enhancing engagement in learning, therapy, and wellbeing.

Trust is key to interactive systems, mirroring human-animal relationships where predictability and mutual awareness ensure safe coexistence. Just as pets recognize human presence and adapt, robots must perceive human cues—such as gaze and body language—and express their internal states. The talk presents applications in therapeutic and industrial contexts, emphasizing mutual awareness as a foundation for safer, more trustworthy, and engaging intelligent systems.

Paulo Menezes is an Associate Professor in the Department of Electrical and Computer Engineering at the University of Coimbra, Portugal, and a Senior Researcher at the Institute of Systems and Robotics (ISR-UC). He leads the Immersive Systems and Sensorial Stimulation Laboratory, where his research is dedicated to advancing the fields of Affective Computing, Human-Robot Interaction, Virtual and Augmented Reality, and Human Behavior Analysis. His work is particularly focused on developing innovative solutions for Active and Healthy Living, Rehabilitation Games, and Human-Centric Interaction with both Social and Industrial Robots. With a strong interdisciplinary approach, Paulo Menezes explores the integration of sensorial stimulation, artificial intelligence, and immersive environments to enhance human-computer and human-robot interactions. His research contributes to improving assistive technologies, rehabilitation systems, and social robotics, fostering advancements in healthcare, industry, and daily life applications. Through national and international collaborations, he has been actively involved in R&D projects, bridging academia and industry to drive innovation in human-centered technologies.

Gala Dinner

A Gala Dinner awaits you on the first day of the conference, April 2nd.

Details will be announced here soon.