Seven papers by researchers from the Intelligent Space Robotics Lab (Skoltech Center for Digital Engineering) were accepted at HRI'25, the 20th edition of the IEEE/ACM International Conference on Human-Robot Interaction (Core A), held in Melbourne, Australia.
The accepted papers are:
O. Sautenkov, Y. Yaqoot, A. Lykov, M. A. Mustafa, G. Tadevosyan, A. Akhmetkazy, M. Altamirano Cabrera, M. Martynov, S. Karaf, and D. Tsetserukou, “UAV-VLA: Vision-Language-Action System for Large Scale Aerial Mission Generation,” in Proc. IEEE/ACM Int. Conf. on Human Robot Interaction (HRI 2025), Melbourne, Australia, March 4-6, 2025, in print. (Core2023 A, Scopus, WoS, H-index (SJR)=70)
M. H. Khan, S. Asfaw, D. Iarchuk, M. Altamirano Cabrera, L. Moreno, I. Tokmurziyev, and D. Tsetserukou, “Shake-VLA: Vision-Language-Action Model-Based System for Bimanual Robotic Manipulations and Liquid Mixing,” in Proc. IEEE/ACM Int. Conf. on Human Robot Interaction (HRI 2025), Melbourne, Australia, March 4-6, 2025, in print. (Core2023 A, Scopus, WoS, H-index (SJR)=70)
O. Kobzarev, A. Lykov, and D. Tsetserukou, “GestLLM: Advanced Hand Gesture Interpretation via Large Language Models for Human-Robot Interaction,” in Proc. IEEE/ACM Int. Conf. on Human Robot Interaction (HRI 2025), Melbourne, Australia, March 4-6, 2025, in print. (Core2023 A, Scopus, WoS, H-index (SJR)=70)
I. Tokmurziyev, M. Altamirano Cabrera, L. Moreno, H. Khan, and D. Tsetserukou, “GazeGrasp: DNN-Driven Robotic Grasping with Wearable Eye-Gaze Interface,” in Proc. IEEE/ACM Int. Conf. on Human Robot Interaction (HRI 2025), Melbourne, Australia, March 4-6, 2025, in print. (Core2023 A, Scopus, WoS, H-index (SJR)=70)
G. Tadevosyan, M. Osipenko, D. Aschu, A. Fedoseev, V. Serpiva, O. Sautenkov, S. Karaf, and D. Tsetserukou, “SafeSwarm: Decentralized Safe RL for the Swarm of Drones Landing in Dense Crowds,” in Proc. IEEE/ACM Int. Conf. on Human Robot Interaction (HRI 2025), Melbourne, Australia, March 4-6, 2025, in print. (Core2023 A, Scopus, WoS, H-index (SJR)=70)
A. Erkhov, A. Bazhenov, S. Satsevich, D. Belov, F. Khabibullin, S. Egorov, M. Gromakov, M. Altamirano Cabrera, and D. Tsetserukou, “ViewVR: Visual Feedback Modes to Achieve Quality of VR-based Telemanipulation,” in Proc. IEEE/ACM Int. Conf. on Human Robot Interaction (HRI 2025), Melbourne, Australia, March 4-6, 2025, in print. (Core2023 A, Scopus, WoS, H-index (SJR)=70)
D. Trinitatova, S. Shevelo, and D. Tsetserukou, “Towards Intuitive Drone Operation Using a Handheld Motion Controller,” in Proc. IEEE/ACM Int. Conf. on Human Robot Interaction (HRI 2025), Melbourne, Australia, March 4-6, 2025, in print. (Core2023 A, Scopus, WoS, H-index (SJR)=70)
Dzmitry Tsetserukou, Associate Professor and Head of the Intelligent Space Robotics Lab (Skoltech Center for Digital Engineering):
«The UAV-VLA paper is the first to focus on applying AI to the generation of large-scale aerial missions, opening the door to end-to-end autonomous flights. The papers on Shake-VLA and GestLLM present new ways for humans and robots to interact, through a vision-language-action model and LLM-based gesture recognition. The ISR Lab is becoming one of the world's leaders in Artificial General Intelligence (AGI) for Robotics, alongside such brilliant groups as the Stanford AI Lab (Professor Chelsea Finn) and the BAIR Lab at UC Berkeley (Professor Sergey Levine). We have also proposed new interfaces for interacting with robots, such as GazeGrasp, ViewVR, and the handheld drone motion controller. This is the best result in Russia in terms of the number of papers accepted to a Core A conference by a single lab».