
Job description
Huawei Canada has an immediate 12-month contract opening for a Researcher.
About the team:
The Human-Machine Interaction Lab unites global talents to redefine the relationship between humans and technology. Focused on innovation and user-centered design, the lab strives to advance human-computer interaction research. Our team includes researchers, engineers, and designers collaborating across disciplines to develop novel interactive systems, sensing technologies, wearable and IoT systems, human factors, computer vision, and multimodal interfaces. Through high-impact products and cutting-edge research, we aim to enhance user experiences and interactions with technology.
About the job:
Perform fundamental and applied research in areas such as human-robot interaction, robot foundation models, multimodal spatial reasoning, self-supervised representation learning, large behavior models, and audio-vision-language-action integration.
Develop and demonstrate foundational robot capabilities such as 3D spatial understanding, robot dexterity, task generalization and adaptation, and safe, natural human-robot interaction.
Conduct research on AI models for spatial intelligence and robot dexterity, leveraging simulation and large-scale multimodal data for task generalization.
Integrate and evaluate large foundation models (e.g., ALMs/VLMs/LLMs) for embodied AI applications involving audio and speech perception, scene understanding, and interactive reasoning.
Translate theoretical concepts and mathematical formulations into efficient, executable algorithms and code.
Conduct empirical studies and benchmarking on both simulated and physical robotic systems, including manipulators, mobile robots, and audio-vision sensor platforms.
Contribute to project proposals, technical reports, and intellectual property generation (e.g., patents).
Publish or contribute to top-tier AI and robotics conferences and journals.
Job requirements
About the ideal candidate:
PhD in Computer Science, Electrical/Computer Engineering, Robotics, or a related field, or a Master’s degree with equivalent research and development experience.
Strong research background in topics such as:
Human-robot interaction design and innovations
Imitation learning, reinforcement learning, or self-supervised representation learning
Multimodal learning (audio, image, text, tactile, etc.), with an emphasis on audio and speech understanding
3D scene understanding and reconstruction, task-and-motion planning, and robotic perception and control
Foundation model adaptation, including ALM/VLM/LLM fine-tuning and integration for embodied AI
Hands-on experience integrating and evaluating AI models with real-world data from microphones, RGB/depth cameras, or LiDAR sensors, and deploying them on physical robotic platforms.
Familiarity with robotic simulation environments (e.g., Isaac Sim, MuJoCo, PyBullet) and robot operating systems (e.g., ROS/ROS2).
Proficiency in Python and C++, with experience in PyTorch or TensorFlow.
Proven research record demonstrated by publications in top-tier venues.
Good understanding of recent trends in robot foundation models, audio-vision-language-action models, and large behavior models.
Experience or strong interest in human-robot interaction, multimodal communication, and voice-based interaction is an asset.
