Artificial Intelligence is changing the world, but most AI still exists only in code. Robotics brings AI into the physical world, allowing it to see, move, and interact. Join us at the Australian Institute for Machine Learning (AIML) for a robotics tutorial delivered by our world-class researchers and engineers, and learn about:
An introduction to robotics learning with Professor Minh Hoai Nguyen, AIML Deputy Director
Advanced topics in robotics research with Dr Feras Dayoub, AIML Senior Lecturer
Robotics platforms at AIML with Stefan Podgorski, AIML Principal Engineer
This tutorial is aimed at a general technical audience with programming and machine learning experience, but no prior robotics background. It is suitable for 3rd- and 4th-year computer science or data science undergraduates, graduate students, and AI engineers. If you understand the concept of machine learning and how to train a model to map inputs to target outputs using data, you will be able to follow along.
Note: This is not a MiTSA event — we’re proud to host this exceptional seminar on behalf of our primary sponsor, the Australian Institute for Machine Learning, here on our website.
An introduction to robotics learning with Professor Minh Hoai Nguyen, AIML Deputy Director
This tutorial will cover robot state and action spaces, demonstration data collection, policy training via behaviour cloning, and the practical implementation of control loops—from synchronous to asynchronous execution. The session will also touch on emerging methods such as Vision-Language-Action models, Diffusion Policy, and Action Chunking. These concepts will be introduced in connection with real-world platforms, including the LeRobot SO101 arm and the more advanced UniTree humanoid robot.
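To make the behaviour-cloning workflow described above concrete, here is a minimal sketch in plain Python. Everything in it is invented for illustration (the 1-D state, the scripted expert, the toy dynamics); it is not code from the tutorial or from any real robot platform. It shows the three stages in miniature: collecting demonstration data, fitting a policy by supervised regression, and running the policy in a synchronous control loop.

```python
# Minimal behaviour-cloning sketch (illustrative only; all names and
# dynamics are hypothetical). State and action are 1-D for clarity.

def expert_policy(state):
    # Scripted "demonstrator": always push the state toward zero.
    return -0.5 * state

def collect_demonstrations(n=20):
    # Record (state, action) pairs from the expert across a range of states.
    return [(s / 10.0, expert_policy(s / 10.0)) for s in range(-n, n + 1)]

def fit_policy(demos):
    # Behaviour cloning is supervised learning from states to actions.
    # Here: a closed-form least-squares fit of action = k * state.
    num = sum(s * a for s, a in demos)
    den = sum(s * s for s, _ in demos)
    k = num / den
    return lambda state: k * state

def run_control_loop(policy, state=1.0, steps=10):
    # Synchronous control loop: observe, compute an action, apply it, repeat.
    # (An asynchronous loop would instead run inference and actuation
    # concurrently, e.g. executing a chunk of actions while the next
    # prediction is computed.)
    for _ in range(steps):
        action = policy(state)
        state = state + action  # toy dynamics: the action nudges the state
    return state

policy = fit_policy(collect_demonstrations())
final_state = run_control_loop(policy)
print(abs(final_state) < 0.01)  # learned policy drives the state near zero
```

Real systems replace the least-squares fit with a neural network (e.g. a Diffusion Policy or a Vision-Language-Action model) and the scalar state with camera images and joint readings, but the data-collect / train / control-loop structure is the same.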
Advanced topics in robotics research with Dr Feras Dayoub, AIML Senior Lecturer
This tutorial will explore how powerful foundation models (FMs) are being applied to robotics, bridging the gap between high-level language understanding and low-level physical actions. It will begin with an overview of FMs, their historical evolution, and the taxonomy of language, vision, and multimodal models, leading into robotics-specific advancements such as RT-1, RT-2, and Gemini Robotics. This section will highlight the importance of grounding—linking symbolic representations to real-world perceptions and actions—and examine pioneering approaches in the field. It will also cover how these models enable robots to perceive, reason, and act in complex environments with minimal task-specific training. Finally, the session will address current challenges in safety, interpretability, and real-time deployment, and discuss future directions toward truly embodied and adaptive robotic intelligence.
Robotics platforms at AIML with Stefan Podgorski, AIML Principal Engineer
This tutorial will explore the practical side of robotics. We’ll begin with a brief overview of the current state of robotics and its implications for end-to-end and embodied AI research. The session will introduce the ROS robotics middleware, key simulation tools such as Gazebo and Isaac, and common sensor modalities including LiDAR, RGB-D cameras, stereo and event cameras, and IMUs, highlighting their strengths, limitations, and real-world applications in robotics. You’ll also get an introduction to the robotic platforms available to researchers at AIML, such as TurtleBot, Unitree Go1, DART, and the ALOHA Viper and Widow arm systems. To bring it all to life, the session will include planned live sensor and robotics demonstrations, providing a fun and engaging way to see how these platforms and tools are used in practice.
Enquiries to Hilary Brookes: