From Lab Curiosity to Real-World Robots: A Student-Facing Q&A with Dr. Feras Dayoub
Faculty Focus – Dr. Feras Dayoub
Q: What drew you into embodied AI and robotic vision?
A: A drive to put computer vision on real robots to do useful work—despite the difficulty—because seeing systems act in the physical world is uniquely rewarding. Embodied AI adds the full loop of sensing → reasoning → acting, which makes the problem challenging and exciting.
Q: How do you view deployment and the end-to-end vs hybrid debate?
A: Pure end-to-end pipelines are compelling research, but real robots need reliability and safety. In practice that means mixing learned components with classical, well-tested modules.
Q: What does “thinking” look like in today’s systems?
A: It’s best seen as a learned mapping from sensory inputs to useful outputs (classes, boxes, or actions). That mapping helps a robot complete a task, but it’s not human-style reasoning.
Q: Could one big model learn everything end-to-end?
A: Possibly in principle, but data-driven systems can be brittle under domain shift and edge cases. You shouldn’t blindly trust every output just because a model always produces one.
Q: For students: focus on data, models, or decision layers?
A: All are valid research spaces—there is no single “right” answer. That said, data quality is often the bottleneck, and most robotics projects start from strong pretrained models.
Q: What’s a good “hello-world” project?
A: Use a simulator with a robotic arm to implement a simple pick-and-place. Wire in an open-source model, replicate a known result, and learn where the integration actually breaks.
Q: When should a student seek guidance?
A: Start something tangible first so feedback can be specific and grounded. Mentors can then help you right-size scope—expanding promising ideas or trimming moonshots.
Q: How do PhD and industry paths differ—and can you switch?
A: Early on, most students don’t see the full menu, and that’s normal. Movement between research and industry is common; what matters is continuous learning so your value compounds.
Q: Should robots mimic humans—or just be useful?
A: Optimize for utility: safe around people, efficient from A to B, and careful not to break things. Human-level intelligence is ill-defined; clear, task-relevant capability is the target.
Q: What are useful robots today—and what’s next?
A: Industrial arms and household helpers lead today; highly autonomous systems sit under tight regulation.
A note from Dr. Feras about bringing research spaces to university students
Lower the barrier to first contact: The fastest way to make advanced topics feel real is to let students touch them: simulators are plentiful and open-source models are easy to try. With a small scaffold and a few pointers, students can stand up a tiny experiment in an afternoon and immediately see what “works” versus what just sounds good.
Adopt a project-first mentorship culture: Feedback is 10× better when there’s a concrete artifact on the table. Encourage students to attempt a tiny build before asking for guidance; then supervisors can tune scope, redirect energy, or highlight risks with real evidence instead of theory.
Teach replication as a learning superpower: A small, shippable “hello-world” (e.g., pick-and-place in sim) builds vocabulary, exposes wiring issues, and reveals the gritty constraints that lectures can’t. Replication isn’t busywork—it’s how students internalize patterns they’ll reuse on novel problems.
Make career exploration explicit (and flexible): Treat research and industry as permeable lanes rather than mutually exclusive paths. Use talks, blogs, and showcases to “break the ceiling” of what students think is possible, while reinforcing that continuous learning—not early specialization—is what keeps their value growing.
Anchor ambition in utility: Define success in practical terms: safety around humans, reliable task completion, efficiency, and minimal breakage. Clear, measurable criteria keep projects grounded and make evaluation fair and motivating.
Dr. Feras’ Mini Project Initiation suggestions (3-step checklist)
Replicate something tiny: In a simulator, implement a one-task pipeline (e.g., pick-and-place) using a public model; record what you changed and why.
Instrument and observe: Log inputs/outputs, failure cases, and latency; note where the model or the wiring breaks in edge cases.
Reflect and iterate: Write a one-page brief: what worked, what failed, what you’d try next; bring this artefact to your mentor for targeted feedback.
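The “instrument and observe” step above can be sketched in a few lines of plain Python. This is a minimal, hypothetical harness—`run_grasp` is a stand-in for whatever public model you actually wire in, and the failure condition is invented for illustration—but the pattern (log inputs, outputs, latency, and failures per trial, then persist the raw log) is what gives the one-page brief in step 3 concrete numbers to point at.

```python
import json
import random
import time

# Hypothetical stand-in for a public grasping model's predict() call.
# In a real project this would wrap the model you integrated in step 1.
def run_grasp(object_pose):
    time.sleep(0.005)  # simulate inference latency
    # Invented failure mode: the policy struggles near the workspace edge.
    success = abs(object_pose["x"]) < 0.4
    return {"success": success, "grasp_x": object_pose["x"]}

random.seed(0)  # reproducible trials make failure cases replayable
log = []
for trial in range(20):
    pose = {"x": random.uniform(-0.5, 0.5), "y": random.uniform(-0.5, 0.5)}
    t0 = time.perf_counter()
    out = run_grasp(pose)
    latency_ms = (time.perf_counter() - t0) * 1000
    log.append({"trial": trial, "input": pose, "output": out,
                "latency_ms": latency_ms})

# Summarize: how often it fails and how long each call takes.
failures = [r for r in log if not r["output"]["success"]]
mean_latency = sum(r["latency_ms"] for r in log) / len(log)
print(f"trials={len(log)} failures={len(failures)} "
      f"mean_latency_ms={mean_latency:.1f}")

# Persist the raw log so edge-case failures can be inspected later.
with open("trial_log.json", "w") as f:
    json.dump(log, f, indent=2)
```

Even a toy harness like this surfaces the questions a mentor can act on: which inputs fail, whether latency is stable, and whether the failures cluster in a pattern worth investigating.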
End-to-End Robotics Learning - On Tuesday 2nd September 2025 at AIML