Dr.-Ing. Hannan Ejaz Keen

Senior Robotics & AI Engineer
Industrial AI • Autonomous Systems • Safety-Critical Deployment


8+ years of experience in designing, training, and deploying AI systems for autonomous and robotic platforms in real-world, safety-critical environments. My work focuses on robotic foundation models, multimodal perception, and diffusion-based generative models, bridging cutting-edge AI research with production deployment across simulation, embedded systems, and city-scale V2X infrastructure.

Highlights

  • 8+ years in robotics, autonomous systems, and applied AI
  • Lead engineer on large-scale industrial research programs (VALISENS, ENGEL)
  • Deployed AI perception systems on real city infrastructure and connected vehicles
  • Expertise spanning foundation models → data pipelines → embedded deployment
  • Proven technical leader mentoring PhD students and research engineers

What I Do

I design and deploy robust AI systems for industrial robotics and autonomous mobility, with a strong focus on perception, multimodal learning, and system reliability under real-world constraints.
My core expertise lies in:

  • Robotic foundation models, including Vision–Language–Action (VLA) architectures
  • Multimodal perception combining vision, LiDAR, thermal, GNSS, IMU, and V2X data
  • Diffusion-based generative models for synthetic data generation and robustness
  • End-to-end ML engineering, from data curation and simulation to deployment and monitoring

A recurring theme in my work is ensuring that AI systems remain reliable under distribution shifts, rare events, and long-term operation in safety-critical environments.

Current Role

Senior Robotics & AI Engineer
Xitaso GmbH IT & Software Solutions · Karlsruhe, Germany

  • Lead development and deployment of multimodal AI systems for autonomous driving and safety-critical applications
  • Designed and trained diffusion-based generative models (conditional diffusion, ControlNet, object inpainting) for high-fidelity synthetic data generation (a short sketch follows this list)
  • Built scalable data pipelines combining real-world sensor data, Unreal Engine simulation, and synthetic datasets
  • Led VALISENS, a large-scale industrial research project on collaborative perception using infrastructure-mounted sensors and connected vehicles via V2X
  • Deployed AI perception models on real city infrastructure under strict real-time and safety constraints (I2V / V2I)
  • Implemented dataset versioning, out-of-distribution detection, and model-drift monitoring
  • Mentored 3 PhD students and 5 researchers, defining technical roadmaps aligned with business objectives
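
For a concrete flavor of the synthetic-data workflow above, here is a minimal sketch of ControlNet-conditioned generation with Hugging Face diffusers. The checkpoints, prompt, and file paths are illustrative assumptions, not the exact models or data used in my projects.

    # Minimal sketch: ControlNet-conditioned image generation with Hugging
    # Face diffusers. Checkpoints, prompt, and paths are illustrative only.
    import torch
    from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
    from diffusers.utils import load_image

    # A Canny-edge ControlNet pins generation to the scene layout, while
    # the text prompt varies weather and appearance for augmentation.
    controlnet = ControlNetModel.from_pretrained(
        "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
    )
    pipe = StableDiffusionControlNetPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",
        controlnet=controlnet,
        torch_dtype=torch.float16,
    ).to("cuda")

    # Conditioning input: an edge map rendered from a real or simulated
    # frame (hypothetical file).
    edges = load_image("scene_canny_edges.png")

    image = pipe(
        prompt="urban intersection at dusk, heavy rain, photorealistic",
        image=edges,
        num_inference_steps=30,
    ).images[0]
    image.save("synthetic_rainy_intersection.png")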

Selected Flagship Projects

VALISENS – Collaborative Perception for Autonomous Driving

  • Designed distributed multi-sensor fusion pipelines combining roadside and vehicle-mounted sensors (a simplified late-fusion sketch follows below)
  • Enabled V2X-based collaborative perception to improve robustness in complex urban traffic scenarios
  • Deployed systems on real city infrastructure, forming a foundation for Level-3+ automated driving research
  • Resulted in peer-reviewed publications, including papers at ICCV 2025 and in IEEE T-ITS

Focus: Infrastructure perception, V2X, safety-critical deployment
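
As a simplified illustration of the late-fusion idea behind this project: vehicle detections are transformed into the roadside unit's world frame and merged with roadside detections by nearest-neighbor association. The 2D rigid-transform setup, gating threshold, and variable names are my own assumptions for the sketch, not the project's actual pipeline.

    # Illustrative late fusion for collaborative perception.
    import numpy as np

    def to_world(dets_xy: np.ndarray, yaw: float, t_xy: np.ndarray) -> np.ndarray:
        """Apply a 2D rigid transform (rotation by yaw, then translation)."""
        c, s = np.cos(yaw), np.sin(yaw)
        R = np.array([[c, -s], [s, c]])
        return dets_xy @ R.T + t_xy

    def fuse(roadside: np.ndarray, vehicle: np.ndarray, gate: float = 1.5) -> np.ndarray:
        """Greedy association: vehicle detections within `gate` meters of a
        roadside detection are treated as duplicates and averaged; the rest
        are appended as newly discovered objects."""
        fused = list(roadside)
        for det in vehicle:
            d = np.linalg.norm(roadside - det, axis=1) if len(roadside) else np.array([])
            if d.size and d.min() < gate:
                i = int(d.argmin())
                fused[i] = (fused[i] + det) / 2.0  # simple duplicate merging
            else:
                fused.append(det)
        return np.asarray(fused)

    # Toy example: one shared object near (10, 5) and one seen only by the vehicle.
    roadside_dets = np.array([[10.0, 5.0]])
    vehicle_dets_local = np.array([[2.0, 0.0], [25.0, -3.0]])
    vehicle_world = to_world(vehicle_dets_local, yaw=0.1, t_xy=np.array([8.1, 4.9]))
    print(fuse(roadside_dets, vehicle_world))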

ENGEL – Energy-Efficient Flight Guidance & Synthetic Data Generation

  • Led research on diffusion-based image synthesis (conditional diffusion, ControlNet, inpainting)
  • Generated and quantitatively evaluated synthetic datasets for multi-weather and rare-condition robustness (an FID-style check is sketched below)
  • Studied trade-offs between realism, semantic faithfulness, and controllability
  • Applied results directly to industrial perception pipelines rather than simulation-only benchmarks

Focus: Data-centric robotics, robustness, foundation-model workflows
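
One common quantitative check for synthetic imagery of this kind is the Fréchet Inception Distance (FID). The sketch below uses torchmetrics, with random tensors standing in for real and generated batches; the source does not name a specific metric library, so this is an assumption for illustration.

    # Sketch: FID between a real and a synthetic image batch, computed
    # with torchmetrics. Random tensors stand in for actual data here.
    import torch
    from torchmetrics.image.fid import FrechetInceptionDistance

    fid = FrechetInceptionDistance(feature=2048)

    # Images must be uint8 tensors of shape (N, 3, H, W).
    real = torch.randint(0, 255, (16, 3, 299, 299), dtype=torch.uint8)
    fake = torch.randint(0, 255, (16, 3, 299, 299), dtype=torch.uint8)

    fid.update(real, real=True)
    fid.update(fake, real=False)
    print(f"FID: {fid.compute().item():.2f}")  # lower = closer to the real set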

Ponton Boot – Autonomous Surface Vehicle

  • Developed AI-based surface water navigation for an autonomous pontoon boat
  • Focused on perception, mapping, and traversability in flooded environments (a simplified grid sketch follows below)
  • Formed the basis of my doctoral dissertation and multiple peer-reviewed publications

Focus: Robotics perception in extreme environments
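
A minimal sketch of the grid-based traversability idea for a surface vehicle, assuming water-depth estimates and an obstacle mask as inputs and a hull-draft threshold; the inputs and threshold are illustrative, not the dissertation's actual formulation.

    # Toy traversability grid: a cell is traversable when the estimated
    # water depth covers the hull draft and no obstacle was detected there.
    import numpy as np

    def traversability(depth_m: np.ndarray, obstacles: np.ndarray,
                       min_draft_m: float = 0.3) -> np.ndarray:
        """Boolean grid: True where the boat can safely travel."""
        return (depth_m >= min_draft_m) & ~obstacles

    depth = np.array([[0.0, 0.2, 0.8],
                      [0.5, 1.1, 1.4],
                      [0.6, 0.9, 0.1]])
    obstacles = np.array([[False, False, False],
                          [False, True,  False],
                          [False, False, False]])
    print(traversability(depth, obstacles))
    # [[False False  True]
    #  [ True False  True]
    #  [ True  True False]]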

Autonomous Campus Bus – Pedestrian-Zone Deployment

  • Designed pedestrian-focused perception and classification models using pose, height, and motion cues (a toy cue-based classifier is sketched below)
  • Deployed perception and decision-support systems on an autonomous bus in pedestrian zones
  • Released a public dataset and published results at IROS

Focus: Safety-critical perception for urban autonomy
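
To make the cue-based classification idea concrete, here is a toy sketch with hand-crafted features and a small scikit-learn classifier. The feature set, toy data, and model choice are assumptions for illustration, not the deployed system.

    # Toy cue-based pedestrian classification: height, pose, and motion
    # features feed a small classifier. Everything here is illustrative.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    # Each row: [height_m, torso_lean_deg, speed_m_per_s, step_freq_hz]
    X = np.array([
        [1.75,  5.0, 1.4, 1.9],   # walking adult
        [1.10, 15.0, 2.8, 2.6],   # running child
        [1.80,  0.0, 0.0, 0.0],   # standing adult
        [0.40,  0.0, 6.0, 0.0],   # non-pedestrian (small fast object)
        [2.20,  0.0, 8.0, 0.0],   # non-pedestrian (slow vehicle)
    ])
    y = np.array([1, 1, 1, 0, 0])  # 1 = pedestrian, 0 = other

    clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
    print(clf.predict([[1.65, 4.0, 1.2, 1.8]]))  # -> [1], pedestrian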

Core Expertise

Robotic Foundation Models & Multimodal AI

  • Vision–Language–Action (VLA) models
  • Multimodal transformers
  • Diffusion models (conditional, ControlNet, inpainting)
  • Synthetic data generation, dataset versioning, drift detection

Robotics Systems & Deployment

  • ROS 2, Gazebo, Unreal Engine
  • Autonomous vehicles and mobile robotics
  • Embedded AI on NVIDIA Jetson (Linux)
  • Sensor fusion and perception pipelines
  • V2X communication

ML Engineering

  • PyTorch, TensorFlow, CUDA (training)
  • Model deployment under latency constraints
  • MLOps, CI/CD, monitoring, validation
  • Safety-critical AI systems

Selected Publications

  • A LiDAR-Visual-Thermal Dataset Enabling Vulnerable Road User Focused Roadside Perception – ICCV 2025
  • A Systematic Literature Review on Vehicular Collaborative Perception – IEEE T-ITS
  • Traversability Mapping for Safe Navigation in Flooded Environments – ICRA 2023
  • Drive on Pedestrian Walk: TUK Campus Dataset – IROS