Tuesday, 06 February 2024

Explainable Multi-Modal Perception for Automated Vehicles

Topic and Goal of the Thesis

In automated driving, sophisticated multi-modal perception systems are essential to the safety and efficacy of autonomous vehicles.

The goal of this thesis is the conception and development of an explainable multi-modal perception system that provides insight into the reasoning behind the system's predictions and improves its interpretability. This could involve techniques such as attention mechanisms[1] or visualization tools to understand how the detector processes data from multiple sensors to arrive at its predictions.
This thesis should, for example, help answer the following questions about a multi-sensor perception system: What is happening inside the neural networks for the current sensory input? Where is the perception system "looking"? Which sensor(s) are responsible for the current detection?
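As an illustration of the kind of attention-based explanation mentioned above, the following is a minimal NumPy sketch of attention rollout (Abnar & Zuidema, 2020), one common way to trace a transformer detection head's output back to its input tokens. All shapes and the toy token layout (one query token plus three sensor tokens) are illustrative assumptions, not part of the existing framework.

```python
import numpy as np

def attention_rollout(attentions):
    """Attention rollout: propagate attention through the layers by
    mixing each layer's attention with the residual (identity) path
    and multiplying the resulting row-stochastic matrices."""
    result = np.eye(attentions[0].shape[0])
    for attn in attentions:
        attn_res = 0.5 * attn + 0.5 * np.eye(attn.shape[0])
        attn_res = attn_res / attn_res.sum(axis=-1, keepdims=True)
        result = attn_res @ result
    return result

# Toy example: 4 tokens (e.g. 1 query token + 3 sensor tokens), 2 layers
rng = np.random.default_rng(0)
attns = [rng.random((4, 4)) for _ in range(2)]
attns = [a / a.sum(axis=-1, keepdims=True) for a in attns]  # row-normalize

rollout = attention_rollout(attns)
# Row 0 attributes the query token's output across the input tokens;
# the largest entries indicate which sensor tokens drove the detection.
print(rollout[0])
```

Because each layer matrix is row-stochastic, every row of the rollout sums to one and can be read directly as a per-token attribution.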

The basis of this thesis is an existing multi-modal perception framework that is currently integrated into our test vehicle. The existing computer vision models must be extended with explainability and interpretability features. Based on these features, ROS-based visualization tools should be implemented that help developers, users, or authorities understand the current state and reasoning of the perception system.
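A visualization tool of this kind might, for instance, blend a saliency map over the camera image before publishing it on a ROS image topic. The helper below is a hypothetical sketch of that blending step in plain NumPy; the function name, the red-channel heatmap encoding, and the mention of `cv_bridge` for publishing are assumptions, not part of the existing framework.

```python
import numpy as np

def overlay_saliency(image, saliency, alpha=0.5):
    """Blend a saliency map (H, W) onto an RGB image (H, W, 3).
    Saliency is min-max normalized and encoded in the red channel.
    In a ROS node, the result could then be converted with cv_bridge
    and published as a sensor_msgs/Image for inspection in RViz."""
    s = (saliency - saliency.min()) / (saliency.max() - saliency.min() + 1e-8)
    heat = np.zeros_like(image, dtype=np.float32)
    heat[..., 0] = s * 255.0  # red channel encodes saliency
    blended = (1 - alpha) * image.astype(np.float32) + alpha * heat
    return blended.clip(0, 255).astype(np.uint8)

# Toy example: uniform gray image, single salient pixel at (1, 1)
img = np.full((4, 4, 3), 128, dtype=np.uint8)
sal = np.zeros((4, 4))
sal[1, 1] = 1.0
out = overlay_saliency(img, sal)
```

The salient pixel ends up visibly redder than its neighbors, giving a quick visual answer to "where is the perception system looking?".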

[1] Till Beemelmanns, Wassim Zahr and Lutz Eckstein, "Explainable Multi-Camera 3D Object Detection with Transformer-Based Saliency Maps", Machine Learning for Automated Driving Symposium, New Orleans, 2023.

Working Points

  • Literature research on Explainable AI (XAI) and Computer Vision for Automated Driving
  • Extend existing perception models with interpretability-enhancing methods (XAI)
  • Training of the neural networks on publicly available datasets on our compute cluster, if necessary
  • Integration of new XAI perception models into our test vehicle
  • Implementation of an XAI visualization interface

Requirements

  • Python / ROS / Machine Learning / Computer Vision
  • Enthusiasm for Machine Learning and Robotics

What we offer

  • Private NVIDIA A100 compute cluster with SSH access
  • Existing learning framework for machine learning applications
  • Remote work / Office

Note: Please attach a short CV and a transcript of grades.

Contact

Till Beemelmanns M.Sc.
+49 241 80 26533
E-Mail

Type of Thesis

Master's thesis

Start

At the earliest possible date

Prerequisites

Python, ROS, Machine Learning, Computer Vision

Language

German, English

Research Area

Vehicle Intelligence & Automated Driving

Address

Institut für Kraftfahrzeuge
RWTH Aachen University
Steinbachstraße 7
52074 Aachen · Germany

office@ika.rwth-aachen.de
+49 241 80 25600
