Acoustic Simultaneous Localization And Mapping (SLAM)
Thesis posted on 20.12.2021, 16:19 by Akul Madan
The technologies currently employed for autonomous driving deliver impressive performance, but they remain far from mature and are relatively expensive. The most commonly used components include LiDAR, cameras, radar, and ultrasonic sensors. Such sensors are usually high-priced and often demand substantial computational power to process the data they gather. Many car manufacturers consider cameras a low-cost alternative to costlier sensors, but camera-based sensing alone is prone to fatal perception errors, and adverse weather and night-time conditions further hinder the performance of vision-based sensors. For a sensor to be a reliable source of data, the difference between actual values and measured or perceived values should be as small as possible. Reducing the number of sensors used also frees budget to invest in the reliability of the components that remain. This thesis provides an alternative to current autonomous driving methodologies by utilizing the acoustic signatures of moving objects. The approach uses a microphone array to capture and process these signatures for simultaneous localization and mapping (SLAM). Rather than deploying numerous sensors to gather information about surroundings beyond the user's reach, this method investigates the benefits of exploiting the sound waves emitted by objects around the host vehicle. The components used in this model are cost-efficient and generate data that can be processed without high computational power. The results demonstrate the benefits of this approach in terms of cost efficiency and low computational demand. The functionality of the model is demonstrated using MATLAB for data collection and testing.
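To illustrate the kind of processing a microphone array enables, the sketch below estimates a sound source's bearing from the time difference of arrival (TDOA) between two microphones via cross-correlation. This is a generic, minimal example, not the thesis's actual pipeline (which was built in MATLAB): the microphone spacing, sampling rate, and test chirp are assumptions chosen for illustration.

```python
import numpy as np

# Hypothetical parameters (not from the thesis): two microphones
# spaced 0.2 m apart, sampling at 16 kHz; speed of sound 343 m/s.
FS = 16_000        # sampling rate in Hz
MIC_SPACING = 0.2  # distance between the two microphones in metres
C = 343.0          # speed of sound in m/s

def estimate_doa(sig_left, sig_right):
    """Estimate the direction of arrival (degrees from broadside)
    from the inter-microphone delay found via cross-correlation."""
    corr = np.correlate(sig_right, sig_left, mode="full")
    lag = np.argmax(corr) - (len(sig_left) - 1)   # delay in samples
    tdoa = lag / FS                               # delay in seconds
    # Clamp to the physically possible range before taking arcsin.
    sin_theta = np.clip(tdoa * C / MIC_SPACING, -1.0, 1.0)
    return np.degrees(np.arcsin(sin_theta))

# Synthetic check: a short chirp that reaches the left microphone
# 5 samples before the right one, i.e. a source left of broadside.
t = np.arange(0, 0.05, 1 / FS)
chirp = np.sin(2 * np.pi * (200 + 4000 * t) * t)
delay = 5  # samples
left = np.concatenate([chirp, np.zeros(delay)])
right = np.concatenate([np.zeros(delay), chirp])

angle = estimate_doa(left, right)
print(f"estimated bearing: {angle:.1f} degrees")
```

With the assumed geometry, a 5-sample delay corresponds to a bearing of roughly 32 degrees. A full acoustic SLAM system would repeat such bearing estimates over time and fuse them with the vehicle's motion to build a map, but the cross-correlation step above captures the low computational cost that motivates the approach.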