Bosch Invests in AI Sensors That Can Tell if They’re Being Spoofed

With the release of self-driving cars on the horizon, a lot of responsibility rests on the shoulders of sensor manufacturers like Robert Bosch GmbH. Bosch already makes sensors that detect road signs and allow the car to respond accordingly. However, a known weakness in Bosch’s system is its inability to tell an altered road sign from an unaltered one. With a bit of tape, a malicious person could make the AI believe that a stop sign is a speed-limit sign, with potentially disastrous outcomes. Bosch has now put together an AI system designed to detect and ignore these attempts at spoofing the onboard AI.

Using a Secondary Failsafe System

Bosch intends to clear this potentially critical hurdle by introducing a secondary system that comes into play only when the primary system detects something unusual about a road sign. The secondary system aims to mimic the human eye, using cameras to get a “view” of the road. If the cameras “see” the same thing that the AI detects, then everything is as it should be. However, if the cameras pick up something different, an error is raised, alerting the system that someone may be trying to fool it.
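The article does not describe how Bosch implements this cross-check internally, but the underlying idea of comparing two independent detection channels and flagging any disagreement can be illustrated with a minimal sketch. The `cross_check` function and the label/confidence format below are assumptions for illustration only, not Bosch's actual interface.

```python
def cross_check(primary_result, camera_result, min_confidence=0.8):
    """Return the agreed-upon sign label, or None to flag a possible spoofing attempt.

    Each argument is a hypothetical (label, confidence) pair produced by one
    detection channel looking at the same road sign.
    """
    primary_label, primary_conf = primary_result
    camera_label, camera_conf = camera_result

    # Both channels must be confident and must agree on the sign class.
    if (primary_label == camera_label
            and primary_conf >= min_confidence
            and camera_conf >= min_confidence):
        return primary_label

    # Disagreement (or low confidence) triggers the failsafe path.
    return None


if __name__ == "__main__":
    # Example: the primary detector reads a tampered sign as a speed limit,
    # while the camera channel still sees a stop sign.
    result = cross_check(("speed_limit_80", 0.91), ("stop", 0.95))
    if result is None:
        print("Channels disagree: treat the sign as untrusted.")
    else:
        print(f"Agreed sign class: {result}")
```

In this sketch, the mismatch between the two channels is exactly the "error" described above: rather than acting on either reading, the vehicle would fall back to a safe behavior.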

Taking Pre-Emptive Action

While no attacks like this have been observed in the real world yet, Bosch would rather be safe than sorry. Because this kind of problem is a known exploit, dealing with it before cars rely on the German manufacturer’s sensors daily is a top priority. The attack can be considered a cyberattack, albeit one that doesn’t involve anything as complex as breaking into a secure data center or computer system. Instead, by deliberately changing certain aspects of the input, a malicious actor can cause the neural network inside the vehicle to misbehave to their benefit.
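To make the "small change to the input, large change in the output" idea concrete, here is a toy sketch with a linear classifier. It is not Bosch's model or a real sign detector; the weights, input, and perturbation size are all made up for illustration, analogous to a strip of tape changing only a few pixels of a sign yet changing what the network reports.

```python
import numpy as np

# Toy illustration: a small, targeted nudge to each input feature
# flips the decision of a linear classifier.

rng = np.random.default_rng(0)
w = rng.normal(size=16)            # fixed classifier weights (stand-in for a trained model)
x = rng.normal(size=16)            # clean input, e.g. a flattened image patch

score = float(w @ x)               # the sign of the score decides the class

# Smallest uniform per-feature step, directed against the score's sign,
# that is just large enough to push the score across the decision boundary.
eps = 1.05 * abs(score) / np.sum(np.abs(w))
x_adv = x - eps * np.sign(w) * np.sign(score)

print("clean score:             ", round(score, 3))
print("perturbed score:         ", round(float(w @ x_adv), 3))
print("per-feature change (max):", round(float(eps), 3))
```

The perturbed score lands on the opposite side of the decision boundary even though each feature changes by only a small amount, which is the essence of why a few strips of tape can be enough to fool a sign classifier.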
