
ADAS on the cheap

Updated: Nov 27, 2021

Preface:

In the summer of 2018, I, along with two teammates (Samarjeet Kaur and Nikunj Aggarwal), participated in the Celestini Program India, a project-based competition held by the Marconi Society at IIT Delhi. Our problem statement was to create a low-cost Advanced Driver Assistance System (ADAS) with Vehicle-to-Vehicle communication using commercial-off-the-shelf (COTS) components. While our project did not win the competition, I think it was a pretty good attempt at solving the problem.


Introduction

Road accidents kill a large number of people every year. This problem has been approached in several ways. Self-driving cars, driver attention monitoring systems, and ADAS are some of the most prominent solutions. Driver monitoring systems try to ensure that accidents don't happen due to distracted driving. Self-driving cars solve the problem by taking over the responsibility of driving (ideally, in its entirety). An ADAS is a sort of mid-point between the two: it doesn't take over control, but rather nudges the driver when something bad is about to happen.


Problem

During the summer of 2018, my team and I worked on the problem of creating a low-cost Advanced Driver Assistance System (ADAS) with Vehicle-to-Vehicle (V2V) communication using commercial-off-the-shelf (COTS) parts. Why did we want to add vehicle-to-vehicle communication? Because while distracted driving is responsible for a lot of accidents, factors such as poor visibility and a general lack of information can also cause crashes. Pile-up crashes in fog are not unheard of, and they are preventable if cars can communicate with each other.


At a high level, such a system would consist of the following components:

  1. A perception system

  2. A communication system for V2V communication

  3. A navigation system for positioning and velocity data of the car

  4. A system that manages all of the above (likely a microprocessor)


The two constraints we placed on the system were low cost and the use of COTS parts. Let us first look at the possible solutions if those constraints were removed.

Approach 1:

If money were no object, this is the hardware we would have used:


Perception/Detection:

Millimeter-wave RADAR (Radio Detection And Ranging), solid-state LIDAR (Light Detection and Ranging), or event cameras with some post-processing.


Communications:

A high-wattage microwave transceiver with mesh networking capabilities and a large high-gain antenna (or multiple antennas for diversity).


Navigation system:

A GNSS-aided inertial navigation system (INS) for obtaining the car's position, velocity, and orientation.


Management system:

A ruggedized microprocessor with good support for peripherals would be ideal for this purpose.


Putting the constraints back on:

RADAR and solid-state LIDAR are both quite expensive, with prices starting at around $500. The high-wattage communications system would start at around $400-$500 with the antenna and accessories. A “low-cost” inertial navigation system would be around $2000-$3000 (or $500 at minimum if you go really cheap). A ruggedized microprocessor would likely cost around $400. The overall cost adds up to roughly $4000 for the whole system, not including the cost of integrating it all together and the profit margin if you intend to make a business out of it.


Doing everything on a smartphone:

One approach that could reduce costs significantly is to get rid of the LIDAR/RADAR, INS, and communication system and do it all from a smartphone. A smartphone already has GPS, an IMU, a camera, a communication system (Wi-Fi), and its own processor! An app that combines the power of all these systems sounds like a fairly obvious solution, right? I hate to kill the dream, but the answer is a surprising no.


The reasons for this are not obvious, and there is more than one. The primary reason is that it is quite difficult to standardize the performance of the app across multiple phone models. A safety-critical device like an ADAS should not depend on whichever smartphone you happen to have at your disposal. This is also why comma.ai, a company that started out with a smartphone-based system for lane assistance and advanced cruise control, quickly shifted to its own hardware so that it could standardize the platform (and also make some money at the same time). Standardizing the hardware also lets you build more efficient systems, since the software is no longer one-size-fits-all.

Secondly, smartphones aren't exactly designed with heat management in mind. Running the computer vision algorithms alongside Wi-Fi, GPS, and the associated filtering puts quite a lot of thermal strain on the device. We found this out after building a basic app that ran all of this together with some simple algorithms: the heat was enough to make my phone reboot or hang completely after 4-5 minutes of operation.


Our work:

In order to deal with these additional implicit constraints of standardization and heat management, we split the system into individual components. The main management system ran on a Raspberry Pi 3B+, equipped with a camera and an XBee 2.4 GHz Pro module for communication. The phone ran the GPS-INS navigation algorithm and also acted as the alert generator. The system also needed its own battery bank because we didn't want to draw power from the phone, although one could run the system from the car's cigarette lighter as well.


The perception system:

We reduced the job of our ADAS's perception system to generating alerts only for potential collisions, as lane detection is in and of itself a mammoth task on Indian roads and, frankly, not all that important there (those who have driven on Indian roads know that lanes are, for the most part, ignored by almost everyone). To this end, we only needed a “threat detection system”. There were two possible ways of achieving this through computer vision.


The first approach was to use machine learning-based object detection to detect all objects in the scene, find their relative movement across frames, and then categorize them as either threat or non-threat. The approach made sense at face value, and given all the hype around computer vision and neural networks for object detection, we were inclined to pursue it.


However, an object detection framework of moderate accuracy turned out to be quite slow on the Raspberry Pi 3B+, running at about 0.75 fps; the best we could possibly manage at the time was 1-3 fps, but not more. The other, more important downside was that the object detection framework could only detect what it had been trained on. For instance, farmers often travel in makeshift carts that may not be detected by the system.
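
For context, the kind of benchmark this involved can be sketched as follows. This is not our exact code, and the MobileNet-SSD model file names are assumptions, but it shows how detection throughput on the Pi can be measured with OpenCV's DNN module:

```python
# Rough sketch of a detection-speed benchmark (not our exact code).
# Assumes a MobileNet-SSD Caffe model downloaded separately; file names are placeholders.
import time
import cv2

net = cv2.dnn.readNetFromCaffe("MobileNetSSD_deploy.prototxt",
                               "MobileNetSSD_deploy.caffemodel")
cap = cv2.VideoCapture(0)  # Pi camera / USB webcam

frames, start = 0, time.time()
while frames < 50:
    ok, frame = cap.read()
    if not ok:
        break
    # 300x300 is the input size MobileNet-SSD expects
    blob = cv2.dnn.blobFromImage(cv2.resize(frame, (300, 300)),
                                 0.007843, (300, 300), 127.5)
    net.setInput(blob)
    detections = net.forward()  # shape (1, 1, N, 7): class, confidence, box
    frames += 1

print("fps:", frames / (time.time() - start))
```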


The second approach was to use the optical flow of the image to determine whether something was about to hit the car. The approach was bio-inspired (most mammals rely on a movement-before-recognition framework for detection) and was validated in a car racing game before being tested in the real world. The benefit of using optical flow was speed: high-fidelity optical flow could be computed at around 15 fps on the Raspberry Pi 3B+, with higher frame rates possible if the image size was reduced. The other benefit of using optical flow to detect threats was that it didn't matter what the threat looked like; as long as it was approaching the car, it was a threat. It could be a UFO for all the system cared.

The object detection system missed the cow and the road-divider in the scenario shown here. We couldn't collect a lot of "pre-collision" data because that would require reckless driving.
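
A minimal sketch of the optical-flow approach (not our exact implementation): compute dense Farneback flow and flag frames where the flow field expands outward from the image center, which is what an approaching object produces. The threshold below is a placeholder and would need tuning on real footage.

```python
# Minimal sketch of flow-based threat detection (not the exact implementation).
# Idea: an approaching object produces flow that expands outward from the
# focus of expansion; we approximate this with a simple expansion score.
import cv2
import numpy as np

cap = cv2.VideoCapture(0)
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

THREAT_THRESHOLD = 2.0  # placeholder, needs tuning on real footage

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = gray.shape
    # Outward component: dot product of flow with the direction away from center
    ys, xs = np.mgrid[0:h, 0:w]
    dirs = np.dstack((xs - w / 2, ys - h / 2))
    dirs /= (np.linalg.norm(dirs, axis=2, keepdims=True) + 1e-6)
    expansion = np.mean(np.sum(flow * dirs, axis=2))
    if expansion > THREAT_THRESHOLD:
        print("possible collision threat")
    prev_gray = gray
```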

Now, it was not all hunky-dory in the land of optical flow. It turns out that while optical flow-based approaches are very good at detecting threats, they have a high false-positive rate if the system is unaware of its own motion. For instance, when a car goes over a speed bump, the optical flow system thinks everything is coming towards the car. It may also produce false positives when the car is going over rough terrain. Humans (and other mammals) deal with this in two ways: first, our eyes are effectively gimbaled (they can maintain a lock on a particular direction regardless of the body's orientation, up to a limit) and damped against vibration; second, the brain is aware of the general motion of the car and usually knows what to expect from a speed bump.
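
We never built full ego-motion compensation, but a crude mitigation along these lines is possible: suppress flow alerts whenever the gyroscope reports a large pitch rate, as it would over a speed bump. The function and threshold below are assumptions, not something we shipped.

```python
# Crude ego-motion gating sketch (an assumption, not what we shipped):
# ignore optical-flow alerts while the gyroscope reports a large pitch rate,
# e.g. when the car is going over a speed bump.
PITCH_RATE_LIMIT = 0.5  # rad/s, placeholder value

def threat_alert(expansion, pitch_rate, threshold=2.0):
    if abs(pitch_rate) > PITCH_RATE_LIMIT:
        return False  # camera is pitching; flow expansion is unreliable
    return expansion > threshold
```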


While the optical flow had its downsides, we still used it because it seemed like a better approach than the object detection one.


The navigation system:

We leveraged the GPS and IMU sensors of the phone itself and ran a simple Kalman filter app that talked to the Raspberry Pi and gave it the car's location, velocity, and orientation, along with raw IMU data (accelerations and rotation rates). This had the obvious benefit of not requiring the purchase of additional GPS and IMU sensors. Position-velocity accuracy was not mission-critical, so the smartphone-based solution made the most sense.
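
The filter itself was nothing exotic. As a rough illustration (not the app's actual code), a one-dimensional constant-velocity Kalman filter that predicts with the accelerometer and corrects with GPS position looks like this; the noise values are placeholders:

```python
import numpy as np

# Minimal 1-D constant-velocity Kalman filter sketch: predict with the
# accelerometer, correct with GPS position. Noise values are placeholders.
class Kalman1D:
    def __init__(self, dt=0.1):
        self.x = np.zeros(2)                        # state: [position, velocity]
        self.P = np.eye(2)                          # state covariance
        self.F = np.array([[1.0, dt], [0.0, 1.0]])  # constant-velocity model
        self.B = np.array([0.5 * dt**2, dt])        # accelerometer input model
        self.H = np.array([[1.0, 0.0]])             # GPS measures position only
        self.Q = np.eye(2) * 0.01                   # process noise (placeholder)
        self.R = np.array([[5.0]])                  # GPS noise (placeholder, m^2)

    def predict(self, accel):
        self.x = self.F @ self.x + self.B * accel
        self.P = self.F @ self.P @ self.F.T + self.Q

    def update(self, gps_pos):
        y = gps_pos - self.H @ self.x               # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)    # Kalman gain
        self.x = self.x + (K @ y).ravel()
        self.P = (np.eye(2) - K @ self.H) @ self.P
```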


The raw accelerometer data was also used for generating alerts. If the ego vehicle's velocity changed suddenly (due to hard braking or a crash), an alert message would be broadcast to all the other cars.
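
A sketch of what such a trigger can look like (the thresholds here are assumptions, not our tuned values):

```python
# Sketch of the braking/crash alert trigger (threshold values are assumptions).
HARD_BRAKE_G = 0.5   # deceleration threshold in g
CRASH_G = 3.0        # anything beyond this looks like an impact

def check_accel(longitudinal_accel_g, broadcast):
    if longitudinal_accel_g < -CRASH_G:
        broadcast("CRASH")
    elif longitudinal_accel_g < -HARD_BRAKE_G:
        broadcast("HARD_BRAKE")
```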


The position-velocity-orientation data was also used to determine the relevance of an alert. Using the position meta-data of the alert message, the management system could figure out whether the alert was coming from a car up ahead or from a car behind it/going in the opposite direction.
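
As an illustration of the geometry involved (not our exact code), the relevance check can be reduced to comparing the bearing from the ego vehicle to the alert's sender against the ego vehicle's own heading:

```python
import math

# Sketch of the alert-relevance check (not the exact geometry we used):
# an alert matters if its sender is roughly ahead of the ego vehicle.
def is_relevant(ego_lat, ego_lon, ego_heading_deg, src_lat, src_lon):
    # Equirectangular approximation is fine at these short ranges
    dx = math.radians(src_lon - ego_lon) * math.cos(math.radians(ego_lat)) * 6371000
    dy = math.radians(src_lat - ego_lat) * 6371000
    bearing = math.degrees(math.atan2(dx, dy)) % 360
    diff = (bearing - ego_heading_deg + 180) % 360 - 180
    return abs(diff) < 45  # sender is within ±45° of our heading, i.e. ahead
```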


The communication system:

Communication between the vehicles was the cornerstone of our project. The point of communication was to alert drivers around the ego vehicle of possible dangers ahead, in order to prevent pile-up accidents due to fog or other causes.

The candidate systems for this were Wi-Fi Direct, the NRF APC220 transceiver, and the XBee S2C Pro transceiver.


Wi-Fi Direct seemed like a nice approach since it could be done from the smartphone; however, it has (or had) a fixed delay of about 1 second regardless of message size. It would also have limited range due to the low transmission power of the phone's Wi-Fi chip and the lack of a dedicated antenna. Most importantly, it was a point-to-point approach, which would have required us to connect and transmit to each car separately (or at least that was the case at the time).


The NRF APC220 had better range (100 meters minimum), lower latency (around 200 milliseconds), and broadcasting capability; however, it lacked the niceties of collision avoidance (CSMA/CA).


The XBee S2C Pro had a practical range of 80 meters (more in line-of-sight operation), low latency (roughly 8 ms of “warm-up” delay plus 2 ms per message), broadcast and mesh capabilities, and CSMA/CA for collision avoidance between multiple transmitters. As the XBee S2C's packet drop rate was about 1 in 4 at 80 meters, we repeated each message 10 times to reach six-sigma reliability (less than a 1 in 1 million chance of the message not being received at all). This put our worst-case latency at 28 milliseconds, which was still pretty good.
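
The arithmetic behind those numbers, assuming independent drops per transmission:

```python
# Reliability and worst-case latency for n repeated transmissions,
# assuming independent drops with probability 0.25 per transmission.
n = 10
p_all_lost = 0.25 ** n       # ~9.5e-7, i.e. less than 1 in a million
worst_case_ms = 8 + 2 * n    # 8 ms warm-up + 2 ms per message = 28 ms
print(p_all_lost, worst_case_ms)
```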



Thus, we used the XBee S2C Pro module for our communication system.

Overall system architecture:

Example case


Aftermath

The system we created worked on paper, but I felt there was significant room for improvement in almost every aspect. After the project was over, we wanted to build a second system, one that would improve on the first on several fronts.



In this system, we were using a smart camera called the JeVois A33. This thing cost about $60, had its own processor and GPU for high-speed computation, and could perform salient object detection, which combined object detection and optical flow in one neat package. It also had a hardware UART interface through which it could communicate with a microcontroller.


The Raspberry Pi and the smartphone would be replaced by a $3 microcontroller. It would perform GPS-IMU sensor fusion, send alerts using the XBee S2C (or any other UART-capable transceiver), and communicate with the smart camera. This significantly streamlined the project (since we no longer had to deal with the Android system) and was supposed to improve its performance.
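
As a sketch of what the microcontroller's glue code might have looked like (assuming a MicroPython-capable board; the pins, baud rates, and message format are all placeholders):

```python
# MicroPython-style sketch of the planned microcontroller glue code
# (board, UART numbers, baud rates, and message format are all assumptions).
from machine import UART

jevois = UART(1, baudrate=115200)    # serial link to the JeVois camera
xbee = UART(2, baudrate=9600)        # serial link to the XBee transceiver

while True:
    line = jevois.readline()
    if line and b"THREAT" in line:   # hypothetical message from the camera
        xbee.write(b"ALERT\n")       # broadcast to nearby vehicles
```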


The second version of the project ended up not being built; however, it did kickstart the third version of my self-driving car project. You can spot similarities in the design and hardware between this system and my mini self-driving car V3.



