Engineering a Driverless Vehicle
Contributed by P.J. Morley
Modern cars are robust machines, designed to protect their occupants in the event of a crash, but no amount of engineering can make a car completely safe if the driver isn't paying attention... or can it?
Self-driving cars are something we've been promised for years, and they just might be on the roads sooner rather than later. This summer, at the University of Arizona, I was one of ten undergraduate students chosen from across the country to participate in the Electrical and Computer Engineering department's CATVehicle research program.
Part of the challenge in teaching a computer how to drive comes from the way computers work. Computers are simple machines capable of performing complex tasks only because they can work so fast. Every task a computer performs needs to be broken down into very simple steps. Driving, and doing it safely, is a very complicated task. Humans are remarkably good at learning physical movements and then performing them without conscious thought, but computers don't have that capability (yet, anyway). If you're driving home late at night and a deer jumps out into the road, you know how to react. You recognize what a deer is, you have an expectation of how the deer is going to move and how your car is going to move, and you slow down or turn accordingly. A computer, on the other hand, doesn't have 15+ years of experience riding in a car before it starts to drive one. It doesn't know what a deer is, or how a deer moves, or what happens if it runs into one. The same could be said for children, squirrels, snow, other cars. Think about it for very long and you realize there's an endless list of things drivers have to react to. This is where we come in.
Four teams worked on four different projects this summer - each having something to do with the car. One worked on interfacing the car with MATLAB, a computational scripting language that would allow the car to use high-level mathematics to refine its maneuvering. Another enabled the car to determine, using a mobile device as input, whether the paths it was told to follow were safe. The third team considered the possibilities of using platoons of autonomous cars, and used a top-mounted LiDAR (light detection and ranging) sensor to detect obstacles. My team worked on getting the car to communicate with sensor packages and a physics simulator, expanding the machine's ability to interpret the outside world.
All of these projects expanded the functionality of the car in ways that had never been done before, helping it better understand the world around it.
The CATVehicle is currently able to follow a given path safely. It decides how much to apply the accelerator, brake, and steering, and does so without flipping the car over. The computer uses dead reckoning for now, meaning that it navigates by calculating how far it has traveled from its starting point, and follows its directions with no regard for anything in its way. However, thanks to the work completed this summer, the car will soon be able to use its sensors to make its own decisions about what is safe. Using a technique called code generation, future engineers will be able to quickly add new sensors and computational packages to the car, freeing them to focus on the machine's high-level decision making.
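The dead-reckoning idea can be sketched in a few lines of Python (a simplified illustration, not the CATVehicle's actual code): starting from a known position and heading, the car repeatedly integrates its own speed and turn rate over small time steps to estimate where it is, with no reference to the outside world.

```python
import math

def dead_reckon(x, y, heading, speed, turn_rate, dt):
    """Advance an estimated pose one time step using only the
    vehicle's own speed and turn rate (no external sensors)."""
    heading += turn_rate * dt            # update heading from steering
    x += speed * math.cos(heading) * dt  # move along the new heading
    y += speed * math.sin(heading) * dt
    return x, y, heading

# Drive straight ahead at 10 m/s for five one-second steps.
pose = (0.0, 0.0, 0.0)  # x (m), y (m), heading (radians)
for _ in range(5):
    pose = dead_reckon(*pose, speed=10.0, turn_rate=0.0, dt=1.0)
print(pose)  # estimated position: 50 m straight ahead of the start
```

Because small measurement errors accumulate with every step, a dead-reckoned estimate slowly drifts away from the car's true position, and the method is blind to obstacles by definition. That is exactly why giving the car real sensor input matters.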
There is still a lot to do. Once we have a car that can use sensors to view its physical environment, we will have to figure out ways for the car to become aware of things like speed limits, street names, and house numbers; to read traffic signals and other cars' turn signals; and to follow traffic rules like what to do at a four-way stop. There are probably other challenges we haven't even discovered yet. It promises to be one of the most complex engineering projects ever attempted.