Controlling smart cars, part 3

Self-driving safety relies on fast computing, the right sensors and better use of GPS.

By Béla Lipták

Part 1 of this series discussed the state of the art exemplified by the Tesla accident on May 7, 2016, in Williston, Fla. Part 2 suggested improvements based on experience in the process industries.

Here, I focus on the hardware needs of smart car technology, particularly the sensors and chips needed to detect, evaluate and respond to detected conditions. I will start with the chips, their memory size, speed and artificial intelligence quotient (AIQ, see sidebar). I'll also mention the challenges posed by the need for periodic updates of the fleet's software with revisions.


Smart cars are supercomputers on wheels that take in information from sensors (radar to identify obstacles, cameras to detect traffic-light colors and pedestrians, 3-D maps to determine the exact position of the vehicle, etc.), and use this information to make split-second decisions in controlling navigation, brakes, power and more. For example, Tesla presently uses an Nvidia system that can perform 24 trillion operations per second, and Nvidia is developing an even more powerful package for Audi. Such specialized chip systems are key components of self-driving technology, and they're going through an explosion in development. Intel is working with BMW, Mobileye (an Israeli vision-system company) and Delphi (an auto supplier) to develop a chip capable of 20 trillion operations per second, which they expect to achieve in a couple of years. As of today, Uber's autonomous Ford Fusion cars have been road-tested in Pittsburgh and San Francisco (Figure 1).

As for software updating, one can theoretically assume that in a decade or so we'll have smart car fleets that are self-learning. This means that after each accident, once its causes have been determined and the software has been modified to prevent a recurrence, the revised software package will be transmitted wirelessly to the entire fleet, so the safety of the fleet will be continuously improved. This has already been done with Tesla's fleet after the accident in Florida. So why do I say it might take a decade or more? Two reasons:

First, even if we assume a particular revision requires no new sensors or other hardware changes, as the fleet grows and revisions accumulate, the complexity of the updating task and the associated memory requirements will grow exponentially. Before each new fix, the compatibility of each individual car must be confirmed and all latent bugs removed. It will also be necessary to check that each car has all the previous updates and is otherwise ready to receive the new fix.
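
That prerequisite check is simple to state but must hold for every car in the fleet. A minimal sketch of the idea, with hypothetical revision numbering (nothing here reflects any manufacturer's actual update mechanism):

```python
# Hypothetical sketch: before pushing revision N over the air, confirm the
# car already carries every earlier revision, so fleet software state stays
# consistent. Real OTA systems are far more involved (signing, rollback, etc.).

def ready_for_update(installed: set, new_revision: int) -> bool:
    """A car is ready for revision N only if revisions 1..N-1 are installed."""
    return all(r in installed for r in range(1, new_revision))

# A car with revisions 1-3 can take revision 4; one missing revision 2 cannot.
assert ready_for_update({1, 2, 3}, 4)
assert not ready_for_update({1, 3}, 4)
```

As the fleet and revision count grow, this per-car verification is exactly the bookkeeping burden described above.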

Second, both the individual cars and the total fleet must be protected from hackers. This is no easy task, because the bad guys can be just as smart as the good ones (see the 2016 election).

For these two reasons, I believe overcoming these challenges will take time, so fleet-wide learning from each accident via revisions transmitted over the air will not occur for years. It's more likely that initially a car's software will be checked when it's brought in for service.

The need for better sensors

Smart cars use many different methods to sense their surroundings and avoid collisions, the most common being radiation sensors. Their designs vary in the wavelengths used (Figure 2), focusing (laser), viewing angles, interferences, accuracies, ranges and single- or multi-path designs. Of these considerations, interference, which varies with wavelength, is probably the most important. Key interference sources include dust, rain, snow, solar radiation, nighttime conditions (including reflector lights in the opposite lane), vehicles hidden by building corners, etc.

Cameras: Cameras are usually used to detect traffic lights, read road signs, track the positions of other vehicles, and look out for pedestrians and obstructions on the road. While they can operate at different wavelengths, including infrared (IR) and visible, they're not very good at measuring distance. Under some lighting conditions (wet roads, reflective surfaces, low sun angles, at night, etc.), cameras do very poorly. IR is better for nighttime vision and for overcoming interference from brightly lit skies. In general, the wider the range of wavelengths, the better.

Radar: Radar can measure distance accurately, but it's less precise in determining the size and shape of obstructions. Radar signals are good at detecting objects that reflect electromagnetic radiation (e.g., metal objects), so they're useful for monitoring nearby vehicles (parking) and in adaptive cruise control, but they can make metal objects seem larger and plastic or wood objects seem transparent. Radar's value is somewhat limited because its resolution is relatively low, but it's good at looking through dust, mist, snow, fog, etc. It can measure speed and distance, and can operate in either frequency-modulated or pulse-detection mode. Both modes determine the time of flight of the microwave (radar) signal: the first infers it from the difference in frequency between the sent and returned signals, while the second measures the transit time directly. The strength of the return signal depends on the reflective properties of the reflecting surface and on the dielectric constant of the air in between.
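
The arithmetic behind both modes is compact. A sketch of the standard range equations, using illustrative numbers (the chirp and bandwidth values are assumptions, not any particular radar's specification):

```python
# Standard radar ranging arithmetic, illustrative values only.
C = 299_792_458.0  # speed of light, m/s

def pulse_range(transit_time_s: float) -> float:
    """Pulse mode: distance from the round-trip transit time."""
    return C * transit_time_s / 2.0

def fmcw_range(beat_hz: float, chirp_s: float, bandwidth_hz: float) -> float:
    """FMCW mode: distance inferred from the beat (frequency difference)
    between the transmitted and returned linear chirps."""
    return C * beat_hz * chirp_s / (2.0 * bandwidth_hz)

# A 0.5-microsecond round trip corresponds to roughly 75 m.
d_pulse = pulse_range(0.5e-6)
# A 1 MHz beat on a 1 ms, 1 GHz-wide chirp corresponds to roughly 150 m.
d_fmcw = fmcw_range(1e6, 1e-3, 1e9)
```

The division by two in both formulas accounts for the signal traveling to the target and back.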

Light detection and ranging: LIDAR usually operates in the near-infrared range, bouncing pulses of laser light off the surroundings to provide information such as the locations of lane markings and road edges. The sensor is mounted on the roof of the vehicle, and consists of an emitter, mirror and receiver. The emitter sends out a laser beam that bounces off a mirror rotating along with the cylindrical housing at about 10 revolutions per second. After bouncing off the objects in the area, the laser beam returns to the mirror and is reflected back to the receiver, where it can be interpreted (Figure 2). The vehicle can then generate a map of its surroundings and use it to avoid objects.
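
Each return from the rotating sensor is essentially an angle and a measured range; the map is built by converting those polar readings to positions around the car. A minimal sketch of that conversion (the function name and two-dimensional simplification are mine, for illustration):

```python
import math

def returns_to_points(returns):
    """Convert LIDAR returns, given as (angle_deg, range_m) pairs measured
    from the sensor, into (x, y) points in the vehicle's local frame."""
    points = []
    for angle_deg, range_m in returns:
        a = math.radians(angle_deg)
        points.append((range_m * math.cos(a), range_m * math.sin(a)))
    return points

# One return straight ahead at 10 m, one 90 degrees to the side at 5 m.
pts = returns_to_points([(0.0, 10.0), (90.0, 5.0)])
```

A real unit does this for hundreds of thousands of returns per second, in three dimensions, which is what produces the dense point-cloud map the software navigates by.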


These pulsed-laser systems can detect objects at longer distances (up to 200 m), are immune to natural light, have wide fields of vision, and can be accurate enough to make out the fingers on a pedestrian's hand. Their life expectancy is high, up to 100,000 hours. Both photocells and phototransistors can be used as detectors, but phototransistors and photodiodes are most sensitive in the IR region and are faster than photocells. These sensors can distinguish different objects, allowing software to draw conclusions about the type of object viewed, though older generations of these sensors lost performance in bad weather.

GPS: The potential of global positioning system (GPS)-based sensors has not been fully exploited. While their potential is substantial in vehicle-to-vehicle (V2V) communication, the federal safety standard mandating V2V communication systems hasn't been fully developed. If GPS information from satellites is combined with readings from tachometers, gyroscopes and altimeters, the accuracy of the GPS position can be much improved.
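
The idea of combining GPS with onboard motion sensors can be sketched with a simple complementary filter: trust dead reckoning (tachometer plus gyroscope) over short intervals and let the GPS fix pull out accumulated drift. Production systems use Kalman filtering, but the weighting concept is the same; the blend factor below is an illustrative assumption:

```python
# Hypothetical sketch: blend a dead-reckoned position estimate (from
# tachometer and gyroscope integration) with a noisy GPS fix. The 0.2
# weight is an assumed tuning value, not from any real system.

def fuse(dead_reckoned_m: float, gps_m: float, gps_weight: float = 0.2) -> float:
    """Weighted blend along one axis: dead reckoning dominates short-term,
    GPS corrects long-term drift."""
    return (1.0 - gps_weight) * dead_reckoned_m + gps_weight * gps_m

# Dead reckoning says 100.0 m along the road; GPS says 104.0 m.
fused = fuse(100.0, 104.0)  # blends to about 100.8 m
```

Applied every update cycle, the filter keeps the estimate smooth between GPS fixes while preventing the integrated wheel and gyro errors from growing without bound.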

If all vehicles (or at least trucks) had GPS, they could communicate their position and movement, which could not only prevent accidents but also ease traffic congestion. I'd recommend that all vehicles be provided with GPS to inform other vehicles in the area of their location, direction and speed. Today's GPS uses radio signals that can locate a vehicle only to within about 5-10 meters.

Other sensor technologies include ultrasonic, which can measure objects very close to the car, such as curbs or other vehicles during parking. Digital 3-D maps provide prior knowledge of the fixed road environment, so they can be useful in tasks like distinguishing overhead signs from obstacles on the road.