Expecting autonomous cars by 2025? We’ll need edge computing and a few other enabling technologies


In politics, the principle of subsidiarity is an organizing principle that recommends handling matters at the most immediate or local level consistent with their resolution, rather than relying on a centralized authority.

How does this principle apply to decision-making in autonomous vehicles (AVs)?

I would argue that edge computing — essentially, decentralized decision-making — will increase the overall responsiveness, reliability and efficiency of these vehicles.

So can it really drive itself?

Terms used to describe these vehicles — self-driving, auto-assisted, connected, driverless — are flying around as investment and research on AVs grow. NHTSA, following SAE, defines six levels of driving automation (0 through 5), from an advanced driver assistance system (ADAS) that sometimes assists the driver to an automated driving system (ADS) that does all the driving in all circumstances. Currently, various players are testing AVs at higher levels of automation on geofenced, prescreened routes and streets.

Autonomous vehicles — also called self-driving or driverless — can drive themselves without any human intervention. In automated vehicles, on the other hand, a driver must remain behind the wheel, ready to take over or to let the vehicle drive itself.

Automation of either type is made possible by a number of sensors that interact with each other to help the vehicle navigate. An AV essentially combines LiDAR (a radar-like laser technology that measures the objects, spaces and distances around the vehicle), cameras, radar and other sensors with machine learning and scalable, real-time geospatial maps of streets, routes and terrain.

Autonomous vehicles produce and consume large volumes of data (0.75 to 1 GB data per second) to make real-time decisions for accurate navigation. According to Intel CEO Brian Krzanich, an autonomous car can use up to 4,000 GB of data in just one hour of driving.
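As a sanity check (my own arithmetic, not a figure from Intel), the per-second stream quoted above lands in the same range as Krzanich's hourly number:

```python
# Back-of-the-envelope check: 0.75-1 GB per second, sustained for an hour,
# is in the same ballpark as Intel's "up to 4,000 GB per hour" figure.
SECONDS_PER_HOUR = 3600

low_gb_per_hour = 0.75 * SECONDS_PER_HOUR   # 2,700 GB
high_gb_per_hour = 1.0 * SECONDS_PER_HOUR   # 3,600 GB

print(f"{low_gb_per_hour:.0f}-{high_gb_per_hour:.0f} GB per hour of driving")
```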

An AV's data environment is a complex ecosystem comprising not only the data stored and processed by the vehicle, but also transportation infrastructure data such as roads and surrounding structures, data from passengers in the vehicle, data from other vehicles, and weather and other data from various providers.

AVs cannot afford even a millisecond of lag, as it can mean the difference between a serious collision and avoiding the accident altogether. It currently takes roughly 100 milliseconds for large amounts of data to travel to the cloud and back. According to Toyota, the amount of data transmitted between cars and the cloud could reach up to 10,000 times the current amount by 2025 — up to 10 exabytes a month. Current cloud capabilities are not designed to handle that volume.
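To make the 100-millisecond figure concrete, here is a rough calculation (the speeds are my own illustrative choices, not figures from the article's sources) of how far a vehicle travels while waiting on one cloud round trip:

```python
# Distance covered during a single 100 ms cloud round trip,
# at a few illustrative speeds.
ROUND_TRIP_S = 0.100

for speed_kmh in (50, 100, 130):
    speed_ms = speed_kmh / 3.6            # convert km/h to m/s
    distance_m = speed_ms * ROUND_TRIP_S  # metres travelled before a reply arrives
    print(f"{speed_kmh} km/h -> {distance_m:.2f} m per round trip")
```

At highway speed, the car covers nearly three metres before any cloud response can arrive, which is why the braking decision has to be made locally.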

Life and death decision: Edge or centralized computing?

Researchers are looking at two approaches – centralized processing in the cloud and distributed processing on the edge. I believe there needs to be a balance.

Centralized processing places a heavy burden on the network, which must carry raw data back to the main processor; that adds latency and lag, along with a major processing load on the central hub. But AVs cannot wait for data to travel to the cloud and back before performing a critical function. They need actionable information readily available, at hand.

Distributed processing, also known as edge computing, enables processing at the edge — or extremities — of the system. Edge computing moves computing functions, intelligence and machine learning closer to where the data is generated and where action is taken. This reduces the lag between data processing and the vehicle, and quickly turns the data into useful information that can control and manage the car while in motion. By moving machine learning to local cameras, for example, the computer vision, recognition and signal to apply the brakes can all be processed on the device where the data is generated, rather than shipped off to wherever it would otherwise be aggregated.
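The pattern described above can be sketched as follows; all of the names and the stand-in detector are hypothetical, and a real pipeline would run an actual vision model on dedicated hardware:

```python
# Minimal sketch of the edge pattern: run detection where the frame is
# captured, act locally, and forward only compact metadata upstream
# instead of the raw video. (All names are illustrative.)
from dataclasses import dataclass

@dataclass
class Detection:
    label: str
    distance_m: float

def detect_objects(frame: bytes) -> list[Detection]:
    """Stand-in for an on-device vision model."""
    return [Detection("pedestrian", 7.5)]  # placeholder result

def process_frame(frame: bytes) -> dict:
    detections = detect_objects(frame)                   # inference on the edge node
    brake = any(d.distance_m < 10 for d in detections)   # local, low-latency decision
    # Only a small summary leaves the device; the raw frame never does.
    return {"brake": brake, "objects": [d.label for d in detections]}

print(process_frame(b"\x00" * 1024))
```

The key point is that the heavy input (the frame) stays on the device, and only the decision and a few bytes of metadata cross the network.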

Edge computing helps AVs achieve situational awareness. It improves reliability, availability and quick recall of data, which can be challenging with centralized cloud processing and is further worsened by clogged or erratic networks. Multiple smaller processors spread throughout the vehicle handle inputs from various systems, such as video streams, and categorize information and behavior from metadata and context. The detected object and its coordinates are then described in an encrypted message and sent, wirelessly or over a secure physical connection, to the central computer for higher-level decisions. But any delay in the transmission, analysis or relay of a decision back to the vehicle can have grave consequences. Edge computing and edge analytics pave the way for quick detection of patterns in sensor data and faster storage and transfer of data, aiding real-time decisions at local nodes.
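A minimal sketch of that metadata hand-off might look like this. The key, names and message format are all illustrative; HMAC here only demonstrates authenticated framing, and a production system would also encrypt the payload:

```python
# Sketch: serialize a detection (object label + coordinates) into a compact
# text payload and authenticate it before relaying it to the decision
# computer. Key and format are illustrative, not a real protocol.
import hashlib
import hmac
import json

SHARED_KEY = b"demo-key-not-for-production"

def pack_detection(label: str, x: float, y: float) -> dict:
    payload = json.dumps({"object": label, "coords": [x, y]})
    tag = hmac.new(SHARED_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "tag": tag}

def unpack_detection(msg: dict) -> dict:
    expected = hmac.new(SHARED_KEY, msg["payload"].encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, msg["tag"]):
        raise ValueError("message tampered with or corrupted")
    return json.loads(msg["payload"])

msg = pack_detection("pedestrian", 3.2, -1.1)
print(unpack_detection(msg))  # -> {'object': 'pedestrian', 'coords': [3.2, -1.1]}
```

A corrupted or tampered payload fails the constant-time tag comparison and is rejected before it can influence a driving decision.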

In terms of maintenance and analysis, it is also easier and more economical to validate and certify smaller components, pinpointing flaws and replacing individual parts in the vehicle rather than overhauling a centralized system every time a component malfunctions.

A balance needs to be maintained, though, between edge computing and centralized cloud processing, since loading the vehicle with sensors and processors adds cost and weight and ultimately hurts the economics of the AV. Critical safety and real-time driving decisions need to be handled at the edge, while overview and update-focused work that is not time sensitive can be handled in the cloud. This balance can facilitate an expeditious move from human-assisted automated vehicles toward a fully autonomous vehicle-to-everything (V2X) vehicle.
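One way to picture that split is a simple routing rule. The task list, deadlines and threshold below are my own illustration, with the 100 ms round-trip figure cited earlier used as the cutoff:

```python
# Illustrative edge/cloud task routing: latency-critical driving work stays
# on the vehicle; non-time-sensitive work is deferred to the cloud.
CLOUD_ROUND_TRIP_MS = 100  # rough round-trip figure cited in the article

def route(task: str, deadline_ms: float, safety_critical: bool) -> str:
    """Send a task to the edge if the cloud round trip would miss its deadline."""
    if safety_critical or deadline_ms < CLOUD_ROUND_TRIP_MS:
        return "edge"
    return "cloud"

for name, deadline, critical in [
    ("emergency_braking",     10, True),
    ("lane_keeping",          50, True),
    ("map_update",        60_000, False),
    ("fleet_analytics", float("inf"), False),
]:
    print(f"{name:>17} -> {route(name, deadline, critical)}")
```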

Driver’s training

One of the challenges AV players face is obtaining the massive amount of data required to train the artificial intelligence (AI) systems that go into these autonomous platforms. Organizations like Waymo and Tesla have the most AV data as of today. John Krafcik, CEO of Waymo, tweeted in July that Waymo has self-driven 8 million miles on public roads, now at a rate of 25,000 miles per day. In the same tweet, Krafcik said Waymo has completed over 5 billion miles in simulation.

UC Berkeley (BDD100K) and Baidu (ApolloScape) have also released large data sets for open source exploration of areas like road object detection, lane marking, drivable area segmentation, and domain adaptation of semantic segmentation, in order to improve overall research on AV safety and reliability.

Vehicles will ultimately generate and consume roughly 40 terabytes of data for every eight hours of driving, according to Krzanich. That means fully autonomous cars can only be successful with a supportive infrastructure that may take another five to seven years to mature.
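Converting Krzanich's shift-level figure back to a rate (again, my own arithmetic) shows it is broadly consistent with the per-second stream quoted earlier:

```python
# 40 TB per 8-hour shift, expressed as a per-second rate.
TB_PER_SHIFT = 40
HOURS = 8

gb_per_second = TB_PER_SHIFT * 1000 / (HOURS * 3600)
print(f"{gb_per_second:.2f} GB per second")  # about 1.4 GB/s
```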

Today, safety standards, component technical specifications and safety guides are being harmonized to create an AV framework. But it is also quite clear that advanced edge computing and cloud capabilities — coupled with a strong data strategy focused on data acquisition, management and analytics that can handle the large volumes of data generated by complex AV ecosystems — are a must if we're to enjoy the ride from a connected to an autonomous V2X vehicle.

After all, it’s not just about getting there. It is about a reliable and safe driving experience.
