The AV Stack – A look at key enablers for automated vehicles

The race towards automated vehicle technology moved into high gear recently when Intel announced a bid to acquire Mobileye for $15 billion.   

Intel has recognized the growth opportunities in automated vehicle technologies and has assembled a group of core assets necessary to build out the “AV Stack.”  The AV Stack will consist of multiple domains consolidated into a platform that can handle end-to-end automation. This includes perception, data fusion, cloud/OTA, localization, behavior (a.k.a. driving policy), control and safety. 

The AV Stack is also known as “domain control,” although some have begun to label it multi-domain control since it consolidates the functions of many domains into one. The trend toward domain consolidation has been under way for a while, although progress has been slow. Automation, however, pushes toward this architecture because handling these functions in a highly distributed way would be inefficient.

Domain control also makes sense from a middleware standpoint. Virtualization now makes it possible to isolate safety-critical functions from non-safety-critical functions within the OS stack. Furthermore, middleware abstraction layers enable developers to write to a common interface specification without having to worry about runtime environment (RTE) and basic software (BSW) components.
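
As an illustration of this kind of abstraction, the sketch below shows one way application code might target a common interface while a hypervisor-style partition descriptor separates safety-critical from non-critical functions. The class and partition names are hypothetical and are not drawn from any particular middleware or AUTOSAR implementation.

    # Illustrative sketch only: a hypothetical middleware abstraction in which
    # application code targets a common interface instead of platform-specific
    # RTE/BSW details, and safety-critical functions run in a separate partition.
    from abc import ABC, abstractmethod

    class CameraService(ABC):
        """Common interface the application writes against."""
        @abstractmethod
        def latest_frame(self) -> bytes: ...

    class VendorACamera(CameraService):
        """One possible platform binding; the BSW/driver details live here."""
        def latest_frame(self) -> bytes:
            return b"\x00" * (640 * 480)  # placeholder for a real driver call

    class Partition:
        """Hypothetical partition descriptor a hypervisor might schedule."""
        def __init__(self, name: str, safety_critical: bool, services: list):
            self.name = name
            self.safety_critical = safety_critical
            self.services = services

    # Safety-critical perception is isolated from non-critical infotainment.
    partitions = [
        Partition("perception", safety_critical=True, services=[VendorACamera()]),
        Partition("infotainment", safety_critical=False, services=[]),
    ]

    for p in partitions:
        print(p.name, "safety-critical:", p.safety_critical)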

The AV Stack is really the brains behind autonomous cars, including all supporting tasks such as communications, data management, and fail-safe operation, as well as the middleware and software applications. It is a collection of hardware and software components tightly integrated into a domain controller, and it will be the basis for Level 3 and higher automated systems.

The AV Stack represents the greatest collection of advanced IP content in future cars and is a big opportunity for suppliers that have the capacity to string it all together. 

For suppliers of automotive processors, developing an AV Stack is the right move if you are targeting vehicle automation. Most of the leading suppliers of automotive processors are already doing this to some degree. NXP, Renesas, TI, Intel, and Nvidia all have development kits that support multiple nodes in the AV value chain.

Tier-one suppliers are also getting into this space on the premise that processor companies don't necessarily have all the know-how to build out a full ECU domain. Nvidia has recently struck deals with both ZF and Bosch along these lines. Delphi is active with its CSLP platform and counts Mobileye and Intel as its partners for processing logic.

Another player in the space is TTTech, an Austrian firm that specializes in ECU technologies and is a major partner in Audi’s zFAS controller. TTTech’s approach is supported by Renesas processors as well as an application development framework called TTA Integration.

Outlook

It is not easy to estimate the total available market (TAM) for the AV Stack because the take rate for Level 3 (or higher) automation will be gradual at first. There are also many supporting domains and pieces of licensed third-party IP to account for: multicore architectures, co-processors, large amounts of memory, a communications stack, and a great deal of firmware.

The AV Stack is probably worth at least $10,000 (ASP) if you include the sensors. Within the context of future mobility the AV Stack is the highest concentration of value and probably becomes the single most valuable piece of future vehicles.

Tesla’s Model S: Key Observations About Autopilot & OTA

[Figure: Tesla Model S]

VSI recently rented a Tesla Model S to examine the functionality of Autopilot as well as gain a deeper understanding of the overall architecture of the vehicle.

The vehicle we had access to was a 2015 Model S P90D configured with Autopilot 1.0 (v8.0), which provides Level 2 automation. As a research company, VSI has been examining the building blocks of automation for nearly three years and is very familiar with the technologies used in the Tesla Model S.

What makes the Tesla Model S so interesting?

  • The over-the-air (OTA) digital communications of the Tesla Model S are by far the most interesting element of this vehicle and are probably the most critical element of the vehicle architecture.
  • This vehicle talks to the network a lot, and most of it is done over Wi-Fi, as we found out. Within a 24-hour period this vehicle exchanged over 50 MB of data with Tesla’s mothership, a virtual private network (VPN) that manages the data exchange. About 30% of that data flowed out of the vehicle.
  • There have been multiple updates to Autopilot over the past few months, particularly v8.0 (rev. 2.52.22), where vast improvements were made to radar performance. Further improvements have been made to enable fleet learning, which is likely the reason the volume of data exchange is so high.
  • v8.0 accesses more raw data from the front-facing radar and new Tesla software processes those inputs in enhanced ways.
  • Architecturally, the Tesla E/E systems rely heavily on the main media unit, which manages all communications and HMI elements. The consolidation of so many functions into a single domain is remarkable. Many of the Autopilot calculations are made on the main media unit plus another control ECU separate from it. The vehicle’s camera module has its own processing, which takes some load off the main media unit.
  • We think the Model S is a proxy for future vehicle architectures, at least those with partial automation features. And again, we think the OTA capabilities of this vehicle are the most important element of the vehicle architecture. This becomes more obvious when you visit a Tesla vehicle center, where there are fewer service bays than at a traditional dealership. Short of mechanical failures, this vehicle is repairable over the network!
[Figure: Tesla instrument cluster]

Autopilot 1.0 (VSI Profile)

Tesla’s Tech Package with Autopilot costs $4,250 and is enabled through an over-the-air update. The current system consists of a forward-looking camera, a radar, and 12 ultrasonic sensors providing 360-degree coverage.

  • The camera-based sensor comes from Mobileye (camera and EyeQ3 SoC); this is a single monochromatic camera. However, fallout from the May 7, 2016 fatal accident led to a split between Tesla and Mobileye. Mobileye will not supply hardware or software to Tesla beyond the EyeQ3 or beyond the current production cycles.
  • Bosch supplies the radar sensor/module. Autopilot v8.0 will have access to six times as many radar objects with the same hardware, with far more information per object. The radar captures data at 10 cycles per second. By comparing several contiguous frames against vehicle velocity and expected path, the car can tell whether something is real and assess the probability of collision (a minimal sketch of this frame-comparison idea follows below). The radar also has the ability to look ahead of vehicles it is tracking and spot potential threats before the driver can.
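
The sketch below illustrates the general idea described in the radar bullet above, not Tesla’s actual algorithm: detections are confirmed by checking that contiguous 10 Hz frames agree with simple physics, and a crude collision-risk score is derived from time-to-collision. All names, tolerances, and thresholds are our own illustrative assumptions.

    # Illustrative sketch only (not Tesla's algorithm): confirm a radar return
    # by checking that several contiguous 10 Hz frames are consistent with the
    # expected path and closing speed, then score collision risk from TTC.
    from dataclasses import dataclass

    FRAME_PERIOD_S = 0.1  # the radar cycles roughly 10 times per second

    @dataclass
    class RadarReturn:
        range_m: float            # distance to the detected object
        closing_speed_mps: float  # positive when the gap is shrinking
        lateral_offset_m: float   # offset from the ego vehicle's expected path

    def object_is_real(frames):
        """Treat a detection as real only if consecutive frames agree with physics."""
        if len(frames) < 3:
            return False
        for prev, curr in zip(frames, frames[1:]):
            predicted = prev.range_m - prev.closing_speed_mps * FRAME_PERIOD_S
            if abs(curr.range_m - predicted) > 1.0:   # 1 m tolerance, hypothetical
                return False
        return True

    def collision_risk(frames):
        """Crude risk score in [0, 1] based on time-to-collision and path overlap."""
        last = frames[-1]
        if last.closing_speed_mps <= 0 or abs(last.lateral_offset_m) > 1.5:
            return 0.0
        ttc_s = last.range_m / last.closing_speed_mps
        return max(0.0, min(1.0, 2.0 / ttc_s))  # risk ramps up as TTC drops below ~2 s

    frames = [RadarReturn(30.0 - i * 1.5, 15.0, 0.2) for i in range(4)]
    print(object_is_real(frames), collision_risk(frames))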

Control Domain

  • Perception and control are enabled through the Nvidia Tegra X1 processor.
  • Tesla provided its own self-driving control algorithms and some of the software algorithms fusing radar and camera data.

HMI Domain

The Model S works with two tracking mechanisms:

  • Locking onto the car ahead or sighting the lane marks. When there’s difficulty reading the road, a “Hold Steering Wheel” advisory appears. If lane keeping is interrupted, a black wheel gripped by red hands and a “Take Over Immediately” message appear on the dash. Failing to heed these suggestions cues chimes, and if you ignore all the audible and visible warnings, the Model S grinds to a halt and flashes its hazards. A heartbeat detector is not included.
     
  • A thin control stalk tucked behind the left side of the steering wheel commands the cruise-control speed (up or down clicks), the interval to the car ahead (a twist of an end switch), and Autosteer (Beta) initiation (two quick pulls back). A chime signals activation, and the cluster displays various pieces of information: the car ahead, if it’s within radar range, and lane markings, illuminated when in use for guidance. A steering-wheel symbol glows blue when your steering input is no longer needed, and Tesla’s gauge cluster also displays the speed limit and your cruise-control setting.

The Model S is considered Level 2 but will change lanes upon command via a flick of the turn-signal stalk (Auto Lane Change). To move two lanes, you must signal that desire with two separate flicks of the stalk. This function can also be used on freeway entrance and exit ramps.

Autopilot software v8.0 (rev. 2.52.22) will warn drivers if they are not engaged with their hands on the wheel (after 1 minute if not following a car, 3 minutes if following another car).

If a driver ignores 3 audible warnings within an hour, Autopilot v8.0 will disengage until the car has been parked.
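
As a rough illustration of how this warning-and-lockout behavior could be structured, the sketch below encodes the thresholds quoted above (1 minute without a lead car, 3 minutes with one, and a lockout after three ignored audible warnings within an hour). The structure is hypothetical; Tesla’s actual implementation has not been disclosed.

    # Hypothetical sketch of the hands-on-wheel policy described above; only the
    # thresholds come from the text, everything else is an assumption.
    class HandsOnPolicy:
        def __init__(self):
            self.hands_off_s = 0.0
            self.warning_times = []   # timestamps of audible warnings issued
            self.locked_out = False

        def tick(self, t, dt, hands_on, following_car, parked):
            if parked:
                self.locked_out = False        # lockout clears once the car is parked
            if self.locked_out:
                return "autopilot_unavailable"
            self.hands_off_s = 0.0 if hands_on else self.hands_off_s + dt
            limit_s = 180.0 if following_car else 60.0
            if self.hands_off_s < limit_s:
                return "ok"
            # issue an audible warning and count it against a rolling one-hour window
            self.hands_off_s = 0.0
            self.warning_times = [w for w in self.warning_times if t - w < 3600.0]
            self.warning_times.append(t)
            if len(self.warning_times) >= 3:
                self.locked_out = True
                return "autopilot_disengaged_until_parked"
            return "audible_warning"

    policy = HandsOnPolicy()
    print(policy.tick(t=61.0, dt=61.0, hands_on=False, following_car=False, parked=False))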

Autopilot 2.0 (VSI Profile)

Although we have not yet tested it, it is important to describe Tesla’s newer Autopilot 2.0. We will test it once the software functionality is more complete. At the moment, Autopilot 2.0 is less capable than Autopilot 1.0 because data is being collected in shadow mode to validate the performance of the advanced features.

Tesla’s new Autopilot 2.0 hardware suite ('Hardware 2' or 'HW2') consists of 8 cameras, 1 radar, ultrasonic sensors, and a new Nvidia supercomputer to support “Tesla Vision,” Tesla’s new end-to-end image-processing software and neural net. Available today on the Model S and Model X, and planned for the Model 3, the new Autopilot consists of the following:

  • Cameras: Three forward-facing cameras (main, wide, narrow), two side cameras in the B-pillars, a rear camera above the license plate, and left-rear- and right-rear-facing cameras.
  • Processor: Nvidia Drive PX 2, capable of 12 trillion operations per second. This is 40 times the processing power of the Autopilot 1.0 hardware.
  • Sonar: 12 ultrasonic sensors with a range of up to 8 meters.
  • GPS and IMU
  • Radar: Forward-facing radar.
  • Software: Tesla Vision, which uses deep neural networks developed in-house by Tesla.

Enhanced Autopilot - $5,000 at vehicle purchase, $6,000 later - The vehicle will match speed to traffic conditions, keep within a lane, automatically change lanes without requiring driver input, transition from one freeway to another, exit the freeway when your destination is near, self-park when near a parking spot and be summoned to and from your garage. Tesla’s Enhanced Autopilot software is expected to complete validation and be rolled out to your car via an over-the-air update in December 2016, subject to regulatory approval.

Full Self-Driving Capability - $8,000 at purchase or $10,000 later -  This doubles the number of active cameras from four to eight, enabling full self-driving in almost all circumstances, at what Tesla claims will be a probability of safety at least twice as good as the average human driver. The system is designed to be able to conduct short and long distance trips with no action required by the person in the driver’s seat. All the user needs to do is get in and tell their car where to go. The autopilot system will figure out the optimal route, navigate urban streets (even without lane markings), manage complex intersections with traffic lights, stop signs and roundabouts, and handle densely packed freeways with cars moving at high speed. This feature is expected to roll-out by the end of 2017.

Conclusion

The Model S is by far the most important production car today and a proxy for future passenger cars. The software enablement through over-the-air updating is the most striking differentiator, in our opinion. The rate at which new features, updates, and patches are deployed is astonishing. The volume of data is also a good indicator of what the requirements of a cloud-connected car should be.

Although not especially relevant to VSI’s purposes, it should be mentioned that the fit and finish of this vehicle is sub-par compared with German vehicles. This is especially true of the interior components, with the exception of the center stack, which is outstanding in quality and functionality. The same can be said of the instrument cluster, as would be expected in a digital vehicle like this.

The infotainment system in this vehicle is enhanced by the large display and is much more intuitive than in most conventional vehicles. There is no switchgear in this vehicle at all except for the steering-wheel stalks and controls.

Performance is another key attribute of this vehicle. Power management and battery management are outstanding and can be attributed to the all-electric powertrain as well as the ability to update the power-management software via OTA.

Autopilot works very well and gets better all the time. This is especially true of v8.0, where enhancements to sensor performance and the reduction of false positives are critical. The self-learning capabilities are reflected in the amount of data that is now exchanged between the mothership and the car itself.

In normal driving modes the Tesla Model S is very tight and performance-oriented. Handling is surprisingly good for a vehicle that weighs nearly 5,000 pounds. Acceleration is outstanding and rivals or exceeds most high-end performance (internal combustion) sedans. Braking is also very good, in part enhanced by the regenerative braking that feels like engine braking on conventional vehicles.

Autopilot HW2 (v8.1) will undoubtedly continue the path that Tesla is on. We don’t have any reason to doubt Tesla’s abilities to realize full automation with the new hardware platform.   

 

Understanding Operational Design Domains

[Figure: Safety Assessment Letter]

NHTSA’s highly automated vehicle (HAV) policy, published in September 2016, provides a regulatory framework and best practices for the safe design, development, testing, and deployment of HAVs for manufacturers and all other entities involved.

Any company that plans to test or deploy highly automated vehicles on public roads in the United States is required to submit a “Safety Assessment Letter” to NHTSA’s Office of the Chief Counsel. NHTSA’s guidance for automated vehicle development calls for many items to be detailed in the letter, showing whether the company is meeting the guidance.

Among other things, defining driving scenarios is the critical first step for OEMs, tier ones, and other technology companies that want their HAVs to be out on the road. The definition of where (roadway types, roadway speeds, etc.) and when (under what conditions, such as day/night, normal or work zone, etc.) an HAV is designed to operate must be described in detail in the letter.

[Figure: Operational Design Domains]

To realize such scenarios, developers must define the core functional requirements enabled by perception, processing, and control domain technologies, as well as safety monitors; these systems should then be rigorously tested, simulated, and validated.

Such processes, documented in NHTSA’s “Guidance Specific to Each HAV System” within the “Framework for Vehicle Performance Guidance,” fall into four parts: 1) ODD, 2) OEDR, 3) Fall Back, and 4) Testing/Validation/Simulation. Below is VSI’s understanding of, and guidance on, the key tasks related to each part in developing and designing HAVs.

  • A vehicle with automated features must have an established Operational Design Domain (ODD). This is a requirement and a core initial element of the letter. An SAE Level 2, 3, or 4 vehicle could have one or multiple systems, one for each ODD (e.g., freeway driving, self-parking, geo-fenced urban driving, etc.).

The key task here is to define the various conditions and “scenarios” (the ODD) within which the system must be able to detect and respond to a variety of normal and unexpected objects and events (OEDR), and even fall back to a minimal risk condition in the case of system failure (Fall Back).

  • A well-defined ODD is necessary to determine what OEDR (Object and Event Detection and Response) capabilities are required for the HAV to safely operate within the intended domain. OEDR requirements are derived from an evaluation of normal driving scenarios, expected hazards (e.g., other vehicles, pedestrians), and unspecified events (e.g., emergency vehicles, temporary construction zones) that could occur within the operational domain. 

The key task here is defining the “functional requirements” as well as the “enabling technologies” (perception, driving policy, and control) for each scenario defined in the ODD.

  • Manufacturers and other entities should have a documented process for assessment, testing, and validation of their Fall Back approaches to ensure that the vehicle can be put in a minimal risk condition in cases of HAV system failure or a failure in a human driver’s response when transitioning from automated to manual control.

The key task here is defining what the fall back strategy should be and how companies should go about achieving it. A Fall Back “system” should be part of an HAV system, operating specifically under a condition of system failure (especially in L4 automation, where the driver is out of the loop). System failure is another “condition” within the ODD for which you need to design the system architecture, accommodating a fail-operational or fail-over Fall Back safety system. OEDR functional requirements, on the other hand, come from outside the vehicle and cope with environmental “conditions,” whether predictable or not.

VSI believes that HAVs will come to rely on AI-based systems to cope with the “reasoning” that will become necessary for vehicles to handle edge cases. In L4 vehicles you may have a rule-based, deterministic, deductive system complemented by a probabilistic, AI-based, inductive system to enable fully fail-operational (as opposed to L3 fail-over, which hands back to the driver) automated driving in all driving scenarios.

When using a probabilistic model, it is important to use a large dataset that includes a wide variety of data and many types of environments to improve the performance of the AI system. It is quite challenging for these AI modules to pass performance and safety validation even if their accuracy is very high. A common practice to give the AI modules some credibility is to do extensive testing via simulation, test tracks, real-world testing, and so on. Ultimately, however, it may be difficult to assign a high ASIL rating to an AI-based system despite favorable outcome-based validation.

Considering that it will be difficult to assign a high ASIL rating to an AI-based system because of its limited traceability, there is a growing school of thought that the way to cope with low-ASIL-rated probabilistic algorithms like AI is to pair them with a high-ASIL-rated, deductively based system that monitors the probabilistic system and the decisions it makes (a safety monitor system).

On the other hand, the deductive system is not capable of full driving/navigation; it is only capable of acting as a fail-over system that safely shuts things down (pulling over, coming to a stop, or simply continuing to follow the lane safely). For AI to be deployed in a pragmatic way, there will still be traditional deterministic approaches to collecting, processing, and preparing data for input into the AI system. On the back end, deterministic systems check the output of the AI system. This provides a safety-net layer for the probabilistic, AI-based autonomous control system.
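
A minimal sketch of this plan-and-check pattern is shown below, assuming a hypothetical learned planner and a rule-based acceptance test; when the check fails, a deterministic minimal-risk maneuver (stay in lane and decelerate) is substituted. Names and thresholds are illustrative, not taken from any production system.

    # Illustrative sketch of the "safety monitor" pattern described above: a
    # probabilistic planner proposes a trajectory, and a deterministic,
    # rule-based checker either accepts it or substitutes a fail-over maneuver.
    from dataclasses import dataclass

    @dataclass
    class Trajectory:
        lane_offset_m: float   # lateral deviation from lane center
        min_gap_m: float       # smallest predicted gap to any obstacle
        max_speed_mps: float

    def ai_planner(scene) -> Trajectory:
        """Stand-in for a learned, probabilistic planner."""
        return Trajectory(lane_offset_m=0.3, min_gap_m=12.0, max_speed_mps=31.0)

    def deterministic_check(traj: Trajectory, speed_limit_mps: float) -> bool:
        """High-integrity, rule-based acceptance test (deductive, traceable)."""
        return (abs(traj.lane_offset_m) < 0.5
                and traj.min_gap_m > 5.0
                and traj.max_speed_mps <= speed_limit_mps)

    def minimal_risk_fallback(current_speed_mps: float) -> Trajectory:
        """Fail-over behavior: keep the lane and decelerate toward a stop."""
        return Trajectory(0.0, float("inf"), max(0.0, current_speed_mps - 2.0))

    def control_cycle(scene, speed_limit_mps: float, current_speed_mps: float):
        proposal = ai_planner(scene)
        if deterministic_check(proposal, speed_limit_mps):
            return proposal
        return minimal_risk_fallback(current_speed_mps)

    # The proposed 31 m/s exceeds the 29 m/s limit, so the monitor overrides it.
    print(control_cycle(scene=None, speed_limit_mps=29.0, current_speed_mps=27.0))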

  • Autonomous Vehicle Testing, Validation and Simulation: Software testing is all too often simply a bug hunt rather than a well-considered exercise in ensuring quality, according to Philip Koopman of Edge Case Research. There are challenges awaiting developers who are attempting to qualify fully autonomous, NHTSA Level 4 vehicles for large-scale deployment. We also need to consider how such vehicles might be designed and validated within the ISO 26262 V framework. The reason for working within this framework is that it is an accepted practice for ensuring safety. It is a well-established safety principle that computer-based systems should be considered unsafe unless convincingly argued otherwise.

The key task here is understanding best practices and solutions for the test and validation of the HAV system. It is widely known that development done within the traditional V-model is highly relevant for many of the systems and components used. However, system-level performance would likely require simulation as well as outcome-based validation, on the premise that real road testing can never be complete enough to cover edge cases (the infeasibility of complete testing). It is impractical to develop and deploy an autonomous vehicle that will handle every possible combination of scenarios in an unrestricted real-world environment. Therefore, it is critical to engage specialized testing and simulation tool companies in the process of developing HAV scenarios.

A few companies are stepping up to offer solutions and know-how for this complex software development issue, especially in simulation techniques.

  • Ricardo is leveraging agent-based modeling (ABM) simulation methodologies to support advanced testing and analysis of autonomous vehicle performance. The approach combines agents (vehicles, people or infrastructure) with specific behaviors (selfishness, aggression) and connects them to a defined environment (cities or test tracks) to understand emergent behaviors during a simulation. The practice is used to recreate real-world driving scenarios in a virtual environment to test complex driving scenarios.
  • Edge Case Research is developing an automated software robustness testing tool that prioritizes tests that are most likely to find safety hazards. Scalable testing tools give developers the feedback they need early in development, so that they can get on the road more quickly with safer, more robust vehicles.
  • All driving simulation and test methods require the generation of test scenarios against which the systems are to be tested. Vertizan developed a constrained randomization test automation tool, Vitaq, to automatically create the required test scenarios for testing ADAS and autonomous systems in a driving simulator setup. The constrained randomization is deployed at two levels: 1) static randomization and 2) dynamic randomization. Static randomization is used to automatically create the base scenario with respect to possible path trajectories of vehicles, environment variables, and traffic variables. Dynamic randomization is achieved by real-time communication between the driving simulator and the constrained randomization tool via a TCP/IP HiL interface (client-server interface). Constrained randomization is then used to intelligently explore the possible sample space to find the corner cases for which an ADAS or autonomous system may fail (a generic sketch of this two-level approach follows below).
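
The sketch below is a generic illustration of two-level constrained randomization, not Vertizan’s Vitaq implementation: static randomization samples a base scenario within declared constraints, and dynamic randomization perturbs it using run-time feedback that, in a HiL setup, would arrive over the simulator link. Variable names and constraint values are assumptions.

    # Generic sketch of two-level constrained randomization for scenario
    # generation (illustrative only). Static randomization builds a base
    # scenario within constraints; dynamic randomization perturbs it from
    # simulator feedback to steer the search toward corner cases.
    import random

    STATIC_CONSTRAINTS = {
        "ego_speed_kph": (30, 130),
        "lead_gap_m": (5, 80),
        "weather": ["clear", "rain", "fog"],
        "cut_in_lateral_speed_mps": (0.5, 2.0),
    }

    def static_randomize(seed: int) -> dict:
        """Sample each scenario variable within its declared constraint."""
        rng = random.Random(seed)
        scenario = {}
        for name, spec in STATIC_CONSTRAINTS.items():
            scenario[name] = rng.uniform(*spec) if isinstance(spec, tuple) else rng.choice(spec)
        return scenario

    def dynamic_randomize(scenario: dict, sim_feedback: dict) -> dict:
        """Tighten the scenario toward a corner case using run-time feedback,
        e.g. shrink the lead gap while the system under test still copes."""
        updated = dict(scenario)
        if sim_feedback.get("min_ttc_s", 10.0) > 2.0:
            updated["lead_gap_m"] = max(5.0, scenario["lead_gap_m"] * 0.8)
        return updated

    base = static_randomize(seed=42)
    print(base)
    print(dynamic_randomize(base, sim_feedback={"min_ttc_s": 3.5}))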

Conclusion

Developing and deploying autonomous vehicles presents challenges to the companies that are designing, building, or operating them. It also presents challenges to the governing bodies that must ensure the safety of these technologies.

To create a framework for this, NHTSA recently established requirements in the form of the Safety Assessment Letter, essentially a detailed document that covers many areas of interest. The most significant and challenging element of the requirement is defining the Operational Design Domain (ODD). The ODD is the definition of where (roadway types, roadway speeds, etc.) and when (under what conditions, such as day/night, normal or work zone, etc.) an HAV is designed to operate.