NHTSA’s HAV policy, published in September 2016, provides a regulatory framework and a set of best practices for the safe design, development, testing, and deployment of HAVs, aimed at manufacturers and all other entities involved.
Any company that plans to test or deploy highly automated vehicles on public roads in the United States is required to submit a “Safety Assessment Letter” to NHTSA’s Office of the Chief Counsel. NHTSA’s guidance for automated vehicle development calls for many items to be detailed in the letter, demonstrating how the company is meeting that guidance.
Among these items, defining driving scenarios is the critical first step for OEMs, Tier 1 suppliers, and other technology companies that want their HAVs out on the road. The letter must describe in detail where (roadway types, roadway speeds, etc.) and when (under what conditions, such as day/night, normal or work zone, etc.) an HAV is designed to operate.
To realize such scenarios, core functional requirements must be defined. These requirements are enabled by perception, processing, and control domain technologies as well as safety monitors, and the resulting systems should be rigorously tested, simulated, and validated.
These processes, documented in NHTSA’s “Guidance Specific to Each HAV System” within the “Framework for Vehicle Performance Guidance,” fall into four parts: 1) ODD, 2) OEDR, 3) Fall Back, and 4) Testing/Validation/Simulation. Below is VSI’s understanding of, and guidance on, the key tasks related to each part in developing and designing HAVs.
- A vehicle with automated features must have an established Operational Design Domain (ODD). This is a requirement and the core initial element of the letter. An SAE Level 2, 3, or 4 vehicle could have one or multiple systems, one for each ODD (e.g., freeway driving, self-parking, geo-fenced urban driving, etc.).
The key task here is to define the various conditions and “scenarios” (ODD) in which the vehicle must be able to detect and respond to a variety of normal and unexpected objects and events (OEDR), and even fall back to a minimal risk condition in the case of system failure (Fall Back).
- A well-defined ODD is necessary to determine what OEDR (Object and Event Detection and Response) capabilities are required for the HAV to safely operate within the intended domain. OEDR requirements are derived from an evaluation of normal driving scenarios, expected hazards (e.g., other vehicles, pedestrians), and unspecified events (e.g., emergency vehicles, temporary construction zones) that could occur within the operational domain.
The key task here is defining the “functional requirements” as well as the “enabling technologies” (perception, driving policy, and control) for each scenario defined in the ODD.
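As an illustration, an ODD entry such as the freeway-driving example above can be captured as a simple data structure that ties operating conditions to functional requirements. The field names, example values, and the `in_odd` check below are hypothetical sketches, not part of NHTSA’s guidance:

```python
from dataclasses import dataclass, field

@dataclass
class OddScenario:
    """One ODD entry: where and when the HAV is designed to operate."""
    name: str
    roadway_types: list
    max_speed_kph: float
    conditions: list                      # e.g. ["day", "night", "work_zone"]
    functional_requirements: list = field(default_factory=list)

# Hypothetical example: a freeway-driving ODD
freeway = OddScenario(
    name="freeway_driving",
    roadway_types=["divided_highway"],
    max_speed_kph=120,
    conditions=["day", "night", "dry"],
    functional_requirements=["lane_keeping", "vehicle_detection",
                             "lead_vehicle_following"],
)

def in_odd(scenario, roadway, speed_kph, condition):
    """Check whether the current driving context falls inside the ODD."""
    return (roadway in scenario.roadway_types
            and speed_kph <= scenario.max_speed_kph
            and condition in scenario.conditions)
```

A structure like this makes the ODD machine-checkable: the same definition that goes into the Safety Assessment Letter can gate whether the automated system is allowed to engage.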
- Manufacturers and other entities should have a documented process for assessment, testing, and validation of their Fall Back approaches to ensure that the vehicle can be put in a minimal risk condition in cases of HAV system failure or of a failure in the human driver’s response when transitioning from automated to manual control.
The key task here is defining what the Fall Back strategy should be and how companies should go about achieving it. A Fall Back “system” should be part of the HAV system, operating specifically under a condition of system failure (especially in L4 automation, where the driver is out of the loop). System failure is another “condition” within the ODD, one that requires a system architecture accommodating a fail-operational or fail-over Fall Back safety system. OEDR functional requirements, by contrast, arise from outside the vehicle and cope with environmental “conditions,” whether predictable or not.
VSI believes that HAVs will come to rely on AI-based systems for the “reasoning” that will become necessary for vehicles to handle edge cases. In L4 vehicles, a rule-based, deterministic, deductive system may be complemented by a probabilistic, AI-based, inductive system to enable fully fail-operational (as opposed to L3 fail-over, which hands back to the driver) automated driving across all driving scenarios.
When using a probabilistic model, it is important to train on a large dataset covering a wide variety of data and many types of environments to improve the performance of the AI system. Validating the performance and safety of these AI modules is quite challenging even when their accuracy is very high. A common practice for giving AI modules some credibility is extensive testing via simulation, test tracks, real-world driving, and so on. Ultimately, however, it may be difficult to assign a high ASIL rating to an AI-based system despite favorable outcome-based validation.
Because it is difficult to assign a high ASIL rating to an AI-based system given its limited traceability, there is a growing school of thought that low-ASIL-rated probabilistic algorithms such as AI should be paired with a high-ASIL-rated, deductively based system that monitors the probabilistic system and the decisions it makes (a “safety monitor” system).
The deductive system, on the other hand, is not capable of full driving/navigation; it can act only as a fail-over system that safely shuts the vehicle down (pulling over, coming to a stop, or simply continuing to safely follow the lane). For AI to be deployed pragmatically, traditional deterministic approaches will still be used to collect, process, and prepare data for input into the AI system. On the back end, deterministic systems check the output of the AI system. This provides a safety-net layer for an AI-based autonomous control system that is probabilistic in nature.
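The monitor-plus-fallback arbitration described above can be sketched in a few lines: a deterministic, rule-based envelope check sits behind the probabilistic planner and substitutes a fail-over action whenever the planner’s command leaves the safe envelope. The limit values and the `arbitrate` function are illustrative assumptions, not a production design:

```python
def within_safety_envelope(cmd, max_steer_deg=5.0, max_decel=6.0, max_accel=2.0):
    """Deterministic, rule-based check on the planner's output.

    Returns True only if the command stays inside a predefined safe
    envelope (limits here are invented for illustration)."""
    steer_deg, accel = cmd
    if abs(steer_deg) > max_steer_deg:
        return False
    if accel < -max_decel or accel > max_accel:
        return False
    return True

def arbitrate(ai_command, fallback_command):
    """Pass the probabilistic system's command through only if the
    high-ASIL monitor accepts it; otherwise engage the fail-over
    action (e.g., a controlled stop or safe lane following)."""
    if within_safety_envelope(ai_command):
        return ai_command
    return fallback_command

# Hypothetical commands: (steering angle in degrees, acceleration in m/s^2)
accepted = arbitrate((2.0, -1.0), (0.0, -3.0))   # within envelope
rejected = arbitrate((30.0, 0.0), (0.0, -3.0))   # steer too large -> fallback
```

The design point is that only the small, traceable monitor needs the high ASIL rating; the AI planner inside the envelope can remain probabilistic.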
- Autonomous Vehicle Testing, Validation and Simulation: Software testing is all too often simply a bug hunt rather than a well-considered exercise in ensuring quality, according to Philip Koopman of Edge Case Research. Challenges await developers who attempt to qualify fully autonomous, NHTSA Level 4 vehicles for large-scale deployment. We also need to consider how such vehicles might be designed and validated within the ISO 26262 V framework, since that is the accepted practice for ensuring safety. It is a well-established safety principle that computer-based systems should be considered unsafe unless convincingly argued otherwise.
The key task here is understanding best practices and solutions for the test and validation of the HAV system. Development within the traditional V-model remains highly relevant for many of the systems and components used. System-level performance, however, will likely require simulation as well as outcome-based validation, on the premise that real-road testing can never be complete enough to cover edge cases (the infeasibility of complete testing). It is impractical to develop and deploy an autonomous vehicle that handles every possible combination of scenarios in an unrestricted real-world environment. It is therefore critical to engage specialized testing and simulation tool companies in the process of developing HAV scenarios.
A number of companies are stepping up to offer solutions and know-how for this complex software development issue, especially in simulation techniques.
- Ricardo is leveraging agent-based modeling (ABM) simulation methodologies to support advanced testing and analysis of autonomous vehicle performance. The approach combines agents (vehicles, people, or infrastructure) with specific behaviors (selfishness, aggression) and connects them to a defined environment (cities or test tracks) to understand emergent behaviors during a simulation. The practice recreates real-world driving conditions in a virtual environment to test complex driving scenarios.
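A toy version of agent-based modeling can be sketched as follows. The `aggression` parameter and the simple car-following rule are invented for illustration and are not Ricardo’s actual methodology; the point is that global effects (such as queues or stop-and-go waves) emerge from purely local agent behaviors:

```python
class VehicleAgent:
    """Minimal agent on a one-lane road; 'aggression' scales its
    desired speed (an assumed, illustrative behavior parameter)."""
    def __init__(self, pos, aggression):
        self.pos = pos
        self.aggression = aggression

    def step(self, leader_pos):
        # Move toward desired speed, but keep at least a 1 m gap
        # to the vehicle ahead (if any).
        desired = 1.0 + self.aggression
        gap = float("inf") if leader_pos is None else leader_pos - self.pos
        self.pos += min(desired, max(0.0, gap - 1.0))

def simulate(agents, steps):
    """Advance all agents front-to-back each step; queueing behavior
    emerges from the local gap-keeping rule alone."""
    for _ in range(steps):
        leader = None
        for a in sorted(agents, key=lambda v: v.pos, reverse=True):
            a.step(None if leader is None else leader.pos)
            leader = a
    return sorted(a.pos for a in agents)
```

Real ABM frameworks add many more behaviors and environment models, but the structure (agents, behaviors, environment, emergent outcomes) is the same.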
- Edge Case Research is developing an automated software robustness testing tool that prioritizes tests that are most likely to find safety hazards. Scalable testing tools give developers the feedback they need early in development, so that they can get on the road more quickly with safer, more robust vehicles.
- All driving simulation and test methods require the generation of test scenarios against which the systems are to be tested. Vertizan developed a constrained-randomization test automation tool, Vitaq, to automatically create the required test scenarios for testing ADAS and autonomous systems in a driving simulator setup. The constrained randomization is deployed at two levels: 1) static randomization and 2) dynamic randomization. Static randomization automatically creates the base scenario with respect to the possible path trajectories of vehicles, environment variables, and traffic variables. Dynamic randomization is achieved by real-time communication between the driving simulator and the constrained-randomization tool via a TCP/IP HiL interface (a client-server interface). Constrained randomization is then used to intelligently explore the possible sample space and find the corner cases in which an ADAS or autonomous system may fail.
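A minimal sketch of the static-randomization idea is shown below. The parameter names, ranges, and the stand-in failure check are invented for illustration (this is not Vitaq’s actual interface); a real tool would drive a simulator instead of the analytic `is_failure` placeholder:

```python
import random

# Static randomization: draw base scenarios from constrained ranges.
# Parameter names and bounds are illustrative assumptions.
CONSTRAINTS = {
    "ego_speed_kph": (30, 130),
    "cut_in_gap_m":  (5, 60),
    "road_friction": (0.3, 1.0),    # icy/wet through dry
}

def random_scenario(rng):
    """Sample one scenario uniformly within the constraints."""
    return {k: rng.uniform(lo, hi) for k, (lo, hi) in CONSTRAINTS.items()}

def is_failure(scenario):
    """Stand-in for a simulator run: flag scenarios where the braking
    distance exceeds the cut-in gap (a plausible corner case)."""
    v = scenario["ego_speed_kph"] / 3.6                       # m/s
    braking = v * v / (2 * 9.81 * scenario["road_friction"])  # m
    return braking > scenario["cut_in_gap_m"]

def explore(n_trials, seed=0):
    """Sample the constrained space and collect failing corner cases."""
    rng = random.Random(seed)
    return [s for s in (random_scenario(rng) for _ in range(n_trials))
            if is_failure(s)]
```

Dynamic randomization would then feed values like these to the simulator at run time over the HiL interface, steering subsequent samples toward the failing regions.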
Developing and deploying autonomous vehicles presents challenges to the companies that design, develop, and field them. It also presents challenges to the governing bodies that must ensure the safety of these technologies.
To create a framework for this, NHTSA recently established requirements in the form of the Safety Assessment Letter, essentially a detailed document covering many areas of interest. The most significant and challenging element of the requirement is defining the Operational Design Domain (ODD): where (roadway types, roadway speeds, etc.) and when (under what conditions, such as day/night, normal or work zone, etc.) an HAV is designed to operate.