
Verification & validation (V&V) to ensure high quality of complex automotive software

The automotive industry is racing to new heights, driven by rapid technological evolution and modernization. Advances in infotainment, connected vehicles, autonomous driving, and high-performance computing have taken the world of mobility into a new era. According to a report by MarketsandMarkets Research Private Ltd., the global automotive software market is projected to grow at a compound annual growth rate (CAGR) of 16.9%, reaching USD 37.0 billion by 2025. Original equipment manufacturers (OEMs) and Tier 1 suppliers not only have to adapt rapidly to these technology trends but must also enable faster production cycles without compromising quality and safety.

Fig. 1: Complex automotive software

These advancements are putting ever more software components into the vehicle; with millions of lines of code and countless functionalities, vehicles are increasingly driven by software. A premium vehicle today typically contains more than 100 million lines of code, a figure that has grown steeply over the last years and will keep growing exponentially in the years to come. One industry estimate predicts roughly a tenfold increase of the current software size within the next few years. The complexity grows not only because of the sheer amount of code but also because of the increasing level of integration across different vehicle components.

The vehicle software architecture encompasses multiple layers, with a scalable, extendable operating system provisioning middleware and APIs that handle multiple business functions. The architecture also hosts HMI (Human-Machine Interface) layers interacting with various applications both on-board and off-board, integrated with cloud infrastructure to enable improvements in connected vehicle features. For autonomous driving functions, the algorithms require advanced computing capabilities and real-time integration with various cameras and sensors. Autonomous solutions must effectively handle the time synchronization of the data retrieved from these cameras and sensors and cope with high volumes of structured and unstructured data. All of this must be processed within milliseconds to enable the real-time decision-making that ensures the safety of drivers, passengers, and pedestrians.
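
As a minimal illustration of the time-synchronization problem described above, the following Python sketch aligns camera and lidar frames to a common timestamp before they would be handed to a perception function. All names, the frame structure, and the tolerance value are hypothetical and not taken from any specific framework; it assumes non-empty, time-sorted frame lists.

```python
from bisect import bisect_left
from dataclasses import dataclass

@dataclass
class Frame:
    timestamp_ms: int   # capture time in milliseconds
    payload: bytes      # raw sensor data

def nearest(frames, t_ms):
    """Return the frame whose timestamp is closest to t_ms (frames sorted by time)."""
    times = [f.timestamp_ms for f in frames]
    i = bisect_left(times, t_ms)
    candidates = frames[max(i - 1, 0):i + 1]
    return min(candidates, key=lambda f: abs(f.timestamp_ms - t_ms))

def synchronize(camera_frames, lidar_frames, tolerance_ms=50):
    """Pair each camera frame with the closest lidar frame within the tolerance."""
    pairs = []
    for cam in camera_frames:
        lid = nearest(lidar_frames, cam.timestamp_ms)
        if abs(lid.timestamp_ms - cam.timestamp_ms) <= tolerance_ms:
            pairs.append((cam, lid))   # synchronized pair handed to sensor fusion
    return pairs
```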

In addition, the architecture should integrate seamlessly across technology streams, for example infotainment systems with driver assistance functions, or data analytics with cloud connectivity. These integrations can invoke individual functions or code snippets many times during a single drive, making the execution frequency of a software function remarkably high in current vehicles. At such frequencies, even a small failure rate leads to major faults in the overall system, and some of these faults could be fatal or cause serious injuries. For instance, a function invoked a million times per drive with a failure probability of just one in 100,000 would be expected to fail about ten times on every trip. According to the J.D. Power 2019 U.S. Vehicle Dependability Study (VDS), the industry average for 2019 is 136 problems per 100 vehicles (PP100), an improvement of six PP100 from last year, which is a lower rate of improvement than the 14 PP100 in 2018 compared with 2017. The report also stated that "Flawless dependability is a determining factor in whether customers remain loyal to a brand." Automotive software should therefore be almost defect-free, because defects affect not only brand and cost but also people's safety. This underlines the need for efficient verification and validation (V&V) methodologies to ensure the high quality of complex automotive software. In order to seamlessly handle today's dynamic business scenarios, verification and validation should be based on the following principles:

  • Early involvement with Continuous Testing approach
  • Effective solutions powering automation to meet quality with speed
  • Efficient frameworks enabling production-like test environments
  • Enhanced test strategy with well-defined quality standards

Early involvement with Continuous Testing approach

Gone are the days when testing was considered only at the far end of the project life cycle. With agile ways of working that embrace test-driven and behavior-driven methodologies, testing should be applied to every aspect and at the early stages of the project life cycle. The concept of early involvement in testing is best explained on the basis of the V-model.

Does your test approach take the standard V-model principle into consideration?

The V-model comprises both quality assurance (QA) and quality control (QC) aspects. Quality assurance is a preventive mechanism: a defined QA approach leads to defect prevention while optimizing the overall cost of quality of the software development. Quality assurance encompasses verification of the process applied to build the software, whereas quality control pertains to the validation of the developed software.

Fig. 2: V-model: conceptual view

The left side of the V-model depicts the verification practices and the right side the validation aspects. The principles of this model can be applied across multiple development methodologies such as agile, iterative, and waterfall. Well-defined gating criteria based on industry standards like ASPICE and ISO enable quality-driven software development right from project inception. This helps project teams measure quality quantitatively and, in turn, make the right decisions on production releases.

In today's scenario, many organizations plan frequent releases to continuously enhance product capabilities and customer experience. This demands continuous delivery with frequent new builds, integrations, and deployments. Testing has to scale up to meet this demand by adopting a Continuous Testing (CT) approach.

Are you falling behind in implementing Continuous Testing (CT)?

Continuous Testing (CT) is based on the principle of triggering tests for each and every change to the code base as soon as it happens. The tests are automated and triggered for every code merge. They are configured and executed in each branch associated with the project's release methodology, from a local developer branch to the release master. The critical elements for implementing Continuous Testing are:

  • Robust test automation framework
  • Compatible tool chain with regard to build management, code repository, test artefacts, and reporting
  • Configurable test bots to set up on-demand simulated test environments
  • Extendable test bots with hardware control system to enable remote executions by reducing the need of multiple hardware setups

Fig. 3: Continuous testing approach

Fig. 3 depicts a typical Continuous Testing approach across the various phases of software development. For instance, when a developer is ready to merge code, a set of automated functional tests is triggered. After successful execution of these tests the code is merged into the repository; in case of failure, the change loops back to the developer until all configured tests pass. This approach is followed for each developer, ensuring quality and defect containment at the developer level. In the next stages, in line with the project quality goals, the required tests are triggered at the software integration and software qualification levels. These tests are triggered for every build; upon successful execution, the build is certified for deployment. Continuous Testing thus enables early testing and helps get the software development right the first time.
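
A minimal sketch of such a merge gate is shown below. It assumes a hypothetical project layout in which functional, integration, and qualification tests live in separate folders and are run with pytest; the actual tool chain, stage names, and exit-code handling would follow the project's own CI setup.

```python
import subprocess
import sys

def run_stage(name, command):
    """Run one test stage and report whether it passed."""
    print(f"Running stage: {name}")
    result = subprocess.run(command)
    return result.returncode == 0

def merge_gate():
    # Stages mirror Fig. 3: developer-level functional tests first,
    # then integration and qualification suites on the candidate build.
    stages = [
        ("functional", ["pytest", "tests/functional", "-q"]),
        ("integration", ["pytest", "tests/integration", "-q"]),
        ("qualification", ["pytest", "tests/qualification", "-q"]),
    ]
    for name, command in stages:
        if not run_stage(name, command):
            # A failure loops the change back to the developer;
            # the merge/deployment stays blocked.
            sys.exit(f"Stage '{name}' failed - merge blocked")
    print("All configured tests passed - build certified for merge/deployment")

if __name__ == "__main__":
    merge_gate()
```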

Effective solutions powering automation to meet quality with speed

Effective test automation frameworks are crucial to improve validation capabilities and to adopt principles like Continuous Testing. While building automation frameworks, the organization should take care not to let multiple tools mushroom for handling different topics; such a scenario adds complexity to maintaining the different tools, their integrations, and their usage in projects. An automation framework should be built with a product-development mindset, and all the best practices of building and maintaining a product should be applied to it.

What makes an automation framework effective?

Specific to the automotive industry, the automation framework should cater to the needs of diverse technology streams like infotainment, connected vehicle, autonomous solutions, and user experience. The framework should be designed by factoring in the key characteristics below:

  • Platform-agnostic, for example, able to validate applications built on Android, QNX, or Linux, and able to write automated UI tests even when the code of the SUT (System Under Test) is not yet ready
  • Cohesive to validate service layer, user interface, and hardware integration
  • Modularized to be extendable to incorporate newer use cases like speech validations and audio/video streaming, and to build simulated environments
  • Test bots and time-triggered schedulers to enable Continuous Testing within the Continuous Integration (CI) chain
  • Reusable to be implemented across various functional and non-functional test types
  • Capable of process automation by integrating test/defect/project management tools, thus minimizing the effort spent on test reporting

Abiding by these characteristics, automation can be applied early in the life cycle; a stabilized application is not a prerequisite. Validation engineers can start working on the automation scripts in parallel with development activities, as sketched below. This enables in-sprint automation and eases adoption of test-driven and behavior-driven development methodologies.
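
The platform-agnostic and modular characteristics above can be pictured with a small Python sketch. The class and method names are purely illustrative assumptions: a common driver interface hides whether the system under test runs on Android, QNX, or Linux, so the same test case can be reused across targets and written before the real SUT exists.

```python
from abc import ABC, abstractmethod

class TargetDriver(ABC):
    """Common interface the test cases depend on - not tied to a specific platform."""

    @abstractmethod
    def tap(self, element_id: str) -> None: ...

    @abstractmethod
    def read_text(self, element_id: str) -> str: ...

class AndroidDriver(TargetDriver):
    def tap(self, element_id):
        print(f"[android] tap {element_id}")   # would call the Android test tooling

    def read_text(self, element_id):
        return "22.5 C"                        # stubbed value for this sketch

class LinuxHmiDriver(TargetDriver):
    def tap(self, element_id):
        print(f"[linux] tap {element_id}")     # would call the Linux HMI tooling

    def read_text(self, element_id):
        return "22.5 C"

def test_climate_setpoint(driver: TargetDriver):
    """Same test body regardless of the underlying platform."""
    driver.tap("climate_plus_button")
    assert driver.read_text("climate_setpoint") == "22.5 C"

# The framework (or the CI job) decides which concrete driver to inject:
test_climate_setpoint(AndroidDriver())
test_climate_setpoint(LinuxHmiDriver())
```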

Efficient frameworks enabling production-like test environments

For technologies like autonomous driving, the frameworks need to be further enhanced to simulate varied driving scenes and sensor evaluations. Developing more accurate and safer autonomous solutions depends heavily on reliable data gathered during test drives. More advanced functions require even more data generated by different vehicle sensors such as radar or lidar. With increasing amounts of data being processed, organizations need cutting-edge tooling to record, ingest, enhance, find, and transfer the data. Fig. 4 shows a typical test environment for validating autonomous driving solutions.

Fig. 4: Autonomous driving: typical test environment

Capture and replay are the critical factors in effectively validating autonomous driving solutions. Capture is the process of collating data from all sources during the test drive; replay is the process of re-injecting that data into HiL (hardware-in-the-loop) or SiL (software-in-the-loop) setups during verification and validation. In real time, autonomous vehicles must identify various objects (e.g. vehicles, pedestrians, road signs, markings, trees, buildings, traffic lights) along with weather and lighting conditions. To master the extensive task of validating the autonomous driving function, the test environment is aided by automated labeling, enhancement, and simulation of data, which allows efficient generation and administration of test cases. The test environment is further enhanced by cloud technology that helps manage petabytes of driving scenes, improve data storage, and reuse data through proper data annotations. The data and test artefacts should be well structured to seamlessly manage constantly changing project goals and requirements.
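
The capture/replay idea can be sketched in a few lines of Python; the log format and field names here are simplified assumptions, not a specific recording toolchain. During the test drive each timestamped sensor message is written to a log, and during SiL replay the messages are re-injected in order while preserving the original inter-arrival times.

```python
import json
import time

def capture(message_stream, log_path):
    """Write each timestamped sensor message to a JSON-lines log."""
    with open(log_path, "w") as log:
        for msg in message_stream:           # msg: {"t": seconds, "sensor": ..., "data": ...}
            log.write(json.dumps(msg) + "\n")

def replay(log_path, inject):
    """Re-inject logged messages into the SiL/HiL stub, keeping the original timing."""
    with open(log_path) as log:
        messages = [json.loads(line) for line in log]
    start = time.monotonic()
    t0 = messages[0]["t"]
    for msg in messages:
        # Wait until the message's original offset from the first message has elapsed.
        delay = (msg["t"] - t0) - (time.monotonic() - start)
        if delay > 0:
            time.sleep(delay)
        inject(msg)                          # hand the frame to the software under test

# Usage sketch: replay a recorded drive into a simple stub of the function under test
# replay("drive_0042.jsonl", inject=lambda m: print(m["sensor"], m["t"]))
```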

Enhanced test strategy with well-defined quality standards

Current software development methods should be flexible towards changing requirements and customer needs. Many test teams struggle to handle these frequent changes, and inefficient management of test artefacts adds to the chaos. This makes test teams perform redundant tests, sometimes overdo testing, and yet lack the confidence to decide on the "go live". It is therefore important for validation teams to define and align the quality goals of the project with industry standards such as Automotive SPICE, ISO 9001, and ISO 26262 (functional safety). A well-defined test strategy based on these standards enables test teams to take the right decisions during the software development life cycle. A good test strategy takes the following into consideration:

Fig. 5: Quantitative measurements

  • Innovative test techniques with principles based on impact analysis, risk-based approach, fault injection, and non-functional tests
  • Quantitative measurement of quality through mature key performance indicators (KPIs) and metrics covering productivity, quality, and test effectiveness (see the sketch after this list)
  • Business-oriented, with preventive measures to minimize failures and reduce their impact
  • Aligning the release strategy to enable faster time to market and readiness to mass production as needed
  • Optimizing test efforts to maintain the balance between cost, time, and quality
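
As an example of the quantitative measurement called out above, the sketch below computes two commonly used indicators, defect removal efficiency and test-case pass rate, from hypothetical counts; the actual KPI set and thresholds would come from the project's own quality goals.

```python
def defect_removal_efficiency(defects_found_in_test, defects_found_in_field):
    """Share of all known defects that were caught before release."""
    total = defects_found_in_test + defects_found_in_field
    return defects_found_in_test / total if total else 1.0

def pass_rate(passed, executed):
    """Share of executed test cases that passed."""
    return passed / executed if executed else 0.0

# Hypothetical release snapshot
dre = defect_removal_efficiency(defects_found_in_test=188, defects_found_in_field=12)
rate = pass_rate(passed=1430, executed=1500)

print(f"Defect removal efficiency: {dre:.1%}")   # 94.0%
print(f"Test pass rate:            {rate:.1%}")  # 95.3%
```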

In line with the Automotive SPICE standard, validation comprises various test phases that evaluate the application in-vehicle, at ECU (software-hardware) integration level, and at software level. Each test phase has a focused coverage, with evaluation and compliance criteria for the various work products as indicated in the table below:

The test specifications should include all applicable evaluations and compliance criteria, covering test objectives such as functionality, reliability, robustness, usability, safety, and security. For technologies like connected vehicles, security testing deserves special focus on both on-board and off-board components. Fig. 6 illustrates the various layers of security checks to be performed in a connected vehicle.

Fig. 6: Illustration of automotive system security layers
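
One concrete check from the on-board/off-board communication layer can be sketched with Python's standard library; key handling and message framing here are simplified assumptions, not a specific vehicle protocol. The off-board backend appends an HMAC tag to each message, and the on-board component rejects anything whose tag does not verify.

```python
import hmac
import hashlib

SHARED_KEY = b"demo-key-provisioned-out-of-band"   # placeholder; real keys come from a secure store

def sign(payload: bytes) -> bytes:
    """Off-board side: append an HMAC-SHA256 tag to the payload."""
    tag = hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()
    return payload + tag

def verify(message: bytes) -> bytes:
    """On-board side: strip and check the 32-byte tag; reject on mismatch."""
    payload, tag = message[:-32], message[-32:]
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("message authentication failed - discard")
    return payload

msg = sign(b'{"cmd": "unlock_doors"}')
print(verify(msg))          # authenticated payload
```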

Thus, verification & validation based on the aforementioned principles drive efficiency in the testing principles of an organization, thereby ensuring high-quality products and software applications. The validation teams should aim at investing in appropriate preventive measures. Optimize testing efforts through robust frameworks and infrastructure. Focus at being innovative, flexible, and adaptable to the changing technological needs. Strive to render state-of-the-art services to meet high-quality standards across the organization’s software development and to maximize value creation.

Fig. 7: Conclusion

Author’s Profile:

Kalyan Boggarapu is a Team Manager at Elektrobit India Pvt. Ltd. He heads the Validation Centre of Competence (India) and has more than 17 years of experience in quality assurance service lines. A TMMi-certified professional with profound expertise in improving organizations' quality maturity leveraging TPI and CoQ models, he is a recipient of the Great Manager Award 2019 in the "Enhancing People Performance" category presented by People Business. He has extensive global exposure across multiple industries, including automotive, insurance, financial services, and retail.

Published in Telematics Wire
