|Note: Test automation covers several distinct areas of testing a product and can entail anything where a test is completely or partially automated. The reasons for automating testing include increasing testing throughput, minimizing human error, and performing complex or cumbersome tests or measurements that are not feasible in manual testing. This article focuses on functional and regression testing, where the main target is to respond to the needs of modern software-driven product development by increasing testing throughput. The target here is not to replace manual testing completely, only the parts with the most to gain.|
Two disruptions are ongoing in the modern automotive world: the ever-increasing role of software, and the need to get product features out faster and closer to final integration. Software is understood as the enabler of rapid feature development and updates, without the need to change the actual car hardware or even bring the car in for service. This separates the software development and release timeline from the rest of the car's development and integration timelines. So software is more complex, is expected to deliver new features quickly from planning to release, and must work on an already designed hardware platform ("the car"). Still, as the end product sold and its customer experience result from car hardware and software working together, the expected rapid software development cycle cannot cut corners in testing compared to the more hardware-oriented working modes.
Software-Driven Product Development
Software development in other product areas has used Agile methods, with varying degrees of success, to shorten long-lead-time development models. The main driver has been the inevitable changes identified during development, and the cost of setting things in stone early in the project. The later the plans need to be fixed, the better the information planners have about the final product they want to produce. In order to replicate the success seen in other software development areas in speeding up the feature development cycle, the automotive industry is looking into ways of integrating Agile methods into automotive development.
There is much to like in software-driven product development: it promises a faster product cycle with low-effort variants, faster feature development, later "feature freeze" deadlines, and easy after-sales updates and maintenance without servicing the car. However, unlike consumer electronics, for example, the automotive field is governed by a much larger body of legal and quality-related standards. Integrating the same methods will require considerable adaptation, both in the methods and in the mindset of the people working on product development. Traditional testing models are based purely on the existing legal and quality standard framework of the automotive industry. As the new development models are adapted to fit the framework, but not all the traditions, new testing models must follow.
The primary conflict between traditional thinking in testing and the new development models is having the testable product defined early in the project. If only the product requirements were defined early, product development and test automation development could proceed their own separate ways. Both would have their targets set, and both would aim to implement full requirements coverage of code and tests on (roughly) the same schedule. Unfortunately, that is no longer possible. Both product development and test development must instead follow a short-term plan with only a high-level picture of the final target, to gain all the benefits of Agile-style development. In such a project, change is the new norm.
|Lesson learned: Automated test development for Agile-style projects requires close co-operation and communication between product development teams and test development, going as far as having functional automated tests implemented by the product development teams themselves. Having separate teams handle product software and test development meant that Agile changes to the product only became visible once tests failed. Eventually, new test development ground to a halt because all of the test team's effort was spent just tracking and reworking tests after every change to the product software. Testing cannot be ready in time to test the software if it constantly has to play catch-up.|
Small changes in product development constantly breaking automated tests highlight another aspect of test automation under Agile-style development models. Traditional development emphasizes interface control in software development to make sure changes do not cause uncontrolled effects outside the modified components. In Agile-style development, interface changes, like all other changes, become more rapid and common. Control is still important, but its role is more communicative and less blocking.
Automated functional tests have several possible interfacing levels, for example, in the UI. Tests can interface with any combination of the actual display and control hardware (test robots), UI elements programmatically (cuTeDriver, Squish, Selenium), or a lower-level API. The higher the interaction level, the more completely the test system exercises the full product stack, but also the more fragile the test system and tests are to any change in the product UI. The more variants of the same product family are in play, the harder it becomes to use the higher layers. When working with an unstable UI – colour themes, small changes requested by the customer, unfinished features and so on – test development at the highest level suffers. Capable manual testers can work around such minor changes, but test automation very rarely can. Even when product development knows the limitations of test automation, trying to work with unstable test interfaces incurs considerable risk and cost to the project, both in testing and in development.
|Lesson learned: Test automation lives or dies with the stability of the test interfaces, so finding the right balance for the test interface level is crucial. It should be as high as possible, to cover as big a part of the product as possible, but not so high that test automation cannot stay in sync with product development, or that it stifles product development's ability to respond rapidly to change. This interface decision needs to be part of the product architecture and design work, so that suitable interfaces for testing are provided. This is, again, something that must be considered whenever a change's impact is analyzed.|
Automated testing is now practically intertwined with the product architecture, design and implementation. Having a test development team separated from product development is not practical. How the work is organised is less important: tests can be implemented just as well by dedicated test team members assigned to work closely with product teams, by dedicated test developers inside product teams, or by the product developers themselves. The critically important part is planning and implementing tests and product code together.
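In practice, "together" can be as simple as the functional test landing in the same change set as the product code it covers, so an interface or behaviour change always updates both sides at once. A minimal, purely illustrative sketch (the function and its thresholds are invented for the example):

```python
def speed_limit_warning(current_kmh, limit_kmh, tolerance_kmh=3):
    """Product code: warn when speed exceeds the limit plus a tolerance."""
    return current_kmh > limit_kmh + tolerance_kmh


def test_speed_limit_warning():
    """Functional test written in the same commit as the code above,
    so a change to the tolerance logic cannot silently break it."""
    assert not speed_limit_warning(50, 50)   # at the limit
    assert not speed_limit_warning(52, 50)   # within tolerance
    assert speed_limit_warning(54, 50)       # beyond tolerance


test_speed_limit_warning()
print("ok")
```

Whether such tests are run by a CI pipeline or a dedicated test rig, the key property is that no product change can merge without its tests changing with it.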