Test Automation System Evaluation and Evolution – A Practical Case

Part 3 of the blog series on Test Automation

Read Part 1 and Part 2

Prologue

This is the third part of my blog series on test automation and the lessons we’ve learned while building a working test automation system for our product. In the previous two parts I’ve generalized the lessons on purpose, so that you could easily apply them to your needs. This third part takes a different, more practical approach: What really happened in Link Motion’s case? I might still slip in a lesson or two, though.

To Make or to Buy – Are Those Actually Binary Choices in Test Automation?

We started with rose-tinted glasses and a generous sprinkling of naïve ideals about the world of testing tools. We already had significant experience applying certain automated testing tools to different fields before transitioning to our own automotive product development. Because of that experience we were confident in our ability to make a proper “make or buy” evaluation. Our evaluation of the tools we knew and used told us that covering all the features and interfaces in our automotive product would require a lot of effort, both in development and in maintenance, and that was effort we were unlikely to be able to buy or subcontract.

Our first option was to look for a “buy” solution: one that would be immediately ready for use, maintained by the vendor, and cover all or most of our features. We immediately ran into issues, with every tool we evaluated having critical limitations in one way or another. We looked at a lot of different tools and certain commonalities started to pop up. I mentioned earlier in the blog series that each automated testing tool usually focuses on solving one part of the puzzle. It can be a specific test environment or a specific part of the whole test automation setup, or both. There are tools to test UI applications in software or with robots and hardware testers, for example. Then there are tools to manage the test process, where you can plug in almost anything to do the actual tests. But all of these require you to do the hard work of adapting and integrating each piece of the puzzle to form your own complete test automation solution, and then maintaining all those adaptations and integrations.

We needed a setup that could combine UI application testing with the computing interfaces connecting the system to the rest of the world, like Bluetooth, USB, Ethernet and WiFi, and the automotive interfaces connecting the system to other parts of the car, like CAN and audio I/O. We also needed something to manage the test runs and devices, and to work with our project tools. It soon became clear that our puzzle was going to have a lot of pieces, and nobody else was going to take responsibility for setting up and maintaining that puzzle. The “buy” option became “buy and then make” instead, and that tilted the scale a lot.

In the end, the decision was easy. We started from the Open Source test framework we already knew well, and began planning how to integrate it with our other tools. Knowing the core tool inside and out has helped us decide which areas of testing to target first for automation. Having no closed source components at the core of the system has also given us confidence in our plans: some parts are clearly too much work for the expected gains, but there are no barricades and brick walls waiting for us around the corner. Our “make or buy” decisions moved down to the individual components and to integrating them into the existing system.

Evolution of Our Test System

The main criterion in choosing what to automate, and when, has always been feasibility and the amount of work on one side, and the gains compared to manual testing on the other. Our core framework focused on software-based UI testing over Ethernet IP networking. Any features integrated into the UI that did not require separate control or analysis by the test system were immediately at our disposal. Some of these included USB media support for playback tests with known test media, and FM and AM radio with known test frequencies.

Once we were able to run tests on our device, the work branched into several parallel tracks. We needed to start writing tests with the coverage we had, and to increase that coverage with new testable features, such as WiFi. We also needed to build a test controller, because – let’s be frank – a test automation setup loses much of its glory if it can’t handle the automation part. So we started building a controller that would keep an eye on any new device images arriving, assign devices for testing, flash the images, run the tests and create a report of the results: a simple controller at first, expanded later. We also recognized that the most important interface in an automotive device is the CAN bus, so we started developing our own CAN simulator, which would work seamlessly with our test setup and framework. All of these parallel development tracks are still active, and there’s no end in sight for any of them.
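To illustrate what such a controller does, here is a minimal sketch of a polling loop in Python. The helper names (image store, device pool, test runner, reporter) are hypothetical placeholders for illustration, not our actual implementation:

```python
import time

POLL_INTERVAL_S = 60  # how often to check for newly arrived device images

def controller_loop(image_store, device_pool, test_runner, reporter):
    """Watch for new images, flash them to free devices, run tests, report results."""
    while True:
        for image in image_store.new_images():            # hypothetical API
            device = device_pool.acquire_free_device()     # pick an idle test device
            if device is None:
                break                                      # no devices free, try again later
            try:
                device.flash(image)                        # install the image under test
                results = test_runner.run_all(device)      # execute the automated test set
                reporter.publish(image, results)           # report results per image
            finally:
                device_pool.release(device)
        time.sleep(POLL_INTERVAL_S)
```

The real controller naturally has to handle failures, queuing and device health on top of this, but the basic shape is a loop of exactly these steps.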

Over time we’ve been able to re-use our test automation setup for more than just functional testing. We use the same features with minimal changes for stability and performance testing. We run repeated boot tests and power state tests with it, and have created a prototype system for making power measurements while running automated tests. None of these performance measurements replace the measurements done with calibrated lab tools, but we can spot any anomalies early by running the same tests for all images.
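The anomaly spotting itself can be as simple as comparing each new image’s measurement against a baseline built from previous images. A toy sketch of the idea (the numbers and helper name are made up for illustration):

```python
from statistics import mean, stdev

def is_anomaly(new_value, history, sigma=3.0):
    """Flag a measurement that clearly deviates from earlier images' results."""
    if len(history) < 5:                      # not enough data yet to judge
        return False
    mu, sd = mean(history), stdev(history)
    return abs(new_value - mu) > sigma * max(sd, 1e-9)

# Example: boot times (seconds) measured for previous images vs. the newest image
previous_boot_times = [14.2, 14.5, 13.9, 14.1, 14.3, 14.0]
print(is_anomaly(19.8, previous_boot_times))  # True -> worth a closer look
```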

The latest additions to the test coverage are Bluetooth phone support, audio testing and computer vision based testing. Bluetooth phone support is done by interfacing Android adb with the core test framework and implementing per-phone-model helpers for the tests. It works with our reference phones and verifies continuously that the platform works as expected. Product deliveries usually define a large set of phones for delivery testing, and because each phone model needs its own helper implementation, those are still more efficient to test manually.
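As a rough idea of what a per-phone-model helper looks like, here is a hedged sketch of driving a reference phone through adb from Python. The class, method names and exact command sequences are illustrative assumptions; real phone models often need their own variations, which is exactly why the helpers are per model:

```python
import subprocess

def adb(serial, *args):
    """Run an adb command against a specific phone and return its output."""
    cmd = ["adb", "-s", serial] + list(args)
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

class ReferencePhone:
    """Hypothetical per-model helper: same interface, model-specific details inside."""
    def __init__(self, serial):
        self.serial = serial

    def enable_bluetooth(self):
        # 'svc bluetooth enable' works on many Android versions; other models
        # may need a different sequence, handled by their own helper class.
        adb(self.serial, "shell", "svc", "bluetooth", "enable")

    def dial(self, number):
        # Start a call from the phone side so the head unit's hands-free profile
        # can be verified by the test framework.
        adb(self.serial, "shell", "am", "start",
            "-a", "android.intent.action.CALL", "-d", f"tel:{number}")
```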

Our audio test setup can simultaneously capture and analyse four-channel line-out/speaker-out audio (easily expandable) using an external sound card. This gives us a lot of things to test for, such as unwanted noise, frequency shifting or crosstalk between channels. The system proved its worth the very first time we prototyped it: we found an MP3 decoder bug that created a frequency-shifted “ghost” of the main frequency, which would have been practically impossible to find by listening alone.
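To give a feel for the kind of analysis involved, here is a minimal sketch of spotting unexpected tones in one captured channel using an FFT with NumPy. The thresholds and function names are assumptions for illustration, not our production analysis:

```python
import numpy as np

def dominant_frequencies(samples, sample_rate, threshold_db=-40.0):
    """Return frequencies whose level is within threshold_db of the strongest peak.

    'samples' is a 1-D NumPy array of one captured channel.
    """
    windowed = samples * np.hanning(len(samples))          # reduce spectral leakage
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    db = 20 * np.log10(np.maximum(spectrum, 1e-12) / spectrum.max())
    return freqs[db > threshold_db]

def check_single_tone(samples, sample_rate, expected_hz, tol_hz=5.0):
    """Fail if anything besides the expected test tone shows up in the spectrum."""
    peaks = dominant_frequencies(samples, sample_rate)
    unexpected = [f for f in peaks if abs(f - expected_hz) > tol_hz and f > 20.0]
    return len(unexpected) == 0, unexpected
```

A frequency-shifted “ghost” of the test tone, like the one the MP3 decoder bug produced, shows up in `unexpected` even when it is far too quiet to notice by ear.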

Computer vision is something we had wanted to add for a long time, but finding the right setup took time. Our existing system could already test the UI comprehensively at the SW level; the part that was missing was the real display. It didn’t seem like that big of a gap, but we learned it was big enough for a bug to appear: at the SW level everything was hunky-dory, but the screen was pitch black. The only practical way to test for that seemed to be a computer vision system, so we built one. We took OpenCV and some webcams and integrated those into our core system as well.
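The pitch-black-screen case in particular boils down to a very small check once a webcam is pointed at the display. A rough sketch with OpenCV in Python (the camera index and brightness threshold are assumptions):

```python
import cv2

def screen_is_black(camera_index=0, brightness_threshold=10.0):
    """Grab one webcam frame of the display and check its mean brightness."""
    cap = cv2.VideoCapture(camera_index)
    ok, frame = cap.read()
    cap.release()
    if not ok:
        raise RuntimeError("Could not capture a frame from the webcam")
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return float(gray.mean()) < brightness_threshold
```

A check like this would have caught the bug where the software-level UI tests passed while the physical display stayed dark.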

Epilogue

Even now, the scope of what we could test is greater than the set of tests we can actually create in the short term. So we haven’t really suffered from selecting open source components as the core pieces of our test automation. I would instead argue that the scope of our opportunities is possible because of the choice we made. But that also highlights the constant pain point of automated testing: how easy is it to implement the tests themselves once the possibilities are there? This is something where we still need a lot of work. Whether the solution is easier tools, better training for the tools we have, or something else, is our next big challenge.

Markku Tamski has been connecting the dots from software and systems technology development and industry standardization through to product solutions for more than 20 years. During this time the work has led Markku to explore a wide variety of technology areas and roles in consumer-facing, software-driven products. In his current role he heads the Delivery Operations teams at Link Motion, including Quality Assurance, Software Builds and Releasing, giving him a most interesting viewpoint on the automotive industry's software development practices in transition. He holds an M.Sc. from Tampere University of Technology.