This is Part 2 of the Impeccable Engineering series; follow the link to read Part 1.
In my previous blog post I expatiated on Impeccable Engineering. In a nutshell, its main tenet was that you should focus first and foremost on the quality of your own work as the ticket to contentment and, as a corollary, to great products. Now I’d like to expand on the topic and talk about a particularly insidious problem we all face periodically throughout our working lives: how do you ensure good quality in the face of the unknown? If you simply don’t know all you need to know, how are you supposed to get your work done at all, let alone done well? At worst, this leads to analysis paralysis – a vicious cycle of generating problems that may not need to be solved at all. Your work can grind to an unnecessary halt, or the end product can become overly complex.
The key to attaining impeccability when surrounded by more questions than answers is simple: containment. Fill the gaps in your knowledge by making assumptions and documenting them. If possible, document them formally; if not, make notes for yourself – even mental ones – so you can revisit and adjust them later if needed. This way of thinking applies to anything: architecture design, API design, solving scientific problems – even building a house. As the future is unknowable, we must make assumptions about who might live in the house and how they will use it, then design accordingly. Similarly, when building a complex system, we must reduce the problem space by means of containment – that is, by making assumptions.
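To illustrate what "document them, even informally" could look like in practice, here is a minimal sketch of a hypothetical assumption log. The class and field names are my own invention – they are not part of any standard or of the product described in this post – but they capture the essentials: what you assumed, why, and whether it still holds.

```python
from dataclasses import dataclass, field
from enum import Enum


class Status(Enum):
    OPEN = "open"                # made, not yet validated
    CONFIRMED = "confirmed"      # evidence supports it
    INVALIDATED = "invalidated"  # proven wrong; must be revisited


@dataclass
class Assumption:
    """One documented gap-filler, recorded so it can be revisited later."""
    statement: str
    rationale: str
    status: Status = Status.OPEN


@dataclass
class AssumptionLog:
    entries: list = field(default_factory=list)

    def record(self, statement: str, rationale: str) -> Assumption:
        a = Assumption(statement, rationale)
        self.entries.append(a)
        return a

    def needs_revisit(self) -> list:
        """Everything proven wrong – the 'trace your way back' list."""
        return [a for a in self.entries if a.status is Status.INVALIDATED]


log = AssumptionLog()
seats = log.record(
    "Every target vehicle has powered seats we should control",
    "All vehicles seen so far include seats",
)
# Later analysis invalidates the assumption, flagging it for rework.
seats.status = Status.INVALIDATED
print([a.statement for a in log.needs_revisit()])
```

The point is not the code but the discipline: once an assumption is written down with its rationale, invalidating it later is a routine query, not an archaeology project.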
To make all of this more concrete, I’ll pull an example from our work at Link Motion. Nowhere is the fact that we build upon assumptions as evident as in the development of an ISO 26262 qualified functional safety system. Our carputer, Motion T, is designed to be a generic computer. It can be integrated into any kind of vehicle: old or new, with a human driver or with an AI, with wheels or without. While the spectrum of possibilities is wide, the common denominator across all of them is safety.
Building a safe system requires a thorough analysis of possible hazards and accompanying risks – a hazard being like a lamp post and a risk being that you might walk into one while playing with your smartphone. These risks must be mitigated, usually by technical means, to reduce their probability to an acceptable level. To understand the hazards and risks pertinent to our system, we need to have detailed knowledge of the environment it operates in: the usage patterns, the inputs and the outputs. But how can we build a safe system, if we cannot know what features the customer may want and what the end product will look like? Luckily, it turns out that the standard itself offers a solution: building a Safety Element out of Context (SEooC).
As the name implies, an SEooC is a system designed to be used in an automotive context, but not designed for any specific vehicle. The key to building an SEooC lies in making and documenting assumptions. In ISO 26262 this process starts with the Item Definition, the first step in the safety life-cycle. The Item Definition is essentially a statement of the purpose, functionality and boundaries of the system to which the standard is applied. As our product is a generic computer capable of supporting virtually any feature, with only our imagination as the limit, we must make hard assumptions about the kinds of environments we want our system to operate in. This is probably the most difficult step in the process and the one with the longest-lasting implications. Nevertheless, to make any progress at all, we must make bold assumptions and document them carefully.
In our case, one such Item was Seat Control. Initially we thought this was a sure bet, as every car we had seen so far included seats, and some mechanism is needed to control them. What better way to do that than through a sleek user interface on the IVI screen? Optimistically, we proceeded to add the Item to our initial product specification. Our optimism did not last long.
After conducting the first round of Hazard and Risk Analysis (HARA), we concluded that supporting that particular feature was technically infeasible for us because of its safety implications. Unintentional control of the seats can, after all, lead to dangerous situations for the driver: you really don’t want your seat to start seesawing about while you are doing 150 on the Autobahn. Mitigating that hazard would have led to the most stringent level of safety requirements, ASIL D. Hence, we took a few steps back, revised our assumptions and, naturally, documented everything carefully. With updated assumptions, and now a simpler system, we were again able to make progress.
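For readers unfamiliar with how a HARA lands on a level like ASIL D: ISO 26262-3 classifies each hazardous event by Severity (S1–S3), Exposure (E1–E4) and Controllability (C1–C3), and looks the ASIL up in a table. That table happens to follow a simple additive pattern, which the sketch below uses as a mnemonic – this is an approximation for illustration, not the normative text of the standard (which also includes class 0 in each dimension, mapping straight to QM):

```python
def asil(s: int, e: int, c: int) -> str:
    """Approximate ISO 26262-3 ASIL determination (mnemonic form).

    s: severity class (1-3), e: exposure class (1-4),
    c: controllability class (1-3). The official lookup table
    follows this additive pattern for these class ranges.
    """
    if not (1 <= s <= 3 and 1 <= e <= 4 and 1 <= c <= 3):
        raise ValueError("classification out of range")
    total = s + e + c
    return {10: "ASIL D", 9: "ASIL C", 8: "ASIL B", 7: "ASIL A"}.get(total, "QM")


# A seat seesawing at highway speed: potentially life-threatening (S3),
# driving is a high-exposure situation (E4), and the hazard is hard for
# the driver to control (C3) -> the most stringent level.
print(asil(3, 4, 3))  # -> ASIL D
```

Seen this way, the cost of the wrong assumption is obvious: one plausible-looking feature pushed a whole chain of requirements to the top of the table.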
As the previous example shows, making the wrong assumptions can unnecessarily widen the problem space. In our case the wrong assumption would have led to higher safety requirements than we were comfortable with. Often, especially when building your first Minimum Viable Products (MVPs), making bold assumptions is necessary to make headway. Just make sure to document them and understand the restrictions they impose on your product. Do not be afraid of making wrong assumptions – keep calm and revisit them when needed. In functional safety this may mean redoing parts of the HARA and reworking the safety concept. In software design, APIs may need to be adjusted. In house building, you may have to erase a wall in the blueprints to make one room out of two. With a systematic approach you will eventually find your way through the dark, and you will be able to trace your way back whenever needed.
Your assumptions do not need to be impeccable. Just be sure to make them (and know when you did).