
Confidence and “Confidence”

Adam / September 27, 2017

In reliability engineering, confidence assigns a probabilistic value to the likelihood that a measurement taken from a sample represents the full population.  It is determined by measuring a sample and then using a selected statistical distribution table to translate the result into a likelihood.  Emotional confidence is how one feels about making a decision based on known information at a specific point in time.  Similar, but different.  It is important to connect them, and this is why.

Unless we are running the production-equivalent product in the actual use environment, or a perfectly simulated one, we are including assumptions.  This means that any data generated during product development carries a set of assumptions that make it valid.  These assumptions are based on arguments that the deviations from “reality” do not affect how the results project performance in real field usage.  The link between statistical confidence and emotional confidence is the phrase “assumptions based on arguments.”  This sounds a lot like legal jargon.  In a legal setting, the resulting decision for action is a conviction based on the best argument, not necessarily “the truth.”

It’s important to remember that our reliability statistical statements are rooted in some very subjective arguments as well.  It’s not too hard to end up with the confidence statement you were looking for from the only data you have available.

Here is an example.  If a product requires a 95% statistical confidence in a reliability goal before it is approved for release, the team needs to assess what resources are required to produce the necessary data.  In this example, it will require running 42 full products for 18 months with no failures to make that 95% confidence statement.  The product development schedule tells us that release is 3 months after the first full assemblies are ready.  The resource request from the reliability initiative just turned the reliability test program into the first production customer for a year and a half.  That’s unlikely to be approved.
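The sample-size math behind a statement like this is often the zero-failure “success-run” relation, C = 1 − Rⁿ, solved for the number of units n.  The sketch below is illustrative, not the post’s actual calculation; the demonstrated reliability of 0.93 over the full test duration is a hypothetical value chosen so the arithmetic lands on 42 units.

```python
import math

def success_run_sample_size(confidence: float, reliability: float) -> int:
    """Zero-failure (success-run) test plan: smallest n such that
    C <= 1 - R**n, i.e. n = ln(1 - C) / ln(R), rounded up."""
    return math.ceil(math.log(1.0 - confidence) / math.log(reliability))

# Hypothetical: demonstrating R = 0.93 over the test duration
# at 95% confidence takes 42 units run with no failures.
print(success_run_sample_size(0.95, 0.93))  # → 42
```

Note how sensitive the unit count is to the goal: pushing the demonstrated reliability toward 0.999 drives the zero-failure sample size into the thousands, which is why teams look for ways to compress or re-scope the test.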

Our new strategy is to complete a risk analysis and find which of the 37 major subassemblies carry the greatest technical risk.  If we can isolate the top three high-risk subassemblies and do compressed testing, we can generate the 95% confidence statement on their subgoals.  We can then state that we are confident the 34 untested subassemblies will meet their goals based on legacy data.  This is doable with only a slight delay in product release.

We identify three high-risk subassemblies through a DFMEA and find that to demonstrate their goals we need to run 84 of each for four months with no failures.  We get that approved.  In the second month of testing we find six unique failures.  We believe we have root-caused them accurately and can remove them from the dataset.  The fixes will be in place before the first production run.  We did it!

So our argument that we have 95% confidence in this new product is the following.

The product has a high-level reliability goal of 99.9%.  The product was broken down into 37 major subassemblies and structured as a reliability allocation model.  A reliability goal was derived for each subassembly.  Through risk analysis, three subassemblies were identified as high risk.  The three high-risk subassemblies were tested to demonstrate confidence in their allocation goals of 99.999% each.  Six unique issues were identified with the high-risk subassemblies.  The issues were root-caused.  Fixes will be implemented before production.
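The allocation step in this argument can be sketched with the simplest model: equal apportionment across a series system, where the system goal is the product of the subassembly goals.  The post doesn’t say which allocation method was used, so the equal split below is an assumption for illustration.

```python
def equal_allocation(system_goal: float, n_subassemblies: int) -> float:
    """Equal apportionment for a series system: each of the n
    subassemblies gets the n-th root of the system reliability goal."""
    return system_goal ** (1.0 / n_subassemblies)

per_sub = equal_allocation(0.999, 37)
print(f"{per_sub:.6f}")        # each subassembly needs ~0.999973
print(f"{per_sub ** 37:.4f}")  # the product recovers the 0.9990 goal
```

In practice the high-risk subassemblies would be given tighter goals (the post cites 99.999%) and legacy items looser ones, but the product across all 37 still has to meet the system goal.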

That argument logically holds up as a way to demonstrate the statistical confidence of a new product. We hit our goal and released the product with only a slight delay.

Let’s do something with this argument.  Change the word “product” to “bridge.”  The three high-risk subassemblies identified were a new type of truss fastener, a new alloy-hardening process, and a new cable crimp design.

Six technical issues were found when testing those three high-risk features.  They were root-caused, and the fixes, whose impact has not been demonstrated with any statistical confidence, will be in the final production bridges.  The engineers report a 95% confidence in the bridge reliability goal based on this process.

What would your confidence be if you were asked to be the first person to drive a car over that bridge?

We are all good at making arguments.  Exercise caution when the pressure is on to make big statements with limited resources.

-Adam
