Converting Test Data into Information
Throughout its execution, an electronic system test bench records a large amount of data: test steps, interface and equipment logs, measurements and acceptance criteria, tester feedback, and more. Logging all of this data for future consultation is mandatory.
You need to know which units passed the test(s) and can be delivered. You need to quickly understand their failures to troubleshoot and debug them properly. And when units come back through a return merchandise authorization, you need to revisit their results to confirm they were tested correctly and to determine whether the production process broke down somewhere.
The reality is that this data must remain permanently accessible.
A tool that receives and displays this data is a pillar of every production department. Storing CSV and text files in a network folder hierarchy is a start, but once the operations department asks for key performance indicators extracted from that data, a more ergonomic tool is needed.
Why Not Transform your Data into Information?
This leads to the crux of the issue. Data is raw, and useless at a higher level, if it is not processed and transformed into actionable, insightful information. This is what the Spintop Suite aims to do with the results recorded by its connected test benches.
- It will extract key performance indicators, such as yields and test times, to feed back into the production process model and improve efficiency on subsequent runs;
- It will allow the engineering and production teams to run comparisons and statistical analyses on the recorded measurements and KPIs to determine whether the design, the manufacturing process and the testing are stable;
- It will allow Pareto analyses of the test failures to better target problems in the design, the manufacturing process or the test bench.
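As a rough illustration of the yield and Pareto computations described above, here is a minimal sketch in plain Python. The record fields, serial numbers and step names are hypothetical placeholders, not the actual Spintop Suite data model:

```python
from collections import Counter

# Hypothetical test records; the field names are illustrative only.
records = [
    {"unit": "SN001", "outcome": "PASS", "failed_step": None},
    {"unit": "SN002", "outcome": "FAIL", "failed_step": "voltage_rail_3v3"},
    {"unit": "SN003", "outcome": "PASS", "failed_step": None},
    {"unit": "SN004", "outcome": "FAIL", "failed_step": "rf_power"},
    {"unit": "SN005", "outcome": "FAIL", "failed_step": "voltage_rail_3v3"},
]

def yield_and_pareto(records):
    """Compute first-pass yield and a Pareto table of failure causes."""
    total = len(records)
    passes = sum(1 for r in records if r["outcome"] == "PASS")
    fpy = passes / total
    fail_counts = Counter(r["failed_step"] for r in records if r["outcome"] == "FAIL")
    # Sort causes by descending count and accumulate their share of all failures.
    total_fails = sum(fail_counts.values())
    pareto, cumulative = [], 0
    for step, count in fail_counts.most_common():
        cumulative += count
        pareto.append((step, count, cumulative / total_fails))
    return fpy, pareto

fpy, pareto = yield_and_pareto(records)
print(f"First-pass yield: {fpy:.0%}")  # First-pass yield: 40%
for step, count, cum in pareto:
    print(f"{step}: {count} failures ({cum:.0%} cumulative)")
```

The Pareto table immediately shows which failing step to attack first: the cause with the highest count sits at the top with its cumulative share of all failures.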
This is the information typically expected from test results. The operations manager will request KPIs and analyses, and the production engineer will extract them with the tools at their disposal, whether those are specialized tools, Microsoft Excel or the good old pen and notebook.
A Depth of Untapped Information
Nonetheless, test data holds several kinds of informational wealth that are easy to miss without the right tools. For instance, suppose a test bench has tested and produced 100 units without failures: a hundred full PASS. The analysis of that production batch would be very quick to do. However, the test data could still hide problems that neither the test bench developer nor the system designer anticipated, problems that an in-depth analysis of the data could detect. Examples include:
- Log lines appearing in only one test, potentially indicating a manufacturing or design problem.
- Differences in test case execution time (e.g. a test bench left idle by the tester).
- Differences in the actual test cases executed, potentially revealing a skipped test that could hide a defect (e.g. a test voluntarily skipped by a tester, or a bad test bench update leaving a test out of the test plan).
- A measurement isolated from the rest of the production batch, but still within the acceptance criteria, potentially revealing errors in the criteria definition.
- And more
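The in-limits-but-isolated measurement case can be sketched with a robust outlier check. This is only an illustration of the idea, not the Spintop Suite implementation; the voltages, limits and the modified z-score threshold of 3.5 are assumed values:

```python
import statistics

def flag_isolated_measurements(values, limits, threshold=3.5):
    """Flag measurements that are inside their acceptance limits but far
    from the rest of the batch, using the modified z-score
    (median/MAD based, so the outlier itself barely skews the statistic)."""
    low, high = limits
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values)
    flags = []
    for i, v in enumerate(values):
        if not (low <= v <= high):
            continue  # hard failures are already caught by the criteria
        score = 0.6745 * abs(v - med) / mad if mad else 0.0
        if score > threshold:
            flags.append((i, v, round(score, 1)))
    return flags

# A 3.3 V rail measured across a batch: the last unit reads 3.45 V,
# still inside the assumed 3.0-3.6 V criteria, yet isolated from its peers.
batch = [3.29, 3.31, 3.30, 3.28, 3.32, 3.30, 3.29, 3.31, 3.45]
print(flag_isolated_measurements(batch, limits=(3.0, 3.6)))
```

A median/MAD score is used instead of a plain mean/standard-deviation z-score because, in a small batch, a single outlier inflates the standard deviation enough to hide itself.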
Our aim is, through A.I. post-processing, to crunch all of the test data in real time as each new test result is received, and to raise warning flags when anomalies are detected, provided, of course, that sufficient data is available for the analysis.
Other examples of planned analyses are:
- Cross-referencing failures across test benches and systems to pinpoint problematic ICs in their designs.
- Tracing system test failures back to their sub-assemblies and highlighting anomalies where sub-assemblies that PASS end up in systems that FAIL.
- Categorizing tests based on all the available data, speeding up FAIL debugging when a very similar test occurred at some point.
- Comparing tests with each other to rapidly identify key differences between two test runs.
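The run-to-run comparison in the last point can be sketched as a simple diff over executed steps and their durations. The step names, durations and the two-times duration ratio are illustrative assumptions, not the planned Spintop Suite algorithm:

```python
def diff_test_runs(run_a, run_b, time_ratio=2.0):
    """Compare two test runs (mapping of step name -> duration in seconds)
    and report steps missing from one run or with diverging durations."""
    report = []
    for step in sorted(set(run_a) | set(run_b)):
        a, b = run_a.get(step), run_b.get(step)
        if a is None or b is None:
            report.append(f"{step}: only in run {'B' if a is None else 'A'}")
        elif max(a, b) / min(a, b) > time_ratio:
            report.append(f"{step}: duration {a:.1f}s vs {b:.1f}s")
    return report

# Hypothetical runs: run B skipped 'rf_power' and idled during 'boot'.
run_a = {"boot": 5.0, "voltage_rail_3v3": 1.2, "rf_power": 20.4}
run_b = {"boot": 62.0, "voltage_rail_3v3": 1.1}
for line in diff_test_runs(run_a, run_b):
    print(line)
# boot: duration 5.0s vs 62.0s
# rf_power: only in run A
```

Even this naive diff surfaces two of the anomalies listed earlier: a skipped test case and a test bench left idle mid-run.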
Would you like to use our software suite to perform analysis on your test results? Get started with our test executor, spintop-openhtf.