The anatomy of an automated electronic system test bench

Automating the production testing of an electronic system necessitates an ecosystem of electronic, mechanical and software tools. Once the test strategy has been laid out, the DFT analysis completed and the test plan consolidated, it becomes the transfer to production engineer’s responsibility to choose, design, and develop these tools to create the test bench.

In this entry, we aim to detail the anatomy of an automated electronic system production test bench to better understand the scope of its design and development. We will review the different components and discuss the main objectives of each and how we expect them to help us create an efficient test bench. We can first start by stating our intention when designing and developing a test bench.

We aim to

  • Create a test bench that tests the device under test to the full extent of the intended coverage
  • In the least amount of time possible
  • For the lowest cost possible

In our test strategy entry, we already defined the test cost items and discussed them extensively. They were:

  • The cost of test execution labor
  • The cost of the bill of materials
  • The non-recurring cost of the design, development and deployment

All these items were evaluated and defined in the preliminary analysis phases, but can be drastically affected by design and development choices, good or bad. We shall therefore, for each test bench component, identify how to target the different cost items, as well as the time and coverage elements.

Anatomy

The above image shows the complete ecosystem of an automated electronic testbench.

The device under test (DUT) is installed in a mechanical test fixture by the tester. Various pieces of external test equipment are connected to the DUT and to the test PC to perform automated measurements or to be manipulated by the tester.

The tester logs on to the test PC and launches the test through the test engine, following the tester documentation. The test engine executes the test sequences, using DUT specific libraries to control and monitor the DUT and generic automation libraries to manage the testbench equipment. At the end of the test, the test records are uploaded to a local or remote storage.

Device Under Test

The device under test is the system or sub-system being tested on the testbench. Of course, in order to create an efficient test bench, an in-depth knowledge of the system is important. Its electronic schematic, software layers and mechanical interfaces must be understood completely.

The main responsibility and objective of the transfer to production engineer regarding the DUT is ensuring the R&D team delivers a highly testable design, as we have covered in our DFT blog post. The DFT analysis and feedback into the design should simplify the development of all peripheral or in-system test tools.

Test fixture

The test fixture holds the DUT during the test. It can be as simple as a table and a few cables or as complex as a bed of nails or a flying probe apparatus.

The main objective of the test fixture is to facilitate the interconnections between the DUT and the rest of the test ecosystem:

  • It should shorten the DUT installation time, and thus the total test time
  • It should alleviate the mechanical wear on the DUT connectors.
  • It should add or expand the DUT test interfaces, meaning that it provides access to test points inaccessible through the productized interfaces of the system, thus increasing its testability.

These attributes will lead to lower test execution and non-recurring development costs.

Fixture design

The most important thing when designing a test fixture is to leave room for changes in future revisions of the tested system and subsystem. Unless the tested system has to fit a particular form factor standard with defined, unalterable connector emplacements, it is likely that at some point in the system’s life cycle, a new hardware or mechanical revision will require a modification to the test fixture. This is compounded by the fact that the test fixture is usually designed around a revision A prototype, which is even more likely to change. Before investing in the manufacturing of an expensive test fixture, try to design it in a way that a new system revision will not render it obsolete, necessitating the design and manufacturing of a new one. The fixture needs to be able to evolve with the product.

Maintenance

Another good practice is to plan the maintenance of the test fixture. As it is designed to alleviate the mechanical wear on the DUT connectors, the test fixture will take the brunt of the fatigue brought on by continuous connections and disconnections. Without replacing key parts at the wear points on a pre-defined schedule, the connections will eventually fail, leading to inexplicable test failures (until the test bench developer is called in to debug them) and a possibly critical loss of time.

Tester

The tester is the most important part of the test ecosystem. Without him or her, no tests will be executed and no systems can be delivered.

The tester manipulates the DUT, installs it in the test fixture, starts the test execution and monitors its progress. He interacts with the DUT or test equipment during the test when requested and removes the DUT when the test ends.

Responsibility

The main responsibility of the transfer to production engineer regarding the tester is to provide him with proper training on how to operate the test bench. Additionally, if the tester has technical knowledge and skills, he can be taught the inner workings of the system in more detail, from the electronics, mechanics and software to how they interact together. This often creates a sense of purpose and understanding in the tester’s work, as well as giving him the tools to efficiently debug system failures, and sometimes even test bench failures, saving time and money.

Efficiency

A well-trained tester obviously increases the efficiency of the testing, reducing the number of tester errors and increasing the test yield. A knowledgeable and motivated tester also reduces the amount of support needed from the transfer to production engineer, as the causes of test failures are better understood and can readily be classified as DUT problems or test bench problems.

Listen to your testers

The tools used to transfer the training and knowledge to the tester are first and foremost the tester documentation, which we will discuss next. Open communication with the transfer to production engineer, ideally starting as early as the test bench design phase, is also necessary. The tester is the best resource to comment on the ergonomics of the test bench, on how to save time and on how to maintain his attention throughout the test to minimize errors. Listening carefully to his inputs will save time and money down the line.

Tester documentation

The tester documentation consists principally of the test execution procedure written for the tester. It should also include the following:

  • A test bench deployment procedure, explaining how to either duplicate the bench or move it to a new location
  • A test bench maintenance procedure, listing all maintenance and verifications to be made on the testbench
  • A troubleshooting section giving debugging advice on the typical failures encountered

The tester documentation allows the tester to perform the test the exact same way every time. It should help reduce the number of manipulation errors and thus increase the yield. Following the test procedure to the letter also ensures that the obtained test result reflects the true status of the tested unit.

Selfishly, the transfer to production engineer’s objective when producing test documentation is to reduce the amount of support he will have to give to the tester. Well-made, complete test documentation, coupled with a well-trained and motivated tester, should ensure most problems can be dealt with internally by the production team.

Good tester documentation should always be complete and up to date. As we know, writing documentation is too often the last part of a project to be performed. Its scope is then reduced and its quality left doubtful. Moreover, when a modification is made to a test bench, more often than not the test procedure does not follow. The modification is made rapidly to keep the production line rolling, and the procedure is never updated.

To solve these issues, we suggest using automatically generated test documentation, produced directly from the test plans and the test sequence code. When the code is updated, a new procedure is generated to match the newly deployed test bench.

Test PC

The test PC is the center of the test ecosystem. It interconnects everything in the testbench and allows its remote control and automation.

Depending on the scope of the test bench and the cost of the equipment in it, the test PC can either be the main expense or a rather insignificant portion of it. Choosing the right test PC is a matter of measuring the tradeoffs between needed features and price. A good test PC should have the following traits.

Be easily replaceable and cloneable

A PC can fail at any time. A failure of its power supply, storage or ports will leave the test bench inoperable and the production line stopped. In such a case, it is very important to be able to replace the PC and redeploy the test bench quickly to ensure continued production. Select a PC model which can be procured easily and tools which can be redeployed without too much trouble.

Have enough ports

Simply put, the PC must be able to interconnect the test equipment and the test fixture without relying too much on expanders and hubs. These days, equipment and fixtures mostly use USB and Ethernet connections. The test PC should have enough of each to support its whole ecosystem.

Have enough RAM and processing power

The execution of the test and the management of the test tools can often be processing intensive. It is important for the PC to be able to support the peak load of the test bench with enough headroom left to keep updating the user interface and other basic functionalities of the PC (for example, a PDF viewer for the tester documentation). Using an underperforming PC can lead to run-time errors on the test bench, resulting in a decreased yield, unnecessary retests, reboots and a general headache for the tester.

Test equipment

The test equipment comprises all the active tools, automated or manually operated, that allow taking measurements or actuating sensors on the system. Typical equipment in electronic system testbenches includes:

  • Power supplying equipment, such as voltage sources and relay arrays.
  • Signal generating equipment, such as, you guessed it, signal generators.
  • Signal measuring equipment, such as multimeters, oscilloscopes and spectrum analyzers.

In most test benches, when it is necessary, test equipment is the biggest test BOM cost driver. However, well-designed equipment can greatly help the test developer deliver his test bench sooner and at a lower development cost. It can also accelerate the test execution and reduce active tester time, thus lowering the test execution cost. Therefore, selecting the right test equipment is one of the most important choices made in the test bench design phase.

A good piece of equipment needs to be:

  • calibrated,
  • fast,
  • cheap,
  • built on industry standards,
  • upgradeable with specialized libraries.

Calibrated

Electronic tests often need to be very accurate. Using uncalibrated equipment will eventually lead to delivering units that do not meet the accuracy requirements, or to having good units fail the tests. Follow the calibration schedule recommended by the equipment manufacturers to ensure the quality of your deliveries.

Fast

A main objective of automation is to reduce the test time, lowering the test cost and increasing the unit throughput rate. As most of the expected time gain comes from the remote operation of the test equipment, having fast and responsive tools to work with increases these gains. Therefore, when choosing between multiple equipment models, make sure to take the command execution and measurement speed into account in your decision.

Cheap

*Or, phrased less pejoratively: as inexpensive as possible while meeting the measurement or generation requirements.*

It is easy to be impressed by the features and completeness of a certain equipment model, and to overpay as a consequence. The technical requirements leading to the selection of a particular piece of equipment should be carefully laid out. These include the accuracy of signal generation or measurement, the ease of use during the test, the ease of automation (development time), etc. All options fitting the requirements should be considered. For example, it is not necessary to use a $20,000 signal generator with all the bells and whistles when a $600 USB-controlled generator meets the signal requirements.

Using industry standards

The automation of test equipment is easier when it uses industry standards such as the VISA and SCPI protocols. Equipment manufacturers often provide ready-to-use remote-control libraries or APIs built on these standards, along with full documentation on how to use them.
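To illustrate, here is a short sketch using the third-party PyVISA library. The resource address and the measurement command depend on the actual instrument; `*IDN?` is the standard SCPI identification query, whose response is a comma-separated manufacturer, model, serial and firmware string.

```python
# Sketch of SCPI-over-VISA automation. The address and the MEAS command
# are illustrative; check your instrument's programming manual.
def parse_idn(response):
    """Split a standard SCPI *IDN? response into its four fields."""
    manufacturer, model, serial, firmware = response.strip().split(",")
    return {"manufacturer": manufacturer, "model": model,
            "serial": serial, "firmware": firmware}

def measure_dc_voltage(address):
    """Query a DC voltage from a SCPI instrument (requires pyvisa)."""
    import pyvisa  # third-party: pip install pyvisa
    rm = pyvisa.ResourceManager()
    instrument = rm.open_resource(address)
    print(parse_idn(instrument.query("*IDN?")))   # identify the instrument
    return float(instrument.query("MEAS:VOLT:DC?"))
```

Because these commands follow the SCPI grammar, swapping one multimeter model for another usually requires only minor changes to the command strings rather than a new driver.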

Be upgradeable with specialized libraries

Test equipment can also provide advanced specialized libraries, for example for the measurement and generation of signals of a specific telecommunications standard. Using such libraries can accelerate the development of the test bench automation, as these features no longer need to be developed in-house. These libraries are often expensive, but they are usually well worth the cost compared to the effort necessary to develop them from scratch.

Test Framework Software

The test framework, or test executor, is the piece of software running on the test PC which executes the tests, one at a time. It is the heart of the automation, interconnecting the DUT and the remotely controlled test equipment as defined by the test sequences.

The use of a test executor aims to reduce the testbench software development time by providing features that do not need to be reimplemented by the developer.

Features

  • It handles the deployment of the test bench software on the test PC.
  • It supplies a graphical user interface, which allows the tester to configure, start, monitor, and terminate the test. The interface also manages the actions necessary to be performed by the tester during the test, as well as possible feedback entries.
  • It loads and executes test sequences which, as we will see below, implement the test logic.
  • It manages the evaluation of the test criteria, verifying measurements against the expected results.
  • It captures execution errors, exposing the underlying problems in the tested device.
  • It adds up the individual test results into a final consolidated result and produces standard result files.
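The criteria evaluation and consolidation steps above can be sketched in a few lines. The limit values, units and test names here are illustrative, not from any particular test plan.

```python
# Sketch of criteria evaluation and result consolidation.
# Limit values and test names are illustrative.
from dataclasses import dataclass

@dataclass
class Limit:
    name: str
    low: float
    high: float
    units: str

def evaluate(measurements, limits):
    """Verify each measurement against its expected limits."""
    return {lim.name: lim.low <= measurements[lim.name] <= lim.high
            for lim in limits}

def consolidate(results):
    """The unit passes only if every individual criterion passed."""
    return all(results.values())
```

A real executor adds bookkeeping around this core: timestamps, measured values stored next to their limits, and standard result files, but the pass/fail logic stays this simple.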

To run the test, the test executor relies on the following libraries and sequences.

Generic Automation libraries

The generic automation libraries allow the management of standard interfaces to the DUT or the test equipment. They implement standard interfaces such as serial ports, SSH shells and the VISA protocol. They also build on these standard interfaces to provide libraries for the remote control and automation of test equipment such as oscilloscopes, power supplies, signal generators, etc.
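The layering can be sketched as follows: an instrument driver is written against an abstract transport, so the same power supply class works whether the physical link is serial, SSH or VISA. The class and the SCPI-style commands below are illustrative.

```python
# Sketch of a generic instrument library built on a standard interface:
# any transport exposing query(command) -> str can drive the instrument.
# The PowerSupply class and its commands are illustrative.
class PowerSupply:
    def __init__(self, transport):
        self.transport = transport  # e.g. a serial, SSH or VISA session

    def set_voltage(self, volts):
        self.transport.query(f"VOLT {volts:.3f}")

    def read_voltage(self):
        return float(self.transport.query("MEAS:VOLT?"))
```

This separation is what lets a test bench developer add a new standard interface, or a new capability on an existing one, without touching the instrument-level code.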

The generic automation libraries are typically included with the test executor itself but can also be developed by the test bench developers to add specific capabilities to an existing supported interface or to add a new standard interface to the test executor ecosystem.

DUT Specific Libraries

The DUT specific libraries are software functions directly targeting the control and monitoring of the DUT. They provide specific methods of configuring the unit for the different tests and of monitoring the unit from its own point of view.

These libraries are typically developed by the test bench developer and require a good partnership with the software R&D team. The test bench developer should not have to reinvent the wheel here: the previously completed DFT analysis should have led to a specific list of software test interfaces, or “hooks”, for the developer to use, greatly simplifying development.
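A hypothetical example of what such a library can look like: a thin wrapper that maps test bench actions onto the R&D-provided hooks. The `testhook` command and its subcommands are assumptions for illustration, not a real interface.

```python
# Hypothetical DUT library wrapping R&D-provided test hooks exposed
# through a shell. The "testhook" command is an illustrative assumption.
class Dut:
    def __init__(self, shell):
        self.shell = shell  # any object with run(cmd) -> str, e.g. an SSH session

    def set_test_mode(self, mode):
        """Put the unit in a known configuration for a given test."""
        self.shell.run(f"testhook set-mode {mode}")

    def read_temperature(self):
        """Read the board temperature, in °C, from the unit's own sensor."""
        return float(self.shell.run("testhook read-temp"))
```

Keeping all DUT knowledge behind one class like this also makes it easy to share the library with the other teams that need it.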

This highlights a typical teamwork failing that can be observed in multidisciplinary teams. The software R&D, hardware R&D and V&V teams all need DUT specific libraries to remotely operate and test the system. Too often, these tools are developed independently by each team for essentially the same purpose. A close partnership, open communication between the teams and a complete DFT analysis should help plan the reuse of each other’s work, for everyone’s benefit.

Test sequences

The test sequences implement the test logic within the test executor framework. Each test case is defined in code as outlined by the test plan. A lot could be said about what to aim for and how to create great test sequences.

Moreover, debugging sequences should be added to the deployed testbench. The tester should not have to run an official test to create debugging conditions, or to verify a targeted modification made on an electronic card to solve a failed test. It is our custom to create such debugging sequences for each of the test cases, expanding the test capabilities of the testbench. Just make sure you do not give the tester too much power to alter the official tests gating deliveries.
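As a sketch, an official sequence and its debugging counterpart can share the same setup code; the official one returns a gated pass/fail verdict, while the debug one loops freely without gating anything. The `dut` and `dmm` driver objects and the limits are illustrative.

```python
# Sketch of an official test sequence and its debugging variant.
# The dut/dmm drivers and the 3.3 V rail limits are illustrative.
def seq_rail_3v3(dut, dmm):
    """Official test case: one measurement checked against plan limits."""
    dut.set_test_mode("idle")
    value = dmm.read_voltage()
    return ("rail_3v3", value, 3.135 <= value <= 3.465)

def debug_rail_3v3(dut, dmm, samples=10):
    """Debug sequence: same setup, repeated reads, no pass/fail gating."""
    dut.set_test_mode("idle")
    return [dmm.read_voltage() for _ in range(samples)]
```

Shipping the debug variants alongside the official sequences, but outside the gating result path, gives the tester investigation tools without letting him alter a delivery verdict.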

Test results storage

The last part of a test bench, but a very important one nonetheless, is the final destination of all the labor leading to a completed test: the test result storage. Simply put, a test bench cannot just display the result at the end of the test without saving it anywhere, leaving everyone to forget it and confusion to reign over the status of the systems on the production floor.

The test result is the ultimate record determining the status of a unit. It also typically includes a wealth of information which can be analyzed for the following purposes:

  • Analyzing results (measurements and logs) to better understand a failure and target debugging efficiently
  • Comparing averages across units or production runs to better understand the production process.

Given their importance, the test results need to reside in a safe and secure location, be it a backed-up local server or remote cloud storage. In all cases, they should be easily accessible for consultation.
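A minimal sketch of what such a record can look like: one timestamped JSON file per tested unit, carrying the serial number, the per-test results and the consolidated verdict. The field names are illustrative.

```python
# Sketch of a consolidated test record written as a JSON file.
# Field names and layout are illustrative.
import json
import pathlib
from datetime import datetime, timezone

def save_result(serial_number, results, directory):
    """Write one timestamped test record and return its file path."""
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    record = {
        "serial": serial_number,
        "timestamp": stamp,
        "results": results,  # per-test measurements and verdicts
        "passed": all(r["passed"] for r in results.values()),
    }
    out = pathlib.Path(directory) / f"{serial_number}_{stamp}.json"
    out.parent.mkdir(parents=True, exist_ok=True)
    out.write_text(json.dumps(record, indent=2))
    return out
```

A plain-file format like this stays readable for a human debugging a failure, while remaining easy to ingest in bulk when comparing production runs.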