|NI TestStand 2014 Help|
Most automated test systems need to generate a report that logs information about each test run against the unit under test (UUT), including the test status, measurement results, test parameters, and diagnostic information collected during the test. Test systems also have system-level requirements and constraints—such as hardware platform, test throughput, tester up-time, and linkage to Manufacturing Execution Systems (MES) or other enterprise management systems—that can influence the report generation strategy the test system uses.
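As a rough illustration only (this is not TestStand's actual report schema, which is configurable, and all names here are hypothetical), a per-UUT report record of the kind described above might carry fields like these:

```python
from dataclasses import dataclass, field, asdict
from typing import Any, Optional, Tuple, List

@dataclass
class StepResult:
    # One test step outcome recorded during the run.
    name: str
    status: str                                  # e.g. "Passed" or "Failed"
    measurement: Optional[float] = None          # measured value, if any
    limits: Optional[Tuple[float, float]] = None # (low, high) test limits

@dataclass
class UUTReport:
    # Per-UUT record: overall status plus the step results collected
    # during the test, suitable for logging or upload to an MES.
    uut_serial: str
    overall_status: str
    steps: List[StepResult] = field(default_factory=list)

    def to_dict(self) -> dict:
        # Flatten to a plain dict, ready for JSON/XML serialization.
        return asdict(self)

report = UUTReport("SN-0042", "Failed", [
    StepResult("PowerRailTest", "Passed", measurement=3.31, limits=(3.2, 3.4)),
    StepResult("RFLevelTest", "Failed", measurement=-47.2, limits=(-45.0, -40.0)),
])
```

The point of the sketch is only that a report aggregates per-step results, limits, and an overall status per UUT; how and when such a record is serialized is exactly what the report generation strategy decides.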
In most cases, the report generation strategy defined for one test system might not meet the requirements of another test system, even within the same organization. Test system architects must typically consider many or all of the following requirements when defining the appropriate report generation strategy for a particular test system:
The topics in this help file examine each of these requirements, recommend the TestStand report generation settings and configurations that best meet each requirement in typical scenarios, and identify the other requirements you might have to trade off when optimizing a test system for that requirement. National Instruments recommends that you use the information in this help file as a starting point for designing a report generation strategy, and then iteratively measure and modify the test system to achieve the optimal balance among the requirements.
Consider the different scenarios in which you run a test system and which requirements matter most in each. For example, when you first develop and debug a test system, throughput might not matter, but support for post-failure information recovery might be critical. When you move to production, throughput might become the most important requirement.