GUIDANCE
Toolsets / Human Factors
Chapter 8: Determine Human Factors Requirements in System Testing
 
PURPOSE

This chapter discusses the determination of human factors testing requirements for the Integrated Product Team (IPT) to ensure that human factors considerations are adequately integrated into the system acquisition testing program.

Testing is performed to assess the operational effectiveness and suitability of the products to meet system requirements. The purpose of human factors in system testing is to produce evidence of the degree to which the total system can be operated and maintained by members of the target population in an operational environment. If the total system exhibits performance deficiencies when operated or maintained by members of the target population, the testing should produce human factors causal information.

TIMING

Human factors planning for test and evaluation (T&E) activities is initiated early in the acquisition process, during Investment Analysis. Specific human factors-related T&E tasks and activities are subsequently identified in the Integrated Program Plan. The conduct of human factors T&E is integrated with the system T&E program, which is largely performed during Solution Implementation. Post-deployment assessments that include human performance parameters assist in lifecycle planning and continuous improvement.

"HOW TO"

Key principles for addressing human factors requirements in system testing are:

- Coordinate human factors test planning early in the acquisition program.

- Measure human performance of critical tasks during testing in terms of time, accuracy, and operational performance.

- Leverage human factors data collection by integrating efforts with system performance data collection (see the sketch following this list).

- Make recommendations for human factors design and implementation changes and human performance improvements.
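
As a hedged illustration of the integration principle above, the Python sketch below shows one way a human performance observation could carry the same scenario and event identifiers used by the system performance data collection, so the two data sets can later be merged; the field names, identifiers, and file name are hypothetical.

    # Minimal sketch, assuming hypothetical test instrumentation: each human
    # performance observation carries the scenario and event identifiers used
    # by the system performance data collection so the data sets can be merged.
    import csv

    observation = {
        "scenario_id": "S-014",      # shared with the system data logs
        "event_id": "EVT-0231",      # shared with the system data logs
        "operator_id": "OP-07",
        "task": "Resolve conflict alert",
        "task_time_s": 12.4,
        "errors": 0,
    }

    with open("human_performance_obs.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=observation.keys())
        writer.writeheader()
        writer.writerow(observation)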

Providing human factors in system testing requires an early start and a continuous process. Figure 8-1 illustrates the flow of this process. During the front-end analysis, and in conjunction with development of the Human Factors Program, plans and analyses help identify the system functions. The human factors experts then review those functions and identify the human tasks that may be critical to their performance.

Figure 8-1. Process for providing human factors in system testing.

Simulations, studies, analyses, prototype evaluations, research, and trade-off studies may be required by the human factors experts to determine the effect of human performance on system performance.

Using the system's mission objectives, critical operational issues, and related criteria, the human factors experts derive measures of effectiveness, measures of suitability, and the criteria and performance thresholds associated with these measures. Data requirements and data collection plans are formulated, along with the resources required (e.g., funding, analytical personnel, data collection equipment). Human performance is then tested, analyzed, and evaluated for its impact on system performance.
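
For concreteness only, the Python sketch below shows one way derived measures, their criteria, and observed test values might be recorded and compared; the measure names, units, thresholds, and observed values are hypothetical, and the structure is simply one possible representation.

    from dataclasses import dataclass

    @dataclass
    class Measure:
        # One human performance measure and the criterion it must satisfy.
        name: str
        unit: str
        threshold: float
        higher_is_better: bool

        def is_met(self, observed: float) -> bool:
            # Compare an observed value against the criterion in the right direction.
            if self.higher_is_better:
                return observed >= self.threshold
            return observed <= self.threshold

    # Hypothetical measures and observed test values for an illustrative operator task
    measures = [
        Measure("Flight-strip handoff time", "seconds", 15.0, higher_is_better=False),
        Measure("Data-entry accuracy", "percent correct", 98.0, higher_is_better=True),
    ]
    observed = {"Flight-strip handoff time": 12.4, "Data-entry accuracy": 96.5}

    for m in measures:
        status = "met" if m.is_met(observed[m.name]) else "NOT met"
        print(f"{m.name}: {observed[m.name]} {m.unit} (criterion {m.threshold}) -> {status}")

Recording measures in this way keeps the criterion, its direction, and the observed result together, which simplifies the later feedback to the IPT.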

Since the purpose of incorporating human factors in system acquisition is to produce safer, more effective systems, a continuous feedback loop is established to the other IPT members and the user representative to recommend design and implementation changes and possible staffing and training solutions.

Step 1: Conduct Front-End Analysis

This step consists primarily of applying the results from the front-end analysis conducted during mission analysis and investment analysis to feed the Human Factors Program. Predecessor system(s), similar system components, lessons learned, and other documentation are used to identify critical operational issues, resource limitations and constraints, critical tasks, and operator and maintainer performance levels, as well as system performance thresholds that should be incorporated into the testing program.
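
As a simple, hypothetical illustration of this screening, the Python sketch below filters candidate tasks drawn from predecessor-system data against an example criticality rule; the task names, data fields, and rule are assumptions, not prescribed criteria.

    # Minimal sketch: screen candidate tasks from predecessor-system data for
    # criticality. Task names, fields, and the screening rule are hypothetical.
    candidate_tasks = [
        {"task": "Accept automated handoff", "freq_per_hour": 30, "error_consequence": "high"},
        {"task": "Adjust display brightness", "freq_per_hour": 1, "error_consequence": "low"},
        {"task": "Resolve conflict alert", "freq_per_hour": 4, "error_consequence": "high"},
    ]

    def is_critical(task: dict) -> bool:
        # Example rule: frequent tasks, or tasks whose errors have severe consequences
        return task["error_consequence"] == "high" or task["freq_per_hour"] >= 20

    critical_tasks = [t["task"] for t in candidate_tasks if is_critical(t)]
    print(critical_tasks)  # tasks carried forward into the testing program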

Step 2: Develop Human Factors Testing Requirements

Using the system's critical operational issues, human performance operational issues are derived. Based on the results of the front-end analysis, human performance measures of effectiveness (MOEs) and measures of performance (MOPs) are developed in terms that relate human performance to system performance and operational suitability.

Human factors requirements should identify the data that must be collected to satisfy the MOEs and MOPs. The data collection effort must be integrated into the system test and evaluation planning and should identify the needed support (e.g., personnel and other resources, facilities, software tools, equipment).
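
For illustration, the Python sketch below records the traceability from each measure to its data elements, collection method, and support needs; every entry is hypothetical, and the dictionary layout is only one possible way to document these requirements.

    # Minimal sketch of traceability from each measure to the data elements,
    # collection method, and support needed to satisfy it. Entries are hypothetical.
    data_requirements = {
        "Data-entry accuracy (MOP)": {
            "data_elements": ["keystrokes entered", "entries flagged as errors"],
            "collection_method": "automated system log plus observer checklist",
            "support": ["instrumented workstation", "two trained data collectors"],
        },
        "Mission task completion rate (MOE)": {
            "data_elements": ["tasks attempted", "tasks completed within the time standard"],
            "collection_method": "scenario event log",
            "support": ["scenario scripts", "data reduction software"],
        },
    }

    for measure, req in data_requirements.items():
        print(measure, "->", ", ".join(req["data_elements"]))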

Products of this step may include:

- Human factors test planning for inclusion in the system test and evaluation planning

- Issues for resolution by the Human Factors Program

- New or changed procedures for operational test and evaluation

- Operator and maintainer task lists to include identification of critical tasks

- Human performance measures of effectiveness and measures of performance

- Identification of data requirements

- A listing of data collection tools, surveys, questionnaires, analyses, and evaluation schemes

- Resource requirements including equipment, software, data analysis skills, data collection personnel, computer time, personnel training requirements, and the like.

Step 3: Conduct Human Performance Testing

Human factors involvement in early system test and evaluation is critical to producing safe, suitable, and effective systems. Developmental testing, conducted early to reduce risk, often provides useful operational and human factors information. Developmental testing assesses progress toward meeting critical operational issues as well as readiness to proceed to operational testing. Operational test and evaluation, conducted to estimate or verify operational effectiveness and suitability, provides information about human performance as an integral part of system performance.

Data are collected during the developmental and operational tests and the effect of human performance on system performance and operational suitability is calculated or estimated. Inconsistencies between the measures used in the investment analysis and the results obtained from actual test data need to be resolved. Testing and evaluation should assess the validity of the assumptions and conclusions made during the analysis of various alternatives.
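
A minimal sketch of this kind of data reduction, assuming hypothetical trial data and hypothetical time and accuracy criteria, is shown below in Python.

    from statistics import mean

    # Minimal sketch of reducing collected trial data to the time and accuracy
    # statistics compared against their criteria. All values are hypothetical.
    trials = [
        {"task_time_s": 11.8, "errors": 0, "opportunities": 12},
        {"task_time_s": 14.2, "errors": 1, "opportunities": 12},
        {"task_time_s": 13.1, "errors": 0, "opportunities": 12},
    ]

    TIME_CRITERION_S = 15.0     # hypothetical: mean task time no greater than 15 s
    ACCURACY_CRITERION = 0.98   # hypothetical: at least 98 percent error-free actions

    mean_time = mean(t["task_time_s"] for t in trials)
    accuracy = 1 - sum(t["errors"] for t in trials) / sum(t["opportunities"] for t in trials)

    print(f"Mean task time: {mean_time:.1f} s (criterion {TIME_CRITERION_S} s)")
    print(f"Accuracy: {accuracy:.1%} (criterion {ACCURACY_CRITERION:.0%})")
    print("Time criterion met:", mean_time <= TIME_CRITERION_S)
    print("Accuracy criterion met:", accuracy >= ACCURACY_CRITERION)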

Human performance testing of nondevelopmental or commercial-off-the-shelf items should take advantage of warranties, previous commercial testing, and product experience. Modeling and simulation are powerful tools for verifying the human performance associated with various design approaches.
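
As one hedged example of how simulation can be applied here, the Python sketch below runs a simple Monte Carlo estimate of how variability in an operator's decision time might affect an end-to-end response-time requirement; the distributions, parameter values, and 20-second requirement are all hypothetical.

    import random

    # Minimal sketch: a Monte Carlo estimate of how operator decision-time
    # variability affects an end-to-end response time. Distributions, parameters,
    # and the 20 s requirement are hypothetical.
    random.seed(1)

    def simulated_response_time_s() -> float:
        machine_processing = random.uniform(2.0, 4.0)          # automated portion
        operator_decision = random.lognormvariate(1.8, 0.35)   # human portion, right-skewed
        return machine_processing + operator_decision

    N = 10_000
    times = [simulated_response_time_s() for _ in range(N)]
    within_requirement = sum(t <= 20.0 for t in times) / N
    print(f"Estimated probability of meeting the 20 s requirement: {within_requirement:.1%}")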

Step 4: Apply Results of Human Performance Testing

The information developed by the human factors test and evaluation effort provides the other IPT members and the user representative with feedback to produce the safest and most effective system possible within program baselines. Recommendations may be made for design or implementation changes, human performance improvements, or training solutions.

CHECKLIST

QUESTIONS

- Has a front-end analysis adequately identified the human performance issues for test planning?

- Have human performance critical operational issues and criteria been identified?

- Have human performance Measures of Effectiveness (MOEs) and Measures of Performance (MOPs) been identified?

- Are data requirements identified that will satisfy the MOEs and MOPs?

- Have the resources necessary to support the collection of human performance data been identified and made available?

- Has the human factors data collection effort been integrated with the system data collection effort(s)?

- Have options been identified for human performance data collection if the primary data collection plans are not feasible or practical?

- Are human performance data collected in terms of task performance time and accuracy?

- Are data collectors trained to identify and report potential human performance issues?

- Are other sources of data (such as user comments) being reviewed for human performance issues?

- Have human performance data been analyzed with respect to training effectiveness, task overloading, skill creep, safety, health hazard or procedural inadequacy issues?

- Has feedback been provided to the other IPT members?