
CHAPTER 8

Validity of Selection Procedures

© 2019 Wessex Press • Human Resource Selection 9e • Gatewood, Feild, Barrick

Learning Objectives

Explain the difference between and importance of a selection procedure’s reliability and validity.

Know the major steps a consulting firm should take and the deliverables it should provide if your company has contracted with the firm to undertake a selection procedure validation study.

Explain the differences among the major types of validation strategies.

Understand how statistical methods and validated selection procedures can be used to predict future job-related employee behaviors.

Communicate to managers and executives the meaning and importance of a statistically significant validity coefficient for a selection procedure.


An Overview of Validity

In this chapter, we focus on validity – its relation to reliability, and the principal analytic strategies available for determining the validity of selection procedure data

Validity represents the most important characteristic of data produced from measures used in HR selection

Like reliability, the importance of validity applies to both selection procedures and criteria

Validity shows what is assessed by selection measures and determines the kinds of legitimate inferences or conclusions we can draw from data such measures produce


An Overview of Validity

Validity: A Definition

Validity concerns the accuracy of judgments or inferences made from scores on selection measures, including predictors and criteria

We want to know the accuracy of hypothesized predictions about employee work behaviors for job success

Validation is the research process for discovering what a selection procedure measures and how well it measures it – that is, for testing the appropriateness of the inferences we draw from our selection procedures


An Overview of Validity

The Relation between Reliability and Validity

It is possible to have a measure that is reliable yet does not assess what we want for selection

The quantitative relationship between validity and reliability: a validity coefficient can be no larger than the square root of the product of the predictor's and criterion's reliabilities:

r_xy ≤ √(r_xx × r_yy)

where r_xy is the validity coefficient, r_xx the reliability of the predictor, and r_yy the reliability of the criterion
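A minimal numeric sketch of the ceiling reliability places on validity (the square root of the product of predictor and criterion reliabilities); the reliability values below are hypothetical:

```python
import math

def max_validity(rxx: float, ryy: float) -> float:
    """Theoretical upper bound on a validity coefficient:
    the square root of the product of predictor reliability (rxx)
    and criterion reliability (ryy)."""
    return math.sqrt(rxx * ryy)

# A predictor with reliability .80 and a criterion with reliability .60
# cannot yield a validity coefficient above about .69.
print(round(max_validity(0.80, 0.60), 2))  # → 0.69
```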


An Overview of Validity

Types of Validation Strategies

A validation study provides the evidence for determining the legitimate inferences that can be made from scores on a selection measure

Three classical approaches used for validating measures in HR selection:

Content validation

Criterion-related validation – includes both concurrent and predictive validation strategies

Construct validation


Content Validation Strategy

A selection procedure (test) or a criterion (performance evaluation) has content validity when it is shown that its content (items, questions, behaviors, etc.) representatively samples the content or aspects of the job associated with successful performance

“Content of the job” is a collection of job behaviors and the associated knowledge, skills, abilities, and other characteristics (competencies, personality, physical requirements, licenses, certifications, etc.) necessary for effective work performance

Emphasizes the role of expert judgment in determining the validity of a measure rather than relying on statistical methods

Judgments used to describe the degree to which content of a selection method reflects important aspects of work performance


Content Validation Strategy

Face validity is sometimes confused with the concept of content validity

Content validity deals with the representative sampling of the content domain of a job by a selection measure

Face validity concerns the appearance of whether a measure is measuring what is intended

Perceived face validity of selection procedures is the strongest correlate of participants’ beliefs regarding both a procedure’s effectiveness in identifying qualified people and its fairness


Content Validation Strategy

Major Aspects of Content Validation

Conducting a comprehensive job analysis

Describing the tasks performed on the job

Measuring criticality and/or importance of the tasks

Specifying WRCs required to perform these critical tasks

Measuring the criticality and/or importance of WRCs

operational definition of each WRC

relationship between each WRC and job task

complexity/difficulty of obtaining each WRC

whether an employee is expected to possess each WRC

whether each WRC is necessary for successful job performance

Linking important job tasks to WRCs


Content Validation Strategy

Major Aspects of Content Validation

Selecting experts to participate in a content validation study

Specifying selection procedure content

Selection procedure as a whole

Item-by-item analysis

Supplementary indications of content validity – predictor validity

Assessing selection procedure and job content relevance


Content Validation Strategy

Some Examples of Content Validation

A reading skills test based on actual safety procedures and operating procedures employees need to read upon job entry and their importance to work performance

Job-related employment interview questions to assess work performance dimensions

For content validity, derive test content from what incumbents do on the job (Figure 8.1)



Content Validation Strategy

Appropriateness of Content Validation?

In Section C1 of the Uniform Guidelines, it is noted:

A selection procedure based upon inferences about mental processes cannot be supported solely or primarily on the basis of content validity. Thus, a content strategy is not appropriate for demonstrating the validity of selection procedures which purport to measure traits or constructs, such as intelligence, aptitude, personality, commonsense, judgment, leadership, and spatial ability.

Recently, however, some industrial psychologists agree that “content validity is appropriate scientifically and professionally for use with tests of specific cognitive skills used in work performance”


Content Validation Strategy

Appropriateness of Content Validation?

Job analysis and content validation

Figure 8.2 summarizes major inference points that take place when using job analysis to help establish the content validity of a selection procedure

Point 1 is from the job itself to the tasks identified as composing it

Point 2 is from the tasks of the job to identified WRCs

Point 3 is the most critical – it is where final judgments regarding the content validity of the selection measure are made



Content Validation Strategy

Appropriateness of Content Validation?

Job analysis and content validation

To make the inferential leap supporting content validity, three important issues that contribute to physical and psychological fidelity must be addressed:

Does successful performance on the selection procedure require the same WRCs needed for successful work performance?

Is the mode used for assessing performance on WRCs the same as that required for job or task performance?

Are WRCs that are not required for the job absent from the predictor?


Content Validation Strategy

Appropriateness of Content Validation?

Job analysis and content validation

Uniform Guidelines specify some situations in which content validation alone is not appropriate – in these situations, other validation methods must be used:

When mental processes, psychological constructs, or personality traits – judgment, integrity, dependability, motivation – are not directly observable but inferred from the selection method

When the selection procedure involves WRCs an employee is expected to learn on the job

When the content of the selection device does not resemble a work behavior or the work setting


Content Validation Strategy

Appropriateness of Content Validation?

How content validation differs from criterion-related validation

In content validity, the focus is on the selection procedure and its manifest relation with the job content domain, whereas in others the focus is on the relations of the selection procedure with an external criterion

Criterion-related validity is narrowly based on a specific set of data, whereas content validity is based on a broader base of data and inference

Criterion-related validity is couched in terms of quantitative indices, whereas content validity is characterized using broader, more judgmental descriptors


Criterion-Related Validation Strategies

Two approaches typically undertaken when conducting an empirical, criterion-related study:

Concurrent validation

Information obtained on both a predictor and a criterion for a current group of employees

Validity of the inference is signified by a statistically significant (p ≤ 0.05) relationship

Figures 8.3 and 8.4 demonstrate an example of a concurrent validation study

Predictive validation

Involves the collection of data over time

Job applicants rather than job incumbents serve as the data source

Figure 8.5 illustrates five variations in which a predictive study might be conducted

Table 8.1 outlines the basic steps taken in both concurrent and predictive validation studies



Concurrent Validation

Strengths and weaknesses – several factors can limit the usefulness of a concurrent validation study:

Availability of a large sample working in comparable settings who will participate in the study

Differences in job tenure or length of employment

Representativeness of present employees to job applicants

Certain employees failing to participate

Motivation of employees to participate or employee manipulation of answers


Predictive Validation

Strengths and weaknesses:

Because of the inference tested by predictive validation, the method is appropriate for measures used in HR selection

Predicts how well job applicants will be able to perform on the job

One big weakness is the time interval required to determine the validity of the measure being examined

Can be difficult to explain to managers why selection measure information must be collected and filed away before the data can be used for HR selection purposes



Criterion-Related Validation Strategies

Concurrent versus Predictive Validation Strategies

Generally assumed that a predictive validation design is superior to a concurrent one because it more closely resembles an actual employment situation – predictive designs have been thought to provide a better estimate of validity

Minimal differences have been found in the validation results of the two types of designs – another review revealed no significant differences

For ability tests, studies suggest that a concurrent validation approach is just as viable as a predictive one

Studies have reported different results for predictive versus concurrent validation designs for both personality and integrity measures


Criterion-Related Validation Strategies

Requirements for a Criterion-Related Validation Study

At least four requirements are necessary before a criterion-related study should be considered:

The job should be reasonably stable and not in a period of change or transition

A relevant, reliable criterion that is free from contamination must be available or feasible to develop

It must be possible to base the validation study on a sample of people and jobs representative of those to which the results will be generalized

A large enough, and representative, sample of people on whom both predictor and criterion data have been collected must be available


Criterion-Related Validation Strategies

Stability of Criterion-Related Validation Over Time

A review indicated that the predictive validity of some measures rapidly decayed over time

Critics of the review noted that only one study reviewed incorporated an ability test to predict actual performance on the job

Another study found that the predictive validity of mental ability tests actually increased over time, job experience validity decreased, and predictive validity remained about the same for dexterity tests


Criterion-Related Validation Strategies

The Courts and Criterion-Related Validation

There is no guaranteed outcome of a legal case – only 5 of 12 defendants won their case. Among the findings:

Some courts preferred to judge validity on the basis of format or content of the selection instrument

Some courts were swayed by a test’s legal history even when evidence of the test’s validity was available; others were influenced by the type of test used

Some judges had preferences for the use of a predictive validation strategy versus a concurrent one

A statistically significant validity coefficient alone did not guarantee judgment for the defendant

Judges differed on their willingness to accept statistical corrections to predictor scores or correction for unreliability of the criterion


Criterion-Related Validation Strategies

Content versus Criterion-Related Validation: Some Requirements

Drawing from the Uniform Guidelines, Principles for the Validation and Use of Employee Selection Procedures, and other sources, the major feasibility requirements for conducting content and criterion-related (concurrent and predictive) validation methods are summarized in Table 8.2

The requirements are not meant to be exhaustive, only illustrations of major considerations when HR selection is involved



Construct Validation Strategy

Psychologists use the term construct to refer to a theoretical psychological concept, attribute, characteristic, or quality

When a psychological test is used in selection research, it assesses a construct – intelligence, sociability, and clerical ability are all theoretical abstractions called constructs

Specific measures are operational measures hypothesized to represent a specific construct

Construct validation helps us determine whether a measure does indeed reflect a specific construct

Figure 8.6 shows the hypothesized links between the constructs and their measures



Construct Validation Strategy

The example highlights the major steps for implementing a construct validation study:

The construct is carefully defined and theoretically developed – hypotheses are formed concerning the relationships between the construct and other variables

A measure hypothesized to assess the construct is developed

Studies testing the hypothesized relationships between the construct measure and other relevant variables are conducted


Construct Validation Strategy

Studies such as the following are particularly useful in construct validation research:

Intercorrelations among the measure’s items, questions, etc. should show whether the items cluster into one or more groupings

Items of the measure belonging to the same grouping should be internally consistent or reliable

Different measures assessing the same construct as our developed measure should be related to the developed measure

Content validity studies show how experts have judged the manner in which items, questions, etc. of the measure were developed and how these items sample the job content domain


Empirical Considerations in Criterion-Related Validation Strategies

Even when we have conducted validation studies on a selection procedure, we will want to answer two important questions:

Is there a relationship between applicants’ or employees’ responses to the selection procedures and their performance on the job?

If so, is the relationship strong enough to warrant the measure’s use in employment decision making?


Empirical Considerations in Criterion-Related Validation Strategies

Correlation

Computing validity coefficients

A validity coefficient is an index that summarizes the degree of relationship between a predictor and criterion

Table 8.3 shows example data from a hypothetical collection of sales ability score (predictor) and job performance rating (criterion) on 20 salespeople – an example scattergram of the data is shown in Figure 8.7

If a validity coefficient is not statistically significant, then the selection measure cannot be considered a valid predictor of the criterion (Figure 8.8)
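A bare-bones sketch of computing a validity coefficient and testing its significance; the predictor and criterion scores below are hypothetical, not the values in Table 8.3:

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between predictor and criterion scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den

# Hypothetical data for 10 salespeople (not the book's Table 8.3 values):
predictor = [12, 15, 11, 18, 9, 16, 14, 20, 10, 17]   # sales ability test score
criterion = [3, 4, 2, 5, 2, 4, 3, 5, 3, 4]            # performance rating

r = pearson_r(predictor, criterion)
# Significance check: t = r * sqrt(n-2) / sqrt(1-r^2), compared with the
# two-tailed critical t at alpha = .05 (about 2.306 for df = 8).
t = r * math.sqrt(len(predictor) - 2) / math.sqrt(1 - r ** 2)
print(round(r, 2), round(t, 2))
```

Because the computed t exceeds the critical value, this hypothetical coefficient would be judged statistically significant.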



Empirical Considerations in Criterion-Related Validation Strategies

Correlation

Importance of large sample sizes

A validity coefficient computed on a small sample must be higher in value to be considered statistically significant than a validity coefficient based on a larger sample

A validity coefficient computed on a small sample is less reliable than one based on a large sample – greater variability

The chances of detecting that a predictor is truly valid are lower for small sample sizes than for large ones
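The first point can be illustrated by computing, for several sample sizes, the smallest correlation that reaches significance; the critical t values are approximate, taken from standard tables:

```python
import math

# Approximate two-tailed critical t values at alpha = .05 (standard tables),
# keyed by degrees of freedom (n - 2).
T_CRIT = {10: 2.228, 28: 2.048, 98: 1.984}

def critical_r(n: int) -> float:
    """Smallest correlation reaching significance for a sample of n,
    using r_crit = t / sqrt(t^2 + df)."""
    df = n - 2
    t = T_CRIT[df]
    return t / math.sqrt(t ** 2 + df)

for n in (12, 30, 100):
    print(n, round(critical_r(n), 2))
# The required coefficient shrinks as the sample grows:
# roughly .58 with n = 12 but only about .20 with n = 100.
```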


Empirical Considerations in Criterion-Related Validation Strategies

Correlation

Interpreting validity coefficients

By squaring the validity coefficient, we obtain an index – coefficient of determination – that indicates our test’s ability to account for individual performance differences

The coefficient of determination represents the percentage of variance in the criterion that can be explained by variance associated with the predictor

In addition to the coefficient of determination, expectancy tables and charts are useful

Utility analysis can be used – its computation is far more complex
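A quick numeric illustration of the coefficient of determination (the validity coefficient below is hypothetical):

```python
# Coefficient of determination: the squared validity coefficient gives the
# proportion of criterion variance explained by the predictor.
validity = 0.40                # hypothetical validity coefficient
determination = validity ** 2  # 0.16, i.e., 16% of criterion variance
print(f"{determination:.0%} of criterion variance explained")  # → 16% ...
```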


Empirical Considerations in Criterion-Related Validation Strategies

Prediction

A statistically significant validity coefficient is helpful in showing that for a group of persons a test is related to job success

For individual prediction purposes, linear regression and expectancy charts can be used to aid in selection decision making – these tools should be employed only for those predictors that have a statistically significant relationship with the criterion


Empirical Considerations in Criterion-Related Validation Strategies

Prediction

Linear regression

Involves the determination of how changes in criterion scores are related to changes in predictor scores

A regression equation is developed that mathematically describes the functional relationship between the predictor and criterion

Two common types of linear regression – simple and multiple

Simple regression – only one predictor and one criterion (Figure 8.9 shows the regression line which summarizes the relationship between inventory scores and work performance ratings)

Multiple regression – assumes two or more predictors used to predict a criterion
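A minimal simple-regression sketch (hypothetical inventory scores and ratings, not the Figure 8.9 data):

```python
def fit_simple_regression(x, y):
    """Least-squares line y-hat = a + b*x relating criterion to predictor."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    return a, b

# Hypothetical inventory scores and performance ratings:
scores = [10, 12, 14, 16, 18, 20]
ratings = [2.0, 2.5, 3.0, 3.4, 4.1, 4.4]

intercept, slope = fit_simple_regression(scores, ratings)
# Predict the rating of an applicant scoring 15 on the inventory:
predicted = intercept + slope * 15
print(round(predicted, 2))  # → 3.23 (the mean rating, since 15 is the mean score)
```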



Empirical Considerations in Criterion-Related Validation Strategies

Prediction

Cross-validation – involves the following steps:

A large group of people on whom predictor and criterion data are available is randomly divided into two groups

A regression equation is developed on one of the groups – the “weighting group”

The equation is used to predict the criterion for the other group – the “holdout group”

Predicted criterion scores are obtained for each person in the holdout group

For people in the holdout group, predicted criterion scores are then correlated with their actual criterion scores
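The five steps can be sketched end to end on synthetic data (the names `weighting` and `holdout` follow the terms above):

```python
import math
import random

def pearson_r(x, y):
    """Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den

random.seed(8)

# Step 1: hypothetical predictor/criterion pairs for 40 people,
# randomly split into a weighting group and a holdout group.
data = [(x, 0.5 * x + random.gauss(0, 3)) for x in range(40)]
random.shuffle(data)
weighting, holdout = data[:20], data[20:]

# Step 2: develop the regression equation on the weighting group.
xs = [x for x, _ in weighting]
ys = [y for _, y in weighting]
mx, my = sum(xs) / 20, sum(ys) / 20
b = sum((x - mx) * (y - my) for x, y in weighting) / sum((x - mx) ** 2 for x in xs)
a = my - b * mx

# Steps 3-4: predicted criterion scores for each person in the holdout group.
predicted = [a + b * x for x, _ in holdout]
actual = [y for _, y in holdout]

# Step 5: the cross-validated coefficient -- predicted vs. actual criterion scores.
print(round(pearson_r(predicted, actual), 2))
```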


Empirical Considerations in Criterion-Related Validation Strategies

Prediction

Expectancy tables and charts

An expectancy table is a table of numbers that shows the probability that a job applicant with a particular predictor score will achieve a defined level of success

An expectancy chart presents essentially the same data except that it provides a visual summarization of the relationship between a predictor and criterion


Empirical Considerations in Criterion-Related Validation Strategies

Prediction

The construction of expectancy tables and charts is a five-step process:

Individuals on whom criterion data are available are divided into two groups – superior performers and others

For each predictor score, frequencies of the number of employees in each group are determined

The predictor score distribution is divided into fifths

The number and percentage of individuals in each group are determined for each “fifth” of the predictor score distribution

An expectancy chart that depicts these percentages is then prepared

Figure 8.10 shows the scattergram of the interview scores plotted against the performance ratings – Table 8.4 is the expectancy table developed from the plotted data
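The five construction steps can be sketched with hypothetical interview scores, each paired with a flag marking superior (1) versus other (0) performers:

```python
# Hypothetical (score, superior-performer flag) pairs for 20 employees.
people = [(52, 0), (55, 0), (58, 0), (60, 1), (61, 0),
          (63, 1), (64, 0), (66, 1), (68, 1), (70, 0),
          (71, 1), (73, 1), (75, 1), (77, 0), (78, 1),
          (80, 1), (82, 1), (85, 1), (88, 1), (90, 1)]

people.sort(key=lambda p: p[0])   # order by predictor (interview) score
fifth = len(people) // 5          # 4 people per fifth here

# For each fifth of the score distribution, report the percentage of
# superior performers -- one row of the expectancy table.
for i in range(5):
    band = people[i * fifth:(i + 1) * fifth]
    pct_superior = 100 * sum(flag for _, flag in band) / len(band)
    print(f"scores {band[0][0]}-{band[-1][0]}: {pct_superior:.0f}% superior")
```

With these made-up data, the probability of superior performance rises from 25% in the lowest fifth to 100% in the highest.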



Empirical Considerations in Criterion-Related Validation Strategies

Prediction

Expectancy tables and charts

Two types of expectancy charts can be prepared – individual and institutional

Individual expectancy chart shows the probability that a person will achieve a particular level of performance (Figure 8.11)

Institutional expectancy chart indicates what will happen within an organization if all applicants above a minimum interview score are hired (Figure 8.12)



Empirical Considerations in Criterion-Related Validation Strategies

Factors Affecting the Magnitude of Validity Coefficients

Reliability of criterion and predictor

Restriction of range

Criterion contamination

Violation of statistical assumptions – Figure 8.13
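Two of these factors have standard statistical corrections, sketched below: the classical correction for criterion unreliability (disattenuation) and the Thorndike Case II correction for direct range restriction. The input values are hypothetical:

```python
import math

def correct_for_criterion_unreliability(r_xy: float, ryy: float) -> float:
    """Disattenuate an observed validity coefficient for an unreliable criterion:
    r_corrected = r_xy / sqrt(ryy)."""
    return r_xy / math.sqrt(ryy)

def correct_for_range_restriction(r: float, sd_unrestricted: float,
                                  sd_restricted: float) -> float:
    """Thorndike Case II correction for direct range restriction on the predictor:
    r_c = u*r / sqrt(1 + (u^2 - 1) * r^2), where u = SD_unrestricted / SD_restricted."""
    u = sd_unrestricted / sd_restricted
    return (u * r) / math.sqrt(1 + (u ** 2 - 1) * r ** 2)

# An observed validity of .30 with criterion reliability .64:
print(round(correct_for_criterion_unreliability(0.30, 0.64), 3))  # → 0.375
# The same .30, restricted from applicant SD = 10 to incumbent SD = 6:
print(round(correct_for_range_restriction(0.30, 10, 6), 3))       # → 0.464
```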



Empirical Considerations in Criterion-Related Validation Strategies

Utility Analysis

A definition of utility analysis

The goal is to translate the results of a validation study into terms that are important to and understandable by managers

Using dollars-and-cents terms as well as other measures, such as percentage increases in output, utility analysis shows the degree to which use of a selection measure improves the quality of individuals selected versus what would have happened had the measure not been used


Empirical Considerations in Criterion-Related Validation Strategies

Utility Analysis

Some preliminary work on utility analysis

Validity coefficient – the magnitude of the correlation of the selection procedure with a criterion

Selection ratio – the ratio of the number of people to be hired to the number of applicants available

Base rate – the percentage of current employees considered successful on the job without use of the proposed selection procedure
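One widely used utility model is the Brogden-Cronbach-Gleser equation (not spelled out in these slides); the sketch below uses entirely hypothetical figures:

```python
def bcg_utility(n_hired: int, validity: float, sd_y: float,
                mean_z_hired: float, cost_per_applicant: float,
                n_applicants: int) -> float:
    """Brogden-Cronbach-Gleser estimate of the dollar gain from using a
    selection procedure: N_hired * r * SDy * mean predictor z of those
    hired, minus total testing cost."""
    gain = n_hired * validity * sd_y * mean_z_hired
    return gain - cost_per_applicant * n_applicants

# Hypothetical: hire 20 of 100 applicants with a test of validity .35,
# SD of job performance in dollars = $10,000, mean predictor z-score of
# those hired = 1.40, testing cost = $25 per applicant.
print(round(bcg_utility(20, 0.35, 10_000, 1.40, 25, 100)))  # → 95500
```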


Empirical Considerations in Criterion-Related Validation Strategies

Utility Analysis

Applying utility analysis to selection: some examples

Costing the value of selection procedure

Enhancing recruitment

Using a method with low validity



Broader Perspectives of Validity

Validity Generalization

An overview

Wide variations in the magnitudes of validity coefficients were observed across validation studies, even when the same test had been used

The idiosyncrasies of jobs, organizations, and other unknown factors were assumed to contribute to these differences in validity results

Validity generalization research indicates, instead, that validity is generalizable across situations


Broader Perspectives of Validity

Validity Generalization

Validity generalization methods:

Obtain a large number of published and unpublished validation studies for a selection procedure

Compute the average validity coefficient for these studies

Calculate the variance of differences reported

Subtract the variance due to the effects of small sample size

Correct the average validity coefficient and variance for errors due to other methodological deficiencies

Compare the corrected variance to the average validity coefficient to determine the variation in study results

If the differences are small, then differences are concluded to be due to methodological deficiencies and not to the nature of the situation
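The steps above can be sketched as a bare-bones meta-analysis. The study results are hypothetical, and only the sampling-error correction is applied, omitting the further artifact corrections the full method uses:

```python
# Hypothetical validation studies: (sample size, observed validity coefficient).
studies = [(68, 0.21), (120, 0.33), (45, 0.18), (200, 0.30), (90, 0.26)]

n_total = sum(n for n, _ in studies)

# Average validity coefficient, weighting each study by its sample size.
mean_r = sum(n * r for n, r in studies) / n_total

# Observed (weighted) variance of the coefficients across studies.
obs_var = sum(n * (r - mean_r) ** 2 for n, r in studies) / n_total

# Variance expected from sampling error alone (small-sample effects).
avg_n = n_total / len(studies)
err_var = (1 - mean_r ** 2) ** 2 / (avg_n - 1)

# If little or no variance remains after subtracting sampling error, the
# differences are attributed to methodological artifacts, not the situation:
# validity generalizes.
residual = obs_var - err_var
print(round(mean_r, 2), residual <= 0)
```

For these made-up studies, the observed variation is smaller than what sampling error alone would produce, so the differences would be judged artifactual.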


Broader Perspectives of Validity

Validity Generalization

Conclusions from validity generalization studies

Schmidt and Hunter concluded it is not necessary to conduct validity studies within each organization for every job

Mental ability tests can be expected to predict work performance in most employment situations


Broader Perspectives of Validity

Validity Generalization

Criticisms of validity generalization

The correction formulas used to generalize results are usually based not on sufficient data but on hypothetical values, derived from other research, that are assumed to be appropriate

Correction formulas may be inappropriate and may overestimate the amount of variance attributable to study deficiencies


Broader Perspectives of Validity

Validity Generalization

Validity generalization requirements

The user must be able to show that the proposed selection procedure assesses the same WRCs as the measures in the study database, or that it is a representative example of those measures

The user must be able to show that the job in the new employment setting is similar to the jobs or group of jobs included in the study database


Broader Perspectives of Validity

Job Component/Synthetic Validity

An overview

There are a number of different approaches to job component validity – synthetic validity

Involves demonstrating that a correlation exists between a selection procedure and at least one specific aspect or component of a job

Once established, the selection procedure is assumed to be valid for predicting performance on that job component wherever the component exists on other jobs


Broader Perspectives of Validity

Job Component/Synthetic Validity

Conducting a job component validation study

Conduct an analysis of the job using the Position Analysis Questionnaire (PAQ)

Identify the major components of work required on the job

Identify the attributes required to perform the major components of the job

Choose tests that measure the most important attributes identified from the PAQ


Broader Perspectives of Validity

Job Component/Synthetic Validity

Accuracy of job component validity studies

Job component validity estimates are generally lower and more conservative than validity coefficients obtained in actual validation studies

The Department of Labor’s O*NET database consists of job analysis information collected on a large array of occupations – information on 42 generalized work activities that occupations may involve is available

Use of the O*NET for job component validation is in its infancy

More developmental research is needed


Broader Perspectives of Validity

Job Component/Synthetic Validity

Criticisms of job component validity strategy

Mossholder and Arvey have noted that the method has been less successful in predicting actual validity coefficients

The strategy has been relatively less useful in predicting psychomotor test data

The strategy generally has reported test results from the General Aptitude Test Battery – available only to public employers


Validation Options for Small Sample Sizes

Content validity

Validity generalization

Job component validity or some other form of synthetic validity – a logical process of inferring test validity for components of jobs (Figure 8.16 illustrates three jobs and work performance components common to each)



The Future of Validation Research


Changes occurring in the workplace:

Increasing numbers of small organizations without the resources, sample sizes, and technical skills needed to undertake traditional validation strategies

Increasing use of teams of workers rather than individuals

Changes in the definitions of job success to include such criteria as organization and job commitment, teamwork, and quality of service delivered to customers

The changing nature of work – jobs and requirements for performing them are becoming more fluid, requiring job analytic methods that focus on broader work capacities rather than on molecular task requirements