
The most crucial concept underlying the use of pre-employment assessments is validity. The question is never just "Is this test valid?"

The real question: "Is this test valid for this intended purpose and does it support the decisions that are going to be made?"

What is validity? Validity measures how appropriate a test is for a specific purpose: a test may be valid for one use and invalid for another. More formally, validity is the degree to which evidence and theory support specific interpretations and uses of test scores.

Why do pre-employment tests need to be validated? In 1978, the Equal Employment Opportunity Commission (EEOC) published the Uniform Guidelines on Employee Selection Procedures to ensure that testing is applied impartially and to protect minority applicants from discriminatory employment procedures. Aaron Wallis is committed to upholding these guidelines and to ensuring that the tests you administer are developed in compliance with them.

What's the best method of validation? The guidelines do not state that one method is better than another; the method used must fit the needs of the business or organization.

There are three methods of validation set forth by the EEOC.

Criterion Validity

If data demonstrate that a test is significantly correlated with an important measure of job performance, the test demonstrates criterion validity. For example, if the current managers who scored highly on a test of project management skills also completed their projects on time and under budget, the test would demonstrate criterion validity.
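A criterion validity check of this kind boils down to measuring the correlation between test scores and a job-performance criterion. The sketch below illustrates the idea with a Pearson correlation coefficient; the data, names, and thresholds are hypothetical examples, not real assessment results or an Aaron Wallis procedure.

```python
# Illustrative sketch of a criterion validity check: does a test score
# correlate with a measure of job performance? All data here are
# hypothetical, invented purely for illustration.
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical sample: project-management test scores for eight managers,
# and the percentage of their projects delivered on time and on budget.
test_scores = [55, 62, 70, 74, 80, 85, 90, 95]
on_time_pct = [40, 55, 60, 68, 75, 78, 88, 92]

r = pearson(test_scores, on_time_pct)
print(f"correlation r = {r:.2f}")
# A strong, statistically significant positive r is the kind of evidence
# that supports a claim of criterion validity.
```

In practice a real criterion study would also test the correlation for statistical significance and use a far larger sample than this toy one.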

Construct Validity

The term construct is a technical term for an underlying characteristic such as intelligence, creativity, or interpersonal skill. Construct validity is demonstrated if a test measures traits that have been shown to influence successful performance of a job. A test that measures the interpersonal communication skills of a potential customer service representative would probably demonstrate construct validity.

Content Validity

Content validity is demonstrated if the questions that make up an assessment are representative of content that is required to perform a particular activity or task. A test made up of algebra questions given to an applicant for a math teacher's position would demonstrate content validity.

Aaron Wallis tests are provided by Kenexa Prove It! and are content validated: they focus on real-life scenarios and knowledge-based actions to assess the test taker's level in a particular skill set.

For example, Aaron Wallis currently offers many tests applicable to an Administrative Assistant role. Unlike criterion-based studies, we do not offer one test that covers every aspect of the position; instead, we offer several tests that cover the diversity of skills for your particular hiring needs. Nor do we determine whether an applicant has the right demeanor or personality (construct validity) to be, for example, an Administrative Assistant. You pick and choose the tests that cover the facets of the job description that are important to you. The important point is that each of these must be an actual skill used on the job.

Methodologies for content validity studies require documentation that a test provides a balanced, representative and bias-free assessment of the test taker's knowledge of a particular skill set. Such documentation is derived from content analysis, evaluation of the target skill, and unbiased examination, systematically conducted and reported by the Content Validation Experts.

Customer Responsibility in Maintaining Content Validity

Content validity is situational. While Aaron Wallis is committed to ensuring internal content validity, it is the responsibility of the test administrator to ensure external validity. The internal content of a test may be valid but can become invalid if the test is administered improperly. Test administrators are responsible for conducting all testing in compliance with the EEOC Guidelines.

The First Step of Customer Compliance

Only administer assessments that test skills that will be used on the job. For example, do not administer a Java 2 test to someone who will not be working with Java 2 on the job. By conducting a study of the skills required for each position, you will gain greater knowledge of your applicants' abilities and ensure that your testing is in compliance with the EEOC Guidelines.

The Second Step of Customer Compliance

Review the content of each test before administering. The content of the test must fit the skills required for the position. If, upon review, you find that the content falls outside of the core responsibilities of the job, reassess your initial choice of tests or edit existing Prove It! tests to craft the perfect assessment to meet your needs.

The Third Step of Customer Compliance

Review the skills that are job applicable and that you would like to test. Could the skill you would like to test feasibly be taught in a brief on-the-job training period? The EEOC Guidelines specifically state that a pre-employment test should not cover skills that could conceivably be learned in a brief on-the-job orientation.

Cutoff Rates

A common misconception is that validation can be used to create cutoff rates for pre-employment decision-making. Although our experts define common levels of proficiency, it is the test administrator's responsibility to define acceptable scores based on insight into the job requirements as well as situational concerns. Based on the data provided by the test results, the test administrator should be able to interpret the skill level of the test taker and have a sound basis for conducting further analysis or validation studies.

Scoring

An important point to remember when interpreting test scores is that the tests' scoring methodologies differ from those used in academic institutions. For example, a 58% score does not reflect "failure." Rather, it reflects the percentage of questions within a skill level, skill type, and task that the test taker answered successfully. Prove It! tests are designed to be well balanced; if the position does not require all of the skills that an original test checks for, or does not require advanced knowledge, a 58% score may be acceptable. Taking the time to edit your selected tests to the position's needs will result in more accurate data and higher scores.
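The arithmetic behind this interpretation can be made concrete. The sketch below shows an overall score as the share of questions answered correctly across skill levels, and then rescores on only the levels a position actually requires; the question counts and level names are hypothetical, not Prove It! data.

```python
# Illustrative sketch of score interpretation: the overall percentage is
# simply the share of questions answered correctly. The breakdown below
# is hypothetical example data, not an actual Prove It! test structure.
results = {
    # skill level: (questions answered correctly, questions asked)
    "basic":        (10, 10),
    "intermediate": (7, 10),
    "advanced":     (4, 10),
}

correct = sum(c for c, _ in results.values())
asked = sum(t for _, t in results.values())
overall = 100 * correct / asked
print(f"overall score: {overall:.0f}%")  # 70%, which does not mean "failure"

# If the position only requires basic and intermediate skills, rescoring
# on the relevant levels gives a fairer picture of the candidate.
relevant = ["basic", "intermediate"]
rc = sum(results[k][0] for k in relevant)
rt = sum(results[k][1] for k in relevant)
relevant_score = 100 * rc / rt
print(f"relevant-skills score: {relevant_score:.0f}%")  # 85%
```

This is the same reason editing a test down to the position's actual requirements tends to produce higher, more meaningful scores.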

If you have any questions regarding score interpretation or test validation, please feel free to contact Aaron Wallis. We will be happy to discuss at length those procedures that are conducted to ensure content validity as well as your responsibilities towards EEOC compliance.

For a full Validation Report, contact Rob Scott, Managing Director at Aaron Wallis.