Concurrent Validity

If you want to check the agreement between two testing tools or measures administered at the same time, concurrent validity is what you need. It lets you compare a new test with an established one that is already considered valid.

This type of validity is a form of criterion validity. “Concurrent” means “simultaneous”: you obtain the criterion variable and the new test scores at the same time.

Example

Suppose you want to evaluate the concurrent validity of your new survey on employees’ workplace satisfaction. You can ask a sample of office workers to complete two surveys, one that has already been validated and your new one, and compare the answers. Alternatively, you can ask the same employees to complete your new survey and compare the results with an established measure of workplace satisfaction. If the results obtained with these two techniques closely agree, you can conclude that both surveys measure the same concept, workplace satisfaction, and that your survey has high concurrent validity.

This subtype of validity is especially important when you want to introduce a new measure because you believe it is more objective or cost-efficient than the existing one.

What Stands Behind Concurrent Validity?

Concurrent validity is established by comparing a new test with a validated one. The validated test is regarded as the criterion, or “gold standard.” To be considered valid, the new test should measure the same construct as the established test so that it can replace the widely accepted, validated method.

The results obtained with the new test should correlate strongly with the results of the existing test; only then can you establish concurrent validity.
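
In practice, that correlation is usually summarized with a coefficient such as Pearson’s r. Below is a minimal sketch of the check, assuming the new and validated test scores for the same respondents were collected at the same time; all the numbers are made up for illustration.

    from scipy.stats import pearsonr

    # Hypothetical scores for the same eight respondents, collected at the
    # same time: one set from the validated ("gold standard") test and one
    # from the new test being evaluated.
    validated_scores = [72, 65, 88, 54, 91, 60, 77, 83]
    new_test_scores = [70, 61, 90, 50, 89, 64, 75, 80]

    r, p_value = pearsonr(validated_scores, new_test_scores)
    print(f"Pearson r = {r:.2f} (p = {p_value:.3f})")

    # A strong positive correlation (r around 0.7 or higher is a common rule
    # of thumb, not a fixed standard) is taken as evidence of concurrent validity.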

Remember!

You can use this measure only if you have something to compare it with. If there is no other criterion or validated measure available, you cannot use concurrent validity to assess your test.

Example of Concurrent Validity

Let’s imagine you are researching workplace satisfaction among office workers. Suppose you want to survey an IT company’s employees with both a previously validated tool and your new survey. The existing survey you have found consists of 32 questions that measure workplace satisfaction on a Likert scale.

However, you believe that your newly developed 16-item survey will take less time and effort to complete. At the same time, you know that the scores from both surveys should differentiate employees’ attitudes in a similar way: an employee who scores high on the previously validated 32-item scale should also score high on your new 16-item scale.
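
As a rough sketch of how this comparison could be run, one might sum each employee’s item ratings into a total score per survey and then correlate the two totals. The latent satisfaction levels, the simulated item responses, and the simulate_items helper below are all invented for illustration.

    import numpy as np
    from scipy.stats import pearsonr

    rng = np.random.default_rng(42)

    # Hypothetical data: a latent "true" satisfaction level per employee,
    # plus noisy Likert-style (1-5) item responses around it for each survey.
    n_employees = 30
    latent = rng.uniform(1, 5, size=n_employees)

    def simulate_items(latent, n_items, noise=0.8):
        """Simulate Likert responses (1-5) clustered around each employee's
        latent satisfaction level."""
        raw = latent[:, None] + rng.normal(0, noise, size=(len(latent), n_items))
        return np.clip(np.round(raw), 1, 5)

    validated_items = simulate_items(latent, 32)  # 32-item validated survey
    new_items = simulate_items(latent, 16)        # 16-item new survey

    # Total score per employee on each survey.
    validated_totals = validated_items.sum(axis=1)
    new_totals = new_items.sum(axis=1)

    r, p_value = pearsonr(validated_totals, new_totals)
    print(f"Correlation between 32-item and 16-item totals: r = {r:.2f}")

A high correlation between the two totals would support the claim that the shorter survey ranks employees in much the same way as the longer, validated one.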

Comparison of Concurrent and Predictive Validity

Criterion validity comprises concurrent and predictive validity. Both subtypes show how well a new test correlates with a gold standard or an already established criterion.

Example

Suppose you are researching consumers’ behavior when they are dissatisfied with a product they have just bought, and you want to validate a new survey based on self-reports. The most common criterion here is the number of incidents in which your respondents were deeply dissatisfied with a product, so your self-report measure has to correspond to how often your participants were actually frustrated.

So, you establish concurrent validity by measuring the test scores and the dissatisfaction criterion within the same period, for example, the same two weeks. You establish predictive validity by collecting the test scores over those two weeks and then measuring the criterion later, for instance, over another two-week window a month afterwards, with no testing in between. This lets you evaluate the predictive potential of the self-reports.

Here lies the main difference between concurrent and predictive validity: concurrent validity requires the test scores and criterion variables to be obtained simultaneously, while predictive validity requires the criterion variables to be assessed after the test scores have been collected.
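
To make the timing difference concrete, here is a small sketch under the assumptions of the example above: the criterion (the count of dissatisfaction incidents) is recorded once during the same two-week testing window and once a month later. All numbers are invented.

    from scipy.stats import pearsonr

    # Hypothetical self-report test scores collected during a two-week window.
    test_scores = [14, 9, 21, 6, 18, 11, 25, 8]

    # Criterion: dissatisfaction incidents counted in the SAME two weeks ...
    incidents_same_period = [3, 2, 5, 1, 4, 2, 6, 1]
    # ... and counted again a month later, with no testing in between.
    incidents_month_later = [2, 2, 6, 1, 4, 3, 5, 2]

    # Concurrent validity: test scores vs. criterion measured at the same time.
    r_concurrent, _ = pearsonr(test_scores, incidents_same_period)
    # Predictive validity: test scores vs. criterion measured later.
    r_predictive, _ = pearsonr(test_scores, incidents_month_later)

    print(f"Concurrent validity estimate: r = {r_concurrent:.2f}")
    print(f"Predictive validity estimate: r = {r_predictive:.2f}")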

Concurrent Validity Limitations

Many researchers consider concurrent validity a weak type of validity because of three essential limitations.

  1. The gold standard can be biased.
    If the gold standard is biased, no new test validated against it can demonstrate concurrent validity, even if the new test itself is sound: it will appear biased too, and two biased measures cannot establish validity because they only confirm one another. That is why concurrent validity, even when properly established, is not sufficient evidence of a valid measure or tool on its own. Other types of validity should also be applied.
  2. You can use concurrent validity only when you have criterion variables.
    These variables, or gold standards, may be difficult to obtain. For example, if you measure a specific emotion, there may be no objective standard to compare against; all your conclusions rest on your respondents’ subjective answers.
  3. You can apply concurrent validity only to tools (or tests) that evaluate the current situation.
    You cannot use it to assess potential or future outcomes. If you need to do that, you should opt for predictive validity.

Final Thoughts

As you can see, concurrent validity, as a subtype of criterion validity, is helpful when you need to measure how valid your new test is in comparison with an existing one when both are administered at the same time. You now know how to assess it and how it differs from predictive validity.

However, you also need to keep in mind the specific limitations of concurrent validity and the circumstances that can prevent you from using it to validate your new tool.
