Random or Systematic Error?
Measurement error occurs in scientific research when there is a difference between the value you obtain from an observation and the true value of the quantity you are measuring. Such errors are also called observational or experimental errors.
The two fundamental types of errors are the following:
- Random error is a chance discrepancy between an observed value and the true value that varies unpredictably from one measurement to the next. For example, an observer occasionally misreads a temperature on the scale and records the wrong value.
- Systematic error is a consistent difference between observed and true values that shifts all measurements in the same direction. For example, a miscalibrated thermometer shows temperatures that are always higher than they really are.
You need to identify the type and source of an error to reduce its effect on the results and correct the measurements.
What Error Is Worse - Random or Systematic?
Random errors happen often and usually do not cause a big problem. They arise naturally: measurements of the same quantity can vary even when you measure it several times in a row, because of slight changes in the environment, the tools, or your own interpretation.
However, you should watch this variability carefully, because conclusions about relationships between two or more variables may not be valid if the data are biased. In that case, check your results for systematic error. This type of error is much worse, because consistently distorted measurements can invalidate the overall outcome of your work.
What Is Affected - Accuracy or Precision?
When a random error occurs, it affects precision: reproducibility suffers, and you will not get the same result when you repeat the measurement under the same conditions. Systematic errors, by contrast, affect accuracy: the results consistently diverge from the real-life values, so every measurement is distorted.
Think of shooting at a target. You aim (observe) and try to hit the centre (the true value) as closely as possible, which is why you may need to repeat the observation to get closer. Random errors scatter your shots around the centre when you measure the same thing several times. Systematic errors shift every shot away from the target in the same direction, like always hitting to the right or to the left of the bullseye.
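The target analogy can be made concrete with a small Python sketch. The shot positions below are invented for illustration: bias (distance of the average shot from the bullseye) stands in for accuracy, and the standard deviation of the shots stands in for precision.

```python
import statistics

# Hypothetical one-dimensional shot positions; the bullseye sits at 0.0.
bullseye = 0.0
random_error_shots = [-2.1, 1.8, -0.9, 1.3, -1.6, 1.5]   # scattered around the centre
systematic_error_shots = [4.9, 5.2, 5.0, 4.8, 5.1, 5.0]  # tight group, always to the right

def bias(shots):
    """Accuracy: distance between the average shot and the bullseye."""
    return abs(statistics.mean(shots) - bullseye)

def spread(shots):
    """Precision: how widely the shots scatter around their own centre."""
    return statistics.stdev(shots)

# Random error: accurate on average, but imprecise.
print(bias(random_error_shots), spread(random_error_shots))
# Systematic error: precise, but inaccurate.
print(bias(systematic_error_shots), spread(systematic_error_shots))
```

The random-error group averages out near the bullseye despite its wide spread, while the systematic-error group is tightly clustered around the wrong point.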
You do not need to worry much if you get a random error. When you repeat the experiment or observation several times, your results will cluster around the real-life value. Some results will be higher and others lower, but the average will stay close to the true measurement. That is why random errors are not a big problem when you gather data from a large sample: the diverging errors compensate for each other when you use descriptive statistics to summarize the measurements. However, if the sample is small, the results may be noticeably imprecise.
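A quick simulation shows this cancellation. The true value and noise level below are hypothetical; each reading is the true value plus zero-mean random noise.

```python
import random
import statistics

random.seed(42)  # fixed seed so the illustration is reproducible

TRUE_VALUE = 100.0  # the real-life value we are trying to measure (hypothetical)
NOISE_SD = 5.0      # spread of the random error

# Each reading is the true value plus zero-mean random noise.
readings = [TRUE_VALUE + random.gauss(0, NOISE_SD) for _ in range(1000)]

# Individual readings land above and below the true value,
# but their average stays close to it: the errors cancel out.
print(min(readings), max(readings))
print(statistics.mean(readings))
```

Single readings can be off by ten points or more, yet the mean of the whole sample sits within a fraction of a point of the true value.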
A systematic error in your results is problematic because the findings and the conclusions drawn from them can be invalid or false. All the measurements will be distorted and biased. They will diverge from the actual values, and your conclusions will skew either too positive or too negative (depending on the design, these can be Type I or Type II errors). Furthermore, it will not be possible to accurately establish the relationships between the variables of interest.
Random Error and How to Deal with It
You can never predict how a random error will affect a measurement - the reading can be either lower or higher than the actual value. Such errors are known as ‘noise’ because they blur the true values. The lower the random error, the more precise the data you manage to collect.
Where Do Random Errors Come From?
The most common sources of random errors are the following:
- the existence of variations in experimental and real-life contexts;
- the measurement tools that do not work correctly;
- the unique differences between units or respondents;
- the lack of control over the experimental procedure.
Natural variations can occur when you study people’s performance at different times of the day. For example, if you investigate teenagers’ moods after playing violent computer games, you should consider that some individuals may feel worse in the morning and others in the evening, which may have nothing to do with the games.
Poorly working instruments can distort the results of measuring blood pressure in patients with hypertension. Suppose you want to test external influences, such as the time of day or food consumption. If your tool works incorrectly, its readings will fluctuate unpredictably, and repeating the measurement will give you inconsistent values that no amount of rounding will make reliable.
Unique differences between individuals can show up when you measure the memory abilities of office workers at different times of the day. Individual biorhythms affect memory performance, so you will not get precise results if you have night owls and early birds in the same group.
How to Reduce Random Errors?
Random errors can appear in all types of research, even if you do your best to control the settings. You cannot eliminate them entirely, but you can reduce them.
Here are some practical tips on how to do that.
Repeat Measurements Several Times
You can boost precision by repeating the experiment several times in the same environment.
Make Your Sample Larger
Random errors matter less in large samples. Even when they are present, they are spread in different directions and cancel each other out. Larger samples also increase statistical power, and the results will be more precise.
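The effect of sample size can be demonstrated with a short simulation. The true value and noise level are hypothetical; the sketch compares how far the sample mean typically lands from the true value for small versus large samples.

```python
import random
import statistics

random.seed(0)  # fixed seed so the illustration is reproducible

TRUE_VALUE = 100.0  # hypothetical quantity being measured
NOISE_SD = 5.0      # spread of the random error, chosen for illustration

def mean_of_sample(n):
    """Average n noisy readings of the same quantity."""
    return statistics.mean(TRUE_VALUE + random.gauss(0, NOISE_SD) for _ in range(n))

def typical_error(n, trials=200):
    """Typical distance from the true value, averaged over many repeat studies."""
    return statistics.mean(abs(mean_of_sample(n) - TRUE_VALUE) for _ in range(trials))

print(typical_error(10))    # a small sample lands noticeably off the true value
print(typical_error(1000))  # a large sample lands much closer
```

Growing the sample from 10 to 1,000 shrinks the typical error by roughly a factor of ten, matching the familiar square-root-of-n rule for the standard error of the mean.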
Controlled Variables Work Better
Run controlled experiments, where extraneous variables are restricted. This removes many sources of random error.
How to Deal with Systematic Errors?
Unlike random errors, systematic errors are predictable and point to more serious flaws in the study. The measurements differ from the real-life values in the same direction and often by similar amounts. Such errors are also known as bias, because the data diverges from the actual values in a consistent way, which can produce inaccurate and invalid conclusions.
Systematic Error Types
We can define two types of systematic errors - offset and scale. You encounter an offset error when the scale is poorly calibrated or not set to a zero point. This error is also known as a zero-setting or additive error. For example, your blood pressure monitor is misaligned by 20 points, so 20 points are added to every subsequent measurement.
You get a scale error when all measurements differ from the actual values consistently and proportionally. Such errors are also called multiplier or correlational errors. For example, your blood pressure monitor reads about 10% high: an actual value of 120 is shown as 132, and 80 is shown as 88.
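The difference between the two types is easy to see in a few lines of Python. The offset and factor below are invented numbers for illustration: an offset error shifts every reading by the same amount, while a scale error grows with the value being measured.

```python
def offset_error(true_value, offset=20.0):
    """Additive (zero-setting) error: every reading shifts by the same amount."""
    return true_value + offset

def scale_error(true_value, factor=1.10):
    """Multiplicative (scale) error: every reading is off proportionally."""
    return true_value * factor

# A true blood pressure of 120 vs 80 under each error type:
print(offset_error(120), offset_error(80))  # same +20 shift for both values
print(scale_error(120), scale_error(80))    # the shift grows with the value
```

Under the offset error both readings are wrong by exactly 20 points; under the scale error the higher reading is wrong by 12 points and the lower one by only 8.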
Where Do Systematic Errors Come From?
The sources can differ. You can get this error from inconsistent research materials, poorly organized procedures, or incorrectly chosen analysis methods and techniques. The list goes on, because these errors can appear at every stage of research.
There are other possible causes of systematic errors. One of them is response bias. It appears when the instructions in research materials (e.g., questionnaires) make participants behave or answer unnaturally. For instance, respondents may feel pressure to conform to societal norms - this is called social desirability bias. They may feel differently but attempt to follow the established rules, so the results become less accurate.
Leading questions can also cause systematic errors. For example, you ask participants about their attitude to electricity efficiency: ‘Experts argue that reducing electricity consumption in households can save money and reduce environmental threats. How do you feel about that?’ Mentioning ‘experts’ signals to participants that they are expected to accept this opinion; it is a loaded question that makes them feel guilty if they disagree.
If the respondents strongly disagree with this statement, they will answer reluctantly or skip this question.
You may also face experimenter drift. It happens when an observer becomes unmotivated, bored, or tired after a long day of observing or coding. As a result, they may skip or change the standardized procedures to finish the work faster or to get the expected results.
For example, you are conducting a social experiment by using a qualitative method and need to code the videos you have recorded to see how cooperative the participants are.
At first, you code every behavior that matches your cooperation criteria. After a few days of coding, however, you feel exhausted and start recording only the most obvious cooperative actions. As a result, you gradually drift away from the initial criteria, and your measurements become unreliable.
You can also face sampling bias. It happens when some population members are prioritized, and others are excluded from the study. Your findings lose their generalizability because the reduced sample is no longer representative of the entire population.
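A small simulation illustrates why a biased sample loses generalizability. The population below is invented: two equally sized subgroups with different average scores. A convenience sample drawn from only one subgroup misses the true population mean, while a random sample recovers it.

```python
import random
import statistics

random.seed(3)  # fixed seed so the illustration is reproducible

# Hypothetical population of 1,000 workers: early birds and night owls
# score differently on some measure (invented numbers).
early_birds = [random.gauss(70, 5) for _ in range(500)]
night_owls = [random.gauss(60, 5) for _ in range(500)]
population = early_birds + night_owls

true_mean = statistics.mean(population)  # about 65

# Biased sample: only early birds were easy to reach.
biased_sample = random.sample(early_birds, 100)

# Random sample: every population member had an equal chance of selection.
random_sample = random.sample(population, 100)

print(true_mean)
print(statistics.mean(biased_sample))  # overshoots the population mean
print(statistics.mean(random_sample))  # stays close to it
```

The biased sample is systematically off by the gap between the subgroups, no matter how large you make it; the random sample's error is only random noise that shrinks as the sample grows.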
How to Reduce Systematic Errors
We want to offer you some methods to reduce systematic errors. Let’s consider the following.
Triangulation
This method uses several techniques for recording observations, because you cannot rely on only one tool. For example, if you measure changes in teenagers’ moods after playing violent video games, you can use survey responses, reaction times, or physiological signs. All of them can serve as indicators, and they need to overlap or converge to ensure that the results do not depend on a single tool.
Regular Calibration
Calibrate your tools regularly by comparing their readings with known reference values or standard measurements. The references themselves must be accurate to reduce the possibility of systematic errors. You can also ‘calibrate’ the other researchers or observers participating in the project: they need to use the same codes, protocols, and techniques for recording data. Check their routines regularly to catch any drift on their part.
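A minimal sketch of a calibration run, assuming a constant (offset) error: the instrument is read against a few reference standards with known true values (the numbers below are invented), the average offset is estimated, and future readings are corrected by that amount.

```python
import statistics

# Hypothetical calibration run: reference standards with known true values,
# and what our instrument reports for each of them.
reference_values = [0.0, 50.0, 100.0, 150.0]
instrument_readings = [19.8, 70.1, 120.2, 169.9]

# Estimate a constant offset from the paired differences.
offset = statistics.mean(r - t for r, t in zip(instrument_readings, reference_values))

def corrected(reading):
    """Apply the calibration correction to a raw instrument reading."""
    return reading - offset

print(round(offset, 2))  # roughly the +20 bias built into this example
print(corrected(140.0))  # a raw reading of 140 corrects to about 120
```

For a scale error you would fit a multiplier instead of an offset (for example, with a least-squares line through the reference pairs), but the idea is the same: known references reveal the bias, and the bias can then be subtracted out.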
Randomization
Randomization is a sampling method that ensures the chosen sample does not differ systematically from the entire population. Use random assignment to distribute participants across treatment conditions: this balances participants’ characteristics across groups and helps you avoid bias.
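Random assignment can be as simple as shuffling the participant list and splitting it. The participant IDs below are hypothetical; a fixed seed is used only so the example is reproducible.

```python
import random

random.seed(7)  # fixed seed so the example is reproducible

# Hypothetical participant IDs for a two-condition experiment.
participants = [f"P{i:02d}" for i in range(1, 21)]

# Shuffle, then split: every participant has an equal chance of landing
# in either condition, which balances traits across the groups.
random.shuffle(participants)
treatment = participants[:10]
control = participants[10:]

print(sorted(treatment))
print(sorted(control))
```

Because assignment depends only on chance, any individual quirk (mood, biorhythm, skill) is equally likely to end up in either group, so it cannot systematically favor one condition.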
Masking
In some cases, you should conceal the assignment of conditions from participants and researchers through blinding (masking). If experimenters know which outcomes are expected, they can - consciously or not - influence the behavior of groups and individuals in the experimental environment. Masking reduces this source of systematic bias.
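One common implementation is to replace condition names with opaque codes before the data reaches the analyst, with the key held by a third party until the analysis is done. The assignments and code names below are hypothetical.

```python
# Hypothetical assignment, known only to a third party until unblinding.
assignments = {"P01": "treatment", "P02": "control",
               "P03": "treatment", "P04": "control"}

# Replace condition names with opaque labels before handing data to the analyst.
codes = {"treatment": "group_A", "control": "group_B"}
blinded = {pid: codes[cond] for pid, cond in assignments.items()}

# The analyst sees only group_A / group_B and cannot tell which is which;
# the key is revealed after the analysis is complete.
print(blinded)
```

Because the analyst cannot tell which group received the treatment, their expectations cannot systematically tilt the coding or the analysis toward the hoped-for result.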
Final Thoughts
Now that you know what random and systematic errors are and how they can influence the results of your research, consider the tips and advice from this article to reduce their impact.
You will be on the safe side if you test the results of your experiments or observations and do your best to correct the errors. You will obtain more precise, accurate, and reliable results, the conclusions based on them will be valid, and your academic reputation will benefit.