Source: Laboratories of Gary Lewandowski, Dave Strohmetz, and Natalie Ciarocco—Monmouth University
To study something scientifically, a researcher needs a way to quantify it; however, psychological constructs can be challenging to measure. This video examines reliability in the context of content analysis.
A recent study in the journal Pediatrics reported that 4-year-olds who watched a fast-paced cartoon performed worse on cognitive tasks, such as following rules in a game, listening to directions from an adult, and delaying gratification, than children who watched a slower-paced cartoon.1 In addition to its pace, the content of a cartoon may also have deleterious effects on its young viewers.
This video uses a simple two-group design to illustrate the issue of reliability by examining whether the cartoon SpongeBob SquarePants has more inappropriate content than the cartoon Caillou.
1. Define key variables.
2. Create coding categories from the operational definition of inappropriate content.
Coding Categories | Themes and Exemplars | Count
Crude Behavior | Toilet humor; purposefully disgusting behaviors |
Rude Behavior | Disrupting others; poor manners |
Language | Using curse words |
Verbal Aggression | Insults; yelling; name-calling |
Physical Aggression | Hitting; pushing/shoving; tripping |
Drug References | Verbal (suggestive statements/conversation); nonverbal (mimicking drug use) |
Sexual References | Verbal (suggestive statements/conversation); nonverbal (mimicking sexual acts) |
Table 1. Example of how to record instances of inappropriate behaviors. This log can be systematically used across raters.
3. Instruct raters to separately watch the same episode of SpongeBob SquarePants and provide coding counts.
4. Instruct raters to separately watch the same episode of Caillou and provide coding counts.
5. Compare counts to see whether the raters produced similar ratings for each show.
Scientific research uses precise methods to collect data, yet variability in obtaining measurements often exists.
Reliability can be assessed for any experimental measurement, and today, we’ll have a look at measurements of inappropriate behaviors in cartoons.
When a viewer's counts of inappropriate material in the same show are consistent across multiple episodes, those judgments are considered highly reliable. When multiple viewers agree with one another, that consistency between observers is referred to as inter-rater reliability, and it allows assessments to be compared across different shows.
This video demonstrates how to design and perform, as well as how to analyze and interpret, an experiment examining whether one cartoon has more inappropriate content than another.
To examine reliability and inter-rater reliability, a within-subjects design is used in this experiment. Participants are asked to watch two episodes of two different cartoons—SpongeBob SquarePants and Caillou.
Within this context of cartoon watching, the dependent variable is the number of inappropriate behaviors participants observe. These include crude and rude behaviors, bad language, verbal and physical aggression, and references to drugs and sexual content.
If reliability exists in the scoring of inappropriate content of a specific cartoon, participants will consistently rate that cartoon across different episodes.
Moreover, if multiple participants agree on the number of inappropriate instances they count, inter-rater reliability exists.
Thus, establishing inter-rater reliability allows researchers to use the same participants to more powerfully compare data between multiple conditions.
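The video assesses agreement by comparing counts visually rather than computing a statistic, but as a rough illustration, agreement between two raters' category counts can be quantified with a Pearson correlation. Below is a minimal sketch in Python; the counts are hypothetical placeholders, not data from the study.

from math import sqrt

def pearson(x, y):
    # Pearson correlation between two equal-length lists of counts.
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)

# Hypothetical per-category counts for one episode, in the order of Table 1:
# crude, rude, language, verbal, physical, drugs, sexual.
rater_1 = [4, 6, 0, 7, 5, 1, 0]
rater_2 = [5, 5, 0, 8, 6, 1, 0]

print(f"Inter-rater agreement (Pearson r): {pearson(rater_1, rater_2):.2f}")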
To conduct the study, prepare four clips: two different episodes from each of two cartoons, SpongeBob SquarePants and Caillou.
To allow participants to systematically identify instances of inappropriate behavior, create a coding sheet with categories, concrete examples, and space to count each occurrence.
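If the coding sheet is kept electronically instead of on paper, it can be as simple as one counter per category. The sketch below is an illustration only, assuming the seven categories from Table 1; it is not part of the published protocol.

from collections import Counter

CATEGORIES = [
    "Crude Behavior", "Rude Behavior", "Language", "Verbal Aggression",
    "Physical Aggression", "Drug References", "Sexual References",
]

def new_coding_sheet(rater, cartoon, episode):
    # One blank sheet per rater per episode, mirroring the rows of Table 1.
    return {
        "rater": rater,
        "cartoon": cartoon,
        "episode": episode,
        "counts": Counter({category: 0 for category in CATEGORIES}),
    }

sheet = new_coding_sheet("Rater 1", "SpongeBob SquarePants", 1)
sheet["counts"]["Verbal Aggression"] += 1  # tally one observed insult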
With the participant sitting in front of the screen, hand them four coding sheets. Instruct the participant to separately watch two episodes of SpongeBob SquarePants.
As the participant watches each episode, instruct them to identify every occurrence of inappropriate behavior.
Using the same coding scheme, instruct the participant to watch and rate two episodes of Caillou.
To analyze the reliability of participants’ ratings of cartoon content, compare the coding sheets across participants and across the different episodes of each cartoon. Sum all of the responses on a master sheet.
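Once each coding sheet is reduced to a row of counts, the master sheet can be assembled programmatically. The sketch below uses pandas with hypothetical column names (rater, cartoon, episode, and one column per coding category); both the layout and the numbers are illustrative assumptions.

import pandas as pd

# Hypothetical master sheet: one row per rater per episode, one column per category
# (only two categories shown here to keep the sketch short).
master = pd.DataFrame([
    {"rater": "Rater 1", "cartoon": "SpongeBob SquarePants", "episode": 1, "crude": 4, "verbal": 7},
    {"rater": "Rater 1", "cartoon": "Caillou", "episode": 1, "crude": 0, "verbal": 1},
    {"rater": "Rater 2", "cartoon": "SpongeBob SquarePants", "episode": 1, "crude": 5, "verbal": 8},
    {"rater": "Rater 2", "cartoon": "Caillou", "episode": 1, "crude": 1, "verbal": 1},
])

category_columns = ["crude", "verbal"]  # extend with the remaining categories
master["total"] = master[category_columns].sum(axis=1)

# Total inappropriate behaviors per cartoon, episode, and rater.
summary = master.groupby(["cartoon", "episode", "rater"])["total"].sum()
print(summary)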
Graph the total number of inappropriate behaviors for each rater across episodes and cartoons.
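Continuing the illustrative sketch above, the totals can be drawn as grouped bars, one group per rater and one bar per cartoon. The choice of matplotlib and the totals shown are assumptions for demonstration, not the study’s data.

import matplotlib.pyplot as plt

# Hypothetical totals for Episode 1 of each cartoon, one value per rater.
raters = ["Rater 1", "Rater 2", "Rater 3"]
spongebob_totals = [23, 25, 31]
caillou_totals = [2, 4, 3]

x = range(len(raters))
width = 0.35
plt.bar([i - width / 2 for i in x], spongebob_totals, width, label="SpongeBob SquarePants")
plt.bar([i + width / 2 for i in x], caillou_totals, width, label="Caillou")
plt.xticks(list(x), raters)
plt.ylabel("Instances of inappropriate content")
plt.title("Episode 1 totals by rater (hypothetical data)")
plt.legend()
plt.show()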
Note that high reliability was observed in the scoring of the two cartoons, as SpongeBob was consistently scored higher than Caillou.
However, stronger inter-rater reliability was found in the scoring of inappropriate content in Caillou than in SpongeBob; the reduced inter-rater reliability was most apparent in the scoring of Episode 2 of SpongeBob.
Now that you are familiar with reliability in the context of content analysis, you can apply this approach to other areas of research.
Many psychological experiments gather information using cognitive assessments and surveys, in which responses to the individual items must be consistent within and across participants.
Reliability in neurophysiological measures, such as EEG or eye tracking, is essential to conducting repeatable experiments. This reliability allows researchers to make associations between brain function and disease states across multiple subjects.
Additionally, researchers must ensure certain measurements in an experiment are consistent over time. For example, weight must be measured reliably to compare data collected before and after an exercise routine.
You’ve just watched JoVE’s introduction to determining reliability in psychological experiments. Now you should have a good understanding of how to quantify a psychological construct such as inappropriate behavior, design an experiment, and finally how to evaluate reliability from the results.
Thanks for watching!
The results indicate that the raters had a high level of agreement, or consistency, in their ratings within each cartoon episode, which indicates high inter-rater reliability (Figure 1). There is also reliability, or consistency, in SpongeBob SquarePants episodes having more inappropriate content than Caillou. The results also revealed individual biases among raters. For example, Rater 3 reported more inappropriate content in SpongeBob than the other two raters, and Rater 1 reported less in Caillou than the other raters did.
Figure 1. Instances of inappropriate content by rater and cartoon for episodes 1 (top) and 2 (bottom).
Researchers have increasingly turned their attention toward analyzing television’s content, especially as it relates to children. As noted in the overview of this experiment, a recent study in the journal Pediatrics linked the fast pace of the SpongeBob SquarePants cartoon to relatively poor cognitive performance in the children who watched it.
Since the results of our experiment appear reliable, future research could examine whether the relative amount of inappropriate content in SpongeBob is also (or alternatively) responsible for children’s lower cognitive performance after watching.
One of the most important applications of reliability is in the use of survey instruments. Researchers must be sure that participants will consistently answer each of the items in a particular scale. That is, in a 5-item measure of life satisfaction, participants should answer items 1 and 2 in a somewhat similar fashion to how they answer questions 3, 4, and 5. In addition, researchers want to make sure that their measurements in an experiment are consistent over time. So if a researcher is using pupil dilation to indicate interest in a stimulus, the researcher must be sure that pupil dilation is a consistent indicator of interest.
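For the item-consistency case, one standard statistic is Cronbach’s alpha, which compares the variance of the individual items to the variance of the summed scale. Below is a minimal sketch for a hypothetical 5-item life-satisfaction measure; the responses are illustrative placeholders, and the statistic is a common convention rather than something prescribed by this experiment.

import numpy as np

# Hypothetical responses: rows are participants, columns are the 5 scale items.
responses = np.array([
    [4, 5, 4, 4, 5],
    [2, 2, 3, 2, 2],
    [5, 4, 5, 5, 4],
    [3, 3, 2, 3, 3],
])

def cronbach_alpha(items):
    # Internal consistency: k/(k-1) * (1 - sum of item variances / variance of total score).
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

print(f"Cronbach's alpha: {cronbach_alpha(responses):.2f}")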