Blogs | 8.8.2022
Deepfake Science: How Vulnerable Are Educators and Learners?
Deepfakes are photos, videos, or audio in which a person’s face, body, or voice has been digitally altered through artificial intelligence (AI). While altering digital content convincingly once required significant expertise, AI and the internet now make it easy to create and disseminate highly realistic manipulated content.
It’s extreme Photoshopping.
How Can Deepfakes Impact Science?
Deepfakes are used to share false information by highlighting uncertainty, undermining the credibility and consensus of leading figures and institutions, and spreading pseudoscientific alternatives.
As a society, we’re at a tipping point regarding misinformation on the internet. While the consequences of exposure to misinformation range in severity and scale, false information about science, technology, engineering, and math (STEM) topics has major repercussions, such as impeding the policy reforms needed to combat global challenges like climate change.
Within STEM education, both learners and educators increasingly access and rely on digital content. With misinformation spreading rapidly on social media, their risk of exposure to deliberately misleading educational and scientific content has grown.
Who’s Most Vulnerable to STEM Deepfakes?
No one is immune to being fooled by deepfake videos. But in the education field, there hasn’t been much research to understand how vulnerable different types of learners are to STEM deepfakes, the characteristics of both an individual and videos that contribute to these vulnerabilities, and their impact on the education system.
A research partnership between Challenger Center, RAND Corporation, and Carnegie Mellon University recently studied five learner populations—adults, K-12 teachers, K-12 principals, middle school students, and college students—and their ability to distinguish between real and deepfake videos about climate change.
In the study, Deepfakes and Scientific Knowledge Dissemination, climate change was chosen as the topic because its polarizing nature makes it a particularly compelling aspect of STEM to explore and leaves individuals especially vulnerable to misinformation.
Across the learner populations, roughly 33-50% of respondents were unable to distinguish authentic videos from deepfakes, and their level of vulnerability varied with:
- Technical aspects of a deepfake (i.e., video quality, facial features, etc.). A greater focus on these helped individuals more accurately detect deepfakes.
- The survey respondent’s background (age, political orientation, educational background, tech-savviness, etc.) and trust in information sources. Older respondents and individuals who tended to trust their information sources were more vulnerable to deepfakes.
About 40-50% of middle school students were unable to correctly distinguish real from deepfake videos, while college students identified them with 66-80% accuracy.
Adults, including teachers and principals, exhibited higher vulnerability than students, indicating that those providing education, and thus the broader education system, are susceptible to deepfake-based science misinformation. This vulnerability can translate into inadvertently teaching misinformation to students, who are themselves vulnerable to deepfakes.
It was also found that a person’s vulnerability to deepfakes increases with more exposure, suggesting that deepfakes become more harmful when there isn’t any intervention.
Combatting STEM Deepfakes
The most promising answer for deepfake detection is to develop a combined human-digital solution.
The successful design, development, and deployment of human-digital solutions for deepfake detection require a comprehensive understanding of, and education around:
- an individual’s ability to successfully detect deepfakes,
- social aspects that impact an individual’s vulnerability to deepfakes, and
- technical aspects that influence successful deepfake detection.
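To make the idea of a combined human-digital solution concrete, here is a minimal sketch of how an algorithmic fake-probability score might be blended with the judgments of human reviewers before a video is flagged. The function names, the weighting scheme, and the thresholds are illustrative assumptions for this post, not part of the study.

```python
# Illustrative sketch: blend a model's fake-probability with human reviewer
# votes. All names, weights, and thresholds here are hypothetical examples.

def combined_deepfake_score(model_score: float, human_votes: list[bool],
                            model_weight: float = 0.6) -> float:
    """Blend an algorithmic fake-probability (0-1) with the fraction of
    human reviewers who flagged the video as fake."""
    if not 0.0 <= model_score <= 1.0:
        raise ValueError("model_score must be a probability in [0, 1]")
    human_score = sum(human_votes) / len(human_votes) if human_votes else 0.0
    return model_weight * model_score + (1.0 - model_weight) * human_score

def flag_for_review(model_score: float, human_votes: list[bool],
                    threshold: float = 0.5) -> bool:
    """Flag a video for closer review when the blended score crosses
    the (assumed) review threshold."""
    return combined_deepfake_score(model_score, human_votes) >= threshold
```

For example, a video the model scores at 0.7 that two of three reviewers also flag would be sent for review, while a low model score with no human flags would pass. The weighting between machine and human signals is exactly the kind of design choice the study's findings on human vulnerability would inform.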
While prior studies focused on the more technical aspects of videos and the relationship between human and algorithmic detection, the results of this study show that education focused on the social context of deepfakes is a promising strategy for combatting them.
By empowering educators with the skills to better detect deepfakes, we decrease the likelihood of students being exposed to false information within the classroom.
About the Study
Deepfakes and Scientific Knowledge Dissemination is a working paper under review at Scientific Reports. It represents a first step in understanding vulnerabilities to deepfakes to create and implement robust mitigation strategies.
Challenger Center, Carnegie Mellon University, and RAND Corporation provided access to the student populations for the study. The RAND American Life Panel and American Educator Panels supported the development and fielding of the surveys. The work was funded by the National Science Foundation.