Project Overview
Deepfakes (DFs) are fake visual, audio, or audio-visual productions generated with deep learning methods. Deepfake technology (DT) is still at an early stage of development and use. However, rapid advances in synthesis technologies, together with their growing accessibility, already allow users to make videos and clips of individuals doing and saying things they never did or said. Nowadays, users can synthesize an individual's voice from a transcript, produce an entirely new video of a person speaking with lip movements synced to their face, or swap one person's face onto another person's body.
Although DF developments may have many benefits, the emphasis is now placed on how they can be used for unethical and malicious purposes and how they might affect the integrity of many social domains. DFs are mainly regarded as a powerful form of disinformation, and they may increase the difficulty of differentiating between what is real and what is fake. Debates about DFs mainly focus on their potential for future disruption rather than on examples of their actual effects. Such narratives about the impact of DFs rest largely on discourse rather than on empirical evidence. The truth is that DFs are a nascent area of research, and their implications are only beginning to emerge. There is little empirical knowledge about DFs; in particular, the psychological processes and consequences associated with DFs remain largely unstudied. Thus, researching DFs from a human communication perspective is both opportune and necessary.