Towards a Social History of Deepfakes

Increasingly, “deepfakes” make the news. Whether it is Anthony Bourdain’s artificially recreated voice reading words he wrote but likely never spoke or Game of Thrones’ Jon Snow apologizing for the series’ ending, new technologies have made it easier to make contemporary, historical, and even fictional people appear to say and do things they never did. So how should we approach information that may not be true?

I am (for better or worse) a devout admirer of the now somewhat outdated (but still influential) Edinburgh school, or strong programme, of the sociology of scientific knowledge (SSK), especially in its more historical forms. One of its practitioners, Steven Shapin, argued for the historical specificity not just of science but of what counts as truth. For Shapin, acceptance of everyday as well as scientific truths depends on socially constructed behavior that conveys who is to be trusted within a society.[1] Though Shapin’s evidence comes from early modern England, it suggests that truth in any society, including our own, is as much about the representation of knowledge, its creators, and its sharers as about the world it represents. Fundamentally, it means that decisions about who and what to trust are societal ones. No one person has the time or resources to “truth out” every fact they encounter in a day, much less in a lifetime.

Ideas such as Shapin’s argument that truth is a product of society were seen as threats to all (scientific) knowledge during the “science wars” of the 1990s, but reconstructing a history of truth also shows how we can combat misinformation today. After all, if people and societies have always decided which sources of information they trust, then we are simply encountering a new version of an old problem. Fundamental to deciding who and what to trust is choosing which experts to trust, and doing so involves deciding which credentials to trust, whether academic or social. For me, that means favoring peer-reviewed sources in my academic field, but it also means thinking carefully about what information news sources and websites are spreading, why they might be doing so, and how their accounts compare. In the realm of deepfakes, it means trusting experts in video editing and production to determine what isn’t real. And on social media, it means verifying posts against other sources of information.

Ultimately, some fakes may be harder to detect because of new technology and a more open publishing environment on the internet, but the need for societal mechanisms that identify true information and reject false information is just a new iteration of an old problem.


[1] Steven Shapin, A Social History of Truth: Civility and Science in Seventeenth-Century England (Chicago: University of Chicago Press, 1995).
