In this study, we examine the growing societal concern surrounding DeepFakes, a technology that remains relatively unfamiliar to the general public. Our research scrutinizes the origins, potential, and risks associated with DeepFakes. Analyzing 203 news articles from 16 media outlets across Bangladesh, India, and Pakistan, we categorized the coverage into threats, prevention measures, and entertainment-centric content. The findings reveal a predominant focus on the threats posed by the technology, most markedly in Pakistani newspapers; Indian and Bangladeshi outlets also cover the issue, albeit to a lesser extent. The dissemination of misleading information through media channels may initially bolster their credibility but ultimately tarnishes their reputation. The study further underscores the pivotal role media professionals play in perpetuating disinformation.

Historically, the impulse behind DeepFakes can be traced to the 1865 lithographic portrayal of Abraham Lincoln, reflecting humanity’s enduring fascination with altering facial features. With the advent of DeepFake technology, powered by deep learning algorithms, the manipulation of photos and videos has reached unprecedented levels. These algorithms, typically built on autoencoders or generative adversarial networks (GANs), seamlessly substitute faces in a source video with those from a target video. First spotlighted by Reddit’s “DeepFakes” account through fabricated pornographic videos, the term has since expanded to encompass a broader array of AI-generated videos mimicking real individuals. DeepFake videos fall into three primary categories: head puppetry, face swapping, and lip syncing, each posing unique challenges and ethical concerns. Importantly, the accessibility of consumer-grade hardware and software has democratized the creation of DeepFakes, fueling their proliferation across domains ranging from entertainment to targeted attacks.
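The shared-latent-space idea behind autoencoder face swapping can be sketched minimally. The snippet below is illustrative only: it uses untrained random weights, toy dimensions, and hypothetical names, and it shows only the architectural trick commonly described for such systems (one shared encoder, one decoder per identity), not a working DeepFake pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "autoencoder" layers. Real DeepFake systems use deep
# convolutional networks trained on thousands of face crops; this
# sketch only demonstrates how the swap is wired.
FACE_DIM, LATENT_DIM = 64, 8

W_enc = rng.normal(size=(LATENT_DIM, FACE_DIM)) * 0.1    # shared encoder
W_dec_a = rng.normal(size=(FACE_DIM, LATENT_DIM)) * 0.1  # decoder for person A
W_dec_b = rng.normal(size=(FACE_DIM, LATENT_DIM)) * 0.1  # decoder for person B

def encode(face):
    # Compress a face vector into a shared latent code.
    return W_enc @ face

def decode(latent, w_dec):
    # Reconstruct a face in one identity's style from the latent code.
    return w_dec @ latent

# Training (not shown) would fit encode->decode_a on A's faces and
# encode->decode_b on B's faces, so the shared latent space captures
# pose and expression while each decoder captures identity.

source_face_a = rng.normal(size=FACE_DIM)  # stand-in for a frame of person A

# The swap: encode A's frame, but decode it with B's decoder, yielding
# B's identity with A's pose and expression.
swapped = decode(encode(source_face_a), W_dec_b)
print(swapped.shape)  # (64,)
```

The design choice worth noting is the asymmetry: because both identities pass through the same encoder, the latent code is forced to be identity-agnostic, which is what makes decoding with the other identity's decoder produce a plausible swap.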
While DeepFake videos offer innovative creative possibilities, their potential for malicious exploitation, as highlighted by Chesney and Citron (2019), underscores the urgent need for comprehensive safeguards against the manipulation of reality on political, social, economic, and legal fronts.