South Korea Braces for Deepfake Threats Ahead of June Presidential Election


Deepfakes and AI-generated content are already circulating widely on social media platforms like TikTok and Instagram, portraying leading candidates in fabricated photos and videos. (Image created by AI/ChatGPT)

SEOUL, April 13 (Korea Bizwire) — With South Korea’s presidential election just two months away, concerns are mounting over the misuse of generative artificial intelligence—particularly deepfakes—as tools for political manipulation and disinformation. Although such fears remained largely hypothetical during the 2024 parliamentary elections, experts now warn that rapid advances in AI technology over the past year have significantly raised the risk.

Deepfakes and AI-generated content are already circulating widely on social media platforms like TikTok and Instagram, portraying leading candidates in fabricated photos and videos. While some creations are lighthearted—such as candidates performing slam dunks or riding tigers—many are malicious. These include manipulated voice recordings of one candidate allegedly insulting another, or doctored images showing candidates in prison uniforms.



On April 11, the campaign team of opposition candidate Lee Jae-myung reported an attempt to spread a deepfake video purporting to show him verbally abusing his wife. The campaign vowed to pursue legal action against anyone involved in distributing the video.

Experts warn that once such content goes viral during an election period, the damage may be irreversible. “Even older generations can now create and spread manipulated media using tools like ChatGPT,” said Lee Sung-yeop, a professor at Korea University’s Graduate School of Technology Management. “This makes the spread of disinformation much faster and harder to control.”

South Korea’s new AI Framework Act—which mandates watermarking of AI-generated content—will not take effect until January 2026, leaving the upcoming election in a regulatory vacuum. In response, law enforcement agencies are ramping up efforts to prevent AI-related election crimes. Police have set up election task forces at 278 precincts nationwide and plan to closely monitor deepfake content.

Under a revised Public Official Election Act, in effect since 2023, the creation or dissemination of deepfake videos for campaign purposes is prohibited starting 90 days before election day. Though no prosecutions occurred during last year’s election, authorities believe the situation may change given the explosive growth of generative AI use in 2025. “Spreading false information or defamatory content using AI or deepfakes can lead to criminal charges,” a police spokesperson said, signaling a tougher stance ahead of the June 3 vote.

M. H. Lee (mhlee@koreabizwire.com)