SMU partners with South Korea to develop breakthrough AI deepfake detection system

Image: Associate Professor He Shengfeng from Singapore Management University (SMU) is leading a pioneering effort to develop a multilingual deepfake detection system tailored for Asia, with future applications in the commercial sector. (Credit: Singapore Management University)

SMU Office of Research Governance & Administration – Singapore Management University has secured a major research grant to create an innovative deepfake detection platform, led by Associate Professor of Computer Science He Shengfeng. Competing against prominent institutions, SMU’s proposal stood out for addressing the complexities of multilingual and culturally specific content.

The Artificial Intelligence (AI) system, expected to be completed over the next three years, will be the first to incorporate a multilingual dataset covering regional language varieties such as Singlish and Korean dialects. The project’s real-world potential includes broad commercial uses and adaptability for diverse Asian language environments.

“Most current detection tools are not effective when dealing with Asian accents or regional content,” Professor He shared via an email interview with SMU’s Office of Research Governance & Administration. “Our goal is to build a system that truly addresses the linguistic landscape of this region.”

AI Singapore (AISG) and South Korea’s Institute for Information & Communication Technology Planning & Evaluation (IITP) launched a joint call in March 2025 for a tool that reflects the linguistic and cultural nuances of both countries. AISG is part of a national initiative in Singapore aimed at advancing AI research, while IITP leads South Korea’s tech planning. The grant funds a collaborative research partnership between Singapore and South Korea for region-specific fake video detection technology.

SMU’s counterpart is Sungkyunkwan University (SKKU) in South Korea, led by Associate Professor Doowon Jeong. Their cooperation follows an attack-defense research model: SMU will create and identify manipulated video content, while SKKU will work on authenticating content and tracing its origin.

“This structure allows continuous improvement through mutual challenge,” explained Professor He. “One side tries to create and detect deepfakes, while the other sharpens tools for verification—like a balanced cybersecurity test between offense and defense.”
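
The article does not disclose DeepShield's training internals, but the offense-defense dynamic Professor He describes maps naturally onto an adversarial training loop. The sketch below, assuming Python with PyTorch, is purely illustrative: the attack function, model, and data are hypothetical placeholders, not the project's actual pipeline.

```python
# Minimal sketch of an attack-defense loop, assuming PyTorch.
# The "attack" role fabricates manipulated clips; the "defense" role
# learns to separate real from manipulated. Everything here (model,
# data, attack) is a hypothetical placeholder, not DeepShield code.
import torch
import torch.nn as nn

detector = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 128),
                         nn.ReLU(), nn.Linear(128, 1))
optimizer = torch.optim.Adam(detector.parameters(), lr=1e-4)
loss_fn = nn.BCEWithLogitsLoss()

def attack(frames: torch.Tensor) -> torch.Tensor:
    """Stand-in for a deepfake generator: here, a bounded perturbation."""
    return (frames + 0.1 * torch.randn_like(frames)).clamp(0.0, 1.0)

for step in range(100):
    real = torch.rand(8, 3, 64, 64)           # placeholder video frames
    fake = attack(real)                        # offense: manufacture fakes
    inputs = torch.cat([real, fake])
    labels = torch.cat([torch.ones(8, 1), torch.zeros(8, 1)])
    optimizer.zero_grad()
    loss = loss_fn(detector(inputs), labels)   # defense: learn to separate
    loss.backward()
    optimizer.step()
```

In the project's setup these roles sit at different institutions, so each round of new manipulations from one side becomes fresh training pressure on the other.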

Introducing DeepShield: Innovation in detection

The collaborative team has named its system DeepShield, targeting key innovations in the fight against manipulated digital media. The technology aims to counter increasingly sophisticated fake content used for false information, scams, and identity abuse.

According to the proposal, DeepShield will be the first comprehensive, explainable framework capable of identifying a wide range of media alterations—from facial swaps to background changes, lighting modifications, object insertions, and voice dubbing—all within a single system.

Another major component will be an embedded, reversible signature within manipulated videos, allowing not only detection but complete reconstruction of the original material. “This is a significant step towards certifying the authenticity of digital media and making AI-generated content traceable without the need for extra storage,” said Professor He.
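
The article describes the property rather than the mechanism, so the following is only a toy illustration, assuming NumPy: a small key-derived pattern added modulo 256 is exactly invertible, giving lossless reconstruction with no side storage, while a simple correlation reveals the mark's presence. DeepShield's actual scheme is not specified.

```python
# Toy illustration of a reversible keyed signature, assuming NumPy.
# A small key-derived pattern is added modulo 256: subtracting the same
# pattern reconstructs the original exactly, with no side storage, and
# correlating a frame with the zero-mean pattern reveals the mark.
# This demonstrates the *property* described above, not DeepShield's scheme.
import numpy as np

def pattern(key: int, shape) -> np.ndarray:
    rng = np.random.default_rng(key)
    return rng.integers(-4, 5, size=shape)     # small, zero-mean amplitude

def embed(frame: np.ndarray, key: int) -> np.ndarray:
    return ((frame.astype(int) + pattern(key, frame.shape)) % 256).astype(np.uint8)

def recover(marked: np.ndarray, key: int) -> np.ndarray:
    return ((marked.astype(int) - pattern(key, marked.shape)) % 256).astype(np.uint8)

def detect(frame: np.ndarray, key: int) -> float:
    f = frame.astype(float)
    f -= f.mean()                              # remove DC before correlating
    return float((f * pattern(key, frame.shape)).mean())  # ~6.7 marked, ~0 not

# Mid-range pixels, typical of natural images, avoid modular wraparound;
# pixels at 0/255 would weaken detection (reconstruction stays exact).
rng = np.random.default_rng(0)
original = rng.integers(16, 240, (128, 128, 3), dtype=np.uint8)
marked = embed(original, key=42)
assert np.array_equal(recover(marked, key=42), original)  # lossless round trip
print(detect(marked, key=42), detect(original, key=42))
```

Production schemes use far more robust, imperceptible embeddings; the point here is only that an invertible transform lets a video be marked, verified, and restored bit-for-bit.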

A third innovation involves tailoring the system for local usage, with built-in support for detecting dialect-specific tampering. Culturally tuned recognition ensures that accuracy is maintained for content from Asia, rather than being skewed toward Western-language interpretation, as highlighted in the project’s video proposal.

Unlike typical commercial tools, DeepShield aims to be more than just a detection mechanism. Its goal is to serve as a new layer of governance to ensure the integrity of digital content in a global, multicultural setting.

Starting in January 2026, the team will begin gathering extensive video data. Their primary source will be large public datasets like YouTube-8M, which contain diverse, non-personal video content frequently used in academic studies. Around 200,000 clips will be curated, with both AI and human experts checking each one to ensure it meets criteria for relevance, clarity, and ethical use.
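
The article names the criteria but not the tooling; the sketch below, in plain Python, shows what an automated first pass over candidate clips might look like. The concrete checks, thresholds, and language codes are invented stand-ins, with human reviewers as the final gate described above.

```python
# Hypothetical pre-screening pass for candidate clips, in plain Python.
# The criteria names mirror the article (relevance, clarity, ethical use);
# the concrete checks and thresholds are illustrative stand-ins, and
# shortlisted clips would still go to human reviewers.
from dataclasses import dataclass

@dataclass
class Clip:
    clip_id: str
    duration_s: float
    resolution: tuple[int, int]
    language: str
    contains_identifiable_minors: bool

# Assumed scope for illustration: English, Singlish, Korean.
TARGET_LANGUAGES = {"en", "en-sg", "ko"}

def passes_prescreen(clip: Clip) -> bool:
    relevant = clip.language in TARGET_LANGUAGES
    clear = clip.duration_s >= 3 and min(clip.resolution) >= 360
    ethical = not clip.contains_identifiable_minors
    return relevant and clear and ethical

candidates = [
    Clip("yt8m_001", 12.0, (1280, 720), "en-sg", False),
    Clip("yt8m_002", 1.5, (320, 240), "fr", False),
]
shortlist = [c for c in candidates if passes_prescreen(c)]  # then human review
```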

To build reliable training data for their system, the team will generate fake versions of videos on their own. “This approach keeps us in control of the alterations and ensures accurate labeling, which is essential for a trustworthy system,” Professor He explained.
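
Generating the fakes in-house means every training example carries ground truth by construction. A minimal sketch of that bookkeeping, again in Python: the manipulation names echo those listed earlier, and apply_manipulation is a hypothetical placeholder for an editing pipeline the article does not detail.

```python
# Sketch of label-by-construction, in plain Python: because the team
# applies the manipulations itself, each derived clip records exactly
# what was changed. Manipulation names echo the article; the
# apply_manipulation function is a hypothetical placeholder.
import random

MANIPULATIONS = ["face_swap", "background_change", "lighting_edit",
                 "object_insertion", "voice_dub"]

def apply_manipulation(clip_path: str, kind: str) -> str:
    """Placeholder for the actual editing pipeline; returns output path."""
    return f"{clip_path}.{kind}.mp4"

def make_labeled_pair(clip_path: str) -> list[dict]:
    kind = random.choice(MANIPULATIONS)
    return [
        {"path": clip_path, "label": "real", "edits": []},
        {"path": apply_manipulation(clip_path, kind),
         "label": "fake", "edits": [kind]},   # exact ground truth
    ]
```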

Partnering with industry leaders

Support from major industry players is expected to be critical during development and testing. One key partner is Ensign InfoSecurity, headquartered in Singapore and the largest cybersecurity provider in Asia. It will create a testing environment that replicates video traffic scenarios from the public and telecommunications sectors.

In South Korea, SKKU’s partnership with DeepBrain AI, a tech company known for lifelike AI avatars, will focus on assessing the system on a scalable, cloud-based platform, particularly for enterprise media content.

The involvement of these partners, according to the research team, ensures DeepShield will be tested in settings that reflect high user volume and practical application needs—key for tasks like verifying news clips and ensuring content authenticity on short-form video platforms.

If the project performs as planned, the team envisions forming a start-up to provide services like deepfake analysis, media authenticity checks, compliance support for enterprises, and governance tools for digital platforms. They also anticipate opportunities to license the technology to regional governments, financial institutions, digital platforms, and AI regulatory bodies throughout Asia.

Professor He acknowledged the scale of the challenge, calling the project “the most complex” he has ever led. He was named among the world’s top two percent of most-cited researchers in 2023 and 2024 by Stanford University and Elsevier, based on citation analyses that exclude self-citations.

“This goes far beyond building diagnostics in a lab,” he added. “It brings together universities, industry, and governments across different nations. We’re managing everything from data validation and design to real-world applications and policy implications. It’s challenging—but profoundly important.”
