Photographer: Johannes Eisele/AFP via Getty Images

PCO Warns of Audio Deepfake of President Marcos Jr. Ordering Military Action Against China


April 26, 2024
Updated on July 4, 2024

The Presidential Communications Office (PCO) warned the public about circulating altered audio of President Marcos Jr. allegedly ordering the military to act against China should military tensions escalate. On April 23, 2024, the PCO posted an advisory on its Facebook page, clarifying that “no such directives” were given by the President and that the video contained “manipulated audio” generated using Artificial Intelligence (AI) deepfake technology.

PCO Statement

The original video was posted on YouTube with a caption that read “Atakehin ang China! Inutos na atakehin, PBBM may go-signal na (Attack China! Order to attack has PBBM’s go-signal).” According to reports, the video, alongside the deepfake audio, contained a series of publicly released photographs from the Philippine Coast Guard (PCG) depicting Chinese activities in the West Philippine Sea. The video and the account that uploaded it were later taken down at the request of the Department of Information and Communications Technology (DICT). However, copies of the said video continue to circulate online.

“We immediately asked YouTube to take it down and they did” 

Jeffrey Ian Dy, DICT
Screenshot of a copy of the video with deepfake audio

NBI Ordered to Investigate  

In response, Department of Justice (DOJ) Secretary Jesus Crispin “Boying” Remulla has directed the National Bureau of Investigation (NBI) to investigate the deepfake incident and “file the necessary legal actions” against those behind the creation of the video.  

“Hold accountable the personalities behind this deceiving act, make the investigation swift and comprehensive to ascertain the truth,” 

Secretary Jesus Crispin Remulla, DOJ

 
Deepfake Briefer: Deepfakes on the Rise 

What are Deepfakes? 

Deepfakes are media content, such as audio clips, videos, or images, generated by Artificial Intelligence (AI) through a kind of machine learning process called “deep” learning. Information such as photos or videos is fed into an AI model, which then uses it as the basis for producing similar, realistic outputs.
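As a rough illustration of the learn-to-reproduce principle described above, the toy sketch below (in Python with NumPy; the data and model are invented for illustration, not an actual deepfake system) trains a tiny autoencoder to compress and reconstruct synthetic waveforms. Real audio deepfake models apply this same idea at vastly larger scale to learn and then imitate a person’s voice.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for audio: 200 noisy sine-wave frames of 64 samples each.
t = np.linspace(0, 2 * np.pi, 64)
X = np.stack([np.sin(t + p) + 0.05 * rng.standard_normal(64)
              for p in rng.uniform(0, 2 * np.pi, 200)])

# Minimal linear autoencoder: compress each 64-sample frame to 4 latent
# values (encode), then expand back to 64 samples (decode).
W_enc = 0.1 * rng.standard_normal((64, 4))
W_dec = 0.1 * rng.standard_normal((4, 64))

def reconstruction_error(X, W_enc, W_dec):
    """Mean squared difference between the input and its reconstruction."""
    return np.mean((X @ W_enc @ W_dec - X) ** 2)

initial = reconstruction_error(X, W_enc, W_dec)

lr = 0.1
for _ in range(2000):
    Z = X @ W_enc             # encode: frames -> compact representation
    R = Z @ W_dec             # decode: attempt to reproduce the input
    G = 2 * (R - X) / X.size  # gradient of the mean squared error
    W_dec -= lr * Z.T @ G
    W_enc -= lr * X.T @ (G @ W_dec.T)

final = reconstruction_error(X, W_enc, W_dec)
print(f"reconstruction error: {initial:.3f} -> {final:.3f}")
```

After training, the model produces outputs that closely resemble the examples it was fed, which is precisely the capability threat actors exploit when the training data is a real person’s voice.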

Use of Deepfakes by Threat Actors 

While deepfake technology was originally developed for non-malicious purposes, threat actors have begun exploiting it to manipulate voices and videos, producing convincing outputs that can be used for the following: 

  • Scam and Fraud 
  • Identity Theft 
  • Pornography and “Sextortion” 
  • Political Manipulation and Conspiracies 
  • Celebrity and Political Hoaxes 

Allan Cabanlog, Regional Director for Southeast Asia of the Global Forum on Cyber Expertise, warned that with the increasing use of deepfake technology, measures will need to be taken to prevent the various forms of fraudulent activity that exploit it.

CNN Anchor Ruth Cabal Investment Deepfake 

On December 24, 2023, then CNN Philippines anchor Ruth Cabal became the victim of a deepfake video that circulated online. According to her Facebook post, the video, which was taken from a previous newscast, contained altered audio promoting an investment platform allegedly announced by a certain ‘Robin Padilla.’

Deepfake of Ukrainian President Surrendering to Russia 

On February 18, 2022, a deepfake of Ukrainian President Volodymyr Zelensky ordering his troops to stand down against Russia circulated online. According to reports, the video was posted on a hacked Ukrainian website. 

HK Firm Loses USD 25 Million to Threat Actor Posing as CFO 

A multinational firm in Hong Kong lost USD 25 million to threat actors using deepfake technology. According to reports, a finance worker at the company was deceived during a video conference call by threat actors posing as the firm’s chief financial officer and other colleagues, all of whom “looked and sounded” just like the real people.

Means to Create Audio Deepfakes, Readily Accessible to the Public  

PSA notes that the means to create audio deepfakes are readily available to the public through various deepfake voice software and tools found online. Some of these tools are free with limited features, while paid versions offer more functionality, such as taking a voice recording sample of a person and cloning that voice to say anything.

With audio deepfakes already within public reach, video deepfakes, which require more time and resources to produce, will likely follow, as recent developments in AI technology have paved the way for more “realistic and imaginative” outputs. These too will soon be accessible to the public.

While the use of AI is not illegal per se, utilizing the technology with intent to cause harm or conduct fraudulent activities is punishable under existing cybercrime law, the Cybercrime Prevention Act of 2012 (Republic Act No. 10175).

