Photographer: Johannes Eisele/AFP via Getty Images

SORA: OpenAI’s New Text-to-Video Model; Dangers and Implications


February 23, 2024
Updated on February 23, 2024

Artificial Intelligence (AI) research company OpenAI has announced SORA, a new text-to-video AI model. The generative model creates “realistic and imaginative” videos from text prompts alone, a major advance in AI capability. Beyond text-to-video generation, SORA can also edit existing videos and generate videos from still images. While the model is not yet publicly available, OpenAI has released high-quality sample outputs that have sparked both excitement over the advance and concern over its ethical implications.

“In addition to being able to generate a video solely from text instructions, the model is capable of taking an existing still image and generating a video from it, animating the contents of the image with accuracy and attention to small detail. The model can also take an existing video and extend it or fill in missing frames.”

OpenAI

According to OpenAI Chief Executive Officer (CEO) Sam Altman, access is currently limited to a few “Red Team” groups assessing security risks, as well as to selected groups in the creative field providing feedback on how to improve the model for creative professionals. At the time of writing, SORA remains in development, with no release date announced.

Snapshot of a video-generated output from SORA

How Does SORA Work?

SORA employs a diffusion model: video data is broken down into “patches,” small segments that serve as the “building blocks” from which SORA composes entirely new video. Additionally, SORA combines deep learning, natural language processing, and computer vision to match the generated output to the user’s prompt.

Diffusion Model
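To make the idea above more concrete, here is a minimal toy sketch in Python of how a diffusion model works on “patches.” This is purely illustrative and is not OpenAI’s actual SORA implementation: all function names, the patch size, and the noise schedule are invented for the example. Data is split into fixed-size patches, noise is added in a forward process, and generation reverses that process by repeatedly stepping a noisy patch toward the model’s prediction of clean content (here, a fixed target stands in for a trained model).

```python
import random

def to_patches(frames, patch_size):
    """Split a flat list of pixel values into fixed-size patches
    (the 'building blocks' described in the article)."""
    return [frames[i:i + patch_size]
            for i in range(0, len(frames), patch_size)]

def add_noise(patch, noise_level, rng):
    """Forward process: blend each value toward random noise."""
    return [(1 - noise_level) * v + noise_level * rng.random()
            for v in patch]

def denoise_step(noisy, predicted_clean, step_fraction):
    """Reverse process: move the noisy patch a small step toward the
    model's prediction of the clean patch."""
    return [n + step_fraction * (p - n)
            for n, p in zip(noisy, predicted_clean)]

rng = random.Random(0)
frames = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]
patches = to_patches(frames, patch_size=4)   # two patches of four values

# Heavily noise the first patch, then iteratively denoise it back
# toward a target patch (standing in for a trained model's prediction).
noisy = add_noise(patches[0], noise_level=0.9, rng=rng)
target = patches[0]
for _ in range(20):
    noisy = denoise_step(noisy, target, step_fraction=0.3)

recovered = all(abs(n - t) < 0.01 for n, t in zip(noisy, target))
print(len(patches), recovered)  # → 2 True
```

In a real system the “prediction of the clean patch” comes from a large trained neural network conditioned on the text prompt, which is what lets the reverse process produce entirely new video rather than recover an existing one.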

But where does OpenAI get the data to train its AI models? While OpenAI has not officially disclosed its training sources, its current partnership with Shutterstock may offer a clue. Shutterstock, a media company offering stock photography and footage, extended its partnership with OpenAI in 2023 through a six-year agreement, giving OpenAI access to “…training data including Shutterstock’s image, video, and music libraries and associated metadata.”

“We’re pleased to be able to license Shutterstock’s high-quality content library. This extended collaboration not only enhances the capabilities of our image models but also empowers brands, digital media, and marketing companies to unlock transformative possibilities in content creation and ideation.”

Brad Lightcap, Shutterstock Chief Operating Officer

Implications and Concerns Surrounding SORA

While SORA has promising applications in various fields, the main concern is how it may be used to amplify disinformation, manipulate people through its realistic outputs, and enable various forms of online fraud. Threat actors with access to SORA may find it easy to produce deepfake videos or misleading content to spread fake news or disinformation. Election campaigns could be influenced through AI-generated campaign materials. With its current capability, SORA could also be used to produce “fake evidence” that disrupts a criminal investigation.

“Sora is absolutely capable of creating videos that could trick everyday folks. Video does not need to be perfect to be believable as many people still don’t realise that video can be manipulated as easily as pictures.”

Rachel Tobac, SocialProof Security co-founder

While the use of AI is not illegal per se, using the technology with intent to cause harm or commit fraud may be punishable under existing Philippine cybercrime law. Allan Cabanlog, Regional Director for Southeast Asia at the Global Forum on Cyber Expertise, notes that although the law already exists, authorities struggle to identify these threat actors because of their anonymity and a lack of manpower and resources.

Hotline and Resources for Reporting Online Scams or Fraudulent Activities:

Scam Watch Pilipinas: 24/7 Inter-Agency Response Center Hotline

  • Main Hotline: 1326
  • SMART: 0947-714-7105
  • GLOBE: 0966-976-5971
  • DITO: 0991-481-4225

Cybercrime Investigation and Coordinating Center (CICC)

Philippine National Police (PNP): Anti-Cybercrime Group (ACG)

National Bureau of Investigation (NBI): Cyber Crime Division (CCD) 

Department of Justice: Office of Cybercrime