February 18, 2024 – OpenAI recently unveiled Sora, a model that can generate video content from users' text descriptions, an announcement that has sparked widespread online discussion. However, Sora's capabilities extend beyond this initial impression.
On Saturday, local time, OpenAI research scientist Bill Peebles shared an image on the social media platform X, stating, “This is a sample of a video generated by Sora in one shot, not a compilation of five separate videos. Sora decided to have five different perspectives at the same time!”
The image shows multiple angles of people walking and playing in the snow, all generated seamlessly by Sora in a single pass. This indicates that the model can produce multi-camera videos in one go, a capability that could disrupt the short-video and filmmaking industries.
Traditionally, producing a video involves writing a script, shooting footage with cameras, and editing multiple perspectives into a cohesive final product. With Sora, that process may shrink to simply inputting a script and generating multi-angle, multi-camera footage, which humans can then edit into a finished work.
As previously reported, Sora adheres closely to user-provided prompts and can produce videos up to one minute in length while maintaining high visual quality. This opens up a wide range of possibilities for artists, filmmakers, and students who need video content.
Sora’s versatility extends to complex scenes featuring multiple characters, specific types of movement, and detailed backgrounds, and the resulting videos accurately reflect user prompts. For instance, Sora can produce footage of a stylish woman walking the neon-lit streets of Tokyo, a giant mammoth in a snowy landscape, or even a movie trailer for a space adventure.
Currently, Sora is not available to the public. According to OpenAI, the model is undergoing testing and has only been shared with a select group of researchers and scholars.