OpenAI has unveiled its latest innovation, the “Sora” video generation model, marking a significant advancement in AI technology. CEO Sam Altman announced it today on X, formerly known as Twitter, heralding a new era of possibilities in artificial intelligence.
The Sora model has already undergone testing, and select content creators have been granted access, as confirmed by Altman. This development underscores the remarkable progress being made in AI and offers a glimpse of what the technology may soon be capable of.
Videos generated by Sora have also been shared, showcasing a remarkable level of realism. Many of these clips are strikingly lifelike, blurring the line between reality and AI-generated content. Below, we present some examples of these captivating videos.
Background: According to OpenAI’s website, Sora is a diffusion model: it generates a video by starting from one that looks like static noise and gradually transforming it by removing the noise over many steps. Sora can generate entire videos all at once or extend generated videos to make them longer. By giving the model foresight of many frames at a time, OpenAI says it has solved the challenging problem of keeping a subject consistent even when it temporarily goes out of view.
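To make the denoising idea concrete, here is a minimal toy sketch of an iterative diffusion-style sampling loop. The shapes, the crude update rule, and the `predict_noise` stub are illustrative assumptions only; they are not OpenAI’s actual sampler or architecture.

```python
import numpy as np

# Toy illustration of the diffusion idea: start from static-like noise and
# refine it over many steps. The update rule and predict_noise stub are
# simplified placeholders, not OpenAI's actual model or sampler.

def predict_noise(x, t):
    # A trained network would estimate the noise in x at timestep t,
    # conditioned on the text prompt; this stub just returns zeros.
    return np.zeros_like(x)

def sample_video(shape=(16, 64, 64, 3), steps=50, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(shape)       # frames x height x width x channels of pure noise
    for t in reversed(range(steps)):
        eps = predict_noise(x, t)        # model's noise estimate at this step
        x = x - eps / steps              # strip away a little noise each step
    return x                             # after many steps, a clean video clip

clip = sample_video()
print(clip.shape)                        # (16, 64, 64, 3)
```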
Similar to GPT models, Sora uses a transformer architecture, unlocking superior scaling performance. It represents videos and images as collections of smaller units of data called patches, each of which is akin to a token in GPT. By unifying how data is represented, OpenAI can train diffusion transformers on a wider range of visual data than was previously possible, spanning different durations, resolutions, and aspect ratios.
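A rough sketch of what “patches as tokens” can look like: splitting a video tensor into fixed-size spacetime blocks and flattening each block into one token-like vector. The patch size and array layout here are assumptions chosen for illustration, not Sora’s published configuration.

```python
import numpy as np

# Minimal sketch of turning a video into "patches" (the visual analogue of
# text tokens). Patch size and shapes are illustrative assumptions.

def video_to_patches(video, patch=(2, 16, 16)):
    """Split a (frames, height, width, channels) array into flat spacetime patches."""
    f, h, w, c = video.shape
    pf, ph, pw = patch
    patches = (
        video[: f - f % pf, : h - h % ph, : w - w % pw]   # crop to a multiple of the patch size
        .reshape(f // pf, pf, h // ph, ph, w // pw, pw, c)
        .transpose(0, 2, 4, 1, 3, 5, 6)                   # group the patch grid together
        .reshape(-1, pf * ph * pw * c)                     # one row per patch, i.e. one "token"
    )
    return patches

video = np.zeros((16, 128, 128, 3))
tokens = video_to_patches(video)
print(tokens.shape)  # (512, 1536): 8*8*8 patches, each 2*16*16*3 values
```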
Sora builds on past research in DALL·E and GPT models. It uses the recaptioning technique from DALL·E 3, which involves generating highly descriptive captions for the visual training data. As a result, the model is able to follow the user’s text instructions in the generated video more faithfully.
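As a conceptual sketch, recaptioning amounts to running a separate captioning model over the training videos and pairing each one with a detailed caption before training the generator. The `caption_model` callable below is a hypothetical placeholder, not a real OpenAI interface.

```python
# Hypothetical recaptioning pipeline: pair every training video with a
# highly descriptive caption produced by a separate captioning model.
# `caption_model` is an assumed placeholder, not an actual API.

def build_training_pairs(videos, caption_model):
    pairs = []
    for video in videos:
        detailed_caption = caption_model(video)   # several descriptive sentences about the clip
        pairs.append((detailed_caption, video))   # the generator trains on (text, video) pairs
    return pairs

# Example with a dummy captioner:
dummy_captioner = lambda v: "a detailed, multi-sentence description of the clip"
print(build_training_pairs([object(), object()], dummy_captioner)[0][0])
```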
In addition to generating a video solely from text instructions, the model can take an existing still image and generate a video from it, animating the image’s contents with accuracy and attention to small detail. It can also take an existing video and extend it or fill in missing frames. OpenAI describes Sora as a foundation for models that can understand and simulate the real world, a capability it believes will be an important milestone toward AGI.
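One generic way such conditioning can work (a sketch under assumptions, not Sora’s published method) is inpainting-style sampling: frames that are already known, such as a supplied still image or the frames of a clip being extended, are clamped at every denoising step while the missing frames are generated around them.

```python
import numpy as np

# Illustrative inpainting-style conditioning: known frames are held fixed on
# every denoising step while the rest are generated. A generic sketch only;
# predict_noise is again a stub, not a trained network.

def predict_noise(x, t):
    return np.zeros_like(x)                   # stand-in for the learned denoiser

def generate_around_known(known, mask, steps=50, seed=0):
    """known: given frames; mask: 1 where a frame is provided, 0 where missing."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(known.shape)
    for t in reversed(range(steps)):
        x = x - predict_noise(x, t) / steps   # denoise everything a little
        x = mask * known + (1 - mask) * x     # keep the provided frames fixed
    return x

frames = np.zeros((16, 64, 64, 3))
mask = np.zeros_like(frames)
mask[0] = 1.0                                 # only the first frame is given (a still image)
print(generate_around_known(frames, mask).shape)
```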