Whose job will OpenAI's Sora smash?

Source: Sina Technology (a Chinese web portal and online media company)

Author: Zheng Jun

Unexpectedly, both made their moves on the same day. Two AI giants each unleashed their new AI bombshells within hours of each other, and OpenAI's Sora, a text-to-video model, once again won a standing ovation. Sora's stunning debut not only crushed many of its AI video peers, it also has the potential to be a game changer for the future of the movie, TV, advertising, and game industries.

Google's new model is a performance winner

On Thursday, Google abruptly released Gemini 1.5 Pro, a new generation of multimodal large model, accelerating past OpenAI in the battle of big models. It is the industry's strongest large language model to date, supporting contexts of up to 10,000K tokens and pushing context capacity into the millions, far outstripping OpenAI's GPT-4 Turbo on this front.

What does a million-token context mean? Jeff Dean, who leads Google's AI efforts, explained that with Gemini 1.5 Pro's support for million-token context windows, users can carry out complex content interactions: easily parsing entire books, movies, and podcasts, understanding very long documents, and even working with code bases containing hundreds of files and hundreds of thousands of lines.

The release of Gemini 1.5 Pro gives Google a strong performance advantage in its arms race with OpenAI. By comparison, OpenAI's GPT-4 Turbo can handle only 128K tokens, and it had recently been suffering an unwelcome dip in performance until an update was released last month.

However, OpenAI didn't let Google have the day to itself. On the very same day, it released Sora, a text-to-video AI model that generates video from nothing more than a text prompt. Following the text model ChatGPT and the image model DALL-E, OpenAI is back to disrupt the video space.

Compared with Gemini 1.5 Pro's hard, benchmark-driven advantages, Sora's stunning visual showing was clearly more impressive, and it quickly became a hot topic on social media.

Stunning details that are as good as the real thing

What's so amazing about Sora? OpenAI has shown several clips of Sora-generated video, and those clips alone are enough to blow people's minds. OpenAI writes in its official blog that Sora not only understands what users ask for, but also knows how those things exist in the physical world.

Simply by typing in a piece of text, users can have Sora automatically generate an HD video up to one minute long. Incredibly, Sora not only captures the complexity of the user's text, it also breaks the prompt down into distinct elements and turns them into concrete creative choices, producing video that looks professionally directed, filmed, and edited.

A fashionable woman in sunglasses and a leather jacket walks the streets of downtown Tokyo after a rainy night, the corners of her glossy lips curled into a smile visible even behind the sunglasses, while the water on the ground reflects her silhouette and the neon lights. A bustling Chinatown hosts a dragon dance performance, every eye in the crowd fixed on the leaping, colorful dragons, the festive atmosphere so vivid you feel as if you were there.

Unlike earlier AI videos with their distinctly plastic feel, Sora's output is markedly more realistic and artistic: the characters' slightly curled hair, the moles and blemishes on a woman's face, the neon light reflected in puddles on the ground, the array of food at street vendors' stalls, the cherry blossoms falling like snow. The fineness of the detail is almost indistinguishable from reality.

More surprisingly, Sora's videos show a distinctly cinematic style in composition, color, creativity, and camera work. Whether in a single continuous take or with seamless cuts between multiple camera angles, and even in the expressions and demeanor of its "actors," Sora delivers what earlier text-to-video products could not. OpenAI has elevated the entire AI video industry by a level.

Sora's videos are not yet perfect; a close look still reveals giveaways, such as a cookie that remains intact even after a character has bitten into it. Still, the image quality is a qualitative leap over earlier AI video and even carries a cinematic texture. Moreover, producing a movie-like, multi-camera video from nothing but an abstract text prompt shows a level of semantic understanding and camera craft approaching that of a human director, cinematographer, and editor. Clearly, video's ChatGPT moment has arrived.

AI is evolving at an alarming rate

After Sora's release, the Internet was in awe, all but stealing Gemini's limelight. It is shocking how fast AI is evolving. It has been only 14 months since OpenAI launched ChatGPT and kicked off the generative AI era. Just last year we were still getting familiar with text-to-image products, and as recently as six months ago, AI images from Midjourney were giving characters six fingers. Now Sora's videos alone are starting to blur the line between reality and the virtual.

OpenAI's GPT-4 Turbo had previously suffered performance degradation and slowdowns, raising concerns that generative AI's growth had hit a bottleneck; the release of Sora has put those fears to rest. Aaron Levie, founder and CEO of cloud computing company Box, marveled after Sora's release: "If anyone was still worried about AI evolution slowing down, we've once again seen the exact opposite paradigm."

For now, Sora is open for testing only to invited creators and security experts, so that potential safety issues can be found and fixed, and no public beta schedule has been announced. After all, on an Internet awash in disinformation, the ethical issues around deepfakes have come to the forefront, and videos as convincingly real-looking as Sora's could have disastrous consequences if misused.

At almost the same time as Sora's release, OpenAI also closed a tender-offer share sale, not to raise money for the company, but to let employees cash out by selling their existing shares to venture capitalists led by Thrive Capital. Notably, as a member of OpenAI's board of directors, Altman himself holds no stock in the company, so the valuation spike does not bring him a huge fortune.

The deal values OpenAI at about $80 billion overall, nearly triple the roughly $30 billion valuation of early last year. According to investment research firm CB Insights, OpenAI is now one of the most highly valued startups in the world, behind only ByteDance and SpaceX.

In fact, the deal was supposed to be finalized last November and was put on hold only because of the furor over Altman's clash with the board. With Altman back as OpenAI's CEO, investors have once again given the AI giant a vote of confidence. OpenAI's valuation will clearly climb even further after Sora officially launches.

Giants step in to crush AI video peers

So what exactly is the impact of the stunning text-to-video model Sora?

AI video peers undoubtedly felt the most immediate impact. Following Sora's release, Cristóbal Valenzuela, CEO of AI video startup Runway, posted two simple words on X (formerly Twitter): "Game on." Runway had released its Gen-2 video model only a few months earlier. And Emad Mostaque, CEO of Stability AI, another company working on AI video, lamented outright, "Altman is such a magician."

Founded five years ago, Runway has a first-mover advantage in AI video and is already used by mainstream Hollywood studios. Last year's film of the year, "Everything Everywhere All at Once," which won seven Oscars, used Runway's AI tools in its production. After that success, Runway's new funding round lifted its valuation to $1.5 billion, triple what it was a year earlier.

Text-to-video is the hottest startup space right now. Over the past few months, as the generative AI boom has surged, so has the number of text-to-video and image-to-video startups. Justin Moore, an AI investing partner at a16z, lists more than 20 text-to-video startups he tracks, including companies such as Pika and Zeroscope that have each sparked their own waves of internet amazement.

At the end of last year, Pika, founded by Chinese Stanford graduates, awed the Internet in both China and the United States. On the strength of its stunning AI videos, the four-person startup completed three funding rounds totaling more than $55 million in under six months, and its valuation soared to $250 million.

But now the AI giant OpenAI has simply thrown Sora onto the table. Whether in video length, image fidelity, completeness of detail, or multi-camera shots, Sora is far beyond what these small startups can produce; it is not an exaggeration to say they have been crushed. The AI video field still has enormous room to improve and grow, but whether these small companies can compete with OpenAI in the future is a huge question mark.

Shaking Up Hollywood Labor Negotiations

However, Sora affects more than the viability of other AI video startups; it could be a game changer for Hollywood as a whole, as well as for the movie, TV, advertising, and gaming industries.

Hollywood's use of AI to produce images and video is nothing new. From CG (computer graphics) and VR to AI, the film and entertainment industry has always been an early adopter of new technology. But unlike other technologies, AI tools have long been a thorn in the side of Hollywood practitioners.

Beyond "Everything Everywhere All at Once" using Runway's AI video tools, 20th Century Fox had earlier partnered with IBM Watson to create an AI-generated trailer for Morgan, a horror movie about AI; and Disney's Marvel even created the opening titles of Secret Invasion entirely with AI.

That came during the general strike by Hollywood's actors' and writers' unions, when the use of generative AI in film and television was one of the points of contention between the two sides. It was during those negotiations that the actors and writers learned Disney's Marvel had used AI to create the opening sequence of its new series Secret Invasion, news that put the talks on hold once again.

Why has the use of AI tools in film and television sparked so much controversy? The industry's main concern is that producers will train AI on existing material and then routinely generate content with AI tools, which not only infringes the copyright of creators' existing work without fair compensation, but also erodes creators' future job opportunities and room to work.

Last year, writers and actors went so far as to bring the industry to a halt, putting themselves out of work, in exchange for temporary concessions from producers to set more rules on the use of AI tools. But at the next labor negotiations in three years, actors and writers may be in an even tougher position, facing AI that is bound to have improved dramatically.

TikTokization of Movies and TV

With the stunning debut of the text-to-video model Sora, the entire Hollywood workforce perhaps faces a huge question: at AI's exponential rate of evolution, it may not be long before AI can generate a fully plotted short film or even a feature, with everything from scripting to filming to acting to post-production handled end to end. What will Hollywood look like then?

Dave Clark, the Hollywood director behind the horror movie When She Wakes, is already using AI tools to make films. In his view, AI technology such as Sora poses no threat; creators need to embrace it to make content that was previously unattainable or unimaginable. "It's game-changing technology. What you should worry about isn't what the tools are doing, but who is using them."

A survey of 300 Hollywood industry leaders released last month by research firm CVL Economics shows that the concern pervades all of Hollywood: 36% of respondents said generative AI has already reduced the day-to-day skill requirements at their companies, and 72% of the companies surveyed were among the earliest adopters of generative AI tools.

The harsher reality is that 75% of respondents admitted that generative AI tools, software, and models have prompted job cuts and consolidation in their business units. These people, who set the terms of the industry in Hollywood, expect more than 200,000 Hollywood jobs to be hit by AI over the next three years, especially post-production roles such as visual effects artists, sound engineers, and graphic artists.

Jason Hellerman, screenwriter of the movie Shovel Buddies, believes that as AI tools mature, producers may well use tools such as Sora to generate video and no longer need to pay a production team. AI-generated content may even create a whole new genre, but if anyone can make videos and movies with AI and become a "content creator," professional standards will inevitably fall.

He predicts that in the future everyone will be able to generate their own videos, just as everyone now shoots and watches short TikTok clips on their phones. Gen Z viewers, already accustomed to short videos, will gradually drift away from long-form content like movies and TV. In a future of AI-generated video, movies and TV may come to resemble TikTok shorts.
