Hot take on Sora
The experience quickly moves from fantastical to tremendously disappointing.
Totally not worth $200 a month for anyone.
I am 100% bought into the concept, vision, and even a lot of their product execution, but the outputs are unusable 99% of the time for me. If it were $4.99, I’d use it as a game to remix in the community, but at $200, I’d need real value and don’t see it.
For what they have today, I’d love access to the featured feed and the rights to download or share with credit.
Google Labs’ Veo 2 feels like it’s several steps ahead. I’m speaking from what Ethan Mollick, et al., have shared.
I think the “success metric” for these tools is how many iterations it takes to get a usable output, along with the “discard rate”, i.e., how many times a user gives up on a prompt.
With respect to the latter, mine with Sora is nearing 100%.
Sora is now able to collect this discard-rate data, as they explicitly allow you to keep your versions or discard them. Today, I’ve made four attempts with zero success.
Note that each iteration takes minutes to produce an output, so this loop is discouraging. And I’d like to consider myself a relatively persistent person.
The silver lining is this feels very similar to image generation early on (two years ago).
I’d say it took five major versions of Midjourney until, 80/20, I was able to use the first output without multiple iterations. Google Labs’ ImageFX is totally hitting the mark on the first attempt today, as well.
Because video models are so dependent on data, Facebook or Google will clearly have the largest first-party corpus of video data.
My belief (from a distance) is Sora is a bet that should be sunset ASAP. OpenAI should focus elsewhere to grow their impact.
To me, it’s not possible to beat Google and FB in this battle without a massive breakthrough in approach, which (at this moment) Sora doesn’t seem to have.