OpenAI Pulls Sora, but the Questions Around AI Remain

ChatGPT, owned by OpenAI, a main investor in Leading the Future. Photo by Jacquelyn Drain '27

When OpenAI announced that it would discontinue Sora, it did more than retire a tool that once looked like the future of AI video. It exposed something that has become increasingly hard to ignore: artificial intelligence is moving fast, but not always cleanly, and not always responsibly. Sora was supposed to represent progress, a glimpse at what creative technology could become. Now, its disappearance stands as a reminder that not every innovation is ready to survive the pressure that comes with being treated like the next big thing.

That is what makes this shutdown worth paying attention to. This is not just about one company pulling one product. It is about what happens when a major name in AI steps back from a tool that once seemed central to the conversation. For all the hype that surrounds artificial intelligence, Sora’s shutdown shows that the technology is still unstable, still being tested in real time, and still raising questions that the people building it have not fully answered. 

Christian Galvez did not see the shutdown as shocking, but he did see it as significant. To him, Sora’s end reflects both the promise of AI and the risks that come with moving too fast. 

That reaction says a lot by itself. AI products have entered the public spotlight with huge promises attached to them, only to later be reworked, criticized, or pulled back entirely. Sora now joins that list. These tools are often launched with so much momentum that they seem impossible to stop, but their disappearance shows that excitement alone does not guarantee they will last.

Still, Galvez did not see the shutdown as entirely negative. In his view, it could actually be a step in the right direction. “I think it’s good that it got shut down because it shows growth,” he said. “It gives them time to make better software.” 

That idea sits at the center of the larger debate. A shutdown can look like failure, but it can also look like restraint. In an industry that usually rewards speed, scale, and constant expansion, there is something notable about a company being willing to pull back instead of forcing a product forward before it is ready.

Video AI does not just create opportunities. It creates risk. A convincing video carries a kind of power that text alone does not. It can mislead more easily, spread faster, and do more damage when used the wrong way. That is where the conversation around Sora moves beyond product design and into something larger. It becomes a question of trust.

Galvez acknowledged that risk as well. “AI can be used for good, but there are also risks like deepfakes and people getting scammed,” he said.

Olivia Locarno ‘28 also emphasized how important it is for OpenAI to get any program like Sora right before releasing it to the public. “AI video can be exciting because it gives people new creative possibilities, but it also makes it easier for misinformation to spread if it is not handled carefully,” she said.

Locarno also talked about the risk of people believing what they see on screen without stopping to fact-check it. “That is what makes AI video different. People are more likely to trust what they can see, even when it is false.” That concern helps explain why tools like Sora raise so many questions beyond just creativity. While AI video can be innovative, it can also make misinformation feel more believable and harder to challenge once it starts to spread.

That concern points to the larger issue behind Sora’s shutdown. For both Olivia Locarno and Christian Galvez, the problem is not simply whether AI should continue to grow, but whether the people creating these tools are doing so responsibly. As AI becomes more advanced, especially in video form, the consequences become harder to ignore. That is why this shutdown feels more significant than the loss of one product. It suggests that the conversation around AI is no longer just about innovation, but also about trust.

That is where regulation becomes impossible to ignore. Galvez said there needs to be “a better moral and legal understanding of what AI is being used for,” while Locarno emphasized how easily people can trust what they see on screen without questioning it. Together, their concerns reflect a wider uncertainty about where this technology is headed. Companies have moved quickly to show what their products can do, but the rules surrounding those products have not always kept pace. The result is a space filled with potential, but also one filled with risk.

Still, Sora’s shutdown does not mean people are losing faith in artificial intelligence overall. Galvez made that clear when he said his confidence in AI had not changed because of the closure. One product being pulled is not the same as the technology as a whole falling apart. If anything, it may show that companies are realizing some tools need more time before they can be trusted.

From this angle, Sora’s shutdown is about more than one program vanishing. It reflects the broader challenge artificial intelligence now faces: balancing safety and responsibility with creativity and invention. AI still holds great promise, but as both Locarno and Galvez argue, that promise cannot rest on hype alone if it is to last. It has to rest on trust, caution, and a willingness to admit when something is not ready.

Tyler Steinberg