Sora AI and Its Social Implications
With the release of Sora AI and its hyperrealistic video generation, our consumption of online media, particularly our news sources, faces a new threat: it will be difficult for consumers to tell which sources are real and which are AI-generated. In this new digital age, one must ask how people will be able to trust what they see online.
Many AI companies use watermarks to signal that content is artificial, and Sora AI has implemented watermarks for this exact reason. However, this protection disappears if Sora AI ever chooses to remove the watermark. For this reason, hyperrealistic AI videos and photos should be required to carry watermarks to protect against misinformation. This could be accomplished in code, by preventing an AI from removing this type of watermark, functioning somewhat like an "I am not a Robot" check. Without it, AI will threaten credibility, as well as artistic value.
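To make the idea concrete, here is a minimal sketch of one way an invisible watermark can be embedded in pixel data: hiding a short provenance tag in the least-significant bits of pixel values. This is a toy illustration only; the function names and the `"AI-GEN"` tag are made up for this example, and real provenance systems use far more robust, tamper-resistant techniques than simple bit-flipping.

```python
def embed_watermark(pixels: list[int], tag: str) -> list[int]:
    """Write each bit of `tag` into the lowest bit of successive pixels."""
    bits = [(byte >> i) & 1 for byte in tag.encode() for i in range(8)]
    if len(bits) > len(pixels):
        raise ValueError("image too small to hold the tag")
    marked = pixels[:]
    for i, bit in enumerate(bits):
        marked[i] = (marked[i] & ~1) | bit  # change only the lowest bit
    return marked

def extract_watermark(pixels: list[int], length: int) -> str:
    """Read `length` bytes of the hidden tag back out of the lowest bits."""
    out = bytearray()
    for b in range(length):
        byte = 0
        for i in range(8):
            byte |= (pixels[b * 8 + i] & 1) << i
        out.append(byte)
    return out.decode()

frame = [128] * 256                       # stand-in for one row of pixels
marked = embed_watermark(frame, "AI-GEN")
print(extract_watermark(marked, 6))       # -> AI-GEN
```

Because only the lowest bit of each pixel changes, the mark is invisible to the eye, which is exactly why a mandate would also need to forbid tools from stripping it back out.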
This advancement poses another ethical conundrum surrounding AI. When is there too much advancement? When will the line with AI be drawn? Can a line be drawn at this point? Will a line be drawn?
As long as companies are making more money than they are losing, AI will continue its march on.
If there is anything I hope this advancement provides, it is that we become exhausted with AI and social media. Sora marks a hard swing of the social pendulum toward technology.
Hopefully, the pendulum will swing back so hard that it breaks. By disconnecting from social media and the oversaturation of information, we can begin to trust tangible sources and what our newspapers and our planet are telling us.
With this, hopefully we will begin to identify more strongly with the lakes AI is draining and the air it pollutes. The more we connect with what is real, the better we will understand what we can truly trust and what truly matters.