Excellent follow-up article on this growing and concerning AI issue, reported by Vox.com under the headline (formatted for the blog):
“What will stop AI from
flooding the internet with fake images?”
On May 22, a fake photo of an explosion at the
Pentagon caused chaos online. The photo
was quickly determined to be a hoax, likely generated by AI. But in the short
amount of time it circulated, the fake image had a real impact and even briefly
moved financial markets.
Within a matter of minutes of being posted, the
realistic-looking image spread on Twitter and other social
media networks after being retweeted by some popular
accounts.
Reporters asked government officials all the way up to
the White House press office what
was going on.
This isn’t an entirely new problem.
Online misinformation has existed since the dawn of the internet, and
crudely Photoshopped images fooled people long before generative AI became
mainstream.
But recently, tools like
ChatGPT, DALL-E, Midjourney, and even new AI feature updates to
Photoshop have supercharged the issue by making it easier
and cheaper to create hyper-realistic fake images, video, and text, at scale.
Experts say we can expect
to see more fake images like the Pentagon one, especially when they can cause
political disruption.
One report by Europol, the European Union’s law enforcement agency, predicted that as much as 90 percent of content on the internet could be created
or edited by AI by 2026.
Already, spam news sites seemingly
generated entirely by AI are popping up. The anti-misinformation
platform NewsGuard has been tracking such sites and recently found nearly three times as
many as it had a few weeks earlier.
Joshua Tucker, a
professor and co-director of NYU’s Center for Social Media and Politics, says:
“We already saw what happened in 2016 when we had the first election with a
flooding of disinformation. Now we’re going to see the other end of this
equation.”
So what, if anything, should the tech companies that are
rapidly developing AI be doing to prevent their tools from being used to
bombard the internet with hyper-realistic misinformation?
The answer to that question, and the rest of this story, continues here with
extensive details. A good read.
My 2 Cents: FYI: My previous post on this growing issue and potentially
serious problem is also posted here.
The bottom line (B/L) to me is simple: AI has great potential for good, but in the hands of savvy hackers it has great potential for harm, too.
High tech can and should put guardrails and notices in place to alert readers and posters, protect the public good, and
stop nefarious no-goodniks.
Thanks for stopping by.