With the advent of AI, there is great potential to accomplish good, but also plenty of potential for misuse. One scenario we were warned about is that the artificial generation of photorealistic images would become so advanced that most people would be unable to tell that anything was amiss.
We were warned about it, and now here we are.
Just a generation ago, if you wanted an image that already existed, you could find it with a search engine. And if you couldn’t find it, skillful use of software such as Photoshop could manufacture it.
As the saying goes, “Seeing was believing, until Adobe made Photoshop.”
The problem with Photoshop is that, even after hours of skilled use, the result may still not be convincing, as most people know the telltale signs of a shopped image.
Within the last few years, the public has gained access to “deepfakes”: generated audio of public figures, built from samples of their voices, that sounds real, and some of it can be downright off the wall. All you would need to do is feed in a sufficiently large sample of that person’s voice, along with what you want to hear the voice saying, and the computer would generate it.
Alternatively, in some cases, a person would enter a prompt, and allow an AI to fulfill the request, such as, “Have Jordan Peterson explain which Bionicle is his favorite, and explain why”.
Someone who listens closely might catch a subtle clue, such as an odd inflection, indicating that something is off. The problem is that many people will be convinced that what they hear is legit, and it doesn’t take much imagination to figure out how something like that can be misused.
The public has only recently gained access to AI that generates images, rather than searching for them. And on the user’s end, it’s as easy as entering a description of what you want to see. Such as this image I requested, of what Made In Abyss would look like if done by Disney in the infamous CalArts beanmouth style:
The algorithm sampled a bunch of images, and gave me a piccie of what is intended to be Riko, dressed up for adventure (note the madokajack above the surface). All I had to do was ask.
Afterwards, I requested an image that captured the tone of this very blog:
I liked it so much that I edited it and re-entered the credit, and used the result as this blog’s header.
For a while, AI image generators struggled with realistic depictions of things like hands and facial expressions. But as the months went on, they have grown so sophisticated that they can now produce photorealistic images given the right prompts. It’s gotten to the point that, when facing legitimate photographic evidence, a person can simply claim that the image was AI generated, and that might work as a defense!
A recent example came in the wake of the Oct 7 Hamas terror attack, wherein Hamas brazenly attacked Jewish civilians in Israel, including a concert held for peace. Among the claims made about Hamas is that they deliberately attacked babies, killing as many as 40 of them.
If you know a few things about Hamas, you’d know that activities such as targeting civilians are well within their MO. In fact, they’re such a nasty group that if you knew everything else about them and were then told that they kill babies, you’d probably take it at face value. “Oh, so Hamas kills babies? That’s horrendous, but if that were all they did, they wouldn’t be as bad as they actually are.”
And yet, some people demanded proof. “Hamas killed 40 babies as they rampaged about, killing civilians? Prove it!” That was a sentiment expressed by people who, for some reason, were nowhere near as skeptical of any claim made by Hamas itself. So, an image was shared on X showing the charred remains of an infant on a stretcher.
Don’t worry, I’m not sharing the example here. But if you were to go looking for it, you might not have much trouble finding it.
What came next was a battle over whether the image was legit or AI generated. Some claimed to have run the image through software that determines whether images are AI generated, and last I checked, there was a community note on the post indicating that even AI-image detectors can give false positives.
There is a popular expression which goes, “the first casualty of war is truth.” While truth can never truly be a casualty, the fact remains that with war comes propaganda designed to sway public opinion, and now even ordinary people can make an image that fools the world!
One thing that generates traffic in social media is video of fast food brawls. If someone wanted to, they might be able to make a convincing video deepfake of an early-morning brawl at a Waffle House, and it might not occur to thousands of viewers that the brawl didn’t happen. Or, if a person is an accelerationist, a faked instance of police brutality could be all it takes to get things to pop off.
So, what can we do about it? That’s about the point where we’re at, and as the AI becomes more sophisticated, bad people might become more effective at exploiting it.
For one thing, I expect that there will be a back-and-forth race between deepfakes and deepfake detectors. Currently, deepfake detectors are apparently not perfect. What I fear is that we may be reaching a point of “deepfake singularity”, where any reasonable measure to detect deepfakes could not distinguish real from fake.
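To see why detectors can never just say “fake” or “real,” consider that they output a score, and someone has to pick a cutoff. The sketch below uses made-up scores (every number here is invented for illustration, not from any real detector) to show the tradeoff: a strict cutoff misses fakes, a loose one flags genuine photos.

```python
# Toy illustration of why AI-image detectors give false positives.
# A detector outputs a probability-like score, and a human-chosen
# threshold turns that score into a verdict. All scores below are
# hypothetical, invented purely to show the tradeoff.

real_scores = [0.05, 0.12, 0.30, 0.55, 0.62]  # genuine photos (made up)
fake_scores = [0.48, 0.70, 0.81, 0.90, 0.97]  # AI images (made up)

def flagged(scores, threshold):
    """Count how many scores the detector would flag as AI generated."""
    return sum(s >= threshold for s in scores)

for threshold in (0.4, 0.6, 0.8):
    false_positives = flagged(real_scores, threshold)
    missed_fakes = len(fake_scores) - flagged(fake_scores, threshold)
    print(f"threshold={threshold}: {false_positives} real photos flagged, "
          f"{missed_fakes} fakes missed")
```

However the cutoff is tuned, one error type trades against the other, which is why a community note warning about false positives is not surprising, and why the arms race between generators and detectors has no clean finish line.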
Another thing we can do is learn to be skeptical of anything that we see on social media, even if it’s photographic or video. The dynamic of evidence for crimes may change, as it used to be that capturing an act on video was the gold standard for proving to the world that it happened. We’re nearly at the point that video would have to clear some specific hurdles to be considered proof.
Another suggestion I can make is that AI programs be coded to defeat attempts to use them to produce inflammatory content, or to embed a digital watermark that marks an image as a deepfake, hidden from the user.
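To make the hidden-watermark idea concrete, here is a minimal sketch using least-significant-bit encoding on raw pixel values. The marker string and the flat list-of-pixels “image” are my own invented stand-ins; real provenance schemes (robust watermarks, signed metadata) are far more sophisticated and harder to strip, but the principle is the same: a tag rides along in the image without being visible.

```python
# Minimal sketch of an invisible "generated by AI" watermark using
# least-significant-bit (LSB) encoding. MARKER and the toy image are
# hypothetical; real watermarking schemes are much more robust.

MARKER = "AIGEN"  # invented tag for illustration

def _bits(text):
    """Yield the bits of an ASCII string, most significant bit first."""
    for byte in text.encode("ascii"):
        for i in range(7, -1, -1):
            yield (byte >> i) & 1

def embed_marker(pixels, text=MARKER):
    """Hide `text` in the low bit of the first len(text)*8 pixel values."""
    out = list(pixels)
    for idx, bit in enumerate(_bits(text)):
        out[idx] = (out[idx] & ~1) | bit
    return out

def extract_marker(pixels, length=len(MARKER)):
    """Read back `length` ASCII characters from the low bits."""
    chars = []
    for c in range(length):
        byte = 0
        for b in range(8):
            byte = (byte << 1) | (pixels[c * 8 + b] & 1)
        chars.append(chr(byte))
    return "".join(chars)

# A flat grayscale "image" of 64 pixels:
image = [120] * 64
stamped = embed_marker(image)
print(extract_marker(stamped))  # the hidden tag comes back out
```

Each pixel value changes by at most 1, which is imperceptible to the eye, yet the tag is recoverable by anyone who knows where to look. The catch, of course, is that a watermark this naive is destroyed by simple re-encoding, which is exactly why this remains a hard problem.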
I don’t know what the future of AI holds. But it stands to reason that any government that makes a significant advancement in AI will have a distinct advantage over those that don’t, in a similar way to the development of atomic weapons. It’s easy to imagine any government, including the most influential on a global scale, using AI to propagandize their own people, and maybe even people all over the world.
Of course, many more uses have been found for AI, along with uses we may have to look forward to in the near future.
With the recent implementation of drone surveillance, the age of getting away with stuff has just about drawn to a close. And AI is already proving to be a boon to law enforcement: it can be used to estimate where and when crimes are more likely to happen, so officers can patrol accordingly. While that might not sound bad, imagine if AI were used to profile individual people based on their likelihood of committing a crime, and when and where!
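At its crudest, the patrol-allocation idea is just counting: tally past incidents by place and hour, then rank the hot spots. The sketch below does exactly that with invented incident data (the neighborhoods and counts are made up); real predictive-policing systems use far richer models, which is part of what makes the profiling scenario above so uncomfortable.

```python
# Crude sketch of hotspot ranking: tally hypothetical past incidents
# by (neighborhood, hour of day) and list the busiest combinations.
# All incident data below is invented for illustration.
from collections import Counter

incidents = [
    ("Downtown", 23), ("Downtown", 23), ("Downtown", 1),
    ("Riverside", 18), ("Riverside", 23), ("Hillcrest", 9),
]

hot_spots = Counter(incidents).most_common(3)
for (place, hour), count in hot_spots:
    print(f"{place} around {hour:02d}:00 has {count} past incidents")
```

The leap from "patrol this block at 11 PM" to "watch this particular person" is only a change in what you count, which is exactly the slope worth worrying about.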
Or a program could be used to determine a person’s psychological profile based on their social media usage, making it easy to hack their mind to manipulate them into doing things, or even psychologically destroy them with just a few words. Keep it nice in the comment section, by the way.
Oh, and no prize for guessing that AI is being used to create realistic pornography.
Also, I dread the day that game companies create a formula for games to goad players into spending a lot more money on micro-transactions.
Of course, it wouldn’t surprise me if companies stopped using Indian call centers to social-engineer the market and switched to using AI to help them lowball candidates on wages.
Considering what all AI can be used for, it would seem foolish to not implement it for ourselves. While we may still be a while away from holographic assistants like SARA from Toonami, we can at least use AI programs to help us with some day to day tasks. Like coming up with meal plans. Or coming up with an effective studying schedule. Or finding a more ideal workout routine. Or finding what you can do to increase your likelihood of getting into a lasting relationship.
Or, if you have a blog like this one, you could ask one to write up a new entry for you. Have I done this, yet? Can you tell?
Of course, not everyone is as enthusiastic about AI, as it may make certain professions obsolete. Artists are worried that AI may put them out of their jobs. Hollywood script writers are concerned that AI may replace them. Coders can take days to come up with something that could be generated with a prompt in a matter of minutes, perhaps even seconds. The age of teams of engineers sitting at their desks with AutoCAD open, occasionally rotating 3D models to make it look like they’re doing something, may be almost over.
Society might not be ready for the changes that AI would bring.
In the meantime, people are using fake stuff in an effort to manipulate you. If you’re aware that it’s happening, and you’re the right amounts of skeptical and knowledgeable, it’s not likely to have as much of an effect on you.