Antivirus For Your Mind: AI-Generated Images

It used to be that seeing was believing, but now is the time to get really skeptical. AI has developed to the point that photorealistic images can be generated in a matter of seconds from a short prompt. Some pretty bad people are taking advantage of that on social media, and they’re having their way with those who don’t understand what’s going on.

Making one of these images isn’t hard, either. All one has to do is open the right website, type in something like “Trump kicking puppies in a filthy alley”, upload the result to social media, then watch as the gullible pile in with comments. What you almost never see is anyone stopping to ask:

What’s Trump doing in such a setting? It’s highly out of character for a presidential candidate to mill about in a random alleyway entirely unescorted. And what’s more, he’s entirely surrounded by filth, which is not the kind of thing I’d expect from someone of his stature. Also, that he’s kicking puppies is kinda dubious.

Massive MAGA

It used to be that if you saw photographic evidence of wrongdoing, the photo itself was considered sufficient to convict. Faking something completely photorealistic would have taken so much time and effort that it would have been implausible even for a professional with an axe to grind.

But now, just one guy who wants to paint a pro-Palestine cause as a right-wing position can burn through his entire daily allotment of image generations making neckbeards with guns, then spend the day posting them on X.

Probably a fed.

On the surface, there doesn’t seem to be much that one can do about it. The djinni is out of the bottle, as the expression goes, and this is simply the nature of the world we live in now. AI is a fact of life, and we have to adapt or risk being left behind.

If you know that there are people out there who abuse AI, you’re less likely to fall for their fakery. And if more people become aware of the nature of the world we now live in, fewer of them will be tricked.

Thankfully, there are now websites that can check images for the likelihood that they’ve been AI generated. Illuminarty is one that I’ve used. It’s not perfect, since all it can provide is a likelihood that an image was AI generated, but tools like that will have to do for now. Hopefully it’ll be a while before images can be generated that consistently defeat such checkers, though maybe we’re already there. Or maybe something will be developed that always succeeds in detecting AI-generated images. We’ll see.
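For the curious, here’s roughly what checking an image against a detection service might look like in code. This is only a sketch under assumptions: the endpoint URL, the `api_key`, and the `ai_probability` response field are hypothetical placeholders I made up for illustration, not Illuminarty’s actual API.

```python
import requests

# Hypothetical detection service. The URL, parameters, and response fields
# below are placeholders, not any real provider's API.
DETECT_URL = "https://example-detector.invalid/api/v1/detect"


def check_image(path: str, api_key: str, threshold: float = 0.8) -> bool:
    """Upload an image and return True if the service thinks it's likely AI generated."""
    with open(path, "rb") as f:
        response = requests.post(
            DETECT_URL,
            headers={"Authorization": f"Bearer {api_key}"},
            files={"image": f},
            timeout=30,
        )
    response.raise_for_status()
    # Assume the service returns a probability between 0 and 1.
    likelihood = response.json()["ai_probability"]
    print(f"Likelihood of AI generation: {likelihood:.0%}")
    return likelihood >= threshold


if __name__ == "__main__":
    if check_image("suspicious_post.jpg", api_key="YOUR_KEY_HERE"):
        print("Treat this one with skepticism.")
```

The important point isn’t the particular service: it’s that the output is a likelihood, not a verdict, so it should inform your skepticism rather than replace it.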

Adding to all this is the fact that deepfakes are becoming more believable, so if you were planning to put your confidence in video evidence instead, don’t get too comfortable. Put it all together, and it’s become trivial for one person to cause an international incident from their own desk.

Scary? Yes. But if people become educated on the matter, then the danger is greatly reduced.

One thing we can hope for is that AI systems start developing a sense of ethics, so that they can detect when someone is misusing them and autonomously deploy countermeasures against the abuse.

AI is a fact of life now. Some people like it, some don’t, but either way we need to adapt to this changing world. Individuals and state actors can abuse AI to potentially great effect. The best we can do is learn about it and put it to use for ourselves. And why not use it? It has the potential to be a great tool for good, not just for bad.

One suggestion I can make to improve X would be a built-in tool that flags content the platform believes is AI generated. I don’t expect it to be perfect, but it would be a real answer to those who misuse AI there. A rough sketch of what such a tool might look like follows.
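To be clear, this is a hypothetical sketch of the idea, not anything X actually runs: the `UploadedImage` shape, the threshold, and the detector are all stand-ins I invented for illustration.

```python
from dataclasses import dataclass
from typing import Callable, Optional

# A rough sketch of how a platform-side check might hook into an upload
# pipeline. Everything here is hypothetical: the data shapes, the threshold,
# and the detector are placeholders, not X's actual code.


@dataclass
class UploadedImage:
    post_id: int
    image_bytes: bytes


def label_if_suspect(
    image: UploadedImage,
    detector: Callable[[bytes], float],  # returns a likelihood in [0, 1]
    threshold: float = 0.9,
) -> Optional[str]:
    """Return a label to attach to the post if the image looks AI generated."""
    score = detector(image.image_bytes)
    if score >= threshold:
        # Attach a visible label rather than deleting the post, so viewers
        # can weigh the evidence for themselves.
        return f"Possibly AI generated ({score:.0%} likelihood)"
    return None


if __name__ == "__main__":
    # Example with a dummy detector that always reports 95% likelihood.
    post = UploadedImage(post_id=1, image_bytes=b"...")
    print(label_if_suspect(post, detector=lambda _: 0.95))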
