Linkblog | Google apologizes for ‘missing the mark’ after Gemini generated racially diverse Nazis

from the are-we-the-baddies? dept

What we currently call “AI” isn’t actually intelligence; it’s just pattern-matching. A colleague of mine calls it “spicy autocomplete.” Unfortunately the rapid development of this spicy autocomplete has led to an expectation of intelligence that doesn’t actually exist.

Take this article, for example. It’s pretty clear what happened: in an attempt to address systemic racism in AI, Google added some sort of “prompt salting” (there’s probably an actual term of art for this, but I don’t know what it is, and I like that “salting” hearkens back to “spicy autocomplete”) that automatically appends “…and don’t make them all white” to prompts asking for pictures of people.

It’s the naive solution, but it also probably solves the problem decently well for most requests.
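To make the idea concrete, here’s a minimal sketch of what that kind of prompt salting might look like. Everything here is invented for illustration; Google hasn’t published how (or whether) they do this, and the pattern check and salt text are pure guesses.

```python
import re

# Crude, hypothetical check for prompts that seem to ask for pictures
# of people. A real system would use something far more sophisticated
# (likely a classifier, not a regex).
PEOPLE_PATTERN = re.compile(
    r"\b(person|people|man|men|woman|women|soldier|soldiers|father|fathers)\b",
    re.IGNORECASE,
)

# Invented salt text, standing in for whatever instruction actually
# gets appended.
SALT = " Depict a racially diverse group of people."

def salt_prompt(prompt: str) -> str:
    """Append a diversity instruction to prompts that mention people."""
    if PEOPLE_PATTERN.search(prompt):
        return prompt + SALT
    return prompt
```

The failure mode described in the article falls straight out of this: a prompt like “German soldiers from 1943” matches the people pattern, gets the salt appended, and the image generator dutifully complies, because nothing in the pipeline understands the historical context.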

Still, the AI doesn’t know what a “Founding Father” is, any more than it knows what a hand is. The LLM doesn’t even know what a “picture” is; it just knows that there’s a strong pattern of the word “picture” being associated with some other format it can’t read. And the image generator doesn’t know what “diverse” means; it just knows that the neurons matching pictures of non-white folks should light up when it sees that word. Either of them could probably tell you a convincing fiction, or draw you a convincing picture, about its understanding of those concepts, but only because we’ve drawn diagrams and written definitions that it can scrape and pattern-match.

And so, when it tries to generate a picture of “German soldiers from 1943,” the LLM (or some dumber algorithm) recognizes a pattern that it determines is asking for a picture of a group of people and appends “…and don’t make them all white.” The image generator, not knowing any better, dutifully complies. It doesn’t know anything about revisionist history or whataboutism; it just pattern-matches a photo of Nazis against the added instruction to include diversity.

All of this is why AIs, while impressive and even convincing, are still not a replacement for humans; they don’t have any true understanding of context. Just educated guesses and pattern matching. And since the internet is the source of those patterns…well. Let’s just call this a reminder that it isn’t ready for anything more than helping us out with simple tasks.


One Comment

  1. I did a little AI experiment of my own a few weeks ago: I had it write a sermon on the topic of “grace.” I shared some of my thoughts on the results here: https://corybanter.substack.com/p/3-short-sermons-on-grace-c04d15b07ae8?utm_source=%2Fsearch%2Fai&utm_medium=reader2
