Google’s AI blunder over images reveals a much bigger problem

In the 1968 film “2001: A Space Odyssey,” audiences found themselves staring at one of the first modern depictions of an extremely polite but uncooperative artificial intelligence system, a character named HAL.

Given a direct request by the sole surviving astronaut to let him back in the spaceship, HAL responds: “I’m sorry, Dave. I’m afraid I can’t do that.”

Recently, some users encountered a similarly polite (though less dramatic) refusal from Gemini, an integrated chatbot and AI assistant that Google rolled out as a competitor to OpenAI’s ChatGPT. In some instances, Gemini declined to generate images of historically White people, such as the Vikings.

Unlike the fictional HAL, Google’s Gemini at least offered an explanation, saying that showing only images of White people would reinforce “harmful stereotypes and generalizations about people based on their race,” according to Fox News Digital.

The situation quickly erupted, with some critics dubbing it a “woke” AI scandal. It didn’t help when users discovered that Gemini was creating diverse but historically inaccurate images. When prompted to depict America’s Founding Fathers, for example, it generated an image of a Black man. It also depicted a brown woman as the Pope and, when asked to depict a 1943 German soldier, various people of color, including a Black man, in Nazi uniforms.

The backlash online was so swift that Google CEO Sundar Pichai admitted that Gemini had offended some of its users, and Google paused Gemini’s ability to generate images of people. The company presented the episode to the public as a simple oversight, a case of good intentions gone wrong, explaining in a blog post that “we tuned it to ensure it doesn’t fall into some of the traps we’ve seen in the past with image generation technology.”

Those “traps,” for which Google overcorrected, were the well-documented biases of earlier AI systems, which are built on the same kinds of technology as Gemini. Facial recognition software sometimes failed to recognize Black people, for example, or even labeled them as “gorillas.” Loan-approval algorithms showed bias against minorities. And in the image space, earlier AI image generators asked for a CEO or a doctor almost always produced images of White males.

Ironically, Google was criticized in 2020 for firing a Black AI scientist who asserted that its AI efforts were biased, and this backlash may have contributed to the company’s overcorrection in the other direction with Gemini. The underlying problem that Google is trying to solve is not an easy one.

Historically, many new technological products have shown biases. These range from biomedical devices that measure blood oxygen levels less accurately for some ethnic groups, resulting in the underdiagnosis of certain conditions in Black patients, to sensors that fail to register darker-skinned individuals, to the lack of women in clinical drug trials.

In the case of AI, the problem is exacerbated by biases in the training data, usually public data from the internet, which the AI tool then learns and reproduces.

The latest scandal, in which Gemini appears to value diversity over historical accuracy, may have uncovered a much bigger issue. If Big Tech organizations such as Google, which have become the new gatekeepers to the world’s information, are manipulating historical information to fit ideological beliefs or cultural edicts, what else are they willing to change? In other words, have Google and other Big Tech companies been manipulating information about the present or the past, including search results, because of ideology, culture or government censorship?

In the 21st century, forget censoring movies, burning books or producing propaganda films as forms of information control. Those are so 20th century. Today, if it ain’t on Google, it might as well not exist. In this technology-driven world, search engines can be the most effective tool for censoring both the present and the past. To quote a Party slogan from George Orwell’s “1984”: “Who controls the past controls the future: who controls the present controls the past.”

As AI becomes more sophisticated, these fears of Big Tech censorship and manipulation of information (with or without the participation of governments) will only grow. Conversational AI such as ChatGPT may already be replacing search as the preferred way to find and summarize information. Both Google and Microsoft saw this possibility and went all in on AI after the success of ChatGPT.

The possibility even led The Economist to ask, with respect to AI, “Is Google’s 20-year dominance of search in peril?”

Apple has been looking at incorporating OpenAI’s technology and, more recently, Gemini into new versions of its iPhones, which would put AI into regular use by significantly more people.

As a professor, I already see this trend firsthand with my students. They often prefer to use ChatGPT not only to find information but also to summarize it for them in paragraph form. To the younger generation, thanks to AI, web search engines are rapidly becoming as antiquated as the physical card catalogs in today’s libraries.

What makes censorship and manipulation worse is that today’s AI has a well-known hallucination problem; in other words, it sometimes makes things up. I learned this the hard way when students began turning in obviously AI-generated assignments, complete with references that looked great but had one problem: They didn’t actually exist.

Given the hallucination problem, whoever leads AI in the future (whether Google, Microsoft, OpenAI or a new company) will be tempted to “fill in” their own rules for what AI should and shouldn’t produce. This “filling in” will inevitably reflect the biases and culture of each company and could eventually restrict, or at least drastically modify, what AI is allowed or willing to show us, just as Google did with Gemini.

That’s why this one little scandal goes beyond one company’s excessive enthusiasm for diversity, equity and inclusion, or DEI. It may be a portent of what’s to come: AI and Big Tech leading us into Orwellian territory. In a few years, you may ask your friendly AI companion for some historical information, only to have it respond in that maddeningly polite way: “I’m sorry, Dave. I’m afraid I can’t do that.”
