Asli Cazorla Milla

Lost in Translation: When AI Meets Images

Every time I come here I tell myself that I won't talk about AI, but I don't seem to be keeping that promise. Last week, we all read about the Gemini debacle and Google's apology. Here is my takeaway on this interesting development.


In the world of artificial intelligence (AI), Google has consistently been a leader in introducing innovative technologies that influence how we interact digitally. However, recent discussion about the image generation model in Google's AI assistant, formerly called Bard and now rebranded as Gemini, has stirred quite a debate and raised questions about the intricacies of AI and its potential impact on society.



It all started with a few images that I am sure you have seen already; if not, please "Google it" to see them. I am not going to re-share them here. Reports suggested that despite its algorithms and extensive data collection, Gemini displayed a pattern: it seemed to avoid generating images featuring white individuals across various requests. These requests, coming from countries with predominantly white populations such as the United States, Australia, the United Kingdom, and Germany, should logically have resulted in diverse outcomes reflecting the demographic composition of these regions. Instead, users noticed an underrepresentation or complete absence of white individuals in the images generated.


The biggest issue was the failure to produce accurate results even when prompted for historically white figures like popes and knights. I mean, you can't really get that wrong, am I right? This irregularity was consistent across scenarios until Gemini suddenly stopped responding to such prompts altogether, adding more uncertainty about how it generates images. Navigating the thrilling world of AI technology requires us to stay alert and actively combat biases. It's crucial to ensure that these advancements benefit society and uphold values such as fairness, equality, and inclusivity.


Google has now paused its image-generating AI service for a while and apologized, as usual, saying that it is working on a fix. Overall, this is not the first such incident and it won't be the last. However, it should serve as a reminder for the AI community to reflect deeply and take steps to address the ingrained biases that persist in our environment.
