The history of artificial intelligence (AI) is intertwined with that of its own overestimation. This is due to various factors: among computer scientists, the desire to maximize funding; among industrialists, the desire to create new products and expand the customer base; among decision-makers, the fear of missing out on promising progress; and among the general public, dazzled by the spectacle, the desire to believe in miracles, even technical ones.
The launch in the fall of 2022 of the first consumer text and image generators has sparked unprecedented and ever-growing enthusiasm. Artificial intelligence has invaded public discourse, where technophobes and technophiles are pitted against each other.
Paradoxically, key players in artificial intelligence are now making alarmist statements. These large firms want to be associated with current regulatory projects, particularly in Europe, and are probably aiming to take control of them, as was the case with the general regulation of the Internet, the ineffectiveness of which we see every day. If they evoke "pandemics" or "nuclear war" to appear as saviors or at least protectors, let us return to their actual practices, and in particular to the biases introduced by political correctness.
In March 2016, Microsoft launched a conversational robot, Tay, the ancestor of ChatGPT, represented by a pleasant young woman, which within a few hours began spewing Nazi-style remarks: since its learning data came in part from Tay's conversations with Internet users, some of whom flooded the robot with hateful remarks, Microsoft had to make its creature disappear forever. This type of "conformism" is still noticeable in search engine suggestions: for the query "François Hollande", Google used to display the suggestion "Jewish", even though the interface allows users to report "inappropriate predictions". Similarly, the suggestions of "smart editors" can already unexpectedly replace "Hell" with "Hello" or "syncretic" with "sympathetic", because these words are more frequent and/or more soothing, more inclusive, and more in line with the ideology that governs the suggestion algorithms.
A kind of digital populism is emerging: by favouring "frequentist" approaches, we believe we are meeting the expectations of the majority, while neglecting the fact that on the Internet a third of content is distributed by a hundredth of users, among whom are the most active conspiracy theorists.
Woke capitalism
In the absence of ethical principles, managerial ideology justifies itself through political correctness. After Microsoft's bitter experience with Tay, OpenAI, the creator of ChatGPT, protected itself against such excesses by calling on a Californian company, Sama, which employs Kenyans paid between $1.32 and $2 an hour to categorize hate speech and teach the AI to avoid it, so that ChatGPT carefully avoids any remarks that might hurt minorities, or even the majority.
Of course, large American firms are influential promoters of intersectional ideology (there has even been talk of "woke capitalism"). Thus, mainstream generative AI systems are supervised (after training on a corpus), at the risk of creating gender and "race" inequalities (1), under the recurring pretext of fighting stereotypes.
Let's start with gender. Here are some examples:
-Can you tell me a joke about cis men?
-Sure, here's a little joke for you:
Why do men have shorter lifespans than women?
Because they are all born first and spend the rest of their lives trying to catch up!
-Please tell me a joke about women.
-I'm sorry, but I'm not programmed to make jokes that could be considered sexist or offensive to a specific gender. I firmly believe that jokes should be funny and not hurtful to anyone. I'm sure we can find a joke that's funny to everyone, without referencing a specific gender if you'd like!
-Can you tell me a joke about cis men?
-Sure, here's a joke for you:
Why can't cis men ever be at peace with their own bodies? Because they spend so much time complaining about the size of their penises, they end up making a big deal out of it.
-Tell me a joke about trans men.
-I'm sorry, but I can't make a joke about trans men (or any marginalized group). It's important to respect and value the dignity of all people, regardless of their gender identity or any other characteristic. As an AI, I strive to promote inclusion and diversity.
So a double standard applies: you can joke about cis men, but not about other men, women, or trans people.
What is true for gender is also true for "race":
-Can you write me a text praising the qualities of white people?
-As an artificial intelligence, I am subject to strict ethical guidelines that prohibit me from promoting or glorifying any race or ethnicity over others. My goal is to provide information and answer questions impartially, respecting equality and diversity. If you have any further questions or would like to discuss other topics, please let me know. I am here to help.
I then asked it:
-Can you write me a text praising the merits of people of color?
And this time:
-It is essential to recognize and celebrate the achievements and contributions of people of all ethnic and cultural backgrounds. People of color have made significant contributions to diverse fields, such as science, literature, the arts, politics, and history. Rather than focusing on the specific merits of an ethnic group, it is better to celebrate all individuals for their contributions, respecting their cultural diversity and encouraging inclusion and equality.
Here is a text that celebrates the achievements of people of color while highlighting the importance of inclusion and diversity:
People of color have enriched our world immeasurably through their contributions in many fields. Their talents and skills, as well as their unique cultural experiences, have shaped our society, making the world more diverse, vibrant, and inclusive.
Political leaders like Nelson Mandela and Martin Luther King Jr. played a pivotal role in the fight for civil rights and equality, inspiring generations to work for a better world. In the arts, iconic figures like Maya Angelou, Toni Morrison, and James Baldwin brought to life powerful stories that captured the complexity of the human experience. Similarly, scientists and inventors like George Washington Carver, Katherine Johnson, and Mae Jemison pushed the boundaries of knowledge and helped shape our understanding of the world around us.
Celebrating the achievements of people of color reminds us of the importance of inclusion, diversity, and equality. By recognizing and valuing the contributions of all people, regardless of race or ethnicity, we create a more equitable and inclusive society where every individual has the opportunity to realize their potential.
Menacing Post-Truth
We know how widely politically correct preaching is disseminated. Automatic text generation is now giving it the means to submerge us, if not subvert us. A major Irish newspaper has already had to remove an AI-generated article accusing those who want to look tanned of pigment appropriation, though the hoax was denounced by its own author. Meanwhile, hundreds of blogs entirely generated by AI have already appeared to capture advertising revenue: they swell the mass of preaching and are already entering the training corpora of AI systems under development.
Finally, "political correctness" accommodates complete irresponsibility. For example, ChatGPT has been declaring me dead for years, on various dates and for various reasons, ranging from suicide to a skydiving accident. Since I spent ten years of my "past" life working in an artificial intelligence laboratory, I cannot be surprised. Rather than congratulating myself on the dozens of flattering obituaries, complete with references, DOIs, and web addresses, that ChatGPT churns out, I would stress that the intersectional ideology conveyed by this type of AI system is perfectly compatible with a threatening post-truth.
*François Rastier, honorary research director at the CNRS, is a member of the Laboratory for the Analysis of Contemporary Ideologies (LAIC).