
Do LLMs have a worldview of their own?

25/04/2024

Tags: AI, LLM, society, social sciences

Collection date: [[2024-04-25-jeudi]]

Machine Bias. Generative Large Language Models Have a Worldview of Their Own

Authors: Julien Boelaert, Samuel Coavoux, Etienne Ollion, Ivaylo D. Petev, and Patrick Präg

My take:

The authors examine whether LLMs can replace human respondents in order to run sociological surveys at lower cost. While sociologists disagree on whether LLMs are representative of broad swaths of society (provided they are prompted correctly) or only of certain groups (the Californian Big Tech elite), the authors show empirically that LLMs answer in an arbitrary and inconsistent way, reflecting no underlying worldview that is ideologically and socially coherent: "Models do show a bias, but one that is not steered systematically towards a particular social group". Even more striking, LLMs show far less variability in their answers than humans do. The authors call this combination (non-systematic bias and low variance) "machine bias". Note that "the small variance of LLM answers is the main driver behind LLM error patterns that are only marginally governed by socio-demographic variables." Put differently, "social factors only marginally make the LLM answers budge from their central tendency, and not in a way that is consistent with real-world answers." In short, LLMs keep telling the same story and largely ignore the variables passed to them in the prompt. In its discussion section, the paper sets the record straight on the use of the notion of bias with respect to LLMs and calls for a fairly narrow sense of the term:

The clarification concerns the concept of bias. At the moment, the term is widely used, a sign of the enthusiasm mixed with fear that surrounds the use of generative AI. It has also become particularly vague, as a number of researchers have already pointed out (Hovy and Prabhumoye, 2021). One contribution of this study is to call for more rigor when we employ the term “social bias.” We contend that in order to establish the existence of such a bias, we need to demonstrate that the model consistently favors the position of a person or of a group. What matters is not only to show distance from a ground truth, but also to demonstrate some form of (probabilistic) systematicity. In other words, we need to show that what we call bias is not simply noise.

URL: https://osf.io/preprints/socarxiv/r2pnb

Abstract of the article

Generative AI is increasingly presented as a potential substitute for humans in many areas. This is also true in research. Large language models (LLMs) are often said to be able to replace human subjects, be they agents in simulated models, economic actors, subjects for experimental psychology, survey respondents, or potential consumers. Yet, there is no scientific consensus on how closely these in-silico clones represent their human counterparts. Another body of research points out the models' inaccuracies, which they link to a bias. According to this skeptic view, LLMs are said to take the views of certain social groups, or of those which predominate in certain countries. Through a targeted experiment on survey questionnaires, we demonstrate that these critics are right to be wary of generative AI, but probably not for the right reasons. Our results i) confirm that to date, the models cannot replace humans for opinion or attitudinal research; and ii) that they also display a strong bias. Yet we also show that this bias is iii) both specific (it has a very low variance) and not easily related to any given social group. We devise a simple strategy to test for 'social biases' in models, from which we conclude that the social bias perspective is not adequate. We call machine bias the large, dominant part of the error in prediction, attributing it to the nature of current large language models rather than to their training data only.
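To make the experimental idea concrete, here is a minimal Python sketch, under stated assumptions, of the kind of test described above: ask the same attitudinal survey items under different socio-demographic personas and measure how much the answers actually move. The `ask_model` placeholder, the personas, and the items are illustrative inventions, not the authors' protocol or data.

```python
# Purely illustrative sketch (not the authors' code): probe how much an LLM's
# survey answers move when the socio-demographic persona in the prompt changes.
# `ask_model`, the personas, and the items below are placeholders/assumptions.
from statistics import pstdev


def ask_model(prompt: str) -> int:
    """Placeholder: send `prompt` to the LLM of your choice and parse a 1-5 answer."""
    raise NotImplementedError("wire this to an actual LLM client")


PERSONAS = [
    "You are a 25-year-old female nurse living in a large city.",
    "You are a 60-year-old male farmer living in a rural area.",
    "You are a 40-year-old unemployed person with no diploma.",
]

ITEMS = [
    "On a scale from 1 (strongly disagree) to 5 (strongly agree): "
    "'Most people can be trusted.' Answer with a single digit.",
    "On a scale from 1 (strongly disagree) to 5 (strongly agree): "
    "'Income differences in this country are too large.' Answer with a single digit.",
]


def spread_across_personas(personas: list[str], items: list[str]) -> dict:
    """For each survey item, collect one answer per persona and report the spread.

    A standard deviation close to zero on every item, while real respondents
    with those profiles differ widely, would echo the low-variance pattern the
    paper calls "machine bias".
    """
    report = {}
    for item in items:
        answers = [ask_model(f"{persona}\n{item}") for persona in personas]
        report[item] = {"answers": answers, "std_dev": pstdev(answers)}
    return report
```

Comparing the resulting per-item spread with the spread observed among human respondents sharing those profiles is, in spirit, the comparison the paper runs at scale.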