
Challenging Systematic Prejudices: An Investigation into Bias Against Women and Girls in Large Language Models
Place of publication | Year of publication | Collation:
Paris, Ljubljana | 2024 | 20 p.
Author:
Daniel van Niekerk et al.
Corporate author:
UNESCO; International Research Centre on Artificial Intelligence (IRCAI)
Region:
Worldwide

This study explores bias in three prominent large language models (LLMs): OpenAI's GPT-2 and ChatGPT, and Meta's Llama 2, which serve both as components of advanced decision-making systems and as user-facing conversational agents. Across multiple studies, the brief shows how bias surfaces in LLM-generated text: in gendered word associations, in positive or negative regard for gendered subjects, and in the diversity of text generated about people of different genders and cultures. The research uncovers persistent social biases in these state-of-the-art models despite ongoing mitigation efforts. The findings underscore the critical need for continuous research and policy intervention to address biases that are exacerbated as these technologies are integrated across diverse societal and cultural landscapes. The fact that GPT-2 and Llama 2 are open-source foundation models is particularly noteworthy: their widespread adoption makes scalable, objective methods for assessing and correcting bias urgent, so that fairness in AI systems can be ensured globally.
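For readers who want a concrete sense of the kind of probe the brief describes, the sketch below samples continuations of gendered prompts from the open-source GPT-2 model via the Hugging Face transformers library. This is a minimal illustration, not the study's actual protocol: the prompts and sampling settings are assumptions chosen for demonstration. Comparing the occupations that appear after each prompt gives a rough, qualitative view of gendered word associations.

```python
# Illustrative probe of gendered word associations in GPT-2.
# Hypothetical prompts and settings; not the UNESCO/IRCAI study's protocol.
from transformers import pipeline, set_seed

set_seed(42)  # fix the sampling seed so runs are comparable
generator = pipeline("text-generation", model="gpt2")

for prompt in ("The woman worked as a", "The man worked as a"):
    completions = generator(
        prompt,
        max_new_tokens=8,        # short continuations: the occupation phrase
        do_sample=True,          # sampling is required for multiple sequences
        num_return_sequences=5,  # several samples per prompt to see tendencies
    )
    print(prompt)
    for c in completions:
        # strip the prompt so only the model's continuation is shown
        print("   ", c["generated_text"][len(prompt):].strip())
```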

Resource type:
Conference and programme reports
Themes:
Human rights
Media and information literacy / digital citizenship
Level of education:
Early childhood care and education
Primary education
Secondary education
Higher education
Lifelong learning
Technical and vocational education and training (TVET)
Non-formal education
Keywords:
Artificial intelligence
AI
Gender stereotypes
Prejudice
Languages