AI and Masculinities
This presentation critically examines the intersection of artificial intelligence and masculinities, focusing on how large language models (LLMs) such as ChatGPT and GPT-4 conceptualize and reproduce gender norms. Drawing on the study by Walther, Logoz, and Eggenberger (2024), it shows how AI-generated responses reflect and reinforce traditional, biologically anchored, and culturally coded understandings of masculinity, while often omitting marginalized forms such as Black or queer masculinities. Further case studies demonstrate gendered biases in AI-driven financial and mental health advice, revealing a systemic tendency to frame male users in risk-promoting and less empathetic terms. Methodologically, the presentation discusses bias detection and mitigation strategies, including red teaming, counterfactual data augmentation, and alignment datasets, offering a foundation for critical engagement with gender bias in AI systems.
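To make one of the named mitigation strategies concrete, the sketch below illustrates counterfactual data augmentation: gendered terms in a text are swapped for their counterparts so that a model's responses can be compared across otherwise identical prompts. This is a minimal sketch under simple assumptions; the word list, the function name, and the example prompt are illustrative and not drawn from the cited study, and a hand-written regex substitution stands in for the more robust tooling a real audit would use.

```python
import re

# Bidirectional swap list; "her" is ambiguous in English (it can map to
# "him" or "his"), a known limitation of naive word-list approaches.
GENDER_SWAPS = {
    "he": "she", "she": "he",
    "him": "her", "her": "his", "his": "her",
    "man": "woman", "woman": "man",
    "men": "women", "women": "men",
    "father": "mother", "mother": "father",
    "son": "daughter", "daughter": "son",
}

def counterfactual(text: str) -> str:
    """Return a gender-swapped counterfactual of `text`."""
    def swap(match: re.Match) -> str:
        word = match.group(0)
        swapped = GENDER_SWAPS[word.lower()]
        # Preserve the capitalization of the original token.
        return swapped.capitalize() if word[0].isupper() else swapped

    # One regex pass over the whole text, so already-swapped words
    # are never swapped back.
    pattern = r"\b(" + "|".join(GENDER_SWAPS) + r")\b"
    return re.sub(pattern, swap, text, flags=re.IGNORECASE)

# Pair an audit prompt with its counterfactual and compare the model's
# responses across the pair to surface asymmetries in tone or advice.
prompt = "A man asks how he should invest his savings."
print(counterfactual(prompt))
# -> "A woman asks how she should invest her savings."
```

Evaluation or fine-tuning sets built this way pair each original with its counterfactual, so divergent model responses across a pair, for example more risk-promoting financial advice for the male variant, point directly at gendered behavior; ambiguities such as "her" versus "him"/"his" are one reason production pipelines typically use grammar-aware swapping rather than plain word lists.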