000012081 001__ 12081
000012081 005__ 20251007025633.0
000012081 0247_ $$2doi$$a10.6082/uchicago.12081
000012081 037__ $$aTHESIS
000012081 037__ $$bThesis
000012081 041__ $$aeng
000012081 245__ $$aUnveiling Gender Bias in Large Language Models
000012081 260__ $$bUniversity of Chicago
000012081 269__ $$a2024-06
000012081 336__ $$aThesis
000012081 502__ $$bM.A.
000012081 520__ $$aThis paper investigates gender bias in Large Language Model (LLM)-generated teacher evaluations in a higher education setting, focusing on evaluations produced by GPT-4 across six academic subjects. By applying a comprehensive analytical framework that includes Odds Ratio (OR) analysis, the Word Embedding Association Test (WEAT), sentiment analysis, and contextual analysis, this paper identifies patterns of gender-associated language reflecting societal stereotypes. Specifically, words related to approachability and support were used more frequently for female instructors, while words related to entertainment were used predominantly for male instructors, aligning with the concepts of communal and agentic behaviors. The study also found moderate to strong associations between male-salient adjectives and male names, though career and family words did not distinctly capture gender biases. These findings align with prior research on societal norms and stereotypes, reinforcing the notion that LLM-generated text reflects existing biases.
000012081 540__ $$a© 2024 Violet Huang
000012081 542__ $$fCC BY
000012081 690__ $$aSocial Sciences Division
000012081 691__ $$aComputational Social Sciences (MACSS)
000012081 7001_ $$aHuang, Violet$$uUniversity of Chicago
000012081 72012 $$aPedro Alberto Arroyo
000012081 72014 $$aPedro Alberto Arroyo
000012081 8564_ $$9501adccc-bc03-4b10-96d8-1a1af06ea0e6$$erestricted$$s1905285$$uhttps://knowledge.uchicago.edu/record/12081/files/Violet%20Huang%20Thesis%20Final.pdf
000012081 908__ $$aI agree
000012081 909CO $$ooai:uchicago.tind.io:12081$$pTheses$$pGLOBAL_SET
000012081 982__ $$aR_MAPSS
000012081 983__ $$aThesis