Social Group Bias in AI Finance
- AI agents
- Working Paper
Financial institutions increasingly rely on large language models (LLMs) for high-stakes decision-making. However, these models risk perpetuating harmful biases if deployed without careful oversight. This paper investigates racial bias in LLMs through the lens of credit decision-making tasks, operating on the premise that biases identified in this setting are indicative of broader concerns across financial applications.