Week 5 - BALT 4362 - AI Bias
Chapter 8 of Automation with AI introduces concerns about AI that I had not considered. Because it is artificial intelligence, I assumed it could not exhibit bias or discriminate. I decided to go straight to the source and ask ChatGPT how it could have these traits. It agreed with the book that training data is the root of the problem.
"ChatGPT is trained on massive amounts of text data from the internet, books, and other sources. Because those sources can contain stereotypes, biases, or discriminatory views, the model can sometimes reflect or amplify them. OpenAI uses techniques like reinforcement learning with human feedback (RLHF), safety filters, and careful dataset curation to reduce these risks, but bias can still slip through. There are two main ways bias might appear:

Content bias – If certain perspectives or groups are underrepresented or misrepresented in training data, the model's responses may skew in a particular direction.

Interaction bias – Depending on how you phrase a question, ChatGPT may frame its response differently, sometimes unintentionally favoring one view or group."
Because the model's sources were written with bias, its responses cannot help but carry bias. OpenAI has acknowledged that bias can never be fully removed. They can continue to improve their algorithms, and they will need to, because the number of people relying on AI to summarize content is growing rapidly. One of the best examples I can offer as evidence is the social media app X, which has its own AI called Grok. If you tag Grok in the comments of a post, it will generate a response. On almost every post about a world event, you will see hundreds to thousands of people tagging Grok, asking it to summarize, confirm, or disprove the post.