
AI as a reflection of our values


https://openai.com/index/why-language-models-hallucinate/

I first encountered this article and its accompanying white paper a few days ago, and I've been ruminating on it all weekend. The white paper uses the metaphor of students taking a multiple-choice test to illustrate how the efficacy of an AI model is graded. I find this metaphor particularly evocative because multiple-choice testing is frequently criticized as an inadequate measure of a student's understanding of a subject.

The paper highlights that since making a guess increases the likelihood of getting the right answer compared to leaving it blank, AI logically opts to return an assumption or estimate instead of stating, "I don't know." As a student of the United States public education system, I was well acquainted with this advice. After all, leaving an answer blank guarantees a zero, while a guess offers a one-in-four chance of earning a point.
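To make that incentive concrete, here is a minimal sketch of the expected-value arithmetic in Python. The four-option question and the penalty value are my own illustrative assumptions, not numbers from the paper.

```python
# A minimal sketch of the test-taking incentive described above.
# Assumes a four-option multiple-choice question; the penalty
# values are illustrative and not taken from the OpenAI paper.

def expected_score(p_correct: float, wrong_penalty: float = 0.0) -> float:
    """Expected points for answering: +1 if right, -wrong_penalty if wrong."""
    return p_correct * 1.0 - (1.0 - p_correct) * wrong_penalty

blind_guess = 1 / 4  # one-in-four chance on a four-option question
blank = 0.0          # leaving the answer blank always scores zero

# Accuracy-only grading: a blind guess strictly beats abstaining.
print(expected_score(blind_guess))                     # 0.25 > 0.0

# Penalize wrong answers by a third of a point, and the blind
# guess merely breaks even with leaving the answer blank.
print(expected_score(blind_guess, wrong_penalty=1/3))  # 0.0
```

Change the scoring rule and the rational strategy changes with it; the paper's point is that grading on accuracy alone makes guessing the winning strategy, for students and models alike.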

Therefore, it's better to guess, isn't it?

What's the harm of a guess?

The problem with a child guessing on the test is that the teacher loses insight into what the child actually learned and where a lesson needs to be reinforced or approached differently. On top of this, some children are simply good at taking standardized tests. Does a high score mean they learned what we wanted them to learn from the lesson?

Map this back to the article on AI hallucinations, and we see that, yet again, AI is holding up a mirror to our flaws. We approached the work with assumptions and expectations, and the models returned answers that reflect our own faulty definitions of reality and truth. They sought to bolster our egos, because reinforcing preconceived notions is better received than telling someone their premise is invalid. What can we learn about training AI models from alternative educational models such as Constructivism, which encourages problem-solving and critical thinking by connecting new information with prior knowledge?

How do we grade and assess the success of a model without falling back on assessment methods that we already know to be inconclusive for our children?