Learning math with LLMs

I am sure that much has been said on this topic somewhere, but actually I haven’t seen a lot of discussion of this precise point.

People seem generally to agree that LLMs have the potential to revolutionize education somehow, and a few people are already productizing this belief in limited ways. In my view, idealized education is a complicated subject, because so much depends on what you are optimizing for and on the child’s natural inclinations, but I don’t want to get sidetracked by that discussion now. I just want to make this narrower claim:

For a sufficiently motivated individual, LLM chatbots have made it tremendously easier to obtain expertise on new topics of interest, particularly in math and science.

This is a pretty intuitive statement on a couple of levels. With a little imagination, we can draw the analogy to aristocratic tutoring, which historically is how many of the greatest minds were taught, and which in the modern day is one of the only things people can agree actually improves educational outcomes (of course it would). It’s hard to enumerate the material constraints placed on even the most capable and well-meaning of teachers when they have to teach multiple students at the same time, even the most capable and enthusiastic of students, even with the benefits of modern technology. The main problem with aristocratic tutoring, by contrast, is that it’s limited by the pool of available human teaching capital and is too resource-intensive to apply at scale. But you know what isn’t? Yeah.

Here are some specific observations about the advantages of learning from LLMs:

Obvious caveats apply about misinformation, hallucinations, and the like. I assume these are real problems that run quite deep, and it’s fun/harrowing to imagine how best to fit these tools to the ideal education of a young child. (For example, regarding the point about asking good questions being a hard skill: asking good questions still pays returns in the clarity of your own thought, and it even increases the capabilities you can extract from talking to an LLM, but maybe having a helpful LLM around disincentivizes you from ever learning the skill in the first place.) But for adults who already know a bit about their topic of choice, or who are simply used to the way learning broadly works, these downsides can be mitigated effectively, and they are incredibly small compared to the upsides.

Reflections

I reflect constantly on the current state of affairs with awe, because when I was a child I spent a lot of time looking for good answers to subtle questions, and being frustrated that I couldn’t find them (on the internet or from the people around me). I also found that I learned a lot of things better in dialogue, and that dialogue kept me enthusiastic and focused, so I wished for some kind of outlet I could talk to about whatever was on my mind and have an engaging conversation with, ideally on-demand and without the limitations of human communities (they take time and energy to coordinate with and travel to; they often lack expertise outside the narrow domain of their interest; even when they have the expertise, they don’t always think on the same wavelength as you; and so on). I spent a long time coming to terms with the fact that I didn’t live in the idealized world I imagined, and that the resources I was looking for didn’t exist. Then they started to, which felt a little like the universe playing a joke on me.

Another thing: I have spent a lot of time investing in becoming a better teacher, in motivating ideas and explaining things clearly from multiple perspectives. This is something I think of as unique and valuable in myself, so, as with artists and other creatives, I don’t like the notion that the effective (outward-facing, “economic”) value of being this way is decreasing, and I’m afraid that its losing its extrinsic value will eventually cause people to lose track of its intrinsic value.

For now, human experts still have the following advantages over LLM/AI teachers:

The bottom line in all of this is that if there is something you have always wanted to know or understand (and it can be taught in a purely text-based format), it has never been easier to learn it, and this is more true the more you were previously limited because the topic was too hard or too nuanced, or because its materials were too dense or obscure. The psychological implications are also substantial. For one, it can feel like the time previously spent learning things “the hard way”, searching for the right word to fit your query, and so on was wasted (God, I’m glad not to know what research, or learning in general, was like before the internet). For another, yet another limitation has been removed: our individual potentials have all gotten higher as a result, and so have the corresponding expectations.

At the beginning of this post I gave the caveat that a person must be “sufficiently motivated”. There aren’t many clear examples of people who have given themselves world-class educations with LLMs, even in the narrow sense that would be achievable in the time since LLMs came out. I think the main reason is that committing to learning something big is genuinely hard, even with all the advantages LLMs bring, and it’s likely that mustering the motivation to see oneself all the way to expertise is now the limiting factor in the education of a lot of people. For this, traditional social structures and obligations, such as schools, still have their place as an external source of motivation, and old-fashioned references and collections of knowledge, such as textbooks, remain helpful as scaffolding or jumping-off points.