In Seeing Like A State, James C. Scott contrasts formalized systems of knowledge with what he calls mētis. Mētis is “a wide array of practical skills and acquired intelligence in responding to a constantly changing natural and human environment”.
Mētis is usually associated with traditional forms of knowledge, and formalized systems of knowledge with modernity. Scott resists this association because many “traditional” forms of knowledge only look ancient and can quickly adapt in response to new conditions. I wanted to find an example where the reverse is true: where formal knowledge is traditional and mētis is new.
I believe that I have found such an example in computer science.
Prerequisites: My book review of Seeing Like A State, especially the section on mētis, might be helpful, but I think I covered the important information in the introduction.
Originally Written: September 2021.
Confidence Level: I know people who use machine learning, but I have not done so myself. I’m sure I’m oversimplifying some things and that someone else could come up with better examples. I am more confident in the big picture.
Knowledge in traditional computer science is extremely formal.
For a simple program, there is a single developer who wrote all of the code. She should know what every line of the code does.
More complicated programs are not (usually) written by a single person. The knowledge here is still formal. Each component of the code should be completely understood by the person who wrote it, and there should be at least one person who understands how all of the components fit together. The success of the program depends on how well it can be understood from the center.
Of course, this ideal isn’t always realized. But code that deviates from this ideal is considered worse than fully legible code.
Over the last decade or so, a different style of coding has risen to prominence. It is called “machine learning” or “deep learning” or “neural nets” or “artificial intelligence”, although these names are often more aspirations than descriptions.
Machine learning does not involve writing a particular program to accomplish some goal. Instead, it involves multiple layers of generic programs, with lots of free parameters.
The program is then shown many examples of correct solutions to the problem it is trying to solve, and its free parameters are automatically adjusted to improve its performance. The program is trained on data, rather than being told entirely how to behave by the programmer.
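To make “the free parameters are automatically adjusted” concrete, here is a minimal sketch in Python (using numpy). It is a single weighted sum with a handful of parameters, nudged repeatedly to reduce error on labeled examples. Real systems stack many layers of this idea; nothing below corresponds to any particular library’s training code.

```python
import numpy as np

# A toy version of "free parameters adjusted automatically": a single
# weighted sum with two parameters, trained on labeled examples.
# Everything here is illustrative.

rng = np.random.default_rng(0)
w = rng.normal(size=2)   # free parameters, initialized randomly
b = 0.0

def predict(X):
    # squash a weighted sum into a score between 0 and 1
    return 1 / (1 + np.exp(-(X @ w + b)))

# training data: inputs paired with the correct answer (0 or 1)
X = rng.normal(size=(100, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

for step in range(500):
    p = predict(X)
    # how to nudge each parameter to reduce the average error
    grad_w = X.T @ (p - y) / len(y)
    grad_b = np.mean(p - y)
    w -= 0.1 * grad_w
    b -= 0.1 * grad_b
```

The programmer never writes down the rule that separates the two classes; the parameters drift toward it as the examples accumulate.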
An example might make this clearer. Suppose that you want a program that identifies whether there is an apple in a picture. In traditional computer science, the programmer has to define what an apple looks like in a picture. Luckily, apples have a fairly distinctive shape, so you might write a program that looks for a patch of the picture with that shape that is red or yellow or green. Except some apples are multiple colors, so defining an apple as a single-colored patch is not a great idea. Instead, you might look for edges in the picture: places where the color changes abruptly. After tracing all of the edges in the picture, you can look for the shape of an apple among the edges. But this has its own challenges. Figuring out whether a picture has an apple in it is quite difficult for a computer (which is why image recognition is often used for captchas).
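To give a flavor of the rule-based style, here is a minimal sketch in Python. The function name and every threshold are invented for illustration, and notice that it already fails on green and yellow apples, which is exactly the brittleness described above:

```python
import numpy as np

# A hand-written rule in the traditional style. Every number below is
# an arbitrary choice the programmer must make in advance -- which is
# where this approach gets brittle.

def looks_like_apple(image):
    """image: H x W x 3 array of RGB values in the range 0..255."""
    img = image.astype(int)   # avoid uint8 overflow in comparisons
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    # rule 1: a pixel is "apple red" if red clearly dominates
    reddish = (r > 120) & (r > g + 40) & (r > b + 40)
    # rule 2: enough of the picture is that color
    return reddish.mean() > 0.05   # cutoff invented by the programmer
```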
The machine learning approach is very different. Take a generic function that takes a picture as its input and outputs ‘Yes’ or ‘No’. Give the program a lot of pictures, each labeled with whether it contains an apple. As it goes through the pictures, the program adjusts its free parameters so that it gets better at identifying which of those pictures have apples in them. Hopefully, given enough pictures to train on, the program gets good at identifying apples in any picture.
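Here is what that looks like in practice, as a minimal sketch using PyTorch. The layer sizes are invented, and a real image classifier would be much larger and convolutional; the point is that nothing in the code mentions apples:

```python
import torch
import torch.nn as nn

# A generic layered function: nothing in its structure is about apples.
# Sizes assume tiny 32x32 RGB pictures, purely for illustration.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(32 * 32 * 3, 64),
    nn.ReLU(),
    nn.Linear(64, 1),   # one output score: apple or not
)

optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.BCEWithLogitsLoss()

def train_step(pictures, labels):
    # labels: 1.0 if the picture contains an apple, else 0.0
    scores = model(pictures).squeeze(1)
    loss = loss_fn(scores, labels)   # how wrong is the model?
    optimizer.zero_grad()
    loss.backward()                  # attribute the error to each parameter
    optimizer.step()                 # and adjust each one slightly
    return loss.item()
```

The same code, fed different labeled pictures, would learn to recognize oranges, or bicycles, or anything else.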
Machine learning is mētis for computers. It is knowledge gained through experience with a large number of similar situations, rather than knowledge that can be systematically described. How the program works is usually completely illegible, even to the person who wrote the code. Once trained, the program functions as a ‘black box’ that converts the input (e.g. a picture) into an output (e.g. whether or not it contains an apple), without us knowing what happens inside.
Machine learning has not just been successful at image recognition. It has revolutionized many fields of computer science. I will focus on two of them.
Go is a board game that has been played in China for over 2,500 years. It involves placing black and white stones on the intersections of a grid. The goal is to surround territory and capture your opponent’s stones. Much like chess, there are huge variations in skill between different players.
Go is even harder to analyze systematically than chess. On your turn, you can place a stone on (almost) any unoccupied intersection of the board, so the number of possible moves is much larger than in chess. This makes searching ahead through your opponent’s possible responses computationally hopeless. The techniques that allowed Deep Blue to defeat world chess champion Garry Kasparov in 1997 would not work for Go.
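Some rough arithmetic shows the scale of the problem. Commonly cited estimates put the average number of legal moves at around 35 per turn in chess and around 250 in Go; looking ahead even ten turns gives game trees of wildly different sizes:

```python
# Average legal moves per turn: ~35 in chess vs ~250 in Go
# (commonly cited estimates). Looking ahead a mere ten turns:
print(f"chess: {35 ** 10:.1e} positions")   # ~2.8e+15
print(f"go:    {250 ** 10:.1e} positions")  # ~9.5e+23
```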
AlphaGo is a machine learning program that plays Go. It was initially trained on games played by professional Go players, then played against itself to generate a much larger training set. From 2015 to 2017, AlphaGo became the first computer program to beat a professional Go player (Fan Hui), then the first to defeat a world champion (Lee Sedol), and then dramatically surpassed its own earlier versions.
Taking a generic multilayered function and giving it a huge amount of experience playing Go was far more effective than attempting to write down the best strategies for Go.
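The self-play step is the heart of this: the program manufactures its own experience. The sketch below is only a schematic of that loop, with a trivial placeholder “game” and a random placeholder policy standing in for Go; nothing here resembles DeepMind’s actual code.

```python
import random

# Schematic of the self-play loop only. A trivial placeholder "game"
# stands in for Go; the point is the shape of the loop: the current
# player generates its own training data by playing itself.

def play_one_game(policy):
    """Play one placeholder game; return (position, move) pairs and a winner."""
    history, position = [], 0
    for _ in range(10):                    # placeholder ten-move game
        move = policy(position)
        history.append((position, move))
        position += move
    return history, ("black" if position % 2 else "white")

def policy(position):
    return random.choice([1, 2, 3])        # placeholder move selection

# Each finished game becomes labeled training data. In AlphaGo, the
# network is retrained on this data and the loop repeats.
training_data = [play_one_game(policy) for _ in range(1000)]
```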
OpenAI has an even more ambitious goal. It wants to make a program that can write. This program is called GPT, and it is currently on its third version, GPT-3.
GPT-2 was trained on a dataset of 8 million webpages; GPT-3 was trained on a far larger scrape of the web. The model’s goal is to predict the next word. If you give it a short initial prompt, it will predict the next word, then the next, until it has written entire paragraphs.
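GPT-3 itself is only available through OpenAI’s API, but its openly released predecessor GPT-2 can be run locally. Here is a minimal sketch using the Hugging Face transformers library, which performs exactly the loop described above (strictly, the model predicts sub-word tokens rather than whole words, but the idea is the same):

```python
from transformers import pipeline

# Downloads the openly released GPT-2 model (~500 MB) on first run.
generator = pipeline("text-generation", model="gpt2")

# Start from a short prompt; the model extends it one token at a time.
result = generator("Mētis is the knowledge gained through", max_length=60)
print(result[0]["generated_text"])
```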
The results are impressive, though still not as good as human writing. They are certainly much better than any previous attempt to teach a computer to write.
Similar programs have also used machine learning for translation, creating pictures of people and cats, and even making inspirational quotes (although this one might be a joke), just to name a few examples.
Machine learning has been extremely effective at solving many problems that traditional computation found challenging. There are still many problems, though, where traditional computation is better.
Machine learning succeeds because it is based on mētis instead of on formal knowledge. The behavior of the code is not all determined by the programmer(s). Instead, the code is trained using lots of particular examples.
To figure out which problems lend themselves better to mētis and which to formal knowledge, we can look at what problems machine learning has been the most successful at. This will not give an exhaustive list: many problems best approached by mētis also require physically interacting with the world.
The strategy used for machine learning is based on mētis, the informal knowledge gained through experience, not the formal knowledge of traditional computing.