To explore problem solving and critical thinking a little more, I'd like to talk about a 1963 paper by James Slagle, who programmed the first symbolic integrator on punch cards, with a very clever algorithm. It solved 52 of 54 integrals from what sounds like a Calculus II final at MIT* by building a ladder of proximate goals reaching up to an ultimate goal, the solution of the integral. To mimic some aspects of human problem solving, it implemented two sets of rules: those that are always good, and heuristics that sometimes break down but usually move toward the answer. That a program can integrate isn't so surprising in itself, since commercial products, e.g., Mathematica, have been doing so for decades, but that it could be done on a 1959 mainframe computer with so few rules is astounding. I will use this algorithm as the backbone for how to think about problem solving.
The general way that people solve math problems is by searching.** You start with what you're given, then you search the space with known rules and inspiration, trying to move forward one step at a time. From time to time you get stuck, and then you take a few steps back and try another likely path. One of the differences between an expert and a novice is how they react to a setback: a novice usually chooses one way of doing a problem and never steps back.*** An expert tries several tactics until the task is complete. Alan Schoenfeld's work quantified this behavior, and private discussions make it clear that many people consider this searching, à la Polya,(*4) almost a moral imperative of reasoning.
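This search-with-backtracking pattern is easy to caricature in code. Here is a minimal sketch, where the `solve` function, the rules, and the toy number-reaching problem are all my hypothetical stand-ins, not anything from Slagle's or Schoenfeld's work: a depth-first search presses forward one rule application at a time, and returning from a failed recursive call is the "few steps back."

```python
def solve(state, rules, is_goal, depth=0, max_depth=10):
    """Depth-first search: push forward one step at a time; on a dead
    end, return to the caller (the 'few steps back') and try another rule."""
    if is_goal(state):
        return [state]
    if depth >= max_depth:
        return None                   # stuck: backtrack
    for rule in rules:
        for next_state in rule(state):
            path = solve(next_state, rules, is_goal, depth + 1, max_depth)
            if path is not None:      # this line of attack worked
                return [state] + path
    return None                       # no rule worked: backtrack further

# Toy usage: reach 10 from 1 using "double" and "add one" as the rules.
rules = [lambda n: [n * 2], lambda n: [n + 1]]
path = solve(1, rules, lambda n: n == 10)
# path is [1, 2, 4, 8, 9, 10]: doubling overshoots past 8, the search
# backs up, adds one to get 9, overshoots again, backs up, and lands on 10.
```

The novice/expert distinction in the text lives in that `return None`: the novice's search has no backtracking at all, while the expert's keeps cycling through untried rules after each setback.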
Slagle's system worked on a method of proximate goals. That is, you start with a main goal, where you'd like to end up, and as you work toward it, you sometimes identify a subgoal. When you do, you keep track of it. Sometimes you reach a point where several directions are possible. In these cases you keep all of the goals and place them in your tree. You don't try to work all of the goals at once; instead, you use various methods to assign them priority. Some goals can be reached through automatic processes, processes that are guaranteed to move you toward the main goal. You do these first. Others are heuristics, and you look to those only when there is no automatic option. Furthermore, for heuristics, you need to judge how costly each one is in the given situation: how much work it demands relative to the progress it promises. You try the least costly first, and stay on its line only as long as it remains the least costly. When you find a way to reach the main goal, you have solved the problem.
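The bookkeeping in that paragraph — keep every open subgoal, always work on the currently cheapest line — is naturally a priority queue. The following is a sketch under my own assumptions: `solve_goal_tree`, `expand`, `cost`, and the subtraction toy are hypothetical illustrations, and the automatic-versus-heuristic split is folded into a single cost estimate rather than modeled separately.

```python
import heapq

def solve_goal_tree(root_goal, expand, cost, is_solved):
    """Keep every open subgoal on a heap and always work on the one
    whose estimated cost is currently lowest."""
    counter = 0                       # tie-breaker so the heap never compares goals
    frontier = [(cost(root_goal), counter, root_goal)]
    while frontier:
        _, _, goal = heapq.heappop(frontier)
        if is_solved(goal):           # found a way to reach the main goal
            return goal
        for subgoal in expand(goal):  # several directions possible: keep them all
            counter += 1
            heapq.heappush(frontier, (cost(subgoal), counter, subgoal))
    return None

# Toy usage: reduce 8 to 0 by subtracting 3 or 5; the cost estimate is
# distance from zero, so the cheapest-looking line is pursued first.
result = solve_goal_tree(8, lambda n: [n - 3, n - 5], abs, lambda n: n == 0)
# result is 0: the queue pops 8, then the cheaper subgoal 3, then 0.
```

Note that because the whole frontier survives on the heap, abandoning a line that stops being cheapest costs nothing — exactly the "as long as its line remains the least costly" behavior described above.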
This should remind you of the method that people use to solve problems in the real world, as Schoenfeld observed on Betamax: you search the space with known rules, trying to move forward, and when you get stuck, you take a few steps back and try a likelier path.
The program used three types of methods to solve integration problems. The first was a short integration table, a list of "Standard Forms." If you reached a standard form, you had effectively solved the problem; in effect, the rest of the edifice is built to put a non-standard form in the image of a standard form. The second set of methods was those that always improved the situation. Whenever one applied, you tried it and then checked the result against the standard forms. These never branched and didn't require any interesting bookkeeping: you tried one after another until there were no obvious steps left. Finally, there were the heuristics, rules that sometimes improve the situation and sometimes don't (sometimes trying them makes matters worse). At that point the program would try all of the applicable variants, then assess the character of each try to judge which was the best way to proceed. And as I said, it worked in 52 of the 54 cases, and the other two could not be solved because the IBM 7090 didn't have enough memory for more entries in its integral table.
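The three-tier control structure can be sketched as a single recursive routine. Everything concrete here is a stand-in of my own invention — the real SAINT rules rewrote symbolic expressions, not integers — but the shape of the loop matches the description: check the table, commit to any always-good move, and only then branch on heuristics, cheapest-looking first.

```python
def solve(problem, standard_forms, safe_transforms, heuristics, cost, depth=0):
    """Tier 1: look the problem up in the table of standard forms.
    Tier 2: commit to any always-safe transformation and retry.
    Tier 3: try the applicable heuristics, cheapest-looking first."""
    if problem in standard_forms:                  # tier 1: a standard form
        return standard_forms[problem]
    if depth > 20:                                 # crude resource limit
        return None
    for transform in safe_transforms:              # tier 2: always improves
        simpler = transform(problem)
        if simpler is not None:                    # safe moves never branch
            return solve(simpler, standard_forms, safe_transforms,
                         heuristics, cost, depth + 1)
    tries = [h(problem) for h in heuristics]       # tier 3: may make it worse
    for attempt in sorted((t for t in tries if t is not None), key=cost):
        answer = solve(attempt, standard_forms, safe_transforms,
                       heuristics, cost, depth + 1)
        if answer is not None:
            return answer
    return None

# Toy usage: "reduce" an integer, with halving as the safe move and
# +/-3 as competing heuristics, judged by distance from zero.
answer = solve(7, {0: "solved", 1: "solved"},
               [lambda n: n // 2 if n % 2 == 0 else None],
               [lambda n: n - 3, lambda n: n + 3],
               abs)
# answer is "solved": 7 has no safe move, the cheaper heuristic gives 4,
# and safe halving then walks 4 -> 2 -> 1, a "standard form."
```

The asymmetry is the point: safe transforms return immediately (no deviations to track), while heuristics fan out into the tree and have to be ranked before any is pursued.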
This goal-directed reasoning is what I'll use as a paradigm for problem-solving. I often characterize it as a tangram in my classes: physics gives you a set of tiles with which to form the shape of the solution. The tiles are limited, but the forms are infinite.(*5) Arranging the tiles into the correct form is called thinking.
* At MIT, it's part of 18.01 Single Variable Calculus. 'most everywhere else, it's Calculus II - Methods of Integration -- with some idiosyncratic identifier cataloged by the inane, parochial system used by universities (a college catalog is a definitive, smack-down, irrefutable argument against expert judgement, although listening to the faculty senate is even better). I first learned of this algorithm from a very good 6.034 lecture by Patrick Winston, Reasoning: Goal Trees and Problem Solving. A short description of the algorithm can be found in Winston's Artificial Intelligence textbook.
** See Mathematical Problem Solving, Alan Schoenfeld. I would like to show some diagrams from his work, but at this time I don't have a way to use images.
*** A college freshman thinks that problems should take two to three minutes to solve, and genuinely believes that a problem taking more than ten minutes is impossible.
(*4) See, e.g., How to Solve It, George Polya.
(*5) Well, not really. Firstly, of course, there is room for disagreement about how many tiles there are, and what constitutes a different tile. I am sure there are many fewer than the list of 60-odd things the engineering college says it wants the students to learn (about one per 25 minutes of instruction), since "solve a quadratic equation," "Newton's Third Law," and "Kepler's Laws" are all very different types of things. Also, I think that there aren't technically an infinite number of solutions. I am 95% sure there are only seven one-dimensional kinematics problems for one process on one object (and 100% certain for uniformly accelerated motion). Well, I guess, then, if I allow an infinite number of processes and an infinite number of objects, I could end up with an infinite number of kinematics problems. Probably. And these can be sutured onto more complicated dynamics problems, which, unlike kinematics, are really physics.
Sunday, November 28, 2021