Category Archives: representations

Every post that talks about representation of mathematical objects in the most general sense.

Presenting math on the web

This is a long post about ways to present math on the web, in the context of what I have done with The Handbook of Mathematical Discourse and abstractmath.org (Abmath).  “Ways to present math” include both organization and production technology.

The post is motivated by and focused on my plans to reconstruct Abmath this fall, when I will not be teaching.    During the last couple of years I have experimented with several possibilities for the reconstruction (while doing precious little on the actual website) and have come to a tentative conclusion about how I will do it.  I am laying all this out here, past history and future plans, in the hope that readers will have suggestions that will help the process (or change my mind).

I set out to write both the Handbook and Abmath using ideas about how math should be presented on the web.  They came out differently.  Now I think I went wrong with some of the ways in which I organized Abmath and that I need to reconstruct it so that it is more like the Handbook.  On the other hand, I have decided to stick with the production method I used for Abmath. I will explain.

Organization

My concept for both these works was that they  would have these properties:

1) Each work would be a cloud of articles: lots of short articles with little or no hierarchy, not organized into chapters, sections and subsections.

2) The articles would be densely hyperlinked with each other and with the rest of the web. The reader would use the links to move from article to article. The articles might occur in alphabetical order in the production file but to the reader the order would be irrelevant.

I wanted the works to be organized that way because that is what I want from an information-presenting website: I am a grasshopper. Wikipedia and n-lab are each organized as a cloud of articles. I started writing the Handbook in the late nineties, before Wikipedia began.

The Handbook exists in two forms. The web version is a hypertext PDF file that consists of short articles with extensive interlinking. The printed book has the same short articles arranged in alphabetical order. In the book form, the links are replaced by page indices (“paper hyperlinks”). In both forms some links are arranged as lists  of related topics.

Abstractmath.org is a large, interlinked collection of html pages.  They are organized in four large sections with many subsections.

Many entrances

For this cloud of articles arrangement to work, there must be many entrances into the website, so that a reader can find what they want. The Handbook has a list of entries in alphabetical order. Certain entries (for example the entries on attitudes, on behaviors, and on multiple meanings) have internal lists of links to examples of what that entry discusses.  In addition, the paper version has an index that (in theory) provides links to all important occurrences of each concept in the book.  This index is not included in the current hypertext version, although the LaTeX package hyperref would make it possible to include it.  On the other hand, the hypertext version has the PDF search capability.

Abmath has a table of contents, listing articles in hierarchical form, as well as an index, which differs from the Handbook index in that it gives only one link for each word or phrase. In addition, it has header sections that briefly describe the contents of each main section and (in some cases) subsection. There is also a Diagnostic Examples section (currently fragmentary), in which each entry describes a particular problem that someone may have in understanding abstract math, with links to where it is discussed. The website currently has no search capability.

The Handbook is really a cloud of articles, and Abmath is not. I made a serious mistake imposing a hierarchy on Abmath, and that is the main thing I want to correct when I reconstruct it.  Basically, I want to dissolve the hierarchy into a cloud of articles.

Production methods

The Handbook was composed using LaTeX.  It originally existed in hypertext form (in a PDF file) and lived on the web for several years, generating many useful suggestions. I wrote a LaTeX header that could be set to produce PDF output with hyperlinks or PDF output formatted as a book with paper hyperlinks; that form was eventually published as a book.

I used a number of Awk programs to gather the various kinds of links.  For example, every entry referring to a math word that has multiple meanings was marked and an Awk program gathered them into a list of links.
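The Awk programs themselves are not shown here. As a concrete illustration, here is a minimal Python sketch of the same kind of bookkeeping (my reconstruction, not the original Awk; the marker macro \multiplemeaning and the output format are hypothetical):

    import re
    import sys

    # Collect every entry marked as a word with multiple meanings and
    # emit a list of hyperlinks to those entries.  The marker macro and
    # the link format are made up for illustration.
    MARKER = re.compile(r'\\multiplemeaning\{(?P<label>[^}]*)\}')

    links = []
    for path in sys.argv[1:]:
        with open(path, encoding='utf-8') as f:
            for line in f:
                for m in MARKER.finditer(line):
                    label = m.group('label')
                    links.append(r'\hyperlink{%s}{%s}' % (label, label))

    # Print the gathered list, one link per line, for pasting into the source.
    print('\n'.join(sorted(set(links))))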

I generated the html pages for Abmath using Microsoft Word and MathType.  MathType is very easy to use and has the capability (recently acquired) of converting all math entries that it generated  into TeX. The method used for Abmath has several defects.  You can’t apply Awk (or nowadays Python) programs to a Word document since it is in a proprietary format.  Another problem is that the appearance of the result varies with browser.

But the Abmath method also has advantages.  It produces html documents which can be read in windows that you can make narrower or wider and the text will adjust.  PDF files are fixed width and rigid, and I find clicking on links requires you to be annoyingly precise with your fingers.

So my original thought was to go back to LaTeX for the new version of Abmath. There are several ways to produce html files from LaTeX, and converting the MathType entries to TeX provides a big headstart on converting the Word files into text files.  Then I could use Awk to do a lot of bookkeeping and cut the hyperlink errors, the way I did with the Handbook.

So at first I was quite nostalgic about the wonderful time I had doing the Handbook in LaTeX — until I remembered all the fussing I did to include illustrations and marginal remarks (I couldn’t just put the illo there and leave it), and until I remembered how slowly the resulting PDF file loads, because there seems to be no way to break it into individual article files without breaking the links.

And then I found that (as far as I could determine) there is no HTMLTeX that produces a reasonable HTML file from any TeX file the way PDFTeX produces a PDF file from any TeX file, using Knuth’s  TeX program. In fact all the TeX to HTML systems I investigated don’t use Knuth’s program at all — they just have code in some programming language that reads a TeX file and interprets what the programmer felt like interpreting.  I would love to be contradicted concerning this.

So now my thought is to stick with Word and MathType.  And to do textual manipulation I will have to learn Word Basic.  I just ordered two books on Word Basic. I would rather learn Python, but I have to work with what I have already done.  Stay tuned.


Function as map

This is a first draft of an article to eventually appear in abstractmath.

Images and metaphors

To explain a math concept, you need to explain how mathematicians think about the concept. This is what in abstractmath I call the images and metaphors carried by the concept. Of course you have to give the precise definition of the concept and basic theorems about it. But without the images and metaphors most students, not to mention mathematicians from a different field, will find it hard to prove much more than some immediate consequences of the definition. Nor will they have much sense of the place of the concept in math and applications.

Teachers will often explain the images and metaphors with handwaving and pictures in a fairly vague way. That is good to start with, but it’s important to get more precise about the images and metaphors. That’s because images and metaphors are often not quite a good fit for the concept — they may suggest things that are false and not suggest things that are true. For example, if a set is a container, why isn’t the element-of relation transitive? (A coin in a coinpurse in your pocket is a coin in your pocket.)

“A metaphor is a useful way to think about something, but it is not the same thing as the same thing.” (I think I stole that from the Economist.) Here, I am going to get precise with the notion that a function is a map. I am acting like a mathematician in “getting precise”, but I am getting precise about a metaphor, not about a mathematical object.

A function is a map

A map (ordinary paper map) of Minnesota has the property that each point on the paper represents a point in the state of Minnesota. This map can be represented as a mathematical function from a subset of a 2-sphere to {{\mathbb R}^2}. The function is a mathematical idealization of the relation between the state and the piece of paper, analogous to the mathematical description of the flight of a rocket ship as a function from {{\mathbb R}} to {{\mathbb R}^3}.

The Minnesota map-as-function is probably continuous and differentiable, and as is well known it can be angle preserving or area preserving but not both.

So you can say there is a point on the paper that represents the location of the statue of Paul Bunyan in Bemidji. There is a set of points that represents the part of the Mississippi River that lies in Minnesota. And so on.

A function has an image. If you think about it you will realize that the image is just a certain portion of the piece of paper. Knowing that a particular point on the paper is in the image of the function is not the information contained in what we call “this map of Minnesota”.

This yields what I consider a basic insight about function-as-map:  The map contains the information about the preimage of each point on the paper map. So:

The map in the sense of a “map of Minnesota” is represented by the whole function, not merely by the image.

I think that is the essence of the metaphor that a function is a map. And I don’t think newbies in abstractmath always understand that relationship.

A morphism is a map

The preceding discussion doesn’t really represent how we think of a paper map of Minnesota. We don’t think in terms of points at all. What we see are marks on the map showing where some particular things are. If it is a road map it has marks showing a lot of roads, a lot of towns, and maybe county boundaries. If it is a topographical map it will show level curves showing elevation. So a paper map of a state should be represented by a structure preserving map, a morphism. Road maps preserve some structure, topographical maps preserve other structure.

The things we call “maps” in math are usually morphisms. For example, you could say that every simple closed curve in the plane is an equivalence class of maps from the unit circle to the plane. Here “equivalence class” means that we forget the parametrization.

The very fact that I have to mention forgetting the parametrization shows that the commonest mathematical way to talk about morphisms is as point-to-point maps with certain properties. But we think about a simple closed curve in the plane as just a distorted circle. The point-to-point correspondence doesn’t matter. So this example is really talking about a morphism as a shape-preserving map. Mathematicians introduced points into talking about preserving shapes in the nineteenth century, and we are so used to doing that that we think we have to have points for all maps.

Not that points aren’t useful. But I am analyzing the metaphor here, not the technical side of the math.

Groups are functors

People who don’t do category theory think the idea of a mathematical structure as a functor is weird. From the point of view of the preceding discussion, a particular group is a functor from the generic group to some category. (The target category is Set if the group is discrete, Top if it is a topological group, and so on.)

The generic group is a group in a category called its theory or sketch that is just big enough to let it be a group. If the theory is the category with finite products that is just big enough then it is the Lawvere theory of the group. If it is a topos that is just big enough then it is the classifying topos of groups. The theory in this sense is equivalent to some theory in the sense of string-based logic, for example the signature-with-axioms (equational theory) or the first order theory of groups. Johnstone’s Elephant book is the best place to find the translation between these ideas.

A particular group is represented by a finite-limit-preserving functor on the algebraic theory, or by a logical functor on the classifying topos, and so on; these constructions bring with them the right concept of group homomorphism as well (the homomorphisms are exactly the natural transformations).

The way we talk about groups mimics the way we talk about maps. We look at the symmetric group on five letters and say its multiplication is noncommutative. “Its multiplication” tells us that when we talk about this group we are talking about the functor, not just the values of the functor on objects. We use the same symbols of juxtaposition for multiplication in any group, “{1}” or “{e}” for the identity, “{a^{-1}}” for the inverse of {a}, and so on. That is because we are really talking about the multiplication, identity and inverse function in the generic group — they really are the same for all groups. That is because a group is not its underlying set, it is a functor. Just like the map of Minnesota “is” the whole function from the state to the paper, not just the image of the function.


Technical meanings clash with everyday meanings

Recently (see note [a]) on MathOverflow, Colin Tan asked [1] “What does ‘kernel’ mean in ‘integral kernel’?”  He had noticed the different use of the word in referring to the kernels of morphisms.

I have long thought [2] that the clash between technical meanings and everyday meaning of technical terms (not just in math) causes trouble for learners.  I have recently returned to teaching (discrete math) and my feeling is reinforced — some students early in studying abstract math cannot rid themselves of thinking of a concept in terms of familiar meanings of the word.

One of the worst areas is logic, where “implies” causes well-known bafflement.   “How can ‘If P then Q’ be true if P is false??”  For a large minority of beginning college math students, it is useless to say, “Because the truth table says so!”.  I may write in large purple letters (see [3] for example) on the board and in class notes that The Definition of a Technical Math Concept Determines Everything That Is True About the Concept but it does not take.  Not nearly.
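For reference, here is the truth table in question (the standard one, included for convenience):

    P      Q      If P then Q
    true   true   true
    true   false  false
    false  true   true
    false  false  true

The last two rows, the ones with P false, are precisely the rows that students balk at.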

The problem seems to be worse in logic, which changes the meaning of words used in communicating math reasoning as well as those naming math concepts. But it is bad enough elsewhere in math.

Colin’s question about “kernel” is motivated by these feelings, although in this case it is the clash of two different technical meanings given to the same English word — he wondered what the original idea was that resulted in the two meanings.  (This is discussed by those who answered his question.)

Well, when I was a grad student I made a more fundamental mistake when I was faced with two meanings of the word “domain” (in fact there are at least four meanings in math).  I tried to prove that the domain of a continuous function had to be a connected open set.  It didn’t take me all that long to realize that calculus books talked about functions defined on closed intervals, so then I thought maybe it was the interior of the domain that was a, uh, domain, but I pretty soon decided the two meanings had no relation to each other.   If I am not mistaken Colin never thought the two meanings of “kernel” had a common mathematical definition.

It is not wrong to ask about the metaphor behind the use of a particular common word for a technical concept.  It is quite illuminating to get an expert in a subject to tell about metaphors and images they have about something.  Younger mathematicians know this.  Many of the questions on MathOverflow are asking just for that.  My recollection of the Bad Old Days of Abstraction and Only Abstraction (1940-1990?) is that such questions were then strongly discouraged.

Notes

[a] The recent stock market crash has been blamed [4] on the fact that computers make buy and sell decisions so rapidly that their actions cannot be communicated around the world fast enough because of the finiteness of the speed of light.  This has affected academic exposition, too.  At the time of writing, “recently” means yesterday.

References

[1] Colin Tan, “What does ‘kernel’ mean in ‘integral kernel’?

[2] Commonword names for technical concepts (previous blog).

[3] Definitions. (Abstractmath).

[4] John Baez, This Week’s Finds in Mathematical Physics, Week 297.


Syntax Trees in Mathematicians’ Brains

Understanding the quadratic formula

In my last post I wrote about how a student’s pattern recognition mechanism can go awry in applying the quadratic formula.

The template for the quadratic formula says that the solution of a quadratic equation of the form ${ax^2+bx+c=0}$ is given by the formula

$\displaystyle x=\frac{-b\pm\sqrt{b^2-4ac}}{2a}$

When you ask students to solve ${a+bx+cx^2=0}$ some may write

$\displaystyle x=\frac{-b\pm\sqrt{b^2-4ac}}{2a}$

instead of

$\displaystyle x=\frac{-b\pm\sqrt{b^2-4ac}}{2c}$

That’s because they have memorized the template in terms of the letters ${a}$, ${b}$ and ${c}$ instead of in terms of their structural meaning — $ {a}$ is the coefficient of the quadratic term, ${c}$ is the constant term, etc.

The problem occurs because there is a clash between the occurrences of the letters “a”, “b”, and “c” in the template and in the equation to solve. But maybe the confusion would occur anyway, just because of the ordering of the coefficients. As I asked in the previous post, what happens if students are asked to solve $ {3+5x+2x^2=0}$ after having learned the quadratic formula in terms of ${ax^2+bx+c=0}$? Some may make the same kind of mistake, getting ${x=-1}$ and ${x=-\frac{2}{3}}$ instead of $ {x=-1}$ and $ {x=-\frac{3}{2}}$. Has anyone ever investigated this sort of thing?

People do pattern recognition remarkably well, but how they do it is mysterious. Just as mistakes in speech may give the linguist a clue as to how the brain processes language, students’ mistakes may tell us something about how pattern recognition works in parsing symbolic statements as well as perhaps suggesting ways to teach them the correct understanding of the quadratic formula.

Syntactic Structure

“Structural meaning” refers to the syntactic structure of a mathematical expression such as ${3+5x+2x^2}$. It can be represented as a tree:

(1)

This is more or less the way a program compiler or interpreter for some language would represent the polynomial. I believe it corresponds pretty well to the organization of the quadratic-polynomial parser in a mathematician’s brain. This is not surprising: The compiler writer would have to have in mind the correct understanding of how polynomials are evaluated in order to write a correct compiler.
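As one concrete instance of a compiler-style representation (my example; Python is just a convenient parser to ask, and nothing below depends on it):

    import ast

    # Ask Python's own parser for the syntax tree of the polynomial.
    tree = ast.parse("3 + 5*x + 2*x**2", mode="eval")
    print(ast.dump(tree, indent=2))  # the indent argument needs Python 3.9+

    # The outermost node is an addition whose left child is itself the
    # addition 3 + 5*x, so Python groups the sum as (3 + 5*x) + 2*x**2.
    # Other grammars group the same expression differently, a point
    # taken up later in this post.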

Linguists represent English sentences with syntax trees, too. This is a deep and complicated subject, but the kind of tree they would use to represent a sentence such as “My cousin saw a large ship” would look like this:

Parsing by mathematicians

Presumably a mathematician has constructed a parser that builds a structure in their brain corresponding to a quadratic polynomial, using the same mechanisms by which, as a child, they learned to parse sentences in their native language. The mathematician learned this mostly unconsciously, just as a child learns a language. In any case it shouldn’t be surprising that the mathematician’s syntax tree for the polynomial is similar to the compiler’s.

Students who are not yet skilled in algebra have presumably constructed incorrect syntax trees, just as young children do for their native language.

Lots of theoretical work has been done on human parsing of natural language. Parsing mathematical symbolism to be compiled into a computer program is well understood. You can get a start on both of these by reading the Wikipedia articles on parsing and on syntax trees.

There are papers on students’ misunderstandings of mathematical notation. Two articles I recently turned up in a Google search are:

Both of these papers talk specifically about the syntax of mathematical expressions. I know I have read other such papers in the past, as well.

What I have not found is any study of how the trained mathematician parses mathematical expressions.

For one thing, the branching in (1) is wrong for my parsing of the expression ${3+5x+2x^2}$. I think of ${3+5x+2x^2}$ as “Take 3 and add ${5x}$ to it and then add ${2x^2}$ to that”, which would require the shape of the tree to be like this:

I am saying this from introspection, which is dangerous!

Of course, a compiler may group it that way, too, although my dim recollection of the little bit I understand about compilers is that they tend to group it as in (1) because they read the expression from left to right.

This difference in compiling is well-understood.  Another difference is that the expression could be compiled using addition as an operator on a list, in this case a list of length 3.  I don’t visualize quadratics that way but I certainly understand that it is equivalent to the tree in Diagram (1).  Maybe some mathematicians do think that way.

But these observations indicate what might be learned about mathematicians’ understanding of mathematical expressions if linguists and mathematicians got together to study human parsing of expressions by trained mathematicians.

Some educational constructivists argue against the idea that there is only one correct way to understand a mathematical expression.  To have many metaphors for thinking about math is great, but I believe we want uniformity of understanding of the symbolism, at least in the narrow sense of parsing, so that we can communicate dependably.  It would be really neat if we discovered deep differences in parsing among mathematicians.  It would also be neat if we discovered that mathematicians parsed in generally the same way!



Templates in mathematical practice

This post is a first pass at what will eventually be a section of abstractmath.org. It’s time to get back to abstractmath; I have been neglecting it for a couple of years.

What I say here is based mainly on my many years of teaching discrete mathematics at Case Western Reserve University in Cleveland and more recently at Metro State University in Saint Paul.

Beginning abstract math

College students typically get into abstract math at the beginning in such courses as linear algebra, discrete math and abstract algebra. Certain problems that come up in those early courses can be grouped together under the notion of (what I call) applying templates (see note 1). These are not the problems people usually think about concerning beginners in abstract math, of which the following is an incomplete list:

The students’ problems discussed here concern understanding what a template is and how to apply it.

Templates can be formulas, rules of inference, or mini-programs. I’ll talk about three examples here.

The template for quadratic equations

The solution of a real quadratic equation of the form {ax^2+bx+c=0} is given by the formula

\displaystyle  x=\frac{-b\pm\sqrt{b^2-4ac}}{2a}

This is a template for finding the roots of the equations. It has subtleties.

For example, the numerator is symmetric in {a} and {c} but the denominator isn’t. So sometimes I try to trick my students (warning them ahead of time that that’s what I’m trying to do) by asking for a formula for the solution of the equation {a+bx+cx^2=0}. The answer is

\displaystyle x=\frac{-b\pm\sqrt{b^2-4ac}}{2c}

I start writing it on the board, asking them to tell me what comes next. When we get to the denominator, often someone says “{2a}”.

The template is telling you that the denominator is 2 times the coefficient of the square term. It is not telling you it is “{a}”. Using a template (in the sense I mean here) requires pattern matching, but in this particular example, the quadratic template has a shallow incorrect matching and a deeper correct matching. In detail, the shallow matching says “match the letters” and the deep matching says “match the position of the letters”.

Most of the time the quadratic being matched has particular numbers instead of the same letters that the template has, so the trap I just described seldom occurs. But this makes me want to try a variation of the trick: Find the solution of {3+5x+2x^2=0}. Would some students match the textual position (getting {a=3}) instead of the functional position (getting {a=2})? (See note 2.) If they did they would get the solutions {(-1,-\frac{2}{3})} instead of {(-1,-\frac{3}{2})}.
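To spell out the arithmetic (a routine check, added here for concreteness): in functional positions, {3+5x+2x^2=0} has {a=2}, {b=5}, {c=3}, so the template gives

\displaystyle x=\frac{-5\pm\sqrt{25-24}}{2\cdot 2}=\frac{-5\pm 1}{4},

that is, {x=-1} and {x=-\frac{3}{2}}. The textual matching {a=3}, {b=5}, {c=2} puts {2\cdot 3=6} in the denominator and gives {\frac{-5\pm 1}{6}}, that is, the pair {(-1,-\frac{2}{3})} above.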

Substituting into algebraic expressions has other traps, too. What sorts of mistakes would students make in solving {3x^2+b^2x-5=0}?

Most students on the verge of abstract math don’t make mistakes with the quadratic formula that I have described. The thing about abstract math is that it uses more sophisticated templates:

  • subject to conditions
  • with variations
  • with extra levels of abstraction

The template for proof by induction

This template gives a method of proof of a statement of the form {\forall{n}\mathcal{P}(n)}, where {\mathcal{P}} is a predicate (presumably containing {n} as a variable) and {n} varies over positive integers. The template says:

Goal: Prove {\forall{n}\mathcal{P}(n)}.

Method:

  • Prove {\mathcal{P}(1)}
  • For an arbitrary integer {n\geq 1}, assume {\mathcal{P}(n)} and deduce {\mathcal{P}(n+1)}.

For example, to prove {\forall n (2^n+1\geq n^2)} using the template, you have to prove that {2^1+1\geq 1^2}, and that for any {n\geq 1}, if {2^n+1\geq n^2}, then {2^{n+1}+1\geq (n+1)^2}. You come up with the need to prove these statements by substituting into the template. This template has several problems that the quadratic formula does not have.

Variables of different types

The variable {n} is of type integer and the variable {\mathcal{P}} is of type predicate (see note 3). Having to deal with several types of variables comes up already in multivariable calculus (vectors vs. numbers, cross product vs. numerical product, etc.) and they multiply like rabbits in beginning abstract math classes. Students sometimes write things like “Let {\mathcal{P}=n+1}”. Multiple types are a big problem that math ed people don’t seem to discuss much (correct me if I am wrong).

Free and bound

The variable {n} occurs as a bound variable in the Goal and a free variable in the Method. This happens in this case because the induction step in the Method originates as the requirement to prove {\forall  n(\mathcal{P}(n)\rightarrow\mathcal{P}(n+1))}, but as I have presented it (which seems to be customary) I have translated this into a requirement based on modus ponens. This causes students problems, if they notice it. (“You are assuming what you want to prove!”) Many of them apparently go ahead and produce competent proofs without noticing the dual role of {n}. I say more power to them. I think.
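One way to put this (my gloss, not part of the template): what the Method actually establishes is {\mathcal{P}(1)} together with {\forall n(\mathcal{P}(n)\rightarrow\mathcal{P}(n+1))}; the individual cases then follow by repeated modus ponens:

\displaystyle \mathcal{P}(1),\ \mathcal{P}(1)\rightarrow\mathcal{P}(2)\ \vdash\ \mathcal{P}(2);\qquad \mathcal{P}(2),\ \mathcal{P}(2)\rightarrow\mathcal{P}(3)\ \vdash\ \mathcal{P}(3);\ \dots

Nothing is assumed outright; {\mathcal{P}(n)} is assumed only inside the proof of the implication.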

The template has variations

  • You can start the induction at other places.
  • You may have to have two starting points and a double induction hypothesis (for {n-1} and {n}). In fact, you will have to have two starting points, because it seems to be a Fundamental Law of Discrete Math Teaching that you have to talk about the Fibonacci function ad nauseam.
  • Then there is strong induction.

It’s like you can go to the store and buy one template for quadratic equations, but you have to buy a package of templates for induction, like highway engineers used to buy packages of plastic French curves to draw highway curves without discontinuous curvature.

The template for row reduction

I am running out of time and won’t go into as much detail on this one. Row reduction is an algorithm. If you write it up as a proper computer program there have to be all sorts of if-thens depending on what you are doing it for. For example, if you want solutions to the simultaneous equations

2x+4y+z = 1
x+2y = 0
x+2y+4z = 5

you must row reduce the augmented matrix

\displaystyle \left(\begin{array}{ccc|c} 2 & 4 & 1 & 1\\ 1 & 2 & 0 & 0\\ 1 & 2 & 4 & 5 \end{array}\right)

which gives you

\displaystyle \left(\begin{array}{ccc|c} 1 & 2 & 0 & 0\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1 \end{array}\right)

This introduces another problem with templates: They come with conditions. In this case the condition is “a row of three 0s followed by a nonzero number means the equations have no solutions”. (There is another condition when there is a row of all 0’s.)
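For concreteness, here is a minimal sketch (mine, not from the post) of how those if-thens can be made explicit in code, using SymPy’s exact row reduction on the augmented matrix above:

    from sympy import Matrix

    # Augmented matrix for 2x+4y+z=1, x+2y=0, x+2y+4z=5.
    A = Matrix([[2, 4, 1, 1],
                [1, 2, 0, 0],
                [1, 2, 4, 5]])

    R, pivots = A.rref()   # reduced row echelon form and pivot columns
    print(R)

    # The conditions mentioned above, written as explicit if-thens:
    n_unknowns = A.cols - 1
    if n_unknowns in pivots:
        # A pivot in the augmented column means a row like (0 0 0 | 1).
        print("inconsistent: no solutions")
    elif len(pivots) < n_unknowns:
        # Fewer pivots than unknowns (e.g. a row of all 0's): free variables.
        print("infinitely many solutions")
    else:
        print("unique solution")

Run on the matrix above, this prints the reduced matrix and “inconsistent: no solutions”, which is the condition quoted in the previous paragraph.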

It is very easy for the new student to get the calculation right but to never sit back and see what they have — which conditions apply or whatever.

When you do math you have to repeatedly lean in and focus on the details and then lean back and see the Big Picture. This is something that has to be learned.

What to do, what to do

I have recently experimented with being explicit about templates, in particular going through examples of the use of a template after explicitly stating the template. It is too early to say how successful this is. But I want to point out that even though it might not help to be explicit with students about templates, the analysis in this post of a phenomenon that occurs in beginning abstract math courses

  • may still be accurate (or not), and
  • may help teachers teach such things if they are aware of the phenomenon, even if the students are not.

Notes

  1. Many years ago, I heard someone use the word “template” in the way I am using it now, but I don’t recollect who it was. Applied mathematicians sometimes use it with a meaning similar to mine to refer to soft algorithms–recipes for computation that are not formal algorithms but close enough to be easily translated into a sufficiently high level computer language.
  2. In the formula {ax^2+bx+c}, the “{a}” has the first textual position but the functional position as the coefficient of the quadratic term. This name “functional position” has nothing to do with functions. Can someone suggest a different name that won’t confuse people?
  3. I am using “variable” the way logicians do. Mathematicians would not normally refer to “{\mathcal{P}}” as a variable.
  4. I didn’t say anything about how templates can involve extra layers of abstraction.  That will have to wait.

Thinking about mathematical objects revisited

How we think about X

It is notable that many questions posted at MathOverflow are like, “How should I think about X?”, where X can be any type of mathematical object (quotient group, scheme, fibration, cohomology and so on).  Some crotchety contributors to that group want the questions to be specific and well-defined, but “how do I think about…” questions  are in my opinion among the most interesting questions on the website.  (See note [a]).

Don’t confuse “How do I think about X” with “What is X really?” (pace Reuben Hersh).  The latter is a philosophical question.  As far as I am concerned, thinking about how to think about X is very important and needs lots of research by mathematicians, educators, and philosophers — for practical reasons: how you think about it helps you do it.  Knowing what it “really is” is no help, and anyway no answer may exist.

Inert and eternal

The idea that mathematical objects should be thought of as “inert” and “eternal” has been around for a while.  (Never mind whether they really are inert and eternal.)  I believe, and have said in the past [1], that thinking about them that way clears up a lot of confusion in newbies concerning logical inference.

  • That mathematical objects are “inert” means that they do not cause anything. They have no effect on the real world or on each other.
  • That they are “eternal” means they don’t change over time.

Naturally, a function (a mathematical object) can model change over time, and it can model causation, too, in that it can describe a process that starts in one state and achieves stasis in another state (that is just one way of relating functions to causation).  But when we want to prove something about a type of math object, our metaphorical understanding of them has to lose all its life and color and go dead, like the dry bones before Ezekiel started nagging them.

It’s only mathematical reasoning if it is about dead things

The effect on logical inference can be seen in the fact that “and” is a commutative logical operator. 

  • “x > 1 and x < 3” means exactly the same thing as “x < 3 and x > 1”.
  • “He picked up his umbrella and went outside” does not mean the same thing as “He went outside and picked up his umbrella”.

The most profound effect concerns logical implication.  “If x > 1 then x > 0” says nothing to suggest that x > 1 causes it to be the case that x > 0.  It is purely a statement about the inert truth sets of two predicates lying around the mathematical boneyard of objects:  The second set includes the first one.  This makes vacuous implication perfectly obvious.  (The number -1 lies in neither truth set and is irrelevant to the fact of inclusion).

Inert and eternal rethought

There are better metaphors than these.  The point about the number 3 is that you think about it as outside time. In the world where you think about 3 or any other mathematical object, all questions about time are meaningless.

  • In the sentence “3 is a prime”, we need a new tense in English like the tenses ancient (very ancient) Greek and Hebrew were supposed to have (perfect with gnomic meaning), where a fact is asserted without reference to time.
  • Since causation involves “this happens, then that happens”, all questions about causation are meaningless, too.  It is not true that 3 causes 6 to be composite, while being irrelevant to the fact that 35 is composite.

This single metaphor “outside time” thus can replace the two metaphors “inert” and “eternal” and (I think) shows that the latter two are really two aspects of the same thing.

Caveat

Thinking of math objects as outside time is a Good Thing when you are being rigorous, for example doing a proof.  The colorful, changing, full-of-life way of thinking of math that occurs when you say things like the statements below is vitally necessary for inspiring proofs and for understanding how to apply the mathematics.

  • The harmonic series goes to infinity in a very leisurely fashion.
  • A function is a machine — when you dump in a number it grinds away and spits out another number.
  • At zero, this function vanishes.

Acknowledgment

Thanks to Jody Azzouni for the italics (see [3]).

Notes

a.  Another interesting type of question is “in what setting does such and such a question (or proof) make sense?”  An example is my question in [2].

References

1.  Proofs without dry bones

2. Where does the generic triangle live?

3. The revolution in technical exposition II.


Three kinds of mathematical thinkers

This is a continuation of my post Syntactic and semantic thinkers, in which I mentioned Leone Burton’s book [1] but hadn’t read it yet.  Well, now it is due back at the library so I’d better post about it!

I recommend this book for anyone interested in knowing more about how mathematicians think about and learn math.  The book is based on in-depth interviews with seventy mathematicians.  (One in-depth interview is worth a thousand statistical studies.)   On page 53, she writes

At the outset of this study, I had two conjectures with respect to thinking style.  The first was that I would find the two different thinking styles, the visual and the analytic, well recorded in the literature… The second was that research mathematicians would move flexibly between the two.  Neither of these conjectures were confirmed.

What she discovered was three styles of mathematical thinking:

Style A: Visual (or thinking in pictures, often dynamic)

Style B: Analytic (or thinking symbolically, formalistically)

Style C: Conceptual (thinking in ideas, classifying)

Style B corresponds more or less with what was called “syntactic” in [3] (based on [2]).  Styles A and C are rather like the distinctions I made in [3] that I called “conceptual” and “visual”, although I really want Style A to communicate not only “visual” but “geometric”.

I recommend jumping through the book reading the quotes from the interviews.  You get a good picture of the three styles that way.

Visual vs. conceptual

I had thought about this distinction before and have had a hard time explaining what “conceptual” means, particularly since for me it has a visual component.  I mentioned this in [3].  I think about various structures and their relationship by imagining them as each in a different part of a visual field, with the connections, as near as I can tell, felt rather than seen.  I do not usually think in terms of the structures’ names (see [4]).  It is the position that helps me know what I am thinking about.

When it comes time to write up the work I am doing, I have to come up with names for things and find words to describe the relationships that I was feeling. (See remark (5) below).  Sometimes I have also written things down and come up with names, and when this happened very much I invariably got a clash of notation that didn’t bother me when I was thinking about the concepts, because the notations referred to things in different places.

I would be curious if others do math this way.  Especially people better than I am.  (Clue to a reasonable research career:  Hang around people smarter than you.)

Remarks

1) I have written a lot about images and metaphors [5], [6].  They show up in the way I think about things sometimes.  For example, when I am chasing a diagram I am thinking of each successive arrow as doing something.  But I don’t have any sense that I depend a lot on metaphors.  What I depend on is my experience with thinking about the concept!

2) Some of the questions on Math Overflow are of the “how do I think about…” type (or “what is the motivation for…”).  Some of the answers have been Absolutely Entrancing.

3) Some of the respondents in [1] mentioned intuition, most of them saying that they thought of it as an important part of doing math.  I don’t think the book mentioned any correlation between these feelings and the Styles A, B, C, but then I didn’t read the book carefully.  I never read any book carefully. (My experience is that Style B people of the “Logic Rules” subtype diss intuition, but not analysts of the sort who estimate errors and so on.)

4) Concerning A, B, C:  I use Style C (conceptual) thinking mostly, but a good bit of Style B (analytic) as well.  I think geometrically when I do geometry problems, but my research has never tended in that direction.  Often the analytic part comes after most of the work has been done, when I have to turn the work into a genuine dry-bones proof.

5) As an example of how I have sometimes worked, I remember doing a paper about lifting group automorphisms (see [7]), in which I had a conceptual picture with a conceptual understanding of the calculations of doing one transformation after another which produced an exact sequence in cohomology.  When I wrote it up I thought it would be short.  But all the verifications made the paper much longer.  The paper was conceptually BigChunk BigChunk BigChunk BigChunk … but each BigChunk required a lot of Analytic work.  Even so, I missed a conceptual point (one of the groups involved was a stabilizer but I didn’t notice that.)

References

[1] Leone Burton, Mathematicians as Enquirers: Learning about Learning Mathematics.  Kluwer, 2004.

[2] Keith Weber, How syntactic reasoners can develop understanding, evaluate conjectures, and generate counterexamples in advanced mathematics. Proof copy available from Science Direct.

[3] Post on this blog: Syntactic and semantic thinkers.

[4] Post: Thinking without words.

[5] Post: Proofs without dry bones.

[6] Abstractmath.org article on Images and Metaphors.

[7] Post: Automorphisms of group extensions updated.


Naive proofs

The monk problem

A monk starts at dawn at the bottom of a mountain and goes up a path to the top, arriving there at dusk. The next morning at dawn he begins to go down the path, arriving at dusk at the place he started from on the previous day. Prove that there is a time of day at which he is at the same place on the path on both days.

Proof: Envision both events occurring on the same day, with a monk starting at the top and another starting at the bottom at the same time and doing the same thing the monk did on different days. They are on the same path, so they must meet each other. The time at which they meet is the time required.

The pons asinorum

Theorem: If a triangle has two equal angles, then it has two equal sides.

Proof: In the figure below, assume angle ABC = angle ACB. Then triangle ABC is congruent to triangle ACB since the sides BC and CB are equal and the adjoining angles are equal.

[Figure: triangle with vertices labeled A, B, C]

I considered the monk problem at length in my post Proofs Without Dry Bones.  Proofs like the one given of the pons asinorum, particularly its involvement with labeling, recently came up on the mathedu mailing list.  See also my question on Math Overflow.

Naive proofs

These proofs share a characteristic property; I propose to say they are naive, in the sense Halmos used it in his title Naive Set Theory.

The monk problem proof is naive.

For the monk problem, you can give a model of a known mathematical type (for example model the paths as  smoothly parametrized curves on a surface) and use known theorems (for example the intermediate value theorem) and facts (for example that clock time is cyclical and invariant under the appropriate mapping) to prove it.  But the proof says nothing about that.
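To make the preceding paragraph concrete, here is a minimal sketch of one such model (my formulation, not part of the proof above): parametrize both trips by clock time running from dawn (t = 0) to dusk (t = 1), and let u(t) and d(t) be the monk's distance along the path, measured from the bottom, at time t on the two days. Then u(0) = 0, u(1) = L, d(0) = L, d(1) = 0, where L is the length of the path. Assuming u and d are continuous, f(t) = u(t) - d(t) is continuous with f(0) = -L and f(1) = L, so by the intermediate value theorem f(t0) = 0 for some t0; at clock time t0 the monk is at the same place on the path on both days.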

You could imagine inventing an original set of axioms for the monk problem, giving axioms for a structure that are satisfied by the monk’s journeys and their timing and that imply the result.  In principle, these could be very different from multivariable calculus ideas and still serve the purpose. (But I have not tried to come up with such a thing.)

But the proof as given simply uses directly  known facts about clock time and traveling on paths.  These are known to most people.  I have claimed in several places that this proof is still a mathematical proof.

Every proof is incomplete in the sense that it provides a mathematical model and analyzes it using facts the reader is presumed to know.  Proofs never go all the way to foundations.  A naive proof simply depends more than usual on the reader’s knowledge: the percentage of explication is lower.  Perhaps “naive” should also include the connotation that the requisite knowledge is “common knowledge”.

The pons asinorum proof is naive.

This involves some subtle issues.  When I first wrote about this proof in the Handbook I envisioned the triangle as existing independently of any embedding in the plane, as if in the Platonic world of ideals.  I applied some labels and a relabeling and used a known theorem of Euclid’s geometry.  You certainly don’t have to know where the triangle is in order to understand the proof.

That’s a clue.  The triangle in the problem does not need to be planar. The theorem is true for triangles on the sphere or on a saddle surface, because the proof does not involve the parallel axiom. But the connection with the absence of the parallel axiom is illusory.  When you imagine the triangle in your head the proof works directly for a triangle in any suitable geometry, by imagining the triangle as existing in and of itself, and not embedded in anything.

Questions

  1. How do you give a mathematical definition of a triangle so that it is independent of embedding?  This was the origin of my question on Math Overflow, although I muddled the issue by mentioning specific ways of doing it.
  2. (This is a variant of question 1.)  Is there anything like a classifying topos or space for a generic triangle?  In other words, a category or space or something that is just big enough to include the generic triangle and from which mappings to suitable spaces or categories produce what we usually mean by triangles.
  3. Some of the people on mathedu thought a triangle obviously had to have labels and others thought it obviously didn’t.  Specifically, is triangle ABC “the same” as triangle ACB?  Of course they are congruent.  Are they the same?  This is an evil question. The proof works on the generic isosceles triangle.  That’s enough.  Isn’t it?  All three corners of the generic isosceles triangle are different points.  Aren’t they?  (I have had second, third and nth thoughts about this point.)
  4. You can define a triangle as a list of lengths of edges and connectivity data.  But the generic triangle’s sides ought to be (images of) line segments, not abstract data.  I don’t really understand how to formulate this correctly.

Note

1.  I could avoid discussion of irrelevant side issues in the monk problem by referring to specific times of day for starting and stopping, instead of dawn and dusk.  But they really are irrelevant.


Syntactic and semantic thinkers

A paper by Keith Weber

Reidar Mosvold’s math-ed blog recently provided a link to an article by Keith Weber (Reference [2]) about a very good university math student he referred to as a “syntactic reasoner”.  He interviewed the student in depth as the student worked on some proofs suitable to his level.  The student would “write the proofs out in quantifiers” and reason based on previous steps of the proof in a syntactic way rather than depending on an intuitive understanding of the problem, as many of us do (the author calls us semantic reasoners).  The student didn’t think about specific examples — he always tried to make them as abstract as possible while letting them remain examples (or counterexamples).

I recommend this paper if you are at all interested in math education at the university math major level — it is fascinating.  It made all sorts of connections for me with other ideas about how we think about math that I have thought about for years and which appear in the Understanding Math part of abstractmath.org.  It also raises lots of new (to me) questions.

Weber’s paper talks mostly about how the student comes up with a proof.  I suspect that the distinction between syntactic reasoners and semantic reasoners can be seen in other aspects of mathematical behavior, too, in trying to understand and explain math concepts.  Some thoughts:

Other behaviors of syntactic reasoners (maybe)

1) Many mathematicians (and good math students) explain math using conceptual and geometric images and metaphors, as described in Images and metaphors in abstractmath.org.   Some people I think of as syntactic reasoners seem to avoid such things. Some of them even deny thinking in images and metaphors, as I discussed in the post Thinking without words.   It used to be that even semantic reasoners were embarrassed to use images and metaphors when lecturing (see the post How “math is logic” ruined math for a generation).

2) In my experience, syntactic reasoners like to use first order symbolic notation, for example

[first-order formula, displayed as an image in the original post]

and will often translate a complicated sentence in ordinary mathematical English into this notation so they can understand it better.  (Weber describes the student he interviewed as doing this.)  Furthermore they seem to think that putting a formula such as the one above on the board says it all, so they don’t need to draw pictures, wave their hands [Note 1], and so on.  When you come up with a picture of a concept or theorem that you claim explains it, their first impulse is to say it out in words that generally can be translated very easily into first order symbolism, and say that is what is going on.  It is a matter of what is primary.

The semantic reasoners among students and (I think) many mathematicians find the symbolic notation difficult to parse and would rather have it written out in English.  I am pretty good at reading such symbolic notation [Note 2] but I still prefer ordinary English.
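A generic example of the kind of translation involved (mine, not one from Weber’s paper): the English sentence “for every integer n there is a prime bigger than n” becomes

(∀n)(∃p)(p is prime ∧ p > n).

A syntactic reasoner may find the symbolic form easier to manipulate; many of the rest of us parse the English faster.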

3) I suspect the syntactic reasoners also prefer to read proofs step by step, as I described in my post Grasshoppers and linear proofs, rather than skipping around like a grasshopper.

And maybe not

Now it may very well be that syntactic thinkers do not all do all those things I mentioned in (1)-(3).  Perhaps the group is not cohesive in all those ways.  Probably really good mathematicians use both techniques, although Weyl didn’t think so (quoted in Weber’s paper).   I think of myself as an image and metaphor person but I do use syntax, and sometimes even find that a certain syntactic explanation feels like a genuinely useful insight, as in the example I discussed under conceptual in the Handbook.

Distinctions among semantic thinkers

Semantic thinkers differ among themselves.  One demarcation line is between those who use a lot of visual thinking and those who use conceptual thinking which is not necessarily visual.  I have known grad students who couldn’t understand how I could do group theory (that was in a Former Life, before category theory) because how could you “see” what was happening?  But the way I think about groups is certainly conceptual, not syntactic.  When I think of a group acting on a space I think of it as stirring the space around.  But the stirring is something I feel more than I see.  On the other hand, when I am thinking about the relationships between certain abstract objects, I “see” the different objects in different parts of an interior visual space.  For example, group is on the right, stirring the space-acted-upon on the left, or the group is in one place, a subgroup is in another place while simultaneously being inside the group, and the cosets are grouped (sorry) together in a third place, being (guess what) stirred around by the group acting by conjugation (Note [3]).

This distinction between conceptual and visual, perhaps I should say visual-conceptual and non-visual-conceptual, both opposed to linguistic or syntactic reasoning, may or may not be as fundamental as syntactic vs semantic.   But it feels fundamental to me.

Weber’s paper mentions an intriguing sounding book (Reference [1]) by Burton which describes a three-way distinction called conceptual, visual and symbolic, that sounds like it might be the distinction I am discussing here.  I have asked for it on ILL.

Notes

  1. Handwaving is now called kinesthetic communication.  Just to keep you au courant.
  2. I took Joe Shoenfield’s course in logic when his book  Mathematical Logic [3] was still purple.
  3. Clockwise for left action, counterclockwise for right action.  Not.

References

  1. Leone L. Burton, Mathematicians as Enquirers: Learning about Learning Mathematics.  Springer, 2004.
  2. Keith Weber, How syntactic reasoners can develop understanding, evaluate conjectures, and generate counterexamples in advanced mathematics. Proof copy available from Science Direct.
  3. Joseph Shoenfield, Mathematical logic, Addison-Wesley 1967, reprinted 2001 by the Association for Symbolic Logic.

Composites of functions

In my post on automatic spelling reform, I mentioned the various attempts at spelling reform that have resulted in both the old and new systems being used, which only makes things worse.  This happens in Christian denominations, too.  Someone (Martin Luther, John Wesley) tries to reform things; result: two denominations.   But a lot of the time the reform effort simply disappears.  The Chicago Tribune tried for years to get us to write “thru” and “tho” —  and failed.  Nynorsk (really a language reform rather than a spelling reform) is down to 18% of the population, and the results of allowing Nynorsk forms to be used in the standard language have mostly been nil.  (See Note 1.)

In my early years as a mathematician I wrote a bunch of papers writing functions on the right (including the one mentioned in the last post).  I was inspired by some algebraists and particularly by Beck’s Thesis (available online via TAC), which I thought was exceptionally well-written.  This makes function composition read left to right and makes the pronunciation of commutative diagrams get along with notation, so when you see the diagram below you naturally write h = fg instead of h = gf.

[Diagram: arrows f and g with composite h]
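Spelled out (my notation, not from the original post): with functions written on the right, the value of x under f is (x)f, so (x)(fg) = ((x)f)g, which is f first and then g; the composite “f then g” is written fg and reads left to right. With functions written on the left, the same composite has value g(f(x)), so it is written gf.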

Sadly, I gave all that up before 1980 (I just looked at some of my old papers to check).  People kept complaining.  I even completely rewrote one long paper (Reference [3]) changing from right hand to left hand (just like Samoa).  I did this in Zürich when I had the gout, and I was happy to do it because it was very complicated and I had a chance to check for errors.

Well, I adapted.  I have learned to read the arrows backward (g then f in the diagram above).  Some French category theorists write the diagram backward, thus:

[Diagram: the same composite drawn in the opposite direction]

But I was co-authoring books on category theory in those days and didn’t think people would accept it. Not to mention Mike Barr (not that he is not a people, oh, never mind).

Nevertheless, we should have gone the other way.  We should have adopted the Dvorak keyboard and Betamax, too.

Notes

[1] A lifelong Norwegian friend of ours said that when her children say “boka” instead of “boken” it sounds like hillbilly talk does to Americans.  I kind of regretted this, since I grew up in north Georgia and have been a kind of hillbilly-wannabe (mostly because of the music); I don’t share that negative reaction to hillbillies.  On the other hand, you can fageddabout “ho” for “hun”.

References

[1] Charles Wells, Automorphisms of group extensions, Trans. Amer. Math. Soc, 155 (1970), 189-194.

[2] John Martino and Stewart Priddy, Group extensions and automorphism group rings. Homology, Homotopy and Applications 5 (2003), 53-70.

[3] Charles Wells, Wreath product decomposition of categories 1, Acta Sci. Math. Szeged 52 (1988), 307 – 319.
