
The Mathematics Depository: A Proposal


This post is about taking texts written in mathematical English and the symbolic language and encoding them in a formal language that could be checked by an automated proof verifier. This is a very difficult undertaking, but we could get closer and closer to a working system through a worldwide effort continuing over, probably, decades. The system would have to contain many components working together to create incremental improvements in the process.

This post, which is a first draft, outlines some suggestions as to how this could work. I do not discuss the encoding required, which is not my area of expertise. Yes, I understand that coding is the hard part!

Much work has been done by computing scientists in developing proof checking and proof-finding programs. Work has also been done, primarily by math education workers but also by some philosophers and computing scientists, in uncovering the many areas where ordinary math language is ambiguous and deviates from ordinary English usage. These characteristics confuse students and also make it hard to design a program that can interpret the language. I have been working in that area mostly from the math ed point of view for the last twenty years.

The Reference section lists many references to the problem of parsing mathematical English, some from the point of view of automatic translation of math language into code, but most from the point of view of helping students understand how to understand it.

The Mathematics Depository

I imagine a system for converting documents written in math language into machine-readable language and testing their claims. An organization, call it the Mathematics Depository (MD), would be developed, supported by many countries, organizations and individual supporters. It would consist of the components listed below, no doubt along with others as we become aware of needing them. The organization would be tasked with supporting and improving these components over time.

The main parts of the system

Each component is linked to a more detailed description that is given later in this post.

  • A Proof Verifier (PV), that inputs a proof and determines if it is correct.
  • A specification of a supported subset of Mathematical English and the symbolic language, that I will call Strict Math English (SME).
  • A Text-SME Converter, a program that would input a text written in ordinary math English that has been annotated by a knowledgeable person and convert it into SME.
  • An SME-PV Converter that will convert text written in SME into code that can be directly read by the Proof Verifier.
  • One or more Automatic Theorem Provers, that to begin with can take fairly simple conjectures written in SME and sometimes succeed in proving them.
  • An Annotation System containing an Annotation Editor that would allow a person to use SME to annotate an article written in ordinary math English so that it could be read by the Text-SME Converter.
  • A Data Base that would include the texts that have been collected in this endeavor, along with the annotations and the results of the proof checking.
  • A Data Base Miner that would watch for patterns in the annotations as new papers were submitted. The operators might also program it to watch for patterns in other aspects of the operation.

These facilities would be organized so that the systems work together, with the result that the individual components I named improve over time, both automatically and via human intervention.

Flow of Work

  1. A math text is submitted.
  2. If it is already in Strict Math English (SME), it is input to the Proof Verifier (PV).
  3. Otherwise, the math text is input into the Annotation System.
  4. The resulting SME text is input into the Text-SME Converter.
  5. The output of the Text-SME Converter is input into the Proof Verifier.
  6. The PV incorporates each definition in the text into the context of the math text. This is a specific meaning of the word “context”, including a list of the status of variables (bound, unbound, type, and so on), meanings of technical words, and other facts created in the text. “Context” is described informally in my article Context in abstractmath.org. That article gives references to the formal literature.
  7. In my experience mathematicians spend only a little time reading arguments step by step as described in the Context article. They usually look at a theorem and try to figure it out themselves, “cheating” occasionally by glancing at parts of the proof.

  8. Each mathematical assertion in the text is marked as a claim.
  9. The checking process records those claims occurring in the proof that are not proved in the text, along with any references given to other texts.
  10. If a reference to a result in another text is made, the PV looks for the result in the Database. If it does not find it, the PV incorporates the result and its location in the Database as an externally proven but untested claim.
  11. If no reference or proof for a claim is given, the PV checks the Database to see if it has already been proved.
  12. Any claim in the current text not shown as proven in the Database is submitted to the Automatic Theorem Prover (ATP). The output of the ATP is put in the database (proved, counterexample found, or unable to determine truth).
  13. If a segment of text is presented as a proof, it is input into the PV to be verified.
  14. The PV reports the result for each claimed proof, which can consist of several possibilities:
    • A counterexample for a proof is found, so the claim that the proof was supposed to establish is false.
    • The proof contains gaps, so the claim is unsettled.
    • The proof is reported as correct.
  15. At the end of the process, all the information gathered is put into the Database:
    • The original text showing all the annotations.
    • The text in SME.
    • All claims, with their status (proven true, proven false, truth unknown, reference if one was given).
    • Every proof, with its status and the entire context at each step of the proof.


The proof verifier

  • Proof checking programs have been developed over the last thirty or so years. The Mathematics Depository should write or adapt one or more Proof Verifiers and improve them incrementally in light of experience gained in running the system. In this post I assume the use of just one Proof Verifier.
  • The Proof Verifier should be designed to read the output of the SME-PV converter.
  • The PV must read a whole math text in SME, identify and record each claim and check each proof (among other things). This is different from current proof verifiers, which take exactly one proof as input.
  • The PV must create the context of each proof and change it step by step as it reads each syntactic fragment of the math text.
  • Typically the context for a claimed proof is built up in the whole math text, not just in the part called “Proof”.
  • The PV should automatically query the Data Base for unproved steps in a proof in the input text to see if they have already been verified somewhere else. These results should be quoted in a proof verifier output.
  • The PV should also automatically submit steps in the proof that haven’t been verified to the Automatic Theorem Provers and wait for the step to be verified or not.
  • The Proof Verifier should output details of the result of the checking, whether or not it succeeded in verifying the whole input text. In particular, it should list steps in proofs it failed to verify, including steps for which the input text cited a proof in some other paper, whether or not that paper is in the MD system.
  • The Proof Verifier should be available online for anyone to submit, in SME, a mathematical text claiming to prove a theorem. Submission might require a small charge.

Strict Math English

  • One of the most important aspects of the system would be the simultaneous incremental updating of the SME and the SME-PV Converter.
  • The idea is that SME would get more and more inclusive of the phrases and clauses it allows.

Example: Universal Assertions

At the start SME might allow these statements to be recognized as the same universal assertion:

  • “$\forall x(x^2+1\gt0)$”
  • “For all [every, any] $x$, $x^2+1\gt0$.” (universality asserted using an English word.)
  • “For all [every, any] $x$, $x^2+1$ is positive.”

As time goes on, a person or the Data Base Miner might detect that many annotators also recognized these statements as saying the same thing:

  • “$x^2+1\gt0\,\,\,\,\,(\text{all } x)$” (as a displayed statement)
  • “$x^2+1$ is positive for every $x$.” Universality asserted using an adjective in a postposited phrase.
  • “$x^2+1$ is always positive.” Universality hidden in a postposited adverb that seems to be referring to time!
  • There are more examples in my article Universally True Assertions. See also Susanna Epp’s article on quantification for other problems in this area.

These other variations would then be added to Strict Math English. (This is only an example of how the system would evolve. I have no doubt that in fact all the terminology mentioned above would be included at the outset, since all of it is documented in the math ed literature.)

Even at the start, SME will include phrases and clauses in the English language as well as symbolic expressions. It is notorious that automatically parsing general English sentences is difficult and that the ubiquity of metaphors makes it essentially impossible to reliably construct the meaning of a sentence. That is why SME must start with a very narrow subset of math English. But even in early days, it should include some stereotyped metaphors, such as using “always” in universal assertions.

The SME-PV Converter

  • The SME-PV Converter would read documents written in SME and convert them into code readable by the proof checking program, as well as by the automatic theorem provers.
  • Such a program is essentially the subject of Ganesalingam’s book.
  • Converting SME so that the Proof Verifier can handle it involves lots of subtleties. For example, if the text says, “For any $x$, $x^2+1\gt0$”, the translation has to recognize not only that this is a universally quantified statement with $x$ as the bound variable, but that $x$ must be a real number, since complex numbers don’t do greater-than.
  • Frequent revisions of the SME-PV Converter will be necessary since its input language, the SME, will be constantly expanded.
  • It may be that the output language of the SME-PV Converter (which the Proof Verifier and Automatic Theorem Provers read) will require only infrequent revisions.
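The subtlety about "$x$ must be a real number" amounts to a type-inference pass. The sketch below is my own illustration (the function name and the canonical SME form are invented): any quantified variable occurring in an order comparison is assigned the type "real", since the complex numbers carry no standard ordering.

```python
import re

# A toy type-inference pass, illustrating one subtlety of SME-PV conversion.
# Any quantified variable in a formula containing an order comparison is
# assigned the type "real"; everything here is invented for illustration.
def infer_types(sme: str) -> dict:
    if not re.search(r"\\gt|\\lt|\\ge|\\le|<|>", sme):
        return {}
    return {var: "real" for var in re.findall(r"forall (\w)", sme)}

print(infer_types(r"forall x (x^2+1 \gt 0)"))  # {'x': 'real'}
```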

The Automatic Theorem Provers

  • The system could support several ATP’s, each one adapted to read the output of the SME-PV Converter.
  • The Automatic Theorem Provers should provide output in such a way that the Proof Verifier can include in its report the positive or negative results of the Theorem Prover in detail.

The Annotation System

  • The Annotation system would facilitate construction of a data structure that connects each annotation to the specific piece of text it rewrites. The linking should be facilitated by the Annotation Editor.
  • For example, an annotation that is meant to explain that the statement (in the input text) “$x^2+1$ is always greater than $0$” is to be translated as “$\forall x(x^2+1\gt0)$” (which is presumably allowed by SME) should cause the first statement to be linked to the second statement. The first statement, the one in the input text, should not be changed. This will enable the Data Base Miner to find patterns of similar text being annotated in similar ways.
  • The annotations should clarify words, symbolic expressions and sentences in the input text to allow the Proof Verifier to input them correctly.
  • In particular, every claim that a statement is true should be marked as a proposed theorem, and similarly every proof should be marked as a proof and every definition should be marked as a definition. Such labeling is often omitted in the math literature. Annotators would have to recognize segments of the text as claims, proofs and definitions and annotate them as such.
  • The annotations would be written in the current version of Strict Math English. Since SME is frequently updated, the instructions for the annotator would also have to be frequently updated.


  • If a paper used the word “domain” without defining it, the annotator would clarify whether it meant an open connected set, a type of ring, a type of poset, or the domain of a function. See Example 1
  • Annotators will note instances in which the same text will use a symbol with two different meanings. See Example 2.
  • In a phrase, a single occurrence of a symbol can require an annotation that assigns more than one attribute to the symbol. See Example 3.
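A minimal sketch of the linking data structure, under my own assumptions about its shape (character offsets into the unchanged original text, plus the SME reading):

```python
from dataclasses import dataclass

# A hypothetical annotation record: it points into the original text by
# character offsets and carries the SME reading of that span. The original
# text is never modified, which is what lets the Data Base Miner compare
# annotations of similar passages.
@dataclass(frozen=True)
class Annotation:
    start: int   # character offset of the annotated span
    end: int
    sme: str     # the SME translation of that span

text = "x^2+1 is always greater than 0"
ann = Annotation(start=0, end=len(text), sme="forall x (x^2+1 > 0)")

print(text[ann.start:ann.end])   # the annotated span, unchanged
print(ann.sme)
```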

The Annotation Editor

  • The annotators should be provided with an Annotation Editor designed specifically for annotation.
  • The editor should include a way of linking an annotation to the exact phrase it annotates that is easy for a person reading the annotated document to follow, and that also supplies the linking information to the Text-SME Converter.

The Annotators

  • Great demands will be made of an annotator.
  • They must understand the detailed meaning of the text they annotate. This means they must be quite familiar with the field of math the text is concerned with.
  • They must learn SME. I know for a fact that many mathematicians are not good at learning foreign languages. It will help that SME will be a subset of the full language of math.
  • All this means that annotators must be chosen carefully and paid well. This means that not very many papers will get annotated by paid annotators, so that there will have to be some committee that chooses the papers to be annotated. This will be a genuine bottleneck.
  • One thing that will help in the long run is that SME should evolve to include more features of the general language of math, so that many mathematicians will actually write their papers in SME and submit them directly to the Depository. (“Long run” may mean more than ten years.)

The Text-to-SME Converter

  • This converter takes a math text in ordinary Math English that has been annotated and converts it into SME.
  • The format for feeding it to the Automatic Theorem Prover may very well have to be different from the format to be read by a human. Both formats should be saved.

The Data Base

  • The Data Base would contain all math papers that have been run through the Proof Verifier, along with the results found by the Proof Verifier. A paper should be included whether or not every claim in the paper was verified.
  • Funding agencies (and private individuals) might choose particularly important papers and pay more money for annotation for those than for other papers.
  • Mathematicians in a particular field could be hired to annotate particular articles in their field, using a standard annotation language that would develop through time.
  • The annotated papers would be made freely available to the public.
  • It will no doubt prove useful for the Data Base to contain many other items. Possibilities:
    • A searchable list of all theorems that have been verified.
    • A glossary: a list of math words that have been defined in the papers in the Depository. This will include synonyms and words with multiple meanings.

The Data Base Miner

Watch for patterns

The DBM would watch for patterns in annotation as new annotated papers were submitted. It should probably look only at annotated papers whose proofs had been verified. The patterns might include:

  • Correlation between annotations that associate particular meanings to particular words or symbols with the branch of math the paper belongs to. See Example 1.
  • Noting that a particular format of combining symbols usually results in the same kind of annotation. See Example 4.
  • Providing data in such a way that lexicographers studying math English could make use of them. My Handbook began with my doing lexicographical research on math English, but I found it so slow that when I started abstractmath.org I resolved not to do such research any more. Nevertheless, it needs to be done, and the Database should make the process much easier.

Statistical translation

Since the annotated papers will be stored in the Data Base, the Data Base Miner could use the annotations in somewhat the same way some language translators work (in part): to translate a phrase, it finds occurrences of the phrase in the source language that have been translated into the target language and uses the most common translation. In this case the source language is ordinary math English and the target language is SME, which can then be converted into code readable by the Proof Verifier. Once the Database includes most of the papers ever published (twenty years from now?), statistical translation might actually become useful.


Example 1: Meaning varies with branch of math

  • “Field” means one thing in an algebra paper and another in a mathematical physics paper.
  • “Domain” means
    • An open connected set in topology.
    • A type of ring in algebra.
    • A type of poset in theoretical computing science.
    • The domain of a function, everywhere in math, which makes it seem that this is going to be very hard to distinguish without human help!
  • “Log” usually implies base $2$ in the computing world, base $10$ in engineering (but I am not sure how prevalent this meaning is there), and base $e$ in pure math. With exceptions!

Example 2: Meaning varies even in the same article

    • The notation “$(a,b)$” can mean an ordered pair, an open interval, or the GCD. What’s worse, there are many instances where the symbol is used without definition. Citation 139 in the Handbook provides a single sentence in which the first two meanings both occur:

      $\dots$ Richard Darst and Gerald Taylor investigated the differentiability of functions $f^p$ (which for our purposes we will restrict to $(0,1)$) defined for each $p\geq1$ by\[F(x):=\begin{cases}0 & \text{if }x\text{ is irrational}\\ \displaystyle{\frac{1}{n^p}} & \text{if }x = \displaystyle{\frac{m}{n}}\text{ with }(m,n)=1\end{cases}\]

      The sad thing is that any mathematician will know immediately what each occurrence means. This may be a case where the correct annotation will never be automatically detectable.

    Example 3: One mention of a symbol may require several meanings

    In the sentence, “This infinite series converges to $\zeta(2)=\frac{\pi^2}{6}\approx 1.65$,” the annotator would provide two pieces of information about “$\frac{\pi^2}{6}$”, namely that it is both the right constituent of the equation “$\zeta(2)=\frac{\pi^2}{6}$” and the left constituent of the approximation statement “$\frac{\pi^2}{6}\approx 1.65$” — and that these two statements were the constituents of an asserted conjunction. (See my post Pivoted symbols.)

    Example 4: Function to a power

    Some expressions not in the SME will almost always be annotated in the same way. This makes them discoverable by the Data Base Miner.

    • “$\sin^{-1}x$” always means $\arcsin x$.
    • For positive $n$, “$\sin^n x$” always means $(\sin x)^n$. It never means the $n$-fold application of $\sin$ to $x$.
    • In contrast, for an arbitrary function symbol, $f^n(x)$ will often be annotated as $n$-fold application of $f$ and also often as $f(x)^n$. (And maybe those last two possibilities are correlated by branch of math.)


    I believe that work in formal verification has tended to overlook the work on math language difficulties in math ed, so I have included some articles from that specialty.

    The following are posts from my blog Gyre&Gimble. They are in reverse chronological order.

    Creative Commons License

    This work is licensed under a Creative Commons Attribution-ShareAlike 2.5 License.


    A very early satori that occurs with beginning abstract math students

    In the previous post Pattern recognition and me, I wrote about how much I enjoyed sudden flashes of understanding that were caused by my recognizing a pattern (or learning about a pattern). I have had several such, shall we say, Thrills in learning about math and doing research in math. This post is about a very early thrill I had when I first started studying abstract algebra. As is my wont, I will make various pronouncements about what these mean for teaching and understanding math.


    Early in any undergraduate course involving group theory, you learn about cosets.

    Basic facts about cosets

    1. Every subgroup of a group generates a set of left cosets and a set of right cosets.
    2. If $H$ is a subgroup of $G$ and $a$ and $b$ are elements of $G$, then $a$ and $b$ are in the same left coset of $H$ if and only if $a^{-1}b\in H$. They are in the same right coset of $H$ if and only if $ab^{-1}\in H$.
    3. Alternative definition: $a$ and $b$ are in the same left coset of $H$ if $a=bh$ for some $h\in H$ and are in the same right coset of $H$ if $a=hb$ for some $h\in H$
    4. One of the (left or right) cosets of $H$ is $H$ itself.
    5. The relations
      $a\underset{L}\sim b$ if and only if $a^{-1}b\in H$


      $a\underset{R}\sim b$ if and only if $ab^{-1}\in H$

      are equivalence relations.

    6. It follows from (5) that each of the set of left cosets of $H$ and the set of right cosets of $H$ is a partition of $G$.
    7. By definition, $H$ is a normal subgroup of $G$ if the two sets of cosets coincide.
    8. The index of a subgroup in a group is the cardinal number of (left or right) cosets the subgroup has.
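These facts can be checked mechanically for a small group. The sketch below (purely illustrative, in Python) represents $S_3$ as permutation tuples and takes $H$ to be the even permutations, a subgroup of index $2$; it confirms facts (2), (4), (6) and (7) for this case.

```python
from itertools import permutations

# Sanity-check facts (2), (4), (6) and (7) in the smallest nonabelian
# group: S3, with elements as tuples giving a permutation of (0, 1, 2).
G = list(permutations(range(3)))

def mul(p, q):
    """Composition: (p*q)(i) = p(q(i))."""
    return tuple(p[q[i]] for i in range(3))

def inv(p):
    r = [0] * 3
    for i, v in enumerate(p):
        r[v] = i
    return tuple(r)

# H = A3, the even permutations: a subgroup of index 2.
H = [(0, 1, 2), (1, 2, 0), (2, 0, 1)]

left_cosets = {frozenset(mul(a, h) for h in H) for a in G}
right_cosets = {frozenset(mul(h, a) for h in H) for a in G}

assert frozenset(H) in left_cosets                   # fact (4)
assert sum(len(c) for c in left_cosets) == len(G)    # fact (6): a partition
assert left_cosets == right_cosets                   # fact (7): H is normal

# Fact (2): a and b lie in the same left coset iff a^{-1}b is in H.
a, b = (1, 0, 2), (2, 1, 0)
same = any(a in c and b in c for c in left_cosets)
assert same == (mul(inv(a), b) in set(H))

print(len(left_cosets))  # the index of H in G: 2
```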

    Elementary proofs in group theory

    In the course, you will be asked to prove some of the interrelationships between (2) through (5) using just the definitions of group and subgroup. The teacher assigns these exercises to train the students in the elementary algebra of elements of groups.


    1. If $a=bh$ for some $h\in H$, then $b=ah'$ for some $h'\in H$. Proof: If $a=bh$, then $ah^{-1}=(bh)h^{-1}=b(hh^{-1})=b$.
    2. If $a^{-1}b\in H$, then $b=ah$ for some $h\in H$. Proof: $b=a(a^{-1}b)$.
    3. The relation “$\underset{L}\sim$” is transitive. Proof: Let $a^{-1}b\in H$ and $b^{-1}c\in H$. Then $a^{-1}c=a^{-1}bb^{-1}c$ is the product of two elements of $H$ and so is in $H$.
    Miscellaneous remarks about the examples
    • Which exercises are used depends on what is taken as definition of coset.
    • In proving Exercise 2 at the board, the instructor might write “Proof: $b=a(a^{-1}b)$” on the board and then point to the expression “$a^{-1}b$” and say, “$a^{-1}b$ is in $H$!”
    • I wrote “$a^{-1}c=a^{-1}bb^{-1}c$” in Exercise 3. That will result in some brave student asking, “How on earth did you think of inserting $bb^{-1}$ like that?” The only reasonable answer is: “This is a trick that often helps in dealing with group elements, so keep it in mind.” See Rabbits.
    • That expression “$a^{-1}c=a^{-1}bb^{-1}c$” doesn’t explicitly mention that it uses associativity. That, too, might cause pointing at the board.
    • Pointing at the board is one thing you can do in a video presentation that you can’t do in a text. But in watching a video, it is harder to flip back to look at something done earlier. Flipping is easier to do if the video is short.
    • The first sentence of the proof of Exercise 3 is, “Let $a^{-1}b\in H$ and $b^{-1}c\in H$.” This uses rewrite according to the definition. One hopes that beginning group theory students already know about rewrite according to the definition. But my experience is that there will be some who don’t automatically do it.
    • In beginning abstract math courses, very few teachers tell students about rewrite according to the definition. Why not?

    • An excellent exercise for the students that would require more than short algebraic calculations would be:
      • Discuss which of the two definitions of left coset embedded in (2), (3), (5) and (6) is preferable.
      • Show in detail how it is equivalent to the other definition.

    A theorem

    In the undergraduate course, you will almost certainly be asked to prove this theorem:

    A subgroup $H$ of index $2$ of a group $G$ is normal in $G$.

    Proving the theorem

    In trying to prove this, a student may fiddle around with the definition of left and right coset for awhile using elementary manipulations of group elements as illustrated above. Then a lightbulb appears:

    In the 1980’s or earlier a well known computer scientist wrote to me that something I had written gave him a satori. I was flattered, but I had to look up “satori”.

    If the subgroup has index $2$ then there are two left cosets and two right cosets. One of the left cosets and one of the right cosets must be $H$ itself. In that case the other left coset must be the complement of $H$ and so must the other right coset. So those two cosets must be the same set! So $H$ is normal in $G$.

    This is one of the earlier cases of sudden pattern recognition that occurs among students of abstract math. Its main attraction for me is that suddenly after a bunch of algebraic calculations (enough to determine that the cosets form a partition) you get the fact that the left cosets are the same as the right cosets by a purely conceptual observation with no computation at all.

    This proof raises a question:

    Why isn’t this point immediately obvious to students?

    I have to admit that it was not immediately obvious to me. However, before I thought about it much someone told me how to do it. So I was denied the Thrill of figuring this out myself. Nevertheless I thought the solution was, shall we say, cute, and so had a little thrill.

    A story about how the light bulb appears

    In doing exercises like those above, the student has become accustomed to using algebraic manipulation to prove things about groups. They naturally start doing such calculations to prove this theorem. They persevere for a while…

    Scenario I

    Some students may be in the habit of abandoning their calculations, getting up to walk around, and trying to find other points of view.

    1. They think: What else do I know besides the definitions of cosets?
    2. Well, the cosets form a partition of the group.
    3. So they draw a picture of two boxes for the left cosets and two boxes for the right cosets, marking one box in each as being the subgroup $H$.
    4. If they have a sufficiently clear picture in their head of how a partition behaves, it dawns on them that the other two boxes have to be the same.
    Remarks about Scenario I
    • Not many students at the earliest level of abstract math ever take a break and walk around with the intent of having another approach come to mind. Those who do Will Go Far. Teachers should encourage this practice. I need to push this in abstractmath.org.
    • In good weather, David Hilbert would stand outside at a shelf doing math or writing it up. Every once in awhile he would stop for awhile and work in his garden. The breaks no doubt helped. So did standing up, I bet. (I don’t remember where I read this.)
    • This scenario would take place only if the students have a clear understanding of what a partition is. I suspect that often the first place they see the connection between equivalence relations and partitions is in a hasty introduction at the beginning of a group theory or abstract algebra course, so the understanding has not had long to sink in.

    Scenario II

    Some students continue to calculate…

    1. They might say, suppose $a$ is not in $H$. Then it is in the other left coset, namely $aH$.
    2. Now suppose $a$ is not in the “other” right coset, the one that is not $H$. But there are only two right cosets, so $a$ must be in $H$.
    3. But that contradicts the first calculation I made, so the only possibility left is that $a$ is in the right coset $Ha$. So $aH\subseteq Ha$.
    4. Aha! But then I can use the same argument the other way around, getting $Ha\subseteq aH$.
    5. So it must be that $aH=Ha$. Aha! …indeed.
    Remarks about Scenario II
    • In step (2), the student is starting a proof by contradiction. Many beginning abstract math students are not savvy enough to do this.
    • Step (4) involves recognizing that an argument has a dual. Abstractmath.org does not mention dual arguments and I can’t remember emphasizing the idea to my classes. Tsk.
    • Scenario 2 involves the student continuing algebraic calculations till the lightbulb strikes. The lightbulb could also occur in other places in the calculation.



    Conceptual blending

    This post uses MathJax.  If you see formulas in unrendered TeX, try refreshing the screen.

    A conceptual blend is a structure in your brain that connects two concepts by associating part of one with part of another.  Conceptual blending is a major tool used by our brain to understand the world.

    The concept of conceptual blend includes special cases, such as representations, images and conceptual metaphors, that math educators have used for years to understand how mathematics is communicated and how it is learned.  The Wikipedia article is a good starting place for understanding conceptual blending. 

    In this post I will illustrate some of the ways conceptual blending is used to understand a function of the sort you meet with in freshman calculus.  I omit the connections with programs, which I will discuss in a separate post.

    A particular function

    Consider the function $h(t)=4-(t-2)^2$. You may think of this function in many ways.


    $h(t)$ is defined by the formula $4-(t-2)^2$.

    • The formula encapsulates a particular computation of the value of $h$ at a given value $t$.
    • The formula defines the function, which is a stronger statement than saying it represents the function.
    • The formula is in standard algebraic notation. (See Note 1)
    • To use the formula requires one of these:
      • Understand and use the rules of algebra
      • Use a calculator
      • Use an algebraic programming language. 
    • Other formulas could be used, for example $4t-t^2$.
      • That formula encapsulates a different computation of the value of $h$.


    $h(t)$ is also defined by this tree (right).
    • The tree makes explicit the computation needed to evaluate the function.
    • The form of the tree is based on a convention, almost universal in computing science, that the last operation performed (the root) is placed at the top and that evaluation is done from bottom to top.
    • Both formula and tree require knowledge of conventions.
    • The blending of formula and tree matches some of the symbols in the formula with nodes in the tree, but the parentheses do not appear in the tree because they are not necessary by the bottom-up convention.
    • Other formulas correspond to other trees.  In other words, conceptually, each tree captures not only everything about the function, but everything about a particular computation of the function.
    • More about trees in these posts:
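A minimal sketch of the tree idea, with a hypothetical encoding of the tree as nested tuples and a bottom-up evaluator:

```python
# h(t) = 4 - (t-2)^2 as an expression tree, encoded (hypothetically) as
# nested tuples (operator, left, right); leaves are numbers or the name "t".
TREE = ("-", 4, ("^", ("-", "t", 2), 2))

def evaluate(node, t):
    """Evaluate bottom-up: leaves first, the operation at the root last."""
    if node == "t":
        return t
    if isinstance(node, (int, float)):
        return node
    op, left, right = node
    a, b = evaluate(left, t), evaluate(right, t)
    return {"+": a + b, "-": a - b, "*": a * b, "^": a ** b}[op]

# The other formula, 4t - t^2, corresponds to a different tree but
# computes the same function:
assert all(evaluate(TREE, t) == 4 * t - t ** 2 for t in range(-5, 6))
print(evaluate(TREE, 2))  # 4, the maximum value
```

Note that the parentheses of the formula never appear: the nesting of the tuples carries that information, as the bullet about the bottom-up convention says.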


    $h(t)$ is represented by its graph (right). (See note 2.)

    • This is the graph as visual image, not the graph as a set of ordered pairs.
    • The blending of graph and formula associates each point on the (blue) graph with the value of the formula at the number on the x-axis directly underneath the point.
    • In contrast to the formula, the graph does not define the function because it is a physical picture that is only approximate.
    • But the graph does represent the function.  (This is "represents" in the sense of cognitive psychology, but not in the mathematical sense.)
    • The blending requires familiarity with the conventions concerning graphs of functions. 
    • It sets into operation the vision machinery of your brain, which is remarkably elaborate and powerful.
      • Your visual machinery allows you to see instantly that the maximum of the curve occurs at about $t=2$. 
    • The blending leaves out many things.
      • For one, the graph does not show the whole function.  (That's another reason why the graph does not define the function.)
      • Nor does it make it obvious that the rest of the graph goes off to negative infinity in both directions, whereas that formula does make that obvious (if you understand algebraic notation).  


    The graph of $h(t)$ is the parabola with vertex $(2,4)$, directrix $y=\frac{17}{4}$, and focus $(2,\frac{15}{4})$. 

    • The blending with the graph makes the parabola identical with the graph.
    • This tells you immediately (if you know enough about parabolas!) that the maximum is at $(2,4)$ (because the parabola opens downward: the directrix lies above the focus).
    • Knowing where the focus and directrix are enables you to mechanically construct a drawing of the parabola using pins, string, T-square and pencil.  (In the age of computers, do you care?)


    $h(t)$ gives the height of a certain projectile going straight up and down over time.

    • The blending of height and graph lets you see instantly (using your visual machinery) how high the projectile goes. 
    • The blending of formula and height allows you to determine the projectile's velocity at any point by taking the derivative of the function.
    • A student may easily be confused into thinking that the path of the projectile is a parabola like the graph shown.  Such a student has misunderstood the blending.


    You may understand $h(t)$ kinetically in various ways.

    • You can visualize moving along the graph from left to right, going up, reaching the maximum, then starting down.
      • This calls on your experience of going over a hill. 
      • You are feeling this with the help of mirror neurons.
    • As you imagine traversing the graph, you feel it getting less and less steep until it is briefly level at the maximum, then it gets steeper and steeper going down.
      • This gives you a physical understanding of how the derivative represents the slope.
      • You may have seen teachers swooping with their hand up one side and down the other to illustrate this.
    • You can kinetically blend the movement of the projectile (see height above) with the graph of the function.
      • As it goes up (with $t$ increasing) the projectile starts fast but begins to slow down.
      • Then it is briefly stationary at $t=2$ and then starts to go down.
      • You can associate these feelings with riding in an elevator.
        • Yes, the elevator is not a projectile, so this blending is inaccurate in detail.
      • This gives you a kinetic understanding of how the derivative gives the velocity and the second derivative gives the acceleration.
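The slope and velocity readings can be checked numerically. Here is a hedged Python sketch using central differences (the helper name `derivative` is mine, not part of the posts above):

```python
def h(t):
    return 4 - (t - 2) ** 2

def derivative(f, t, dt=1e-6):
    # central-difference approximation to f'(t)
    return (f(t + dt) - f(t - dt)) / (2 * dt)

# Velocity is positive before t = 2, zero at t = 2, negative after:
assert derivative(h, 1) > 0
assert abs(derivative(h, 2)) < 1e-6
assert derivative(h, 3) < 0

# The second derivative (the acceleration) is the constant -2:
second = derivative(lambda t: derivative(h, t), 2, dt=1e-3)
assert abs(second - (-2)) < 1e-2
```

The numbers confirm the kinetic picture: slowing on the way up, momentarily stationary at $t=2$, speeding up on the way down, with constant downward acceleration.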


    The function $h(t)$ is a mathematical object.

    • Usually the mental picture of function-as-object consists of thinking of the function as a set of ordered pairs $\Gamma(h):=\{(t,4-(t-2)^2)|t\in\mathbb{R}\}$. 
    • Sometimes you have to specify domain and codomain, but not usually in calculus problems, where conventions tell you they are both the set of real numbers.
    • The blend object and graph identifies each point on the graph with an element of $\Gamma(h)$.
    • When you give a formal proof, you usually revert to a dry-bones mode and think of math objects as inert and timeless, so that the proof does not mention change or causation.
      • The mathematical object $h(t)$ is a particular set of ordered pairs. 
      • It just sits there.
      • When reasoning about something like this, implication statements work like they are supposed to in math: no causation, just picking apart a bunch of dead things. (See Note 3).
      • I did not say that math objects are inert and timeless, I said you think of them that way.  This post is not about Platonism or formalism. What math objects "really are" is irrelevant to understanding understanding math [sic].


    A definition of the concept of function provides a way of thinking about the function.

    • One definition is simply to specify a mathematical object corresponding to a function: A set of ordered pairs satisfying the property that no two distinct ordered pairs have the same first coordinate, along with a specification of the codomain if that is necessary.
    • A concept can have many different definitions.
      • A group is usually defined as a set with a binary operation, an inverse operation, and an identity with specific properties.  But it can be defined as a set with a ternary operation, as well.
      • A partition of a set is a set of subsets of a set with certain properties. An equivalence relation is a relation on a set with certain properties.  But a partition is an equivalence relation and an equivalence relation is a partition.  You have just picked different primitives to spell out the definition. 
      • If you are a beginner at doing proofs, you may focus on the particular primitive objects in the definition to the exclusion of other objects and properties that may be more important for your current purposes.
        • For example, the definition of $h(t)$ does not mention continuity, differentiability, parabola, and other such things.
        • The definition of group doesn't mention that it has linear representations.
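The defining property in the ordered-pairs definition can be checked mechanically on finite samples. A Python sketch (the encoding as a set of tuples is mine):

```python
def is_function(pairs):
    """No two distinct pairs may share a first coordinate."""
    firsts = [p[0] for p in pairs]
    return len(firsts) == len(set(firsts))

# A finite sample of the graph of h(t) = 4 - (t - 2)^2:
sample = {(t, 4 - (t - 2) ** 2) for t in range(-3, 8)}
assert is_function(sample)

# Not a function: 1 is paired with two different values.
assert not is_function({(1, 2), (1, 3), (4, 5)})
```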


    A function can be given as a specification, such as this:

    If $t$ is a real number, then $h(t)$ is a real number, whose value is obtained by subtracting $2$ from $t$, squaring the result, and then subtracting that result from $4$.

    • This tells you everything you need to know to use the function $h$.
    • It does not tell you what it is as a mathematical object: It is only a description of how to use the notation $h(t)$.
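The specification translates almost word for word into a procedure. A Python sketch:

```python
def h(t):
    """Follow the specification literally, one step at a time."""
    step1 = t - 2          # subtract 2 from t
    step2 = step1 ** 2     # square the result
    return 4 - step2       # subtract that result from 4

assert h(2) == 4   # the maximum value
assert h(0) == 0
assert h(4) == 0
```

Like the verbal specification, this code tells you how to use $h$ without committing to what $h$ "is" as a mathematical object.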


    1. Formulas can be given in other notations, in particular Polish and Reverse Polish notation. Some forms of these notations don't need parentheses.

    2. There are various ways to give a pictorial image of the function.  The usual way is to present the graph as shown above.  But you can also show its cograph and its endograph, which are other ways of representing a function pictorially.  They are particularly useful for finite and discrete functions. You can find lots of detail in these posts and Mathematica notebooks:

    3. See How to understand conditionals in the abstractmath article on conditionals.


    1. Conceptual blending (Wikipedia)
    2. Conceptual metaphors (Wikipedia)
    3. Definitions (abstractmath)
    4. Embodied cognition (Wikipedia)
    5. Handbook of mathematical discourse (see articles on conceptual blend, mental representation, representation, and metaphor)
    6. Images and Metaphors (article in abstractmath)
    7. Links to G&G posts on representations
    8. Metaphors in Computing Science (previous post)
    9. Mirror neurons (Wikipedia)
    10. Representations and models (article in abstractmath)
    11. Representations II: dry bones (article in abstractmath)
    12. The transition to formal thinking in mathematics, David Tall, 2010
    13. What is the object of the encapsulation of a process? Tall et al., 2000.



    A tiny step towards killing string-based math

    I discussed endographs of real functions in my post Endographs and cographs of real functions.  Endographs of finite functions also provide another way of thinking about functions, and I show some examples here.  This is not a new idea; endographs have appeared from time to time in textbooks, but they are not used much, even though they instantly reveal some properties of a function that cannot be seen so easily in a traditional graph or cograph.

    In contrast to endographs of functions on the real line, an endograph of a finite function from a set to itself contains all the information about the function.  For real functions, only some of the arrows can be shown; you are dependent on continuity to interpolate where the infinite number of intermediate arrows would be, and of course, it is easy to produce a function, with, say, small-scale periodicity, that the arrows would miss, so to speak.  But with an endograph of a finite function, WYSIATI (what you see is all there is).

    Here is the endograph of a function.  It is one function.  The graph has four connected components.

    You can see immediately that it is a permutation of the set $\{1,2,3,4,5,6\}$, and that it is an involution (a permutation $f$ for which $f\circ f=\text{id}$).  In cycle notation, it is the permutation $(1\ 2)(5\ 6)$, and the connected components of the endograph correspond to the cycle structure.

    Here is another permutation:

    You can see that to get $f^n=\text{id}$ you would have to have $n=6$, since you have to apply the 3-cycle 3 times and the transposition twice to get the identity.   The cycle structure $(1\ 2\ 4)(0\ 3)$ tells you this, but you have to visualize it acting to see that.  The endograph gives the newbie a jumpstart on the visualization.  “The power to understand and predict the quantities of the world should not be restricted to those with a freakish knack for manipulating abstract symbols” (Bret Victor).   This is an argument for insisting that this permutation is the endograph, and that the abstract string of symbols $(1\ 2\ 4)(0\ 3)$ is a representation of secondary importance.  [See Note 1.]
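The cycle-structure reasoning can be mechanized. A Python sketch, with the permutations encoded as dicts (my encoding, not the notebook's Mathematica code):

```python
from math import lcm

# The permutation (1 2 4)(0 3), encoded as a dict.
f = {1: 2, 2: 4, 4: 1, 0: 3, 3: 0}

def cycles(perm):
    """Recover the cycle structure by following arrows until they return."""
    seen, result = set(), []
    for start in perm:
        if start in seen:
            continue
        cycle, x = [], start
        while x not in seen:
            seen.add(x)
            cycle.append(x)
            x = perm[x]
        result.append(tuple(cycle))
    return result

structure = cycles(f)
assert sorted(len(c) for c in structure) == [2, 3]

# The least n with f^n = id is the lcm of the cycle lengths:
assert lcm(*(len(c) for c in structure)) == 6

# The earlier permutation (1 2)(5 6) is an involution: g(g(x)) = x.
g = {1: 2, 2: 1, 3: 3, 4: 4, 5: 6, 6: 5}
assert all(g[g[x]] == x for x in g)
```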

    Here is the cograph of the same function.  It requires a bit of visualization or tracing arrows around to see its cycle structure.

    If I had rearranged the nodes like this

    the cycle structure would be easier to see.  This does not indicate as much superiority of the endograph metaphor over the cograph metaphor as you might think:  My endograph code [Note 2] uses Mathematica’s graph-displaying algorithm, which automatically shows cycles clearly.   The cograph code that I wrote specifies the placement of the nodes explicitly, so I rearranged them to obtain the second cograph above using my knowledge of the cycle structure.

    The following endographs of functions that are not permutations exhibit the general fact that the graph of a finite function consists of cycles with trees attached.   This structure is obvious from the endographs, and it is easy to come up with a proof of this property of finite functions by tracing your finger around the endographs.
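The cycles-with-trees-attached structure reflects the fact that iterating a finite function from any starting point must eventually enter a cycle. A Python sketch (the example function and the helper name are mine):

```python
def eventually_periodic(f, x):
    """Iterate f from x; return (tail length before entering a cycle,
    length of the cycle reached), found with a seen-dict."""
    seen = {}
    step = 0
    while x not in seen:
        seen[x] = step
        x = f[x]
        step += 1
    return seen[x], step - seen[x]

# A function on {0,...,5} that is not a permutation:
f = {0: 1, 1: 2, 2: 1, 3: 2, 4: 4, 5: 4}

assert eventually_periodic(f, 3) == (1, 2)   # tree edge 3->2, then cycle (1 2)
assert eventually_periodic(f, 5) == (1, 1)   # tree edge 5->4, then fixed point 4
```

Since the domain is finite, the iterates must repeat, which is the proof sketched by tracing a finger around the endograph.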

    This is the endograph of the polynomial $2n^9+5n^8+n^7+4n^6+9n^5+1$ over the finite field of 11 elements.

    Here is another endograph:

    I constructed this explicitly by writing a list of rules, and then used Mathematica’s interpolating polynomial to determine that it is given by the polynomial

    $6x^{16}+13x^{15}+x^{14}+3x^{13}+10x^{12}+5x^{11}+14x^{10}+4x^9+9x^8+x^7+14x^6+15x^5+16x^4+14x^3+4x^2+15x+11$

    in GF[17].
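Mathematica's InterpolatingPolynomial step can be imitated over a prime field with Lagrange interpolation. Here is a hedged Python sketch; the function name is mine and the small mod-11 check is my own example, not the 17-element one above:

```python
def interpolate_mod_p(points, p):
    """Lagrange interpolation over GF(p): return the coefficients
    (constant term first) of the polynomial through the given points."""
    n = len(points)
    coeffs = [0] * n
    for i, (xi, yi) in enumerate(points):
        # Build the i-th Lagrange basis polynomial, then scale by yi.
        basis = [1]          # the polynomial "1"
        denom = 1
        for j, (xj, _) in enumerate(points):
            if j == i:
                continue
            # multiply basis by (x - xj), coefficient by coefficient
            new = [0] * (len(basis) + 1)
            for k, c in enumerate(basis):
                new[k] = (new[k] - c * xj) % p
                new[k + 1] = (new[k + 1] + c) % p
            basis = new
            denom = denom * (xi - xj) % p
        scale = yi * pow(denom, -1, p) % p   # modular inverse of denom
        for k, c in enumerate(basis):
            coeffs[k] = (coeffs[k] + c * scale) % p
    return coeffs

# Recover n^2 + 1 from its values mod 11:
pts = [(n, (n * n + 1) % 11) for n in range(11)]
c = interpolate_mod_p(pts, 11)
assert c[:3] == [1, 0, 1] and all(x == 0 for x in c[3:])
```

Since a polynomial of degree less than $p$ over GF($p$) is determined by its values, any finite function on GF($p$) is given by some polynomial, which is how the endographs above were matched to polynomials.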

    Quite a bit is known about polynomials over finite fields that give permutations.  For example, there is an easy proof using interpolating polynomials that a polynomial that gives a transposition must have degree $q-2$.  The best reference for this stuff is Lidl and Niederreiter, Introduction to Finite Fields and their Applications.

    The endographs above raise questions such as: what can you say about the degree or coefficients of a polynomial that gives a digraph, like the function $f$ below, that is idempotent ($f\circ f=f$)?  Students find it difficult to distinguish idempotence from involution.  Digraphs show you almost immediately what is going on.  Stare at the digraph below for a bit and you will see that if you follow $f$ to a node and then follow it again, you stay where you are (the function is the identity on its image).  That’s another example of the insights you can get from a new metaphor for a mathematical object.

    The following function is not idempotent even though it has only trivial loops.  But the digraph does tell you easily that it satisfies $f^4=f^3$.
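Both idempotence and the $f^4=f^3$ behavior can be verified mechanically. A Python sketch with small made-up functions (mine, not the ones in the pictures):

```python
def compose(f, g):
    """The composite x -> f(g(x)) of finite functions encoded as dicts."""
    return {x: f[g[x]] for x in g}

# Idempotent: f(f(x)) = f(x); f is the identity on its image.
idem = {0: 1, 1: 1, 2: 3, 3: 3}
assert compose(idem, idem) == idem

# Not idempotent, but satisfies f^4 = f^3:
f = {0: 1, 1: 2, 2: 3, 3: 3}
f2 = compose(f, f)
f3 = compose(f, f2)
f4 = compose(f, f3)
assert f3 == f4 and f2 != f3
```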


    [1] Atish Bagchi and I have contributed to this goal in Graph Based Logic and Sketches, which gives a bare glimpse of the possibility of considering that the real objects of logic are diagrams and their limits and morphisms between them, rather than hard-to-parse strings of letters and logical symbols.  Implementing this (and implementing Brett Victor’s ideas) will require sophisticated computer support.  But that support is coming into existence.  We won’t have to live with string-based math forever.

    [2] The Mathematica notebook used to produce these pictures is here.  It has lots of other examples.


    Computable algebraic expressions in tree form

    Invisible algebra

    1. An expression such as $4(x-2)=6$ has an invisible abstract structure.  In this simple case it is

    using the style of presenting trees used in academic computing science.  The parentheses are a clue to the structure; omitting them results in  $4x-2=6$, which has the different structure

    By the time students take calculus they supposedly have learned to perceive and work with this invisible structure, but many of them still struggle with it.  They have a lot of trouble with more complex expressions, but even something like $\sin x + y$ gives some of them trouble.
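Python's own parser can make the invisible structure visible. Here is a sketch using the standard ast module (note that Python requires the explicit *, unlike school algebra):

```python
import ast

# Python requires the explicit "*", but the tree shape is the point.
with_parens = ast.parse("4*(x-2)", mode="eval").body
without = ast.parse("4*x-2", mode="eval").body

# 4*(x-2): the root is the multiplication; the subtraction sits lower.
assert isinstance(with_parens, ast.BinOp)
assert isinstance(with_parens.op, ast.Mult)
assert isinstance(with_parens.right, ast.BinOp)   # the (x - 2) subtree
assert isinstance(with_parens.right.op, ast.Sub)

# 4*x-2: the root is the subtraction; the multiplication sits lower.
assert isinstance(without.op, ast.Sub)
assert isinstance(without.left.op, ast.Mult)
```

The parentheses never appear in either tree; they only steer the parser toward one shape or the other, which is exactly the point students need to grasp.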

    Make the invisible visible

    The tree expression makes the invisible structure explicit. Some math educators such as Jason Dyer and Bret Victor have experimented with the idea of students working directly with a structured form of an algebraic expression, including making the structured form interactive.

    How could the tree structure be used to help struggling algebra students?

    1) If they are learning on the computer, the program could provide the tree structure at the push of a button. Lessons could be designed to present algebraic expressions that look similar but have different structure.

    2) You could point out things such as:

    a) “inside the parentheses pushes it lower in the tree”
    b) “lower in the tree means it is calculated earlier”

    3) More radically, you could teach algebra directly using the tree structure, with the intention of introducing the expression-as-a-string form later.  This is analogous to the use of the Initial Teaching Alphabet for beginners at reading, and also to the use of shape notes to teach sight reading of music for singing.  Both of these methods have been shown to help beginners, but the ITA didn’t catch on, and although lots of people still sing from shape notes (see Note 1), shape notes are not, as far as I know, used for teaching in school.

    4) You could produce an interactive form of the structure tree that the student could use to find the value or solve the equation.  But that needs a section to itself.

    Interactive trees

    When I discovered the TreeForm command in Mathematica (which I used to make the trees above), I was inspired to use it and the Manipulate command to make the tree interactive.

    This is a screenshot of what Mathematica shows you.  When this is running in Mathematica, moving the slider back and forth causes the dependent values in the tree to change as well, and when you slide to 3.5, the slot corresponding to $4(x-2)$ becomes 6 and the slot over “Equals” becomes “True”:

    As seen in this post, these are just screen shots that you can’t manipulate.  The Mathematica notebook Expressions.nb gives the code for this and lets you experiment with it.  If you don’t have Mathematica available to you, you can still manipulate the tree with the slider if you download the CDF form of the notebook and open it in Mathematica CDF Player, which is available free here.  The abstractmath website has other notebooks you may want to look at as well.

    Moving the slider back and forth constitutes finding the correct value of $x$ by experiment.  This is a peculiar form of bottom-up evaluation.   With an expression whose root node is a value rather than an equation, wiggling the slider constitutes calculating various values with all the intermediate steps shown as you move it.  Bret Victor’s blog shows a similar system, though without showing the tree.

    Another way to use the tree is to arrange to show it with the calculated values blank.  (The constants and the labels showing the operation would remain.)   The student could start at the top blank space (over Times)  and put in the required value, which would obviously have to be 6 to make the space over Equals change to “True”.  Then the blank space over Plus would have to be 1.5 in order to make multiplying it by 4 be 6.  Then the bottom left blank space would have to be 3.5 to make it equal to 1.5 when -2 is added.  This is top down evaluation.
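The top-down walk just described can be written out as a short Python sketch, inverting one operation at a time:

```python
# Solve 4*(x - 2) = 6 top-down, filling in one blank node at a time.
target = 6                      # the slot over Times must be 6 (for "True")
after_times = target / 4        # undo the multiplication: x - 2 = 1.5
after_plus = after_times + 2    # undo the subtraction:    x = 3.5

assert after_times == 1.5
assert after_plus == 3.5
assert 4 * (after_plus - 2) == 6    # check by evaluating bottom-up
```

Each assignment corresponds to filling in one blank space on the tree, from the root down to the leaf holding $x$.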

    You could have the student enter these numbers in the blank spaces on the computer or print out the tree with blank spaces and have them do it with a pencil.  Jason Dyer’s blog has examples.


    My example code in the notebook is a kludge.  If you defined a special VertexRenderingFunction for TreeForm in Mathematica, you could create a function that would turn any algebraic expression into a manipulatable tree with a slider like the one above (or one with blank spaces to be filled in).  [Note 2]. I expect I will work on that some time soon, but my main desire in this series of blog posts is to throw out ideas, with some Mathematica code attached, that others might want to develop further. You are free to reuse all the Mathematica code and all my blog posts under the Creative Commons Attribution – ShareAlike 3.0 License.  I would like to encourage this kind of open-source behavior.


    1. Including me every Tuesday at 5:30 pm in Minneapolis (commercial).

    2. There is a problem with Equals.  In the hacked example above I set the slider’s increment to 0.1, so that the correct value 3.5 occurs when you slide.  If you had an equation with an irrational root this would not work.  One thing that should work is to introduce a fuzzy form of Equals, with the slider increment smaller than the latitude allowed by the fuzzy Equals.
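The fuzzy Equals idea can be sketched in Python with math.isclose (the tolerance values are my guesses, not taken from the notebook):

```python
from math import isclose

STEP = 0.1   # slider increment
TOL = 0.2    # fuzzy-Equals latitude: at least half the value-jump per step

def fuzzy_solved(x):
    # fuzzy version of "4*(x - 2) equals 6"
    return isclose(4 * (x - 2), 6, abs_tol=TOL)

# Sliding in steps of 0.1 hits the exact root 3.5:
assert fuzzy_solved(3.5)
assert not fuzzy_solved(3.3)

# For an irrational root, e.g. x^2 = 2, stepping by 0.1 never lands
# exactly, but a fuzzy test still fires near x = 1.4:
assert any(isclose((k * STEP) ** 2, 2, abs_tol=0.15) for k in range(20))
```

The design point is that the latitude must exceed how much the expression's value changes per slider step, or the fuzzy test can step right over the root.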


    Endograph and cograph of real functions

    This post is covered by the Creative Commons Attribution – ShareAlike 3.0 License, which means you may use, adapt and distribute the work provided you follow the requirements of the license.


    In the article Functions: Images and Metaphors in abstractmath I list a bunch of different images or metaphors for thinking about functions. Some of these metaphors have realizations in pictures, such as a graph or a surface shown by level curves. Others have typographical representations, as formulas, algorithms or flowcharts (which are also pictorial). There are kinetic metaphors — the graph of {y=x^2} swoops up to the right.

    Many of these same metaphors have realizations in actual mathematical representations.

    Two images (mentioned only briefly in the abstractmath article) are the cograph and the endograph of a real function of one variable. Both of these are visualizations that correspond to mathematical representations. These representations have been used occasionally in texts, but are not used as much as the usual graph of a continuous function. I think they would be useful in teaching and perhaps even sometimes in research.

    A rough and unfinished Mathematica notebook is available that contains code that generates endographs and cographs of real-valued functions. I used it to generate most of the examples in this post, and it contains many other examples. (Note [1].)

    The endograph of a function

    In principle, the endograph (Note [2]) of a function {f} has a dot for each element of the domain and of the codomain, and an arrow from {x} to {f(x)} for each {x} in the domain. For example, this is the endograph of the function {n\mapsto n^2+1 \pmod 11} from the set {\{0,1,\ldots,10\}} to itself:

    “In principle” means that the entire endograph can be shown only for small finite functions. This is analogous to the way calculus books refer to a graph as “the graph of the squaring function” when in fact the infinite tails are cut off.
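The arrow list of that endograph is easy to generate by machine. A Python sketch (the dict encoding is mine):

```python
# Arrows of the endograph of n -> n^2 + 1 (mod 11) on {0,...,10}.
arrows = {n: (n * n + 1) % 11 for n in range(11)}

assert arrows[0] == 1
assert arrows[3] == 10
assert arrows[10] == 2     # 101 mod 11 = 2

# Every element has exactly one outgoing arrow (it is a function),
# but not every element has an incoming one (it is not a permutation):
assert len(arrows) == 11
assert len(set(arrows.values())) < 11
```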

    Real endographs

    I expect to discuss finite endographs in another post. Here I will concentrate on endographs of continuous functions with domain and codomain that are connected subsets of the real numbers. I believe that they could be used to good effect in teaching math at the college level.

    Here is the endograph of the function {y=x^2} on the reals:

    I have displayed this endograph with the real line drawn in the usual way, with tick marks showing the location of the points on the part shown.

    The distance function on the reals gives us a way of interpreting the spacing and location of the arrowheads. This means that information can be gleaned from the graph even though only a finite number of arrows are shown. For example, you see immediately that the function has only nonnegative values and that it grows faster and faster as {x} increases. (See Note [3].)

    I think it would be useful to show students endographs such as this and ask them specific questions about why the arrows do what they do.

    For the one shown, you could ask these questions, probably for class discussion rather than on homework.

    • Explain why most of the arrows go to the right. (They go left only between 0 and 1 — and this graph has such a coarse point selection that it shows only two arrows doing that!)
    • Why do the arrows cross over each other? (Tricky question — they wouldn’t cross over if you drew the arrows with negative input below the line instead of above.)
    • What does it say about the function that every arrowhead except two has two curves going into it?

    Real Cographs

    The cograph (Note [4]) of a real function has an arrow from input to output just as the endograph does, but the graph represents the domain and codomain as their disjoint union. In this post the domain is a horizontal representation of the real line and the codomain is another such representation below the domain. You may also represent them in other configurations (Note [5]).

    Here is the cograph representation of the function {y=x^2}. Compare it with the endograph representation above.

    Besides the question of most arrows going to the right, you could also ask what is the envelope curve on the left.

    More examples

    Absolute value function

    Arctangent function


    [1] This website contains other notebooks you might find useful. They are in Mathematica .nb, .nbp, or .cdf formats, and can be read, evaluated and modified if you have Mathematica 8.0. They can also be made to appear in your browser with Wolfram CDF Player, downloadable free from Wolfram site. The CDF player allows you to operate any interactive demos contained in the file, but you can’t evaluate or modify the file without Mathematica.

    The notebooks are mostly raw code with few comments. They are covered by the Creative Commons Attribution – ShareAlike 3.0 License, which means you may use, adapt and distribute the code following the requirements of the license. I am making the files available because I doubt that I will refine them into respectable CDF files any time soon.

    [2] I call them “endographs” to avoid confusion with the usual graphs of functions: drawings of (some of) the set of ordered pairs {(x,f(x))} of the function.

    [3] This is in contrast to a function defined on a discrete set, where the elements of the domain and codomain can be arranged in any old way. Then the significance of the resulting arrangement of the arrows lies entirely in which two dots they connect. Even then, some things can be seen immediately: whether the function is a cycle, a permutation, an involution, idempotent, and so on.

    Of course, the placement of the arrows may tell you more if the finite sets are ordered in a natural way, as for example a function on the integers modulo some integer.

    [4] The text [1] uses the cograph representation extensively. The word “cograph” is being used with its standard meaning in category theory. It is used by graph theorists with an entirely different meaning.

    [5] It would also be possible to show the domain and codomain in the usual {x-y} plane arrangement, with the domain on the {x} axis and the codomain on the {y} axis. I have not written the code for this yet.


    [1] Sets for Mathematics, by F. William Lawvere and Robert Rosebrugh. Cambridge University Press, 2003.

    [2] Martin Flashman’s website contains many examples of cographs of functions, which he calls mapping diagrams.


    Syntax Trees in Mathematicians’ Brains

    Understanding the quadratic formula

    In my last post I wrote about how a student’s pattern recognition mechanism can go awry in applying the quadratic formula.

    The template for the quadratic formula says that the solution of a quadratic equation of the form ${ax^2+bx+c=0}$ is given by the formula

    $\displaystyle x=\frac{-b\pm\sqrt{b^2-4ac}}{2a}$

    When you ask students to solve ${a+bx+cx^2=0}$ some may write

    $\displaystyle x=\frac{-b\pm\sqrt{b^2-4ac}}{2a}$

    instead of

    $\displaystyle x=\frac{-b\pm\sqrt{b^2-4ac}}{2c}$

    That’s because they have memorized the template in terms of the letters ${a}$, ${b}$ and ${c}$ instead of in terms of their structural meaning — $ {a}$ is the coefficient of the quadratic term, ${c}$ is the constant term, etc.

    The problem occurs because there is a clash between the occurrences of the letters “a”, “b”, and “c” in the template and in the equation to solve. But maybe the confusion would occur anyway, just because of the ordering of the coefficients. As I asked in the previous post, what happens if students are asked to solve $ {3+5x+2x^2=0}$ after having learned the quadratic formula in terms of ${ax^2+bx+c=0}$? Some may make the same kind of mistake, getting ${x=-1}$ and ${x=-\frac{2}{3}}$ instead of $ {x=-1}$ and $ {x=-\frac{3}{2}}$. Has anyone ever investigated this sort of thing?
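The correct and mistaken readings are easy to compare by machine. A Python sketch (the parameter names quad, lin, const deliberately avoid the letter clash):

```python
from math import sqrt

def roots(quad, lin, const):
    """Roots of quad*x^2 + lin*x + const = 0 by the quadratic formula."""
    d = sqrt(lin * lin - 4 * quad * const)
    return sorted([(-lin + d) / (2 * quad), (-lin - d) / (2 * quad)])

# 3 + 5x + 2x^2 = 0, read correctly (quadratic coefficient 2):
assert roots(2, 5, 3) == [-1.5, -1.0]

# The template mistake: treating the first-written 3 as "a":
assert roots(3, 5, 2) == [-1.0, -2/3]
```

Naming the parameters by their structural roles, rather than a, b, c, is itself a small illustration of the point about structural meaning.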

    People do pattern recognition remarkably well, but how they do it is mysterious. Just as mistakes in speech may give the linguist a clue as to how the brain processes language, students’ mistakes may tell us something about how pattern recognition works in parsing symbolic statements as well as perhaps suggesting ways to teach them the correct understanding of the quadratic formula.

    Syntactic Structure

    “Structural meaning” refers to the syntactic structure of a mathematical expression such as ${3+5x+2x^2}$. It can be represented as a tree:


    This is more or less the way a program compiler or interpreter for some language would represent the polynomial. I believe it corresponds pretty well to the organization of the quadratic-polynomial parser in a mathematician’s brain. This is not surprising: The compiler writer would have to have in mind the correct understanding of how polynomials are evaluated in order to write a correct compiler.

    Linguists represent English sentences with syntax trees, too. This is a deep and complicated subject, but the kind of tree they would use to represent a sentence such as “My cousin saw a large ship” would look like this:

    Parsing by mathematicians

    Presumably a mathematician has constructed a parser that builds a structure in their brain corresponding to a quadratic polynomial, using the same mechanisms by which, as a child, they learned to parse sentences in their native language. The mathematician learned this mostly unconsciously, just as a child learns a language. In any case it shouldn’t be surprising that the mathematician’s syntax tree for the polynomial is similar to the compiler’s.

    Students who are not yet skilled in algebra have presumably constructed incorrect syntax trees, just as young children do for their native language.

    Lots of theoretical work has been done on human parsing of natural language. Parsing mathematical symbolism to be compiled into a computer program is well understood. You can get a start on both of these by reading the Wikipedia articles on parsing and on syntax trees.

    There are papers on students’ misunderstandings of mathematical notation. Two articles I recently turned up in a Google search are:

    Both of these papers talk specifically about the syntax of mathematical expressions. I know I have read other such papers in the past, as well.

    What I have not found is any study of how the trained mathematician parses mathematical expression.

    For one thing, for my parsing of the expression $ {3+5x+2x^2}$, the branching is wrong in (1). I think of ${3+5x+2x^2}$ as “Take 3 and add $ {5x}$ to it and then add ${2x^2}$ to that”, which would require the shape of the tree to be like this:

    I am saying this from introspection, which is dangerous!

    Of course, a compiler may group it that way, too, although my dim recollection of the little bit I understand about compilers is that they tend to group it as in (1) because they read the expression from left to right.

    This difference in compiling is well-understood.  Another difference is that the expression could be compiled using addition as an operator on a list, in this case a list of length 3.  I don’t visualize quadratics that way but I certainly understand that it is equivalent to the tree in Diagram (1).  Maybe some mathematicians do think that way.

    But these observations indicate what might be learned about mathematicians’ understanding of mathematical expressions if linguists and mathematicians got together to study human parsing of expressions by trained mathematicians.

    Some educational constructivists argue against the idea that there is only one correct way to understand a mathematical expression.  To have many metaphors for thinking about math is great, but I believe we want uniformity of understanding of the symbolism, at least in the narrow sense of parsing, so that we can communicate dependably.  It would be really neat if we discovered deep differences in parsing among mathematicians.  It would also be neat if we discovered that mathematicians parsed in generally the same way!


    Learning by osmosis

    In the Handbook, I said:

    The osmosis theory of teaching is this attitude: We should not have to teach students to understand the way mathematics is written, or the finer points of logic (for example how quantifiers are negated). They should be able to figure these things out on their own —“learn it by osmosis”. If they cannot do that they are not qualified to major in mathematics.

    We learned our native language(s) as children by osmosis.  That does not imply that college students can or should learn mathematical reasoning that way. It does not even mean that college students should learn a foreign language that way.

    I have been meaning to write a section of Understanding Mathematics that describes the osmosis theory and gives lots of examples.  There are already three links from other places in abstractmath.org that point to it.  Too bad it doesn’t exist…

    Lately I have been teaching the Gauss-Jordan method using elementary row operations and found a good example.   The textbook uses the notation [m] + a[n] to mean “add a times row n to row m”.  In particular, [m] + [n] means “add row n to row m”, not “add row m to row n”.  So in this notation “[m] + [n]” is not an expression but a command, and in that command the plus sign is not commutative.   Similarly, “3[2]” (for example) does not mean “3 times row 2”; it means “change row 2 to 3 times row 2”.

    The explanation is given in parentheses in the middle of an example:

    …we add three times the first equation to the second equation.  (Abbreviation: [2] + 3[1].  The [2] means we are changing equation [2].  The expression [2] + 3[1] means that we are replacing equation 2 by the original equation plus three times equation 1.)

    This explanation, in my opinion, would be incomprehensible to many students, who would understand the meaning only once it was demonstrated at the board using a couple of examples.  The phrase “The [2] means we are changing equation [2]” should have said something like “the left number, [2] in this case, denotes the equation we are changing.”  The last sentence refers to “the original equation”, meaning equation [2].  How many readers would guess that is what they mean?

    In any case, better notation would be something like “[2] += 3[1]”. I have found several websites that use this sort of notation, sometimes written in the opposite direction. It is familiar to computer science students, which most of the students in my classes are.

    Putting the definition of the notation in a parenthetical remark is also undesirable.  It should be in a separate paragraph marked “Notation”.

    There is another point here: no verbal definition of this notation, however well written, can be understood as well as seeing it carried out in an example. This is also true of matrix multiplication, whose definition in terms of a formula such as c_{ij} = ∑_k a_{ik}b_{kj} is difficult to understand (if a student can figure out how to do it from this definition, they should be encouraged to be a math major), whereas the process becomes immediately clear when you see someone pointing with one hand at successive entries in a row of one matrix while pointing with the other hand at successive entries in the other matrix’s columns. This is an example of the superiority (in many cases) of pattern recognition over definitions given as strings of symbols to be interpreted. I did write about pattern recognition, here.
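The two-hands description translates directly into code; a short Python sketch (my own, purely illustrative):

```python
def matmul(A, B):
    """Entry C[i][j] is the sum of A[i][k]*B[k][j]: walk along row i of A
    while walking down column j of B, multiplying and adding as you go."""
    rows, inner, cols = len(A), len(B), len(B[0])
    C = [[0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            for k in range(inner):  # the two "pointing hands" move together
                C[i][j] += A[i][k] * B[k][j]
    return C

print(matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```

The innermost loop is the moment of pointing: k advances along a row of A and down a column of B simultaneously.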


    Syntactic and semantic thinkers

    A paper by Keith Weber

    Reidar Mosvold’s math-ed blog recently provided a link to an article by Keith Weber (Reference [2]) about a very good university math student he refers to as a “syntactic reasoner”. Weber interviewed the student in depth as the student worked on some proofs suitable to his level. The student would “write the proofs out in quantifiers” and reason based on previous steps of the proof in a syntactic way rather than depending on an intuitive understanding of the problem, as many of us do (the author calls us semantic reasoners). The student didn’t think about specific examples — he always tried to make them as abstract as possible while letting them remain examples (or counterexamples).

    I recommend this paper if you are at all interested in math education at the university math major level — it is fascinating.  It made all sorts of connections for me with other ideas about how we think about math that I have thought about for years and which appear in the Understanding Math part of abstractmath.org.  It also raises lots of new (to me) questions.

    Weber’s paper talks mostly about how the student comes up with a proof.  I suspect that the distinction between syntactic reasoners and semantic reasoners can be seen in other aspects of mathematical behavior, too, in trying to understand and explain math concepts.  Some thoughts:

    Other behaviors of syntactic reasoners (maybe)

    1) Many mathematicians (and good math students) explain math using conceptual and geometric images and metaphors, as described in Images and metaphors in abstractmath.org. Some people I think of as syntactic reasoners seem to avoid such things. Some of them even deny thinking in images and metaphors, as I discussed in the post Thinking without words. It used to be that even semantic reasoners were embarrassed to use images and metaphors when lecturing (see the post How “math is logic” ruined math for a generation).

    2) In my experience, syntactic reasoners like to use first-order symbolic notation (the example formula appeared here as an image) and will often translate a complicated sentence in ordinary mathematical English into this notation so they can understand it better. (Weber describes the student he interviewed as doing this.) Furthermore, they seem to think that putting such a formula on the board says it all, so they don’t need to draw pictures, wave their hands [Note 1], and so on. When you come up with a picture of a concept or theorem that you claim explains it, their first impulse is to say it out in words that can generally be translated very easily into first-order symbolism, and to say that is what is going on. It is a matter of what is primary.
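As an illustration of the sort of translation meant here (my own example, not the formula from the original post):

```latex
% "Every positive real number has a square root" becomes
\forall x\,\bigl(x > 0 \implies \exists y\,(y^{2} = x)\bigr)
```

The quantifier structure that a syntactic reasoner works from is fully explicit in the symbolic form, while the English sentence leaves it implicit.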

    Semantic reasoners among students, and (I think) many mathematicians, find the symbolic notation difficult to parse and would rather have it written out in English. I am pretty good at reading such symbolic notation [Note 2], but I still prefer ordinary English.

    3) I suspect the syntactic reasoners also prefer to read proofs step by step, as I described in my post Grasshoppers and linear proofs, rather than skipping around like a grasshopper.

    And maybe not

    Now it may very well be that syntactic thinkers do not all exhibit all the behaviors I mentioned in (1)-(3). Perhaps the group is not cohesive in all those ways. Probably really good mathematicians use both techniques, although Weyl didn’t think so (he is quoted in Weber’s paper). I think of myself as an image-and-metaphor person, but I do use syntax, and sometimes I even find that a certain syntactic explanation feels like a genuinely useful insight, as in the example I discussed under “conceptual” in the Handbook.

    Distinctions among semantic thinkers

    Semantic thinkers differ among themselves.  One demarcation line is between those who use a lot of visual thinking and those who use conceptual thinking which is not necessarily visual.  I have known grad students who couldn’t understand how I could do group theory (that was in a Former Life, before category theory) because how could you “see” what was happening?  But the way I think about groups is certainly conceptual, not syntactic.  When I think of a group acting on a space I think of it as stirring the space around.  But the stirring is something I feel more than I see.  On the other hand, when I am thinking about the relationships between certain abstract objects, I “see” the different objects in different parts of an interior visual space.  For example, the group is on the right, stirring the space-acted-upon on the left, or the group is in one place, a subgroup is in another place while simultaneously being inside the group, and the cosets are grouped (sorry) together in a third place, being (guess what) stirred around by the group acting by conjugation [Note 3].

    This distinction between conceptual and visual, perhaps I should say visual-conceptual and non-visual-conceptual, both opposed to linguistic or syntactic reasoning, may or may not be as fundamental as syntactic vs semantic.   But it feels fundamental to me.

    Weber’s paper mentions an intriguing-sounding book by Burton (Reference [1]) which describes a three-way distinction among conceptual, visual and symbolic thinking that sounds like it might be the distinction I am discussing here. I have asked for it on ILL.


    Notes

    1. Handwaving is now called kinesthetic communication.  Just to keep you au courant.
    2. I took Joe Shoenfield’s course in logic when his book Mathematical Logic [3] was still purple.
    3. Clockwise for left action, counterclockwise for right action.  Not.


    References

    1. Leone L. Burton, Mathematicians as Enquirers: Learning about Learning Mathematics. Springer, 2004.
    2. Keith Weber, “How syntactic reasoners can develop understanding, evaluate conjectures, and generate counterexamples in advanced mathematics.” Proof copy available from ScienceDirect.
    3. Joseph Shoenfield, Mathematical Logic. Addison-Wesley, 1967; reprinted 2001 by the Association for Symbolic Logic.

    How "math is logic" ruined math for a generation

    Mark Meckes responded to my statement

    But it seems to me that this sort of thinking has mostly resulted in people thinking philosophy of math is merely a matter of logic and set theory.  That point of view has been ruinous to the practice of math.

    with this comment:

    I may be misreading your analysis of the second straw man, but you seem to imply that “people thinking philosophy of math is merely a matter of logic and set theory” has done great damage to mathematics. I think that’s quite an overstatement. It means that in practice, mathematicians find philosophy of mathematics to be irrelevant and useless. Perhaps philosophers of mathematics could in principle have something to say that mathematicians would find helpful but in practice they don’t; however, we’re getting along quite well without their help.

    On the other hand, maybe you only meant that people who think “philosophy of math is merely a matter of logic and set theory” are handicapped in their own ability to do mathematics. Again, I think most mathematicians get along fine just not thinking about philosophy.

    Mark is right that at least this aspect of philosophy of math is irrelevant and useless to mathematicians. But my remark that the attitude that “philosophy of math is merely a matter of logic and set theory” has been ruinous to math was sloppy; it was not what I should have said. I was thinking of a related phenomenon, one that was ruinous to math communication and teaching.

    By the 1950s many mathematicians had adopted the attitude that all there is to math is theorem and proof. Images, metaphors and the like were regarded as misleading and as resulting in incorrect proofs. (I am not going to get into how this attitude came about.) Teachers and colloquium lecturers suppressed intuitive insights and motivations in their talks and just stated the theorem and went through the proof.

    I believe both expository and research papers were affected by this as well, but I would not be able to defend that with citations.

    I was a math student from 1959 through 1965. My undergraduate calculus (and advanced calculus) teacher was a very good teacher, but he was affected by this tendency. He knew he had to give us intuitive insights, but he would say things like “close the door” and “don’t tell anyone I said this” before he did. His attitude seemed to be that this was not real math and was slightly shameful to talk about. Most of my other undergrad teachers simply did not give us insights.

    In graduate school I had courses in Lie Algebra and Mathematical Logic from the same teacher.   He was excellent at giving us theorem-proof lectures, much better than most teachers, but he never gave us any geometric insights into Lie Algebra (I never heard him say anything about differential equations!) or any idea of the significance of mathematical logic.  We went through Killing’s classification theorem and Gödel’s incompleteness theorem in a very thorough way and I came out of his courses pleased with my understanding of the subject matter.  But I had no idea what either one of them had to do with any other part of math.

    I had another teacher for several courses in algebra and various levels of number theory. He was not much for insights, metaphors and the like, but he did do well in explaining how you come up with a proof. My teacher in point-set topology was absolutely awful and turned me off the Moore method forever. The Moore method seems to be based on this principle: don’t give the student any insights whatever. I have to say that one of my fellow students thought the Moore method was the best thing since sliced bread and went on to get a degree from this teacher.

    These dismal years in math teaching lasted through the seventies and perhaps into the eighties. Apparently younger professors are now much more given to insights, images and metaphors, and to some extent to pointing out connections with the rest of math and science. Since I retired in 1999 I don’t have much exposure to the newer generation, so I am not sure how thoroughly things have changed.

    One noticeable phenomenon was that category theorists (I got into category theory in the mid-seventies) were very assiduous, in lectures and to some extent in papers, about giving motivation and insight. It may be that attitudes varied a lot between different disciplines.

    This Dark Ages of math teaching was one of the motivations for abstractmath.org.  My belief is that not only should we give the students insights, images and metaphors to think about objects, and so on, but that we should be upfront about it:   Tell them what we are doing (don’t just mutter the word “intuitive”) and point out that these insights are necessary for understanding but are dangerous when used in proofs.  Tell them these things with examples. In every class.

    My other main motivation for abstractmath.org was the way math language causes difficulties.  But that is another story.
