
Pattern recognition in understanding math

Abstract patterns

This post is a revision of the article on pattern recognition in abstractmath.org.

When you do math, you must recognize abstract patterns that occur in

  • Symbolic expressions
  • Geometric figures
  • Relations between different kinds of math structures
  • Your own mental representations of mathematical objects

This happens in high school algebra and in calculus, not just in the higher levels of abstract math.

Examples

Most of these examples are revisited in the section called Laws and Constraints.

At most

For real numbers $x$ and $y$, the phrase “$x$ is at most $y$” means by definition $x\le y$. To understand this definition requires recognizing the pattern “$x$ is at most $y$” no matter what expressions occur in place of $x$ and $y$, as long as they evaluate to real numbers.

Examples

  • “$\sin x$ is at most $1$” means that $\sin x\le 1$. This happens to be true for all real $x$.
  • “$3$ is at most $7$” means that $3\leq7$. You may think that “$3$ is at most $7$” is a silly thing to say, but it nevertheless means that $3\leq7$ and so is a correct statement.
  • “$x^2+(y-1)^2$ is at most $5$” means that $x^2+(y-1)^2\leq5$. This is true for some pairs $(x,y)$ and false for others, so it is a constraint. It defines a closed disk in the plane.

The product rule for derivatives

The product rule for differentiable functions $f$ and $g$ tells you that the derivative of $f(x)g(x)$ is \[f'(x)\,g(x)+f(x)\,g'(x)\]

Example

You recognize that the expression ${{x}^{2}}\sin x$ fits the pattern $f(x)g(x)$ with $f(x)={{x}^{2}}$ and $g(x)=\sin x$. Therefore you know that the derivative of ${{x}^{2}}\,\sin x$ is \[2x\sin x+{{x}^{2}}\cos x\]
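This pattern recognition can be checked numerically. Here is a quick sketch (my own, not part of the original exposition) that compares the product-rule answer with a central finite-difference approximation of the derivative:

```python
import math

def f(x): return x**2          # f(x) = x^2
def g(x): return math.sin(x)   # g(x) = sin x

def product_rule(x):
    # f'(x)g(x) + f(x)g'(x) with f'(x) = 2x and g'(x) = cos x
    return 2*x*math.sin(x) + x**2*math.cos(x)

# Central finite-difference approximation to the derivative of f(x)*g(x)
h = 1e-6
x = 1.3
approx = (f(x + h)*g(x + h) - f(x - h)*g(x - h)) / (2*h)
print(abs(approx - product_rule(x)) < 1e-5)  # True
```

The test point $x=1.3$ is arbitrary; any point works.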

The quadratic formula

The quadratic formula for the solutions of an equation of the form $a{{x}^{2}}+bx+c=0$ (with $a\neq0$) is usually given as \[r=\frac{-b\pm\sqrt{{{b}^{2}}-4ac}}{2a}\]

Example

If you are asked for the roots of $3{{x}^{2}}-2x-1=0$, you recognize that the polynomial on the left fits the pattern $a{{x}^{2}}+bx+c$ with

  • $a\leftarrow3$ (“$a$ replaced by $3$”)
  • $b\leftarrow-2$
  • and $c\leftarrow-1$.

Then
substituting those values in the quadratic formula gives you the roots $-1/3$ and $1$.
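The substitution can be carried out mechanically. A minimal sketch in Python (the function name `quadratic_roots` is mine, and only the real-discriminant case is handled):

```python
import math

def quadratic_roots(a, b, c):
    """Roots of a*x**2 + b*x + c = 0 via the quadratic formula (real case)."""
    disc = b**2 - 4*a*c
    r = math.sqrt(disc)
    return ((-b + r) / (2*a), (-b - r) / (2*a))

roots = quadratic_roots(3, -2, -1)   # a <- 3, b <- -2, c <- -1
print(sorted(roots))                 # [-0.3333333333333333, 1.0]
```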

Difficulties with the quadratic formula

A little problem

The quadratic formula is easy to use but it can still cause pattern recognition problems. Suppose you are asked to find the solutions of $3{{x}^{2}}-7=0$. Of course you can do this by simple algebra — but pretend that the first thing you thought of was using the quadratic formula.

  • Then you got upset because you have to apply it to $a{{x}^{2}}+bx+c$
  • and $3{{x}^{2}}-7$ has only two terms
  • but $a{{x}^{2}}+bx+c$ has three terms…
  • (Help!)
  • Do Not Be Anguished:
  • Write
    $3{{x}^{2}}-7$ as $3{{x}^{2}}+0\cdot x-7$, so $a=3$, $b=0$ and $c=-7$.
  • Then put those values into the quadratic formula and you get $x=\pm \sqrt{\frac{7}{3}}$.   
  • This is an example of the following useful principle:


    Write zero cleverly.

    I suspect that most people reading this would not have had the problem with $3{{x}^{2}}-7$ that I have just described. But before you get all insulted, remember:


    The thing about really easy examples is that they give you the point without getting you lost in some complicated stuff you don’t understand very well.

    A fiendisher problem

      Even college students may have trouble with the following problem (I know because I have tried it on them):

    What are the solutions of the equation $a+bx+c{{x}^{2}}=0$?

    The answer

    \[r=\frac{-b\pm\sqrt{{{b}^{2}}-4ac}}{2a}\]

    is wrong. The correct answer is

    \[r=\frac{-b\pm\sqrt{{{b}^{2}}-4ac}}{2c}\]


    When you remember a pattern with particular letters in it and an example has some of the same letters in it, make sure they match the pattern!
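A sketch of the point in Python, using a made-up example equation $1+3x+2{{x}^{2}}=0$: matching it against the pattern $a{{x}^{2}}+bx+c=0$ requires $a\leftarrow2$ and $c\leftarrow1$, not the other way around.

```python
import math

def quadratic_roots(a, b, c):
    # roots of a*x**2 + b*x + c = 0 (real case, a != 0)
    r = math.sqrt(b**2 - 4*a*c)
    return sorted([(-b + r) / (2*a), (-b - r) / (2*a)])

# The equation 1 + 3x + 2x^2 = 0 fits the pattern a*x^2 + b*x + c = 0
# with a <- 2, b <- 3, c <- 1 -- NOT with a <- 1, c <- 2.
print(quadratic_roots(2, 3, 1))   # [-1.0, -0.5]
```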

    The substitution rule for integration

    The chain rule says that the derivative of a function of the form $f(g(x))$ is $f'(g(x))g'(x)$. From this you get the substitution rule for finding indefinite integrals:

                                      \[\int{f'(g(x))g'(x)\,dx}=f(g(x))+C\]

    Example

    To find $\int{2x\,\cos ({{x}^{2}})\,dx}$, you recognize that you can take $f(x)=\sin x$ and $g(x)={{x}^{2}}$ in the formula, getting \[\int{2x\,\cos ({{x}^{2}})\,dx}=\sin ({{x}^{2}})+C\] Note that in the way I wrote the integral, the functions occur in the opposite order from the pattern. That kind of thing happens a lot.

    Laws and constraints

    • The statement “$(x+1)^2=x^2+2x+1$” is a pattern that is true for all numbers $x$. For example, $3^2=2^2+2\times2+1$ and $0^2=(-1)^2+2\times(-1)+1$, and so on. Such a pattern is a universal assertion, so it is a theorem. When the statement is an equation, as in this case, it is also called a law.
    • The statement “$\sin x\leq 1$” is also true for all $x$, and so is a theorem.
    • The statement “$x^2+(y-1)^2$ is at most $5$” is true for some real numbers and not others, so it is not a theorem, although it is a constraint.
    • The quadratic formula says that:
      The solutions of an equation of the form $a{{x}^{2}}+bx+c=0$ are given by \[r=\frac{-b\pm\sqrt{{{b}^{2}}-4ac}}{2a}\]

      This is true for all complex numbers $a$, $b$, $c$ with $a\neq0$.
      The $x$ in the equation is not a free variable, but a “variable to be solved for” and does not appear in the quadratic formula. Theorems like the quadratic formula are usually called “formulas” rather than “laws”.

    • The product rule for derivatives

      The derivative of $f(x)g(x)$ is $f'(x)\,g(x)+f(x)\,g'(x)$

      is true for all differentiable functions $f$ and $g$. That means it is true for both of these choices of $f$ and $g$:

      • $f(x)=x$ and $g(x)=x\sin x$
      • $f(x)=x^2$ and $g(x)=\sin x$

      But both choices of $f$ and $g$ refer to the same function $x^2\sin x$, so if you apply the product rule in either case you should get the same answer. (Try it).
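Carrying out the product rule for both choices (this fills in the “Try it”):

\[f(x)=x,\ g(x)=x\sin x:\quad 1\cdot x\sin x+x\,(\sin x+x\cos x)=2x\sin x+{{x}^{2}}\cos x\]

\[f(x)={{x}^{2}},\ g(x)=\sin x:\quad 2x\sin x+{{x}^{2}}\cos x\]

Both choices give the same answer, as the product rule promises.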

    Some bothersome types of pattern recognition

    Dependence on conventions

    Definition: A quadratic polynomial in $x$ is an expression of the form $a{{x}^{2}}+bx+c$.

    Examples

    • $-5{{x}^{2}}+32x-5$ is a quadratic polynomial: You have to recognize that it fits the pattern in the definition by writing it as $(-5){{x}^{2}}+32x+(-5)$
    • So is ${{x}^{2}}-1$: You have to recognize that it fits the definition by writing it as ${{x}^{2}}+0\cdot x+(-1)$ (I wrote zero cleverly).

    Some authors would just say, “A quadratic polynomial is an expression of the form $a{{x}^{2}}+bx+c$” leaving you to deduce from conventions on variables that it is a polynomial in $x$ instead of in $a$ (for example).

    Note also that I have deliberately not mentioned what sorts of numbers $a$, $b$, $c$ and $x$ are. The authors may assume that you know they are using real numbers.

    An expression as an instance of substitution

    One particular type of pattern recognition that comes up all the time in math is recognizing that a given expression is an instance of a substitution into a known expression.

    Example

    Students are sometimes baffled when a proof uses the fact that ${{2}^{n}}+{{2}^{n}}={{2}^{n+1}}$ for positive integers $n$. This requires the recognition of the patterns $x+x=2x$ and $2\cdot{{2}^{n}}={{2}^{n+1}}$.

    Similarly ${{3}^{n}}+{{3}^{n}}+{{3}^{n}}={{3}^{n+1}}$.

    Example

    The assertion

    \[{{x}^{2}}+{{y}^{2}}\ge 0\ \ \ \ \ \text{(1)}\]

    has as a special case

    \[(-x^2-y^2)^2+(y^2-x^2)^2\ge 0\ \ \ \ \ \text{(2)}\]

    which involves the substitutions $x\leftarrow -{{x}^{2}}-{{y}^{2}}$ and $y\leftarrow {{y}^{2}}-{{x}^{2}}$.

    Remarks
    • If you see (2) in a text and the author blithely says it is “never negative”, that is because it is of the form \[{{x}^{2}}+{{y}^{2}}\ge 0\] with certain expressions substituted for $x$ and $y$. (See substitution and The only axiom for algebra.)
    • The fact that there are minus signs in (2) and that $x$ and $y$ play different roles in (1) and in (2) are red herrings. See ratchet effect and variable clash.
    • Most people with some experience in algebra would see quickly that (2) is correct by using chunking. They would visualize (2) as

      \[(\text{something})^2+(\text{another something})^2\ge0\]
      This shows that in many cases


      chunking is a psychological inverse to substitution

    • Note that when you make these substitutions you have to insert appropriate parentheses (more here). After you make the substitution, the expression of course can be simplified a whole bunch, to

      \[2({{x}^{4}}+{{y}^{4}})\ge0\]

    • A common cause of error in doing this (a mistake I make sometimes) is to try to substitute and simplify at the same time. If the situation is complicated, it is best to

      substitute as literally as possible and then simplify
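For the record, here is the simplification carried out step by step (substitute literally, then expand and collect):

\[(-x^2-y^2)^2+(y^2-x^2)^2=(x^4+2x^2y^2+y^4)+(y^4-2x^2y^2+x^4)=2({{x}^{4}}+{{y}^{4}})\ge0\]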

    Integration by Parts

    The rule for integration by parts says that

                             \[\int{f(x)\,g'(x)\,dx=f(x)\,g(x)-\int{f'(x)\,g(x)\,dx}}\]

    Suppose you need to find $\int{\log x\,dx}$. (In abstractmath.org, “log” means ${{\log }_{e}}$.) Then we can recognize this integral as having the pattern for the left side of the parts formula with $f(x)=\log x$ and $g'(x)=1$, so that $g(x)=x$. Therefore

    \[\int{\log x\,dx}=x\log x-\int{\frac{1}{x}\cdot x\,dx}=x\log x-x+C\]

    How on earth did I think to recognize $\log x$ as $1\cdot \log x$??  
    Well, to tell the truth because some nerdy guy (perhaps I should say some other nerdy guy) clued me in when I was taking freshman calculus. Since then I have used this device lots of times without someone telling me — but not the first time.

    This is an example of another really useful principle:


    Write $1$ cleverly.
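As a sanity check (mine, not in the original), the antiderivative $x\log x-x$ can be differentiated numerically and compared with $\log x$:

```python
import math

def F(x):
    # Antiderivative from the parts computation: x*log(x) - x
    return x * math.log(x) - x

# Check F'(x) = log(x) numerically with a central difference.
h = 1e-6
ok = all(
    abs((F(x + h) - F(x - h)) / (2 * h) - math.log(x)) < 1e-6
    for x in [0.5, 1.0, 2.0, 10.0]
)
print(ok)  # True
```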

    Two different substitutions give the same expression

    Some proofs involve recognizing that a symbolic expression or figure fits a pattern in two different ways. This is illustrated by the next two examples. (See also the remark about the product rule above.) I have seen students flummoxed by Example ID, and Example ISO is a proof that is supposed to have flummoxed medieval geometry students.

    Example ID

    Definition: In a set with an associative binary operation and an identity element $e$, an element $y$ is the inverse of an element $x$ if

    \[xy=e\ \ \ \ \text{and}\ \ \ \ yx=e \ \ \ \ (1)\]

    In this situation, it is easy to see that $x$ has only one inverse: If $xy=e$ and $xz=e$ and $yx=e$ and $zx=e$, then \[y=ey=(zx)y=z(xy)=ze=z\]

    Theorem: ${{({{x}^{-1}})}^{-1}}=x$.

    Proof: I am given that ${{x}^{-1}}$ is the inverse of $x$. By definition, this means that

    \[x{{x}^{-1}}=e\ \ \ \text{and}\ \ \ {{x}^{-1}}x=e \ \ \ \ (2)\]

    To prove the theorem, I must show that $x$ is the inverse of ${{x}^{-1}}$. Because $x^{-1}$ has only one inverse, all we have to do is prove that

    \[{{x}^{-1}}x=e\ \ \ \text{and}\ \ \ x{{x}^{-1}}=e\ \ \ \ (3)\]  

    But (2) and (3) are equivalent! (“And” is commutative.)

    Example ISO

    This sort of double substitution occurs in geometry, too.

    Theorem: If a triangle has two equal angles, then it has two equal sides.

    Proof: In the figure, assume $\angle ABC=\angle ACB$. Then triangle $ABC$ is congruent to triangle $ACB$ since the sides $BC$ and $CB$ are equal (they are the same line segment!) and the adjoining angles are equal by hypothesis.

    The point is that although triangles $ABC$ and $ACB$ are the same triangle, and sides $BC$ and $CB$ are the same line segment, the proof involves recognizing them as geometric figures in two different ways.

    This proof (not Euclid’s original proof) is hundreds of years old and is called the pons asinorum (bridge of donkeys). It became famous as the first theorem in Euclid’s books that many medieval students could not understand. I conjecture that the name comes from the fact that the triangle as drawn here resembles an ancient arched bridge. These days, isosceles triangles are usually drawn taller than they are wide.

    Technical problems in carrying out pattern matching

    Parentheses

    In matching a pattern you may have to insert parentheses. For example, if you substitute $x+1$ for $a$, $2y$ for
    $b$ and $4$ for $c$ in the expression \[{{a}^{2}}+{{b}^{2}}={{c}^{2}}\] you get \[{{(x+1)}^{2}}+4{{y}^{2}}=16\]
    If you did the substitution literally without editing the expression so that it had the correct meaning, you would get \[x+{{1}^{2}}+2{{y}^{2}}={{4}^{2}}\] which is not the result of performing the substitution in the expression ${{a}^{2}}+{{b}^{2}}={{c}^{2}}$.   
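The difference is easy to see by evaluating both versions at arbitrary test values (my own illustration):

```python
# Substitute x+1 for a, 2y for b, and 4 for c in a**2 + b**2 == c**2,
# using arbitrary test values for x and y (my choice).
x, y = 3.0, 1.0

with_parens    = (x + 1)**2 + (2*y)**2   # (x+1)^2 + (2y)^2 = 16 + 4 = 20
without_parens = x + 1**2 + 2*y**2       # x + 1^2 + 2y^2   = 3 + 1 + 2 = 6
print(with_parens, without_parens)       # 20.0 6.0
```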

    Order switching

    You can easily get confused if the patterns involve a switch in the order of the variables.

    Notation for integer division

    • For integers $m$ and $n$, the phrase “$m$ divides $n$” means there is an integer $q$ for which $n=qm$.
    • In number theory (which in spite of its name means the theory of positive integers) the vertical bar is used to denote integer division. So $3|6$ because $6=2\times 3$ ($q$ is $2$ in this case). But “$3|7$” is false because there is no integer $q$ for which $7=q\times 3$.
    • An equivalent definition of division says that $m|n$ if and only if $n/m$ is an integer. Note that $6/3=2$, an integer, but $7/3$ is not an integer.
    • Now look at those expressions:
    • “$m|n$” means that there is an integer $q$ for which $n=qm$. In these two expressions, $m$ and $n$ occur in opposite order.
    • “$m|n$” is true if and only if $n/m$ is an integer. Again, they are in opposite order. Another way of writing $n/m$ is $\frac{n}{m}$. When math people pronounce “$\frac{n}{m}$” they usually say, “$n$ over $m$”, using the same order.
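In a program, the assertion “$m|n$” becomes the test `n % m == 0`, and the arguments still appear in the opposite order from the symbol. A minimal sketch (the function name `divides` is mine):

```python
def divides(m, n):
    """True when m | n, i.e. there is an integer q with n == q*m."""
    return n % m == 0

print(divides(3, 6))   # True:  6 = 2*3
print(divides(3, 7))   # False: 7/3 is not an integer
```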
  • I taught this notation in courses for computer engineering and math majors for years. Some of the students stayed hopelessly confused through several lectures and lost points repeatedly on homework and exams by getting these symbols wrong.
  • The problem was not helped by the fact that “$|$” and “$/$” are similar but have very different syntax:

    Math notation gives you no clue which symbols are operators (used to form expressions) and which are verbs (used to form assertions).

  • A majority of the students didn’t have so much trouble with this kind of syntax. I have noticed that many people have no sense of syntax and other people have good intuitive understanding of syntax. I suspect the second type of people find learning foreign languages easy.
  • Many of the articles in the references below concern syntax.
  • References

    Creative Commons License

    This work is licensed under a Creative Commons Attribution-ShareAlike 2.5 License.



    A mathematical saga

    This post outlines some of the intellectual developments in the history of math. I call it a saga because it is like one:

    • It is episodic, telling one story after another.
    • It does not try to give an accurate history, but its episodes resemble what happened in math in the last 3000 years.
    • It tells about only a few of the things that happened.

    Early techniques

    We represented numbers by symbols.

    Thousands of years ago, we figured out how to write down words and phrases in such a way that someone much later could read and understand them.

    Naturally, we wanted to keep records of the number of horses the Queen owned, so we came up with various notations for numbers (number representing count). In some societies, these symbols were separate from the symbols used to represent words.

    We invented algorithms

    We discovered positional notation. We write $213$, which is based on a system: it means $2\times100+1\times10+3\times 1$. This notation encapsulates a particular computation of a number (its base-10 representation). (The expression $190+23$ is another piece of notation that encapsulates a computation that yields $213$.)

    Compare that to the Roman notation $CCXIII$, which is an only partly organized jumble.
    Try adding $CCXIII+CDXXIX$. (The answer is $DCXLII$.)

    Positional notation allowed us to create the straightforward method of addition involving adding single digits and carrying:
    \[\begin{array}{r}\overset{\hspace{6pt}1\phantom{1}}{213}\\429\\\hline 642\end{array}\]
    Measuring land requires multiplication, which positional notation also allows us to perform easily.
    The invention of such algorithms (methodically manipulating symbols) made it easy to calculate with numbers.
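The carrying algorithm can be sketched as a short program (my own illustration, operating on lists of base-10 digits):

```python
def add_decimal(a_digits, b_digits):
    """Add two numbers given as lists of base-10 digits, least significant first."""
    result, carry = [], 0
    for i in range(max(len(a_digits), len(b_digits))):
        a = a_digits[i] if i < len(a_digits) else 0
        b = b_digits[i] if i < len(b_digits) else 0
        total = a + b + carry
        result.append(total % 10)   # digit to write down
        carry = total // 10         # digit to carry
    if carry:
        result.append(carry)
    return result

# 213 + 429 = 642, digits least-significant first
print(add_decimal([3, 1, 2], [9, 2, 4]))  # [2, 4, 6]
```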

    Geometry: Direct construction of mathematical objects

    We discovered geometry in ancient times, in laying out plots of land and designing buildings. We had a bunch of names for different shapes and for some of them we knew how to calculate their area, perimeter and other things.

    Euclid showed how to construct new geometric figures from given ones using specific methods (ruler and compasses) that preserve some properties.

    Example

    We can bisect a line segment (black) by drawing two circles (blue) centered at the endpoints with radius the length of the line segment. We then construct a line segment (red) between the points of intersection of the circles; it intersects the given line segment at its midpoint. These constructions can be thought of as algorithms creating and acting on geometric figures rather than on symbols.
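The construction can be checked with coordinates. Here is a sketch (my own, specialized to a horizontal segment for simplicity):

```python
import math

# Segment endpoints; circles of radius = the segment's length centered at each.
ax, ay = 0.0, 0.0
bx, by = 2.0, 0.0
d = math.hypot(bx - ax, by - ay)   # length of the segment
r = d                              # radius used in the construction

# With equal radii the two circles intersect on the perpendicular bisector,
# the vertical line x = (ax + bx)/2 for this horizontal segment.
x_mid = (ax + bx) / 2
half = math.sqrt(r**2 - (d / 2)**2)
p1 = (x_mid, half)    # upper intersection point
p2 = (x_mid, -half)   # lower intersection point

# The red segment p1-p2 crosses the black segment where y = 0: at the midpoint.
crossing = (x_mid, 0.0)
print(crossing)  # (1.0, 0.0)
```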



    It is true that diagrams were drawn to represent line segments, triangles and so on.
    But the diagrams are visualization helpers. The way we think about the process is that we are operating directly on the geometric objects to create new ones. We are thinking of the objects Platonically, although we don’t have to share Plato’s exact concept of their reality. It is enough to say we are thinking about the objects as if they were real.

    Axioms and theorems

    Euclid came up with the idea that we should write down axioms that are true of these figures and constructions, so that we can systematically use the constructions
    to prove theorems about figures using axioms and previously proved theorems. This provided documented reasoning (in natural language, not in symbols) for building up a collection of true statements about math objects.

    Example

    After creating some tools for proving triangles are congruent, we can prove that the intersection of red and black lines in the figure really is the midpoint of the black line by constructing the four green line segments below and making appeals to congruences between the triangles that show up:



    Note that the green lines have the same length as the black line.

    Euclid thought about axioms and theorems as applying to geometry, but he also proved theorems about numbers by representing them as ratios of line segments.

    Algebra

    People in ancient India and Greece knew how to solve linear and quadratic equations using verbal descriptions of what you should do.
    Later, we started using a symbolic language to express numerical problems and symbolic manipulation to solve (some of) them.

    Example

    The quadratic formula is an encapsulated computation that provides the roots of a quadratic equation. Newton’s method is a procedure for finding a root of an arbitrary polynomial. It is an algorithm only in a loose sense (it does not always give an answer).

    The symbolic language is a vast expansion of the symbolic notation for numbers. A major innovation was to introduce variables to represent unknowns and to state equations that are always true.

    Logic

    Aristotle developed an early form of logic (syllogisms) aimed at determining which arguments in rhetoric were sound. “All men are mortal. Socrates is a man. Therefore Socrates is mortal.” This was written in sentences, not in symbols.

    By explicit analogy with algebra, we introduced symbolism and manipulation rules for logical reasoning, with an eye toward making mathematical reasoning sound and to some extent computable. For example, in one dialect of logical notation, modus ponens (used in the Socrates syllogism) is expressed as $(P\rightarrow Q,\,P)\,\,\vdash\,\, Q$. This formula is an encapsulated algorithm: it says that if you know $P\rightarrow Q$ and $P$ are valid (are theorems) then $Q$ is valid as well.
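The deduction rule can itself be treated as an algorithm. A toy sketch (my own, with the implication represented as a pair $(P,Q)$ meaning $P\rightarrow Q$):

```python
def modus_ponens(theorems, implication):
    """Given a set of known theorems and an implication (P, Q) meaning P -> Q,
    add Q to the theorems if P is already known."""
    p, q = implication
    if p in theorems:
        return theorems | {q}
    return theorems

known = {"Socrates is a man"}
known = modus_ponens(known, ("Socrates is a man", "Socrates is mortal"))
print("Socrates is mortal" in known)  # True
```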

    Crises of understanding

    We struggled with the notion of function as a result of dealing with infinite series. For example, the limit of a sequence of algebraic expressions may not be an algebraic expression. It would no longer do to think of a function as the same thing as an algebraic expression.

    We realized that Euclid’s axioms for geometry lacked clarity. For example, as I understand it, the original version of his axioms didn’t imply that the two circles in the proof above had to intersect each other. There were other more subtle problems. Hilbert made a big effort to spell out the axioms in more detail.

    We refined our understanding of logic by trying to deal with the mysteries of calculus, limits and spaces. An example is the difference between continuity and uniform continuity.
    We also created infinitesimals, only to throw up our hands because we could not make a logic that fit them. Infinitesimals were temporarily replaced by the use of epsilon-delta methods.

    We began to understand that there are different kinds of spaces. For example, there were other models of some of Euclid’s axioms than just Euclidean space, and some of those models showed that the parallel axiom is independent of the other axioms. And we became aware of many kinds of topological spaces and manifolds.

    We started to investigate sets, in part because spaces have sets of points. Then we discovered that a perfectly innocent activity like considering the set of all sets resulted in an impossibility.

    We were led to consider how to understand the Axiom of Choice from several upsetting discoveries. For example, the Banach-Tarski “paradox” implies that you can rearrange the points in a sphere of radius $1$ to make two spheres of radius $1$.

    Mathematics adopts a new covenant… for awhile

    These problems caused a kind of tightening up, or rigorizing.
    For a period of time, less than a century, we settled into a standard way of practicing research mathematics called new math or modern math. Those names were used mostly by math educators. Research mathematicians might have called it axiomatic math based on set theory. Although I was around for the last part of that period I was not aware of any professional mathematicians calling it anything at all; it was just what we did.

    First, we would come up with a new concept, type of math object, or a new theorem. In this creative process we would freely use intuition, metaphors, images and analogies.

    Example

    We might come up with the idea that a function reaches its maximum when its graph swoops up from the left, then goes horizontally for an infinitesimal amount of time, then swoops down to the right. The point at which it was going horizontally would obviously have to be the maximum.

    But when we came to publish a paper about the subject, all these pictures would disappear. All our visual, metaphorical/conceptual and kinetic feelings that explain the phenomenon would have to be suppressed.

    Rigorizing consisted of a set of practices, which I will hint at:

    Orthodox behavior among mathematicians in 1950

    Definition in terms of sets and axioms

    Each mathematical object had to be defined in some way that started with a set and some other data defined in terms of the set. Axioms were imposed on these data. Everything had to be defined in terms of sets, including functions and relations. (Multiple sets were used occasionally.)

    Definitions done in this way omit a lot of the intuition that we have concerning the object being defined.

    Examples
    • The definition of group as a set with a binary operation satisfying some particular axioms does not tell you that groups constitute the essence of symmetry.
    • The definitions of equivalence relation and of partition do not even hint that they define the same concept.

    Even so, definitions done in this way have an advantage: They tend to be close to minimal in the sense that to verify that something fits the definition requires checking no more (or not much more) than necessary.

    Proofs had to be clearly translatable into symbolic logic

    First order logic (and other sorts of logic) was well developed, and proofs were written in a way that they could in principle be reduced to arguments written in the notation of symbolic logic, following the rules of inference of logic. This resulted in proofs which did not appeal to intuition, metaphors or pictures.

    Example

    In the case of the theorem that the maximum of a (differentiable) function occurs only where the derivative is zero, that meant epsilon-delta proofs in which the proof appeared as a thick string of symbols. Here, “thick” means it had superscripts, subscripts, and other things that gave the string a fractal dimension of about $1.2$ (just guessing!).

    Example

    When I was a student at Oberlin College in 1959, Fuzzy Vance (Elbridge P. Vance) would sometimes stop in the middle of an epsilon-delta proof and draw pictures and provide intuition. Before he started that he would say “Shut the door, don’t tell anyone”. (But he told us!)

    Example

    A more famous example of this is the story that Oscar Zariski, when presenting a proof in algebraic geometry at the board, would sometimes remind himself of a part of a proof by hunching over the board so the students couldn’t see what he was doing and drawing a diagram which he would immediately erase. (I fault him for not telling them about the diagram.)

    It doesn’t matter whether this story is true or not. It is true in the sense that any good myth is true.

    Commercial

    I wrote about rigor in these articles:

    Rigorous view in abstractmath.org.

    Dry bones, post in this blog.

    Logic and sets clarify but get in the way

    The orthodox method of “define it by sets and axioms” and “make proofs at least resemble first order logic” clarified a lot of suspect proofs. But it got in the way of intuition, and excessive teaching by means of proofs made it harder for students to learn.

    • The definition of a concept can make you think of things that are foreign to your intuition of the concept. A function is a mapping. The ordered pairs are a secondary construction; you should not think of ordered pairs as essential to your intuition. Even so, the definition of function in terms of ordered pairs got rid of a lot of cobwebs.
    • The cartesian product of sets is obviously an associative binary operation. Except that if you define the cartesian product of sets in terms of ordered pairs then it is not associative.
    • Not only that, but if you define the ordered pair $(a,b)$ as $\{\{a,b\},a\}$ then you have to say that $a$ is an element of $(a,b)$ but $b$ is not. That is not merely an inconvenient definition of ordered pair, it is wrong. It is not a bad way to show that the concept of ordered pair is consistent with ZF set theory, but that is a minor point mathematicians hardly ever worry about.

    Mathematical methods applied to everything

    The early methods described at the beginning of this post began to be used everywhere in math.

    Algorithms on symbols

    Algorithms, or methodical procedures, began with the addition and multiplication algorithms and Euclid’s ruler and compass constructions, but they began to be used everywhere.

    They are applied to the symbols of math, for example to describe rules for calculating derivatives and integrals and for summing infinite series.

    Algorithms are used on strings, arrays and diagrams of math symbols, for example concatenating lists, multiplying matrices, and calculating binary operations on trees.

    Algorithms as definitions

    Algorithms are used to define the strings that make up the notation of symbolic logic. Such definitions include something like: “If $E$ and $F$ are expressions then $(E)\land (F)$ and $(\forall x)(E)$ are expressions”. So if $E$ is “$x\geq 3$” then $(\forall x)(x\geq 3)$ is an expression. This had the effect of turning an expression in symbolic logic into a mathematical object. Deduction rules such as “$E\land F\vdash E$” also become mathematical objects in this way.

    We can define the symbols and expressions of algebra, calculus, and other parts of math using algorithms, too. This became a big deal when computer algebra programs such as Mathematica came in.

    Example

    You can define the set $RP$ of real polynomials this way:

    • $0\in RP$
    • If $p\in RP$ then $p+r x^n\in RP$, where $r$ is a real number, $x$ is a variable and $n$ a nonnegative integer.

    That is a recursive definition. You can also define polynomials by pattern recognition:

    Let $n$ be a positive integer, $a_0, a_1,\ldots, a_n$ be real numbers and $k_0, k_1,\ldots, k_n$ be nonnegative integers. Then $a_0 x^{k_0}+a_1 x^{k_1}+\cdots+ a_n x^{k_n}$ is a polynomial.

    The recursive version is a way of letting a compiler discover that a string of symbols is a polynomial. That sort of thing became a Big Deal when computers arrived in our world.

    Algorithms on mathematical objects

    I am using the word “algorithm” in a loose sense to mean any computation that may or may not give a result. Computer programs are algorithms, but so is the quadratic formula. You might not think of a formula as an algorithm, but that is because if you use it in a computer program you just type in the formula; the language compiler has a built-in algorithm to execute calculations given by formulas.

    It has not been clearly understood that mathematicians apply algorithms not only to symbols, but also directly to mathematical objects. Plato thought that way long ago, as I described in the construction of a midpoint above. The procedure says “draw circles centered at the endpoints of the line segment.” It doesn’t say “draw pictures of circles…”

    In the last section and this one, I am talking about how we think of applying an algorithm. Plato thought he was talking about ideal lines and circles that exist in some other universe that we can access by thought. We can think about them as real things without making a metaphysical claim like Plato did about them. Our brains are wired to think of abstract ideas in many of the same ways we think about physical objects.

    Example

    The unit circle (as a topological space at least) is the quotient space of the space $\mathbb{R}$ of real numbers mod the equivalence relation defined by: $x\sim y$ if and only if $x-y$ is an integer.

    Mathematicians who understand that construction may have various images in their mind when they read this. One would be something like imagining the real line $\mathbb{R}$ and then gluing all the points together that are an integer apart. This is a distinctly dizzying thing to think about but mathematicians aren’t worried because they know that taking the quotient of a space is a well-understood construction that works. They might check that by imagining the unit circle as the real line wrapped around an infinite number of times, with points an integer apart corresponding to the same point on the unit circle. (When I did that check I hastily inserted the parenthetical remark saying “as a topological space” because I realized the construction doesn’t preserve the metric.) The point of this paragraph is that many mathematicians think of this construction as a construction on math objects, not a construction on symbols.
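One way to make the identification concrete (my own sketch) is the map $x\mapsto(\cos 2\pi x,\sin 2\pi x)$, which sends $x$ and $x+n$ to the same point of the unit circle for every integer $n$:

```python
import math

def to_circle(x):
    """Map a real number to the unit circle; x and x + n (n an integer)
    land on the same point, realizing the quotient R / ~."""
    return (math.cos(2 * math.pi * x), math.sin(2 * math.pi * x))

a = to_circle(0.25)
b = to_circle(3.25)   # differs from 0.25 by an integer
print(all(abs(u - v) < 1e-9 for u, v in zip(a, b)))  # True
```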

    Everything is a mathematical object

    A lot of concepts start out as semi-vague ideas and eventually get defined as mathematical objects.

    Examples

    • A function was originally thought of as a formula, but it was later formalized, in the days of orthodoxy, as a set of ordered pairs with the functional property.
    • The concept of equation has been turned into a math object many times, for example in universal algebra and in logic. I suspect that some practitioners in those fields might disagree with me. This requires further research.
    • Propositions are turned into math objects by Boolean Algebra.
    • Perhaps numbers were always thought of as math objects, but much later the set $\mathbb{N}$ of all natural numbers and the set $\mathbb{R}$ of all real numbers came to be thought of explicitly as math objects, causing some mathematicians to have hissy fits.
    • Definitions are math objects. This has been done in various ways. A particular theory is a mathematical object, and a theory is, essentially, a definition (!): its models are what the theory defines. A particular example of “theory” is first-order theory, which was the gold standard in the orthodox era. A classifying topos is also a math object that is essentially a definition.
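The first bullet above, the orthodox definition of a function, is easy to make concrete. Here is a small Python sketch (my illustration, with hypothetical names) that represents a function as a set of ordered pairs and checks the functional property: no first coordinate is paired with two different second coordinates.

```python
def is_function(pairs):
    """Check the functional property of a set of ordered pairs:
    no first coordinate appears with two different second
    coordinates."""
    seen = {}
    for x, y in pairs:
        if x in seen and seen[x] != y:
            return False
        seen[x] = y
    return True

# The squaring function on {-2, 0, 1, 2} as a set of ordered pairs:
square = {(0, 0), (1, 1), (2, 4), (-2, 4)}

# A relation that pairs 1 with two different values is not a function:
not_a_function = {(1, 2), (1, 3)}

assert is_function(square)
assert not is_function(not_a_function)
```

Note that a *relation* (any set of ordered pairs) is the more general object; the functional property is exactly what singles functions out.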

    Category Theory

    The introduction of categories broke the orthodoxy of everything-is-a-set. It has become widely used as a language in many branches of math. It started with problems in homological algebra arising in at least the following ways:

    • Homotopy classes of continuous functions are not functions in the set theory sense. So we axiomatized the concept of function as an arrow (morphism) in a category.
    • The concept of mathematical object is axiomatized as an object in a category. This forces all properties of an object to be expressed in terms of its external relations with other objects and arrows.
    • Categories capture the idea of “kind of math”. There is a category of groups and homomorphisms, a category of topological spaces and continuous maps, and so on. This is a new level of abstraction. Before, if someone said “I work in finite groups”, their field was a clear idea and people knew what they were talking about, but now the category of finite groups is a mathematical object.
    • Homology maps one kind of math (topology) into another kind (algebra). Since categories capture the general notion of “kind of math”, we invented the idea of functor to capture the idea of modeling or representing one branch of math in another one. So Homology became a mathematical object.
    • The concept of functor allowed the definition of natural transformation as a mathematical object. Before categories, naturality was only an informal idea.

    Advantages of category theory

    • Categories, in the form of toposes, quickly became candidates to replace set theory as a foundation system for math. They are more flexible and allow the kind of logic you want to use (classical, intuitionistic and others) to be a parameter in your foundational system.
    • “Arrow” (morphism) replaced not only the concept of function but also the concept of “element of” (“$\in$”). It allows the concept of variable elements. (This link is to a draft of a section of abstractmath.org that has not been included in the table of contents yet.) It also requires that an “element” be an element of one single object; for example, the maps $1\to \mathbb{N}$ and $1\to \mathbb{R}$ that pick out the number $42$ are not the same map, although of course they are related by the canonical inclusion map $\mathbb{N}\to\mathbb{R}$.
    • Diagrams are used in proofs and give much more immediate understanding than formulas written as strings, which unnecessarily compress many ideas into dense notation that requires knowing lots of conventions and holding things in your memory.
    • Categories-as-kinds-of-math makes it easy to turn an analogy, for example between products of groups and products of spaces, into two examples of the same kind of mathematical object: Namely, a product in a category.
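The last bullet can be illustrated in the category of sets, where the categorical product is the Cartesian product. The sketch below (my own, in Python, with hypothetical names) shows the universal property: given maps $f\colon C\to A$ and $g\colon C\to B$, there is a unique mediating map $h\colon C\to A\times B$ whose composites with the two projections recover $f$ and $g$. The same universal property, stated in the category of groups or of spaces, yields the product group or the product space.

```python
def product_mediator(f, g):
    """Given f: C -> A and g: C -> B, return the unique map
    h: C -> A x B with proj1 . h == f and proj2 . h == g,
    namely h(c) = (f(c), g(c))."""
    return lambda c: (f(c), g(c))

def proj1(p):  # first projection A x B -> A
    return p[0]

def proj2(p):  # second projection A x B -> B
    return p[1]

f = lambda n: n * n   # C = integers, A = integers
g = str               # C = integers, B = strings

h = product_mediator(f, g)

# The two triangles of the product diagram commute:
assert proj1(h(3)) == f(3)
assert proj2(h(3)) == g(3)
```

That `h` is *unique* is the point: any map into the product agreeing with $f$ and $g$ after projecting must send $c$ to $(f(c),g(c))$.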

    Disadvantages of category theory

    • Category theory requires a new way of thinking. Some people think that is a disadvantage. But genuine innovation is always disruptive. New technology breaks old technology. Of course, the new technology has to turn out to be useful to win out.
    • Category theory has several notions of “equal”. Objects can be the same or isomorphic. Categories can be isomorphic or equivalent. When you are doing category theory, you should never worry about whether two objects are equal: that is considered evil. Category theorists mostly ignored the fuzziness of this problem because in practice you can get away with it. Still, it was an example of something that had not been turned into a mathematical definition. Two ways of accomplishing this are anafunctors and homotopy type theory.

    I expect to write about homotopy type theory soon. It may be the Next Revolution.
