Produced by Charles Wells. Revised 2017-01-08.

An abstraction of a concept $C$ is a concept $C'$ with these properties:

- $C'$ includes all instances of $C$ and
- $C'$ is constructed by taking as axioms certain assertions that are true of all instances of $C$.

There are two major situations where abstraction is used in math.

- $C$ may be a familiar concept or property that has not yet been given a math definition.
- $C$ may already have a mathematical definition using axioms. In that case the abstraction will be a generalization of $C$.

In both cases, the math definition may allow instances of $C'$ that were not originally thought of as being part of $C$.

Mathematicians have made use of **relations** between math objects since antiquity.

- For real numbers $r$ and $s$, "$r\lt s$" means that $r$ is less than $s$. So the statement "$5\lt 7$" is true, but the statement "$7\lt 5$" is false. We say that "$\lt$" is a **relation on the real numbers.** Other relations on real numbers denoted by symbols are "$=$" and "$\leq$".
- Suppose $m$ and $n$ are positive integers. $m$ and $n$ are said to be **relatively prime** if the greatest common divisor of $m$ and $n$ is $1$. So $5$ and $7$ are relatively prime, but $15$ and $21$ are not relatively prime. Being relatively prime is therefore a **relation on positive integers.** This is a relation that does not have a commonly used symbol.
- The concept of congruence of triangles has been used for a couple of millennia. In recent centuries it has been denoted by the symbol "$\cong$". Congruence is a **relation on triangles.**

One could say that a relation is a true-or-false statement that can be made about a pair of math objects of a certain type. Logicians have in fact made that a formal definition. But when set theory came to be used around 100 years ago as a basis for all definitions in math, we started using this definition:

**A relation on a set $S$ is a set $\alpha$ of ordered pairs of elements of $S$.**

"$\alpha$" is the Greek letter alpha.

The idea is that if $s$ is related by $\alpha$ to $t$, then $(s,t)$ is an element of $\alpha$, and if $s$ is *not* related by $\alpha$ to $t$, then $(s,t)$ is not an element of $\alpha$. That abstracts the everyday concept of relationship by focusing on the property that a relation either *holds or doesn’t hold between two given objects.*

For example, the less-than relation on the set of all real numbers $\mathbb{R}$ *is* the set \[\alpha:=\{(r,s)|r\in\mathbb{R}\text{ and }s\in\mathbb{R}\text{ and }r\lt s\}\] In other words, $r\lt s$ *if and only if* $(r,s)\in \alpha$.

A consequence of this definition is that *any set of ordered pairs is a relation.* Example: Let $\xi:=\{(2,3),(2,9),(9,1),(9,2)\}$. Then $\xi$ is a relation on the set $\{1,2,3,9\}$. Your reaction may be: What relation IS it? Answer: just that set of ordered pairs. You know that $2\,\xi\,3$ and $2\,\xi\,9$ hold, for example, but $3\,\xi\,2$ is false. There is no other definition of $\xi$.
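If you like code, the definition of $\xi$ can be written down directly. Here is a minimal Python sketch (the names `xi` and `related` are my own, not standard):

```python
# The relation xi, literally a set of ordered pairs.
xi = {(2, 3), (2, 9), (9, 1), (9, 2)}

def related(s, t):
    """True exactly when the pair (s, t) is an element of xi."""
    return (s, t) in xi

print(related(2, 3))  # True:  2 xi 3 holds
print(related(3, 2))  # False: (3, 2) is not in xi
```

There is nothing more to the relation than this set membership test.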

Yes, the relation $\xi$ is weird. It is an arbitrary definition. It does not have any verbal description other than listing the elements of $\xi$. It is probably useless. Live with it.

The symbol "$\xi$" is a Greek letter. It looks weird, so I used it to name a weird relation. Its upper case version is "$\Xi$", which is even weirder. I pronounce "$\xi$" as "ksee" but most mathematicians call it "si" or "zi" (rhyming with "pie").

Defining a relation as any old set of ordered pairs is an example of a reconstructive generalization.

Years ago, mathematicians started coming up with things that were like relations but which involved *more than two elements* of a set.

Let $r$, $s$ and $t$ be real numbers. We say that "$s$ is between $r$ and $t$" if $r\lt s$ and $s\lt t$. Then **betweenness** is a relation that is true or false about *three* real numbers.

Mathematicians now call this a **ternary relation.** The abstract definition of a ternary relation is this: A **ternary relation** on a set $S$ is a set of ordered triples of elements of $S$. This is a reconstructive generalization of the concept of relation: ordered pairs of elements are replaced by ordered triples.

In the case of betweenness, we have to decide on the ordering. Let us say that the betweenness relation holds for the triple $(r,s,t)$ if $r\lt s$ and $s\lt t$. So $(4,5,7)$ is in the betweenness relation and $(4,7,5)$ is not.
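The betweenness relation can be spelled out in code. A small Python sketch (restricted to a finite set so the relation can actually be listed; the names are mine):

```python
def between(r, s, t):
    """The betweenness condition for the triple (r, s, t): r < s and s < t."""
    return r < s and s < t

# The ternary relation itself, as a set of ordered triples,
# restricted here to a small finite set of real numbers.
S = {4, 5, 7}
betweenness = {(r, s, t) for r in S for s in S for t in S if between(r, s, t)}

print((4, 5, 7) in betweenness)  # True:  4 < 5 and 5 < 7
print((4, 7, 5) in betweenness)  # False: 7 < 5 fails
```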

You could argue that in the sentence, "$s$ is between $r$ and $t$", the $s$ comes first, so that we *should* say that the betweenness relation (meaning $r$ is between $s$ and $t$) holds for $(r,s,t)$ if $s\lt r$ and $r\lt t$. Well, when you write an article you can write it that way. But I am writing this article.

Nowadays we talk about $n$-ary relations for any positive integer $n$. One consequence of this is that if we want to talk just about sets of ordered pairs we must call them **binary relations.**

When I was a child there was only one kind of guitar and it was called "a guitar". (My older cousin Junior had a guitar, but I had only a plastic ukulele.) Some time in the fifties, electrically amplified guitars came into being, so we had to refer to the original kind as "acoustic guitars". I was a teenager when this happened, and being a typical teenager, I was completely contemptuous of the adults who reacted with extreme irritation at the phrase "acoustic guitar".

The **axiomatic method** is a technique for studying math objects of some kind by formulating them as a type of math structure. You take some basic properties of the kind of structure you are interested in and set them down as axioms, then deduce other properties (that you may or may not have already known) as theorems. The point of doing this is *to make your reasoning and all your assumptions completely explicit.*

Nowadays research papers typically state and prove their theorems in terms of math structures defined by axioms, although a particular paper may not mention the axioms but merely refer to other papers or texts where the axioms are given. For some common structures such as the real numbers and sets, the axioms are not only *not* referenced, but the authors clearly don’t even think about them in terms of axioms: they use commonly-known properties (of real numbers or sets, for example) without reference.

Typically when using the axiomatic method some of these things may happen:

- You discover that there are other examples of this system that you hadn’t previously known about. This makes the axioms more broadly applicable.
- You discover that some properties that your original examples had *don’t* hold for some of the new examples. Depending on your research goals, you may then add some of those properties to the axioms, so that the new examples are not examples any more.
- You may discover that some of your axioms follow from others, so that you can omit them from the system.

A continuous function (from the set of real numbers to the set of real numbers) is sometimes described as a function whose graph you can draw without lifting your chalk from the board. This is a physical description, not a mathematical definition.

In the nineteenth century, mathematicians talked about continuous functions but became aware that they needed a rigorous definition. One possibility was functions given by formulas, but that didn’t work: some formulas give discontinuous functions and they couldn’t think of formulas for some continuous functions.

This description of nineteenth century math is an oversimplification.

Cauchy produced the definition we now use (the epsilon-delta definition) which is a rigorous mathematical version of the no-lifting-chalk idea and which included the functions they thought of as continuous.
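For reference, the modern form of the definition says: $f$ is **continuous at $a$** if \[\text{for every }\varepsilon \gt 0\text{ there is a }\delta \gt 0\text{ such that for all }x,\ |x-a|\lt \delta \implies |f(x)-f(a)|\lt \varepsilon.\] (This is the statement as usually given today, not a quotation from Cauchy.)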

To their surprise, some clever mathematicians produced examples of some weird continuous functions that you *can’t* draw, for example the sine blur function. In the terminology of the discussion of abstraction above, the abstraction $C'$ (epsilon-delta continuous functions) had functions in it that were not in $C$ (no-chalk-lifting functions). On the other hand, their definition now applied to functions between some spaces besides the real numbers, for example the complex numbers, for which drawing the graph without lifting the chalk doesn’t even make sense.

Suppose you are studying the algebraic properties of numbers. You know that addition and multiplication are both associative operations and that they are related by the **distributive law:** $x(y+z)=xy+xz$. Both addition and multiplication have identity elements ($0$ and $1$) and satisfy some other properties as well: addition forms a commutative group for example, and if $x$ is any number, then $0\cdot x=0$.

One way to approach this problem is to write down some of these laws as axioms on a set with two binary operations without assuming that the elements are numbers. In doing this, you are abstracting some of the properties of numbers.

Certain properties such as those in the first paragraph of this example were chosen to define a type of math structure called a **ring.** (The precise set of axioms for rings is given in the **Wikipedia article.**)

You may then prove theorems about rings *strictly by logical deduction from the axioms without calling on your familiarity with numbers.*
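If you want to experiment, here is a small Python sketch (my own illustration, not part of the standard definition) that checks several of the ring axioms by brute force for the integers mod $4$:

```python
from itertools import product

# The integers mod 4, with addition and multiplication taken mod 4.
R = range(4)

def add(x, y):
    return (x + y) % 4

def mul(x, y):
    return (x * y) % 4

# Check a few of the ring axioms exhaustively over all elements.
assert all(add(x, y) == add(y, x) for x, y in product(R, R))   # addition is commutative
assert all(add(add(x, y), z) == add(x, add(y, z))
           for x, y, z in product(R, R, R))                    # addition is associative
assert all(mul(mul(x, y), z) == mul(x, mul(y, z))
           for x, y, z in product(R, R, R))                    # multiplication is associative
assert all(mul(x, add(y, z)) == add(mul(x, y), mul(x, z))
           for x, y, z in product(R, R, R))                    # distributive law
assert all(add(0, x) == x and mul(1, x) == x for x in R)       # 0 and 1 are identities
print("all checked axioms hold for the integers mod 4")
```

An exhaustive check like this works only for finite structures, of course; for rings in general you must reason from the axioms.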

When mathematicians did this, the following events occurred:

- They discovered systems such as matrices whose elements are not numbers but which obey most of the axioms for rings.
- Although multiplication of numbers is commutative, multiplication of matrices is not commutative.
- Now they had to decide whether to require commutativity of multiplication as an axiom for rings or not. In this example, historically, mathematicians decided *not* to require multiplication to be commutative, so (for example) the set of all $2\times 2$ matrices with real entries is a ring.
- They then defined a **commutative ring** to be a ring in which multiplication *is* commutative.
- You can prove from the axioms that in any ring, $0x=0$ for all $x$, so you don’t need to include it as an axiom.

So the name "commutative ring" means the *multiplication* is commutative, because addition in rings is always commutative. Mathematical names are not always transparent.
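The matrix phenomenon is easy to check directly. A small Python sketch (the particular matrices are my choice):

```python
def matmul(a, b):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 1],
     [0, 1]]
B = [[1, 0],
     [1, 1]]

print(matmul(A, B))  # [[2, 1], [1, 1]]
print(matmul(B, A))  # [[1, 1], [1, 2]]  -- a different matrix, so AB != BA
```

One pair of matrices with $AB\neq BA$ is enough to show that matrix multiplication is not commutative.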

Nowadays, all math structures are defined by axioms.

- Historically, the first example of something like the axiomatic method is Euclid’s axiomatization of geometry. The axiomatic method began to take off in the late nineteenth century and now is a standard tool in math. For more about the axiomatic method see the Wikipedia article.
- Partitions and equivalence relations are two other concepts that have been axiomatized. Remarkably, although the axioms for the two types of structures are quite different, every partition determines an equivalence relation in exactly one way, and every equivalence relation determines a partition in exactly one way.
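The two-way correspondence between partitions and equivalence relations can be written out concretely. A Python sketch (the function names are mine) that converts a partition to its equivalence relation and back:

```python
def partition_to_relation(blocks):
    """The equivalence relation a partition determines: all pairs within one block."""
    return {(x, y) for block in blocks for x in block for y in block}

def relation_to_partition(rel, S):
    """The partition an equivalence relation determines: its equivalence classes."""
    return {frozenset(y for y in S if (x, y) in rel) for x in S}

S = {1, 2, 3, 4}
blocks = [{1, 2}, {3}, {4}]
rel = partition_to_relation(blocks)
print(sorted(rel))  # the six pairs (1,1), (1,2), (2,1), (2,2), (3,3), (4,4)

# Converting back recovers the original partition.
print(relation_to_partition(rel, S) == {frozenset({1, 2}), frozenset({3}), frozenset({4})})  # True
```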

Many articles on the web about the axiomatic method emphasize the representation of the axiom system as a formal logical theory (formal system). Mathematicians in practice create and use a particular axiom system as a tool for research and understanding, and state and prove theorems of the system in semi-formal narrative form rather than in formal logic.

To **generalize** a mathematical concept $C$ is to find a concept $C'$ with the property
that *all* instances of $C$ are also instances of $C'$.

a) $\mathbb{R}^n$, for arbitrary positive integer $n$, is a generalization of $\mathbb{R}^2$. One replaces the ordered pairs in $\mathbb{R}^2$ by ordered $n$-tuples, and much of the arithmetic and spatial structure (except for the representation of $\mathbb{R}^2$ using complex numbers) and even some of our intuitions carry over to the more general case.

b) The concept of abstract vector space is a generalization of ${\mathbb{R}}^n$. To get it you forget that the elements of ${\mathbb{R}^n}$ are $n$-tuples of real numbers and you produce a different, more abstract definition obtained by finding properties of $\mathbb{R}^n$ such as the existence of addition and scalar multiplication, and so on, and using these properties as axioms for the new concept of vector space.

Example (a) is an example of an **expansive generalization**, obtained by changing a datum in the definition of a concept (the dimension $2$ in this case) into a parameter.

Example (b) is obtained by taking the concept and reconstructing it, producing a very different-looking definition that includes all the original examples and others as well. This is **reconstructive generalization**.

Both expansive and reconstructive generalizations, if done correctly, take a concept and introduce another concept that includes all the examples of the original concept and (in general) others as well. These are legitimate generalizations.

It can happen that all the examples you know of a specific concept have a certain property $P$ not required by the definition. If you conclude from this that all the examples of the concept have property $P$, you are engaging in **generalization from examples**.


**MYTH:** The elements of a sequence are never equal to the limit of the sequence. Many newcomers to abstract math believe this because in the typical examples you see of limits of sequences, the elements of the sequence in fact are never equal to the limit. For example, \[\underset{n\to \infty }{\mathop{\lim }}\,\frac{1}{n}=0,\] but $1/n$ is never equal to $0$.

On the other hand, \[\underset{n\to \infty }{\mathop{\lim }}\,\frac{\sin \left(\frac{\pi n}{4}\right)}{n}=0,\] but every fourth term is zero. The sequence starts out like this:

\[\frac{1}{\sqrt{2}},\,\frac{1}{2},\,\frac{1}{3 \sqrt{2}}\, ,\,0,-\frac{1}{5 \sqrt{2}}\, ,\,-\frac{1}{6},\,-\frac{1}{7 \sqrt{2}}\, ,\,0,\,\frac{1}{9 \sqrt{2}}\, ,\,\frac{1}{10},\,\frac{1}{11 \sqrt{2}}\, ,\,0,\,-\frac{1}{13 \sqrt{2}}\, ,\,-\frac{1}{14},\,-\frac{1}{15 \sqrt{2}}\, ,\,0\]

The names "expansive" and "reconstructive" are due to Harel and Tall.
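You can compute the terms yourself. A quick Python sketch (using floating point, so "zero" here means zero up to rounding error):

```python
import math

def term(n):
    """The n-th term sin(pi*n/4)/n of the sequence."""
    return math.sin(math.pi * n / 4) / n

# Every fourth term is zero (up to floating-point rounding),
# yet the sequence converges to 0 anyway.
for n in (4, 8, 12, 16):
    assert abs(term(n)) < 1e-12

print(term(1))  # about 0.7071, i.e. 1/sqrt(2)
```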

Once you acquire an insight, you may not be able to understand how someone else can't understand it. It becomes obvious, or trivial to prove. That is the **ratchet effect.**

“How did you know that \[{{\left(\frac{{{x}^{3}}-10}{3{{e}^{-x}}+1} \right)}^{6}}\] is never negative?”

“Because it is a sixth power, so it is a square (of a cube of a number), and the square of a number is never negative.”

(Smiting forehead) “Of course, why didn’t I see that!”

This observation stays in your head. You can’t forget it. Your neurons have been permanently changed and you can’t go back and regain your former state of non-understanding!

This is an example of pattern recognition: ignoring the details and seeing \[{{\left(\frac{{{x}^{3}}-10}{3{{e}^{-x}}+1} \right)}^{6}}\] as \[\left(\left(\text{some stuff}\right)^3\right)^2\] See also chunking.

*Mathematicians can be quite cruel once they know how to do something and they enjoy seeing you squirm as you struggle to catch on.* –Marcus du Sautoy

“How could you NOT see that \[{{\left( \frac{{{x}^{3}}-10}{3{{e}^{-x}}+1}\right)}^{6}}\] is never negative?! It’s OBVIOUS that it’s never negative. It’s a SIXTH POWER YOU DUMMY!”

It is distressingly common that a mathematician for whom a concept has become obvious because of the ratchet effect will then tell someone else that the concept is obvious or **trivial,** leaving them to feel **put down.** It is one thing for mathematicians to do this to each other; that is a kind of normal competitiveness among members of a tribe. I believe behaving this way to non-mathematicians is a major contribution to the dislike and fear of math that many people have.

When you are explaining abstract math to non-mathematicians

DO NOT DUMP on those who are baffled

because they haven’t been RATCHETED UP yet.

This work is licensed under a Creative Commons Attribution-ShareAlike 2.5 License.