Category Archives: category theory

A proof by diagram chasing



In Rigorous proofs, I went through a medium-easy epsilon-delta proof in great detail as a way of showing what is hidden by the wording of the proof. In this post, I will do the same for an easy diagram-chasing proof in category theory. This theorem is stated and proved in Category Theory for Computing Science, page 365, but the proof I give here maximizes the diagram-chasing as a way of illustrating the points I want to make.

Theorem (J. Lambek) Let $F$ be a functor from a category to itself and let $\alpha:Fa\to a$ be an algebra for $F$ which is initial. Then $\alpha$ is an isomorphism.

Proof

  1. $F\alpha:FFa\to Fa$ is also an $F$-algebra.
  2. Initiality means that there is a unique algebra morphism $\eta:a\to Fa$ from $\alpha:Fa\to a$ to $F\alpha:FFa\to Fa$ for which this diagram commutes:



  3. To that diagram we can adjoin another (obviously) commutative square:



  4. Then the outside rectangle in the diagram above also commutes.
  5. This means that $\alpha\circ\eta:a\to a$ is an $F$-algebra morphism from $\alpha:Fa\to a$ to itself.
  6. Another such $F$-algebra morphism is $\text{id}_a$.
  7. Initiality of $\alpha$ means that the diagram below commutes:



  8. Because the upper bow and the left square both commute we are justified in inserting a diagonal arrow as below.



  9. Now we can read off the diagram that $\eta\circ\alpha=F\alpha\circ F\eta=\text{id}_{Fa}$ and that $\alpha\circ\eta=\text{id}_a$. By definition, then, $\eta$ is a two-sided inverse to $\alpha$, so $\alpha$ is an isomorphism.
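
For readers who would rather check the chase with equations, here is the same argument written out in composite form; this summary is mine and is not the wording in the book:

\begin{align*}
\eta\circ\alpha &= F\alpha\circ F\eta &&\text{(the square in (2) commutes)}\\
\alpha\circ\eta &= \text{id}_a &&\text{(initiality: both are algebra morphisms from $\alpha$ to itself)}\\
\eta\circ\alpha &= F(\alpha\circ\eta) = F(\text{id}_a) = \text{id}_{Fa} &&\text{(apply $F$ to the second line and use the first)}
\end{align*}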

Analysis of the proof

This is an analysis of the proof showing what is not mentioned in the proof, similar to the analysis in Rigorous proofs.

  • An $F$-algebra is any arrow of the form $\alpha:Fa\to a$. This definition directly verifies statement (1). You do need to know the definition of “functor” and that the notation $Fa$ means $F(a)$ and $FFa$ means $F(F(a))$.
  • When I am chasing diagrams, I visualize the commutativity of the diagram in (2) by thinking of the red path and the blue path as having the same composites in this graph:





    In other words, $F\alpha\circ F\eta=\eta\circ\alpha$. Notice that the diagram carries all the domain and codomain information for the arrows, whereas the formula “$F\alpha\circ F\eta=\eta\circ\alpha$” requires you to hold the domains and codomains in your head.

  • (Definition of morphism of $F$-algebras) The reader needs to know that a morphism of $F$-algebras is any arrow $\delta:c\to d$ for which




    commutes.
  • (Definition of initial $F$-algebra) $\alpha$ is an initial $F$-algebra means that for any algebra $\beta:Fb\to b$, there is a unique arrow $\delta$ for which the diagram above commutes.
  • (2) is justified by the last two definitions.
  • Pulling a “rabbit out of a hat” in a proof means introducing something that is obviously correct with no motivation, and then checking that it results in a proof. Step (9) in the proof given in Rigorous proofs has an example of adding zero cleverly. It is completely OK to pull a rabbit out of a hat in a proof, as long as the result is correct, but it makes students furious.
  • In statement (3) of the proof we are considering here, the rabbit is the trivially commutative diagram that is adjoined on the right of the diagram from (2).
  • Statement (4) uses a fact known to all diagram chasers: Two joined commutative squares make the outside rectangle commute. You can visualize this by seeing that the three red paths shown below all have the same composite. When I am chasing a complicated diagram I trace the various paths with my finger, or in my head.



    You could also show it by pointing out that $\alpha\circ F\alpha\circ F\eta=\alpha\circ\eta\circ\alpha$, but to check that I think most of us would go back and look at the diagram in (3) to see why it is true. Why not work directly with the diagram?

  • The definition of initiality requires that there be only one $F$-algebra morphism from $\alpha:Fa\to a$ to itself. This means that the upper and lower bows in (7) commute.
  • The diagonal identity arrow in (8) is justified by the fact that the upper bow in (7) is exactly the same diagram as the upper triangle in (8). It follows that the upper triangle in (8) commutes. I visualize this as moving the bow down and to the left with the upper left node $Fa$ as a hinge, so that the two triangles coincide. (It needs to be flipped, too.) I should make an interactive diagram that shows this.
  • The lower triangle in (8) also commutes because the square in (2) is given to be commutative.
  • (Definition of isomorphism in a category) An arrow $f:a\to b$ in a category is an isomorphism if there is an arrow $g:b\to a$ for which these diagrams commute:




    This justifies statement (9).

Remark: I have been profligate in using as many diagrams as I want because this can be seen on a screen instead of on paper. That, together with the fact that diagrams make the domain and codomain data visible in a way that equations involving composition do not, means that the proof requires the readers to carry much less invisible data in their heads.


Monads for High School III: Algebras

The interactive examples in this post require installing Wolfram CDF player, which is free and works on most desktop computers using Firefox, Safari and Internet Explorer, but not Chrome. The source code is the Mathematica Notebook MonadAlg.nb, which is available for free use under a Creative Commons Attribution-ShareAlike 2.5 License. The notebook can be read by CDF Player if you cannot make the embedded versions in this post work.

This is a continuation of Monads for high school I and Monads for High School II: Lists. This post covers the concept of algebras for the monad for lists.

Lists

$\textrm{Lists}(S)$ is the set of all lists of finite length whose entries are elements of $S$.

  • $\boxed{2\; 2\; 4}$ is the way I denote the list of length $3$ whose first and second entries are each $2$ and whose third entry is $4$.
  • A list with only one entry, such as $\boxed{2}$, is called a singleton list.
  • The empty list $\boxed{\phantom{2}}$ has no entries.
  • $\textrm{Lists}^*(S)$ is the set of all nonempty lists of finite length whose entries are elements of $S$.
  • $\textrm{Lists}(\textrm{Lists}(S))$ is the set of all lists whose entries are lists with entries from $S$.
  • For example, $\boxed{\boxed{5\; 7}\; \boxed{2\; 12\; 7}}$ and $\boxed{\boxed{5\; 7\; 2\; 12\; 7}}$ are both entries in $\textrm{Lists}^*(\textrm{Lists}^*(\mathbb{Z}))$. The second one is a singleton list!
  • $\boxed{\boxed{\phantom{3}}\; \boxed{2}}$ and $\boxed{\boxed{\phantom{3}}}$ are entries in $\textrm{Lists}^*(\textrm{Lists}(\mathbb{Z}))$.
  • The empty list $\boxed{\phantom{2}}$ is an entry in $\textrm{Lists}(\mathbb{Z})$, in $\textrm{Lists}(\textrm{Lists}^*(\mathbb{Z}))$ and in $\textrm{Lists}(\textrm{Lists}(\mathbb{Z}))$. If you have stared at this for more than ten minutes, do something else and come back to it later.

The star notation is used widely in math and computing science to imply that you are including everything except some insignificant shrimp of a thing such as the empty list, the empty set, or $0$. For example, $\mathbb{R}^*$ denotes the set of all nonzero real numbers.

More details about lists are in Monads for High School II: Lists.

Join

The function join (or concatenation) takes two lists and creates a third list. For example, if you join $\boxed{5\; 7}$ to $\boxed{2\; 12\; 7 }$ in that order you get $\boxed{5\; 7\; 2\; 12\; 7}$.

  • I will use this notation: join$\boxed{\boxed{5\; 7}\; \boxed{2\; 12\; 7}}=\boxed{5\; 7\; 2\; 12\; 7}$.
  • This notation means that I am regarding join as a function that takes a two-element list in $\textrm{Lists}(\textrm{Lists}(S))$ to an element of $\textrm{Lists}(S)$.
  • join removes one level of lists.
  • join is not commutative: join$\boxed{\boxed{2\; 12\; 7}\; \boxed{5\; 7}}=\boxed{2\; 12\; 7\; 5\; 7}$
  • Join is associative, and as for any associative binary operation, join is defined on any finite list of lists of elements of $S$. So for example, join$\boxed{\boxed{5\; 7}\; \boxed{2\; 12\; 7}\; \boxed{1}}=\boxed{5\; 7\; 2\; 12\; 7\; 1}$.
  • For any single list $\boxed{a\; b\; c}$, join$\boxed{\boxed{a\; b\; c}}=\boxed{a\; b\; c}$. This is required to make the theory work. It is called the oneidentity property.
  • If the empty list $\boxed{\phantom{2}}$ occurs in a list of lists, it disappears when join is applied: join $\boxed{\boxed{2\; 3}\; \boxed{\phantom{2}}\; \boxed{4\; 5\; 6}}=\boxed{2\; 3\; 4\; 5\; 6}$.

More details about join in Monads for High School II: Lists.
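
As a very small illustration, here is a hedged Mathematica sketch of join; the helper name join is mine, not a built-in, and it simply applies the built-in Join to the entries of a list of lists:

```
(* join takes a list of lists and removes one level of list structure *)
join[lists_List] := Apply[Join, lists]

join[{{5, 7}, {2, 12, 7}}]      (* {5, 7, 2, 12, 7} *)
join[{{2, 3}, {}, {4, 5, 6}}]   (* {2, 3, 4, 5, 6}: the empty list disappears *)
join[{{a, b, c}}]               (* {a, b, c}: the oneidentity property *)
```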

The main monad diagram

When you have a list of lists of lists, join can be applied in two different ways, "inside" and "outside" as illustrated in the diagram below. It gives you several different inputs to try out as a way to understand what is happening.

This is the special case of the main diagram for all monads as it applies to the List monad.

As you can see, after doing either of "inside" and "outside", if you then apply join, you get the same list. That list is simply the list of entries in the beginning list (and the two intermediate ones) in the same order, disregarding groupings.

From what I have just written, you must depend on your pattern recognition abilities to learn what inside and outside mean. But both can also be described in words.

  • The lists outlined in black are lists of elements of $\mathbb{Z}$. In other words, they are elements of $\textrm{Lists}(\mathbb{Z})$.
  • The lists outlined in blue are lists of elements of $\textrm{Lists}(\mathbb{Z})$. In other words, they are lists of lists of elements of $\mathbb{Z}$. Those are the kinds of things you can apply join to.
  • The leftmost list in the diagram, outlined in green, is a list of elements of $\textrm{Lists}(\textrm{Lists}(\mathbb{Z}))$. This means you can apply join in two different ways:
  • Each list boxed in blue is a list of lists of integers (two of them are singletons!) so you can apply join to each of them. This is joining inside first.
  • You can apply join directly to the leftmost list, because it is a list of (blue) lists (its entries happen themselves to be lists, but forget that for the moment). This is joining outside first.

To understand this diagram, stare at it first: for most people that engages the visual pattern-recognition part of the brain (which uses over a fifth of the energy your brain consumes) to work out what inside and outside mean; then check your understanding by reading the verbal description. Starting by reading the verbal description first does not work as well for most people.
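
For readers who like to check such pictures by machine, here is a hedged Mathematica sketch of the main diagram on one example list of lists of lists; the names lll, insideFirst and outsideFirst are purely illustrative, and join is written directly as Apply[Join, …]:

```
lll = {{{5, 7}, {2, 12}}, {{1}, {}, {6, 6}}};   (* a list of lists of lists *)

insideFirst  = Apply[Join, Map[Apply[Join, #] &, lll]];   (* join each entry, then join the results *)
outsideFirst = Apply[Join, Apply[Join, lll]];             (* join the outside list, then join again  *)

insideFirst == outsideFirst   (* True: both are {5, 7, 2, 12, 1, 6, 6} *)
```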

The unit monad diagram

There is a second diagram, the unit diagram, for all monads:

The two right hand entries are always the same. Again, I am asking you to use your pattern recognition abilities to learn what singleton list and singleton each mean.
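
A hedged Mathematica check of the same point, writing join directly as Apply[Join, …]: making the whole list a singleton and joining, or making each entry a singleton and joining, gives back the original list.

```
Apply[Join, { {2, 2, 4} }]          (* {2, 2, 4}: join of the singleton list containing the list      *)
Apply[Join, Map[List, {2, 2, 4}]]   (* {2, 2, 4}: join of the list of singleton lists {{2},{2},{4}}   *)
```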

The main and unit monad diagrams will be used as axioms to give the general definition of monad. To give those axioms, we also need the concepts of functor and natural transformation, which I will define later after I have finished the monad algebra diagrams for Lists and several other examples.

Algebras for the List monad

If you have any associative binary operation on a set $S$, its definition can be extended to any nonempty list of elements (see Monads for High School I.)

Plus and Times are like that:

  • $(3+2)+4$ and $3+(2+4)$ have the same value $9$, so you can write $3+2+4$ and it means $9$ no matter how you calculate it.
  • I will be using the notation Plus$\boxed{3\; 2\; 4}$ instead of $3+2+4$.
  • Times is also associative, so for example we can write Times$\boxed{3\; 2\; 4}=24$.
  • Like join, we require that these operations satisfy oneidentity, so we know Plus$\boxed{3}=3$ and Times$\boxed{3}=3$.
  • When the associative binary operation has an identity element, you can also define its value on the empty list as the identity element: Plus$\boxed{\phantom{3}}=0$ and Times$\boxed{\phantom{3}}=1$. I recommend that you experiment with examples to see why it works.

An algebra for the List monad is a function $\textrm{algop}:\textrm{Lists}(S)\to S$ with certain properties: It must satisfy the Main Monad Algebra Diagram and the Unit Monad Algebra Diagram, discussed below.
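
Before drawing the diagrams, here is a hedged Mathematica sketch of Plus and Times regarded as functions on lists; it relies on Mathematica's built-in conventions that Plus[] is 0 and Times[] is 1:

```
Apply[Plus,  {3, 2, 4}]   (* 9  *)
Apply[Times, {3, 2, 4}]   (* 24 *)
Apply[Plus,  {3}]         (* 3: the oneidentity property  *)
Apply[Plus,  {}]          (* 0: value on the empty list   *)
Apply[Times, {}]          (* 1 *)
```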

The main monad algebra diagram

Example using Plus and Times

The following interactive diagram allows you to see what happens with Plus and Times. Afterwards, I will give the general definition.

Plus insides replaces each inside list with the result of applying Plus to it, and the other operation Join is the same operation I have used before.

Another example

The main monad algebra diagram requires that if you have a list of lists of numbers such as the one below, and you add up each list (Plus insides) and then add up the list of totals (top list in the diagram), you must get the same answer that you get when you join all the lists of numbers together into one list (bottom list in the diagram) and then add up that list.

This is illustrated by this special case of the main monad algebra diagram for Plus:
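
Here is the same special case as a hedged Mathematica check; ll is an illustrative list of lists of my own choosing, not the one in the interactive diagram:

```
ll = {{5, 7}, {2, 12, 7}, {1}};

Apply[Plus, Map[Apply[Plus, #] &, ll]]   (* 34: Plus insides, then Plus the totals *)
Apply[Plus, Apply[Join, ll]]             (* 34: join into one list, then Plus      *)
```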

General statement of the main monad algebra diagram

Suppose we have any function $\blacksquare:\textrm{Lists}(S)\to S$ for any set $S$.
If we want to give the main monad algebra diagram for $\blacksquare$ we have a problem. We know for example that Plus$\boxed{1\; 2}=3$. But for some elements $a$ and $b$ of $S$, we don’t know what $\blacksquare\boxed{a\; b}$ is. One way to write it is simply to write $\blacksquare\boxed{a\; b}$ (the usual way we write a function). Or we could use tree notation and write

[tree diagram for $\blacksquare\boxed{a\; b}$]

I will use tree notation mostly, but it is a good exercise to redraw the diagrams with functional notation.

Main monad diagram in prose

Below is a presentation of the general main monad algebra diagram using (gasp!) English phrases to describe the nodes.

[diagram: the general main monad algebra diagram, with the nodes described in English]

The unit monad algebra diagram

Suppose $\blacksquare$ is any function from $\textrm{Lists}(S)$ to $S$ for any set $S$. Then the diagram is

[diagram: the unit monad algebra diagram]

This says that if you apply $\blacksquare$ to a singleton you get the unique entry of the singleton. This is not surprising: I defined above what it means when you apply an operation to a singleton just so this would happen!

A particular example

These are specific examples of the general main monad algebra diagram for an arbitrary operation $\blacksquare$:

[diagram: first special case of the main monad algebra diagram for $\blacksquare$]

[diagram: second special case of the main monad algebra diagram for $\blacksquare$]

These examples show that if $\blacksquare$ is any function from $\textrm{Lists}(S)$ to $S$ for any set $S$, then

[tree diagram for $\blacksquare\boxed{\blacksquare\boxed{a\; b}\; c}$]

equals

[tree diagram for $\blacksquare\boxed{a\; b\; c}$]

and

[tree diagram for $\blacksquare\boxed{a\; \blacksquare\boxed{b\; c}}$]

equals

[tree diagram for $\blacksquare\boxed{a\; b\; c}$]

Well, according to some ancient Greek guy (things equal to the same thing are equal to each other), that means

[tree diagram for $\blacksquare\boxed{\blacksquare\boxed{a\; b}\; c}$]

equals

[tree diagram for $\blacksquare\boxed{a\; \blacksquare\boxed{b\; c}}$]

which says that $\blacksquare$, applied to two-entry lists (the tree diagram for $\blacksquare\boxed{a\; b}$), is an associative binary operation!

The mother of all associative operations

We also know that any associative binary operation $\blacksquare$ on any set $S$ can be extended to a function on all finite nonempty lists of elements of $S$. This is the general associative law and was discussed (without using that name) in Monads for High School I.

Let’s put what we’ve done together into one statement:

Every associative binary operation $\blacksquare$ on a set $S$ can be extended uniquely to a function $\blacksquare:\textrm{Lists}^*(S)\to S$ that satisfies both the main monad algebra diagram and the unit monad algebra diagram. Furthermore, any function $\blacksquare:\textrm{Lists}^*(S)\to S$ that satisfies both the main monad algebra diagram and the unit monad algebra diagram is an associative binary operation when applied to lists of length $2$ of elements of $S$.

That is why I claim that the NonemptyList monad is the mother of all associative binary operations.

I have not proved this, but the work in this and preceding posts provides (I think) a good intuitive understanding of this fundamental relationship between lists and associative binary operations.
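
One direction of the statement can be sketched in Mathematica. The helper name extend below is mine and purely illustrative: it turns a binary operation op into a function on nonempty lists by grouping to the right; for an associative operation, the general associative law says any other grouping gives the same value.

```
extend[op_][{x_}] := x                                  (* oneidentity *)
extend[op_][{x_, rest__}] := op[x, extend[op][{rest}]]  (* group to the right *)

extend[Max][{3, 9, 2, 7}]    (* 9  *)
extend[Plus][{3, 2, 4}]      (* 9  *)
extend[Times][{3, 2, 4}]     (* 24 *)
```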

Things to do in upcoming posts

  • I have to give a proper definition of monads using the concepts of functor and natural transformation. I expect to do this just for set functors, not mentioning categories.
  • Every type of binary operation that is defined by equations corresponds to a monad which is the mother of all binary operations of that type. I will give examples, but not prove the general case.

Other examples of monads

  • Associative binary operations on $S$ with identity element (monoids) correspond to all lists, including the empty list, with entries from $S$.
  • Commutative, associative and idempotent binary operations, like and and or in Boolean algebra, correspond to the set monad: $\text{Sets}(S)$ is the set of all finite and countably infinite sets of elements of $S$. (You can change the cardinality restrictions, but you have to have some cardinality restrictions.) Join is simply union.
  • Commutative and associative binary operations correspond to the multiset monad (with a proper definition of join) and appropriate cardinality restrictions. You have to fuss about identity elements here, too.
  • Various kinds of nonassociative operations get much more complicated, involving tree structures with equivalence relations on them. I expect to work out a few of them.
  • There are lots of monads in computing science that you never heard of (unless you are a computing scientist). I will mention a few of them.

  • Every type of binary operation defined by equations corresponds to a monad. But some of them are unsolvable, meaning you cannot describe the monad precisely.

There will probably be long delay before I get back to this project. There are too many other things I want to do!


Naming mathematical objects

Commonword names confuse

Many technical words and phrases in math are ordinary English words ("commonwords") that are assigned a different and precisely defined mathematical meaning.  

  • Group  This sounds to the "layman" as if it ought to mean the same thing as "set".  You get no clue from the name that it involves a binary operation with certain properties.
  • Formula  In some texts on logic, a formula is a precisely defined expression that becomes a true-or-false sentence (in the semantics) when all its variables are instantiated.  So $(\forall x)(x>0)$ is a formula.  The word "formula" in ordinary English makes you think of things like "$\textrm{H}_2\textrm{O}$", which has no semantics that makes it true or false — it is a symbolic expression for a name.
  • Simple group This has a technical meaning: a group with no nontrivial normal subgroup.  The Monster Group is "simple".  Yes, the technical meaning is motivated by the usual concept of "simple", but to say the Monster Group is simple causes cognitive dissonance.

Beginning students come with the (generally subconscious) expectation that they will pick up clues about the meanings of words from connotations they are already familiar with, plus things the teacher says using those words.  They think in terms of refining an understanding they already have.  This is more or less what happens in most non-math classes.  They need to be taught what definition means to a mathematician.

Names that don't confuse but may intimidate

Other technical names in math don't cause the problems that commonwords cause.

Named after somebody The phrase "Hausdorff space" leads a math student to understand that it has a technical meaning.  They may not even know it is named after a person, but it screams "geek word" and "you don't know what it means".  That is a signal that you can find out what it means.  You don't assume you know its meaning. 

New made-up words  Words such as "affine", "gerbe"  and "logarithm" are made up of words from other languages and don't have an ordinary English meaning.  Acronyms such as "QED", "RSA" and "FOIL" don't occur often.  I don't know of any math objects other than "RSA algorithm" that have an acronymic name.  (No doubt I will think of one the minute I click the Publish button.)  Whole-cloth words such as "googol" are also rare.  All these sorts of words would be good to name new things since they do not fool the readers into thinking they know what the words mean.

Both types of words avoid fooling the student into thinking they know what the words mean, but some students are intimidated by the use of words they haven't seen before.  They seem to come to class ready to be snowed.  A minority of my students over my 35 years of teaching were like that, but that attitude was a real problem for them.

Audience

You can write for several different audiences.

Math fans (non-mathematicians who are interested in math and read books about it occasionally) In my posts Explaining higher math to beginners and Renaming technical concepts I wrote about several books aimed at explaining some fairly deep math to interested people who are not mathematicians.  They renamed some things. For example, Mark Ronan in Symmetry and the Monster used the phrase "atom" for "simple group" presumably to get around the cognitive dissonance.  There are other examples in my posts.

Math newbies  (math majors and other students who want to understand some aspect of mathematics).  These are the people abstractmath.org is aimed at. For such an audience you generally don't want to rename mathematical objects. In fact, you need to give them a glossary to explain the words and phrases used by people in the subject area.   

Postsecondary math students These people, especially the math majors, have many tasks:

  • Gain an intuitive understanding of the subject matter.
  • Understand in practice the logical role of definitions.
  • Learn how to come up with proofs.
  • Understand the ins and outs of mathematical English, particularly the presence of ordinary English words with technical definitions.
  • Understand and master the appropriate parts of the symbolic language of math — not just what the symbols mean but how to tell a statement from a symbolic name.

It is appropriate for books for math fans and math newbies to try to give an understanding of concepts without necessarily proving theorems.  That is the aim of much of my work, which has more of an emphasis on newbies than on fans. But math majors also need the traditional emphasis on theorem and proof and clear correct explanations.

Lately, books such as Visual Group Theory have addressed beginning math majors, trying for much more effective ways to help the students develop good intuition, as well as getting into proofs and rigor. Visual Group Theory uses standard terminology.  You can contrast it with Symmetry and the Monster and The Mystery of the Prime Numbers (read the excellent reviews on Amazon) which are clearly aimed at math fans and use nonstandard terminology.  

Terminology for algebraic structures

I have been thinking about the section of Abstracting Algebra on binary operations.  Notice this terminology:

[table of terminology for algebraic structures: standard names in the left column, descriptive phrases in the middle column, and my invented names in the right column]

The "standard names" are those in Wikipedia.  They give little clue to the meaning, but at least most of them, except "magma" and "group", sound technical, cluing the reader in to the fact that they'd better learn the definition.

I came up with the names in the right column in an attempt to make some sense out of them.  The design is somewhat like the names of some chemical compounds.  This would be appropriate for a text aimed at math fans, but for them you probably wouldn't want to get into such an exhaustive list.

I wrote various pieces meant to be part of Abstracting Algebra using the terminology on the right, but thought better of it. I realized that I have been vacillating between thinking of AbAl as for math fans and thinking of it as for newbies. I guess I am plunking for newbies.

I will call groups groups, but for the other structures I will use the phrases in the middle column.  Since the book is for newbies I will include a table like the one above.  I also expect to use tree notation as I did in Visual Algebra II, and other graphical devices and interactive diagrams.

Magmas

In the sixties magmas were called groupoids or monoids, both of which now mean something else.  I was really irritated when the word "magma" started showing up all over Wikipedia. It was the name given by Bourbaki, but it is a bad name because it means something else that is irrelevant.  A magma is just any binary operation. Why not just call it that?  

Well, I will tell you why, based on my experience in Ancient Times (the sixties and seventies) in math. (I started as an assistant professor at Western Reserve University in 1965). In those days people made a distinction between a binary operation and a "set with a binary operation on it".  Nowadays, the concept of function carries with it an implied domain and codomain.  So a binary operation is a function $m:S\times S\to S$.  Thinking of a binary operation this way was just beginning to appear in the common mathematical culture in the late 60's, and at least one person remarked to me: "I really like this new idea of thinking of 'plus' and 'times' as functions."  I was startled and thought (but did not say), "Well of course it is a function".  But then, in the late sixties I was being indoctrinated/perverted into category theory by the likes of John Isbell and Peter Hilton, both of whom were briefly at Case Western Reserve University.  (Also Paul Dedecker, who gave me a glimpse of Grothendieck's ideas).

Now, the idea that a binary operation is a function comes with the fact that it has a domain and a codomain, and specifically that the domain is the Cartesian square of the codomain.  People who didn't think that a binary operation was a function had to introduce the idea of the universe (universal algebraists) or the underlying set (category theorists): you had to specify it separately and introduce terminology such as $(S,\times)$ to denote the structure.   Wikipedia still does it mostly this way, and I am not about to start a revolution to get it to change its ways.

Groups

In the olden days, people thought of groups in this way:

  • A group is a set $G$ with a binary operation denoted by juxtaposition that is closed on $G$, meaning that if $a$ and $b$ are any elements of $G$, then $ab$ is in $G$.
  • The operation is associative, meaning that if $a,\ b,\ c\in G$, then $(ab)c=a(bc)$.
  • The operation has a unity element, meaning an element $e$ for which for any element $a\in G$, $ae=ea=a$.
  • For each element $a\in G$, there is an element $b$ for which $ab=ba=e$.

This is a better way to describe a group:

  • A group consists of a nullary operation e, a unary operation inv,  and a binary operation denoted by juxtaposition, all with the same codomain $G$. (A nullary operation is a map from a singleton set to a set and a unary operation is a map from a set to itself.)
  • The value of e is denoted by $e$ and the value of inv$(a)$ is denoted by $a^{-1}$.
  • These operations are subject to the following equations, true for all $a,\ b,\ c\in G$:

     

    • $ae=ea=a$.
    • $aa^{-1}=a^{-1}a=e$.
    • $(ab)c=a(bc)$.

This definition makes it clear that a group is a structure consisting of a set and three operations whose axioms are all equations.  It was formulated by people in universal algebra but you still see the older form in texts.

The old form is not wrong, it is merely inelegant.  With the old form, you have to prove that the unity and inverses are unique before you can introduce notation, and more important, by making it clear that groups satisfy equational logic you get a lot of theorems for free: you construct products on the cartesian power of the underlying set, quotients by congruence relations, and other things. (Of course, in AbAl those theorems will be stated later than when groups are defined because the book is for newbies and you want lots of examples before theorems.)
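
To illustrate the equational point of view, here is a hedged Mathematica sketch that checks the three equations for one small example, the integers mod 2 under addition; the names e, inv and mult are mine, chosen to match the nullary, unary and binary operations above:

```
e = 0;
inv[a_] := Mod[-a, 2];
mult[a_, b_] := Mod[a + b, 2];

(* check all three equations for every choice of a, b, c in {0, 1} *)
And @@ Flatten[Table[
   {mult[a, e] == a && mult[e, a] == a,
    mult[a, inv[a]] == e && mult[inv[a], a] == e,
    mult[mult[a, b], c] == mult[a, mult[b, c]]},
   {a, 0, 1}, {b, 0, 1}, {c, 0, 1}]]       (* True *)
```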

References

  1. Three kinds of mathematical thinkers (G&G post)
  2. Technical meanings clash with everyday meanings (G&G post)
  3. Commonword names for technical concepts (G&G post)
  4. Renaming technical concepts (G&G post)
  5. Explaining higher math to beginners (G&G post)
  6. Visual Algebra II (G&G post)
  7. Monads for high school II: Lists (G&G post)
  8. The mystery of the prime numbers: a review (G&G post)
  9. Hersh, R. (1997a), "Math lingo vs. plain English: Double entendre". American Mathematical Monthly, volume 104, pages 48–51.
  10. Names (in abmath)
  11. Cognitive dissonance (in abmath)

Monads for high school II: Lists

The interactive examples in this post require installing Wolfram CDF player, which is free and works on most desktop computers using Firefox, Safari and Internet Explorer, but not Chrome. The source code is the Mathematica Notebook Monad.nb, which is available for free use under a Creative Commons Attribution-ShareAlike 2.5 License. The notebook can be read by CDF Player if you cannot make the embedded versions in this post work.

Introduction

This is the second part of a series of posts describing how I will lead up to introducing monads in my proposed e-book Abstracting Algebra (AbAl). It follows Monads for high school I. Comments in red are meta and mostly will not be included in the book.  

Lists 

A list is a specific kind of mathematical object. This is a reasonable specification for lists:

A list of length $n$ determines and is determined by what its first, second, $\ldots$, $n$th entries are. 

In this post, lists will always be finite in length.

For doing rigorous proofs you need a precise definition of a list, such as a function from $\{1,2,…,n\}$ to a set, or a recursive definition.  This book is not about proofs.

Terminology and representation

The most common way in the symbolic language of math to represent a finite list is to use a comma-delimited expression in parentheses.  For example, \[(4,4,2,8)\] is the list of length 4 whose first and second entries are both $4$, third entry $2$ and fourth entry $8$.

  • The order matters and repetitions are allowed. For example, $(4,4,2,8)$, $(4,2,8)$ and $(4,2,4,8)$ are all different lists.
  • Other words for lists are (finite) sequence, word, tuple and string.
  • Many mathematicians would call $(4,4,2,8)$ a $4$-tuple.
  • My Discrete math class notes discuss the specification and the definition of lists (called tuples there) at length on pages 50ff. This section of AbAl will incorporate some of the information there.
  • Some computer languages represent our list without the commas: $(4\,\,4\,\,2\,\,8)$.
  • Mathematica represents it this way: $\{4,4,2,8\}$.  This conflicts with the usual set notation, where the order does not matter and where repetitions are ignored  — the set $\{4,4,2,8\}$ has three elements.  But if you type Length[$\{4,4,2,8\}$] in Mathematica, you get the answer 4.
  • A list of characters (alphabetical, numerical, or other symbols) can be represented  by writing the characters down in order without spaces between them.  For example $(a,a,c,d)$ would be written "aacd".  This representation is referred to as a string or as a word in computing science.  The string "4428" is the base-10 representation of the integer $4,428$.  Of course, it is also the hexadecimal representation of the integer $17,448$. 
  • In the text, I will mostly use a cartouche representation: for example, $\boxed{1\ 2\ 3\ 4}$ is the list consisting of the first four positive integers in order.
  • The cartouche is more in-your-face than the other representations I've listed and as far as I know is not used to mean anything else.  I'm not sure I can give any better explanation for why I prefer it than that.  Math is supposed to be explicit and precisely defined and justified by clear reasoning, but after all deciding which representation to use is not math, it is art.

Lists with entries from a given set

If $S$ is any set, finite or infinite, $\textrm{Lists}(S)$ denotes the set of all lists of finite length whose entries come from $S$.  Thus the set $\textrm{Lists}(\{1,\  2,\  3\})$ contains:

  • $\boxed{2\ 2\ 3\ 2\ 2\ 1}$,
  • $\boxed{3\  3\  3\  3}$,
  • the list of length $42$ whose first entry is $3$ and every other entry is $1$,
  • the empty list $\boxed{\vphantom{n}}$,
  • the singleton lists $\boxed{1}$,  $\boxed{2}$ and  $\boxed{3}$, and
  • an infinite number of other lists, 
  • but the list $\boxed{4\  2\  3}$ is not an element of $\textrm{Lists}(\{1,\  2,\  3\})$.

$\textrm{Lists}$ is a function from sets to sets.  Its input is any set and its output is the set of all finitely-long lists whose entries are from the input set. We will also use the similar function $\textrm{Lists}^+$ which takes a set to the set of nonempty lists with entries from the set.
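
A hedged Mathematica sketch of the membership question; the helper name inListsOf123 is my own illustrative choice, and it tests whether every entry of a list comes from the set $\{1,\ 2,\ 3\}$:

```
inListsOf123[l_List] := AllTrue[l, MemberQ[{1, 2, 3}, #] &]

inListsOf123[{2, 2, 3, 2, 2, 1}]   (* True  *)
inListsOf123[{}]                   (* True: the empty list qualifies     *)
inListsOf123[{4, 2, 3}]            (* False: 4 is not in the given set   *)
```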

Associativity

(Review from Monads for high school I.)  If a binary operation is associative, then the operation is defined on any (finite) list of inputs in its underlying set.  For example, the sum of the list $\boxed{4\ 4\ 2\ 8}$  is 18.  It follows from associativity that you can add it up as $(4+4)+(2+8)$, $4+(4+(2+8))$, $4+((4+2)+8)$, $(4+(4+2))+8$ or as $((4+4)+2)+8$.  They all give the same answer. In other words, Plus is in fact an operation on lists of numbers.  It is customary to extend associative binary operations to lists of length $0$ and $1$ by setting the value at the empty list to be the identity element of the operation, and the value at a one element list to be its only entry.  Thus Plus($\boxed{4\ 4\ 2\ 8}$)$=18$, Plus($\boxed{\ \vphantom{0} }$)$=0$, Times($\boxed{\ \vphantom{0} }$)$=1$ and Plus($\boxed{3 }$)$=3$.
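
Here is a hedged Mathematica check of that review: the five groupings of the sum all evaluate to the same number, and Plus applied to the whole list gives that number too.

```
{(4 + 4) + (2 + 8), 4 + (4 + (2 + 8)), 4 + ((4 + 2) + 8),
 (4 + (4 + 2)) + 8, ((4 + 4) + 2) + 8}    (* {18, 18, 18, 18, 18} *)

Apply[Plus, {4, 4, 2, 8}]                 (* 18 *)
```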

Operations defined on finite lists

 You can join two lists together in order to make one list.  

The order matters.  If you join $\boxed{5\ 7}$ to $\boxed{2\ 12\ 7 }$ in that order you get $\boxed{5\ 7\ 2\ 12\ 7}$.  

Join is in fact an associative binary operation on lists.  Example: 
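
Here is a hedged one-line Mathematica check of associativity of join on one example, using the built-in Join:

```
Join[Join[{5, 7}, {2, 12, 7}], {1}] == Join[{5, 7}, Join[{2, 12, 7}, {1}]]   (* True *)
```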

This means we can define an operation on lists of lists that joins all the lists inside together to make one list. 

 Notice the blue rectangle disappears when you do the operation. What I have defined here is a function that has a list of lists as input and a list of numbers as output.

The operation of joining lists to get a single list has a property shown by the drawing below (which will be interactive when I work on it some more).  Start on the left with a list of lists of lists.  The border colors distinguish the innermost lists, bordered in black, from the second level lists, in blue, and the outside list, bordered in green.

  • There is only one outside list: It is a list of (blue) lists.  That is the kind of list you can apply join to, so when you do you get a single blue list with five lists inside it (on the bottom of the diagram). "Join outside first" means "apply join to the outside list first". 
  • The single blue list on the bottom is again the kind of list you can apply join to, and when you do you get the lower list on the right end of the diagram.
  • However, the green list also contains two lists each of which is a list of lists that you can apply join to.  Apply it to both of them and you get the list at the top of the diagram.  
  • Again, that list is the kind you can apply join to and when you do you get the upper list on the right.

[diagram: joining a list of lists of lists, inside first and outside first]

The two lists on the right are the same.  That always happens, whatever lists you start with.  (Try it with others, and include some singleton and empty lists while you are at it.) 

You might not have thought of this property, and now that you see it, it may look like some sort of second-rate phenom to take note of.  Or not.  But in fact, it turns out that it means that our modest function $\textrm{Lists}^+$, that takes a set to the set of nonempty lists of its elements, is a monad.  (So is $\textrm{Lists}$.) In order to say this we must define some other concepts: functor and natural transformation, and we have to verify a number of other properties of the $\textrm{Lists}^+$ function:  It is not just a function, it is a functor on the category of sets, the join function is a natural transformation, and some other technicalities.

Once we do that, we can define what the algebras of the join monad are, and it turns out that they are exactly all the associative binary operations.  

In other words:

The binary operation of join on nonempty lists is the mother of all associative binary operations.

But that will have to wait for the next post.


Monads for high school I

 

Notes for viewing

The interactive examples in this post require installing Wolfram CDF player, which is free and works on most desktop computers using Firefox, Safari and Internet Explorer, but not Chrome. The source code is the Mathematica Notebook associative.nb, which is available for free use under a Creative Commons Attribution-ShareAlike 2.5 License. The notebook can be read by CDF Player if you cannot make the embedded versions in this post work.

Monads in Abstracting Algebra

I've been working on first drafts (topic posts) of several sections of my proposed book Abstracting algebra (AbAl), concentrating on the ideas leading up to monads.  This is going slowly because I want the book to be full of illustrations and interactive demos.  I am writing the demos in Mathematica simultaneously with writing the text, and designing demos is very s l o w work. It occurred to me that I should write an outline of the path leading up to monads, using some of the demos I have already produced. This is the first of probably two posts about the thread.

  • AbAl is intended to give people with a solid high school math background a mental picture of or way of thinking about the various levels of abstraction of high school algebra.
  • This outline is not a "Topic post" as described in the AbAl page. In particular, it is not aimed at high school students! It is a guided tour of my current thoughts about a particular thread through the book.
  • The AbAl page has a brief outline of the topics to be covered in the whole book.  Perhaps it should also have a list of threads like this post.

Associativity

AbAl will have sections introducing functions and binary operations using pictures and demos (not outlined in this thread).  The section on binary operations will introduce infix, prefix and postfix notation but will use trees (illustrated below) as the main display method.  Then it will introduce associativity, using demos such as this one: 

Using this computingscienceish tree notation makes it much easier to visualize what is happening (see Visible Algebra II), compared to, for example, \[(ab)(cd)=a(b(cd))=a((bc)d)=((ab)c)d=(a(bc))d\]  In this equation, the abstract structure is hidden.  You have to visualize doing the operation starting with the innermost parentheses and moving out.  With the trees you can see the computation going up the tree.

I will give examples of associative functions that are not commutative using $2\times2$ matrices and endofunctions on finite sets such as the one below, which gives all the functions from a two element set to itself. 


  • Note that each function is shown by a diagram, not by an arbitrary name such as "id" or "sw", which would add a burden to the memory for an example that occurs in one place in the book. (See structural notation in the Handbook.) 
  • The section on composition of functions will also look in some depth at permutations of a three-element set, anticipating a section on groups.

 By introducing a mechanism for transforming trees of associative binary operations, you can demonstrate (as in the demo below) that any associative binary operation is defined on any list of two or more elements of its domain.

For example, applying addition to three numbers $a$, $b$ and $c$ is uniquely defined. This sort of demo gives an understanding of why you get that unique definition but it is not a proof, which requires formal induction. AbAl is not concerned with showing the reader how to prove math statements.

In this section I will also introduce the oneidentity concept: the value of an associative binary operation applied to a single element $a$ is $a$.  Thus applying addition or multiplication to $a$ gives $a$.  (The reason for this is that you want an associative binary operation to be a unique quotient of the free associative binary operation.  That will come up after we talk about some examples of monads.)

The oneidentity property also implies that for an associative binary operation with identity element, applying the operation to the empty list gives the identity element.  Now we can say:

An associative binary operation with identity element is uniquely defined on any finite list of elements of its domain.

Thus, in prefix notation, $+(2,3)=5$, $+(2,3,5)=10$, $+(2)=2$ and $+()=0$.  Similarly $\times(2)=2$ and $\times()=1$.
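
This prefix notation is essentially how Mathematica itself writes these operations, so the claims can be checked directly (a hedged sketch, relying on the built-in conventions for Plus and Times):

```
Plus[2, 3]      (* 5  *)
Plus[2, 3, 5]   (* 10 *)
Plus[2]         (* 2  *)
Plus[]          (* 0  *)
Times[2]        (* 2  *)
Times[]         (* 1  *)
```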

This fact suggests that the natural definition of addition, multiplication, and other associative binary operations is as functions from lists of elements of the domain to elements of the domain.   This fits with our early intuition of addition from grade school, not to mention from Excel:  Addition is something you do to lists.  That feeling (for me) is not so strong for multiplication; for many common business applications you generally multiply two things like price and number sold. That's because multiplication is usually for things of different data types, but you usually add things of the same data type (not apples and oranges?).   

That raises the question: Does every function taking lists to elements come from an associative binary operation?  I will give an example that says no.  But the next thing is to introduce joining lists (concatenation), where we discover that joining lists is an associative binary operation.  So it is really an operation on lists of lists.  This will turn out to give us a systematic way to define all associative binary operations by one mechanism, because it is an example of a monad.  That is for the second installment of this outline.


“Category Theory for Computing Science” online

The entire contents of the 3rd edition of Category theory for computing science, by Michael Barr and Charles Wells, is now available online at either of these addresses:

http://www.abstractmath.org/CTCS/ctcs.pdf

ftp.math.mcgill.ca/barr/pdffiles/ctcs.pdf

We have submitted it to be published by Theory and Applications of Categories in their reprint series. That is where Triples, Toposes and Theories is published.

There are two distinct versions of parts of CTCS already available on the internet.  Neither of them contains all the material in the book, nor do they contain answers to all the exercises, as the copy available above does.


Visible algebra II

The interactive examples in this post require installing Wolfram CDF player, which is free and works on most desktop computers using Firefox, Safari and Internet Explorer, but not Chrome. The code for the demos is in the Mathematica notebook algebra2.nb, which is available for free use under a Creative Commons Attribution-ShareAlike 2.5 License. The notebook can be read by CDF Player if you cannot make the embedded versions in this post work.

More about visible algebra

I have written about visible algebra in previous posts (see References). My ideas about the interface are constantly changing. Some new ideas are described here.

In the first place I want to make it clear that what I am showing in these posts is a simulation of a possible visual algebra system.  I have not constructed any part of the system; these posts only show something about what the interface will look like.  My practice in the last few years is to throw out ideas, not construct completed documents or programs.  (I am not saying how long I will continue to do this.)  All these posts, Mathematica programs and abstractmath.org are available to reuse under a Creative Commons license.

Commutative and associative operations

Times and Plus are commutative and associative operations.  They are usually defined as binary operations.  A binary operation $*$ is said to be commutative if for all $x$ and $y$ in the underlying set of the operation, $x*y=y*x$, and it is associative if for all $x$,$y$ and $z$ in the underlying set of the operation, $(x*y)*z=x*(y*z)$. 

It is far better to define a commutative and associative operation $*$ on some underlying set $S$ as an operation on any multiset of elements of $S$.  A multiset is like a set, in particular elements can be rearranged in any way, but it is not like a set in that elements can be repeated and a different number of repetitions of an element makes a different multiset.  So for any particular multiset, the number of repetitions of each element is fixed.  Thus $\{a,a,b,b,c\} = \{c,b,a,b,a\}$ but $\{a,a,b,b,c\}\neq\{c,b,a,b,c\}$. This means that the function (operation) Plus, for example, is defined on any multiset of numbers, and \[\mathbf{Plus}\{a,a,b,b,c\}=\mathbf{Plus} \{c,b,a,b,a\}\] but $\mathbf{Plus}\{a,a,b,b,c\}$ might not be equal to $\mathbf{Plus} \{c,b,a,b,c\}$.

This way of defining (any) associative and commutative operation comes from the theory of monads.  An operation defined on all the multisets drawn from a particular set is necessarily commutative and associative if it satisfies some basic monad identities, the main one being it commutes with union of multisets (which is defined in the way you would expect, and if this irritates you, read the Wikipedia article on multisets.). You don't have to impose any conditions specifically referring to commutativity or associativity.  I expect to write further about monads in a later post. 
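
A hedged Mathematica sketch of the point about multisets, using plain lists to stand in for multisets: rearranging the entries does not change the value of Plus, but changing the number of repetitions can.

```
Apply[Plus, {1, 1, 2, 2, 3}] == Apply[Plus, {3, 2, 1, 2, 1}]   (* True: both are 9       *)
Apply[Plus, {1, 1, 2, 2, 3}] == Apply[Plus, {3, 2, 1, 2, 3}]   (* False: 9 and 11 differ *)
```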

The input process for a visible algebra system should allow the full strength of this fact. You can attach as many inputs as you want to Times or Plus and you can move them around.  For example, you can click on any input and move it to a different place in the following demo.

Other input notations might be suitable for different purposes.  The example below shows how the inputs can be placed randomly in two dimensions (but preserving multiplicity).  I experimented with making it show the variables slowly moving around inside the circle the way the fish do in that screensaver (which mesmerizes small children, by the way — never mind what it does to me), but I haven't yet made it work.

A visible algebra system might well allow directly-input tables to be added up (or multiplied), like the one below. Spreadsheets have such an operation. In particular, the spreadsheet operation does not insist that you apply it only as a binary operation to columns with two entries.  By far the most natural way to define addition of numbers is as an operation on multisets of numbers.

Other operations

Operations that are associative but not commutative, such as matrix multiplication, can be defined the monad way as operations on finite lists (or tuples or vectors) of elements of the underlying set.  The operation is automatically associative if you require it to preserve concatenation of lists and some other monad requirements.

Some binary operations are neither commutative nor associative.  Two such operations on numbers are Subtract and Power.  Such operations are truly binary operations; there is no obvious way to apply them to other structures.  They are only binary because the two inputs have different roles.  This suggests that the inputs be given names, as in the examples below.
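
A hedged Mathematica check that these two operations really are neither commutative nor associative on sample inputs:

```
{Subtract[8, 4], Subtract[4, 8]}                             (* {4, -4}: not commutative   *)
{Subtract[Subtract[8, 4], 2], Subtract[8, Subtract[4, 2]]}   (* {2, 6}:  not associative   *)
{Power[Power[2, 3], 2], Power[2, Power[3, 2]]}               (* {64, 512}: not associative *)
```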

Later, I will write more about simplifying trees, solving the max area problem for rectangles surmounted by semicircles, and other things concerning this system of doing algebra.

References

Previous posts about visible algebra

Other references

 


Mathematical and linguistic ability

This post uses MathJax.  If you see mathematical formulas with dollar signs around them, or badly formatted formulas, try refreshing the screen.  Sometimes you have to do it two or three times.

Some personal history

When I was young, I was your typical nerdy geek.  (Never mind what I am now that I am old.)

In high school, I was fascinated by languages, primarily by their structure.  I would have wanted to become a linguist if I had known there was such a thing.  I was good at grasping the structure of a language and read grammars for fun. I was only pretty good at picking up vocabulary. I studied four different languages in high school and college and Turkish when I was in the military.  I know a lot about their structure but am not fluent in any of them (possibly including English).

After college, I decided to go to math grad school.  This was soon after Sputnik and jobs for PhD's were temporarily easy to get.

I always found algebra easy.  When I had to learn other symbolic languages, for example set theory, first order logic, and early programming languages, I found them easy too.  I had enough geometric insight that I did well in all my math courses, but my real strength was in learning languages. 

When I got a job at (what is now) Case Western Reserve University, I began learning category theory and a bit of cohomology of groups. I wrote a paper about group automorphisms that got into Transactions of the AMS.  (Full disclosure: I am bragging). 

The way Saunders Mac Lane did cohomology, he used "$+$" as a noncommutative operation.  No problem with that, I did lots of calculations in his notation.  In reading category theory I learned how to reason using commutative diagrams.  That is radically different from other math — it isn't strings of symbols — but I caught on. I read Beck's thesis in detail.  Beck wrote functions on the right (unlike Mac Lane) which I adapted to with no problem.  In fact my automorphisms paper and many others in those days were written with functions on the right.

Later on in my career, I learned to program in Forth reasonably well. It is a reverse Polish language. Then (by virtue of summer grants in the 1990's) to use Mathematica, which I now use a lot:  I am an "experienced" user but not an "expert".

Learning foreign languages in studying math

I taught mostly engineering students during my 35 years at CWRU (especially computer engineering). When I used a text (including my own discrete math class notes) some students pleaded with me not to use $P\wedge Q$ and $P \vee Q$ but let them use $PQ$ and $P+Q$ like they did in their CS courses.  Likewise $1$ and $0$ instead of T and F.  Many of them simply could not switch easily between different codes.  Similar problems occurred in classes in first order logic. 

In the early days of calculators when most of them were reverse Polish, some students never mastered their use. 

These days, a common complaint about Mathematica is that it is a difficult language to learn; at the MAA meeting in Madison (where I am as I write this) they didn't even staff a booth.  Apparently too many of the professors can't handle Mathematica.

I gave up writing papers with functions on the right because several professional mathematicians complained that they found them too hard to read. I guess not all professional mathematicians can switch code easily, either. 

There are many great mathematicians whose main strength is geometric understanding, not linguistic understanding.  Nevertheless, to become a mathematician you have to have enough linguistic ability to learn…

Algebra

The big elephant in the room is ordinary symbolic algebra as is used in high school algebra and precalculus.  This of course causes difficulty among first year calculus students, too, but college profs are spared the problem that high school teachers have with a large percentage of the students never really grasping how algebra works.  We don't see those students in STEM courses.

It is surely the case that algebra is a difficult and unintuitive foreign language.  I have carried on about this in my stuff about the languages of math in my abstractmath site. 

Some students already in college don't really understand expressions such as $x^2$.  You still get some who sporadically think it means $2x$.  (They don't always think that, but it happens when they are off guard.)  Lots of them don't understand the difference between $x^2$ and $2^x$.

In complicated situations, students don't grasp the difference between an expression such as $x^2+2x+1$ and a statement like $x^2+2x+1=0$.  Not to mention the difference between the way $x^2+2x+1=0$ and $x^2+2x+1=(x+1)^2$ are different kinds of statements even though the difference is not indicated in the syntax.

There are many irregularities and ambiguities (just like any natural language — the symbolic language of math is a natural language!): consider $\sin xy$, $\sin x + y$, $\sin x/y$.  (Don't squawk to me about order of operators.  That's as bad as aus, außer, bei, mit, zu.  German can't help it, but mathematical notation could.)

One monstrous ambiguity is $(x,y)$, which could be an ordered pair, the GCD, or an open interval.  I found an example of two of those in the same sentence in the Handbook of Mathematical Discourse, and today in a lecture I saw someone use it with two meanings about three inches apart on a transparency.

Anyway, the symbolic language of math is difficult and we don't teach it well.

Structuring calculations

There are other ways to structure calculations that are much more transparent.  Most of them use two or three dimensions.

  • Spreadsheets: It is easy to approximate the zeros of a function using a spreadsheet and changing the input till you get the value near zero. Why can't middle school students be taught that?
  • Bret Victor has made suggestions for easy ways to calculate things.
  • My post Visible Algebra I suggest a two-dimensional approach to putting together calculations.  (There are several more posts coming about that idea.)
  • Mathematica interactive demos could maybe be provided in a way that would allow them to be joined together to make a complicated calculation. (Modules such as an inverse image constructor.)  I have not tried to do this.

A lot of these alternatives work better because they make full use of two dimensions.  Toolkits could be made for elementary school students (there are some already but I am not familiar with them).  

It is impractical to expect that every high school student master basic algebraic notation.  It is difficult and we don't know how to teach it to everyone. With the right toolkits, we could enable everyone, not just students, to put together usable calculations on their computer and experiment with them.  This includes working out the effect of different payment periods on loans, how much paint you need for a room, and many other things.

STEM students will still have to learn algebraic notation as we use it now.  It should be taught as a foreign language with explicit instruction in its syntax (sentences and terms, scope of an operator, and so on), ambiguities and peculiarities.


An Introduction to Forms

In 2009, I wrote a sequence of posts on this blog explaining the concept of form that I introduced in [1].  I have now updated and combined them into an article [2].  The posts no longer exist on the blog. The article contains links to other papers on forms.

[1] A generalization of the concept of sketch, Theoretical Computer Science 70, 1990.

[2] An Introduction to forms.


More about defining “category”


In a recent post, I wrote about defining “category” in a way that (I hope) makes it accessible to undergraduate math majors at an early stage.  I have several more things to say about this.

Early intro to categories

The idea is to define a category as a directed graph equipped with an additional structure of composition of paths subject to some axioms.  Giving several small finite examples of categories drawn in that way gives you an understanding of “category” that has several desirable properties:

  • You get the idea of what a category is in one lecture.
  • With the right choice of examples you get several fine points cleared up:
    • The composition is added structure.
    • A loop doesn’t have to be an identity.
    • Associativity is a genuine requirement —  it is not automatic.
  • You get immediate access to what is by far the most common notation used to work with a category — objects (nodes) and arrows.
  • You don’t have to cope with the difficult chunking required when the first examples given are sets-with-structure and structure-preserving functions.  It’s quite hard to focus on a couple of dots on the paper each representing a group or a topological space and arrows each representing a whole function (not the value of the function!).

Introduce more examples

Then the teacher can go on with the examples that motivated categories in the first place: the big deal categories such as sets, groups and topological spaces.   But they can be introduced using special cases so they don’t require much background.

  • Draw some finite sets and functions between them.  (As an exercise, get the students to find some finite sets and functions that make the picture a category with $f=kh$ as the composite and $f\neq g$.)
  • If the students have had calculus,  introduce them to the category whose objects are real finite nonempty intervals with continuous or differentiable mappings between them.  (Later you can prove that this category is a groupoid!)
  • Find all the groups on a two element set and figure out which maps preserve group multiplication.  (You don’t have to use the word “group” — you can simply show both of them and work out which maps preserve multiplication — and discover isomorphism!)  This introduces the idea of the arrows being structure-preserving maps. You can get more complicated and use semigroups as well.  If the students know Mathematica you could even do magmas.  Well, maybe not.

All this sounds like a project you could do with high school students.

Large and small

If all this were just a high school (or intro-to-math-for-math-majors) project you wouldn’t have to talk about large vs. small.  However, I have some ideas about approaching this topic.

In the first place, you can define category, or any other mathematical object that might involve a proper class, using the syntactic approach I described in Just-in-time foundations.  You don’t say “A category consists of a set of objects and a set of arrows such that …”.  Instead you say something like “A category $\mathcal{C}$ has objects $A,\,B,\,C\ldots$ such that…”.

This can be understood as meaning “For any $A$, the statement $A$ is an object of  $\mathcal{C}$ is either true or false”, and so on.

This approach is used in the Wikibook on category theory.  (Note: this is a permanent link to the November 28 version of the section defining categories, which is mostly my work.  As always with Wikimedia things it may be entirely different when you read this.)

If I were dictator of the math world (not the same thing as dictator of MathWorld) I would want definitions written in this syntactic style.  The trouble is that mathematicians are now so used to mathematical objects having to be sets-with-structure that wording the definition as I did above may leave them feeling unmoored.  Yet the technique avoids having to mention large vs. small until a problem comes up. (In category theory it sometimes comes up when you want to quantify over all objects.)

The ideas outlined in this subsection could be a project for math majors.  You would have to introduce Russell’s Paradox.  But for an early-on intro to categories you could just use the syntactic wording and avoid large vs. small altogether.

 

http://en.wikibooks.org/w/index.php?title=Category_Theory/Categories&stableid=2221684
