# The meaning of the word “superposition”

This is from the Wikipedia article on Hilbert's 13th Problem as it was on 31 March 2012:

[Hilbert’s 13th Problem suggests this] question: can every continuous function of three variables be expressed as a composition of finitely many continuous functions of two variables? The affirmative answer to this general question was given in 1957 by Vladimir Arnold, then only nineteen years old and a student of Andrey Kolmogorov. Kolmogorov had shown in the previous year that any function of several variables can be constructed with a finite number of three-variable functions. Arnold then expanded on this work to show that only two-variable functions were in fact required, thus answering Hilbert's question.

In their paper A relation between multidimensional data compression and Hilbert’s 13th problem, Masahiro Yamada and Shigeo Akashi describe an example of Arnold's theorem this way:

Let $f(\cdot, \cdot, \cdot)$ be the function of three variables defined as $f(x, y, z) = xy + yz + zx$, $x, y, z \in \mathbb{C}$. Then, we can easily prove that there do not exist functions of two variables $g(\cdot, \cdot)$, $u(\cdot, \cdot)$ and $v(\cdot, \cdot)$ satisfying the following equality: $f(x, y, z) = g(u(x, y), v(x, z))$, $x, y, z \in \mathbb{C}$. This result shows us that $f$ cannot be represented as any 1-time nested superposition constructed from three complex-valued functions of two variables. But it is clear that the following equality holds: $f(x, y, z) = x(y+z) + (yz)$, $x, y, z \in \mathbb{C}$. This result shows us that $f$ can be represented as a 2-time nested superposition.
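A quick numerical sanity check of Yamada & Akashi's example (an illustration only; the names `s`, `p`, and `nested` are mine): the 2-time nested superposition $x(y+z)+yz$ is built from two-variable functions nested two deep, and it agrees with $f$ everywhere.

```python
import random

# f as a function of three variables.
def f(x, y, z):
    return x*y + y*z + z*x

# The same function as a 2-time nested superposition: the inner
# two-variable functions s(y, z) = y + z and p(y, z) = y*z feed the
# outer two-variable functions multiplication and addition.
def nested(x, y, z):
    s = y + z          # inner function of two variables
    p = y * z          # inner function of two variables
    return x * s + p   # outer two-variable functions: multiply, then add

# Spot-check agreement at random complex points.
for _ in range(1000):
    x, y, z = (complex(random.uniform(-1, 1), random.uniform(-1, 1))
               for _ in range(3))
    assert abs(f(x, y, z) - nested(x, y, z)) < 1e-12
```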

The strategy used in the Superposition Theorem is to eliminate all but one source of power within a network at a time, using series/parallel analysis to determine voltage drops (and/or currents) within the modified network for each power source separately. Then, once voltage drops and/or currents have been determined for each power source working separately, the values are all “superimposed” on top of each other (added algebraically) to find the actual voltage drops/currents with all sources active.

Superposition Theorem in Wikipedia:

The superposition theorem for electrical circuits states that for a linear system, the response (voltage or current) in any branch of a bilateral linear circuit having more than one independent source equals the algebraic sum of the responses caused by each independent source acting alone, with all other independent sources replaced by their internal impedances.
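The procedure described in the two quotations above can be sketched numerically. Here is a minimal Python check on a made-up two-source circuit (all component values are my own assumptions): V1 in series with R1, and V2 in series with R2, both feed a node that is tied to ground through R3; we want the current through R3.

```python
V1, V2 = 12.0, 9.0                 # source voltages (volts)
R1, R2, R3 = 100.0, 150.0, 220.0   # resistances (ohms)

def parallel(a, b):
    """Equivalent resistance of two resistors in parallel."""
    return a * b / (a + b)

# Source V1 acting alone: replace V2 by a short circuit, so R2 and R3
# are in parallel, in series with R1; then split by current divider.
i_total_1 = V1 / (R1 + parallel(R2, R3))
i3_from_V1 = i_total_1 * R2 / (R2 + R3)

# Source V2 acting alone: replace V1 by a short circuit.
i_total_2 = V2 / (R2 + parallel(R1, R3))
i3_from_V2 = i_total_2 * R1 / (R1 + R3)

# Superimpose (add algebraically) the two partial currents.
i3_superposed = i3_from_V1 + i3_from_V2

# Check against direct nodal analysis with both sources active.
G = 1/R1 + 1/R2 + 1/R3
v_node = (V1/R1 + V2/R2) / G
i3_direct = v_node / R3

assert abs(i3_superposed - i3_direct) < 1e-12
```

The assertion at the end is the point: the sum of the single-source responses equals the response with both sources active, which is exactly what linearity guarantees.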

Quantum superposition is a fundamental principle of quantum mechanics. It holds that a physical system — such as an electron — exists partly in all its particular, theoretically possible states (or configurations of its properties) simultaneously; but when measured, it gives a result corresponding to only one of the possible configurations (as described in interpretations of quantum mechanics).

Mathematically, it refers to a property of solutions to the Schrödinger equation; since the Schrödinger equation is linear, any linear combination of solutions to a particular equation will also be a solution of it. Such solutions are often made to be orthogonal (i.e. the vectors are at right angles to each other), such as the energy levels of an electron. By doing so the overlap energy of the states is nullified, and the expectation value of an operator in any superposition state is the expectation value of the operator in the individual states, multiplied by the fraction of the superposition state that is "in" that state.
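The last sentence can be illustrated with a toy two-level system (the amplitudes and energies below are my own made-up values): for a superposition of orthonormal energy eigenstates, the expectation value of the Hamiltonian is each eigenvalue weighted by the fraction of the superposition "in" that eigenstate.

```python
psi = [0.6, 0.8j]   # amplitudes a, b with |a|^2 + |b|^2 = 1
E = [1.0, 3.0]      # energy eigenvalues (assumed values)

# <psi|H|psi>, with H diagonal in this orthonormal eigenbasis.
H_psi = [e * c for e, c in zip(E, psi)]
expectation = sum(c.conjugate() * d for c, d in zip(psi, H_psi)).real

# The probability-weighted sum of the individual eigenvalues.
weighted = sum(abs(c)**2 * e for c, e in zip(psi, E))

assert abs(expectation - weighted) < 1e-12   # both equal 2.28
```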

The CIO midmarket site says much the same thing as the first paragraph of the Wikipedia Quantum Superposition entry but does not mention the stuff in the second paragraph.

In particular, the Yamada & Akashi article describes the way the functions of two variables are put together as "superposition", whereas the Wikipedia article on Hilbert's 13th calls it composition.  Of course, superposition in the sense of the Superposition Principle is a composition of multivariable functions with the top function being addition.  Both of Yamada & Akashi's examples have addition at the top.  But the Arnold theorem allows any continuous function at the top (and anywhere else in the composite).

So one question is: is the word "superposition" ever used for general composition of multivariable functions? This requires the kind of research I proposed in the introduction of The Handbook of Mathematical Discourse, which I am not about to do myself.

The first Wikipedia article above uses "composition" where I would use "composite".  This is part of a general phenomenon of using the operation name for the result of the operation; for example, students, even college students, sometimes refer to the "plus of 2 and 3" instead of the "sum of 2 and 3". (See "name and value" in abstractmath.org.)  Using "composition" for "composite" is analogous to this, although the analogy is not perfect.  This may be a change in progress in the language which simplifies things without doing much harm.  Even so, I am irritated when "composition" is used for "composite".

Quantum superposition seems to be a separate idea.  The second paragraph of the Wikipedia article on quantum superposition probably explains the use of the word in quantum mechanics.


# Composites of functions

In my post on automatic spelling reform, I mentioned the various attempts at spelling reform that have resulted in both the old and new systems being used, which only makes things worse.  This happens in Christian denominations, too.  Someone (Martin Luther, John Wesley) tries to reform things; result: two denominations.   But a lot of the time the reform effort simply disappears.  The Chicago Tribune tried for years to get us to write “thru” and “tho” —  and failed.  Nynorsk (really a language reform rather than a spelling reform) is down to 18% of the population, and the result of allowing Nynorsk forms to be used in the standard language has mostly been nil.  (See Note 1.)

In my early years as a mathematician I wrote a bunch of papers writing functions on the right (including the one mentioned in the last post).  I was inspired by some algebraists and particularly by Beck’s Thesis (available online via TAC), which I thought was exceptionally well-written.  This makes function composition read left to right and makes the pronunciation of commutative diagrams agree with the notation, so when you see the diagram below you naturally write h = fg instead of h = gf.
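The two reading orders can be sketched in code (a toy illustration; the names `compose` and `then` are mine):

```python
# Classical convention: (f ∘ g)(x) = f(g(x)), read right to left.
def compose(f, g):
    return lambda x: f(g(x))

# "Functions on the right" convention: apply g first, then f,
# so the composite reads left to right along the arrows.
def then(g, f):
    return lambda x: f(g(x))

double = lambda x: 2 * x
inc = lambda x: x + 1

assert compose(inc, double)(3) == 7   # inc(double(3)) = 6 + 1
assert then(double, inc)(3) == 7      # same composite, read left to right
```

Both conventions denote the same composite function; the dispute is purely about which notation matches the order in which the arrows are traversed.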

Sadly, I gave all that up before 1980 (I just looked at some of my old papers to check).  People kept complaining.  I even completely rewrote one long paper (Reference [3]) changing from right hand to left hand (just like Samoa).  I did this in Zürich when I had the gout, and I was happy to do it because it was very complicated and I had a chance to check for errors.

Well, I adapted.  I have learned to read the arrows backward (g then f in the diagram above).  Some French category theorists write the diagram backward, thus:

But I was co-authoring books on category theory in those days and didn’t think people would accept it. Not to mention Mike Barr (not that he is not a people, oh, never mind).

Nevertheless, we should have gone the other way.  We should have adopted the Dvorak keyboard and Betamax, too.

Notes

[1] A lifelong Norwegian friend of ours said that when her children say “boka” instead of “boken” it sounds like hillbilly talk does to Americans.  I kind of regretted this, since I grew up in north Georgia and have been a kind of hillbilly-wannabe (mostly because of the music); I don’t share that negative reaction to hillbillies.  On the other hand, you can fageddabout “ho” for “hun”.

References

[1] Charles Wells, Automorphisms of group extensions, Trans. Amer. Math. Soc. 155 (1970), 189-194.

[2] John Martino and Stewart Priddy, Group extensions and automorphism group rings. Homology, Homotopy and Applications 5 (2003), 53-70.

[3] Charles Wells, Wreath product decomposition of categories I, Acta Sci. Math. Szeged 52 (1988), 307-319.
