HPS 0628 | Paradox


Additive Measures

John D. Norton

Department of History and Philosophy of Science

University of Pittsburgh

http://www.pitt.edu/~jdnorton

- The idea of an additive measure
- Finite Additivity
- Countable Additivity
- Uncountable Additivity?
- Induced measures
- The Paradoxes
- Non-measurable Sets
- To Ponder

Supplement: If not additive, then what?

We have now seen three paradoxes whose resolutions have been postponed. They are Zeno's paradox of the stadium, in Davy's version; Zeno's paradox of measure; and the geometric version of Aristotle's wheel. They each concern the notion of measure, that is, the size of things. Their resolution will reside in identifying the need to separate two things that have been tacitly coupled: the size of things and the sizes of the points that compose them. It proves to be a delicate matter to effect this separation consistently. The separation requires ideas about infinity from the earlier chapter on infinite sets.

Measure theory is the applicable, formal theory of the sizes of things. Once we have a clearer grip on measures, we can see that all three paradoxes depend on the tacit assumption that the points of an infinite magnitude fix its measure. In the paradoxical cases, this proves to be a false assumption that generates the paradoxes.

The key idea to be developed in this chapter
concerns the relationship between an extended magnitude and its parts.

• If there are finitely many parts, then the magnitude of the whole is just
the sum of the magnitudes of the parts.

• If there is a countable infinity of parts, then the magnitude of the whole
is still the sum of the magnitudes of the parts.

• If there is an uncountable infinity of parts, then the magnitude of the
whole is decoupled from the magnitudes of the parts. For there is no
way to sum uncountably many component magnitudes.

This modern resolution of the paradoxes of measure requires a nineteenth century understanding of the different orders of infinity. It can only arise once we can distinguish a countable infinity from an uncountable infinity.

Extensive magnitudes are found throughout mathematics and the sciences. They are quantities that increase in proportion to the size of the system considered. The notion of an additive measure provides a general account of these extensive magnitudes, applicable to many cases.

The general notion applies to systems composed of many points. Its defining characteristics are:

*(Additive Measure)* For subsets of
points defining a magnitude:

1. (non-negativity) The measure assigns a non-negative real number to some
subsets.

2. (null set) The measure assigns zero to the empty set.

3. (additivity) If two subsets are disjoint, then the measure of their
union is the sum of the measures of the two parts.

The simplest additive measure is the counting measure routinely employed for systems consisting of finitely many points. For example, we measure how many eggs we have by counting them; and when we combine our eggs the counted numbers add up.

A dozen eggs combined with half a dozen eggs gives us 18 eggs = 12 eggs + 6 eggs.
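A minimal sketch of the counting measure, not from the text, may help. It checks the three defining properties of an additive measure against the egg example; the set names are illustrative assumptions.

```python
# Counting measure on finite sets, checked against the three defining
# properties of an additive measure. (Illustrative sketch.)

def counting_measure(s):
    """Assign to each finite set the number of its elements."""
    return len(s)

dozen = {f"egg-{i}" for i in range(12)}            # 12 eggs
half_dozen = {f"egg-{i}" for i in range(12, 18)}   # 6 more eggs, disjoint

assert counting_measure(dozen) >= 0           # non-negativity
assert counting_measure(set()) == 0           # null set
assert dozen.isdisjoint(half_dozen)
# additivity: 18 = 12 + 6
assert counting_measure(dozen | half_dozen) == \
       counting_measure(dozen) + counting_measure(half_dozen)
```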

We find additive measures in many more places: lengths, areas, volumes and elapsed times. They apply to the masses of bodies, to energies and to momenta. In the figure below, they are applied to simple planar areas. We have an area whose parts individually have areas 1, 3 and 2. Then the combined area is 1 + 3 + 2 = 6.

The additivity rule above mentions only adding the measures of two parts:

measure (combined parts 1 and 2)

= measure (part-1) + measure (part-2)

It is routinely assumed that this rule allows us to add the measures of finitely many parts:

(*Finite additivity*)

If a system has finitely many disjoint parts, part-1, part-2, ..., part-n,
then:

measure (combined parts)

= measure (part-1) + measure (part-2) + ... + measure (part-n)

This rule is justified by breaking the summation into n-1 pairwise summations. That is, we first sum the measures of the first two parts, then add the measure of the third part to the result, and so on until all n-1 pairwise summations are completed.

Here is how it looks
spelled out in greater detail.

measure (part-1) + measure (part-2) + ... + measure (part-n)

= **[measure (part-1) + measure (part-2)]** + ... + measure (part-n)

= **measure (parts 1,2)** + measure (part-3) + ... + measure (part-n)

= **[measure (parts 1,2) + measure (part-3)]** + ... + measure (part-n)

= **measure (parts 1,2,3)** + measure (part-4) + ... + measure (part-n)

etc.

Since arithmetic addition is associative, the
order in which these additions are carried out does not matter. We will
always arrive at the same sum.
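The reduction of an n-part sum to n-1 pairwise additions can be sketched in a few lines; the part measures reused here are the planar areas from the earlier example.

```python
# Finite additivity justified by repeated pairwise addition: reduce() applies
# the two-argument sum left to right, i.e. ((1 + 3) + 2). Associativity
# guarantees any other grouping gives the same total.
from functools import reduce

part_measures = [1, 3, 2]  # areas of the three disjoint parts

total = reduce(lambda a, b: a + b, part_measures)
assert total == sum(part_measures) == 6
```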

A new rule appears when we seek to add measures in systems that have infinitely many parts. The key condition is that the number of parts to be added is countably infinite, that is, of the size of the set of natural numbers in the sense of Cantor's theory, ℵ_{0}. For then we can number all the parts whose measures are to be added:

part-1, part-2, part-3, ...

and so on infinitely. There is a rule for adding the measures of these parts, called the rule of "countable additivity."

(*Countable additivity*)

If a system has countably infinitely many disjoint parts, part-1, part-2, part-3, ...,
then:

measure (combined parts)

= measure (part-1)

+ measure (part-2)

+ ... (and so on infinitely)

At first glance, this rule looks little different from the rule of finite additivity. However, there is an important difference. We could justify the rule of finite additivity for n parts just by writing down a calculation with n-1 pairwise summations. This procedure fails for the case of countable additivity. No matter how many additions we carry out finitely, we will never have completed the infinitely many additions called for in the summation.

The solution is to introduce as a definition what it is to sum infinitely many measures; and to do it in a way that naturally extends the rule of finite additivity. We form the infinite sequence of partial sums:

measure (part-1)

measure (part-1 and part-2)

measure (part-1 and part-2 and part-3)

...

Each of these partial sums can be formed using finite additivity. What will result is a sequence of numbers. If all goes well, that sequence will converge to some definite number, and that number is the infinite sum.

How can "all goes well" fail? Since each measure is
non-negative, the partial sums cannot decrease as we form them. All that
can go wrong is that the sums grow without bound.
"Infinity" is not a real number, so this case lies outside the definition. It is
customary, however, to overlook this complication and call such a result
an "infinite measure."

We have already seen an example of this procedure in Zeno's dichotomy paradox. To complete the course the runner had to traverse a distance of 1/2, and then 1/4, and then 1/8, and so on infinitely. We had to add these infinitely many distances.

We saw there that the partial sums in this infinite sequence of finite summations are:

1/2 = 1 - **1/2**

1/2 + 1/4 = 3/4 = 1 - **1/4**

1/2 + 1/4 + 1/8 = 7/8 = 1 - **1/8**

1/2 + 1/4 + 1/8 + 1/16 = 15/16 = 1 - **1/16**

No matter how many more of these sums we form, we are always some finite amount away from 1. We might add the first ten terms in the sequence and have

1/2 + 1/4 + 1/8 + ... + 1/1024 = 1023/1024 = 1 - **1/1024**

Since these sums get arbitrarily close to one, it is quite natural to declare as a definition that the infinite sum is one:

1/2 + 1/4 + 1/8 + ... = 1
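The partial sums above can be checked mechanically. A short sketch, using exact rational arithmetic so that no rounding intrudes:

```python
# Partial sums of the dichotomy series 1/2 + 1/4 + 1/8 + ...
# Each partial sum equals 1 - 1/2**n, so the sequence converges to 1,
# the value declared (by definition) to be the infinite sum.
from fractions import Fraction

partial = Fraction(0)
for n in range(1, 11):
    partial += Fraction(1, 2**n)
    assert partial == 1 - Fraction(1, 2**n)  # e.g. 1023/1024 = 1 - 1/1024

assert partial == Fraction(1023, 1024)  # after ten terms, 1/1024 short of 1
```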

Countable additivity is a perfectly well defined mathematical operation. If one has a magnitude with a countable infinity of parts, it would seem perverse not to use it. We cannot complain, for example, that summing an infinity of terms is inadmissible since completing an infinity of actions is objectionable. We have seen through our investigation of supertasks that this objection fails.

Perverse or not, nothing compels us to add the rule of countable additivity to the rule of finite additivity. We can choose to add it or not as seems appropriate to the case at hand. For the two operations of finite addition and countably infinite addition are distinct operations. No logical contradiction arises if we apply the first, but not the second.

What does it look like to retain finite additivity but not countable additivity? Here is a simple example. Imagine that we have a system consisting of a countable infinity of parts of measure

1/2, 1/4, 1/8, 1/16, ...

We can use finite additivity to compute the measure of any finite combination of these parts.

1/2 + 1/16 = 9/16

1/4 + 1/8 = 3/8

etc.

However, without invoking the rule of countable additivity, no summation will lead to the measure of the full system. For no matter how we try to arrive at that measure, we would have to add a countable infinity of measures of parts.

The rule of countable additivity would tell us that the measure of the whole is just 1. However, without that rule applying, nothing stops us from assigning a different measure to the whole. We might assign, say, a measure of 2. It would be an odd thing to do, since no finite sum of measures of parts exceeds 1. However, there is no logical contradiction in the assignment.
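The claim that no finite summation conflicts with the odd assignment can be tested by brute force. A sketch, checking every finite combination of the first eight parts (eight is an arbitrary illustrative cutoff):

```python
# Finite additivity without countable additivity: parts of measure
# 1/2, 1/4, 1/8, ... Every *finite* sum of part measures stays below 1,
# so stipulating measure 2 for the whole contradicts no finite summation.
from fractions import Fraction
from itertools import combinations

parts = [Fraction(1, 2**n) for n in range(1, 9)]  # first eight parts
whole_measure = Fraction(2)                       # stipulated, not summed

for k in range(1, len(parts) + 1):
    for combo in combinations(parts, k):
        assert sum(combo) < 1 < whole_measure  # no finite sum reaches the whole
```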

The most notable case, and the one that will be important for us here, arises for systems composed of a countable infinity of points, where each point has measure zero.

measure (point-1) = 0

measure (point-2) = 0

measure (point-3) = 0

etc.

Then the rule of countable additivity gives us:

measure (all points together) = 0 + 0 + 0 + ... = 0

We might be tempted to represent lengths by sets of rational numbers. That is, the interval of lengths from 0 to 1 would be represented by the set of rational numbers between 0 and 1, where each individual rational number is assigned a measure zero. It is tempting to use this representation since, as we saw earlier, the rational numbers are dense in the interval 0 to 1.

This choice would be quite troublesome, since the rational numbers form a countable set. If we applied the rule of countable additivity to the measures of the rational numbers in this interval, we would find that they sum to zero. We would have a degenerate notion of lengths: all intervals would have zero length.

Is the escape to assign a very tiny measure ε>0 to each rational? Then we run into the opposite problem. The measures of the infinitely many rationals in the interval from 0 to 1 would sum to infinity.
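The dilemma can be made vivid numerically. A sketch, with an arbitrary illustrative choice of ε:

```python
# The dilemma for per-point measures on the countably many rationals in [0, 1]:
# measure zero per point makes every countable sum zero, while any fixed
# epsilon > 0 per point makes the partial sums grow without bound.
eps = 1e-6  # illustrative tiny per-point measure; any positive value behaves alike

# Option 1: zero per point. Every partial sum is zero.
assert all(0 * n == 0 for n in range(1, 1001))

# Option 2: eps per point. The partial sum over n points is n * eps, which
# exceeds any given bound once n > bound / eps.
bound = 1.0
n_needed = int(bound / eps) + 1
assert n_needed * eps > bound
```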

In a later chapter, we shall see another case in which it is appealing to abandon countable additivity. It arises in probability theory when we deal with the problem of the infinite lottery.

We have just seen that a set of rational numbers fails to represent lengths made up of a dense set of points if we want these lengths to have non-trivial, countably additive measures (that is, measures other than zero and infinity). The standard solution is to represent lengths (and areas and volumes) by sets of real numbers of continuum size.

Take again the interval from 0 to 1. It is represented by the set of all real numbers lying between 0 and 1. Each real number represents a point in the line segment. Each real number is assigned zero measure.

This much seems to set us up for the same trouble that faced a representation of lengths by sets of rational numbers. For we have points--real numbers--of zero measure; and an infinity of them taken together gives us the line from 0 to 1. The only difference now is that the reals form an infinite set of a larger size, an uncountable set that is the size of the continuum.

All that is needed to arrive at this same trouble is for there to be a way to add the measures of the uncountably many points forming the continuum between zero and one. The rule of countable additivity can be applied, but it falls far short of what is needed. At best it adds up the measures of a countable infinity of real number points. We would have a sequence of points or parts like:

1, 2, 3, 4, 5, 6, ... (all natural numbers)...

Then we can form an infinite sequence of finite, partial sums.

measure (part-1)

measure (part-1 and part-2)

measure (part-1 and part-2 and part-3)

...

The ensuing, limiting sum will be defined. Applied to points in the interval of reals from 0 to 1, it can tell us that the measure of the resulting countably infinite set is zero. That is untroubling, since sets of points of countable size are a negligible portion of the full set of real-numbered points between zero and one. What we need is a notion of uncountable additivity that would allow us to sum the uncountably many zero measures of all the points. We would need a sequence like:

1, 2, 3, 4, 5, 6, ... (all real numbers)...

There is no way to form such a sequence, for no sequence can run through a set of continuum size. The upshot is that:

Standard measure theory provides no rule of uncountable additivity.

This is a "light-bulb" moment in which we see the
key fact!

This is the most important fact of this chapter, as far as resolving the paradoxes of measure is concerned.

The lack of a rule of uncountable additivity opens a gap between two types of measures: the zero measures of the individual points and the measures that can be assigned to intervals such as extend from 0 to 1. Because of this gap, we have great latitude in the measures that we can assign to systems consisting of continuum many points.

In the case of the interval from 0 to 1, a common choice is a uniform measure, officially called the "Lebesgue measure." This measure is assigned to subsets of the interval, which is of continuum size. The rule is just this:

To the interval [a, b], assign the measure (b - a)

The whole interval [0,1] is assigned measure 1 - 0 = 1.

The half interval [0,0.5] is assigned measure 0.5-0 = 0.5.

The interval [0.15, 0.45] is assigned measure 0.45-0.15 = 0.30.

etc.

What matters is that the totality of these assignments is consistent. We can check that this is so for a few simple cases.

The rule of finite additivity holds. For example, the intervals [a, b] and [b,c] combine to form the interval [a,c]. The measures of the first two are (b-a) and (c-b). Their sum is (c-b) + (b-a) = (c-a), which is the measure of the interval [a,c].
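The interval rule and the finite-additivity check just described fit in a few lines. A sketch (the tolerance guards only against floating-point rounding):

```python
# The Lebesgue measure rule on intervals, with the finite-additivity check
# from the text: [a, b] and [b, c] together carry the measure of [a, c].
def lebesgue(a, b):
    """Measure of the interval [a, b]."""
    return b - a

assert lebesgue(0, 1) == 1
assert lebesgue(0, 0.5) == 0.5
assert abs(lebesgue(0.15, 0.45) - 0.30) < 1e-12

# Finite additivity: (b - a) + (c - b) = (c - a)
a, b, c = 0.1, 0.4, 0.9
assert abs(lebesgue(a, b) + lebesgue(b, c) - lebesgue(a, c)) < 1e-12
```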

This Lebesgue measure conforms with the rule of countable additivity. Take the infinitely many intervals that arise in the dichotomy:

[0,1/2], [1/2,3/4], [3/4, 7/8], [7/8, 15/16], ...

They have measures

1/2, 1/4, 1/8, 1/16, ...

They sum to

1 = 1/2 + 1/4 + 1/8 + 1/16 + ...

While this Lebesgue measure is the standard one used, nothing compels it. We can replace it with other measures that conform with finite and countable additivity, while still preserving the measure zero of the individual points.

A simple alternative measure has values that are twice those of the Lebesgue measure. That is:

To the interval [a, b], assign the measure 2×(b - a)

Then everything proceeds as before, but now all the measures are doubled. Since the first measure produced no contradiction, the same will be true of the doubled measure. We have:

The whole interval [0,1] is assigned measure 2×(1 - 0) = 2.

The half interval [0,0.5] is assigned measure 2×(0.5 - 0) = 1.

The interval [0.15, 0.45] is assigned measure 2×(0.45 - 0.15) = 0.60.

etc.

We can reproduce the example of the dichotomy, now with doubled measures. The intervals

[0,1/2], [1/2,3/4], [3/4, 7/8], [7/8, 15/16], ...

have doubled measures

1, 1/2, 1/4, 1/8, ...

They sum to

2 = 1 + 1/2 + 1/4 + 1/8 + ...
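The doubled measure can be checked just as the original was. A sketch, using exact rationals for the dichotomy intervals:

```python
# The doubled Lebesgue measure: scaling the interval rule by a constant
# preserves finite and countable additivity, since sums scale uniformly.
from fractions import Fraction

def doubled(a, b):
    return 2 * (b - a)

dichotomy = [(Fraction(0), Fraction(1, 2)), (Fraction(1, 2), Fraction(3, 4)),
             (Fraction(3, 4), Fraction(7, 8)), (Fraction(7, 8), Fraction(15, 16))]
measures = [doubled(a, b) for a, b in dichotomy]
assert measures == [1, Fraction(1, 2), Fraction(1, 4), Fraction(1, 8)]

# Partial sums approach 2, the doubled measure of [0, 1]:
assert sum(measures) == 2 - Fraction(1, 8)
assert doubled(0, 1) == 2
```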

Both these measures--the Lebesgue measure and the doubled Lebesgue measure--are uniform measures. They are just the first two of infinitely many measures that can be assigned to the interval of reals from 0 to 1. Nothing requires the measure to be uniform. We can have measures that assign greater values to intervals close to zero; or close to 1; or elsewhere; and so on in innumerable variations.

*For calculus experts:*
These further measures assign a non-negative number to each interval of
reals and all their unions. The key condition to be met is that all the
numbers assigned are consistent with one another. This condition is
routinely met by specifying the measure through a density function, ρ,
which assigns a non-negative real number to each real in the interval. The
measure of an interval is then just the integral of the density function
over that interval.
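The density-function recipe can be sketched numerically. The choice ρ(x) = 2x is an illustrative assumption, giving a non-uniform measure that weights intervals near 1 more heavily:

```python
# A measure specified by a density rho: the measure of [a, b] is the
# integral of rho over [a, b], approximated here by the midpoint rule.
def measure(a, b, rho=lambda x: 2 * x, steps=100000):
    """Approximate the integral of rho over [a, b]."""
    h = (b - a) / steps
    return sum(rho(a + (i + 0.5) * h) for i in range(steps)) * h

assert abs(measure(0, 1) - 1.0) < 1e-6    # whole interval: integral of 2x is 1
assert abs(measure(0, 0.5) - 0.25) < 1e-6  # intervals near 0 get less measure
assert abs(measure(0.5, 1) - 0.75) < 1e-6  # intervals near 1 get more
# Finite additivity holds because integrals add over adjacent intervals:
assert abs(measure(0, 0.5) + measure(0.5, 1) - measure(0, 1)) < 1e-6
```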

The discussion of the last few sections shows that there is no simple, assured relationship between the measures assigned to the individual points forming some system and the measure assigned to the system as a whole.

We have cases in which the measures assigned to the individual points fix the measures assigned to the whole. That is, these are cases in which the measures on the whole are induced by the measures on the individual points.

One is the case of the counting measure. If we have two cartons of eggs, then our total egg count is the sum of the counts of eggs in the two cartons.

Another, richer example is the case of the rationals between zero and one. Once we assign a zero measure to each rational individually, countable additivity forces us to assign a zero measure to the whole interval.

Alternatively, the measures on the individual points can fail to fix the measure on the whole. The notable case is the one explored in the last section. We assign zero measure to the individual points of an interval of the real continuum. We remain free to assign many different measures to the whole interval.

The three paradoxes listed at the start of this chapter depend essentially on neglecting the possibility that the measure on the whole may not be fixed by the measure on the parts. That is:

In sets of continuum size, the zero measure of the individual points does not determine the measure of the totality.

The three paradoxes depend on contradicting this result.

Zeno's paradox of measure considered an infinitely divisible extended magnitude dissolved into its infinity of dimensionless parts.

Each part has zero measure. It was assumed that the measure of the totality could be recovered by adding up the infinitely many zeroes to get zero:

0 + 0 + 0 + 0 + 0 + ... = 0

If the magnitude consists of a countable infinity of points, such as a set of rational numbers, then this addition is authorized by the rule of countable additivity. Then the measures on the points do fix the measure on the totality.

However the extended magnitudes used in geometry and elsewhere are composed of an uncountable set of continuum many points. Standard theory provides no notion of uncountable additivity such that this last summation can proceed. At best the summation can extend only over a countable subset of all the points. It can reassure us that the subset of rational numbers in some interval is a measure zero set. But it can do no more.

The paradoxical summation contradicts the conclusion above: the measure of the whole is not fixed by the measures of the individual points.

In Davy's version of the stadium paradox, we have three bodies, AA', BB' and CC', of equal size, consisting of extended magnitudes of infinitely many points. AA' is at rest and bodies BB' and CC' move past in opposite directions, such that point B' sweeps past half of body AA', but the full extent of body CC'.

The result is that a one-to-one correspondence is established between half the points of AA' and all the points of CC'.

The inference then made is that the half interval of AA' must have the same magnitude as the full interval CC'. That contradicts the initial assumption that AA' and CC' are of equal (non-zero) magnitude.

This inference, we now see, is a
fallacy. The two intervals may have the same number of points and
every one of them may be of zero measure. That does not entail that the
extended magnitudes composed from them have the same measure.

The geometric version of the Aristotle wheel paradox depended on identifying a one-to-one correspondence between the infinitely many points of the circumference of a smaller circle and the infinitely many points of a larger circle.

The same fallacy is
then committed. Since the two circumferences are composed of the same
number of points, it is inferred fallaciously that they are of the same
measure, that is, the same length. That then contradicts the Euclidean
result that the larger circle has a larger circumference.

We have now seen enough measure theory to be able to resolve the three paradoxes listed at the start of the chapter. There is an important aspect of measure theory that should be mentioned here.

In systems of continuum size, only some subsets can consistently be assigned a countably additive measure. This is a universal problem. The Lebesgue measure defined above, for example, can be extended from the intervals of the original definition to many more subsets by finite and countable additions. However there remain many more subsets for which no extension of the Lebesgue measure is possible.

For this reason, the specification of a measure
routinely requires three elements.

• First is the set of points that delimit the system of interest.

• Second is a specification of those subsets to which the measure will be
assigned.

• Third is the specification of the value of the measure assigned to each
of these subsets.

For more, see "Axiom of Choice" in a later chapter.

The identification of which sets are non-measurable has proven to be delicate. The difficulty is that these sets are, on our best understanding, non-constructible. That means that we can infer that they exist, but we cannot point to a specific set and announce "that set is not measurable!"

The standard example of such a set is the "Vitali set," whose existence is inferred but not displayed. I have elsewhere provided an elementary account of it. See a later chapter and also

John D. Norton, *The Material Theory of
Induction.* Chapter 14, Section 6. http://www.pitt.edu/~jdnorton/homepage/cv.html#material_theory

Consider an account of measure that allows only finite additivity, but not countable additivity. What do we gain? What do we lose?

Is there no way to sum an uncountable infinity of measures? Perhaps we should just try harder?

Consider the three paradoxes resolved in this chapter: Zeno's paradoxes of measure and the stadium, and the geometric version of Aristotle's wheel. They use notions derived from modern measure theory. Might it be possible to explain to thinkers in antiquity how the paradoxes are resolved, in terms compatible with their methods?

June 28, October 14, 2021

Copyright, John D. Norton