TOCICO International Conference - PROGRAM


July 16-19, 2017
Melia Hotel in Berlin, Germany.


Benjamin Weber - Leader in Operations and Supply Chain Management

Prior to the 17th century, astronomers simply picked from their observations the value that seemed to suit their needs best (1963). It was not until the heliocentric world view gained acceptance during the 16th century that astronomers adopted the principle of the arithmetic mean to get closer to the true position of a star (1890). Even earlier, in the first half of the 16th century, mariners used the arithmetic mean implicitly to determine longitude at sea from the difference between the geographic and the magnetic north pole (1963).

Quêtelet (1846) transferred the concept from astronomy to other fields. Until then, the arithmetic mean had been employed to eliminate errors in observations of a constant cause. Quêtelet distinguished between this typical mean and the untypical arithmetic mean: the former would be the mean of multiple measurements of the height of the same house, the latter the mean of the heights of different houses (1913). As the causes are constant for the same house, the differences from the true value follow the law of error (1809). Since these differences are distributed symmetrically, they cancel each other out, making the arithmetic mean the value most likely to be closest to the true value. Following the law of large numbers (1713), the more often a measurement is repeated, the smaller the difference between the arithmetic mean and the true value becomes. If, however, one considers the heights of different houses, the causes are no longer constant. The differences no longer distribute symmetrically, and the arithmetic mean cannot represent the height of any house; it is rather an abstract value. Meyer (1891) wondered why the arithmetic mean was used in meteorology by convention although the values regularly do not distribute symmetrically. He concluded that in such cases the arithmetic mean cannot be considered the most likely value. Yet, as everyone used it without question, he assumed that people misunderstand the arithmetic mean as the most likely value even in cases where it is not.

School teaches us the arithmetic mean as the most common representative value for data (2007a). Its learning framework tacitly assumes a symmetric distribution as the standard. It does not mention that many things in reality are not distributed symmetrically. This framework only allows the conclusion that the arithmetic mean is something like the most likely value.

It is the benevolent warning, added by school, not to trust the arithmetic mean in all situations that seems to encourage us to rely on it in most others. A vague feeling persists that the arithmetic mean is in most cases the most likely value, which it is not.

Each time we encounter many values, we desire to reduce them, as our short-term memory is limited (TOC). We want to get an idea of the values by means of a single representative value.

The arithmetic mean is the value that all values would have if they were all equal. We then only need to remember one value and the number of values to imagine them all. It is also the only measure of central tendency that can be determined without having seen a single value itself: usually we need the sum anyway for accounting purposes, and if we are given only the sum and the number of values (be it for reasons of space, privacy, or simply convention), the arithmetic mean is all we have at our disposal. We can determine neither the other measures of central tendency nor whether the values are distributed symmetrically. Hence one either mistakes the arithmetic mean for the most likely value or risks having no idea of the values at all. When our nature to diminish uncertainty meets an option for more certainty, within a rather ill-defined frame of reference that does not sufficiently warn us that this certainty could be worse than the uncertainty, we are likely to fall prey to the misuse of the arithmetic mean.
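
A minimal sketch of this point in Python (the numbers are made up): from the sum and the count alone, the mean is available, but nothing else about the distribution is.

    # The arithmetic mean is the only measure of central tendency that can
    # be computed without seeing a single value: sum and count suffice.
    values = [3, 5, 7, 9, 200]              # hypothetical accounting data

    total, count = sum(values), len(values)  # often all that is reported
    mean = total / count                     # 44.8

    # From (total, count) alone, neither the median nor the mode nor the
    # shape of the distribution can be recovered: [44, 45, 44, 45, 46] has
    # the same sum and count, but a completely different distribution.
    print(mean)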

For multiple measurements under constant causes, the arithmetic mean will be the most likely value. If the causes are not constant, it can be, but usually will not be, the most likely value.
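
A small simulation of both cases (a sketch with made-up parameters; the choice of distributions is an illustrative assumption):

    import random

    random.seed(1)

    # Constant cause: repeated measurements of one house, true height 10 m,
    # with symmetric measurement error. The mean is close to the true,
    # most likely value.
    same_house = [10 + random.gauss(0, 0.5) for _ in range(10_000)]
    print(sum(same_house) / len(same_house))          # ~10.0

    # Non-constant causes: heights of different houses, skewed to the right
    # (lognormal is an illustrative assumption). The mean lands above the
    # bulk of the values and is no longer the most likely value.
    houses = [random.lognormvariate(2, 1) for _ in range(10_000)]
    mean = sum(houses) / len(houses)
    below = sum(v < mean for v in houses) / len(houses)
    print(mean, below)   # mean ~12, with roughly 70% of the values below it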

Consider 630 exoplanets forming a group of the same type. Most of their masses are similar. As the causes and the effects are similar, we are tempted to attribute the force provoking the mass of an exoplanet to the one cause of being an exoplanet. Why do we not see that 20 percent of the exoplanets account for 80 percent of the total mass? More than half of them do not belong to the group that accounts for 80 percent of the total volume. For more than half of them, one literally sees no difference in volume compared to the 80 percent of planets that account for only 20 percent of the total mass. As our view that one force provokes the mass of exoplanets is based on the majority of values, we have absolutely no reason to expect a force for the outliers in their own right, let alone a superior one.
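
To see how such a concentration can hide in plain sight, here is a synthetic stand-in (the Pareto shape parameter is an assumption chosen to yield a classic 80/20 split, not a fit to real exoplanet data):

    import random

    random.seed(7)

    # 630 masses drawn from a heavy-tailed Pareto distribution; alpha=1.16
    # is the textbook value that produces roughly an 80/20 concentration.
    masses = [random.paretovariate(1.16) for _ in range(630)]

    masses.sort(reverse=True)
    top20 = masses[:len(masses) // 5]          # heaviest 20 percent
    share = sum(top20) / sum(masses)
    print(f"top 20% of planets hold {share:.0%} of the total mass")
    # Typically around 80%, although most planets look indistinguishable.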

We calculate the arithmetic mean, possibly under the tacit assumption of constant causes.

One centre of gravity of the values derives its force from the frequency of the values, the other from their mass. As the two forces relate inversely to each other, each force distorts the arithmetic mean as a representative value for the other, and vice versa. A standard deviation of 2.7 times the arithmetic mean tells us not to trust it as the most likely value. We could calculate the most likely value directly, or use the median as a measure of central tendency. However, both would fail to appreciate the second centre of gravity, as it derives its force from mass rather than frequency. It seems paradoxical: the very nature of any measure of central tendency, to divide the values into two groups, makes it impossible to recognise their two centres of gravity.
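
The warning sign, and its limits, in a few lines (the lognormal spread is an assumption chosen so that the ratio resembles the case described above):

    import random
    import statistics

    random.seed(3)

    # Heavy-tailed sample; sigma = 1.5 is an assumption picked so the
    # spread is close to the situation sketched in the text.
    values = [random.lognormvariate(0, 1.5) for _ in range(10_000)]

    mean = statistics.mean(values)
    median = statistics.median(values)
    ratio = statistics.pstdev(values) / mean

    print(f"mean={mean:.2f}  median={median:.2f}  stdev/mean={ratio:.1f}")
    # Typical output: the standard deviation is roughly three times the
    # mean, and the median sits well below it. Both numbers warn against
    # the mean, but neither locates the heavy minority that carries most
    # of the total.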

How, then, can we find the two centres of gravity; in other words, how can we find out that a minority of very similar causes provokes a majority of the effects?

The presenter operationalised the entropy model of Grosfeld-Nir et al. (2007b) to tell, for any number of values, whether a Pareto distribution is present. He determines the most relevant cause-effect relation to allow for instant action. The standard deviation of the minority is equal to its arithmetic mean; that of the majority is 1.3 times their arithmetic mean.
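
One plausible way to operationalise such an entropy test, sketched in Python (the normalisation and the interpretation of the score are assumptions; the exact formulation and cut-offs in the 2007b paper may differ):

    import math

    def pareto_entropy(values):
        """Normalised Shannon entropy of the values' contribution shares.

        0 means one cause carries everything, 1 means all causes contribute
        equally; a low score suggests a Pareto-like concentration.
        """
        total = sum(values)
        shares = [v / total for v in values if v > 0]
        entropy = -sum(p * math.log(p) for p in shares)
        return entropy / math.log(len(shares))   # normalise to [0, 1]

    print(pareto_entropy([1, 1, 1, 1]))     # 1.0  -- perfectly even
    print(pareto_entropy([97, 1, 1, 1]))    # ~0.12 -- strongly concentrated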

If one applies the function repeatedly to the minority, one finds that it represents another Pareto distribution. Likewise, the majority contains another four Pareto distributions. The package comes with CSV and Excel interfaces and a Python API. It ships with expressive example data from real life and documentation including in-depth tutorials. The results produced by the package invite instant action towards enhancing a system's throughput following the TOC principles.

It is available as open source via http://pypi.python.org/pypi/effectus.
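
For illustration, the core idea of spotting such a split can be sketched from scratch in a few lines (this is not the effectus API; the function name and signature here are hypothetical, so consult the package documentation for the real interface):

    def pareto_split(values, effect_share=0.8):
        """Split values into the minority of causes carrying at least
        `effect_share` of the total effect, and the remaining majority.
        """
        ordered = sorted(values, reverse=True)
        total, running = sum(ordered), 0.0
        for i, v in enumerate(ordered, start=1):
            running += v
            if running >= effect_share * total:
                return ordered[:i], ordered[i:]   # minority, majority

    minority, majority = pareto_split([50, 30, 8, 5, 4, 2, 1])
    print(minority)   # [50, 30] -- 2 of 7 causes carry 80% of the effect
    # Applying pareto_split again to the minority (or to the majority)
    # probes for the nested Pareto distributions mentioned above.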

References:

  • 1713: Bernoulli, J. Wahrscheinlichkeitsrechnung (Ars conjectandi). Ostwalds Klassiker der exakten Wissenschaften no. 107 & 108, translated by Haussner, R. (1899)
  • 1809: Gauss, C. F. Theorie der Bewegung der Himmelskörper welche in Kegelschnitten die Sonne umlaufen, translated by Haase, C. (1865)
  • 1846: Quêtelet, A. Lettres sur la théorie des probabilités
  • 1874: Fechner, G. T. Ausgangswerth der kleinsten Abweichungssumme
  • 1890: Dreyer, J. L. E. Tycho Brahe; a Picture of Scientific Life and Work in the XVIth Century, p. 350
  • 1891: Meyer, H. Anleitung zur Bearbeitung meteorologischer Beobachtungen: Der Centralwerth, das arithmetische Mittel und der Scheitelwerth, pp. 12-27
  • 1907: Weber, H.; Wellstein, J. Angewandte Elementar-Mathematik, 3. Band
  • 1913: Kaufmann, Al. Theorie und Methoden der Statistik
  • 1963: Eisenhart, C. The Background and Evolution of the Method of Least Squares, 34th Session of the International Statistical Institute, Ottawa, Canada, August 1963; published in Eisenhart, C. (1983). The Collected Works of Churchill Eisenhart, Volume 2, 1954-1983, 63-5, p. 1f
  • 2007a: Indian National Council of Educational Research and Training. Mathematics, Textbook for Class VII, p. 60
  • 2007b: Grosfeld-Nir, A.; Ronen, B.; Kozlovsky, N. The Pareto managerial principle: when does it apply? International Journal of Production Research, Vol. 45, No. 10, 15 May 2007, pp. 2317-2325

3 learning objectives:

  1. A mean(s) [sic!] to get closer to the truth is usually abused to depart from it.
  2. If very similar causes provoke mostly similar effects, the effects might still emerge from different forces, even if their appearance does not differ from that of most of the other causes.
  3. Very little matters much and most things don't matter at all.

3 questions for attendees:

  1. Which observations of effects come to your mind that you would like to feed into the software library?
  2. Is the arithmetic mean a bad or a good guy?
  3. How does this relate to TOC at all?
BENJAMIN WEBER is a rather young independent organizational developer based
in Germany. He holds a Bachelor of Arts degree in European Energy Business. Instead
of an organization's flows of energy eliminating each other, he strives to enable the
organization to let those flows enrich each other and realize its full potential.
Benjamin's recently published 'effectus' Python library (open source) spots Pareto
distributions and determines the most relevant cause-effect relationship. Besides collecting
misunderstandings, his main interests lie in systems theory, human communication, a grain
of practical philosophy, theory of constraints, programming and stenography.