The core of Theory of Constraints
The core of Theory of Constraints:

A discussion between Eli Schragenheim and Sanjeev Gupta 


 Please enjoy this discussion between Eli and Sanjeev. Join us on the TOCICO LinkedIn company page to add your input.


By Sanjeev Gupta, CEO Realization Technologies, Inc.

Dr. Goldratt's genius lay in inventing practical solutions to seemingly intractable problems. While those solutions were often a leap of intuition by a brilliant mind, I believe they were also guided by a sound theory, at least as far as planning and scheduling (including financial decisions) are concerned.

The purpose of this essay is to articulate that underlying theory, gleaned from THE GOAL and reinforced for me over the years by experts like Peter Noonan, Bob Vollum, Mark Woeppel, John Covington and Dale Houle.

There are alternative ways to understand TOC solutions, but I believe that grasping their underlying theory is the most effective one: it allows you to systematically judge proposed solutions and invent new ones.


First, the core axiom around which TOC is formulated.

“Every process is characterized not only by dependencies among its steps, but also by finite resources (capacity, time, materials, cash, etc.) and by variability (in processing time, resources required, yield, etc.)”

Ignoring dependencies, finite resources or variability generates plans that are unachievable AND over-buffered; and detailed schedules that cannot be followed.

To illustrate why planning/scheduling is difficult and why TOC's Core Axiom can't be ignored, consider a simple factory. Planning and scheduling it is quite simple if there is no capacity limitation. Every delivery can be planned exactly when the order is due; and, to schedule the various steps, we just work backwards from the due-date.
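As a minimal sketch of that no-capacity-limit case, the following Python works backwards from a due date through a hypothetical sequence of steps (the step names and durations are invented for illustration):

```python
from datetime import datetime, timedelta

def backward_schedule(due_date, steps):
    """Schedule steps by working backwards from the due date.

    steps: list of (name, duration_hours) in processing order.
    Returns a list of (name, start, end), earliest step first.
    Assumes unlimited capacity: every step starts exactly when
    the backward pass says it should.
    """
    schedule = []
    end = due_date
    for name, hours in reversed(steps):
        start = end - timedelta(hours=hours)
        schedule.append((name, start, end))
        end = start
    return list(reversed(schedule))

# Hypothetical order: four steps, due at 17:00 on June 1st.
due = datetime(2024, 6, 1, 17, 0)
steps = [("cut", 4), ("weld", 6), ("paint", 3), ("pack", 1)]
for name, start, end in backward_schedule(due, steps):
    print(f"{name:5s} {start:%H:%M} -> {end:%H:%M}")
```

The moment any resource is shared or limited, this naive backward pass stops producing feasible schedules, which is exactly the point the next paragraphs make.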

Now, what if we have only one machine for each step? Failing to match production plans with available capacity will unnecessarily starve the machines, increase lead times AND cause due-dates to be missed. Planning/scheduling is no longer easy either, and the more complex the process (more products, more steps...), the more difficult it gets.

What if we also introduce variability into the mix, i.e., demand, machine uptime, and processing times fluctuate randomly? It gets much more difficult to create good and feasible plans. How does that impact output, costs and delivery?
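To make the combined effect of dependency and variability concrete, here is a small dice-game-style simulation in the spirit of the match-and-dice exercise in THE GOAL (the parameters are arbitrary). Each station averages 3.5 units per round, yet the line as a whole ships less, because a station can never process what upstream has not yet delivered:

```python
import random

def simulate_line(n_stations=5, n_rounds=1000, seed=42):
    """Dice-game simulation of a dependent production line.

    Every station's per-round capacity is a die roll (mean 3.5),
    but a station can only process what upstream has delivered.
    Station 0 draws from unlimited raw material.
    Returns (total units shipped, mean output per round).
    """
    rng = random.Random(seed)
    buffers = [0] * n_stations   # WIP waiting in front of each station
    shipped = 0
    for _ in range(n_rounds):
        for i in range(n_stations):
            capacity = rng.randint(1, 6)
            moved = capacity if i == 0 else min(capacity, buffers[i])
            if i > 0:
                buffers[i] -= moved
            if i + 1 < n_stations:
                buffers[i + 1] += moved
            else:
                shipped += moved
    return shipped, shipped / n_rounds

shipped, rate = simulate_line()
print(f"mean capacity per station: 3.5, line output per round: {rate:.2f}")
```

The output rate stays below the 3.5 average of any single station, and work-in-process piles up in front of the stations; dependency turns individual variability into a system-wide loss.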

Food for Thought

Are Dependencies, Finite Resources and Variability in your delivery process (services, projects, supply chain...) consequential? Do your current planning/scheduling methods tackle them well? How does that affect output, delivery and costs?


Next, the three theorems of TOC which I consider to be the real paradigm shift required. They follow directly from TOC’s Core Axiom and set the boundaries for what not to do. Since most managers violate them all, they are more important to understand than the theorems that tell you what to do.


Unbuffered plans degrade global performance by amplifying variability.


Detailed schedules created in planning cannot be used in execution; moreover, the planning logic cannot even be used to create execution schedules. Planning logic is about maximizing planned performance, whereas execution is about delivering the plans you have committed to.


Unit Cost, and its variants such as Earned Value, cannot be used for financial decisions because: i) fixed costs are largely unaffected by decisions about individual activities, parts, products or projects; and ii) even the variable costs incurred by one activity, part, product or project can depend on what happens with other activities, parts, products and projects.


Now, the strokes of Dr. Goldratt’s genius: the planning/scheduling enablers in TOC for global optimization. (Since true experts have explained them in detail elsewhere, I will just list them along with my observations.)


1.     CONSTRAINTS

While constraints being the limiting factor is almost axiomatic, Dr. Goldratt’s brilliance was in exposing the local optimization practices that ignore this reality.


2.     T-I-OE

Even a few sporadic decisions using the T-I-OE (Throughput-Inventory-Operating Expense) framework can double your profits! Sadly, it is not implemented enough.
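As a hypothetical illustration of such a decision (all numbers invented), compare the unit-cost view with the throughput view of accepting one extra order on otherwise idle capacity:

```python
def throughput_decision(price, variable_cost, fixed_cost_allocation):
    """Contrast a unit-cost view with a throughput (T-I-OE) view of
    accepting one extra order on otherwise idle capacity."""
    unit_cost_margin = price - (variable_cost + fixed_cost_allocation)
    throughput = price - variable_cost  # actual change in cash
    return unit_cost_margin, throughput

# Invented numbers: price 100, truly variable cost 60,
# allocated share of fixed costs 55.
margin, t = throughput_decision(100, 60, 55)
print(margin)  # -15: the unit-cost view says "reject, it loses money"
print(t)       # 40: fixed costs do not change with this one decision,
               # so accepting leaves the company 40 better off
```

The sign flip is the whole point: allocation makes a cash-positive order look like a loss, which is why theorem 3 above rules Unit Cost out for such decisions.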


3.     THE FIVE FOCUSING STEPS

They provide the PLANNING and decision-making logic, and can be used at every level in an organization. Again, very powerful but not implemented enough.

4.     BUFFERS

There are three types of buffers: buffers that protect commitments; buffers that ensure on-demand availability; and buffers that enable flexibility. Knowing why you need buffers helps you plan them in the right place.


5.     BUFFER MANAGEMENT

It provides the EXECUTION logic to create execution schedules, identify expediting/recovery/improvement actions, and expose hidden constraints.
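A common way to operationalize this execution logic is the three-zone buffer signal; the 1/3 and 2/3 thresholds below are the conventional choice, not a requirement:

```python
def buffer_status(consumed_fraction):
    """Three-zone buffer-management signal.

    consumed_fraction: how much of the buffer is already used up
    (e.g. elapsed buffer time / total buffer time).
    """
    if consumed_fraction < 1 / 3:
        return "green"    # on track, do nothing
    if consumed_fraction < 2 / 3:
        return "yellow"   # plan a recovery action
    return "red"          # expedite now

print(buffer_status(0.1), buffer_status(0.5), buffer_status(0.9))
# green yellow red
```

Ranking open orders by zone (and, within the red zone, by depth of penetration) is what turns buffer consumption into concrete expediting priorities.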

In conjunction with "demand-pull" from JIT/Lean, these enablers are sufficient to maximize the performance of any "system".


I conclude this discussion on TOC's theory by addressing how it can be applied to local measurements.

While overall organizational performance is what matters in the end, local measurements are the lifeblood of management. They are required to convert global performance goals into departmental ones, to monitor execution, and to hold people accountable. At the same time, they also carry certain risks that should be avoided or mitigated.

  • First, local measurements can promote local optimization at the expense of global performance. This risk can be avoided simply by deriving local measurements from global ones, using globally optimized plans and schedules.
  • Second, every local measurement that is used for accountability induces a hidden buffer. This risk is impossible to avoid, but it can be mitigated by choosing measurements wisely and by setting ranges of acceptable performance rather than a single number.

The same ideas can be used for aligning subcontractors and suppliers.

Thanks for reading. In the future, I also hope to share my views on how (not) to implement TOC.

P.S. Yes, that's all there is to the theory of TOC as it pertains to planning, scheduling and financial decisions. You should re-read THE GOAL if you have been sidetracked by all the complexity that has crept into TOC in the last thirty years.

The Structure of the TOC approach to Operations

By Eli Schragenheim

Sanjeev's initiative to describe the whole of TOC for Operations as a pyramid made of one axiom, several theorems, and then several insights/enablers is both interesting and highly valuable. With such a structure of the key generic insights, new solutions to other problems can be built.

I'd like to propose a somewhat different structure and raise some reservations about several details. Ultimately, I agree with the top objective: make it easier for people to develop new solutions, with the help of the generic insights of TOC, for other environments, for instance transportation organizations or managing a television network.

Let’s start with the top of the pyramid. Instead of an axiom, I'd like to propose a generic objective:

Be able to make reasonably good planning and execution decisions in environments that are both complex and highly uncertain

Complex means there are many variables that are partially dependent on each other.  So, in order to predict the outcome of an idea one has to solve a very complicated formula.

Uncertainty means each of the variables is also subject to significant variability.  On the face of it, adding variability on top of complexity would make the system even more complex and thus unpredictable.

A key generic insight of Goldratt's, which underlies all TOC methodologies, is a kind of axiom:

Every organization, or any human system, is inherently simple!

The simplicity means that even though it seems that every organization is complex and uncertain, it is still not too difficult to predict the outcome of an action in a good-enough way.

Here I come to Sanjeev's definition of his axiom. Organizations have many processes that describe how to produce and deliver a product or service. Each process is a flow that is subject to certain dependencies, like needing the right inputs in order to continue. Sanjeev rightfully states that the common description of the organization through its processes misses the substantial impact of the finite capacity of resources and the impact of variability. Both categories of variables generate dependencies that can be examined only holistically, meaning by considering all the flows and the quantity each flow has to complete. Without considering both categories of dependencies, any plan will lead to chaotic execution.

How come such a system, with a huge number of variables, dependencies and sources of variability, can be simple?

The answer is that the management of any organization cannot afford to let the organization behave in a chaotic way!

Chaos means it is impossible to predict the outcome, not even close! So, a chaotic organization would not be tolerated by its clients. Actually, I can think of one example of an organization in chaos: an army at war. Just think of the two world wars and how chaotic they were.

So, how come most organizations are both complex and uncertain, but they still perform in a tolerable way, meaning the performance is not chaotic and the organization has active customers?

The only logical explanation is that organizations take huge efforts to reduce the level of dependency within the operational system. They do it by maintaining excess capabilities and excess capacity, and by holding high stocks to decouple as many dependencies as possible. The fact that many managers pretend to strive for efficiency just points to a paradox in managers' basic understanding of their true environment.

Once the ramifications of the axiom are understood, we can put in place many of the TOC insights: constraints, buffers, buffer management and throughput.

Sanjeev's first theorem states: Unbuffered plans degrade global performance by amplifying variability.

I have a slight reservation because, in my mind and experience, most plans definitely include buffers. The real problem is that all those buffers are hidden: they are included in every element of the plan, but in a way that disguises their existence, pretending to be exactly the time, quantity and quality that are absolutely required. The cause of hiding the buffers is the common utopia of optimization.

The problem with hidden buffers is that they are easily and commonly wasted, and thus they do not really function as buffers for the whole area.

I agree with the rest of Sanjeev's theorems and enablers. I'd like to add a theorem, or you might claim it is another axiom, for buffer management:

When buffers are used against continuous fluctuations (not against rare sporadic events) then the behavior of the buffer consumption reflects the relative strength of the buffer.
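One way to turn this observation into code is to track how deeply each cycle penetrates the buffer and read the distribution as a verdict on the buffer's strength. The thresholds below are illustrative assumptions, not TOC canon:

```python
def buffer_strength(penetrations):
    """Read a buffer's consumption history as a verdict on its size.

    penetrations: per-cycle fractions of the buffer consumed (0..1+).
    Thresholds (over 20% of cycles in red => too weak; over 90% never
    leaving green => too strong) are illustrative assumptions.
    """
    n = len(penetrations)
    red = sum(p >= 2 / 3 for p in penetrations) / n
    green_only = sum(p < 1 / 3 for p in penetrations) / n
    if red > 0.2:
        return "too weak"    # deep penetrations are frequent: enlarge
    if green_only > 0.9:
        return "too strong"  # buffer is barely touched: shrink it
    return "about right"

print(buffer_strength([0.1, 0.4, 0.5, 0.7, 0.3]))
```

The point of the theorem is that this only works for continuous fluctuations; a buffer sized for rare sporadic events will look "too strong" almost all the time, by design.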

Sanjeev also claims that performance measurements are required to ensure accountability. Here I strongly disagree: performance measurements, at their best, present a picture of the overall performance. They do not answer the question: why are the measurements lower than expected?

I find it very beneficial to be exposed to such an effective description of TOC thinking about managing operations in complex and uncertain environments. I hope that my slightly different description and a few minor reservations complement readers' overall understanding. And I'd like to call for more views and different perspectives to be expressed here on the TOCICO site, for all of us to be able to do more with what we have.

TOCICO members are invited to express their own opinions, reservations and new ideas about the topic on the TOCICO LinkedIn company page.