Tuesday, May 22, 2012

Ghosts, Goblins, and Monsters – The Power (and Danger) of Visibility

As a father of three young boys, I am consistently bombarded by requests. “Can we play outside?”, “Can we go to Grandpa and Grandma’s?” and “Can we buy a toy?” are familiar questions for my wife and me.

One popular request recently has been, “Can we watch Scooby Doo?” For those of you unfamiliar with Scooby Doo, it’s a cartoon that features a group of teenagers and their dog who investigate paranormal occurrences. Recently, our 3-year-old asked to watch an episode. With its ghosts, goblins and monsters, my wife and I were initially reluctant, fearing nightmares. However, after much lobbying from him and his older brothers, we relented. Needless to say, he only made it about two minutes before his eyes were shut tight in fear.

Undoubtedly, some of you must be wondering, what does this have to do with Manufacturing Operations Management (MOM) and the issue of visibility? It turns out that many who undertake a process improvement effort in the MOM space want visibility into the goblins that lurk within their operation. Whether they’re the ghosts of process improvement efforts past or the “waste” monster, project sponsors and team members enter a process improvement project ready to take on whatever is uncovered.

However, an interesting phenomenon occurs once they see these “villains”. Often, much like my 3-year-old son, they shut their eyes. Statements like “we can’t change that process step” or “the data must be wrong” become commonplace as the team and, ultimately, the broader organization loses the courage to act upon the trends that have been exposed by the project.

There are some actions we’d recommend taking to build and maintain the organizational courage to follow through on improvement efforts:

  • Arm yourself with Data. Provide open access to data and use case studies to demonstrate its accuracy (read a few of ours here). This will help build confidence and consensus within the organization that decisions are being made on fundamentally good data. You will also likely find that providing easy access to good data promotes more data-driven decision making.

  • Know your Nemesis. Use good analytical techniques to rigorously quantify the “villains” - the events that define poor performance: waste, poor quality, yield loss, etc. Publish the definitions and make them common knowledge. You’ll likely need to be able to capture and analyze both time-series process data and event-based context data. Be sure to check out the capabilities of Catalyst PDC, our historian, and watch for more on Level3 regarding Event Detection and Complex Event Processing.

  • Plan your Attack. Define the right response (a business process) that you will take when an event occurs. Start simply, iterate often and know that your business process will evolve as you learn more about your adversary. Remember, perfection is the enemy of done!

  • Take Action. Ensure you are diligently executing every time an event occurs - automating business processes with workflows is a great way to ensure 100% compliance and guaranteed execution. If you want to learn more about workflow automation, read this great case study.

  • Promote your Wins. Many people overlook the importance of promoting successes; don’t be one of them. Make sure you promote the actions being taken with data regarding savings, frequency of occurrence, duration, etc.

Taking these steps will help ensure that you have the courage to keep your eyes open and take the appropriate action no matter how scary the monster.
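The steps above amount to an event-driven pattern: name the event, define the response, then execute it every time. As a minimal sketch of that idea (the event names, registry and `handle` function here are invented for illustration, not Savigent's workflow product):

```python
# Hypothetical sketch: map each named "villain" (event type) to a planned
# business-process response, so execution is defined rather than ad hoc.

RESPONSES = {}  # event type -> response function (the planned "attack")

def respond_to(event_type):
    """Register the defined response for an event type."""
    def register(func):
        RESPONSES[event_type] = func
        return func
    return register

@respond_to("unplanned_downtime")
def escalate_downtime(event):
    # In practice: notify maintenance, open a ticket, log the duration, etc.
    return f"Escalated downtime on {event['asset']} lasting {event['duration_min']} min"

def handle(event):
    """Every occurrence gets its planned response; an event with no plan
    raises an error rather than silently slipping by."""
    try:
        action = RESPONSES[event["type"]]
    except KeyError:
        raise ValueError(f"No planned response for event type {event['type']!r}")
    return action(event)

print(handle({"type": "unplanned_downtime", "asset": "Press-3", "duration_min": 12}))
```

The registry is the "Take Action" bullet in miniature: if an event fires, something defined happens, every single time.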

Wednesday, May 16, 2012

Merovingian, or a Simple Explanation of the Reactive Agents Concept

As I was working on my first topic for this blog, “Reactive Agents,” I found myself suffering from a big, fat case of writer’s block.

I have been working with Reactive Agents for over 15 years; they form the foundation of our Manufacturing Operations Management platform. When we talk to our customers about them, we often use an analogy: they’re like functional “Lego Blocks” that you can use to build manufacturing systems. But we rarely talk about what “Reactive Agents” really are and why they are the foundation of our technology. Believe me, there are a lot of reasons: developer productivity, scalability, flexibility, reuse - the list goes on and on.

But I was stuck, quickly realizing that the topic requires a good introduction for novice readers, something that won’t scare people away with overly complex theory and terminology. I needed inspiration, something better than the quote from the first published article about Reactive Agents: “In order to build a system that is intelligent, it is necessary to have representations grounded in the physical world, such obviating the need for symbolic representations or models because the world becomes its own best model.” (Brooks, 1991).

I turned my attention to the TV, and unexpectedly, within four or five clicks, I had my inspiration: “The Matrix Reloaded” was on, right about the time that Neo and friends walk into a club looking for the “Key Maker” (here is the clip if you need to refresh your memory: http://www.youtube.com/watch?v=E_PFZ92dMys#t=2m05s).

The Merovingian talks about “cause and effect” – the philosophical concept of causality, in which an action or event produces a certain response in the form of another event. I had seen this scene numerous times before, never realizing how it related to the products and technologies I have been working on for years.

You can’t model the world, but with reactive agents, you don’t have to. In Brooks’ words: “this hypothesis obviates the need for symbolic representations or models because the world becomes its own best model. Furthermore, this model is always kept up-to-date since the system is connected to the world via sensors and/or actuators. Hence, the reactive agents hypothesis may be stated as follows: smart systems can be developed from simple agents which do not have internal symbolic models, and whose 'smartness' derives from the emergent behavior of the interactions of the various agents.”

Cause and effect…
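A toy example helps make the hypothesis concrete. Everything below is invented for illustration (it is not Savigent's platform): each agent is a bare condition-to-action rule with no internal model of the world; it senses only the current values and reacts, and the stable overall behavior - a tank level held within limits - emerges from the interaction of the agents with the world:

```python
# Toy reactive-agents sketch: two rule-based agents and a stand-in "world".
# Neither agent plans or models anything; each just senses and reacts.

world = {"level": 95, "inlet_open": True, "outlet_open": False}

def inlet_agent(w):
    # Close the inlet when the level is high; reopen it when low.
    if w["level"] > 90 and w["inlet_open"]:
        w["inlet_open"] = False
    elif w["level"] < 70 and not w["inlet_open"]:
        w["inlet_open"] = True

def outlet_agent(w):
    # Independently: open the outlet when high, close it when low.
    if w["level"] > 90 and not w["outlet_open"]:
        w["outlet_open"] = True
    elif w["level"] < 70 and w["outlet_open"]:
        w["outlet_open"] = False

def physics(w):
    # Stand-in for the real world the agents sense via sensors/actuators.
    if w["inlet_open"]:
        w["level"] += 5
    if w["outlet_open"]:
        w["level"] -= 10

agents = [inlet_agent, outlet_agent]
for _ in range(5):          # cause and effect, tick by tick
    for agent in agents:
        agent(world)
    physics(world)

print(world)
```

No agent "knows" the target band, yet the level settles into it - the "smartness" is emergent, exactly as the hypothesis states.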

Stay tuned for more on Reactive Agents and how Savigent is applying this technology in manufacturing operations management systems. Upcoming posts will cover base concepts as well as the software side of the agent-based platform.

Wednesday, May 9, 2012

What Good is Your Data if You Can’t Find it?

Modern data historians solve many problems for process engineers and managers: they provide access to past values of process data, enabling advanced trending and accurate root cause analysis. Process data comes in two main forms:
  • Analog values, in which data is represented by a continuous variable (temperature, pressure, level, weight, etc.)
  • Discrete values, in which data comes from a finite set of values, such as state information (on/off, opened/closed, high/low)

Under the hood of a typical historian, the value of each data variable is stored by time. A routine investigation for a process engineer is answering the age-old question:
“There were reports that we were out of quality on machine X last week – what happened?”
The process engineer can query the historian over a time range to retrieve process data such as temperature or current draw.

At first glance, this time-based retrieval method seems sufficient. But is it? On closer inspection, several questions arise: What was the machine state (was it running during the time queried)? What was the acceptable range for the process variable for the part being manufactured (by recipe, SKU or lot)? Without context, finding the appropriate data can be difficult, time consuming and inefficient. If the information cannot be found easily, its value is reduced.

When implementing a historian, take the time to consider the context variables that might assist in future queries; some examples are:
  • Machine State (Running, Idle, and Down) 
  • Lot or BatchID 
  • Recipe Class and/or Step 
  • Operator 
  • Shift 

By storing context variables alongside process variables, answers can be found with a couple of queries that refine the results and transform the data into information. In addition, similar lots can be compared to determine differences.

Savigent’s Historian solution goes one step further by simplifying the retrieval process down to one query leveraging both context and time.
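To make the difference concrete, here is a minimal sketch of a single retrieval that filters on time and context together. The records, field names and `query` helper are invented for illustration; this is plain Python, not Savigent's actual query API:

```python
# Hypothetical historian records carrying context variables (machine state,
# lot, shift) alongside each process value.
from datetime import datetime

records = [
    {"time": datetime(2012, 5, 1, 8, 15), "temp": 182.0,
     "machine_state": "Running", "lot": "A17", "shift": 1},
    {"time": datetime(2012, 5, 1, 8, 30), "temp": 140.0,
     "machine_state": "Idle", "lot": "A17", "shift": 1},
    {"time": datetime(2012, 5, 1, 9, 0), "temp": 185.5,
     "machine_state": "Running", "lot": "B02", "shift": 1},
]

def query(records, start, end, **context):
    """One pass: a time range plus any number of context filters."""
    return [r for r in records
            if start <= r["time"] <= end
            and all(r.get(k) == v for k, v in context.items())]

# Only the temperatures that matter: machine actually Running, on lot A17.
hits = query(records, datetime(2012, 5, 1), datetime(2012, 5, 2),
             machine_state="Running", lot="A17")
print([r["temp"] for r in hits])
```

The Idle reading inside the same time window is excluded automatically - the context did the refining that would otherwise take several rounds of manual digging.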

Thursday, May 3, 2012

OEE won’t provide you ROI

I debated using the title “Why I Hate OEE,” but I thought that was a little extreme, albeit catchy. And I wouldn’t go so far as to say that I hate OEE. Rather, I view initiatives solely focused on calculating the metric as akin to investing in an expensive rear-view mirror. You need one, don’t get me wrong. But the metric and the data required to calculate it will only tell you where you’ve been. If implemented properly, you will know when you have had a problem, but it won’t change your behavior or your business processes, or support your continuous improvement efforts - all of which can serve to increase your return on invested capital.

My friend Matt Littlefield at LNS Research (formerly an analyst at Aberdeen) wrote a good summary of OEE (Part 1, Part 2) on his blog. If you are looking for some good foundational information on OEE, it’s well worth a read. He advocates measuring OEE, and rightly so: from a comparative perspective it’s helpful (between pieces of equipment, within a plant, between plants and potentially between companies). What’s not written, and what Matt would be quick to point out, is that it’s the action you take based on OEE-related events that provides ROI.

Therein lies the rub: the devil is in the “events”, and the “events” are what provide ROI. Take the availability component of OEE, for example – the ratio of asset uptime to scheduled production time. What’s more important to you, knowing that you are using 85% of scheduled production time, or knowing when an asset transitioned into an unproductive state (the event)? Would you rather have a detailed account of the 85% of available time you used, or of the 15% that’s being wasted? More importantly, what are you doing about it? What actions are you taking when an asset transitions from a productive to an unproductive state? Taking action on exception events differentiates a manufacturer from its peers and provides returns that exceed peer performance – the mark of an exceptional manufacturer.
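As a worked example with made-up numbers: the same state-transition events that produce an 85% availability figure also yield the detailed account of the wasted 15% - when each unproductive period started and how long it ran:

```python
# Illustrative shift: (minute into the shift, new state) transition events.
transitions = [
    (0, "productive"),
    (120, "down"),        # the event worth acting on
    (150, "productive"),
    (300, "down"),
    (342, "productive"),
]
scheduled_minutes = 480   # one 8-hour scheduled shift

uptime = 0
downtime_events = []
# Pair each transition with the next one (the shift end closes the last interval).
for (t, state), (t_next, _) in zip(transitions, transitions[1:] + [(scheduled_minutes, None)]):
    if state == "productive":
        uptime += t_next - t
    else:
        downtime_events.append({"start_min": t, "duration_min": t_next - t})

availability = uptime / scheduled_minutes
print(f"Availability: {availability:.0%}")   # the rear-view-mirror number
print(downtime_events)                       # the events you can act on
```

The single percentage tells you where you've been; the event list (a 30-minute stop at minute 120, a 42-minute stop at minute 300) is what a workflow can actually respond to.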

We advocate a unified approach to modeling asset states and state transitions, with sufficient detail to support continuous improvement initiatives. The same holds for the quality component of OEE – have a unified approach with regards to quality events (yield loss). There’s a special emphasis here at Savigent on events, state transitions and quality events, because we want our customers to realize high ROIs on their implementations. Knowing when events occur allows our customers to implement workflows that guide the actions they take every time an asset transitions from a productive to unproductive state, or a yield loss event occurs. Since events are monitored in real time and responses are implemented as workflows there’s guaranteed execution with 100% process compliance. Our customers know when bad things happen, and they know they are responding to them as they have defined each and every time.

Of course, when you take this approach you’ll have all the data you need to calculate OEE. But you will also capture a tremendous amount of information about the events that are causing inefficiency – their frequency of occurrence, duration, impact and any related data you need to support continuous improvement efforts.