Wednesday, December 5, 2012

Software Tools Tailored to Roles / Audience

I was watching "Channel 9" on TV the other night, you know, the Microsoft community channel, I mean who doesn't right?  Anyway, there was a segment entitled "John Cook: Why and How People Use R".  The basis of the talk was why statisticians use R (a language and environment for statistical computing and graphics) to solve programming tasks.  The interesting thing to me is that this talk was given from a developer/programmer's point of view; he was questioning why they would use that tool to solve a task versus Visual Studio.  He summarized with a couple of points:
  • Users may have very different priorities than computer scientists
  • Users will use a familiar tool when feasible
I think that these points extend beyond the roles of programmers and statisticians to the many roles of the manufacturing environment.  So what tool is best for the manufacturing environment? Assuming that most users are not programmers, I would suggest that it needs to have the following qualities:
  • Visual/Graphical in nature - if controlling an SOP, for example, it should have a workflow type interface
  • Provides code abstraction - the ability to build things from "building blocks" of functionality.

Which Tool is Right for You?
Why are Visual Code Development and Abstraction important? Simply put, by using a tool with both of these qualities, processes can be rapidly developed by existing subject matter experts (SMEs). Furthermore, that system can be easily followed by many different roles within the organization, which allows for collaboration and standardization.

Thursday, September 27, 2012

The Top 10 Workflow RFI/ RFP Questions (From the home office in Minnetonka, MN)

Anyone who has heard of comedian David Letterman and his program, “The Late Show with David Letterman”, is familiar with his “Top 10 List” segment. During this segment, Letterman or one of his celebrity guests recites a top 10 list of items in a particular category. These lists are said to originate “From the home office in Wahoo, Nebraska” and contain the host’s humorous take on a current event or topic.

In the spirit of Letterman’s Top 10, Savigent is proud to bring you, “The Top 10 Workflow RFI/RFP Questions that you should be asking your MOM Vendor” from the home office in Minnetonka, MN:

1. Tell me about your product. Where has it been implemented? How is it being used? - These fundamental questions lead our Top 10 list because they provide an immediate indication of the nature of the product and help you determine if it’s an actual product or just a bunch of nice-looking slides and marketing material.

2. Will your product fit my business? – In asking this question, you are seeking to understand the configurability and adaptability of the vendor’s product. A number of software vendors claim that their product is configurable during the sales process, but their customers find out post-Purchase Order that the product requires changing their manufacturing process – an activity that is either not feasible given the cost and time constraints of the project or, more fundamentally, is not a desired project outcome. Your manufacturing process can be a source of competitive advantage for your business – why should automating all, or even a portion of it, require that process to fundamentally change?

3. Does your product comply with Industry standards like ISA-95? – While you may want to maintain the process underlying your automation effort, alternatively you may want to move it closer toward one of the applicable standards in the space such as ISA-95. Does the vendor support these types of standards? Can the vendor explain the intent behind the standards and help you assess how the standard can work for you?

4. Will your product help me leverage my existing MOM investments? – Oftentimes, vendors will state, “Our product can replace that system” or “You can only realize the benefits you’re seeking if you use our product for all MOM functions”. Savigent offers you an option – we can either replace systems that exist within your portfolio of MOM solutions or we can help leverage them by building workflows around them, supplementing their functionality and user interfaces, as necessary, to drive business results while not throwing away prior investment.

5. Can you integrate with my other related systems? – There are a number of related systems that can benefit from MOM data including ERP and MES applications. By asking this question, you are gauging the openness of the vendor’s product and the ease with which it can integrate with these existing systems.

6. Is your product scalable and ready to support my Enterprise? – In response to this question, examine the vendor’s product architecture. Is it service-oriented? Does it provide a set of reusable services that can be throttled to meet increased demand? Moreover, are these services managed by the product or do you need an external management facility, such as an Enterprise Service Bus, to do so?

7. Can I use your product or will I need to hire “a bunch of IT guys” to help use it? – If a vendor’s automation solution requires a large investment in training and/or hiring within IT, will you realize the benefits that you are seeking? Put another way, are you really saving anything by automating if your IT spend goes up? Examine who uses the vendor’s system and the size of the support team at one of the vendor’s clients. Ask that client if the size of that support team is higher than what they had planned during their implementation.

8. How will your product help me drive actual improvement in my operations? How can I know that I am realizing results? – In examining the vendor’s product in regards to this question, you want to understand what data will be provided and, more importantly, how easy it is to turn that data into information through analysis and reporting. How easy is it to collect intelligence about your manufacturing operations that can be used to justify both your current and future investment within the space?

9. How long will it take to implement your product? – Can the vendor’s product be implemented in a matter of weeks versus months? How will the product be implemented? Will the vendor be working independently to get it installed and configured or will you be able to frequently view the solution as it is being constructed and offer input on how best to build it to meet your needs?

And the final question of “The Top 10 Workflow RFI/RFP Questions that you should be asking your MOM Vendor” is…

10. What questions haven’t I asked that I should be asking? – While having this question last may seem anti-climactic, I’ll defend it on two accounts. First, a vendor that has experience in the MOM space is going to be forthright with questions that you may not have visibility into within your operation, but that are relevant to your future investment in the space. Second, and most importantly, a vendor that provides you with a list of questions is a potential partner who will help you improve your operation regardless of how it benefits them. Anyone know of such a company?

Did I miss a key question? Disagree with any of the above? If so, I would love to hear from you. I can be reached at “the home office” at

Tuesday, September 25, 2012

Leveraging Workflow Automation to Drive Operational Intelligence

We published a new brief today focused on the application of workflow automation in the manufacturing environment to drive operational intelligence. The brief, titled “Leveraging Workflow Automation to Drive Operational Intelligence”, can be found on Savigent’s web site at

The three-part brief includes thought-provoking research by Gartner analysts Simon Jacobson and Leif Eriksen.  Their research highlights the “demand for efficient and effective tools and technologies” that manufacturers need to make more effective and accurate decisions and to manage business processes that “close the feedback loop of corporate performance back to the plant level to drive the right behaviors.”

It also provides a practical understanding of workflow automation and an introduction to Savigent’s industry-leading software, Catalyst Workflow.  We also show how we are applying workflow automation to drive change within a manufacturing operation – through Incident Management.  Incident Management is a powerful workflow-based system available to support continuous improvement efforts in the manufacturing environment. It puts some rigor around the business processes that define how manufacturers respond when problems occur.

Wednesday, August 8, 2012

To tackle Complexity we need to Simplify

This is the second installment in the series of posts related to Reactive Agents. The first can be found here:

When the scientific community began work on autonomous robots and artificial intelligence, it ran into a very large obstacle – the world is complex. As it turns out, the best model of the world is the world itself, and we lack the ability to describe it in any form that computers can execute.

With all of our technology for planning, learning, natural language processing and other AI techniques, early robots were very unimpressive creatures. And while a lot can be attributed to the hardware platforms and sensors that we used, the main problem was (and remains) that deduction is slow and complicated!
Even today, with all the advancements in hardware and sensors like LIDAR, vision and GPS, robots are still primitive. All the military drones we see and read about are nothing more than remote-controlled toys with over-the-horizon control capabilities, where all the real intelligence comes from the human operator.
To tackle the complexity problem Rodney A. Brooks proposed that Artificial Intelligence should not be an attempt to build "human level" intelligence directly into machines. Rather, citing evolution as an example, he asserted that we should first create simpler intelligences, and gradually build on the lessons learned from these, working our way up to more complex behaviors.
Brooks’ architecture was designed to provide all the functionality displayed by lower level life forms, namely insects. Using a common house fly as an example, Brooks claimed that creatures with this level of intelligence have attributes that resemble closely connected networks of sensors and actuators, with pre-wired patterns of behavior and simple navigation techniques – they are "deterministic machines".

For the software savvy people reading this, the key concepts present in Brooks’ architecture are that:

  • Intelligent behavior does not require explicit representations – we don’t have to model the world to build an intelligent system
  • Intelligent behavior does not require abstract (symbolic) reasoning – our systems don’t have to think to be intelligent
  • Intelligence is an emergent property of certain complex systems – at some level of complexity systems will become intelligent

We can use Finite State Machines to model the behavior of individual system participants
The architecture provides these capabilities through the use of a combination of simple machines with no central control, no shared representation, slow switching rates and low bandwidth communication. Putting this into modern software system jargon – it’s like many simple computing cores in a massively parallel configuration, each running a Finite State Machine.
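To make the jargon concrete, here is a minimal sketch (in Python, with invented names) of one such simple machine: a single participant modeled as a Finite State Machine that reacts to an incoming signal with no central controller or world model involved.

```python
# Illustrative sketch (invented names): one system participant modeled as a
# Finite State Machine. It reacts directly to an incoming signal and emits an
# actuator command; there is no central controller or shared representation.

class BumperAgent:
    """Drive forward until the bumper is pressed, then back up briefly."""

    def __init__(self, backup_ticks=3):
        self.state = "FORWARD"          # current FSM state
        self.backup_ticks = backup_ticks
        self.remaining = 0              # ticks left in the BACKUP state

    def step(self, bumper_pressed):
        """Process one incoming signal and return the outgoing command."""
        if self.state == "FORWARD":
            if bumper_pressed:
                self.state = "BACKUP"
                self.remaining = self.backup_ticks
                return "reverse"
            return "drive"
        # BACKUP state: count down, then resume driving
        self.remaining -= 1
        if self.remaining == 0:
            self.state = "FORWARD"
        return "reverse"
```

Many such machines running in parallel, each holding only its own small state, is essentially the "massively parallel configuration" described above.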

Reactive agents are the building blocks of such architecture:
  • They are situated in the world – they are instances and not classes, they are things
  • They interact directly with the world through sensors and actuators – there are no layers, implementation uses direct connectivity to minimize reaction time and implementation overhead
  • They can also interact directly with each other – this interaction is no different from their interaction with the world
A system of reactive agents exists with no overarching models and agents have no knowledge of other agents. Reactive agents simply have to process all incoming signals and based on their internal algorithms and their internal state optionally set outgoing signals.

iRobot’s Roomba is a modern example of a robot using this technique. We can easily identify reactive agents that:

  • Stop the motor and back out when the bumper sensor is triggered
  • Signal the robot to find the charging station or go back to work based on the battery sensor
  • Navigate the robot back to the charging station using the light beacon when charging is required
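The three behaviors above can be sketched as independent reactive agents that see only raw signals and set outputs, with no shared world model. Everything below (signal names, the battery threshold) is invented for illustration, not iRobot's actual design:

```python
# Illustrative sketch: Roomba-style behaviors as independent reactive agents.
# Each agent sees only raw signals and sets outputs; none holds a model of
# the world or knows the others exist.

def bumper_agent(signals, outputs):
    # Stop forward motion and back out when the bumper is triggered.
    if signals.get("bumper"):
        outputs["motor"] = "reverse"

def battery_agent(signals, outputs):
    # Ask for the charging station when the battery runs low.
    if signals.get("battery_pct", 100) < 15:
        outputs["mode"] = "seek_dock"

def beacon_agent(signals, outputs):
    # Steer toward the dock's light beacon, but only while docking is wanted.
    # (Agents interacting through signals, just as they do with the world.)
    if outputs.get("mode") == "seek_dock" and signals.get("beacon_heading") is not None:
        outputs["steer"] = signals["beacon_heading"]

def tick(signals):
    """One control cycle: every agent runs against the same signal snapshot."""
    outputs = {"motor": "forward"}      # default behavior: keep cleaning
    for agent in (bumper_agent, battery_agent, beacon_agent):
        agent(signals, outputs)
    return outputs
```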
Software systems in general, and manufacturing systems specifically, are complex and suffer from the same problems that exist in robotics and AI software. When we developed Catalyst Platform, we used the very best concepts from reactive agent architecture and functionality and translated them into a comprehensive event-driven, reactive agent-based software development platform. In the next installment I will cover how our patented technology translates reactive agents from the world of robotics into the world of distributed software systems.

Tuesday, July 24, 2012

Catalyst System Management Provides Centralized Management

Chances are that you are reading this article via your favorite RSS reader, in my case, Flipboard. Or perhaps you were pulled in via LinkedIn or Twitter. No matter the exact method, there is a common thread among all of these; simply put, you wanted to avoid manually checking all of your favorite websites independently. Like many users, as you come across something interesting, you add it to your favorite aggregator - you did add us, right? Once you are subscribed (or following), the magic happens; you can read content from hundreds of blogs and sites from a single interface. Think of the time that you save by having all of that data pulled to you – no need to visit them separately.

So, why am I highlighting some of the points of social media (besides a shameless plug – again, you are following or subscribing, right)? The recent release of Savigent’s Catalyst Platform™ offers several benefits – one that I would like to highlight is the addition of Catalyst System Management™ to the Platform. This tool allows you to browse, monitor and maintain your Catalyst Platform™ in one central place. Think of your assets (or nodes) on the plant floor as websites/blogs; you might have hundreds… System Management allows you to keep in touch, without typing URLs each time or physically going to the asset.

That is just the beginning; unlike a simple RSS feed, Catalyst System Management™ allows for interaction with the assets on the Platform. You can configure node data, push updates of code/configuration data, and monitor system health in real time. Think of the time that you will save in managing your system… But a word of caution: similar to using an RSS reader, you might just find yourself getting addicted to the information!

Thursday, July 19, 2012

Bull Durham and Catalyst Platform: Composite Applications

This summer is the 25th anniversary of one of my all-time favorite movies. Ron Shelton’s Bull Durham is the classic story of minor league baseball player “Crash” Davis, an aging veteran who is sent to the minor leagues to help train a flashy young pitcher, Ebby Calvin “Nuke” LaLoosh, in the ways of professional baseball. In the process, he meets and falls in love with a die-hard baseball fan, Annie Savoy; who, unfortunately for “Crash”, has already taken a romantic interest in young “Nuke”.

While there are a number of memorable scenes, one of the best involves Annie discussing her beliefs on love. In educating both “Crash” and “Nuke” on her beliefs, she talks about how matters of the heart are really out of our control and more a matter of quantum physics. As “Crash” rises to leave, stating that after all of his time in the minors he doesn’t believe in trying out; Annie asks, “Well, what do you believe in?”

After “Crash” replies with a soliloquy that covers the designated hitter, the JFK Assassination, good scotch and “deep, slow wet kisses that last three days”, “Nuke” replies, “Hey Annie – what’s all this molecule stuff?”

So, what does this have to do with Savigent’s recent release of Catalyst Platform? While this release provides a number of features, many of them appear, on the surface, to offer only technical benefit. While my co-authors will address the benefits of these features in future posts, some users may be a little like “Nuke” in the scene mentioned above, asking, “What’s all this technical stuff?”

In an effort to demonstrate that there’s more than just technical benefit to our new release, I want to highlight a feature that can deliver immediate business results – composite applications. Simply put, a composite application is an “application” built of other applications. Like the “mash-ups” of the service-oriented development world, composite applications are built using re-usable components that can quickly be rearranged in response to changing business priorities and conditions. The ability to build composite applications and reuse components across them within Catalyst Platform will allow our clients to build architectures that are flexible, adaptable and extensible. More importantly, when coupled with an iterative development approach, composite applications built on the Catalyst Platform will allow customers to develop applications more quickly, leading to a faster return on their investment.

Given the dynamic nature of today’s manufacturing environment, leading software providers have to provide tools that facilitate faster, more scalable development. Catalyst Platform creates the foundation to meet this challenge. That’s what I believe – in case Annie asks…

Tuesday, July 17, 2012

Savigent’s Catalyst Platform v4.0 Lays Foundation for Migration to Windows Azure

Includes new Catalyst Bus™ with real-time information access via OData services
MINNETONKA, Minn. – July 17, 2012 – Savigent Software, Inc., the Minnetonka-based company specializing in event-driven manufacturing operations management software, announced today the release of Catalyst Platform™ version 4.0. The updated version delivers significant enhancements to the software, most notably a fully managed infrastructure for the development, deployment and management of composite applications. Catalyst Platform™ also provides a central repository for revision controlled management of composite applications and a new, highly scalable Catalyst Bus™ with real-time information access via OData services.

Michael Feldman, company CTO, said, “The release of Catalyst Platform™ lays a foundation for the migration of the entire Catalyst suite to Windows Azure and provides significant new functionality that supports enterprise-wide development, deployment and management of composite applications in data centers, and private and public clouds.”

Catalyst Platform™ is the foundation of Savigent’s expanding Catalyst™ suite of software products. The software dramatically simplifies the development and management of highly scalable service-oriented software solutions in the manufacturing environment. It combines three powerful capabilities into one software product: a composite application development framework, a unifying service architecture and a managed execution environment.

The enhancements present in Catalyst Platform™ version 4.0 provide users with significantly increased functionality, flexibility and simplicity. Catalyst Platform™ provides a central repository for revision controlled management of composite applications and their configuration, and managed deployment to the Catalyst environment for execution. Catalyst Bus™ extends the Service Oriented Architecture of Catalyst Platform™ by implementing standardized service patterns for commonly used interactions in the environment. Data within the Catalyst Bus™ is available in real-time via OData services. Catalyst Development Studio™ provides a visually intuitive environment for the assembly of composite applications from highly configurable, prebuilt agents.
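As a concrete illustration of the OData point: an OData query is just an HTTP GET with standard system query options such as `$filter`, `$top` and `$orderby`. The sketch below only composes such a URL; the service root and entity set names are invented for illustration and are not documented Catalyst endpoints.

```python
# Illustrative only: composing an OData query URL from the standard system
# query options ($filter, $top, $orderby). The service root and entity set
# below are invented, not documented Catalyst Bus endpoints.
from urllib.parse import quote

def odata_query(service_root, entity_set, filter_expr=None, top=None, orderby=None):
    """Build an OData GET URL from the common system query options."""
    parts = []
    if filter_expr:
        parts.append("$filter=" + quote(filter_expr, safe="'(),"))
    if top is not None:
        parts.append(f"$top={top}")
    if orderby:
        parts.append("$orderby=" + quote(orderby, safe=""))
    url = f"{service_root.rstrip('/')}/{entity_set}"
    return url + ("?" + "&".join(parts) if parts else "")

# Hypothetical example: the 50 most recent state-change events.
url = odata_query(
    "http://bus.example/odata", "Events",
    filter_expr="EventType eq 'StateChange'",
    top=50,
    orderby="Timestamp desc",
)
```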

Jay Mellen, Savigent’s executive vice president of business development, remarked, “This is a very exciting release with loads of new, differentiating functionality. For example, Catalyst Bus™ dramatically simplifies service implementations and it is OData accessible, so domain experts and IT professionals can access information quickly and easily using a wide variety of applications tailored to meet their needs. And we are continuing to keep our entire product line on the leading edge of Microsoft technologies, with support for Microsoft .NET 4.0, SQL Server 2012, SharePoint 2010, Windows Server 2008 R2 and Windows Azure.”

With this release of Catalyst Platform™, Savigent is rebranding its entire Catalyst suite of software products. Catalyst Workflow™ delivers a controlled system for workflow automation, providing manufacturers with guaranteed compliance, unparalleled traceability and rich manufacturing intelligence. Catalyst Historian™ provides manufacturers with a real-time, context-aware data historian, in addition to a comprehensive data analysis tool.

Founded in 1994, Savigent Software has pioneered a new class of event-driven manufacturing operations management software. The company currently serves manufacturers in a variety of industries including automotive, semiconductor, industrial, specialty chemical, consumer packaged goods, and aerospace and defense. Customers served by Savigent are seeking increased efficiencies, agile control of manufacturing assets, and improved process control and product quality. The company also serves OEMs and independent software vendors by providing value-added software solutions for their products. Its Catalyst™ suite of products provides solutions for workflow automation, manufacturing intelligence and systems integration.

More information about Savigent Software can be found at Read about manufacturing operations management on our Level3 blog at Follow us on Twitter at

Tuesday, July 3, 2012

Workflow Automation provides a “Business Process Historian”

In our interactions with prospective customers, we often see them struggling to answer seemingly simple questions like:
  • What material lot was used in this batch?
  • When did we detect this issue?
  • Why was this decision made?
  • Who postponed the maintenance on the equipment?

In a manufacturing enterprise, information exists in a variety of systems (ERP, MES, LIMS, QMS, WMS, SCADA and IO devices, etc.), each an island of information that needs to interface and communicate with the others as part of the intricate orchestration of process execution. Each island is a System of Record (SOR) – an information system which is the authoritative data source for a given data element or piece of information. ISA-95 states that activities of manufacturing operations management are those activities of a manufacturing facility that coordinate the personnel, equipment, material, and energy in the conversion of raw materials and/or parts into products.

As manufacturing business processes execute, data from various SORs is used and/or changed. Why, then, is it so difficult to answer the questions above when all the required data exists in SORs? The reasons are not that obvious.

First off, many of the standard operating procedures (SOPs) in the manufacturing environment are manual in nature (people driven), and a majority of them are performed on an as-needed basis. Since the SOPs are executed manually, understanding a detailed account of what happened, when it happened, and why it happened typically involves, at best, paper records and, at worst, meetings with people discussing what they recall happening. Unfortunately, most people look at SORs as individual systems that are eventually updated through manual data entry or, in some cases, through automated data flows using a variety of techniques (e.g. file transfers or custom code extensions of the individual SOR). I am not going to address the problem of Band-Aid / spaghetti code integration in this blog posting; it is a topic on its own.

The second problem that prevents people from answering the questions is that they don’t have access to the transient history of process execution that occurs outside of these systems. In other words, these systems don't store the history of information changes needed to understand the lineage of causes and effects behind what happened, when, and why. This information is essential to driving continuous process improvement within the manufacturing enterprise. Applications like process historians, if used to capture this information, typically include only time context and are unable to handle complex and transactional data from SORs like ERP and MES. Likewise, ERP systems typically lack detailed execution information.

As an example, ask yourself how you would answer the following question: what production request was the first to process after maintenance was performed on a specific piece of equipment? In most cases the answer we get is that someone has to know how to query records related to the equipment from the Maintenance Management system and, from the time-stamp of the record, try to find the production request that was processed around that time. Time, again, is the only relevant context available to link information between SORs.

What is missing is one very important element of context of information – its “lineage” or ordered history of information changes. Lineage adds the essential context that relates instances of execution with a timeline of events and activities. Lineage also provides information about the duration of process execution, be it related to equipment, material transformations, system performance, or people’s responsiveness. From a continuous improvement perspective, lineage provides in situ metrology for the performance of SOPs and for all participants in the SOP.

Our preferred solution to the lineage problem is to automate SOPs using Catalyst workflows – because workflow automation provides a “business process historian”. To enable high reliability and fault tolerance, the Catalyst Workflow Engine uses a transactional data store that records all of the execution data related to a workflow as well as all data that is passed in and out of a workflow to/from connected SORs (systems, equipment and people). Workflow automation provides a system that contains a complete lineage and therefore provides visibility across all SORs participating in the execution of SOPs.

Revisiting the example, consider the alternative workflow-based approach for SOPs that will capture data upon:

  • Interaction within any SOR (a request to make product was entered),
  • Presentation of a task to any person (an operator was told to go make product),
  • Communication with any asset (the equipment in the workstation changed state to running and it is now making product).
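A minimal sketch of the "business process historian" idea: if each of the interactions above appends an ordered record, lineage is simply the chronological log filtered by workflow. Field names here are illustrative, not the Catalyst schema.

```python
# Sketch: a workflow execution log whose ordered records constitute lineage.
# Field names are illustrative, not the Catalyst schema.
import time

class ProcessHistorian:
    def __init__(self):
        self.records = []   # ordered history across all workflows

    def log(self, workflow_id, participant, event, data=None):
        self.records.append({
            "seq": len(self.records),    # explicit ordering, not just time
            "ts": time.time(),
            "workflow": workflow_id,
            "participant": participant,  # an SOR, a person, or equipment
            "event": event,
            "data": data or {},
        })

    def lineage(self, workflow_id):
        """The ordered history of one workflow's execution."""
        return [r for r in self.records if r["workflow"] == workflow_id]
```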

The execution history of the workflow(s) provides the lineage required to quickly and easily answer the question posed. But perhaps more important, the complete business process history is known. Because all data related to the execution of workflows is accessible, the information required to manage SOP performance exists. This provides additional insight into questions like:

  • How long did it take for the operator to actually go and make the product? (uncovering unnecessary delays, which are wasted time)
  • How long did it actually take to make the product? (providing true cycle time for the product within the workstation)
  • How much idle equipment time was experienced in the process? (providing true equipment utilization on productive tasks)
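Given such lineage records, metrics like operator response time and true cycle time fall out of simple timestamp arithmetic. The event names and times below are made up for illustration:

```python
# Sketch: deriving the SOP metrics above from a workflow's lineage records.
# Each record is (event_name, timestamp_seconds); names and times are made up.

def duration_between(lineage, start_event, end_event):
    """Elapsed time from the first start_event to the next end_event."""
    start = next(ts for ev, ts in lineage if ev == start_event)
    end = next(ts for ev, ts in lineage if ev == end_event and ts >= start)
    return end - start

lineage = [
    ("task_presented", 100.0),      # operator told to go make product
    ("task_acknowledged", 160.0),   # operator actually starts
    ("equipment_running", 175.0),
    ("equipment_idle", 475.0),
    ("product_complete", 490.0),
]

response_time = duration_between(lineage, "task_presented", "task_acknowledged")   # 60.0 s
cycle_time = duration_between(lineage, "task_acknowledged", "product_complete")    # 330.0 s
```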

None of this information typically exists within an individual SOR, and without workflow automation, we are often left with an incomplete and incomprehensible view of the past. So with workflow automation as a Business Process Historian, not only can we answer the question posed, but we can support continuous improvement efforts with detailed data about the performance of SOPs.

Tuesday, June 12, 2012

Supporting Continuous Improvement Efforts with Incident Management (Part 2 of 2)

This post is a continuation of my prior post “Supporting Continuous Improvement with Incident Management."

I’m working with a customer right now that’s using Incident Management to support their continuous improvement efforts. As an example, one incident that they are particularly interested in is a tool change event – periodically they need to stop manufacturing and re-tool an asset. There’s a downtime hit when the asset is being re-tooled and there’s scrap associated with the event. How long the re-tooling process takes varies; we know when it occurs and we know how long the asset has been in the tool change state – we’ve taken Mike’s first step and armed ourselves with data. What we’re interested in is catching the top 20% of the tool changes that take longer than they should (and represent 80% of the downtime). We want to manage them to completion, quickly, and capture as much intelligence as we can about why the incident occurred and what was done to resolve it.

We’ve defined a “tool change incident” to be a tool change that is taking longer than [an acceptable period of time]. When the incident occurs, the workflow-based system is instantiated and we log all relevant data related to the incident in the system. The appropriate leads on the floor are notified instantly that an incident has occurred and they are prompted to investigate and document the incident, as well as the response that was taken to resolve it. As this isn’t a critical incident and we’re still in “learning mode”, we’re not requiring review and approval of the response (we are skipping the final two steps in the process). Actions required of users that are not complete in an acceptable period of time are escalated and managed appropriately.

As we learn more about the tool change incident we expect that there may be changes to training and tool change procedures. If warranted, a specific workflow can be developed to manage the tool change process and we can instantiate it when a tool change is required, managing the execution of the procedure and error-proofing it. We don’t expect that we will turn off the tool change incident because it provides assurance that 100% of the tool changes that are taking longer than acceptable are being managed properly. We do expect that the average time to complete a tool change and the standard deviation of tool change times will decrease – and the frequency of occurrence of the tool change incident will decrease as a result. Since we can measure this we can quantify the impact that we are having on the operation (in terms of improved availability and actual dollar savings). 
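The "[an acceptable period of time]" test above can be applied mechanically once tool change durations are captured. A minimal sketch, with invented durations and threshold:

```python
# Sketch: flagging tool changes that exceed the acceptable duration and
# quantifying the baseline. Durations (minutes) and threshold are invented.
from statistics import mean

def flag_incidents(durations, acceptable):
    """Return the tool change durations that should open an incident."""
    return [d for d in durations if d > acceptable]

changes = [22, 25, 24, 23, 60, 26, 21, 75, 24, 23]
incidents = flag_incidents(changes, acceptable=40)    # the long tail of changes
share_of_time = sum(incidents) / sum(changes)         # downtime impact of the tail
baseline_avg = mean(changes)                          # baseline to measure improvement against
```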

Providing support for the continuous improvement process with incident management provides significant benefits:
  • We know that every incident is being managed in a consistent manner – 100% guaranteed execution and process compliance.
  • We know that the appropriate people are made aware of incidents and are acting to resolve them on occurrence – we have the right people on task.
  • We are capturing information related to incidents and their resolution in real-time – there’s no after the fact documentation that we all know is error prone and all too often ignored.
  • We have all the information we need to quantify the impact of incidents, their frequency of occurrence by asset, by tool, by process, by product, by operator, etc., and how long they take to resolve – all data is correlated to specific incidents.
  • We have the ability to categorize and Pareto the causes of incidents and have documentation about how they are resolved – we can prioritize continuous improvement efforts.
  • We are providing summary statistics and reporting in and managing user interactions through the corporate SharePoint portal – everyone has visibility.
  • We have made all relevant data openly accessible via SharePoint and OData services and it is stored in a back-office / IT friendly, managed system – it’s a highly scalable and manageable solution.

I’ll be writing more about incident management in upcoming posts, and about how we are using workflow automation to support continuous improvement, lean and six sigma initiatives. If you are interested in learning more about Incident Management, don’t hesitate to contact us; we’d be more than happy to provide additional information and a demonstration of the system.

Thursday, June 7, 2012

Supporting Continuous Improvement Efforts with Incident Management (Part 1 of 2)

In our last post on Level3, “Ghosts, Goblins, and Monsters – The Power (and Danger) of Visibility”, Mike Stuedemann included a fantastic list of actions that you can take to maintain and build the organizational courage to follow through on improvement efforts. I’m particularly fond of the (adapted) Voltaire quote “perfection is the enemy of done” and his recommendation to start simply and iterate often. The continuous improvement process works; anyone telling you differently is selling something. 

But there’s a situation I run into frequently, more often than you would expect, when talking to customers and prospective customers. They have a known list of “problems” impacting their manufacturing operations – sources of waste and inefficiency (scrap events, unplanned tool downtime, frequent setups, etc.) – but they don’t have enough information about them to quantify their impact and prioritize them, let alone address any underlying root causes. Recognizing and acknowledging this is a good thing, because there’s a lot you can do to support the continuous improvement process.

One thing we advocate is having a rigorous business process in place to manage how you respond when problems occur – we call it Incident Management. It’s a workflow-based system: a tightly defined collection of configurable workflows that work in concert to manage incidents through a standard process – Creation, Notification, Investigation, Resolution, Review and Approval. Incidents can be configured to run through the full process, or a subset, depending on the level of response required. User activities are managed through a SharePoint portal and directed to participants based on roles. Because it is a collection of workflows, it’s easily reconfigured to meet unique customer needs.

In a couple of days I’ll publish a follow-on to this post. I have a customer example that will help describe the functionality and benefits of Incident Management, so stay tuned.

Tuesday, May 22, 2012

Ghosts, Goblins, and Monsters – The Power (and Danger) of Visibility

As a father of three young boys, I am consistently bombarded by requests. “Can we play outside?”, “Can we go to Grandpa and Grandma’s?” and “Can we buy a toy?” are familiar questions for my wife and me.

One popular request recently has been, “Can we watch Scooby Doo?” For those of you unfamiliar with Scooby Doo – it’s a cartoon that features a group of teenagers and their dog who investigate paranormal occurrences. Recently, our 3 year-old asked to watch an episode of Scooby Doo. With its ghosts, goblins and monsters, my wife and I were initially reluctant to let him watch an episode fearing nightmares. However, after much lobbying from him and his older brothers, we relented. Needless to say, he only ended up watching about 2 minutes before his eyes were shut tight due to fear.

Undoubtedly, some of you must be wondering, what does this have to do with Manufacturing Operations Management (MOM) and the issue of visibility? It turns out that many who undertake a process improvement effort in the MOM space want visibility into the goblins that lurk within their operation. Whether they’re the ghosts of process improvement efforts past or the “waste” monster, project sponsors and team members enter a process improvement project ready to take on whatever is uncovered.

However, an interesting phenomenon occurs once they see these “villains”. Often, much like my 3-year-old son, they shut their eyes. Statements like “we can’t change that process step” or “the data must be wrong” become commonplace as the team and, ultimately, the broader organization loses the courage to act upon the trends that have been exposed by the project.

There are some actions we’d recommend taking to maintain and build the organizational courage to follow through on improvement efforts:

• Arm yourself with Data. Provide open access to data and use case studies to demonstrate its accuracy (read a few of ours here). This will help build confidence and consensus within the organization that decisions are being made on fundamentally good data. You will also likely find that providing easy access to good data promotes more data-driven decision making.

• Know your Nemesis. Use good analytical techniques to rigorously quantify the “villains” – the events that define poor performance, waste, poor quality, yield loss, etc. – then publish the definitions and make them common knowledge. You’ll likely need to capture and analyze both time-series process data and event-based context data. Be sure to check out the capabilities of Catalyst PDC, our historian, and watch for more on Level3 regarding Event Detection and Complex Event Processing.

• Plan your Attack. Define the right response (a business process) that you will take when an event occurs. Start simply, iterate often and know that your business process will evolve as you learn more about your adversary. Remember, perfection is the enemy of done!

• Take Action. Ensure you are diligently executing every time an event occurs – automating business processes with workflows is a great way to ensure 100% compliance and guaranteed execution. If you want to learn more about workflow automation, read this great case study.

• Promote your Wins. Many people overlook the importance of promoting successes; don’t be one of them. Make sure you promote the actions being taken with data regarding savings, frequency of occurrence, duration, etc.

Taking these steps will help ensure that you have the courage to keep your eyes open and take the appropriate action no matter how scary the monster.

Wednesday, May 16, 2012

Merovingian or a simple explanation of the Reactive Agents Concept

As I was working on my first topic for this blog – “Reactive Agents” – I found myself suffering from a big, fat case of writer’s block.

I have been working with Reactive Agents for over 15 years; they form the foundation of our Manufacturing Operations Management platform. When we talk to our customers about them we often use an analogy – they’re like functional “Lego Blocks” that you can use to build manufacturing systems – but we rarely talk about what “Reactive Agents” really are and why they are the foundation for our technology. Believe me, there are a lot of reasons – developer productivity, scalability, flexibility, reuse, etc.; the list goes on and on.

But I was stuck, quickly realizing that the topic requires a good introduction for novice readers, something that won’t scare people away with overly complex theory and terminology. I needed inspiration, something better than the quote from the first published article about Reactive Agents: “In order to build a system that is intelligent, it is necessary to have representations grounded in the physical world, thus obviating the need for symbolic representations or models because the world becomes its own best model.” (Brooks, 1991).

I turned my attention to the TV and, unexpectedly, in about 4 to 5 clicks I got my inspiration – “Matrix Reloaded” was on, right about the time that Neo and friends walked into a club looking for the “Key Maker” (here is the clip if you need to refresh your memory).

The Merovingian talked about “cause and effect” – the philosophical concept of causality, in which an action or event will produce a certain response to the action in the form of another event. I have seen this scene numerous times before, never realizing how it is related to the products and technologies I have been working on for years.

You can’t model the world, but with reactive agents, you don’t have to. In Brooks’ words: “this hypothesis obviates the need for symbolic representations or models because the world becomes its own best model. Furthermore, this model is always kept up-to-date since the system is connected to the world via sensors and/or actuators. Hence, the reactive agents hypothesis may be stated as follows: smart systems can be developed from simple agents which do not have internal symbolic models, and whose 'smartness' derives from the emergent behavior of the interactions of the various agents.”
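
A minimal sketch of the idea (the agent and event names here are invented for illustration, not Savigent’s implementation): each agent carries no internal model of the world; it simply reacts to events it observes and may emit new ones, and the system’s behavior emerges from those interactions.

```python
from collections import defaultdict

class EventBus:
    """Routes events to the agents that have subscribed to them."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, event_type, agent):
        self._subscribers[event_type].append(agent)

    def publish(self, event_type, payload):
        for agent in self._subscribers[event_type]:
            agent.react(event_type, payload, self)

class ThresholdAgent:
    """Reacts to raw readings; emits an alarm event when a limit is exceeded."""
    def __init__(self, limit):
        self.limit = limit

    def react(self, event_type, payload, bus):
        if payload["value"] > self.limit:
            bus.publish("alarm", {"sensor": payload["sensor"]})

class LoggerAgent:
    """Reacts to alarms by recording which sensor raised them."""
    def __init__(self):
        self.log = []

    def react(self, event_type, payload, bus):
        self.log.append(payload["sensor"])

bus = EventBus()
logger = LoggerAgent()
bus.subscribe("reading", ThresholdAgent(limit=100.0))
bus.subscribe("alarm", logger)

bus.publish("reading", {"sensor": "T-101", "value": 98.6})   # in range, ignored
bus.publish("reading", {"sensor": "T-101", "value": 104.2})  # triggers an alarm
print(logger.log)  # ['T-101']
```

Neither agent knows the other exists; cause (a reading) produces effect (an alarm, then a log entry) purely through the events that connect them.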

Cause and effect…

Stay tuned for more on Reactive Agents and how Savigent is applying this technology in manufacturing operation management systems. Upcoming posts will cover base concepts as well as the software side of the Agent-based platform.

Wednesday, May 9, 2012

What Good is Your Data if You Can’t Find it?

Modern data historians help solve many issues for process engineers and managers. Data historians provide access to past values of process data, enabling advanced trending and accurate root cause analysis. Process data comes in two main forms:
  • Analog values, in which data is represented by a continuous variable (temperature, pressure, level, weight, etc.) 
  • Discrete values, in which data comes from a finite set of values, such as state information (on/off, opened/closed, high/low) 

Under the hood of a typical historian, the value for each data variable is stored by time. A routine investigation for a process engineer is answering the age-old question:
“There were reports that we were out of quality on machine X last week, what happened?”
The process engineer can query the historian, based upon a time range, to retrieve process data such as temperature or current draw.

At first glance, this time-based retrieval method seems sufficient. But is it? Upon closer look, several questions arise: what was the machine state (was it running during the time queried)? What was the acceptable range for the process variable for the part being manufactured (by recipe, SKU or lot)? Without context, finding the appropriate data can be difficult, time consuming and inefficient. If the information cannot be found easily, its value is reduced.

When implementing a historian, take the time to consider the context variables that might assist in future queries; some examples are:
  • Machine State (Running, Idle, and Down) 
  • Lot or BatchID 
  • Recipe Class and/or Step 
  • Operator 
  • Shift 
By storing context variables alongside process variables, answers can be found with a couple of queries that refine the results and transform the data into information. In addition, similar lots can be compared to determine differences.
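
As a sketch of that approach (the tag names, schema and values here are invented), filtering first by time and then by context turns a broad time-range pull into a targeted answer:

```python
from datetime import datetime

# Each record pairs a process value with the context captured at the same time.
records = [
    {"time": datetime(2012, 5, 1, 8, 5), "temperature": 72.4,
     "machine_state": "Running", "lot": "L-1001", "recipe": "R-7"},
    {"time": datetime(2012, 5, 1, 8, 10), "temperature": 68.1,
     "machine_state": "Idle", "lot": "L-1001", "recipe": "R-7"},
    {"time": datetime(2012, 5, 1, 9, 0), "temperature": 75.9,
     "machine_state": "Running", "lot": "L-1002", "recipe": "R-7"},
]

def query(records, start, end, **context):
    """Filter by time range first, then by any context variables supplied."""
    hits = [r for r in records if start <= r["time"] <= end]
    for key, value in context.items():
        hits = [r for r in hits if r[key] == value]
    return hits

# "What was the temperature while the machine was Running during lot L-1001?"
result = query(records,
               datetime(2012, 5, 1, 8, 0), datetime(2012, 5, 1, 10, 0),
               machine_state="Running", lot="L-1001")
print([r["temperature"] for r in result])  # [72.4]
```

A plain time-range query would have returned all three samples; the context variables cut straight to the one that answers the question.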

Savigent’s Historian solution goes one step further by simplifying retrieval to a single query that leverages both context and time.

Thursday, May 3, 2012

OEE won’t provide you ROI

I debated using the title “Why I hate OEE” but I thought that was a little extreme, albeit catchy. And I wouldn’t go so far as to say that I hate OEE. Rather, I view initiatives solely focused on calculating the metric as investing in an expensive rear-view mirror. You need one, don’t get me wrong. But the metric and the data required to calculate it will only tell you where you’ve been. If implemented properly, you will know when you have had a problem, but it won’t change your behavior, your business processes or support your continuous improvement efforts - all of which can serve to increase your return on invested capital.

My friend Matt Littlefield at LNS Research (formerly an analyst at Aberdeen) wrote a good summary of OEE (Part 1, Part 2) in his blog. If you are looking for some good foundational information on OEE it’s well worth a read. He advocates measuring OEE, rightly so. From a comparative perspective it’s helpful (between equipment, within a plant, between plants and potentially between companies). What’s not written, and what Matt will be quick to point out, is that it’s the action you take based on OEE-related events that provides ROI.

Therein lies the rub: the devil is in the “events”, and the “events” are what provide ROI. Take the availability component of OEE for example – the ratio of asset uptime to asset scheduled production time. What’s more important to you, knowing that you are using 85% of scheduled production time, or knowing when an asset transitioned into an unproductive state (the event)? Would you rather have a detailed account of the 85% of available time you used, or of the 15% that’s being wasted? More importantly, what are you doing about it? What actions are you taking when an asset transitions from a productive to unproductive state? Taking action on exception events differentiates a manufacturer from its peers and provides returns that exceed peer performance – the mark of an exceptional manufacturer.
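
To make the distinction concrete (all numbers here are invented for illustration), the availability metric and the event log behind it answer very different questions:

```python
# One shift of scheduled production time, in minutes.
scheduled_minutes = 480

# Hypothetical downtime events captured as state transitions.
downtime_events = [
    {"reason": "tool change", "minutes": 35},
    {"reason": "material starvation", "minutes": 22},
    {"reason": "unplanned jam", "minutes": 15},
]

# The rear-view mirror: availability = uptime / scheduled time.
downtime = sum(e["minutes"] for e in downtime_events)
availability = (scheduled_minutes - downtime) / scheduled_minutes
print(f"availability: {availability:.1%}")  # 85.0%

# The actionable view: where the other 15% actually went, largest first.
for e in sorted(downtime_events, key=lambda e: e["minutes"], reverse=True):
    print(f"{e['reason']}: {e['minutes']} min")
```

The single 85% figure hides exactly the breakdown the second loop prints – and it’s that breakdown that tells you which event to attack first.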

We advocate a unified approach to modeling asset states and state transitions, with sufficient detail to support continuous improvement initiatives. The same holds for the quality component of OEE – have a unified approach with regards to quality events (yield loss). There’s a special emphasis here at Savigent on events, state transitions and quality events, because we want our customers to realize high ROIs on their implementations. Knowing when events occur allows our customers to implement workflows that guide the actions they take every time an asset transitions from a productive to unproductive state, or a yield loss event occurs. Since events are monitored in real time and responses are implemented as workflows there’s guaranteed execution with 100% process compliance. Our customers know when bad things happen, and they know they are responding to them as they have defined each and every time.

Of course, when you take this approach you’ll have all the data you need to calculate OEE. But you will also capture a tremendous amount of information about the events that are causing inefficiency – their frequency of occurrence, duration, impact and any related data you need to support continuous improvement efforts.

Tuesday, April 24, 2012

Understanding Workflow Automation: A Foundation

One of the topics that you will read about frequently in upcoming blog postings is Workflow Automation. It’s a topic all of us here at Savigent are enthusiastic about, because it’s a big part of what we do with our customers. And the feedback we get is consistently positive, because Workflow Automation based solutions pay for themselves – providing a significant return on investment. This posting provides a simple foundation for the concept, particularly as it relates to Workflow Automation within Savigent’s Catalyst suite of products.

When we talk about Workflow Automation, what we are specifically referring to is a controlled system in which business processes, standard work, operating procedures, etc. can be implemented. It is Business Process Management (BPM) specifically designed to be applied within the manufacturing environment. It allows companies to migrate paper-based processes and procedures to a controlled electronic system for execution.

Workflows are comprised of events (that instantiate or influence the execution of workflows), automated actions (that interact with systems and/or equipment), user actions (that interact with people) and logical elements (that direct the execution of the workflow). They can be implemented individually to meet a focused need, or architected into systems to meet higher level functional needs. Any software functionality can be implemented as a system of workflows.
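
A toy sketch of those four building blocks (a simplified invented API, not Catalyst itself): an event instantiates the workflow, automated and user actions do the work, and a logical element directs the path taken.

```python
def automated_action(incident):
    """Automated action: interact with a system, e.g. stop the asset."""
    incident["asset_stopped"] = True
    return incident

def user_action(incident, operator_input):
    """User action: a person records the observed cause."""
    incident["cause"] = operator_input
    return incident

def run_workflow(event, operator_input):
    """Logical elements direct which path the workflow takes."""
    incident = {"event": event["type"], "asset": event["asset"]}
    incident = automated_action(incident)
    # Logical element: only escalate long stoppages for human review.
    if event["duration_min"] > 30:
        incident = user_action(incident, operator_input)
        incident["status"] = "pending review"
    else:
        incident["status"] = "auto-closed"
    return incident

# An event instantiates the workflow.
event = {"type": "unplanned_downtime", "asset": "Press-3", "duration_min": 45}
print(run_workflow(event, operator_input="hydraulic leak"))
```

Even at this scale the benefits below are visible: every event takes a defined path, and every step leaves a record of what happened and who did it.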

The manufacturing environment is a complex and diverse collection of systems, equipment and people operating in concert, unified by process. Processes are developed to handle standard activities that define how we manufacture goods for sale. They are also developed to handle exception events that define how we respond to situations that should not happen but all too often do. Whatever the focus, manufacturers invest time, effort and energy in defining and continuously improving those processes that most significantly influence their ability to execute profitably. Workflow Automation is the logical next step in the evolution of processes in the manufacturing environment and it provides a wealth of benefits:

Guaranteed Execution and Compliance - Implementing a process as a workflow guarantees execution and compliance – that is to say, the desired response to an event will be taken every time.

Traceability and Process Genealogy - Because workflows operate in a controlled electronic system, they are highly traceable and provide a wealth of execution related information – what happened, how long it took, who did it, when they did it, etc.

Manufacturing Intelligence – All data that touches a workflow through its execution is stored in Catalyst for posterity. This related data, when combined with the execution history of the workflows, provides a tremendous source of manufacturing intelligence all correlated around the most important factor to our customer – their business processes.

Operational Efficiency – Workflow execution is managed by a controlled electronic system. Interactions with it, by systems, equipment and people, are as a result orchestrated. We see customers realize operational benefits simply by implementing workflows from existing paper-based operations without change. Even greater benefits are realized when efforts are made to Pareto execution paths, optimize execution, and refine workflows and behaviors.

Wednesday, April 11, 2012

Savigent Announces New “Level3” Manufacturing Operations Management Blog

MINNETONKA, Minn. – Savigent Software, Inc., the Minnetonka-based company specializing in event-driven manufacturing operations management software, announced today the release of a new manufacturing operations management blog titled “Level3”. Level3 derives its name from the ANSI / ISA-95 Standard, which is an international standard for developing an automated interface between manufacturing operations and enterprise functions.

The Level3 blog is a venue to discuss a variety of topics relevant to manufacturers, including: Workflow Automation, Manufacturing Operations Management (MOM), Manufacturing Execution Systems (MES), and Systems Integration. Content will cover the integration of data from the ‘shop floor’ up; including Historians, reporting, IT Topics and SCADA systems. The Level3 blog comes to Savigent with its creator Jimmy Asher, formerly of Avid Solutions, Inc.

Jimmy Asher, an architect and engagement manager with Savigent, commented, “We are excited to announce the continuation of the Level3 blog. It is an exceptional venue for Savigent to share thought provoking insights and perspectives on topics that are relevant to manufacturers.” The Level3 blog can be found at, or via direct links from the Savigent website

Founded in 1994, Savigent Software has pioneered a new class of event-driven manufacturing operations management software. The company currently serves manufacturers in a variety of industries including automotive, semiconductor, industrial, specialty chemical, consumer packaged goods, and aerospace and defense. Customers served by Savigent are seeking increased efficiencies, agile control of manufacturing assets, and improved process control and product quality. The company also serves OEMs and independent software vendors by providing value-added software solutions for their products. Its Catalyst™ suite of products provides solutions for workflow automation, manufacturing intelligence and systems integration.

More information about Savigent Software can be found at Follow us on Twitter at

Tuesday, April 10, 2012


So you might have noticed the lack of postings in the past couple of months…  Simply put, I have decided to take an opportunity outside Avid Solutions. With that being said, “Level3” will continue, but will have a new home (along with me). I would like to thank Avid Solutions and all my peers for their support and assistance with the blog over the past year. As a matter of fact, there are two other blogs created by others at Avid that I would encourage you to read:
The ArchestrAnaut – a blog dedicated to all issues of the Wonderware Archestra System Platform
Automate-ZEN – a blog created for the automation professional with aspirations to simplify their career by adding value and humor to their life.

Of course, you can find our new home for the Level3 blog at  - please remember to update your feeds!