BPM Research - How Do You Identify The Operational Processes

In the early part of next quarter, I am entering a research phase on a topic I have alluded to many times: techniques for Process Architecture.

One of the key problems that BPM initiatives suffer from is that, even with all the attention, we end up with processes that still have significant issues — they are too inflexible and difficult to change. They become just another version of concrete poured in and around how people work — focusing on control rather than enabling and empowering.

A phrase that I picked up (from a business architect) put it fairly succinctly:

“People tend to work hard to improve what they have, rather than what they need.”

This was then further reinforced by a process architect in the government sector, in an email:

“The wall I keep hitting is how to think about breaking processes into bite-size chunks that can be automated.”

The problem is that we don’t have good techniques to design (derive) the right operational process architecture from the desired business vision (business capability). Of course, there is an assumption here that there is an effective business vision, but that’s a subject for another line of research.

I am talking about the operational chunks — the pieces of the jigsaw puzzle required to deliver a given outcome. Not how the puzzle pieces are modeled (BPMN, EPC, IDEF, or any other modeling technique), but how to chop up the scope of a business capability to end up with the right operational parts.

If they even recognize the problem upfront, what normally happens is that folks apply functional decomposition to what they currently think of as their “processes,” which often ties the operational activities more closely to the current org chart. Rather than breaking down the silos, this approach tends to reinforce the existing structures (Stafford Beer once described the org chart as a “mechanism for apportioning blame,” which I think is about accurate). The resulting process implementation, while it may have been automated with a BPM suite, merely speeds up the existing processes, complete with all their arcane exception handling and workarounds.

Putting it another way, you can often end up in a bigger mess, faster! There was no attempt to simplify — merely to automate the cow paths.

Now I have a couple of techniques in mind for further assessment, but I am interested in interviewing anyone who has been involved in major process initiatives where any of the following conditions were true:

  • The process that was under investigation turned out to be a series of processes.
  • The solution followed a dynamic case management approach — where the implementation was composed of a number of processes.
  • The implemented processes changed significantly throughout the project.
  • BPM implementations where the process structure changed significantly after the initial implementation.
  • Situations where there is a dynamic relationship between processes — i.e., where one process instantiates, triggers, or chains to others.

I would stress, this research is technology-neutral — the techniques I am looking to identify are at an abstraction level higher than the technological implementation. It shouldn’t matter what technology is used to implement. Once you have an environment (BPMS) where one process can trigger another and pass it some context, i.e., just about every BPMS, then you have the basis for inter-process communication. After that it all comes down to how you design the processes. 
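To make the "one process triggers another and passes it some context" idea concrete, here is a minimal, technology-neutral sketch in Python. All names (the process registry, `trigger`, the order processes) are hypothetical illustrations, not the API of any real BPMS; the point is only that once chaining-with-context exists, the design question becomes how the work is chopped into processes.

```python
# Illustrative sketch only: hypothetical names, not a real BPMS API.
# One "process" finishes by triggering another and handing over its
# context -- the basic inter-process communication described above.

from typing import Any, Callable, Dict

# Registry mapping process names to their implementations.
PROCESSES: Dict[str, Callable[[Dict[str, Any]], None]] = {}

def process(name: str):
    """Register a function as a named process definition."""
    def register(fn):
        PROCESSES[name] = fn
        return fn
    return register

def trigger(name: str, context: Dict[str, Any]) -> None:
    """Instantiate the named process, passing it the caller's context."""
    PROCESSES[name](context)

@process("capture-order")
def capture_order(ctx: Dict[str, Any]) -> None:
    ctx["status"] = "captured"
    trigger("fulfil-order", ctx)  # chain to the next operational chunk

@process("fulfil-order")
def fulfil_order(ctx: Dict[str, Any]) -> None:
    ctx["status"] = "fulfilled"

order = {"id": 42}
trigger("capture-order", order)
print(order["status"])  # fulfilled
```

Whether "capture" and "fulfil" are the right operational chunks, rather than one monolithic order process, is exactly the architecture question the research is after; the plumbing itself is trivial.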

I would be happy to share the results with participants who contribute to the research. Interested parties should email me directly at dmiers@forrester.com.


Chunks and Boundaries

The problem with boundaries is that we just make them up. We first "chunked" the work with an org chart or hierarchy (the military model). Then we "chunked" the work horizontally by the production line or process. We also try to "chunk" by markets, or by regions. The challenge is that when you approach work this way, you inevitably slice away something that is critical for the "chunk" to work in the first place.

The problem with chunking is that it takes an outside-in approach: someone standing "outside" tries to determine the boundaries of the work, when the very messy reality is that there really are no boundaries.

So what to do? What we really need is a way for people to describe their own neighborhood so that they can intelligently connect with their neighbors and negotiate getting the work done at the local level. But they can only work effectively locally if they also have a global or "big picture" understanding.

So regardless of how the work is sliced and diced, the real key is global transparency into the activity, combined with local control. However the work happens, there needs to be a "live," completely transparent map or model that allows everyone in the system to understand how everyone else's work fits with their own. Further, real-time change needs to be visible. Holy grail? Maybe, but I look forward to seeing what you turn up in your exploration. I suspect there are a lot of pieces that are starting to come together. The shift we are in toward collaborative activity management is every bit as profound as our shift from functions to processes.

Verna Allee @vernaallee