Resources

Students in our PI concentration are provided access to a number of videos created by Dr. Kramer. We provide some here to support our alumni, who lose access to our classroom repositories upon graduation. Most videos are closed-captioned.

Exploratory Data Analysis

Exploratory Data Analysis (EDA) is a structured approach to process analysis. Call it a cookbook: it lays out what needs to be done and in what order.

Part I: Setup and Link to Requirements

Always start with your customer. Whoever (or whatever) receives the process output defines the outcomes the process must deliver. Make sure you know how the process adds value to the firm.

Part II: Histograms

Look at your data. Characterize its performance using common metrics like the mean and standard deviation. Note the shape of the distribution and whether the mean or the median is the more appropriate measure of central tendency.
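
As a quick illustration (not from the videos), here is a minimal sketch of these metrics on hypothetical cycle-time data:

```python
import statistics

# Hypothetical cycle-time readings (minutes) -- illustrative data only
data = [4.2, 4.8, 5.1, 4.9, 5.3, 4.7, 5.0, 9.6, 4.6, 5.2]

mean = statistics.mean(data)
median = statistics.median(data)
stdev = statistics.stdev(data)  # sample standard deviation

print(f"mean={mean:.2f}  median={median:.2f}  stdev={stdev:.2f}")
# The outlier (9.6) pulls the mean above the median -- a skewed shape,
# so the median is the better measure of central tendency here.
```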

Part III: Capability

Compare the current process output to the needs of the customer. This ratio (customer needs to current performance) indicates how capably the process is performing. We use process capability ratios such as Cp and Cpk here.
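
For reference, the standard capability ratios can be sketched as follows; the specification limits and process statistics below are hypothetical:

```python
def cp(usl: float, lsl: float, sigma: float) -> float:
    """Process potential: spec width over six process standard deviations."""
    return (usl - lsl) / (6 * sigma)

def cpk(usl: float, lsl: float, mu: float, sigma: float) -> float:
    """Process capability, penalized for off-center operation."""
    return min((usl - mu) / (3 * sigma), (mu - lsl) / (3 * sigma))

# Hypothetical specs and process performance -- illustrative numbers only
print(cp(usl=10.0, lsl=4.0, sigma=1.0))           # 1.0
print(cpk(usl=10.0, lsl=4.0, mu=8.0, sigma=1.0))  # ~0.67: off-center process
```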

Part IV: Time Series

It is important to assess the process performance over time. We want to make sure the process is “stable,” meaning it is not departing from a predictable pattern; otherwise it may be in transition and not appropriate to assess. Here we employ tools such as statistical process control (SPC).
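
To make the idea concrete, here is a minimal sketch (not from the videos) of one common stability check, an individuals (I) chart with 3-sigma limits; the readings are hypothetical:

```python
def i_chart_limits(series: list[float]) -> tuple[float, float]:
    """Individuals-chart (I-chart) limits. Sigma is estimated from the
    average moving range (MR-bar / d2, with d2 = 1.128 for spans of 2)."""
    moving_ranges = [abs(b - a) for a, b in zip(series, series[1:])]
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    center = sum(series) / len(series)
    sigma_hat = mr_bar / 1.128
    return center - 3 * sigma_hat, center + 3 * sigma_hat

# Hypothetical hourly readings -- illustrative data only
readings = [5.0, 5.1, 4.9, 5.2, 5.0, 4.8, 5.1, 5.0, 7.9, 5.1]
lcl, ucl = i_chart_limits(readings)
unstable = [x for x in readings if x < lcl or x > ucl]
print(unstable)  # [7.9] -- a point beyond the limits suggests instability
```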

Part V: Relationships

We search for relationships between variables. Specifically, we search for input levels that lead to consistent output levels. We use correlation to measure the strength of a relationship and then leverage regression to assess what fraction of the output variation is explained by that relationship.
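
For example, here is a minimal sketch (with made-up data) of computing the correlation r and the R-squared that simple regression reports:

```python
def pearson_r(x: list[float], y: list[float]) -> float:
    """Pearson correlation coefficient between paired samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical input settings vs. output readings -- illustrative data only
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 3.9, 6.2, 7.8, 10.1]

r = pearson_r(x, y)
print(f"r = {r:.3f}, R^2 = {r * r:.3f}")  # both near 1: strong linear link
# In simple linear regression, R^2 is the fraction of output variation
# explained by the input.
```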

Part VI: Strata

Once the overall (aggregate) performance and relationships are determined, we search for other causes of output performance by “peeling the onion” and checking for performance differences by group. Groups may be locations, material types, days of the week, or any other factor.
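
A minimal sketch of stratifying output by a grouping factor, using hypothetical plant data:

```python
from collections import defaultdict

# Hypothetical (group, output) pairs -- illustrative data only
observations = [("Plant A", 5.1), ("Plant B", 7.8), ("Plant A", 4.9),
                ("Plant B", 8.2), ("Plant A", 5.0), ("Plant B", 7.9)]

groups: dict[str, list[float]] = defaultdict(list)
for factor, value in observations:
    groups[factor].append(value)

for factor, values in groups.items():
    print(factor, sum(values) / len(values))
# A clear gap between group means is a stratum worth drilling into.
```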

Part VII: Summary

Applying Exploratory Data Analysis (EDA) systematically ensures that no part of the analysis is performed before it is appropriate. For example, if a process is in transition and not exhibiting predictability (process control), then its output distribution is heavily influenced by when the readings were taken. A central tendency cannot be established until the process is in a stable state.

Quality Function Deployment

Quality function deployment (QFD) is quite literally the deployment of quality. It is a strategic process planning and analysis tool that maps what is needed to how you plan to accomplish it. It can be applied to plan how you will deploy quality, and it can be applied to assess how well you are deploying quality.

Process Flows

Process flows are building blocks for process improvement. They provide a visual depiction of operations and form the basis for further analysis. Below, process flows are described first, followed by their assessment through process capability analysis. We then explain common process metrics; there are quite a few. We include a downloadable file for you to explore on your own.

Part I: Process Flows

Part II: Process Capability

Measuring Processes: DPO, DPU, RTY, and FTY
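
These yield metrics are simple ratios; here is a minimal sketch with hypothetical counts (the function names are ours, not from the video):

```python
import math

def dpu(defects: int, units: int) -> float:
    """Defects per unit."""
    return defects / units

def dpo(defects: int, units: int, opportunities_per_unit: int) -> float:
    """Defects per opportunity."""
    return defects / (units * opportunities_per_unit)

def rty(step_ftys: list[float]) -> float:
    """Rolled throughput yield: the chance a unit passes every step
    defect-free, i.e., the product of the step first-time yields (FTY)."""
    return math.prod(step_ftys)

# Hypothetical three-step process -- illustrative numbers only
print(dpu(defects=12, units=400))                            # 0.03
print(dpo(defects=12, units=400, opportunities_per_unit=5))  # 0.006
print(rty([0.95, 0.98, 0.90]))                               # ~0.838
```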

Statistical Process Control

Statistical process control (SPC) is a bit of a misnomer: we are not actually “controlling” our processes, rather, we are assessing whether they seem to be acting predictably.  It may be predictably good or predictably bad.  SPC leverages statistics to “know” when the process seems to be changing.  It may be changing in its central tendency (mean) and/or changing in the amount of output variation (standard deviation).
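
As a hedged illustration of monitoring both the mean and the variation, here is a sketch of X-bar and S chart limits; the subgroup data are hypothetical, and the constants come from standard SPC tables for subgroups of five:

```python
import statistics

# Control chart constants for subgroups of size 5 (standard SPC tables)
A3, B3, B4 = 1.427, 0.0, 2.089

def xbar_s_limits(subgroups: list[list[float]]):
    """X-bar chart tracks the mean; the S chart tracks the variation."""
    xbars = [statistics.mean(g) for g in subgroups]
    stdevs = [statistics.stdev(g) for g in subgroups]
    x_dbar = statistics.mean(xbars)  # grand mean (center line)
    s_bar = statistics.mean(stdevs)  # average subgroup std deviation
    xbar_limits = (x_dbar - A3 * s_bar, x_dbar + A3 * s_bar)
    s_limits = (B3 * s_bar, B4 * s_bar)
    return xbar_limits, s_limits

# Hypothetical subgroups of five readings each -- illustrative data only
subgroups = [[5.0, 5.1, 4.9, 5.2, 5.0],
             [4.8, 5.1, 5.0, 4.9, 5.2],
             [5.1, 5.0, 4.9, 5.0, 5.1]]
print(xbar_s_limits(subgroups))
```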

Value Stream Mapping

Value streams take process flows to the next level. They enable you to focus not only on what you do but also on what you don’t do, facilitating improvement through simplification and waste removal. We explain all the features of value stream maps and provide an example.

Introduction to Value Stream Mapping

What are value streams? Think of process flows on steroids.

Part I: Process and Information Flows

The value stream map begins with the process flow. We make sure to include all delays BETWEEN steps (a detail not normally depicted in traditional process flows) and then add the information flow to and from suppliers and customers.

Part II: Adding Data Boxes and Rework Cycles

Now we add process step details including processing time, delay time, and process yield.  We also reflect the yield with rework cycles.

Part III: Adding Timing Chain and Summary

We then add the delay times and processing times below the data boxes to clearly indicate value-added versus non-value-added time. Once done, we summarize to find the lead time and calculate the percent value-added time.
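
The arithmetic is a simple ratio; here is a minimal sketch with hypothetical times:

```python
# Hypothetical timing chain -- illustrative numbers only (minutes)
processing_times = [5, 12, 8]    # value-added time at each step
delay_times = [120, 480, 240]    # waiting between steps

lead_time = sum(processing_times) + sum(delay_times)
pct_value_added = 100 * sum(processing_times) / lead_time

print(f"lead time = {lead_time} min, value-added = {pct_value_added:.1f}%")
# 25 / 865 -> roughly 2.9%: typical value streams are mostly waiting.
```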

Part IV: Summary Box and Summarizing Value Stream Mapping

We complete the value stream map by adding a summary box containing all key performance details.

Queuing Theory

Queuing theory is the study of waiting lines.  Their management is hugely important to operations.  Unfortunately, waiting line performance is NOT intuitive: an operation working at 25% utilization does not perform twice as fast as one with 50% utilization.  Below is a description of basic queuing theory and the application of management tools that work.

Part I: Introduction and Arrivals

Waiting lines are simple constructs: an operation (process) “serves” customers that “arrive.”  Customers care about the sojourn time (time in the system) and specifically the time they must wait before service.  Here we focus on the arrivals.
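
In basic models, arrivals are often assumed to follow a Poisson process, i.e., exponential gaps between arrivals; here is a minimal sketch with a hypothetical arrival rate:

```python
import random

def interarrival_times(rate_per_hour: float, n: int, seed: int = 1):
    """Sample n exponential interarrival times (in hours), the standard
    Poisson-arrivals assumption in basic queuing models."""
    rng = random.Random(seed)
    return [rng.expovariate(rate_per_hour) for _ in range(n)]

# Hypothetical arrival rate of 6 customers per hour -- illustrative only
gaps = interarrival_times(rate_per_hour=6.0, n=5)
print([round(g * 60, 1) for g in gaps])  # gaps in minutes; long-run mean is 10
```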

Part II: Service Systems

The service provided to customers arriving at an operation can be characterized by the number of servers as well as the waiting line priority scheme.  We are most familiar with FCFS – first come, first served.  Here we focus on the service system.

Part III: Arrival and Priority Considerations

Beyond FCFS, there are other priority schemes we experience, too, such as triage at an emergency room, which admits (serves) the most critical customers first. Here we focus on the priority considerations.

Part IV: Single Server Inputs

The single server is a very common situation: think of accessing an ATM or a single order-taker at a coffee shop. Here we explore the inputs to a single-server operation.

Part V: Single Server Outputs

The single server is a very common situation: think of accessing an ATM or a single order-taker at a coffee shop. Here we explore the common performance measures for a single-server system.

Part VI: Single Server Queuing Formulas

The single server is a situation that lends itself to simple mathematics for performance calculations.  Here we explore the single-server queuing formulas.
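
Here is a minimal sketch of the classic M/M/1 formulas (assuming Poisson arrivals, exponential service, and one server, which is the standard textbook single-server model; the videos may use different notation):

```python
def mm1_metrics(lam: float, mu: float) -> dict[str, float]:
    """Steady-state M/M/1 performance measures; requires lam < mu
    (arrival rate below service rate) for a stable queue."""
    if lam >= mu:
        raise ValueError("utilization must be below 1 (lam < mu)")
    rho = lam / mu                      # utilization
    return {
        "rho": rho,
        "L":  rho / (1 - rho),          # avg number in system
        "Lq": rho ** 2 / (1 - rho),     # avg number waiting
        "W":  1 / (mu - lam),           # avg time in system (sojourn)
        "Wq": lam / (mu * (mu - lam)),  # avg time waiting
    }
```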

Part VII: Example Problem

Here we work through an example of a single-server queuing system.  We calculate the most common output performance measures.
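
For concreteness, one hypothetical set of rates run through the M/M/1 sketch above (the numbers are ours, not from the video):

```python
# Hypothetical single-server example -- 4 arrivals/hour, 6 services/hour
m = mm1_metrics(lam=4.0, mu=6.0)
print(m)
# rho ~ 0.667, L = 2 customers in system, Lq ~ 1.33 waiting,
# W = 0.5 hr (30 min) in system, Wq ~ 0.333 hr (20 min) waiting
```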

Part VIII: System Performance

Here we look at the managerial implications of the single-server operation.  We look at the options for the manager based on the goals of system performance.

Part IX: Affecting System Performance and Summary

Finally, what are the options for managing waiting lines – affecting the system performance?