# Six Sigma Done Wrong and Described Badly

2024-02-02

Six Sigma is a continuous improvement methodology partly focused on quality. It starts from the observation that short-term process output is not a good indicator of long-term performance. Six Sigma's approach to quality is to reduce process output variation so that, on a long-term basis (which is the customer's aggregate experience with our process over time), there will be no more than 3.4 defects per million opportunities (DPMO).

What that means is that out of 1 million opportunities for a process step to be skipped or done differently from the standard, it happens only 3.4 times.
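As a quick sanity check on the arithmetic, DPMO is just the defect count scaled to a million opportunities. A minimal sketch (the function name is mine, not from any Six Sigma tooling):

```python
def dpmo(defects: int, opportunities: int) -> float:
    """Defects per million opportunities (DPMO)."""
    return defects * 1_000_000 / opportunities

# 34 defects in 10 million opportunities is exactly the 3.4 DPMO target:
print(dpmo(34, 10_000_000))  # 3.4
```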

So let's look at one of the deviations from the past week: out of 173 times we should have taken 3-day pictures, something was different 14 times (or 8.1% of the time). Six Sigma teaches that process capability measured over a short period differs from what customers perceive long term by a factor of at least 1.5. So if we're seeing an 8.1% deviation rate on pictures over the course of a week, our customer is likely experiencing about a 12.1% deviation rate.
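In code, the week's numbers and the 1.5x rule of thumb (treated here as a simple multiplier, as the text does, rather than as the formal 1.5-sigma shift) look like this:

```python
# Short-term deviation rate from last week's picture data, and the
# long-term rate customers likely perceive (the 1.5x rule of thumb).
deviations = 14
opportunities = 173
SHIFT_FACTOR = 1.5

short_term = deviations / opportunities   # ~8.1%
long_term = short_term * SHIFT_FACTOR     # ~12.1%
print(f"short-term: {short_term:.1%}, perceived: {long_term:.1%}")
```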

Let's put that into the context of McDonald's. Say Bobby is supposed to check whether the fries are fresh every 15 minutes, but he fails to do that 10% of the time. That means we probably receive unexpected fries from McDonald's 15% of the time. Now say John is supposed to ask customers if they want salt, but he forgets 5% of the time. Six Sigma shows us that (roughly) the combined customer-experience deviation rate is probably 22.5%, meaning that 22.5% of the time we order fries, they're different than we expected in some way. What we're doing right now is a process called DMAIC (Define, Measure, Analyze, Improve, Control), a data-driven approach to rapidly improving process output.
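The McDonald's arithmetic adds the two perceived rates, which is a rough approximation; for independent steps the exact combined rate is slightly lower. A sketch using the example's numbers:

```python
SHIFT_FACTOR = 1.5
fresh_check_miss = 0.10  # Bobby skips the freshness check
salt_ask_miss = 0.05     # John forgets to offer salt

perceived = [r * SHIFT_FACTOR for r in (fresh_check_miss, salt_ask_miss)]
additive = sum(perceived)                            # 0.225 -> the 22.5% above
exact = 1 - (1 - perceived[0]) * (1 - perceived[1])  # ~21.4% for independent steps
print(f"additive: {additive:.1%}, exact: {exact:.2%}")
```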

The steps work like this:

- **Define** tasks and duties in a way that Muninn can understand them.
- **Measure**: have Muninn measure aspects of the task, watching for unexpected results.
- **Analyze** those results to determine what is causing the unexpected output.
- **Improve** by implementing a verifiable solution or improvement to the process.
- **Control** by putting measures in place to make sure the improvement stays in place, most likely by repeating DMAIC to identify how it can be automatically measured.
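The Measure step can be sketched as a generic deviation check. Muninn's actual interface isn't described here, so the record shape and function name below are illustrative assumptions:

```python
# Measure: flag every event that deviates from the defined standard.
# (Hypothetical record shape; Muninn's real data model may differ.)
def measure_deviations(events, is_standard):
    return [e for e in events if not is_standard(e)]

events = [
    {"task": "3-day pictures", "done": True},
    {"task": "3-day pictures", "done": False},  # a deviation to Analyze
    {"task": "3-day pictures", "done": True},
]
flagged = measure_deviations(events, lambda e: e["done"])
print(f"{len(flagged)} of {len(events)} events deviated")  # 1 of 3
```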

For instance, let's say we identify that one key problem with autorequests is untrained people doing them. We might create a list of authorized people that Muninn can access. The test is then modified to specifically flag when unauthorized people do autorequests, thus controlling that the change stays in place. We might still observe similar problems happening, including unauthorized people doing autorequests; however, we'll be able to detect that and act on it quickly, rather than waiting for complaints.
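That Control test could look something like this; the authorized list and record fields are hypothetical, since Muninn's interface isn't specified here:

```python
# Control: flag autorequests performed by anyone not on the authorized list.
AUTHORIZED = {"alice", "bob"}  # hypothetical list Muninn can access

def flag_unauthorized(autorequests):
    return [r for r in autorequests if r["user"] not in AUTHORIZED]

requests = [
    {"id": 1, "user": "alice"},
    {"id": 2, "user": "mallory"},  # untrained / unauthorized
]
for r in flag_unauthorized(requests):
    print(f"autorequest {r['id']} by unauthorized user {r['user']}")
```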