Sunday, November 11, 2012

LABORATORY ERROR AND THE LEAN SIX SIGMA PROCESS


Originally, lean and six sigma were separate methodologies aimed at two related metrics: time and error. Lean was designed to eliminate non-value-adding steps, while six sigma aimed to reduce variation in a process. Lean Six Sigma projects therefore combine lean's waste-elimination projects with six sigma projects based on the quality characteristics of a process. Lean itself was first practiced, and later formalized, as the Toyota Production System.

The objective of lean was to reduce time; the objective of six sigma was to reduce error. Today the two are combined as Lean Six Sigma, which, as part of its core metrics, measures the non-value-adding steps in a process in order to reduce variation and improve performance. A process sigma represents the capability of a process to meet (or exceed) its requirements, and it is reflected in the number of defects (errors) per million opportunities (DPMO). The sigma value refers to the number of SDs by which the process mean can shift before results fall outside the acceptable limits. For example, if sodium shows six sigma performance, the mean could shift by six SDs and still meet the laboratory's requirements. A six sigma process has a narrow process SD and produces only about 3.4 errors per million tests, whereas a three sigma process has a much wider SD and produces about 66,807 errors per million tests (using the conventional 1.5 SD long-term shift).
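
For orientation, the defect rates quoted above follow directly from the normal distribution once the conventional 1.5 SD long-term shift is applied. A minimal sketch of the conversion in Python (the helper name, the scipy dependency, and the shift convention are my assumptions for illustration, not part of the original post):

    from scipy.stats import norm

    def sigma_to_dpm(sigma_level, shift=1.5):
        """Defects per million for a given sigma level,
        using the conventional 1.5 SD long-term shift."""
        return norm.sf(sigma_level - shift) * 1_000_000

    print(round(sigma_to_dpm(6), 1))  # ~3.4 DPM for a six sigma process
    print(round(sigma_to_dpm(3)))     # ~66807 DPM for a three sigma process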

There are various ways to calculate the sigma of a process, and in every case the defects must first be clearly defined. The most straightforward method uses the process yield, i.e. the percentage of times that a process is defect free. Another simple method is to calculate the DPMO. Both methods then require looking up the process sigma on a process sigma chart.
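
As a sketch of the arithmetic behind that lookup (the function names and the scipy dependency are assumptions of mine, not from the original post), reading the process sigma chart is equivalent to an inverse-normal calculation:

    from scipy.stats import norm

    def dpmo_to_sigma(dpmo, shift=1.5):
        """Sigma level for a given DPMO, assuming the conventional 1.5 SD shift."""
        return norm.ppf(1 - dpmo / 1_000_000) + shift

    def yield_to_sigma(process_yield, shift=1.5):
        """Sigma level from process yield (the fraction of defect-free results)."""
        return norm.ppf(process_yield) + shift

    print(round(dpmo_to_sigma(66807), 2))       # ~3.0
    print(round(yield_to_sigma(0.9999966), 2))  # ~6.0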
 
Six sigma metrics can also be plotted graphically, on what is often called a method decision chart. This chart incorporates several of the measures discussed here: the total allowable error for a given analyte or process, the systematic error, and the imprecision. Systematic error and imprecision are derived from the comparison-of-methods (COM) and replication experiments, while the allowable error is taken from specifications such as the CLIA regulations.
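
A minimal sketch of such a chart, assuming a hypothetical quality requirement of TEa = 10% (the value and the matplotlib rendering are illustrative choices, not from the original post). Lines of constant sigma are drawn as allowable bias against allowable imprecision, and a method's observed (CV, bias) point is judged by the sigma zone it falls in:

    import numpy as np
    import matplotlib.pyplot as plt

    TEa = 10.0  # hypothetical total allowable error, in percent

    cv = np.linspace(0, TEa / 2, 200)  # imprecision axis (CV, %)
    for sigma in (2, 3, 4, 5, 6):
        # Along each line, (TEa - bias) / CV = sigma
        plt.plot(cv, TEa - sigma * cv, label=f"{sigma} sigma")

    plt.xlim(0, TEa / 2)
    plt.ylim(0, TEa)
    plt.xlabel("Allowable imprecision, CV (%)")
    plt.ylabel("Allowable bias (%)")
    plt.title("Method decision chart (TEa = 10%)")
    plt.legend()
    plt.show()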


Six sigma is an evolution in quality management that has been widely implemented in business and industry in the new millennium, and six sigma metrics are being adopted as a universal measure of process quality. The principles of six sigma go back to Motorola's approach to total quality management in the mid-1980s. The name comes from the requirement that variation up to six sigmas, i.e. six standard deviations, should fit within the tolerance limits of the process. For this development Motorola won the Malcolm Baldrige National Quality Award in 1988.
 
Six sigma provides a framework for evaluating process performance and for process improvement to reduce variation. The goal for process performance is illustrated in the figure below, which shows the quality requirements for the measurement or process.

Any process can be evaluated by determining how many sigmas fit within the tolerance limits. There are two approaches to assessing process performance in terms of the sigma metric: one is to measure the outcome by inspection (counting defects), and the other is to measure variation and predict process performance from it.


Conversion to the sigma metric is done using a standard table. In healthcare organizations a defect rate of 0.033% (about 330 DPM) is considered excellent, whereas error rates from 1% to 5% are often considered acceptable. A 5.0% error rate (50,000 DPM) corresponds to 3.15 sigma performance, and a 1.0% error rate to 3.85 sigma. Six sigma thinking sets the goal at an error rate of 0.1% (4.6 sigma), then 0.01% or 100 DPM (5.2 sigma), and ultimately 0.001% (5.8 sigma).
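
These table entries can be checked with the same inverse-normal conversion sketched above (again assuming the conventional 1.5 SD shift; the snippet is illustrative, not from the original post):

    from scipy.stats import norm

    for rate in (0.05, 0.01, 0.001, 0.0001, 0.00001):
        sigma = norm.ppf(1 - rate) + 1.5
        print(f"{rate:.3%} error rate -> {sigma:.2f} sigma")
    # 5.000% -> 3.14, 1.000% -> 3.83, 0.100% -> 4.59,
    # 0.010% -> 5.22, 0.001% -> 5.76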

The application of sigma metrics to analytical performance uses the variables obtained during method validation studies, such as accuracy, precision, positive and negative predictive values, sensitivity, and specificity, together with data available from internal and external quality control processes.

For a particular method and analyte, the allowable error can be obtained from external quality assessment programs or regulatory requirements (such as the US Clinical Laboratory Improvement Amendments [CLIA] criteria for acceptable performance in proficiency testing), while the method bias and CV come from the laboratory's own data. Process variation and bias can be estimated from method validation experiments, peer comparison data, proficiency testing results, and routine QC data.

In the laboratory, the sigma performance of a method can be determined from its imprecision (SD or CV), its inaccuracy (bias), and the quality requirement (allowable total error, TEa) for the test: Sigma = (TEa - bias)/SD, with all terms in the same units. Sigma metrics from 6.0 down to 3.0 represent the range from best case to worst case, and methods with sigma performance below 3 are not considered acceptable for routine production.
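
As a worked illustration of the formula (the analyte and its numbers below are hypothetical, chosen only to show the arithmetic):

    def sigma_metric(tea, bias, cv):
        """Sigma = (TEa - bias) / CV, with all terms in percent units."""
        return (tea - bias) / cv

    # Hypothetical glucose method: TEa = 10%, observed bias = 2.0%,
    # observed CV = 2.0%
    print(sigma_metric(10.0, 2.0, 2.0))  # 4.0: acceptable, with room to improve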



