Six Sigma Method for Dealing with Problems - Essay Example

Summary: The essay "Six Sigma Method for Dealing with Problems" analyzes clearly defined methods for tackling Six Sigma along with RCA (Root Cause Analysis) within a lean approach. Six Sigma is a method of identifying, classifying, and solving problems that affect the overall productivity of a business.

Lean 6σ – Root Cause Analysis Process Capability from the External Customer's and VoP Perspectives

Introduction

Six Sigma has emerged as a method of identifying, classifying and solving problems that affect the overall productivity of a business. Large industrial names such as GE and Motorola have proved the efficacy of utilising Six Sigma over and over again. For example, GE announced in 1998 that it had managed to save some $350 million as a result of Six Sigma initiatives; this figure eventually grew to more than $1 billion (Dusharme, 2001). However, not all companies utilising Six Sigma have ended up saving money or making substantial gains. Fortune reported that ninety-one percent of fifty-eight large companies that established Six Sigma regimes had been trailing the S&P 500 index ever since (Betsy, 2006). One major reason for this phenomenon is that Six Sigma is employed far more widely than it is understood, often in ways that make little or no sense. In essence, Six Sigma is a statistical technique, and a lack of data, of proper analysis and presentation, or of follow-up can all lead to its demise. Amongst the applications attempted through Six Sigma, RCA (Root Cause Analysis) is a major one. The contention behind RCA is to locate and subsequently rectify problems in a business operation. However, applying Six Sigma to RCA with inappropriate methods often produces less than desirable outcomes; too often Six Sigma is used to "create" evidence in order to justify some process or business hypothesis. This text attempts to delineate clearly defined methods to tackle Six Sigma along with RCA within a lean approach. The external customer's perspective as well as the VoP (Voice of Process) perspective will be utilised to explain the application of lean Six Sigma to RCA.

Differentiating the VoC and the VoP Approaches

Any business process will always possess an external customer who receives the finished good.
If the business process spectrum consists of multiple processing steps then the external customer might be a secondary processing department. On the other hand, if the business is small enough or based on a single process, then the external customer will be whoever receives the final product. However, the size of an organisation is critical to the implementation of Six Sigma, so this text will assume implementation within a large business context, as small businesses can seldom afford Six Sigma initiatives. The external customer in question will therefore be an allied business processing unit or function. Using the customer's input as the guideline within Six Sigma is labelled VoC (Voice of Customer). The customer specifies their requirements using surveys, discussions, focus groups, comment cards and similar instruments (Curious Cat, 2009). In comparison, the VoP (Voice of Process) depends on the process capability. The contention is to measure the best performance that a system could deliver, often described statistically using a control chart; historical performance statistics may also be used to analyse the situation better. The most distinctive aspect of the VoP approach is its reliance on hard statistical data for taking decisions. However, there is a great chance of leaving large gaps in the collected data through misreporting as well as omission (Mann, 2006). VoP is also reliable for discerning the stability or instability of any given business process or operation. Statistical quality records are utilised to create control chart models, as discussed below.

Lean Methodology for VoP

The methodology for a lean Six Sigma RCA will be analysed step by step. Strengths, weaknesses and vulnerabilities will be discussed in an attempt to introduce the sources of error in such systems.

Data Collection and Processing

Data collection is the single most important part of any RCA treatment.
If the data is flawed by any given definition then there is no chance that the entire analysis and its subsequent follow-up will yield favourable results. It must be borne in mind that in any RCA approach (TQM, maintenance-based methods or otherwise) the problem is not really known beforehand; the only things really known are the measurable symptoms produced by the problem. For example, RCA cannot be used to attempt HR (human resources) problems, because people's levels of satisfaction cannot be measured in numbers and figures. However, the breakage of a particular component in a system can easily be quantified, while the reason behind the breakage is not known prior to conducting a lean Six Sigma based RCA (Willis, 2009). The contention in attempting the RCA is to trace out all the various problems that are causing failure symptoms. These problems may be time-dependent or time-independent; they may be sporadic or part of environmental noise; they may be actual problem symptoms or mere ramifications of another system's problems. The notable thing here is that Six Sigma delineates methods to measure these varying problems reliably, provided that the methods are used properly. Any mistreatment of methodology at this stage will spell little more than failure. Error can be introduced into supply chain systems, for example, from a variety of sources. For one thing, data may be entered into software by a reporter, such as a warehouse attendant, without any concern for precision. Often small fields such as date, time and auxiliary descriptions are either left out or filled in incorrectly in the hurry of getting material placed on the floor. In other cases data may be missed out because there is not enough management focus on being precise. A further source of error is generated as data is being collected using software for analysis.
It is often highly convenient to connect large databases in Oracle or other DBMS (database management systems) to Excel in order to query the available data. Data conversion between software based on slightly differing standards often produces patches of data that are not usable by the receiving software. For example, if an Oracle database releases a date to Excel as "  19/06/2011" there is a significant chance that Excel will not recognise this data as a date at all, because the two space characters preceding the date make the data block a string type. Such inadequacies in processing data can modify results considerably (Pearson, 2007). Another source of error may be introduced through a lack of comprehension of mathematical modelling such as regression. This error only steps in where mathematical treatment of the subject data is required, which may occur when failure or other data possesses a large amount of variance. It is always best to consult an expert or to carry out a literature review before deciding which mathematical model to use (SticiGui, 2011). For example, regression is most generally done through linear treatment, but in certain circumstances the data may follow a curved (e.g. polynomial) expression that tends to hover around a common straight line. The data could then be described using either a linear or a polynomial expression, but the polynomial expression would be far more descriptive and accurate. This is displayed in the graph below; the graphing and regression have been performed using Microsoft Excel. The linear equation is displayed in the upper right hand corner while the polynomial expression is displayed in the bottom left hand corner of the graph. The difference in fit is obvious given the R² values for each regression result.

Creating Project Metrics

It is imperative that the project team demarcate all project metrics as clearly as possible, and as soon as possible after identifying a problem.
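The whitespace-corrupted date described above can be guarded against by trimming and parsing exported values explicitly before analysis. A minimal sketch, not part of the original text; the function name is hypothetical:

```python
from datetime import datetime

def parse_export_date(raw: str) -> datetime:
    """Parse a dd/mm/yyyy date exported from a DBMS, tolerating stray spaces."""
    # strip() removes the leading spaces that would otherwise make a
    # spreadsheet treat the value as a plain string rather than a date.
    return datetime.strptime(raw.strip(), "%d/%m/%Y")

d = parse_export_date("  19/06/2011")
print(d.date())  # → 2011-06-19
```

Cleaning of this kind is best applied once, at the point where data crosses from one system to the other, so every downstream analysis sees the same types.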
Problems can often be identified in day-to-day operations through non-compliance or failure to meet certain objectives; improvement in such gaps is the simplest project metric that can be delineated. However, before sketching out any project metrics, it is advisable that the team ensure the objectives are SMART (LSS Academy, 2007):

S - specific
M - measurable
A - attainable
R - relevant
T - time-bound

As an example, scheduling problems may be occurring because of delays that drivers create in an attempt to bargain over their pay. Using Six Sigma in any form to achieve scheduling success here would not make real sense even if the other constraints could be well controlled. It is up to the project team to ensure that the project metrics fall within the SMART definition (AntiClue, 2010). If the business operation already generates KPIs (Key Performance Indicators), LIs (Leading Indicators) or other such measures to audit performance, then it is best to start with these indicators; historical trending is sufficient to classify problems (Six Sigma Online, 2008; Decicco, 2010). The team should move to eliminate chronic issues first, because smaller problems may actually be ramifications of these problems. It is advisable not to solve smaller problems first, as the solutions may not be sustainable and the smaller disturbances are themselves symptoms of larger problems. For example, creating a new pallet storage scheme in a warehouse may solve a problem temporarily, but whenever major variations in incoming material occur the problem is bound to reappear. Project metrics are also known as KPOVs (Key Process Output Variables) or "Y's" and are related to KPIVs (Key Process Input Variables) or "X's". The point of a lean Six Sigma RCA is to identify the exact nature of the relationship. The prime objective of the team is first to create a relationship of the form Y = f(X1, X2, ..., Xn), which can be used later to solve problems.
Determining Sample Size

Another major area of concern is collecting enough samples to warrant a proper analysis. Most Six Sigma problems involving statistics are treated against the normal distribution (the bell curve), which also underlies the control chart. Good calculation practice requires at least enough samples to create relationships at a confidence level of around 95%. Larger samples are all the more preferred, as they more readily expose the existence of sporadic inputs such as environmental factors. The standard deviation of a sample set is the key to understanding such issues. The regular distribution of samples by distance from the mean is:

Distance from the mean*      Percentage of population
One standard deviation       68%     (σ)
Two standard deviations      95%     (2σ)
Three standard deviations    99.7%   (3σ)

* taken as k standard deviations away from the mean in either direction on the horizontal axis

To calculate the standard deviation the following formula should be utilised:

σ = √( Σ(x − x̄)² / n )

where σ is the standard deviation, x is each value in the data set, x̄ is the mean of all values in the data set, and n is the total number of values in the data set. The minimum sample size for a given confidence level can then be estimated as n = (zσ/E)², where z is the z-score for the desired confidence level (z = 2 for roughly 95%) and E is the acceptable margin of error. Using the data in Appendix A (standard deviation 3.81) with an acceptable margin of error of 0.1, a confidence level of 95% would require a sample set of some 5,800 samples. These samples can then be used to develop the appropriate relationships (Barcelona Field Studies Centre, 2011).

Notes on Measurement

It is up to the project team to ensure that the collected data is precise. Any adjustments to values, such as conversion of units, must be made within the data as soon as collection is completed.
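The two calculations above can be sketched as follows. Note that the margin of error E = 0.1 is inferred from the quoted result of roughly 5,800 samples rather than stated explicitly in the source, and the sample (n − 1) standard deviation is used because it reproduces the 3.81 reported in Appendix A:

```python
from statistics import mean, stdev

# Appendix A sample values
data = [10, 9, 8, 8, 16, 12, 8.5, 10, 12, 9, 13, 14, 10, 14, 17,
        12, 6, 17, 9, 5, 10, 7.5, 13, 13, 7.5, 15, 12, 8, 22, 16]

x_bar = mean(data)    # 11.45
sigma = stdev(data)   # sample standard deviation (n - 1 divisor), ~3.81

# Minimum sample size: n = (z * sigma / E)^2
z = 2     # z-score for roughly 95% confidence
E = 0.1   # assumed acceptable margin of error (inferred, see above)
n_required = (z * sigma / E) ** 2

print(round(x_bar, 2), round(sigma, 2), round(n_required))
```

The exact figure works out to about 5,816 with these inputs, i.e. the "some 5,800 samples" of the text.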
In general it is good practice to ensure that the collected data complies with a simple set of criteria: resolution, reproducibility, repeatability, stability and linearity. These aspects are treated briefly below.

Resolution refers to the smallest step size used to gather measurements. For example, time can be measured in seconds, minutes, hours, days, months or years, and all of these measures may be used for different purposes. If packaging times for machines need to be calculated, the resolution ought to be seconds or minutes; if movement time between warehouses is being measured, the resolution ought to be hours; and if goods are being moved between cities, the resolution ought to be days. This rule applies to all kinds of data collected for statistical investigation.

Reproducibility is the ability of differing units of production to produce measurable outputs that display only slight variations. An example would be two cigarette machines that produce cigarettes whose lengths lie within a fraction of a millimetre of each other. The concept extends to human operations and their measurable outcomes too: if two different technicians assemble a mechanical seal, their fits and tolerances should lie within a few thousandths of an inch of each other.

Repeatability refers to the ability to collect the same data at different instances in time. One example is stock-taking during auditing: if one auditor produces different results from another, the produced data does not offer repeatability. If, however, the same measure and relatively similar spreads are obtained for samples of the same measurement, then the data collection is said to be repeatable. This property is extremely important, considering that the established baseline must remain consistent for present as well as future assessments.
If the data does not offer repeatability then a true measure of improvement or degradation of a process cannot be established. Moreover, if a set of data does not offer repeatability over consecutive measurements, there may be sporadic behaviour associated with the process which needs to be isolated; this may include environmental factors or other aberrations that contribute to sporadic differences in measurement. However, errors in measurement and errors due to sporadic factors must be distinguished with care.

Stability is a property associated with the measurement mechanisms, not with the data itself, and covers both machine and human measurement. In due course of time the stability of measurement systems may be affected: machine measurement systems may degrade through wear and tear, while human measurement systems may give way to variations in instructions. The simplest remedy for these phenomena is to document the machine measurement system's efficacy and to document human instructions clearly at all points in time. This strategy is typically part of the quality management operations of a business establishment.

Linearity refers to the capability of a system to measure something such that the measurement error remains linearly related to the measured value across the entire measurement spectrum. This helps in determining noise or sporadic behaviour, which tends to fall towards the extremities of the measurement spectrum; if a system displays linearity then it is simple to isolate such noise (Martin, 2006).

Creating Relationships

Any effective statistical relationship banks on the inputs being reliably connected to the output, that is, on the relationship not being sporadic. Changes in the input variables can often be seen to have a direct impact on the output, given of course that the output is singular.
Where a multiple-input to multiple-output relationship exists, it is best to deal with each KPOV individually. This helps to simplify traceability, data collection and analysis all at the same time. The approach mirrors Enrico Fermi's problem-solving method, whereby a large problem is broken down into smaller problems which are solved to create a solution for the larger problem. Moreover, the "Fermi problem" approach ensures that even mild relationships between inputs and output can be traced (Santos, 2009). Eventually the largest and strongest contributors among the inputs can be labelled as KPIVs and related to the KPOV. If the problem involves business operations with discrete and quantifiable KPIVs then it can be treated rather simply. However, if the KPIVs offer a diverse relationship then it is best to use techniques involving linear programming; this can be extremely helpful for problems involving shipping and stocking decisions, or for decisions involving current stock, future demand and production calculations.

Defects and Opportunities

Once measurement is complete according to the principles of accuracy and precision mentioned above, it is time to create a process baseline. This baseline is going to be utilised for any further assessments of the system and its capability. Six Sigma generally stipulates the definition of defects: in order to quantify process capability for measurement and manipulation, strict definitions are carved out for process conformity and non-conformity. If the process complies with the stipulated, defined behaviour then it is acceptable; if the process deviates from the defined behaviour then the deviation is classified as a defect. Defects are of primary importance to all Six Sigma statistical investigations (Bothe, 2001). Generally, when a defect is chosen, it is closely related to the system's output and capability, and it has a direct or indirect bearing on finances.
For example, if a shipment is due to be delivered in one week then the accompanying defect would be a shipment delivered later than one week. This has obvious financial implications such as fines, delayed payments or denial of payments for the concerned business. For statistical investigation, the defect is defined in terms of the project metric(s), also known as the "Y" or KPOV. More than one project metric can be used to define multiple defects, but as mentioned before it is best to handle one project metric at a time. Moreover, if the system consists of more than one KPOV then these should be prioritised in terms of impact, which could relate to finances, safety, health, expansion and so on. The process may also possess some measures that qualify as defects by default; such measures could include wastages, power losses, rework operations etc. (Godfrey, 1999). It must be borne in mind that the defect definition should be sensible. In the textile industry, for example, wastage can never be driven to zero: a certain percentage of the cloth must always be wasted to accommodate cutting lines. In such cases the defect is defined against the minimum possible wastage with which cutting could be carried out, and any increase in this project metric signals a defect; expecting wastage to come to zero by any possible method is not a practicable option (Pyzdek, 2003). Another relevant concept is that of defect opportunities. An opportunity refers to a chance that a defect could be produced. This may be a result of machine problems, such as using obsolete equipment or spares, or of human error, such as infant mechanical seal failure occurring due to installation mistakes. Generally, the greater the complexity of a given process, the greater the chances for defect opportunities to surface; the production of defects is directly related to defect opportunities.
Often the number of defects is divided by the number of opportunities to gauge the complexity of a process in relation to other processes (Born, 1994). It is essential that process manipulation be carried out in a way that does not impact the external customer; disturbances for the external customer may produce problems with long-term impacts. RCA with lean Six Sigma is not about reinventing the wheel but about using the wheel properly.

Connecting Defects to Six Sigma and the Normal Distribution

Calculations for Six Sigma involve the use of a normalised variable better known as DPMO (Defects Per Million Opportunities). DPMO is calculated by dividing the number of defects by the total opportunity count and multiplying the result by a million; this effectively translates the defect rate into a normal variable that can be compared against a regular normal distribution. This can be expressed mathematically as (World Class Manufacturing, 2011):

DPMO = ( D / (U × O) ) × 1,000,000

where D is the total number of defects observed, U is the number of units within the sample, and O is the number of opportunities for defects in one unit. For example, suppose bearings were being sampled for defects and the results showed that out of a total of 2,000 units only 4 units were defective, with 7 defects in total across those 4 bearings. Taking one defect opportunity per bearing, the calculation becomes:

DPMO = ( 7 / (2,000 × 1) ) × 1,000,000 = 3,500

The point of Six Sigma compliance is to reduce the DPMO to 3.4, meaning that for every one million opportunities only 3.4 defects would be produced. The distribution of sigma levels and the total number of allowable defects is shown below for reference:

Sigma level    Percentage defects    Number of defects*
1.5σ           50%                   500,000
2σ             30.9%                 308,500
3σ             6.68%                 66,809
4σ             0.62%                 6,210
5σ             0.02%                 233
6σ             0.00034%              3.4

* defects out of a million opportunities

Statistically this distribution follows the normal distribution mentioned above and is represented by a Gaussian function or bell curve (Regent's Prep, 2011).
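The DPMO arithmetic and its mapping to a sigma level can be sketched as follows, using the bearing example above and the conventional 1.5σ long-term shift that makes 3.4 DPMO correspond to "six sigma":

```python
from statistics import NormalDist

def dpmo(defects: int, units: int, opportunities_per_unit: int) -> float:
    """Defects Per Million Opportunities: D / (U * O) * 1,000,000."""
    return defects / (units * opportunities_per_unit) * 1_000_000

def sigma_level(dpmo_value: float) -> float:
    """Short-term sigma level, applying the conventional 1.5 sigma shift."""
    return NormalDist().inv_cdf(1 - dpmo_value / 1_000_000) + 1.5

# Bearing example: 7 defects across 2,000 units, one opportunity per unit.
print(dpmo(7, 2000, 1))             # → 3500.0
print(round(sigma_level(3.4), 2))   # → 6.0
```

The inverse normal CDF converts a long-term defect fraction back into a z-score; adding the 1.5 shift gives the short-term sigma level quoted in the table.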
Differentiating VoC and VoP for Process Capability

Using sigma calculations it can easily be found where the process currently stands and how much it needs to improve to boost existing sigma levels. Process capability is simply the difference between the upper and lower specification limits divided by the variation within the process. However, process capability may be evaluated using two distinct approaches. One approach is to utilise the VoC, which represents the demands of the external customer; these may be specified in any given format, such as percentages, the number of defective products allowed, or the variation allowed in non-defective products. The other approach is to utilise the VoP, whereby the KPOV is compared to existing process performance levels. Since the internal lean Six Sigma effort uses a regular DPMO approach to solve problems, there has to be some internal translation between the VoC and the VoP to bridge the differences between the two approaches. The measure of spread that exists between VoC and VoP is utilised to ascertain the capability ratios for the process. Mathematically, the process capability index and the capability ratio are:

Cp = (USL − LSL) / 6σ
Cr = 1 / Cp = 6σ / (USL − LSL)

where USL and LSL are the upper and lower specification limits and σ is the process standard deviation. Variations in these indices express the difference between customer expectations and process behaviour: a high capability index (equivalently, a low capability ratio) indicates that most of the customer's specifications are being met, and vice versa. The discussion so far has assumed that a single business operation is under focus, but this may not be true in the real world. Another major assumption has been that the defects have a simple, truly representative mean, but this again need not be true. For example, if a supply chain process for the transportation of goods is considered, the mean position may be defined as 5 deliveries per day to a warehouse.
Now the upper specification limit might be 9 deliveries per day and the lower specification limit 3 deliveries per day. The difference between 3 and 5 is 2, while the difference between 5 and 9 is 4; the specification band is therefore tilted towards the upper limit, and its midpoint lies at 6 rather than at the process mean of 5. The customer would quote the issue in these terms, but the process controllers would have to translate this information into a form viable for process control. Generally, the lower and upper process capability estimates can be calculated as:

Cpl = (μ − LSL) / 3σ
Cpu = (USL − μ) / 3σ

where μ is the process mean. If the process mean is not centred between the upper and lower specification limits, then the process capability becomes (Boyles, 1991):

Cpk = min(Cpu, Cpl)

If Cpk drops below 0 then the mean lies outside the specification limits altogether, which is a useful indicator. If a target T is known, or the process team is working towards such a target, then the process capability becomes:

Cpm = (USL − LSL) / ( 6 √( σ² + (μ − T)² ) )

where the mean is assumed centred as in the cases above. If, however, the mean is off-centre, then the associated process capability for a target T can be calculated as:

Cpkm = Cpk / √( 1 + ((μ − T) / σ)² )

The method outlined above can be used with relative ease to translate a customer's expectations onto the current process over a normal distribution. As various customer expectations or KPOVs are tested in relation to their inputs, any variations will clearly highlight problems with the process; in this manner the VoP becomes an effective indicator of VoC needs and vice versa. These variations are the actual root causes of the problems being experienced. Based on the findings from these calculations, the problems can be identified and then either reduced or eliminated as suitable; if a suitable process capability is achieved through manipulation then there is little need for elimination (Montgomery, 2004). Similarly, the same process can be used to calculate current sigma levels and accompanying process capabilities.
These can then be used for root cause analysis using techniques such as Pareto charts, and the findings can be focused and improved upon to boost existing sigma levels (Booker et al., 2001).

Bibliography

AntiClue, 2010. Six Sigma Creating SMART Project Goals. [Online] Available at: http://www.anticlue.net/archives/000753.htm [Accessed 17 August 2011].

Barcelona Field Studies Centre, 2011. Minimum Sample Size Calculation: beach pebble long axes size example. [Online] Available at: http://geographyfieldwork.com/MinimumSampleSize.htm [Accessed 17 August 2011].

Betsy, M., 2006. New rule: Look out, not in. [Online] Available at: http://money.cnn.com/2006/07/10/magazines/fortune/rule4.fortune/index.htm [Accessed 17 August 2011].

Booker, J.M., Raines, M. & Swift, K.G., 2001. Designing Capable and Reliable Products. Oxford: Butterworth-Heinemann.

Born, G., 1994. Process Management, Quality Improvement: The Way to Design, Document and Re-engineer Business Systems. London: John Wiley and Sons.

Bothe, D.R., 2001. Measuring Process Capability. Cedarburg: Landmark Publishing Inc.

Boyles, R., 1991. The Taguchi Capability Index. Journal of Quality Technology, 23(1), pp.17-26.

Curious Cat, 2009. Curious Cat Management Improvement Library - Dictionary. [Online] Available at: http://curiouscat.com/management/voiceofthecustomer.cfm [Accessed 17 August 2011].

Decicco, M.E., 2010. ASQ Certified Six Sigma Green Belt. Farmingdale: Office of Business Outreach, Farmingdale State College.

Dusharme, D., 2001. Six Sigma Survey: Breaking Through the Six Sigma Hype. [Online] Available at: http://www.qualitydigest.com/nov01/html/sixsigmaarticle.html [Accessed 17 August 2011].

Godfrey, A.B., 1999. Juran's Quality Handbook. 5th ed. McGraw Hill.

LSS Academy, 2007. Be SMART! [Online] Available at: http://lssacademy.com/2007/02/01/be-smart/ [Accessed 17 August 2011].

Mann, D., 2006. Unleashing The Voice Of The Product And The Voice Of The Process.
[Online] Triz Journal. Available at: http://www.triz-journal.com/archives/2006/06/01.pdf [Accessed 17 August 2011].

Martin, J.W., 2006. Lean Six Sigma for Supply Chain Management. McGraw Hill.

Montgomery, D., 2004. Introduction to Statistical Quality Control. New York: John Wiley & Sons, Inc.

Pearson, 2007. Error Handling in VBA. [Online] Available at: http://www.cpearson.com/excel/errorhandling.htm [Accessed 17 August 2011].

Pyzdek, T., 2003. Quality Engineering Handbook. 2nd ed. CRC Press.

Regent's Prep, 2011. Normal Distribution. [Online] Available at: http://www.regentsprep.org/Regents/math/algtrig/ATS2/NormalLesson.htm [Accessed 17 August 2011].

Santos, A., 2009. How Many Licks?: Or, How to Estimate Damn Near Anything. Running Press.

Six Sigma Online, 2008. Get on the Bus - Metrics vs. KPIs, Scorecards vs. Dashboards. [Online] Available at: http://www.sixsigmaonline.org/six-sigma-training-certification-information/articles/get-on-the-bus---metrics-vs-kpis-scorecards-vs-dashboards.html [Accessed 17 August 2011].

SticiGui, 2011. Errors in Regression. [Online] Available at: http://www.stat.berkeley.edu/~stark/SticiGui/Text/ch6.htm [Accessed 17 August 2011].

Willis, T., 2009. Improving ROI of Six Sigma With Root Cause Analysis. [Online] Available at: http://www.qualitydigest.com/inside/twitter-ed/improving-roi-six-sigma-root-cause-analysis.html# [Accessed 17 August 2011].

World Class Manufacturing, 2011. Six Sigma DPMO Calculator. [Online] Available at: http://world-class-manufacturing.com/Sigma/level.html [Accessed 17 August 2011].

Appendix A

Sample Number    Sample Value
1                10
2                9
3                8
4                8
5                16
6                12
7                8.5
8                10
9                12
10               9
11               13
12               14
13               10
14               14
15               17
16               12
17               6
18               17
19               9
20               5
21               10
22               7.5
23               13
24               13
25               7.5
26               15
27               12
28               8
29               22
30               16

Mean: 11.45
Standard deviation: 3.81
("Six Sigma Essay Example | Topics and Well Written Essays - 3000 words", n.d.) Retrieved from https://studentshare.org/statistics/1390726-lean-six-sigma-for-supply-chain-management-theme
