The post Process Capability with SigmaXL appeared first on Deploy OpEx.

Process capability measures how well a process performs against a given specified outcome. It indicates the conformance of a process to given requirements or specifications. Capability analysis helps us better understand the performance of the process with respect to meeting customer specifications and identify process improvement opportunities.

Process Capability Analysis Steps

- Determine the metric or parameter to measure and analyze.
- Collect the historical data for the parameter of interest.
- Prove the process is statistically stable (i.e. in control).
- Calculate the process capability indices.

- Monitor the process and ensure it remains in control over time. Update the process capability indices if needed.

Process capability can be presented using various indices depending on the nature of the process and the goal of the analysis. Popular process capability indices are:

- Cp
- Pp
- Cpk
- Ppk
- Cpm

The Cp index measures process capability. It assumes the process mean is centered between the specification limits and is essentially the ratio of the distance between the specification limits to six process standard deviations. The higher this value, the better, because the process variation then fits between the spec limits more easily. Cp measures the process’s potential capability to meet two-sided specifications; it does not take the process average into consideration.

A high Cp indicates a small spread of the process relative to the spread of the customer specifications. Cp is recommended when the process is centered between the specification limits, and it applies only when both upper and lower specification limits exist: the higher the Cp, the smaller the spread of the process relative to the spread of the specifications.

Note: Cpm can work only if there is a target value specified.
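
These indices follow directly from their standard textbook formulas. As a minimal sketch (illustrative Python, not part of SigmaXL; the function name and the fallback to overall sigma are assumptions for this example):

```python
import numpy as np

def capability_indices(data, lsl, usl, target=None, sigma_within=None):
    """Sketch of the common capability indices.

    sigma_within would normally come from a control chart (e.g. Rbar/d2);
    here we fall back to the overall standard deviation if none is given.
    """
    data = np.asarray(data, dtype=float)
    mean = data.mean()
    sigma_overall = data.std(ddof=1)            # "long-term" sigma -> Pp/Ppk
    sw = sigma_within if sigma_within is not None else sigma_overall

    indices = {
        "Cp":  (usl - lsl) / (6 * sw),
        "Cpk": min(usl - mean, mean - lsl) / (3 * sw),
        "Pp":  (usl - lsl) / (6 * sigma_overall),
        "Ppk": min(usl - mean, mean - lsl) / (3 * sigma_overall),
    }
    if target is not None:                      # Cpm needs a target value
        tau = np.sqrt(sigma_overall**2 + (mean - target) ** 2)
        indices["Cpm"] = (usl - lsl) / (6 * tau)
    return indices
```

Cp/Cpk are normally computed from a within-subgroup (short-term) sigma while Pp/Ppk use the overall (long-term) sigma; the sketch makes that distinction explicit through the `sigma_within` argument.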

Data File: “Capability Analysis” tab in “Sample Data.xlsx”

- Select the entire range of data (i.e. the column “HtBk”)
- Click SigmaXL -> Process Capability -> Histograms & Process Capability
- A new window named “Histogram & Process Cap” pops up with the selected range of data appearing in the box under “Please select your data”

- Click “Next>>”
- A new window named “Histograms & Process Capability” appears
- Select “HtBk” as the “Numeric Data Variables”
- Enter 6, 6.5 and 7 into the boxes for “Lower Spec Limit”, “Target” and “Upper Spec Limit” respectively

- Click “OK”
- The histogram and the process capability analysis results are in the newly generated tab “Hist Cap (1)”

Model summary: With a Ppk of less than 1.0, we conclude that the capability of this process is not very good. Anything less than 1.0 should be considered not capable; we should strive for a Ppk greater than 1.0, and preferably over 1.67.

The post Process Capability with Minitab appeared first on Deploy OpEx.

Process capability measures how well a process performs against a given specified outcome. It indicates the conformance of a process to given requirements or specifications. Capability analysis helps us better understand the performance of the process with respect to meeting customer specifications and identify process improvement opportunities.

Process Capability Analysis Steps

- Determine the metric or parameter to measure and analyze.
- Collect the historical data for the parameter of interest.
- Prove the process is statistically stable (i.e. in control).
- Calculate the process capability indices.
- Monitor the process and ensure it remains in control over time. Update the process capability indices if needed.

Process capability can be presented using various indices depending on the nature of the process and the goal of the analysis. Popular process capability indices are:

- Cp
- Pp
- Cpk
- Ppk
- Cpm

The Cp index measures process capability. It assumes the process mean is centered between the specification limits and is essentially the ratio of the distance between the specification limits to six process standard deviations. The higher this value, the better, because the process variation then fits between the spec limits more easily. Cp measures the process’s potential capability to meet two-sided specifications; it does not take the process average into consideration.

A high Cp indicates a small spread of the process relative to the spread of the customer specifications. Cp is recommended when the process is centered between the specification limits, and it applies only when both upper and lower specification limits exist: the higher the Cp, the smaller the spread of the process relative to the spread of the specifications.

Note: Cpm can work only if there is a target value specified.

Data File: “Capability Analysis” tab in “Sample Data.xlsx”

Steps in Minitab to run a process capability analysis:

- Click Stat → Basic Statistics → Normality Test.
- A new window named “Normality Test” pops up.
- Select “HtBk” as the variable.

- Click “OK.”
- The histogram and the normality test results appear in the new window.

In this example, the p-value is 0.275, greater than the alpha level (0.05). We fail to reject the null hypothesis and conclude that the data are normally distributed.
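
To reproduce this kind of check outside Minitab, a hedged SciPy sketch can be used. Note that Minitab’s default here is the Anderson–Darling test; SciPy’s `anderson` reports critical values rather than a p-value, so this sketch substitutes the Shapiro–Wilk test, which yields a comparable accept/reject decision:

```python
import numpy as np
from scipy import stats

def check_normality(data, alpha=0.05):
    """Return (p_value, is_normal) using the Shapiro-Wilk test.

    Used here as a stand-in for Minitab's Anderson-Darling normality
    test, since scipy's anderson() does not report a p-value directly.
    """
    stat, p_value = stats.shapiro(np.asarray(data, dtype=float))
    return p_value, p_value > alpha  # fail to reject H0 -> treat as normal
```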

- Click Stat → Quality Tools → Capability Analysis → Normal.
- A new window named “Capability Analysis (Normal Distribution)” pops up.
- Select “HtBk” as the single column and enter “1” as the subgroup size.
- Enter “6” as the “Lower spec” and “7” as the “Upper spec.”

- Click the “Options” button and another new window named “Capability Analysis (Normal Distribution) – Options” pops up.
- Enter “6.5” as the target and click “OK.”

- Click “OK” in the “Capability Analysis (Normal Distribution)” window.
- The capability analysis results appear in the new window.

If the p-value of the previous normality test is smaller than the alpha level (0.05), we would reject the null hypothesis and conclude that the data are not normally distributed. Thus, we would perform a Non-Normal Capability analysis as follows:

- Click Stat → Quality Tools → Capability Analysis → Non-Normal.
- A new window named “Capability Analysis (Non-Normal Distribution)” pops up.
- Select “HtBk” as the single column.
- Enter “6” as the “Lower spec” and “7” as the “Upper spec.”

- Click the “Options” button and another new window named “Capability Analysis (Non-Normal Distribution) – Options” pops up.
- Enter “6.5” as the target and click “OK.”

- Click “OK” in the “Capability Analysis (Non-Normal Distribution)” window.
- The capability analysis results appear in the new window.
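
Minitab’s non-normal option fits a distribution (such as Weibull) to the raw data. A different, commonly taught approach, sketched below under the assumption of strictly positive data and spec limits, is to Box-Cox transform the data and the spec limits and compute Ppk on the transformed scale:

```python
import numpy as np
from scipy import stats

def ppk_after_boxcox(data, lsl, usl):
    """Hedged sketch: Box-Cox transform the data, apply the same
    transform to the spec limits, then compute Ppk on that scale.
    Requires strictly positive data and spec limits."""
    data = np.asarray(data, dtype=float)
    transformed, lam = stats.boxcox(data)      # lambda estimated by MLE
    def bc(x):                                 # same transform for the specs
        return np.log(x) if abs(lam) < 1e-12 else (x**lam - 1) / lam
    mean, sigma = transformed.mean(), transformed.std(ddof=1)
    t_lsl, t_usl = sorted((bc(lsl), bc(usl)))  # defensive ordering
    return min(t_usl - mean, mean - t_lsl) / (3 * sigma)
```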

Model summary: With a Ppk of less than 1.0, we conclude that the capability of this process is not very good. Anything less than 1.0 should be considered not capable; we should strive for a Ppk greater than 1.0, and preferably over 1.67.

The post Attribute MSA with JMP appeared first on Deploy OpEx.

Data File: “AttributeMSA.jmp”

- Click Analyze -> Quality & Process -> Variability / Attribute Gauge Chart
- Select “Appraiser A”, “Appraiser B” and “Appraiser C” as “Y, Response”
- Select “Part” as “X, Grouping”
- Select “Reference” as “Standard”
- Select “Attribute” as the “Chart Type”
- Click “OK”

- Click on the red triangle button next to “Attribute Gauge”
- Click “Show Effectiveness Points”
- Click “Connect Effectiveness Points”

Percentage of agreement by appraiser

- Red line: the percentage of agreement with the reference level
- Blue line: the percentage of agreement between and within the appraisers
- When both lines are at 100% level across parts and appraisers, the measurement system is perfect

- % Agreement: Overall agreement percentage, both within and between appraisers. It reflects how precisely the measurement system performs
- In this example, 78% of items inspected have the same measurement across different appraisers and also within each individual appraiser
- Rater Score: the agreement percentage within each individual appraiser

Kappa statistic is a coefficient indicating the agreement percentage above the expected agreement by chance. Kappa ranges from −1 (perfect disagreement) to 1 (perfect agreement). When the observed agreement is less than the chance agreement, Kappa is negative. When the observed agreement is greater than the chance agreement, Kappa is positive. Rule of thumb: If Kappa is greater than 0.7, the measurement system is acceptable. If Kappa is greater than 0.9, the measurement system is excellent.
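
The statistic itself is straightforward to compute. A small illustrative sketch (the function name and rater arrays are hypothetical, not tied to any of the tools above):

```python
import numpy as np

def cohens_kappa(rater1, rater2):
    """Kappa = (observed agreement - chance agreement) / (1 - chance agreement)."""
    r1, r2 = np.asarray(rater1), np.asarray(rater2)
    labels = np.union1d(r1, r2)
    p_observed = np.mean(r1 == r2)                   # observed agreement
    p_chance = sum(np.mean(r1 == c) * np.mean(r2 == c) for c in labels)
    return (p_observed - p_chance) / (1 - p_chance)
```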

The first table shows the Kappa statistic for the agreement between appraisers. The second table shows the Kappa statistic for the agreement between each individual appraiser and the standard. The bottom table shows the categorical Kappa statistics, which indicate which category in the measurement has the worst results.

Model summary: The counts of true positives, true negatives, false positives and false negatives are shown. The effectiveness shows the percentage of agreement between each appraiser and the standard; it reflects the accuracy of the measurement system.

The post Attribute MSA with SigmaXL appeared first on Deploy OpEx.

Data File: “Attribute MSA” tab in “Sample Data.xlsx” (an example in the AIAG MSA Reference Manual, 3rd Edition).

Step 1: Reorganize the original data into four new columns (i.e., Appraiser, Assessed Result, Part, and Reference).

- Select the entire range of the original data (“Part”, “Reference”, “Appraiser A”, “Appraiser B” and “Appraiser C” columns)
- Click SigmaXL -> Data Manipulation -> Stack Subgroups Across Rows
- A new window named “Stack Subgroups” pops up with the selected data range appearing in the box under “Please select your data”

- Click “Next>>”
- A new window named “Stack Subgroups Across Rows” appears
- Select “Appraiser A”, “Appraiser B” and “Appraiser C” as “Numeric Data Variables”

- Select “Part” and “Reference” as the “Additional Category Columns”
- Enter “Assessed Result” as the “Stacked Data (Y) Column Heading (Optional)”
- Enter “Appraiser” as the “Category (X) Column Heading (Optional)”

- Click “OK>>”
- The stacked data are created in a new worksheet.

Step 2: Run an MSA using SigmaXL

- Select the entire range of the data (“Part”, “Reference”, “Appraiser” and “Assessed Result” columns)
- Click SigmaXL -> Measurement Systems Analysis -> Attribute MSA (Binary)
- A new window named “Attribute MSA (Binary)” pops up with the selected data range appearing in the box under “Please select your data”

- Click “Next>>”
- A new window named “Attribute MSA (Binary)” appears
- Select “Part” as “Part/Sample”

- Select “Appraiser” as “Appraiser”
- Select “Assessed Result” as “Assessed Result”
- Select “Reference” as “True Standard (Optional)”
- Select “1” as “Good Level”

- Click “OK”

The MSA results appear in the newly generated tab “Att_MSA_Binary”.

The rater scores represent how the raters agree with themselves. Appraiser A, for instance, agreed with himself on 84% of the measurements made.

The important numbers are called out here. Of the 50 total measurements performed, for 78% of those (39) the appraisers agreed with both themselves and the other appraisers.

*Kappa statistic* is a coefficient indicating the agreement percentage above the expected agreement by chance. Kappa ranges from −1 (perfect disagreement) to 1 (perfect agreement). When the observed agreement is less than the chance agreement, Kappa is negative. When the observed agreement is greater than the chance agreement, Kappa is positive. Rule of thumb: If Kappa is greater than 0.7, the measurement system is acceptable. If Kappa is greater than 0.9, the measurement system is excellent.

Model summary: In all cases the Kappa indicates that the measurement system is acceptable.

The post Attribute MSA with Minitab appeared first on Deploy OpEx.

Data File: “Attribute MSA” tab in “Sample Data.xlsx” (an example in the AIAG MSA Reference Manual, 3rd Edition).

Steps in Minitab to run an attribute MSA:

Step 1: Reorganize the original data into four new columns (i.e., Appraiser, Assessed Result, Part, and Reference).

- Click Data → Stack → Blocks of Columns.
- A new window named “Stack Blocks of Columns” pops up.
- Select “Appraiser A,” “Part,” and “Reference” as block one.
- Select “Appraiser B,” “Part,” and “Reference” as block two.
- Select “Appraiser C,” “Part,” and “Reference” as block three.
- Select the radio button of “New worksheet” and name the sheet “Data.”
- Check the box “Use variable names in subscript column.”
- Click “OK.”

- The stacked columns are created in the new worksheet named “Data.”

- Name the four columns from left to right in worksheet “Data”: Appraiser, Assessed Result, Part, and Reference.

Step 2: Run an MSA using Minitab

- Click Stat → Quality Tools → Attribute Agreement Analysis.
- A new window named “Attribute Agreement Analysis” pops up.
- Click in the blank box next to “Attribute column” and the variables appear in the list box on the left.
- Select “Assessed Result” as “Attribute.”
- Select “Part” as “Sample.”
- Select “Appraiser” as “Appraisers.”
- Select “Reference” as “Known standard/attribute.”

- Click the “Options” button and another window named “Attribute Agreement Analysis – Options” pops up.
- Check the boxes of both “Calculate Cohen’s kappa if appropriate” and “Display disagreement table.”

- Click “OK” in the window “Attribute Agreement Analysis – Options.”
- Click “OK” in the window “Attribute Agreement Analysis.”
- The MSA results appear in the newly-generated window and the session window.

The rater scores represent how the raters agree with themselves. Appraiser A, for instance, agreed with himself on 84% of the measurements made.

The important numbers are called out here. Of the 50 total measurements performed, for 78% of those (39) the appraisers agreed with both themselves and the other appraisers.

*Kappa statistic* is a coefficient indicating the agreement percentage above the expected agreement by chance. Kappa ranges from −1 (perfect disagreement) to 1 (perfect agreement). When the observed agreement is less than the chance agreement, Kappa is negative. When the observed agreement is greater than the chance agreement, Kappa is positive. Rule of thumb: If Kappa is greater than 0.7, the measurement system is acceptable. If Kappa is greater than 0.9, the measurement system is excellent.

Model summary: In all cases the Kappa indicates that the measurement system is acceptable.

The post Variable Gage R&R with SigmaXL appeared first on Deploy OpEx.

Variable Gage Repeatability & Reproducibility (Gage R&R) is a method used to analyze the variability of a measurement system by partitioning the variation of the measurements using ANOVA (Analysis of Variance). Whenever something is measured repeatedly or by different people or processes, the results of the measurements will vary. Variation comes from two primary sources:

- Differences between the parts being measured
- The measurement system

We can use a Gage R&R to conduct a measurement system analysis to determine what portion of the variability comes from the parts and what portion comes from the measurement system. There are key study results that help us determine the components of variation within our measurement system.

Variable Gage R&R primarily addresses the precision aspect of a measurement system. It is a tool used to understand if a measurement system can repeat and reproduce and if not, help us determine what aspect of the measurement system is broken so that we can fix it.

Gage R&R requires a deliberate study with parts, appraisers and measurements. Measurement data must be collected and analyzed to determine if the measurement system is acceptable. Typically Variable Gage R&Rs are conducted by 3 appraisers measuring 10 samples 3 times each. Then, the results can be compared to determine where the variability is concentrated. The optimal result is for the measurement variability to be due to the parts.
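
The variance partitioning behind a crossed Gage R&R study can be sketched with the standard ANOVA expected-mean-square formulas. This is a simplified illustration (the array layout and function name are assumptions for the example), not SigmaXL’s implementation:

```python
import numpy as np

def gage_rr(measurements):
    """ANOVA-based crossed Gage R&R sketch.

    `measurements` has shape (parts, operators, replicates).
    Returns %Contribution and %StudyVar for total Gage R&R, following
    the usual AIAG variance-component formulas.
    """
    y = np.asarray(measurements, dtype=float)
    p, o, r = y.shape
    grand = y.mean()
    part_means = y.mean(axis=(1, 2))
    oper_means = y.mean(axis=(0, 2))
    cell_means = y.mean(axis=2)

    # Sums of squares for the two-way crossed design with replication
    ss_part = o * r * np.sum((part_means - grand) ** 2)
    ss_oper = p * r * np.sum((oper_means - grand) ** 2)
    ss_int = r * np.sum((cell_means - part_means[:, None]
                         - oper_means[None, :] + grand) ** 2)
    ss_total = np.sum((y - grand) ** 2)
    ss_rep = ss_total - ss_part - ss_oper - ss_int

    ms_part = ss_part / (p - 1)
    ms_oper = ss_oper / (o - 1)
    ms_int = ss_int / ((p - 1) * (o - 1))
    ms_rep = ss_rep / (p * o * (r - 1))

    # Expected-mean-square solutions, clamped at zero
    var_rep = ms_rep                                  # repeatability
    var_int = max((ms_int - ms_rep) / r, 0.0)         # operator*part
    var_oper = max((ms_oper - ms_int) / (p * r), 0.0) # reproducibility
    var_part = max((ms_part - ms_int) / (o * r), 0.0) # part-to-part

    var_grr = var_rep + var_int + var_oper
    var_total = var_grr + var_part
    return {
        "pct_contribution": 100 * var_grr / var_total,
        "pct_study_var": 100 * np.sqrt(var_grr / var_total),
    }
```

Because %StudyVar is the square root of the variance ratio, it is always larger than %Contribution when the measurement system explains less than 100% of the variation, which is why the two measures carry different acceptance thresholds.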

Measurement System Analysis (MSA) is a systematic method to identify and analyze the variation components of a measurement system. It is a mandatory step in any Six Sigma project to ensure the data are reliable before making any data-based decisions. An MSA is the checkpoint of data quality before we start any further analysis and draw any conclusions from the data. Some good examples of data-based analyses for which an MSA should be a prerequisite:

- Correlation analysis
- Regression analysis
- Hypothesis testing
- Analysis of variance
- Design of experiments
- Statistical process control

You will see where and how the analysis techniques listed above are used. It is critical to know that any variation, anomalies, or trends found in your analysis are actually due to the data and not due to the inaccuracies or inadequacies of a measurement system; therefore an MSA is vital.

A measurement system is a process used to obtain data and quantify a part, product or process. Data obtained with a measurement device or measurement system are the observed values. Observed values comprise two elements:

- True Value = Actual value of the measured part
- Measurement Error = Error introduced by the measurement system.

The true value is what we are ultimately trying to determine through the measurement system. It reflects the true measurement of the part or performance of the process.

Measurement error is the variation introduced by the measurement system. It is the bias or inaccuracy of the measurement device or measurement process.

The observed value is what the measurement system is telling us. It is the measured value obtained by the measurement system. Observed values are represented in various types of measures, which can be categorized into two primary types: discrete and continuous. Continuous measurements are represented by measures of weight, height, money and other ratio measures. Discrete measures, on the other hand, are categorical, such as Red/Yellow/Green, Yes/No or ratings of 1–10.
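
The decomposition of an observed value into true value plus measurement error can be illustrated with a small simulation (hypothetical sigmas, for intuition only): because true part-to-part variation and measurement error are independent, their variances add in the observed values.

```python
import numpy as np

def simulate_observed(n=10_000, sigma_part=1.0, sigma_meas=0.3, seed=0):
    """Illustrate Observed = True value + Measurement error:
    for independent sources the variances add, so
    var(observed) ~= sigma_part**2 + sigma_meas**2."""
    rng = np.random.default_rng(seed)
    true_values = rng.normal(50.0, sigma_part, n)   # actual part values
    error = rng.normal(0.0, sigma_meas, n)          # measurement system noise
    observed = true_values + error
    return observed.var(ddof=1), sigma_part**2 + sigma_meas**2
```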

The guidelines for what makes a measurement system acceptable or unacceptable can vary depending on an organization’s tolerance or appetite for risk. The common guidelines used for interpretation are published by the Automotive Industry Action Group (AIAG). These guidelines are considered the standard for interpreting the results of a measurement system analysis using Variable Gage R&R. Table 1.0 summarizes the AIAG standards.

Data File: “Variable MSA” tab in “Sample Data.xlsx”

Let’s take a look at an example of a Variable MSA using the data in the “Variable MSA” tab in your “Sample Data.xlsx” file. In this exercise we will first walk through how to set up your study using SigmaXL and then we will perform a Variable MSA using 3 operators who all measured 10 parts three times each. The part numbers, operators and measurement trials are all generic so that you can apply the concept to your given industry. First we need to set up the study:

Step 1: Set up your data collection worksheet

- Click SigmaXL -> Measurement Systems Analysis -> Create Gage R&R (Crossed) Worksheet
- A new window named “Create Gage R&R (Crossed) Worksheet” appears
- Enter 10 as the “Number of Parts/Samples”
- Enter 3 as the “Number of Operators/Appraisers”
- Enter 3 as the “Number of Replicates/Trials”
- Uncheck the checkboxes for both “Randomize Parts/Sample” and “Randomize Operators/Appraisers”

- Click “OK>>”
- A new tab named “Gage R&R (Crossed) WKS” is generated

Step 2: Data collection

In the newly generated data table, SigmaXL has provided the template where we can organize the data. We will need to enter test results into the measurement column. In the “Sample Data.xlsx” file under the “Variable MSA” tab there are already “Measurement” values collected by the three operators (i.e., operator A, B, and C). The data are listed in Run order.

Step 3: Enter the data into the MSA template generated in SigmaXL

Transfer the data from the “Measurement” column in “Variable MSA” tab of “Sample Data.xlsx” file to the last column in the MSA template that SigmaXL generated from the steps above.

Step 4: Analyze the Gage R&R

- Click SigmaXL -> Measurement Systems Analysis -> Analyze Gage R&R (Crossed)
- A new window named “Analyze Gage R&R (Crossed)” appears with the data range automatically selected in the box right below “Please select your data”

- Click “Next>>”
- A new window also named “Analyze Gage R&R (Crossed)” pops up.
- Select “Part” as “Part”
- Select “Operator” as “Operator”
- Select “Measurement” as “Measurement”
- Enter 5.15 as the “Standard Deviation Multiplier” and 95% as the “Confidence Level”

- Click “OK”

Step 5: Interpret the MSA results

Model summary: The result of this Gage R&R study leaves room for consideration on one key measure. As noted in previous pages, the targeted % Contribution of R&R should be less than 9% and % Study Variation less than 30%. With % Contribution at 7.76%, it is below the 9% unacceptable threshold and, similarly, % Study Variation at 26.86% is below the 30% threshold, but this result is at best marginal and should be heavily scrutinized by the business before concluding that the measurement system does not warrant further improvement.

The post Variable Gage R&R with Minitab appeared first on Deploy OpEx.

Whenever something is measured repeatedly or by different people or processes, the results of the measurements will vary. Variation comes from two primary sources:

- Differences between the parts being measured
- The measurement system

We can use a variable Gage R&R to conduct a measurement system analysis to determine what portion of the variability comes from the parts and what portion comes from the measurement system. There are key study results that help us determine the components of variation within our measurement system.

Measurement System Analysis (MSA) is a systematic method to identify and analyze the variation components of a measurement system. It is a mandatory step in any Six Sigma project to ensure the data are reliable before making any data-based decisions. An MSA is the checkpoint of data quality before we start any further analysis and draw any conclusions from the data. Some good examples of data-based analyses for which an MSA should be a prerequisite:

- Correlation analysis
- Regression analysis
- Hypothesis testing
- Analysis of variance
- Design of experiments
- Statistical process control

You will see where and how the analysis techniques listed above are used. It is critical to know that any variation, anomalies, or trends found in your analysis are actually due to the data and not due to the inaccuracies or inadequacies of a measurement system; therefore an MSA is vital.

A measurement system is a process used to obtain data and quantify a part, product or process. Data obtained with a measurement device or measurement system are the observed values. Observed values comprise two elements:

- True Value = Actual value of the measured part
- Measurement Error = Error introduced by the measurement system.

The true value is what we are ultimately trying to determine through the measurement system. It reflects the true measurement of the part or performance of the process.

Measurement error is the variation introduced by the measurement system. It is the bias or inaccuracy of the measurement device or measurement process.

The observed value is what the measurement system is telling us. It is the measured value obtained by the measurement system. Observed values are represented in various types of measures, which can be categorized into two primary types: discrete and continuous. Continuous measurements are represented by measures of weight, height, money and other ratio measures. Discrete measures, on the other hand, are categorical, such as Red/Yellow/Green, Yes/No or ratings of 1–10.

The guidelines for what makes a measurement system acceptable or unacceptable can vary depending on an organization’s tolerance or appetite for risk. The common guidelines used for interpretation are published by the Automotive Industry Action Group (AIAG). These guidelines are considered the standard for interpreting the results of a measurement system analysis using Variable Gage R&R. Table 1.0 summarizes the AIAG standards.

Data File: “Variable MSA” tab in “Sample Data.xlsx”

Let’s take a look at an example of a Variable MSA using the data in the Variable MSA tab in your “Sample Data.xlsx” file. In this exercise we will first walk through how to set up your study using Minitab and then we will perform a Variable Gage MSA using 3 operators who all measured 10 parts three times each. The part numbers and operators and measurement trials are all generic so that you can apply the concept to your given industry. First we need to set up the study:

Step 1: Create the Gage R&R study worksheet

- Click on Stat → Quality Tools → Gage R&R → Create Gage R&R Study Worksheet.
- A new window named “Create Gage R&R Study Worksheet” pops up.

- Select 10 as the “Number of Parts.”
- Select 3 as the “Number of Operators.”
- Select 3 as the “Number of Replicates.”
- Enter the part names (e.g., Part 01, Part 02, and Part 03).
- Enter the operator names (e.g., Operator A, Operator B, Operator C).
- Click the “Options” button; another window named “Create Gage R&R Study Worksheet – Options” pops up.

- Select the radio button “Do not randomize.”
- Click “OK” in the window “Create Gage R&R Study Worksheet – Options.”
- Click “OK” in the window “Create Gage R&R Study Worksheet.”
- A new data table is generated.

Step 2: Data collection

In the newly-generated data table, Minitab has provided the data layout for your data collection for your variable MSA study. We have added the header “Measurement” for this example. You would have to do something similar.

When you conduct your variable MSA in your work environment, it will be necessary to set up your study just as we have in the previous steps so that you can collect your measurement data properly. However, for our purposes today, we have provided you with an MSA that is set up with data already collected. We will use the “Variable MSA” tab in “Sample Data.xlsx” for the next steps.

Step 3: Activate the Minitab worksheet with our Variable MSA data prepopulated.

Step 4: Implement Gage R&R

- Click Stat → Quality Tools → Gage Study → Gage R&R Study (Crossed).
- A new window named “Gage R&R Study (Crossed)” appears.
- Select “Part” as “Part numbers.”
- Select “Operator” as “Operators.”
- Select “Measurement” as “Measurement data.”

- Click on the “Options” button and another new window named “Gage R&R Study (Crossed) – ANOVA Options” pops up.
- Enter 5.15 as the “Study variation (number of standard deviations)”.

The value 5.15 is the recommended standard deviation multiplier by the Automotive Industry Action Group (AIAG). It corresponds to 99% of data in the normal distribution. If we use 6 as the standard deviation multiplier, it corresponds to 99.73% of the data in the normal distribution.
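
These coverage figures can be verified directly from the normal CDF; a quick illustrative check (the `coverage` helper is hypothetical):

```python
from scipy.stats import norm

def coverage(k):
    """Fraction of a normal distribution within +/- k/2 standard deviations
    of the mean, i.e. the spread captured by a k-sigma study variation."""
    return norm.cdf(k / 2) - norm.cdf(-k / 2)
```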

- Click “OK” in the window “Gage R&R Study (Crossed) – ANOVA Options.”
- Click “OK” in the window “Gage R&R Study (Crossed).”
- The MSA analysis results appear in the new window and the session window.

Step 5: Interpret the MSA results

The result of this Gage R&R study leaves room for consideration on one key measure. As noted in previous pages, the targeted % Contribution of R&R should be less than 9% and % Study Variation less than 30%. With % Contribution at 7.76%, it is below the 9% unacceptable threshold and, similarly, % Study Variation at 26.86% is below the 30% threshold, but this result is at best marginal and should be heavily scrutinized by the business before concluding that the measurement system does not warrant further improvement.

Visual evaluation of this measurement system is another effective method of evaluation, but it can at times be misleading without the statistics to support it. Diagnosing the mean plots above should help in considering measurement system acceptability; you may benefit from taking a closer look at Operator C.

Model summary: Visual evaluation of this measurement system alone might mislead you into concluding that the gage study passes (most of the variation seems to be part to part, which is what we hope to see). However, an experienced practitioner will note things such as the range chart being out of control. This may provide clues about what to look for when trying to further diagnose the validity of this measurement system. An out-of-control range chart in a variable MSA suggests that one or all operators are too inconsistent in their repeated measures, causing wide ranges and out-of-control conditions.

Whenever a range chart is out of control, the accuracy of the Xbar chart is automatically called into question. You will learn in future lessons that the control limits of an Xbar chart are calculated using the mean of the R chart. If the R chart shows out-of-control conditions, then that mean is likely misrepresented and any calculation using it should be questioned.
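
The dependency of the Xbar chart on the R chart can be made explicit. The sketch below uses the published control chart constants for subgroups of size 3 (chosen to match the 3 trials in this study; the function name and layout are assumptions):

```python
import numpy as np

# Standard control chart constants for subgroup size n = 3
A2, D3, D4 = 1.023, 0.0, 2.574

def xbar_r_limits(subgroups):
    """Compute Xbar and R chart limits for subgroups of size 3.
    The Xbar limits depend on Rbar (via A2 * Rbar), which is why an
    out-of-control R chart undermines trust in the Xbar chart."""
    x = np.asarray(subgroups, dtype=float)     # shape (k, 3)
    xbar = x.mean(axis=1)                      # subgroup means
    r = x.max(axis=1) - x.min(axis=1)          # subgroup ranges
    xbarbar, rbar = xbar.mean(), r.mean()
    return {
        "xbar": (xbarbar - A2 * rbar, xbarbar, xbarbar + A2 * rbar),
        "r": (D3 * rbar, rbar, D4 * rbar),
    }
```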

The post Variable Gage R&R with JMP appeared first on Deploy OpEx.

Variable Gage Repeatability & Reproducibility (Gage R&R) is a method used to analyze the variability of a measurement system by partitioning the variation of the measurements using ANOVA (Analysis of Variance). Whenever something is measured repeatedly or by different people or processes, the results of the measurements will vary. Variation comes from two primary sources:

- Differences between the parts being measured
- The measurement system

We can use a Gage R&R to conduct a measurement system analysis to determine what portion of the variability comes from the parts and what portion comes from the measurement system. There are key study results that help us determine the components of variation within our measurement system.

Variable Gage R&R primarily addresses the precision aspect of a measurement system. It is a tool used to understand if a measurement system can repeat and reproduce and if not, help us determine what aspect of the measurement system is broken so that we can fix it.

Gage R&R requires a deliberate study with parts, appraisers and measurements. Measurement data must be collected and analyzed to determine if the measurement system is acceptable. Typically Variable Gage R&Rs are conducted by 3 appraisers measuring 10 samples 3 times each. Then, the results can be compared to determine where the variability is concentrated. The optimal result is for the measurement variability to be due to the parts.

Measurement System Analysis (MSA) is a systematic method to identify and analyze the variation components of a measurement system. It is a mandatory step in any Six Sigma project to ensure the data are reliable before making any data-based decisions. An MSA is the checkpoint of data quality before we start any further analysis and draw any conclusions from the data. Some good examples of data-based analyses for which an MSA should be a prerequisite:

- Correlation analysis
- Regression analysis
- Hypothesis testing
- Analysis of variance
- Design of experiments
- Statistical process control

You will see where and how the analysis techniques listed above are used. It is critical to know that any variation, anomalies, or trends found in your analysis are actually due to the data and not due to the inaccuracies or inadequacies of a measurement system; therefore an MSA is vital.

A measurement system is a process used to obtain data and quantify a part, product or process. Data obtained with a measurement device or measurement system are the observed values. Observed values are composed of two elements:

- True Value = the actual value of the measured part
- Measurement Error = the error introduced by the measurement system

The true value is what we are ultimately trying to determine through the measurement system. It reflects the true measurement of the part or performance of the process.

Measurement error is the variation introduced by the measurement system. It is the bias or inaccuracy of the measurement device or measurement process.
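When the measurement error is independent of the part being measured, the variance of the observed values is the sum of the true part variance and the measurement-system variance. A minimal simulation (all numbers hypothetical) makes this concrete:

```python
import random
import statistics

random.seed(1)
n = 100_000

# True part values and an imperfect gage with bias 0.5 and spread 1.0
true_vals = [random.gauss(100, 3) for _ in range(n)]
observed = [t + random.gauss(0.5, 1.0) for t in true_vals]

# Independent errors: var(observed) ≈ var(true) + var(error) = 9 + 1
print(round(statistics.pvariance(true_vals), 2))
print(round(statistics.pvariance(observed), 2))
```

The measurement system inflates the observed spread and shifts the average by its bias, which is why an unacceptable gage can make a capable process look incapable.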

The observed value is what the measurement system is telling us. It is the measured value obtained by the measurement system. Observed values are represented in various types of measures, which can be categorized into two primary types: discrete and continuous. Continuous measurements are represented by measures of weight, height, money and other ratio-type measures. Discrete measures, on the other hand, are categorical, such as Red/Yellow/Green, Yes/No or ratings of 1–10.

The guidelines for acceptable or unacceptable measurement systems can vary depending on an organization's tolerance or appetite for risk. The common guidelines used for interpretation are published by the Automotive Industry Action Group (AIAG). These guidelines are considered standard for interpreting the results of a measurement system analysis using Variable Gage R&R and are commonly summarized as:

- % Contribution: under 1% acceptable; 1% to 9% marginal; over 9% unacceptable
- % Study Variation: under 10% acceptable; 10% to 30% marginal; over 30% unacceptable

Use JMP to Implement a Variable MSA

Data File: “VariableMSA.jmp”

Let’s take a look at an example of a Variable MSA using the data in the Variable MSA tab in your “Sample Data.xlsx” file. In this exercise we will first walk through how to set up the study in JMP and then perform a Variable MSA using 3 operators who each measured 10 parts three times. The part numbers, operators and measurement trials are all generic so that you can apply the concept to your given industry. First we need to set up the study.

Step 1: Initiate the MSA study

- Click: Analyze > Quality & Process > Measurement Systems Analysis
- Select “Measurement” as “Y, Response”
- Select “Operator” as “X, Grouping”
- Select “Part” as “Sample, Part ID”
- Select “Gauge R&R” as the “MSA Method”
- Select “Crossed” as “Model Type”

- Click “OK”

Step 2: Create the variability chart for measurement

- Click on the red triangle button next to “Variability Gauge”
- Click “Connect Cell Means” to link the average measurement for each part together
- Click “Show Group Means” to display the average for each appraiser (solid line)
- Click “Show Grand Mean” to display the average for the entire data set (dotted line)

Step 3: Implement Gauge R&R

- Click on the red triangle button next to “Variability Gauge”
- Click “Gauge Studies” -> “Gauge RR”
- A window named “Enter/Verify Gauge R&R Specifications” opens
- Enter the specified value into the “K, Sigma Multiplier” box. In this example, we use 5.15, which assumes a 99% spread of the data

- Click “OK”
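The sigma multiplier K defines how wide a spread the study variation covers: mean ± K/2 standard deviations. For a normal distribution, K = 5.15 captures about 99% of the values, while the more recent AIAG convention of K = 6 captures about 99.73%. A quick check with the Python standard library (the function name is ours):

```python
from statistics import NormalDist

def spread_coverage(k):
    """Fraction of a normal distribution falling within mean ± k/2 standard deviations."""
    z = k / 2
    return NormalDist().cdf(z) - NormalDist().cdf(-z)

print(f"K = 5.15 covers {spread_coverage(5.15):.2%} of the distribution")  # ~99%
print(f"K = 6.00 covers {spread_coverage(6.00):.2%} of the distribution")  # ~99.73%
```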

Step 4: Create Mean Plots for further analysis

- Click on the red triangle button next to “Variability Gauge”
- Click “Gauge Studies” -> “Gauge R&R Plots” -> “Mean Plots”
- Three plots appear

Model summary: The result of this Gage R&R study leaves room for consideration on one key measure. As noted in previous pages, the target for % Contribution of Gage R&R is less than 9% and for % Study Variation less than 30%. At 7%, % Contribution is below the 9% unacceptable threshold, and % Study Variation at 26.1476% is likewise below its 30% threshold. That result, however, is at best marginal and should be heavily scrutinized by the business before concluding that the measurement system does not warrant further improvement.

Visual evaluation of this measurement system is another effective method of evaluation, but it can at times be misleading without the statistics to support it. Diagnosing the mean plots above should help in the consideration of measurement system acceptability; in this case, you may benefit from taking a closer look at operator C.

The post Variable Gage R&R with JMP appeared first on Deploy OpEx.

A run chart is a chart used to present data in time order. These charts capture process performance over time. The X axis indicates time and the Y axis shows the observed values. A run chart is similar to a scatter plot in that it shows the relationship between X and Y. Run charts differ, however, because they show how the Y variable changes with an X variable of time.

Run charts look similar to control charts except that run charts do not have control limits and they are much easier to produce than a control chart. A run chart is often used to identify anomalies in the data and discover patterns over time. They help to identify trends, cycles, seasonality and other anomalies.
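One common way run-chart software screens for non-random behavior is to count runs about the median: maximal stretches of consecutive points on the same side of the median line. Far fewer runs than a random series would produce suggests clustering or a shift; far more suggests oscillation. A small sketch with hypothetical data (the function name is ours):

```python
from statistics import median

def runs_about_median(series):
    """Count maximal runs of consecutive points on the same side of the median
    (points falling exactly on the median are skipped)."""
    med = median(series)
    sides = [v > med for v in series if v != med]
    return 1 + sum(1 for a, b in zip(sides, sides[1:]) if a != b)

# Hypothetical measurements: a random-looking series vs. one with a mean shift
stable = [5.1, 4.8, 5.3, 4.9, 5.2, 4.7, 5.0, 5.4, 4.6, 5.2]
shifted = [4.6, 4.7, 4.8, 4.7, 4.6, 5.3, 5.4, 5.2, 5.5, 5.4]

print(runs_about_median(stable))   # many runs: points cross the median freely
print(runs_about_median(shifted))  # very few runs: evidence of a shift/clustering
```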

Data File: “Run Chart” tab in “Sample Data.xlsx”

- Select the entire range of the data (“Measurement”, “Cycle” and “Trend”).
- Click SigmaXL -> Graphical Tools -> Run Chart

A new window named “Run Chart” pops up, with the selected range of data appearing in the box under “Please select your data”.

- Click “Next>>”
- A new window also named “Run Chart” appears
- Select “Measurement” as the “Numeric Data Variable (Y)”

- Click “OK”
- The run chart appears automatically in the tab “Run Chart (1)”

The figure above is a run chart created with SigmaXL. The time series displayed by this chart appears stable. There are no extreme outliers, no visible trending or seasonal patterns. The data points seem to vary randomly over time.

Now, let us look at another example which may give us a different perspective. We will create another run chart using the data listed in the column labeled “Cycle”. This column is in the same file used to generate the figure above. Follow the steps used for the first run chart and instead of using “Measurement” use “Cycle” in the Run Chart dialog box pictured in the figure below.

In the figure above, the data points are clearly exhibiting a pattern. It could be seasonal or it could be something cyclical. Imagine that the data points are taken monthly and this is a process performing over a period of 2.5 years. Perhaps the data points represent the number of customers buying new homes. The home buying market tends to peak in the summer months and dies down in the winter.

Using the same data tab, let’s create a final run chart. This time use the “Trend” data. Again, follow the steps outlined previously to generate a run chart.

In this example, the process starts out randomly, but after the seventh data point almost every data point has a lower value than the one before it. This clearly illustrates a downward trend. What might this represent? Perhaps a process winding down? Product sales at the end of a product’s life cycle? Defects decreasing after introducing a process improvement?

Model summary: It should be clearly evident through our review of Histograms, Scatterplots and Run Charts that there is great value in “visualizing” the data. Graphical displays of data can be very telling and offer excellent information.

The post Run Chart with SigmaXL appeared first on Deploy OpEx.

A run chart is a chart used to present data in time order. These charts capture process performance over time. The X axis indicates time and the Y axis shows the observed values. A run chart is similar to a scatter plot in that it shows the relationship between X and Y. Run charts differ, however, because they show how the Y variable changes with an X variable of time.

Run charts look similar to control charts except that run charts do not have control limits and they are much easier to produce than a control chart. A run chart is often used to identify anomalies in the data and discover patterns over time. They help to identify trends, cycles, seasonality and other anomalies.

Steps to plot a run chart in Minitab:

Data File: “Run Chart” tab in “Sample Data.xlsx”

- Click Stat → Quality Tools → Run Chart.
- A new window named “Run Chart” pops up.
- Select “Measurement” as the “Single Column.”
- Enter “1” as the “Subgroup Size.”

- Click “OK.”

The figure above is a run chart created with Minitab. The time series displayed by this chart appears stable. There are no extreme outliers, no visible trending or seasonal patterns. The data points seem to vary randomly over time.

Now, let us look at another example which may give us a different perspective. We will create another run chart using the data listed in the column labeled “Cycle”. This column is in the same file used to generate the figure above. Follow the steps used for the first run chart and instead of using “Measurement” use “Cycle” in the Run Chart dialog box pictured above.

In the figure above, the data points are clearly exhibiting a pattern. It could be seasonal or it could be something cyclical. Imagine that the data points are taken monthly and this is a process performing over a period of 2.5 years. Perhaps the data points represent the number of customers buying new homes. The home buying market tends to peak in the summer months and dies down in the winter.

Using the same data tab, let’s create a final run chart. This time use the “Trend” data. Again, follow the steps outlined previously to generate a run chart.

In this example, the process starts out randomly, but after the seventh data point almost every data point has a lower value than the one before it. This clearly illustrates a downward trend. What might this represent? Perhaps a process winding down? Product sales at the end of a product’s life cycle? Defects decreasing after introducing a process improvement?
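Minitab’s run chart quantifies this kind of pattern with tests based on runs up or down: maximal stretches of consecutive increases or decreases. A random series of n points is expected to produce roughly (2n − 1)/3 such runs, and far fewer indicates a trend. A rough sketch of that idea (the function name and data are hypothetical):

```python
def runs_up_down(series):
    """Count maximal runs of consecutive increases or decreases (ties are skipped)."""
    signs = [1 if b > a else -1 for a, b in zip(series, series[1:]) if b != a]
    return 1 + sum(1 for s, t in zip(signs, signs[1:]) if s != t)

# Hypothetical series with a clear downward drift after the first two points
trend = [8.2, 8.4, 8.1, 8.0, 7.7, 7.5, 7.4, 7.0, 6.8, 6.5, 6.1, 6.0]

observed = runs_up_down(trend)
expected = (2 * len(trend) - 1) / 3   # ≈ 7.7 runs for a random 12-point series
print(observed, expected)             # far fewer runs than expected: a trend
```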

Model summary: It should be clearly evident through our review of Histograms, Scatterplots and Run Charts that there is great value in “visualizing” the data. Graphical displays of data can be very telling and offer excellent information.

The post Run Chart with Minitab appeared first on Deploy OpEx.
