Statistics for qc 2
by institute-of-validation-technology

Transcript
  • 1. Basic Statistics for Quality Control and Validation Studies: Session 2
    •  Steven S. Kuwahara, Ph.D.
    •  GXP BioTechnology, LLC
    •  PMB 506, 1669-2 Hollenbeck Ave.
    •  Sunnyvale, CA 94087-5402
    •  Tel. & FAX (408) 530-9338
    •  E-Mail: [email protected]
    •  Website: www.gxpbiotech.org
    ValWkPHL1012S2
  • 2. Sample Number Determination 1.
    •  One of the major difficulties in setting the number of samples to take lies in determining the levels of risk that are acceptable. It is in this area that managerial inaction is often found, leaving a QC supervisor or senior analyst to make the decision on the level of risk the company will accept. If this happens, management has failed in its responsibility.
  • 3. Sample Number Determination 2.
    •  The problem is that all sampling plans, being statistical in nature, will possess some risk. For instance, if we randomly draw a new sample from a population, we could predict that a test result from that sample will fall within ±3σ of the true average 99.7% of the time, but there is still 0.3% (3 parts per thousand) of the time when the result will be outside the range for no reason other than random error. Thus a good lot could be rejected. This is known as a false positive, or a Type I error.
    •  This is the type of error that is most commonly considered, but there is a Type II error also.
  • 4. Sample Number Determination 3.
    •  False positives occur when you declare that there is a difference when one does not really exist (example given in the previous slide). This is sometimes called the producer's risk, because the producer will dump a lot that was okay.
    •  False negatives occur when you declare that a difference does not exist when, in fact, the difference does exist. This is sometimes called the customer's risk, because the customer ends up with a defective product. It is also known as a Type II error.
  • 5. SIMPLIFIED FORM OF n CALCULATION
    •  n for an x̄ to compare with a µ:
       t = (x̄ − µ) / (s/√n)
       x̄ − µ = t(s/√n)
       n(x̄ − µ)² = t²s²
       n = t²s² / Δ², where Δ = x̄ − µ
  • 6. EXAMPLE OF SIMPLIFIED METHOD WITH ITERATION
    •  Δ = 51 − 50 = 1, s = ±2, z0.025 = 1.96
    •  n = (1.96)²(2)² / (1)² = 3.8416 × 4 = 15.4 ~ 16
    •  t0.025,15 = 2.131; (2.131)² = 4.541161; n = 4.541161 × 4 = 18.16 ~ 19
    •  t0.025,18 = 2.101; (2.101)² = 4.414201; n = 4.414201 × 4 = 17.66 ~ 18
    •  t0.025,17 = 2.110; (2.110)² = 4.4521; n = 4.4521 × 4 = 17.81 ~ 18
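The iteration above can be sketched in code. This is a minimal sketch, not from the original slides: the hard-coded t-values are the two-sided 95% critical values quoted on the slide (a full t-table or a statistics library would normally supply them), and the function name is my own.

```python
import math

# Two-sided 95% critical t-values for the df visited in this example,
# copied from the slide's iteration.
T_TABLE = {15: 2.131, 17: 2.110, 18: 2.101}

def sample_size_for_mean(delta, s, z=1.96, max_iter=10):
    """Iterate n = t^2 * s^2 / delta^2, starting from the normal z value
    and refining with t at df = n - 1 until n stops changing."""
    n = math.ceil(z ** 2 * s ** 2 / delta ** 2)
    for _ in range(max_iter):
        t = T_TABLE[n - 1]                     # critical t at df = n - 1
        n_new = math.ceil(t ** 2 * s ** 2 / delta ** 2)
        if n_new == n:
            return n
        n = n_new
    return n

n = sample_size_for_mean(delta=1.0, s=2.0)     # converges to 18, as on the slide
```

The first pass uses z and overshoots to 16; substituting t at the resulting df pushes n up to 19, and two more passes settle it at 18.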
  • 7. Sample Number Determination 6.
    •  Because of the need to define risk and to consider the level of variation that is present, sampling plans that do not allow for these factors are not valid.
    •  Examples of these are: take 10% of the lot below N = 200 and 5% thereafter. The more famous one is to take √N + 1 samples.
  • 8. DEVELOPMENT OF A SAMPLING PLAN
    •  Consider a situation where a product must contain at least 42 mg/mL of a drug. At 41 mg/mL the product fails. Because we want to allow for test and product variability, we decide that we want a 95% probability of accepting a lot that is at 42 mg/mL, but only a 1% chance of accepting a lot that is at 41 mg/mL.
    •  For the sampling plan we need to know the number (n) of test results to take and average.
    •  We will accept the lot if the average (x̄) exceeds k mg/mL.
  • 9. SAMPLING PLAN CALCULATIONS A.
    •  You will need the table of the normal distribution for this.
    •  Suppose we have a lot that is at 42.0 mg/mL. Then x̄ would be normally distributed with µ = 42.0 and SEM = s/√n. We want x̄ > k, so:
       z = (x̄ − 42.0) / (s/√n) > (k − 42.0) / (s/√n)
    •  z = standard normal deviate (or "t" with ν = ∞). From a normal table, we want a probability of 0.95 that z will be greater than the "k" expression.
  • 10. SAMPLING PLAN CALCULATIONS A1.
    •  You will need a normal distribution table for this.
    •  z0.95,∞ = 1.645 (cumulative probability of 0.95).
    •  We know that z must exceed the "k" expression 95% of the time.
    •  We also know that k must be less than 42.0, since the smallest acceptable x̄ will be 42.0.
    •  Therefore, since k < 42.0:
       (k − 42.0) / (s/√n) = −1.645
  • 11. SAMPLING PLAN CALCULATIONS B.
    •  Now suppose that the correct value for the lot is 41.0 mg/mL. So now µ = 41.0 and we want a probability of only 0.01 that x̄ > k. Now:
       z = (x̄ − 41.0) / (s/√n) > (k − 41.0) / (s/√n) = 2.326
    •  Dividing the two conditions:
       (k − 42.0) / (k − 41.0) = −1.645 / 2.326 = −0.707
       k = 41.59
  • 12. SAMPLING PLAN CALCULATIONS C.
    •  Going back to the original equation for a passing result, and knowing that s = ±0.45 (from our assay validation studies?):
       (k − 42.0) / (s/√n) = (41.59 − 42.0) / (s/√n) = −0.41 / (s/√n) = −1.64
       √n (−0.41) = (−1.64)s, so n = ([−1.64][0.45])² / (−0.41)²
       n = 0.544644 / 0.1681 = 3.24
  • 13. SAMPLING PLAN
    •  The sampling plan now says: to have a 95% probability of accepting a lot at 42.0 mg/mL or better, and a 1% probability of accepting a lot at 41.0 mg/mL or worse, given a standard deviation of ±0.45 mg/mL for the test method, run four samples and average them. Accept the lot if the mean is 41.59 mg/mL or better.
    •  Note that the calculated value of n is close enough to 3 that some would argue for 3 samples.
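The whole plan can be reproduced numerically. A sketch under the slides' assumptions (one-sided normal deviates 1.645 and 2.326 from a standard table; the function names are mine, not from the slides):

```python
Z_ACCEPT = 1.645   # one-sided 95% normal deviate
Z_REJECT = 2.326   # one-sided 99% normal deviate

def acceptance_limit(mu_good, mu_bad):
    """Solve (k - mu_good)/(k - mu_bad) = -Z_ACCEPT/Z_REJECT for k."""
    r = -Z_ACCEPT / Z_REJECT
    return (mu_good - mu_bad * r) / (1 - r)

def n_required(k, mu_good, s):
    """From (k - mu_good)/(s/sqrt(n)) = -Z_ACCEPT: n = (Z*s/(mu_good - k))^2."""
    return (Z_ACCEPT * s / (mu_good - k)) ** 2

k = acceptance_limit(42.0, 41.0)    # ~ 41.59 mg/mL
n = n_required(k, 42.0, 0.45)       # ~ 3.2, rounded up to 4 samples
```

This reproduces k = 41.59 and n ≈ 3.2, which the slide rounds up to four samples.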
  • 14. SAMPLE SIZES FOR MEANS
    •  Suppose we want to determine µ using a test where we know the standard deviation (s) of the population.
    •  How many replicates will we need in the sample?
    •  The length of a confidence interval is L:
       L = 2ts/√n, so L² = 4t²s²/n and n = 4t²s²/L², where L = 2Δ
  • 15. Recalculation of Earlier Problem.
    •  n = 4t²s²/L², with L = 2, s = ±2, t0.95,∞ = 1.960 (two-sided)
       n = 4(2)²(1.96)² / (2)² = 61.4656 / 4 = 15.4, or 16
    •  Iterate: t0.95,15 = 2.131 → n = 18.16; t0.95,18 = 2.101 → n = 17.66; t0.95,17 = 2.110 → n = 17.81; so n = 18
  • 16. Sample size for estimating µ
    •  Note the statement: we are determining the % of drug present and we wish to bracket the true amount (µ%) by ±0.5% and do this with 95% confidence, so L = 2 × 0.5 = 1.0.
    •  We have 22 previous estimates, for which s = 0.45.
    •  Now at the 95% level of significance (1 − 0.95), t0.975,21 = 2.080.
       n = 4(2.080)²(0.45)² / (1.0)² = 3.5
  • 17. POOLED VARIANCE
       sp² = [(n1 − 1)s1² + (n2 − 1)s2²] / (n1 + n2 − 2)
  • 18. Calculating the Confidence Interval, sp
    •  The results of the four determinations are: 42.37%, 42.18%, 42.71%, 42.41%.
    •  x̄ = 42.42% and s = 0.22%, with (n2 − 1) = 3.
    •  Using the extra 3 df and s = 0.22% we have:
       sp = √{[21(0.45²) + 3(0.22²)] / (21 + 3)} = 0.43
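The pooled-variance formula generalizes to any number of groups. A minimal sketch (function name mine), checked against the slide's numbers:

```python
import math

def pooled_sd(sds, dfs):
    """Pooled standard deviation: sqrt(sum(df_i * s_i^2) / sum(df_i))."""
    return math.sqrt(sum(df * s ** 2 for s, df in zip(sds, dfs)) / sum(dfs))

# 21 df at s = 0.45 from the validation study, 3 df at s = 0.22 from the assay
sp = pooled_sd([0.45, 0.22], [21, 3])   # ~ 0.43, as on the slide
```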
  • 19. Calculating the Confidence Interval, L
    •  sp = s, the new estimate of the standard deviation, so a new confidence interval can be calculated with 24 df. t0.975,24 = 2.064.
       L = 2(2.064)(0.43) / √4
       L = 0.88752, rather than 1.0. C.I. = ±(L/2) = 0.44376, or ±0.45
       C.I.95% = 42.42 ± 0.45, or 41.97 - 42.87
    •  Note that n = 4, not 25, for calculating L.
  • 20. Sample Sizes for Estimating Standard Deviations. I.
    •  The problem is to choose n so that s, at n − 1 df, will be within a given ratio of σ (i.e., so that s/σ lies in a given range).
    •  Examples are found in reproducibility, repeatability, and intermediate precision measurements.
    •  s = experimentally determined standard deviation; σ = population (true) standard deviation; s² and σ² are the corresponding variances.
    •  You will use n to derive s.
  • 21. Sample Sizes for Estimating Standard Deviations. χ²
    •  This is the asymmetric χ² distribution for σ²:
       χ²n−1 = (n − 1)s² / σ², so (s/σ)² = χ²n−1 / (n − 1)
    •  Now as an example, assume n − 1 = 12. At 12 df, χ² will exceed 21.0261 5% of the time and it will exceed 5.2260 95% of the time. Therefore, 90% of the time χ² will lie between 5.2260 and 21.0261 for 12 df.
    •  Check your tables to confirm this.
  • 22. Confidence interval for the standard deviation.
    •  Given the data in the previous slide, we know that (s²/σ²) will lie between (5.2260/12) and (21.0261/12), or between 0.4355 and 1.7522.
    •  Thus the ratio s/σ will lie between the square roots of these numbers, or 0.66 < s/σ < 1.32. This gives:
    •  s/1.32 < σ < s/0.66. If you know s, this gives you a 90% confidence interval for the standard deviation.
    •  Now let's reverse our thinking.
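In code, using the 12-df chi-square values quoted above (the constants and function name are mine; a chi-square table or library would supply the critical values for other df):

```python
import math

CHI2_LO, CHI2_HI, DF = 5.2260, 21.0261, 12   # 95% and 5% exceedance points at 12 df

def sigma_ci(s):
    """90% confidence interval for sigma from a sample SD s at 12 df:
    s / sqrt(chi2_hi/df) < sigma < s / sqrt(chi2_lo/df)."""
    return s / math.sqrt(CHI2_HI / DF), s / math.sqrt(CHI2_LO / DF)

lo, hi = sigma_ci(1.0)   # bounds are s/1.32 and s/0.66, as on the slide
```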
  • 23. Sample Sizes for Estimating Standard Deviations. Continued. I.
    •  Instead of the confidence interval, suppose we say that we want to determine s to be within ±20% of σ with 90% confidence. So:
    •  1 − 0.2 < s/σ < 1 + 0.2, or 0.8 < s/σ < 1.2
    •  This is the same as: 0.64 < (s/σ)² < 1.44
    •  Since we want 90% confidence, we use levels of significance at 0.05 and 0.95.
    •  Now go to the χ² table under the 0.95 column and look for a combination where χ²/df is not < 0.64, but df is as large as possible.
  • 24. Sample Sizes for Estimating Standard Deviations. Continued. II.
    •  Trial and error shows this number to be about 50.
    •  Next we go to the column under 0.05 and look for a ratio that does not exceed 1.44, with df as small as possible.
    •  Trial and error will show this number to be between 30 and 40.
    •  You must take the larger of the two numbers, and since df = n − 1, n = 51 replicates.
  • 25. Do Not Panic. Consider This!
    •  Instead of the confidence interval, suppose we say that we want to determine s to be within ±50% of σ with 95% confidence. So:
    •  1 − 0.5 < s/σ < 1 + 0.5, or 0.5 < s/σ < 1.5
    •  This is the same as: 0.25 < (s/σ)² < 2.25
    •  Since we want 95% confidence, we use levels of significance at 0.025 and 0.975.
    •  Now go to the χ² table under the 0.975 column and look for a combination where χ²/df is not < 0.25, but df is as large as possible.
  • 26. Greater Confidence, But Lesser Certainty
    •  Trial and error shows this number to be 8.
    •  Next we go to the column under 0.025 and look for a ratio that does not exceed 2.25, with df as small as possible.
    •  Trial and error will show this number to be 8, the same as the other df.
    •  You must take the larger of the two numbers, but in this case df = 8 and n = 9.
    •  You have a greater confidence interval for a smaller n.
  • 27. n for Comparing Two Averages
       tα,df = (x̄1 − x̄2) / √(σ1²/n1 + σ2²/n2), with Δ = x̄1 − x̄2 and n1 = n2 = n
       t²α,df = Δ² / [(σ1² + σ2²)/n]
       n = t²α,df (σ1² + σ2²) / Δ²
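The two-mean formula can be sketched as follows. The function name and the worked numbers are illustrative, not from the slides:

```python
def n_two_means(t_crit, var1, var2, delta):
    """Replicates per group: n = t^2 * (sigma1^2 + sigma2^2) / delta^2."""
    return t_crit ** 2 * (var1 + var2) / delta ** 2

# Hypothetical example: resolve a difference of 2 units between two methods
# whose variances are both 4, using the normal approximation t = 1.96.
n = n_two_means(1.96, 4.0, 4.0, 2.0)   # ~ 7.7, rounded up to 8 per group
```

As with the single-mean case earlier, the first answer uses the normal deviate; in practice one would iterate with t at df = 2(n − 1).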
  • 28. Introduction to the Analysis of Variance (ANOVA) I.
    •  This method was aimed at deciding whether or not differences among averages were due to experimental or natural variations, or to true differences among averages.
    •  R.A. Fisher developed a method based on comparing the variances of the treatment means with the variances of the individual measurements that generated the means.
    •  The technique has been extended into the field known as DOE, or factorial experiments.
  • 29. Introduction to the Analysis of Variance (ANOVA) II.
    •  The method is based on the use of the F-test and the F-distribution (named after him).
      –  The F-distribution, like all distributions related to errors, is a skewed, unsymmetrical distribution.
         F = n s²y / s²pooled
      –  s²y represents the variance among the treatment means and s²pooled is the variance of the individual results (system noise).
  • 30. Introduction to the Analysis of Variance (ANOVA) III.
    •  F increases as the number of replicates increases.
      –  In simple ANOVA systems, n is the same for all treatments.
      –  By increasing n you amplify small differences between the variances of the treatment means and the system noise.
      –  An F value of 1.0 or less says that the system noise is greater than the variance of the means. This suggests that the differences among the means are due to experimental or environmental variations.
  • 31. Introduction to the Analysis of Variance (ANOVA) IV.
    •  Because of the importance of system noise, before doing an ANOVA or factorial experiment you should reduce variation in the system to a minimum.
      –  You should remove all special cause variation and minimize common cause variation.
      –  Methods such as Statistical Process Control (SPC) should be used to reduce variations.
    •  Note: a system where special cause variation has been eliminated and only common cause variation is left is known as a system under statistical control.
  • 32. Introduction to the Analysis of Variance (ANOVA) V.
    •  The F-distribution depends on the number of degrees of freedom of the numerator and denominator, and on the level of Type 1 error that you will accept.
      –  For each level of Type 1 error there are different distribution tables. The exact value of F then depends on the number of degrees of freedom of the numerator and denominator.
    •  If the calculated F exceeds the tabular F, it is significant at the 1 − α level, where α is the level of Type 1 error that you are willing to accept.
    •  The calculated p value is compared with α. Most statistical software programs will calculate the p value. Normally, you want 0.05 or 0.01.
    •  Type 1 error is where you falsely conclude that there is a difference. AKA: false positive, producer's risk.
  • 33. Fairness of 4 sets of dice. (Taken from Anderson, MJ and Whitcomb, PJ, DOE Simplified, CRC Press, Boca Raton, FL, 2007.)
    •  Frequency distribution for 56 rolls of dice (the slide's dot-tally plot is summarized below):

                   White   Blue   Green   Purple
       Mean (ȳ)    3.14    3.29   3.29    2.93
       Var. (s²)   2.59    2.37   2.37    1.76

       n = 14 per color; Grand Ave. = 3.1625
    •  Grand average = total of all dots / 56 dice (4 × 14)
  • 34. Fairness of 4 sets of dice. Calculation of F.
    •  Note the differences in the denominator. Since F is much less than 1.0, we can assume that there is no significant difference among the colors, even without looking at an F table.
       s²y = [(3.14 − 3.1625)² + (3.29 − 3.1625)² + (3.29 − 3.1625)² + (2.93 − 3.1625)²] / (4 − 1)
       s²y = 0.029
       s²pooled = (2.59 + 2.37 + 2.37 + 1.76) / 4 = 2.28
       F = n s²y / s²pooled = (14 × 0.029) / 2.28 = 0.18
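The F calculation above, for equal group sizes, can be sketched as (function name mine):

```python
def anova_f(means, variances, n):
    """One-way ANOVA F for k treatments with n replicates each:
    F = n * var(treatment means) / mean(within-treatment variances)."""
    k = len(means)
    grand = sum(means) / k
    s2_y = sum((m - grand) ** 2 for m in means) / (k - 1)   # variance of the means
    s2_pooled = sum(variances) / k                          # equal df per treatment
    return n * s2_y / s2_pooled

# Fair dice from the slide: F well below 1, so no significant difference.
F = anova_f([3.14, 3.29, 3.29, 2.93], [2.59, 2.37, 2.37, 1.76], 14)   # ~ 0.18
```

Running the same function on the loaded set from the next slide gives F ≈ 6.7.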
  • 35. Fairness of 4 sets of dice. How about a loaded set?

       Dots        White   Blue   Green   Purple
       6           1       3      6       1
       5           1       2      5       2
       4           1       3      1       3
       3           2       4      1       1
       2           5       1      0       2
       1           4       1      1       5
       Mean (ȳ)    2.50    3.93   4.93    2.86
       Var. (s²)   2.42    2.38   2.07    3.21

       n = 14; Grand Ave. = 3.555
       δ = ȳ − grand: −1.055, 0.375, 1.375, −0.695
       δ²: 1.1130, 0.1406, 1.8906, 0.4830
       Σδ² = 3.6245; Σδ²/3 = s²y = 1.2082
  • 36. Fairness of 4 sets of dice. How about a loaded set? ANOVA
       s²pooled = (2.42 + 2.38 + 2.07 + 3.21) / 4 = 2.52, with df = 4(14 − 1) = 52
       s²y = 1.21, with df = 3 (4 − 1)
       F = (14 × 1.21) / 2.52 = 6.71, so F3,52 = 6.71
    •  Tabular F3,52 = 2.839 − 2.758 at 5% (p = 0.05), 4.313 − 4.126 at 1%, and 6.595 − 6.171 at 0.1%. (The range is for F3,40 to F3,60.) Significant at p = 0.001.
  • 37. Least Significant Difference: Lucy in the Sky with Diamonds (LSD)
    •  DO NOT EVER USE THIS METHOD WITHOUT THE PROTECTION OF A SIGNIFICANT ANOVA RESULT ! ! !
    •  There are 45 combinations of 10 results taken in pairs. If you focus mainly on the high and low results, you are almost guaranteed to encounter a Type 1 error.
      –  This is why you need to use the ANOVA coupled with an LSD determination.
    •  The LSD is based on the equations for confidence intervals:
       LSD = ±t(1−α,df) × √(2 s²pooled / n), where s²pooled is the pooled variance of the treatments (here, the average of the treatment variances, since all groups have equal df).
  • 38. LSD for the Current Problem
       s pooled = √[(2.42 + 2.38 + 2.07 + 3.21) / 4] = √2.52 = 1.59
       LSD = 2.01 × 1.59 × √(2/14) = ±1.21
       at 99%: LSD = ±1.333, for t(0.99,df=52) ≅ 2.68
    •  The (1 − α) level of the t determines the level of significance for the LSD.
    •  n = 14 for replicates, but s²pooled had 4 × (14 − 1) = 52 df.
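The LSD computation, checked against the slide's 95% figure (function name mine; the critical t-value is passed in rather than looked up):

```python
import math

def lsd(t_crit, s2_pooled, n):
    """Least significant difference for equal group sizes:
    LSD = t * sqrt(2 * s2_pooled / n)."""
    return t_crit * math.sqrt(2 * s2_pooled / n)

val = lsd(2.01, 2.52, 14)   # ~ 1.21, as on the slide
```

Any pair of treatment means differing by more than this value is declared significantly different at the chosen α.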
  • 39. So where are the bad dice?
    •  Given the LSD = ±1.333, the result can be displayed in different ways.
    •  Plot the result as the mean of the average count of the treatments (colors) ± ½ LSD.
      –  Then look for overlaps. A significant difference will not have an overlap.
    •  Or take the difference between means and compare them to the LSD.
      –  In the present case, the white and purple dice are similar, but the green dice are definitely higher, with the blue dice different from the white, but not from the green, and only marginally different from the purple.
  • 40. Differences between mean counts:

                      White=2.50   Blue=3.93   Green=4.93   Purple=2.86
       White=2.50     .            1.43        2.43         0.36
       Blue=3.93      1.43         .           1.00         1.07
       Green=4.93     2.43         1.00        .            2.07
       Purple=2.86    0.36         1.07        2.07         .

    •  For 95% confidence, the LSD is ±1.21 and for 99%, the LSD is ±1.33.
    •  So blue and green are different from white, and green is different from purple and white, at the 99% level.
    •  White and purple are the same, as are blue and green. Purple is also similar to blue, but not to green.
    •  All of this holds at the 99% level; thus at p = 0.01 we conclude that blue and green dice run to higher numbers than white and purple.