Chakra means “spinning wheel” and “vyuha” means formation. Hence, Chakravyuha means a puzzling arrangement of soldiers that keeps moving in the form of a spinning wheel; the rotation of the soldiers resembles the helix of a screw commonly seen in watches. The Chakravyuha is a multi-tier defensive formation that looks like a disc [chakra] when viewed from above. A warrior at each successive tier would be in an increasingly tough position to fight. The formation was used in the battle of Kurukshetra in the Mahabharata war, said to be one of the largest wars ever fought. For the captive, the Chakravyuha is a hopeless ‘no escape’ situation.
Abhimanyu is a character in the ancient Hindu epic Mahabharata, the son of the Pandava prince Arjuna. On the thirteenth day of the Mahabharata war, the young warrior was slain inside the Chakravyuha. When Abhimanyu was in his mother’s womb, he heard about the Chakravyuha and gained half the knowledge, how to enter the formation, but he did not hear how to escape it. Though still a youngster, he displayed great bravery in the battle. Knowing how to enter but not how to exit, he went into the trap; the Kaurava army destroyed his chariot, left him weaponless, and killed him.

The lesson from this is that he knew how to enter the Chakravyuha but never learned how to exit. Similarly, when failures happen and we have no plan for how to tackle or address them, we simply falter. If a plan for handling failure is known in advance, failures become much easier to handle.
Similarly, regulatory agencies focus on how firms handle failures, and this is an area of emphasis during regulatory inspections. One such area is failed QC test results, i.e., Out-of-Specification [OOS] results. Regulatory agencies have cited many critical observations for poor investigations of Out-of-Specification [OOS] test results. There is a Guidance for Industry, Investigating Out-of-Specification [OOS] Test Results for Pharmaceutical Production, published by the U.S. Department of Health and Human Services, Food and Drug Administration, Center for Drug Evaluation and Research [CDER], October 2006, Pharmaceutical CGMPs. This guidance applies to chemistry-based laboratory testing.
What is OOS?
OOS [Out of Specification] results include all test results that fall outside the specifications or acceptance criteria established in drug applications, drug master files [DMFs], official compendia, or by the manufacturer. The term also applies to all in-process laboratory tests that are outside of established specifications.
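As a minimal illustration of the definition above (not from the guidance itself; the function name and the 90.0 to 110.0 percent specification are hypothetical), flagging a result as OOS is simply a check against the established acceptance criteria:

```python
# Illustrative sketch only: flag a test result as OOS against
# hypothetical acceptance criteria (e.g., an assay spec of 90.0-110.0%).
def is_oos(result: float, lower: float, upper: float) -> bool:
    """Return True if the result falls outside the specification limits."""
    return not (lower <= result <= upper)

# Hypothetical assay results checked against a 90.0-110.0% specification
for value in [89.5, 98.7, 110.2]:
    status = "OOS" if is_oos(value, 90.0, 110.0) else "within spec"
    print(f"{value}% -> {status}")
```

The point of the sketch is that an OOS result is defined purely by the pre-established limits, not by how far outside them the result falls.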
Scope of the guidance
1. How to investigate OOS test results.
2. The responsibilities of laboratory personnel.
3. The laboratory phase of the investigation, including additional testing that may be necessary.
4. When to expand the investigation outside the laboratory.
5. The final evaluation of all test results.

Identifying and assessing OOS test results
Phase-I Laboratory investigation
FDA regulations require that an investigation be conducted whenever an OOS test result is obtained. The purpose of the investigation is to determine the cause of the OOS result. The source of the OOS result should be identified either as an aberration of the measurement process or an aberration of the manufacturing process. Even if a batch is rejected based on an OOS result, the investigation is necessary to determine if the result is associated with other batches of the same drug product or other products. Batch rejection does not negate the need to perform the investigation. The regulations require that a written record of the investigation be made, including the conclusions and follow-up.
The first phase of such an investigation should include an initial assessment of the accuracy of the laboratory’s data. Whenever possible, this should be done before test preparations are discarded. This way, hypotheses regarding laboratory error or instrument malfunctions can be tested using the same test preparations. If this initial assessment indicates that no meaningful errors were made in the analytical method used to arrive at the data, a full-scale OOS investigation should be conducted.
Responsibility of the Analyst
When unexpected results are obtained and no obvious explanation exists, test preparations should be retained, if stable, and the analyst should inform the supervisor. An assessment of the accuracy of the results should be started immediately.
If errors are obvious, such as the spilling of a sample solution or the incomplete transfer of a sample composite, the analyst should immediately document what happened. Analysts should not knowingly continue an analysis they expect to invalidate at a later time for an assignable cause.
Responsibilities of the Laboratory Supervisor
Once an OOS result has been identified, the supervisor’s assessment should be objective and timely. Data should be assessed promptly to ascertain if the results might be attributed to laboratory error, or whether the results could indicate problems in the manufacturing process. An immediate assessment could include re-examination of the actual solutions, test units, and glassware used in the original measurements and preparations, which might provide more credibility for laboratory error hypotheses.
The assignment of a cause for OOS results will be greatly facilitated if the retained sample preparations are examined promptly. Hypotheses regarding what might have happened (e.g., dilution error, instrument malfunction) should be tested. Examination of the retained solutions should be performed as part of the laboratory investigation.
Example:
Solutions can be re-injected as part of an investigation where a transient equipment malfunction is suspected. Such hypotheses are difficult to prove. However, reinjections can provide strong evidence that the problem should be attributed to the instrument, rather than the sample or its preparation.
Laboratory error should be relatively rare. Frequent errors suggest a problem that might be due to inadequate training of analysts, poorly maintained or improperly calibrated equipment, or careless work. Whenever laboratory error is identified, the firm should determine the source of that error and take corrective action to prevent recurrence.
In summary, when clear evidence of laboratory error exists, laboratory testing results should be invalidated. When evidence of laboratory error remains unclear, a full-scale OOS investigation should be conducted by the manufacturing firm to determine what caused the unexpected results.
Phase-II Full scale OOS investigation
A full-scale investigation should include a review of production and sampling procedures, and will often include additional laboratory testing. Such investigations should be given the highest priority. Among the elements of this phase is evaluation of the impact of OOS result(s) on already distributed batches.
A. Review of Production
The investigation should be conducted by the QCU [Quality Control Unit] and should involve all other departments that could be implicated, including manufacturing, process development, maintenance, and engineering.
If this part of the OOS investigation confirms the OOS result and is successful in identifying its root cause, the OOS investigation may be terminated and the product rejected. However, a failure investigation that extends to other batches or products that may have been associated with the specific failure must be completed.
OOS results may indicate a flaw in product or process design. For example, a lack of robustness in product formulation, inadequate raw material characterization or control, substantial variation introduced by one or more unit operations of the manufacturing process, or a combination of these factors can be the cause of inconsistent product quality. In such cases, it is essential that redesign of the product or process be undertaken to ensure reproducible product quality. OOS results might also be the result of the objectionable practice of making unauthorized or unvalidated changes to the manufacturing process.
B. Additional Laboratory Testing
A full-scale OOS investigation may include additional laboratory testing. A number of practices are used during the laboratory phase of an investigation. These include (1) retesting a portion of the original sample and (2) resampling.
1.Retesting
Part of the investigation may involve retesting of a portion of the original sample. The sample used for the retesting should be taken from the same homogeneous material that was originally collected from the lot, tested, and yielded the OOS results.
Situations where retesting is indicated include investigating a testing instrument malfunction or identifying a possible sample handling problem, for example, a suspected dilution error. Decisions to retest should be based on the objectives of the testing and sound scientific judgment. It is often important for the predefined retesting plan to include retests performed by an analyst other than the one who performed the original test. A second analyst performing a retest should be at least as experienced and qualified in the method as the original analyst.
FDA inspections have revealed that some firms use a strategy of repeated testing until a passing result is obtained, then disregarding the OOS results without scientific justification. This practice of “testing into compliance” is unscientific and objectionable under CGMPs. The maximum number of retests to be performed on a sample should be specified in advance in a written standard operating procedure [SOP]. The number may vary depending upon the variability of the particular test method employed, but should be based on scientifically sound principles. The number of retests should not be adjusted depending on the results obtained. The firm’s predetermined retesting procedures should contain a point at which the additional testing ends and the batch is evaluated. If the results are unsatisfactory at this point, the batch is suspect and must be rejected or held pending further investigation. In the case of a clearly identified laboratory error, the retest results would substitute for the original test result. If no laboratory or calculation errors are identified in the first test, there is no scientific basis for invalidating initial OOS results in favor of passing retest results. All test results, both passing and suspect, should be reported and considered in batch release decisions.
2.Resampling
While retesting refers to analysis of the original, homogeneous sample material, resampling involves analyzing a specimen from any additional units collected as part of the original sampling procedure or, if necessary, from a new sample collected from the batch. The original sample from a batch should be sufficiently large to accommodate additional testing in the event an OOS result is obtained. In some situations, however, it may be appropriate to collect a new sample from the batch.
When all data have been evaluated, an investigation might conclude that the original sample was prepared improperly and was therefore not representative of the batch quality. Improper sample preparation might be indicated, for example, by widely varied results obtained from several aliquots of an original composite [after determining there was no error in the performance of the analysis]. Resampling should be performed by the same qualified, validated methods that were used for the initial sample. However, if the investigation determines that the initial sampling method was inherently inadequate, a new accurate sampling method must be developed, documented, and reviewed and approved by the QCU.
3.Reporting Testing Results
1.Averaging
A. Appropriate uses
Averaging data can be a valid approach, but its use depends upon the sample and its purpose. For example, in an optical rotation test, several discrete measurements are averaged to determine the optical rotation for a sample, and this average is reported as the test result. If the sample can be assumed to be homogeneous [i.e., an individual sample preparation designed to be homogeneous], using averages can provide a more accurate result.
B. Inappropriate uses
Reliance on averaging has the disadvantage of hiding variability among individual test results. For this reason, all individual test results should normally be reported as separate values. Where averaging of separate tests is appropriately specified by the test method, a single averaged result can be reported as the final test result.
Averaging can also conceal variations in different portions of a batch, or within a sample. For example, the use of averages is inappropriate when performing powder blend / mixture uniformity or dosage form content uniformity determinations. In these cases, testing is intended to measure variability within the product, and individual results provide the information for such an evaluation.
In the context of additional testing performed during an OOS investigation, averaging the result(s) of the original test that prompted the investigation and additional retest or resample results obtained during the OOS investigation is not appropriate because it hides variability among the individual results. Relying on averages of such data can be particularly misleading when some of the results are OOS and others are within specifications. It is critical that the laboratory provide all individual results for evaluation and consideration by the QCU, which is responsible for approving or rejecting the batch.
For example, in an assay of a finished drug with a specification of 90 to 110%, an initial OOS result of 89% followed by additional retest results of 90% and 91% would produce an average of 90%. While this average would meet specifications, the additional test results also tend to confirm the original OOS result. However, in another situation with the same specifications, an initial OOS result of 80% followed by additional test results of 85% and 105% would also produce an average of 90%, but present a much different picture. These results do not confirm the original OOS result but show high variability and may not be reliable. In both examples, the individual results, not the average, should be used to evaluate the quality of the product.
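The two series in this example can be worked through numerically. A short sketch (using the example's hypothetical assay values) shows how identical averages can conceal very different variability:

```python
from statistics import mean, stdev

# Two hypothetical retest series from the example above: both average
# 90%, but one is tightly clustered and the other is highly variable.
series_a = [89.0, 90.0, 91.0]   # confirms the original OOS result
series_b = [80.0, 85.0, 105.0]  # erratic; the data may not be reliable

for series in (series_a, series_b):
    print(f"results={series} mean={mean(series):.1f}% stdev={stdev(series):.1f}")
```

Both series produce a mean of exactly 90%, which meets the 90 to 110% specification, yet the standard deviations (about 1.0 versus about 13.2) tell entirely different stories. This is precisely why the individual results, not the average, should drive the quality evaluation.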
2.Outlier Tests
On rare occasions, a value may be obtained that is markedly different from the others in a series obtained using a validated method. Such a value may qualify as a statistical outlier. An outlier may result from a deviation from prescribed test methods, or it may be the result of variability in the sample. It should never be assumed that the reason for an outlier is error in the testing procedure, rather than inherent variability in the sample being tested.
Outlier testing is a statistical procedure for identifying from an array those data that are extreme. The possible use of outlier tests should be determined in advance. This should be written into SOPs for data interpretation and be well documented. The SOPs should include the specific outlier test to be applied with relevant parameters specified in advance. The SOPs should specify the minimum number of results required to obtain a statistically significant assessment from the specified outlier test.
For validated chemical tests with relatively small variance, and if the sample being tested can be considered homogeneous [for example, an assay of a composite of a dosage form drug to determine strength], an outlier test is only a statistical analysis of the data obtained from testing and retesting. It will not identify the cause of an extreme observation and, therefore, should not be used to invalidate the suspect result. Occasionally, an outlier test may be of some value in estimating the probability that the OOS result is discordant from a data set, and this information can be used in an auxiliary fashion, along with all other data from the investigation, to evaluate the significance of the result.
Outlier tests have no applicability in cases where the variability in the product is what is being assessed, such as for content uniformity, dissolution, or release rate determinations. In these applications, a value perceived to be an outlier may in fact be an accurate result of a non-uniform product.
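The guidance does not prescribe a particular outlier test, but Grubbs’ test is one commonly used procedure of the kind described above. A minimal sketch follows; the data values are hypothetical, and the tabulated critical value (about 2.020 for n = 7 at the two-sided 0.05 significance level) would in practice come from the SOP’s pre-specified table:

```python
from statistics import mean, stdev

def grubbs_statistic(data):
    """Grubbs' test statistic: G = max |x_i - mean| / s."""
    m, s = mean(data), stdev(data)
    return max(abs(x - m) for x in data) / s

# Hypothetical replicate assay results with one suspect low value
data = [99.0, 98.9, 99.1, 98.8, 99.0, 99.1, 89.5]
G = grubbs_statistic(data)
G_CRIT = 2.020  # tabulated two-sided critical value for n=7, alpha=0.05
print(f"G = {G:.2f}, critical value = {G_CRIT}, outlier flagged: {G > G_CRIT}")
```

Consistent with the guidance, a flagged value only estimates how discordant the result is; it does not identify a cause and cannot by itself be used to invalidate the suspect result.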
When using these practices during the additional testing performed in an OOS investigation, the laboratory will obtain multiple results. It is again critical for the laboratory to provide all test results for evaluation and consideration by the QCU in its final disposition decision.
Concluding the investigation
To conclude the investigation, the results should be evaluated, the batch quality should be determined, and a release decision should be made by the QCU. The SOPs should be followed in arriving at this point. Once a batch has been rejected, there is no limit to further testing to determine the cause of the failure so that a corrective action can be taken.
A. Interpretation of Investigation Results
The QCU is responsible for interpreting the results of the investigation. An initial OOS result does not necessarily mean the subject batch fails and must be rejected. The OOS result should be investigated, and the findings of the investigation, including retest results, should be interpreted to evaluate the batch and reach a decision regarding release or rejection.
In those instances where an investigation has revealed a cause, and the suspect result is invalidated, the result should not be used to evaluate the quality of the batch or lot. Invalidation of a discrete test result may be done only upon the observation and documentation of a test event that can reasonably be determined to have caused the OOS result.
In those cases where the investigation indicates an OOS result is caused by a factor affecting the batch quality [i.e., an OOS result is confirmed], the result should be used in evaluating the quality of the batch or lot. A confirmed OOS result indicates that the batch does not meet established standards or specifications and should result in the batch’s rejection.
For inconclusive investigations, i.e., in cases where an investigation:
(1) Does not reveal a cause for the OOS test result, and
(2) Does not confirm the OOS result,
the OOS result should be given full consideration in the batch or lot disposition decision.
Where the OOS result is confirmed, the investigation changes from an OOS investigation into a batch failure investigation, which must be extended to other batches or products that may have been associated with the specific failure. Where the investigation is inconclusive, the QCU might still ultimately decide to release the batch. For example, a firm might consider release of the product under the following scenario:
A product has an acceptable composite assay range of 90.0 to 110.0 percent. The initial [OOS] assay result is 89.5 percent. Subsequent sample preparations from the original sample yield the following retest results: 99.0, 98.9, 99.0, 99.1, 98.8, 99.1, and 99.0 percent. A comprehensive laboratory investigation [Phase 1] fails to reveal any laboratory error. Review of events during production of the batch reveals no aberrations or indication of unusual process variation. Review of the manufacturing process and product history demonstrates that the process is robust. The seven passing retest results are all well within the known limits of variability of the method used. Batch results from in-process monitoring, content uniformity, dissolution, and other tests are consistent with the passing retest results. After a thorough investigation, a firm’s QCU might conclude that the initial OOS result did not reflect the true quality of the batch.
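The numbers in this scenario can be checked directly. A small sketch (the values are those given in the scenario; the specification limits are the stated 90.0 to 110.0 percent) compares the initial result and the retests:

```python
from statistics import mean, stdev

SPEC_LOW, SPEC_HIGH = 90.0, 110.0
initial = 89.5  # initial OOS composite assay result from the scenario
retests = [99.0, 98.9, 99.0, 99.1, 98.8, 99.1, 99.0]

print(f"initial {initial}% within spec: {SPEC_LOW <= initial <= SPEC_HIGH}")
print(f"retest mean: {mean(retests):.2f}%, stdev: {stdev(retests):.2f}")
print(f"all retests within spec: {all(SPEC_LOW <= r <= SPEC_HIGH for r in retests)}")
```

The seven retests cluster tightly around 99% with a standard deviation near 0.1, while the initial 89.5% result sits far outside that cluster, which is the numerical picture behind the QCU’s conclusion that the initial result did not reflect the true quality of the batch.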
It is noteworthy in this scenario that the original, thorough laboratory investigation failed to find any assignable cause. If a subsequent investigation nonetheless concludes that the source of the OOS result was unrelated to the manufacturing process, then, given this atypical failure to detect the laboratory deviation initially, it is essential that the investigation include appropriate follow-up and scrutiny to prevent recurrence of the laboratory error[s] that could have led to the OOS result.
As the above example illustrates, any decision to release a batch, in spite of an initial OOS result that has not been invalidated, should come only after a full investigation has shown that the OOS result does not reflect the quality of the batch. In making such a decision, the QCU should always err on the side of caution.
B. Cautions
In cases where the test procedure requires a series of assay results to produce a single reportable result, and some of the individual results are OOS, some are within specification, and all are within the known variability of the method, the passing results are no more likely to represent the true value for the sample than the OOS results. For this reason, a firm should err on the side of caution and treat the reportable average of these values as an OOS result, even if that average is within specification. An assay result that is low but within specifications should also raise a concern: one possible cause is that the batch was not formulated properly. Batches must be formulated with the intent to provide not less than 100 percent of the labeled or established amount of active ingredient. This is another situation where the analytical result meets specifications, but caution should be used in the release or reject decision.
C. Field Alert Reports [FAR]
For those products that are the subject of approved full and abbreviated new drug applications, regulations require submitting within 3 working days a field alert report [FAR] of information concerning any failure of a distributed batch to meet any of the specifications. OOS test results on these products are considered to be one kind of “information concerning any failure” described in this regulation. Unless the OOS result on the distributed batch is found to be invalid within 3 days, an initial FAR should be submitted. A follow-up FAR should be submitted when the OOS investigation is completed.
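The 3-working-day window can be sketched as simple date arithmetic. The function below is a hypothetical illustration, not an official calculation; it counts Monday through Friday as working days and does not account for public holidays:

```python
from datetime import date, timedelta

def far_deadline(found: date, working_days: int = 3) -> date:
    """Hypothetical sketch: add N working days (Mon-Fri, holidays ignored)
    to the date an OOS result on a distributed batch was identified."""
    d = found
    remaining = working_days
    while remaining:
        d += timedelta(days=1)
        if d.weekday() < 5:  # Monday=0 ... Friday=4
            remaining -= 1
    return d

# An OOS result identified on Friday 2024-05-03 would be due the
# following Wednesday, since the weekend does not count.
print(far_deadline(date(2024, 5, 3)))
```

In practice the firm’s SOP and the applicable regulation govern how the deadline is counted; the sketch only illustrates why “3 working days” can span five calendar days.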