In building a research career, especially in the hard sciences and related science and engineering fields, conducting the actual research study is only one stage of the process. Before pursuing any R&D that involves collecting hard data, one must have a strategy for data collection, commonly known as research methods. Research methods include specific approaches for performing experiments that yield valid data, which can then be shown to be repeatable and reproducible before publishing. It is the researcher's responsibility to plan and perform the research carefully and with due diligence, so that the data obtained are repeatable, reproducible, and validated using standard methods of validation. Since most R&D involves some form of analytical method, whether the research is biological, biochemical, biomedical, or of another type, most of it ultimately relies on collecting usable and valid analytical data.
Analytical Method Validation (AMV)
Method validation means having a final, optimized method that meets certain standard criteria stipulated by a universally accepted organization, such as the International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use (ICH). The ICH gathers input from regulatory agencies as well as the pharmaceutical and biopharmaceutical industries on how analytical methods should be validated. In its guideline documents, the accepted approaches for validating such methods are agreed upon by all the signatories. These are then adopted by national and regional regulatory agencies (the U.S. FDA, the EMA, Japan's PMDA, etc.), thereby regulating the approval and marketing of prescription drugs in general; over-the-counter products are regulated under separate frameworks and fall largely outside these particular guidelines. There are numerous aspects to AMV, such as repeatability, reproducibility, robustness, ruggedness, system suitability, limits of detection, and limits of quantitation, among many others. When reduced to its fundamentals, however, AMV is intended to guarantee the eventual validity, usability, and reproducibility of any given analytical method used in the pharmaceutical, biopharmaceutical, and regulatory fields. These ICH guidelines must be followed by any company wishing to gain regulatory approval for animal studies, human clinical studies, and marketing. Individuals submitting a manuscript for publication, however, are not required to follow them; indeed, journals, editors, and reviewers generally do not require authors to follow any particular AMV guidelines.
Although there are other organizations besides ICH that determine what constitutes a validated method, ICH remains one of the most widely recognized. Its guidelines shape the practices of pharmaceutical and biopharmaceutical companies and of the major regulatory agencies, including the U.S. FDA. In addition to ICH, the National Institute of Standards and Technology (NIST) has its own guidelines for developing and applying validated methods, as do several other scientific organizations. The Institute of Validation Technology (IVT) is a well-known private organization that deals entirely with the validation world. It regularly publishes the Journal of Validation Technology (JVT), a long-standing, refereed online journal devoted almost entirely to validation issues of all types.
Numerous books, journals, review papers, magazines, and websites are devoted to discussions of AMV. The ICH guidelines for analytical method validation, for example, are easily found online: a search on the relevant keywords will surface virtually all the related literature from the regulatory agencies. Researchers should be familiar with these fundamental guidelines so that they can conduct AMV on their own methods before submitting a manuscript to a journal. One should also know, however, that method validation for pharmaceutical regulatory approval may differ from the validation practices followed in other industries or organizations. In general, most scientists involved with AMV tend to follow ICH guidelines, but only the pharma and biopharma industries are actually required to follow them in their submittals/filings to a regulatory agency in the U.S. or abroad. No scientific journal imposes similar requirements on manuscript submittals; whether and how to validate is decided on an ad hoc, individual basis by the authors, and perhaps occasionally by the reviewers, though this is unlikely. Guidelines for authors never discuss ICH or AMV requirements.
Here are a few of the basic ICH guidelines for conducting AMV.
1. Repeatability and reproducibility
Any relatively new analytical method, be it for chemical, biological, biochemical, medicinal, or medical purposes, needs to meet the basic AMV requirements. Most scientific work involves collecting hard data, i.e., numbers that have an average value and some deviation from that average, such as the standard deviation (SD), relative standard deviation (RSD), percent relative standard deviation (%RSD), or coefficient of variation (CV). All publications should state the number of times each measurement was repeated, commonly abbreviated as ‘n.’ According to the FDA, n should not be below 3, and should ideally be 6 or more.
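As a concrete sketch, the replicate statistics mentioned above (n, mean, SD, %RSD) can be computed in a few lines of Python; the replicate values below are purely hypothetical:

```python
import statistics

def replicate_stats(measurements):
    """Summarize n replicate measurements: mean, sample SD, and %RSD."""
    n = len(measurements)
    if n < 3:
        raise ValueError("common practice expects n >= 3 replicates")
    mean = statistics.mean(measurements)
    sd = statistics.stdev(measurements)   # sample standard deviation
    pct_rsd = 100 * sd / mean             # %RSD, i.e., the CV in percent
    return {"n": n, "mean": mean, "sd": sd, "pct_rsd": pct_rsd}

# Six hypothetical replicate assay results (mg/mL)
replicates = [9.98, 10.02, 10.01, 9.97, 10.03, 9.99]
stats = replicate_stats(replicates)
print(f"n = {stats['n']}, mean = {stats['mean']:.3f}, "
      f"SD = {stats['sd']:.4f}, %RSD = {stats['pct_rsd']:.2f}%")
```

A table reporting these replicates alongside the mean and %RSD is exactly the kind of evidence of repeatability discussed below.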
Any scientific paper with single data points should be unacceptable to reviewers and journal editors. However, ICH standards do not apply to journal publications, where individual reviewers usually decide what AMV, if any, a submission should include. Expectations therefore vary from author to author and from reviewer to reviewer; by and large, nothing is stated by the journal or its editors.
Moreover, submitting a paper with n = 1 to any journal could reflect unfavourably on your own work. To the best of my knowledge, such validation requirements are never clearly stated or stipulated by any analytical journal: the ICH's AMV guidelines never appear in any journal's guidelines for authors. It is left to the authors and their reviewers to determine what AMV is expected.
Every table of analytical data should present the individual repeat measurements behind every average, together with n, the number of repeats. There should also be some indication of how close these measurements are to one another, such as the SD or %RSD. If the work has been done properly, these measures of deviation should be small, usually less than 1-2%; that indicates good repeatability.
Repeatability is usually demonstrated by one individual: the original scientist performs the same measurements multiple times using the same method, instrumentation, expertise, and capabilities. Reproducibility is then demonstrated when others of similar prowess replicate the results using the same method but different labs, instrumentation, reagents, chemicals, and so forth. Quantitatively, the results should differ from one another by no more than a few percent RSD. Since at present no scientific journal demands that authors demonstrate reproducibility, many authors, and just as many reviewers, do not believe such material must be included for a submission to be found acceptable and publication-worthy. To put that differently: if an author made no attempt at AMV, repeatability, or reproducibility, and every single measurement was done only once, that manuscript could still be published, regardless of the journal's guidelines or the editor's wishes.
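The distinction between within-lab repeatability and lab-to-lab reproducibility can be illustrated numerically. This sketch, with entirely hypothetical data from two labs running the same method, computes the within-lab %RSD for each lab and the %RSD between the two lab means:

```python
import statistics

def pct_rsd(values):
    """Percent relative standard deviation of a set of values."""
    return 100 * statistics.stdev(values) / statistics.mean(values)

# Hypothetical replicate results (same method, same sample, two labs)
lab_a = [50.1, 49.9, 50.2, 49.8, 50.0, 50.0]
lab_b = [50.4, 50.2, 50.3, 50.5, 50.1, 50.3]

within_a = pct_rsd(lab_a)   # within-lab repeatability, lab A
within_b = pct_rsd(lab_b)   # within-lab repeatability, lab B
# Between-lab agreement: %RSD of the two lab means
between = pct_rsd([statistics.mean(lab_a), statistics.mean(lab_b)])

print(f"Lab A %RSD = {within_a:.2f}%, Lab B %RSD = {within_b:.2f}%, "
      f"between-lab %RSD = {between:.2f}%")
```

Here all three values stay well under a few percent, which is the kind of agreement the text describes as demonstrating reproducibility from lab to lab.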
In other words, science is built on one person's original methods being made available to others of similar skill and expertise, so that they can use those methods and the resulting data to further their own R&D. If results turn out to be irreproducible, I believe they should not be published; to what end? Irreproducible methods waste the time, effort, resources, and finances of other scientists. Irreproducible publications must be retracted and revealed as erroneous and faulty.
2. Robustness and ruggedness
There are other aspects of a successful analytical method, such as robustness and ruggedness. Robustness means that a method continues to perform as expected even when its operating parameters vary somewhat: temperature, humidity, purity of solvents, source of reagents, age and nature of the instrumentation, atmospheric pressure, and so on. A robust method yields a small %RSD in its measurements and is insensitive to small changes in the operational parameters. A poor or non-robust method is one whose measurements have a very large RSD or %RSD, and which delivers accurate and precise measurements only within very narrowly constrained experimental parameters.
A rugged method, on the other hand, can function over long periods of time, through many repeat measurements or under varying operational parameters, and still provide useful, meaningful, and repeatable data with small RSDs. Most people prefer methods that have been proven robust and rugged as well as practical, inexpensive, and easily transferable to other groups and labs with comparable results and RSDs. Virtually no scientific publication, unless it is following ICH guidelines, ever discusses robustness or ruggedness; papers usually mention only the single set of optimized experimental parameters used to obtain the data presented.
3. System suitability testing
Eventually, your method will need a system suitability test, i.e., it must be demonstrated as suitable for the intended measurements. This requires a system suitability standard containing two or more compounds present in your actual samples at known levels, with one of them, the actual analyte, being identified and quantitated. The analyte peak in this standard must be baseline resolved from the other compounds present, and it must be identifiable and quantifiable with high accuracy and precision by standard quantitative methods. True quantitation studies are done separately, after the system suitability studies have been performed. System suitability is an ideal way to prove that your analytical system is operating properly before measuring real-world samples. Although most academic R&D does not use system suitability samples, in many industrial studies this is required before running real-world samples.
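For chromatographic methods, one common system suitability criterion is peak resolution, conventionally computed as Rs = 2(t2 - t1)/(w1 + w2) from retention times and baseline peak widths, with Rs >= 1.5 usually taken as the minimum for baseline resolution. The retention times and widths below are hypothetical:

```python
def resolution(t1, t2, w1, w2):
    """Chromatographic resolution between two peaks:
    Rs = 2 * (t2 - t1) / (w1 + w2), using retention times and
    baseline peak widths in the same time units."""
    return 2 * (t2 - t1) / (w1 + w2)

# Hypothetical system suitability standard: analyte plus a nearby impurity
rs = resolution(t1=4.2, t2=5.0, w1=0.40, w2=0.45)
print(f"Rs = {rs:.2f}")

# Rs >= 1.5 is the usual minimum for baseline resolution; real system
# suitability protocols also check injection precision, tailing, etc.
if rs < 1.5:
    raise RuntimeError("System not suitable: peaks not baseline resolved")
```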
4. Other aspects of AMV such as accuracy, precision, detection limits, etc.
What else constitutes a good collection of analytical methods for gathering data in your chosen field or pursuit? Obviously, one wants methods that are simple to set up, easy to teach to new users, easy to maintain, stable in operation, computer controlled, and capable of rapid data acquisition. Science tends to progress faster when its analytical methods meet such desirable standards. One also seeks methods that provide data with high accuracy (agreement of the measurement with the true level of analyte present), high precision (a small RSD for each measurement), specificity (recognizing and quantitating only the analyte of interest), low detection and quantitation limits, a wide linearity range and, above all, robustness. All of these desirable traits should be built into the methods applied for your own or others' measurements.
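The detection and quantitation limits mentioned above are often estimated, following the ICH Q2 convention, as LOD = 3.3σ/S and LOQ = 10σ/S, where S is the slope of the calibration line and σ is the standard deviation of the response (e.g., from replicate blanks). A minimal sketch with hypothetical calibration data and an assumed σ:

```python
import statistics

def linear_fit(x, y):
    """Ordinary least-squares slope and intercept for a calibration line."""
    mx, my = statistics.mean(x), statistics.mean(y)
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    intercept = my - slope * mx
    return slope, intercept

# Hypothetical calibration: concentration (ug/mL) vs. detector response
conc = [1.0, 2.0, 5.0, 10.0, 20.0]
resp = [10.2, 20.1, 50.3, 99.8, 200.4]

slope, intercept = linear_fit(conc, resp)
sigma = 0.5  # assumed SD of the blank/low-level response

lod = 3.3 * sigma / slope   # limit of detection (ICH Q2 convention)
loq = 10.0 * sigma / slope  # limit of quantitation (ICH Q2 convention)
print(f"slope = {slope:.3f}, LOD = {lod:.3f} ug/mL, LOQ = {loq:.3f} ug/mL")
```

The wide linearity range the text calls for corresponds to the calibration line remaining linear well above the LOQ.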
Despite the ready availability of such guidelines for conducting AMV, irreproducible methods and results remain common. Such results often lead to forced retractions of published data, a very serious black mark against any researcher's reputation. How, then, do we prevent irreproducible results from being published? If researchers demanded of themselves and their colleagues that all data be shown (at the very least, the attempts at repeatability and reproducibility), along with sufficient data points for statistical treatment, there would be far fewer irreproducible publications or retractions because of irreproducibility, and researchers would not lose respect or their careers in science. It seems obvious that if the originator and the originating laboratory follow these practices of doing good science and publishing high-quality data and results, science's reputation and future will improve. As will your own. Are you with me?