The challenges faced by researchers in dental materials are many and varied. In particular, mechanical testing has many constraints and potential interferences that can, if not addressed, at best vitiate, and at worst invalidate the results and thus the conclusions drawn, with knock-on effects for future studies and product use. Indeed, one must be sure in each and every case that the test employed really can yield results relevant to the context of the actual service conditions of a material or product. Essentially, one must ensure that both the mode of loading and the mode of failure are representative of service [ ], likewise taking into account preparation methods and environment as well as such matters as strain rate, and that the results are therefore interpretable – indeed useful, fit for purpose.
It is worrying, then, as a matter of principle, that in many instances tests are used simply because they have been used before, typically over many years, without those criteria being even nodded at, let alone the difficulties being addressed. There are sound grounds for arguing, for example, that testing materials intended for use in the mouth dry, and at room temperature, cannot be justified without confirmation of validity, for several reasons. That proof of validity is never forthcoming. Strain rate is another factor of importance that affects the outcome, but it is routinely ignored [ ].
On top of this lack of awareness (to be charitable) is another kind of abrogation of responsibility: uncritical use of dental ISO standards, and in particular ISO 4049, for example in [ ]. There seems to be a widespread belief that ISO standards exist for the purposes of research. This kind of thinking is illustrated by a recent paper [ ] which makes some very important points about test conditions and the protocol to be followed, but which also has the following claim in the Abstract:
“The ISO 4049 standard defines the conditions for performing the properties tests of composites to allow reproducibility and comparison of different studies.”
As written, this seems to imply a broad generality that is improper, rather than the intended specific ‘standardization’ of quality control (QC) methods for certification purposes. It is a very common misunderstanding that appears in one form or another in many papers, and for other standards, often just implicitly. Whilst international standards (IS) development may (and should) be guided by the best information available, creating methods that are reliable and informative, the goal of such an IS is simply to determine safety and efficacy through the use of sufficiently discriminatory but easily reproduced methods [ ]. In many cases, to work to the best understanding would require too elaborate a device, be too expensive or time-consuming, or require too much expertise on the part of the user: the manufacturer and test house alike need to keep costs down whilst maintaining adequate reliability [ ]. For the quality control and certification purposes of standards, which are essentially based on the experience of products with a service history – and emphatically not on any theoretical behaviour or critical value – many test values are simply assumed to correlate with satisfactory performance because they have not caused trouble in the past (“grandfathered” products). This is to say that criteria are not determined by the best attainable, the state of the art. To suggest that a standard is a research tool betrays a lack of understanding. In fact, it may be fairly stated that IS methods are very often just simplified derivatives of best practice, at least in dentistry. They simply are not suited for fundamental studies.
I will not deny that IS methods have utility in making product comparisons under certain specific circumstances and motivations, for example when experimental formulations are studied with an eye on being certifiable and thus saleable, or indeed to examine their validity for QC purposes – “appropriate use” [ ] – but a very careful statement of reasoning would be required to make clear the purpose of not following best practice. As with any method used in a research study, it is incumbent on the researcher to ensure that a protocol is up-to-date, theoretically sound, relevant and interpretable, that is, fully justifiable, even with the caveats that cannot be avoided (there is no perfection anywhere). Editors and reviewers should understand this.
In fact, ISO makes no claim as to the applicability of any method, test or protocol for research purposes. Indeed, ISO disclaims any responsibility for how its standards are used. The adoption of any IS is a matter for specific rules or legislation in each jurisdiction (some adopt them, some do not), but such adoption places no obligation on researchers to follow those protocols.
To return to the key issue: nullius in verba. 1
1 “nothing in (mere) words”, the motto of The Royal Society: ‘take nobody’s word for it’, that is, uncritically.
We have an obligation to research the methods that we use, not just the materials they are applied to. Good science questions. We should not rely on precedent or assumed authority to justify adoption. It is essential to investigate assumptions and validity conditions, as we should do routinely when using statistical tests, but here, as in that area, this is more often honoured in the breach. Such habits are not commonly taught; they should be. It is too easy to do what we have always done. As reviewers, then, we should be more attentive to such deficiencies in our attempts to raise the standard of published work.