I have a talk coming up titled "What standards should be set for qualitative research conducted in a science faculty: Psychology, rigour and the politics of evidence", which I will present at the Australian Conference on Science and Mathematics Education 2012 in Sydney, Australia. I haven't written it yet, but I have been reading a great paper (below) and a few others.
They propose the following standards to judge good qual research:
Clarification and justification
As in all forms of research, clarity of research question reflected in the aims of the study is essential for evaluating results and their interpretation. The demonstration of theoretical rigour (referring to the soundness of fit of the research question, aims and the choice of methods appropriate to the research problem11) is extremely important.
There is a wide variety of named qualitative approaches that are underpinned by particular theoretical perspectives. In addition, the researcher may use basic field research (question, investigation, interpretation). Regardless of the theoretical approach used, the choice requires justification in reference to the research question of the study.
Procedural, or methodological, rigour concerns the transparency or “explicitness” of the description of the way the research was conducted. It involves detailing issues of accessing subjects; development of rapport and trust; how data are collected, recorded, coded and analysed; and accounts of the manner in which errors or subject refusals are dealt with.4,11,22 In this regard, readers and reviewers may ask the following questions while examining descriptions of qualitative methods: How were participants/settings accessed? Who was interviewed/observed? How often? For how long? What interview questions were asked? What was the purpose of any observation? Which policy documents/case notes were accessed? How were they assessed? How was collected data managed?
There are a number of commonly available, non-probability sampling approaches. Maximum variation sampling seeks representativeness of all aspects of the topic in terms of participants. Homogenous sampling consists of the selection of a group fitting specified criteria. Snowball sampling involves networking from one difficult-to-access type of participant to a wider range of participants. Finally, convenience sampling involves studying easily accessed individuals or groups. This last technique obviously presents its own ethical dilemmas of the “insider” type and is possibly the weakest form of sampling in terms of allowing conceptual generalisability.4,15,16,22,23 Maximum variation is the ideal when a holistic overview of the phenomenon is sought; for instance, the question of how a particular hospital department operates may involve sampling in the wider organisation as well as within the individual department and among recipients of services.
Simply mentioning the sampling strategy in the methods section of a qualitative research paper is not sufficient. The key findings of the research need to be evaluated in reference to the diverse characteristics of the research subjects. Through constantly comparing the experiences and responses of the participants against each other, subtle but significant differences can be uncovered that can generate profound insights into the phenomena under study.19
Interpretative rigour relates to demonstrating the data/evidence as fully as possible. In qualitative research, a commonly used concept is inter-rater reliability. This refers to using a type of researcher triangulation by which multiple researchers are involved in the analytical process. This is an attempt to increase the validity and reliability of the study19 through the provision of a more complex and nuanced understanding of the possible interpretations of the objects of the research.11 In contrast to the quantitative research paradigm, what is important in this process is not the level of consensus, but the opportunity for discussion among analysts to provide opportunities for developing further coding.19
A related technique is that of respondent validation, or member checking. This entails offering subjects interviewed the opportunity to view and amend their transcripts as a type of validity.12 However, this approach does have limitations due to the evolution over time of the positions and purposes of the researchers and participants, thereby potentially affecting interpretations and accounts. Respondent validation should be thought of as part of a process of reducing error, which involves the generation of further original data, which then requires interpretation.8
Other techniques that enhance interpretative rigour are the differing forms of triangulation: data (multiple evidentiary sources; ie, documents, interviews, survey data, observation), methods (multiple methods), and theory (multiple theoretical and conceptual frames applied to the research to enhance insights into phenomena). Using these forms of triangulation allows the development of a comprehensive understanding of the phenomena and can ameliorate the potential bias of simply using one method.4,5,8,11,16,22
In the interpretive process, accounts of “negative” or “deviant” cases are especially important. These are explanations pertaining to data or evidence that contradict the researchers’ overall explanatory account of the phenomena.5
In sum, a clear description of the forms of analysis used, the analytical process, and its major outcomes in terms of findings is needed to ensure quality for the author, and to enable the reader to assess the analytical quality of the research.
Reflexivity and evaluative rigour
Reflexivity is where researchers openly acknowledge and address the influence that the relationship among the researchers, the research topic and subjects may have on the results.4,11,13 Fundamentally, reflexivity requires a demonstration by the researchers that they are aware of the sociocultural position they inhabit and how their value systems might affect the selection of the research problem, research design, collection and analysis of data.15 It also refers to an awareness by the researchers of the social setting of the research and of the wider social context in which it is placed.4
Evaluative rigour refers to ensuring that the ethical and political aspects of research are addressed. Typically, this refers to proper ethics approval from appropriate committees covering confidentiality, informed consent and steps to avoid possible adverse effects on the subjects. Importantly, where appropriate, relevant community leaders should be consulted in the design and conduct of the research.11 Researchers should revisit their actions and interactions within the research process to ensure as “accurate” as possible portrayal of the production of their findings.
Conceptual generalisability and transferability refer to how well the study’s findings inform health care contexts that differ from that in which the original study was undertaken.4 For example, a review of data from qualitative studies was conducted on a wide variety of doctor–patient interactions about medication compliance.24 The authors examined barriers to patients taking prescribed medication as directed by their doctors and found that patients were often inclined to resist taking medicines, not because of problems with the patients, doctors or systems, but because patients were concerned about the medicines. This type of study allows for the construction and transfer of general policy on medicine-taking (through, for example, less emphasis on patient behaviour modification and more emphasis on production of safer medicines) and practice (suggesting, for instance, that doctors should assist lay evaluations through provision of more information, support, feedback and safe prescribing practices).
Here is an approach that perhaps gives more consideration to the many different types of qual research:
Martyn Hammersley, The Open University, UK. International Journal of Research & Method in Education, Taylor & Francis, 2007. doi:10.1080/17437270701614782
This article addresses the perennial issue of the criteria by which qualitative research should be evaluated. At the present time, there is a sharp conflict between demands for explicit criteria, for example in order to serve systematic reviewing and evidence-based practice, and arguments on the part of some qualitative researchers that such criteria are neither necessary nor desirable. At issue here, in part, is what the term ‘criterion’ means, and what role criteria could play in the context of qualitative enquiry. Equally important, though, is the question of whether a single set of criteria is possible across qualitative research, given the fundamental areas of disagreement within it. These reflect divergent paradigms framed by value assumptions about what is and is not worth investigation. In addition, there are differences in methodological orientation: over what counts as rigorous enquiry, realism versus constructionism, and whether the goal of research is to produce knowledge or to serve