TABLE 18.1
Summary of Major Content Sections of a Research Report and Related Critical Appraisal Guidelines
Section / Critical Appraisal Questions to Guide Evaluation
Background and Significance (see Chapters 2 and 3)
Does the background and significance section make it clear why the proposed study was conducted?
Research Question and Hypothesis (see Chapter 2)
1. What research question(s) or hypothesis (or hypotheses) are stated, and are they appropriate to express a relationship (or difference) between an independent and a dependent variable?
2. Has the research question(s) or hypothesis (or hypotheses) been placed in the context of an appropriate theoretical framework?
3. Has the research question(s) or hypothesis (or hypotheses) been substantiated by adequate experiential and scientific background material?
4. Has the purpose, aim(s), or goal(s) of the study been substantiated?
5. Is each research question or hypothesis specific to one relationship so that each can be either supported or not supported?
6. Given the level of evidence suggested by the research question, hypothesis, and design, what is the potential applicability to practice?
Review of the Literature (see Chapters 3 and 4)
1. Does the search strategy include an appropriate and adequate number of databases and other resources to identify key published and unpublished research and theoretical resources?
2. Is there an appropriate theoretical/conceptual framework that guides development of the research study?
3. Are both primary source theoretical and research literature used?
4. What gaps or inconsistencies in knowledge or research does the literature review uncover so that the proposed study builds on earlier studies?
5. Does the review include a summary/critique of the studies that addresses the strengths and weaknesses or limitations of each study?
6. Is the literature review presented in an organized format that flows logically?
7. Is there a synthesis summary that presents the overall strengths and weaknesses and arrives at a logical conclusion that generates hypotheses or research questions?
Methods
Internal and External Validity (see Chapter 8)
1. What are the controls for the threats to internal validity? Are they appropriate?
2. What are the controls for the threats to external validity? Are they appropriate?
3. What are the sources of bias, and are they dealt with appropriately?
4. How do the threats to internal and external validity affect the strength and quality of evidence?
5. Was the fidelity of the intervention maintained, and if so, how?
Research Design (see Chapters 9 and 10)
1. What type of design is used in the study?
2. Is the rationale for the design appropriate?
3. Does the design used seem to flow from the proposed research question(s) or hypothesis (or hypotheses), theoretical framework, and literature review?
4. What types of controls are provided by the design that increase or decrease bias?
Sampling (see Chapter 12)
1. What type of sampling strategy is used? Is it appropriate for the design?
2. How was the sample selected? Was the strategy used appropriate for the design?
3. Does the sample reflect the population as identified in the research question or hypothesis?
4. Is the sample size appropriate? How is it substantiated? Was a power analysis necessary? (A minimal power-analysis sketch follows this list.)
5. To what population may the findings be generalized?
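Question 4 asks whether a power analysis was needed to justify the sample size. As a minimal sketch only, with illustrative values that are assumptions rather than figures from any study being appraised, the following Python snippet uses statsmodels to estimate the per-group sample size for a two-group comparison:

```python
# Minimal power-analysis sketch (illustrative values, not from any study being appraised).
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Assumed inputs: medium effect size (Cohen's d = 0.5), alpha = 0.05, power = 0.80.
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.80,
                                   ratio=1.0, alternative='two-sided')

print(f"Approximate sample size needed per group: {n_per_group:.0f}")
```

When appraising a report, the effect size, alpha, and power used to justify the sample should come from the article itself rather than the defaults assumed here.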
Legal-Ethical Issues (see Chapter 13)
1. How have the rights of subjects been protected?
2. What indications are given that institutional review board approval has been obtained?
3. What evidence is given that informed consent of the subjects has been obtained?
Data Collection Methods and Procedures (see Chapter 14)
1. Physiological measurement:
a. Is a rationale given for why a particular instrument or method was selected? If so, what is it?
b. What provision, if any, is made for maintaining the accuracy of the instrument and its use?
2. Observation:
a. Who did the observing?
b. How were the observers trained and supervised to minimize bias?
c. Was there an observation guide?
d. Was interrater reliability calculated? (A minimal interrater-agreement sketch follows this list.)
e. Is there any reason to believe that the presence of observers affected the behavior of the subjects?
3. Interviews:
a. Who were the interviewers? How were they trained and supervised to minimize bias?
b. Is there any evidence of interview bias, and if so, what is it? How does it affect the strength and quality of evidence?
4. Instruments:
a. What is the type and/or format of the instruments (e.g., Likert scale)?
b. Are the operational definitions provided by the instruments consistent with the conceptual definition(s)?
c. Is the format appropriate for use with this population?
d. What type of bias is possible with self-report instruments?
5. Available data and records:
a. Are the records or data sets used appropriate for the research question(s) or hypothesis (or hypotheses)?
b. What sources of bias are possible with use of records or existing data sets?
6. Overall, how was intervention fidelity maintained?
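Item 2d asks whether interrater reliability was calculated for observational data. As a minimal illustration with hypothetical ratings (not data from any study), Cohen's kappa can be computed for two observers coding the same events:

```python
# Hypothetical ratings from two observers coding the same 10 observations.
from sklearn.metrics import cohen_kappa_score

observer_a = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
observer_b = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]

# Cohen's kappa corrects percent agreement for agreement expected by chance.
kappa = cohen_kappa_score(observer_a, observer_b)
print(f"Cohen's kappa: {kappa:.2f}")
```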
Reliability and Validity (see Chapter 15)
1. Was an appropriate method used to test the reliability of the instrument(s)? (A minimal internal-consistency sketch follows this list.)
2. Were the reliability and validity of the instrument(s) adequate?
3. Was an appropriate method used to test the validity of the instrument(s)?
4. Have the strengths and weaknesses related to reliability and validity of the instruments been presented?
5. What kinds of threats to internal and external validity are presented as weaknesses in reliability and/or validity?
6. How do the reliability and/or validity affect the strength and quality of evidence provided by the study findings?
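Question 1 asks whether an appropriate method was used to test instrument reliability. As a minimal sketch with made-up item scores (not drawn from any study), internal consistency can be estimated with Cronbach's alpha, computed here directly from its formula:

```python
# Cronbach's alpha for a small, hypothetical 4-item scale (rows = respondents, columns = items).
import numpy as np

scores = np.array([
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 3],
    [4, 4, 5, 5],
])

k = scores.shape[1]                          # number of items
item_vars = scores.var(axis=0, ddof=1)       # variance of each item
total_var = scores.sum(axis=1).var(ddof=1)   # variance of the total score

# alpha = k / (k - 1) * (1 - sum of item variances / variance of total score)
alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
print(f"Cronbach's alpha: {alpha:.2f}")
```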
Data Analysis (see Chapter 16)
1. Were the descriptive or inferential statistics appropriate to the level of measurement for each variable? (A minimal test-selection sketch follows this list.)
2. Are the inferential statistics appropriate for the type of design, research question(s), or hypothesis (or hypotheses)?
3. If tables or figures are used, do they meet the following standards?
a. They supplement and economize the text.
b. They have precise titles and headings.
c. They do not repeat the text.
4. Does the testing clearly indicate whether each research question or hypothesis was supported or not supported?
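Question 1 asks whether the statistics fit each variable's level of measurement. As a minimal, hypothetical illustration (the data and group labels are invented, not taken from any report), a nominal outcome is compared with a chi-square test and an interval-level outcome with an independent t test:

```python
# Choosing a test by level of measurement (hypothetical data).
import numpy as np
from scipy.stats import chi2_contingency, ttest_ind

# Nominal outcome (e.g., readmitted yes/no by group): chi-square test of independence.
contingency = np.array([[18, 12],    # intervention group: yes, no
                        [10, 20]])   # control group: yes, no
chi2, p_nominal, dof, expected = chi2_contingency(contingency)
print(f"Chi-square = {chi2:.2f}, p = {p_nominal:.3f}")

# Interval-level outcome (e.g., pain scores): independent-samples t test.
intervention = [3.1, 2.8, 3.5, 2.9, 3.0, 2.7]
control = [3.9, 4.2, 3.8, 4.0, 3.7, 4.1]
t_stat, p_interval = ttest_ind(intervention, control)
print(f"t = {t_stat:.2f}, p = {p_interval:.3f}")
```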
Conclusions, Implications, and Recommendations (see Chapter 17)
1. Are the results of each research question or hypothesis presented objectively?
2. Is the information regarding the results concisely and sequentially presented?
3. If the data are supportive of the hypothesis or research question, does the investigator provide a discussion of how the theoretical framework was supported?
4. How does the investigator identify the study's strengths, weaknesses, and limitations (e.g., threats to internal and external validity) and suggest possible solutions for future studies?
5. Does the researcher discuss the study’s relevance to clinical practice?
6. Are any generalizations made, and if so, are they made within the scope of the findings?
7. Are any recommendations for future research stated or implied?
Applicability to Nursing Practice (see Chapter 17)
1. What are the risks/benefits involved for patients if the findings are applied in practice?
2. What are the costs/benefits of applying the findings of the study?
3. Do the strengths of the study outweigh the weaknesses?
4. What are the strength, quality, and consistency of evidence provided by the study findings?
5. Are the study findings applicable in terms of feasibility?
6. Are the study findings generalizable?
7. Would it be possible to replicate this study in another clinical setting?