Qualitative content analysis is commonly used for analyzing qualitative data. However, few articles have examined the trustworthiness of its use in nursing science studies. The trustworthiness of qualitative content analysis is often presented by using terms such as credibility, dependability, confirmability, transferability, and authenticity. This article focuses on trustworthiness based on a review of previous studies, our own experiences, and methodological textbooks. Trustworthiness was described for the main qualitative content analysis phases from data collection to reporting of the results. We concluded that it is important to scrutinize the trustworthiness of every phase of the analysis process, including the preparation, organization, and reporting of results. Together, these phases should give a reader a clear indication of the overall trustworthiness of the study. Based on our findings, we compiled a checklist for researchers attempting to improve the trustworthiness of a content analysis study. The discussion in this article helps to clarify how content analysis should be reported in a valid and understandable manner, which would be of particular benefit to reviewers of scientific articles. Furthermore, we note that it is often difficult to evaluate the trustworthiness of qualitative content analysis studies because data collection methods and/or analysis procedures are described inadequately.

Although qualitative content analysis is commonly used in nursing science research, the trustworthiness of its use has not yet been systematically evaluated. There is an ongoing demand for effective and straightforward strategies for evaluating content analysis studies. A more focused discussion about the quality of qualitative content analysis findings is also needed, particularly as more articles have been published on the validity and reliability of quantitative content analysis (Neuendorf, 2011; Potter & Levine-Donnerstein, 1999; Rourke & Anderson, 2004) than of qualitative content analysis. Whereas many standardized procedures are available for performing quantitative content analysis (Baxter, 2009), this is not the case for qualitative content analysis.

Qualitative content analysis is one of several qualitative methods currently available for analyzing data and interpreting their meaning (Schreier, 2012). As a research method, it represents a systematic and objective means of describing and quantifying phenomena (Downe-Wamboldt, 1992; Schreier, 2012). A prerequisite for successful content analysis is that data can be reduced to concepts that describe the research phenomenon (Cavanagh, 1997; Elo & Kyngäs, 2008; Hsieh & Shannon, 2005) by creating categories, concepts, a model, a conceptual system, or a conceptual map (Elo & Kyngäs, 2008; Morgan, 1993; Weber, 1990). The research question specifies what to analyze and what to create (Elo & Kyngäs, 2008; Schreier, 2012). In qualitative content analysis, the abstraction process is the stage during which concepts are created. Usually, some aspects of the process can be readily described, but it also partially depends on the researcher’s insight or intuitive action, which may be very difficult to describe to others (Elo & Kyngäs, 2008; Graneheim & Lundman, 2004). From the perspective of validity, it is important to report how the results were created. Readers should be able to clearly follow the analysis and the resulting conclusions (Schreier, 2012).

Qualitative content analysis can be used in either an inductive or a deductive way. Both inductive and deductive content analysis processes involve three main phases: preparation, organization, and reporting of results. The preparation phase consists of collecting suitable data for content analysis, making sense of the data, and selecting the unit of analysis. In the inductive approach, the organization phase includes open coding, creating categories, and abstraction (Elo & Kyngäs, 2008). In deductive content analysis, the organization phase involves categorization matrix development, whereby all the data are reviewed for content and coded for correspondence to or exemplification of the identified categories (Polit & Beck, 2012). The categorization matrix can be regarded as valid if the categories adequately represent the concepts; from the viewpoint of validity, the matrix should accurately capture what was intended (Schreier, 2012). In the reporting phase, results are described by the content of the categories describing the phenomenon using the selected approach (either deductive or inductive).

There has been much debate about the most appropriate terms (rigor, validity, reliability, trustworthiness) for assessing the quality of qualitative research (Koch & Harrington, 1998). Criteria for reliability and validity are used in both quantitative and qualitative studies when assessing credibility (Emden & Sandelowski, 1999; Koch & Harrington, 1998; Ryan-Nicholls & Will, 2009). Such terms are mainly rooted in a positivist conception of research. According to Schreier (2012), there is no clear dividing line between qualitative and quantitative content analysis, and similar terms and criteria for reliability and validity are often used. Researchers have mainly used qualitative criteria when evaluating aspects of validity in content analysis (Kyngäs et al., 2011). The most widely used criteria for evaluating qualitative content analysis are those developed by Lincoln and Guba (1985), who used the term trustworthiness. The aim of trustworthiness in a qualitative inquiry is to support the argument that the inquiry’s findings are “worth paying attention to” (Lincoln & Guba, 1985). This is especially important when using inductive content analysis, as categories are created from the raw data without a theory-based categorization matrix. Thus, we decided to use such traditional qualitative research terms when identifying factors affecting the trustworthiness of data collection, analysis, and presentation of the results of content analysis.

Several other trustworthiness evaluation criteria have been proposed for qualitative studies (Emden, Hancock, Schubert, & Darbyshire, 2001; Lincoln & Guba, 1985; Neuendorf, 2002; Polit & Beck, 2012; Schreier, 2012). However, a common feature of these criteria is that they aspire to support trustworthiness by reporting the process of content analysis accurately. Lincoln and Guba (1985) have proposed four alternatives for assessing the trustworthiness of qualitative research, that is, credibility, dependability, confirmability, and transferability. In 1994, the authors added a fifth criterion referred to as authenticity. From the perspective of establishing credibility, researchers must ensure that those participating in research are identified and described accurately. Dependability refers to the stability of data over time and under different conditions. Confirmability refers to objectivity, that is, the potential for congruence between two or more independent people about the data’s accuracy, relevance, or meaning. Transferability refers to the potential for extrapolation. It relies on the reasoning that findings can be generalized or transferred to other settings or groups. The last criterion, authenticity, refers to the extent to which researchers, fairly and faithfully, show a range of realities (Lincoln & Guba, 1985; Polit & Beck, 2012).

Researchers often struggle with problems that compromise the trustworthiness of qualitative research findings (de Casterlé, Gastmans, Bryon, & Denier, 2012). The aim of the study described in this article was to describe trustworthiness based on the main qualitative content analysis phases, and to compile a checklist for evaluating the trustworthiness of a content analysis study. The primary research question was, “What is essential for researchers attempting to improve the trustworthiness of a content analysis study in each phase?” The knowledge presented was identified from a narrative literature review of earlier studies, our own experiences, and methodological textbooks. A combined search of Medline (Ovid) and CINAHL (EBSCO) was conducted, using the following key words: trustworthiness, rigor OR validity, AND qualitative content analysis. The following inclusion criteria were used: methodological articles focused on qualitative content analysis in the area of health sciences, published in English, with no restrictions on year. The search identified 12 methodological content analysis articles from databases and reference list checks (Cavanagh, 1997; Downe-Wamboldt, 1992; Elo & Kyngäs, 2008; Graneheim & Lundman, 2004; Guthrie, Yongvanich, & Ricceri, 2004; Harwood & Garry, 2003; Holdford, 2008; Hsieh & Shannon, 2005; Morgan, 1993; Potter & Levine-Donnerstein, 1999; Rourke & Anderson, 2004; Vaismoradi, Bondas, & Turunen, 2013). The reference lists of the selected papers were also checked, and qualitative research methodology textbooks were used when writing the synthesis of the review. The discussion in this article helps to clarify how content analysis should be reported in a valid and understandable manner, which, we expect, will be of particular benefit to reviewers of scientific articles.