Having just finished up my PhD proposal (bar a few tweaks here and there), I thought I’d write about something I discovered whilst putting it together. Something I found problematic, and something I felt should be highlighted, even if only by me, on this personal blog of mine.
Part of my data will be literature. To collect it, I planned on using a systematic literature review (SLR). SLRs claim to produce literature reviews which are objective, replicable, unbiased, scientific, and rigorous (Okoli & Schabram, 2010). However, just when I thought I was set, I came across a critique of SLRs, by Boell & Cecez-Kecmanovic (2015), which convincingly showed how the claims, for which SLRs are so heralded, do not hold. Before getting into this, let me give a (very) brief overview of SLRs.
Systematic Literature Reviews (SLRs)
While the SLR approach has its origins in the evidence-based medicine movement of the early 1990s, it was first proposed for use in Information Systems (IS) in 2000 by Atkins & Louw, with the first set of guidelines for IS being published in 2010 by Okoli & Schabram (Boell & Cecez-Kecmanovic, 2015). Figure 1, below, illustrates the SLR approach for IS (Please excuse the pixelation 😣).
One of the most attractive features of SLRs is their emphasis on literature searching processes (Step 3 in Figure 1), an aspect which remained largely unaddressed by other approaches (Boell & Cecez-Kecmanovic, 2014, 2015; Okoli & Schabram, 2010). Considering that the amount of published material is only increasing, “finding literature efficiently through searches in large literature reference database such as Scopus, Web of Science, or Google Scholar is increasingly important” (Boell & Cecez-Kecmanovic, 2014, p. 258; emphasis in the original).
As mentioned, another attraction of the SLR approach, and often the main reason for its (perceived) superiority, is the claim that it produces reviews which are objective, replicable, unbiased, scientific, and rigorous (Okoli & Schabram, 2010). It is on these characteristics, however, that things begin to fall apart.
Boell & Cecez-Kecmanovic’s (2015) critique of SLRs
With regards to objectivity and replicability, SLRs are only objective in terms of their procedures and the steps that they follow. Following these procedures and steps does not guarantee an objective literature review, because subjective decisions are made along the way. Okoli & Schabram (2010) themselves acknowledge that “[t]he practical screen is a very subjective part of the literature review” (p. 23). Even if different researchers happened to include the same publications in their reviews, they “will extract different meanings from texts, according to their background assumptions and their current priorities and beliefs” (Maclure, as cited in Boell & Cecez-Kecmanovic, 2015, p. 166). Because review outcomes rest on subjective judgement, SLRs are not and cannot be replicable. Replicability exists only in the “adherence to the tasks and to a style of reporting them, not the enactment of those tasks” (Sandelowski et al., as cited in Boell & Cecez-Kecmanovic, 2015, p. 166).

SLRs also imply that the summation of evidence from previous research can be objective. However, unlike in medicine, research problems in social science (and IS) rarely present their so-called evidence quantitatively, i.e. in a form that is easy to count and summarize objectively. Furthermore, while the medical field ranks ‘evidence’ according to a hierarchy (professional opinions being the weakest and randomized controlled trials the strongest), the social sciences (and IS) do not have approaches that are clearly superior to others. Boell & Cecez-Kecmanovic (2015) conclude that “it is impossible to ‘objectively’ select and integrate results from different and diverse studies as part of a review, in a way that is independent of researchers’ subjective judgement and understanding” (p. 167).
In terms of review outcomes being unbiased, Boell & Cecez-Kecmanovic (2015) say it is important to look at why SLRs were developed in the first place. Pre-SLRs, medical literature reviews relied heavily on specific top-end journals and databases. They excluded ‘grey’ literature, such as publications in foreign languages, negative or inconclusive findings (published less frequently than positive results), and unpublished works. To increase the rigour (and remove the bias) of literature reviews in the medical field, SLRs were developed to include as many publications pertaining to the research question or problem as possible, including the ‘grey’ literature mentioned above. However, as Boell & Cecez-Kecmanovic (2015) state, “SLRs in IS . . . often choose a particular set of journals or conferences as a foundation for their searches” (p. 167) and in doing so, reinforce “the very bias that SLRs in medicine were initially developed to minimize” (p. 167). As per the guidelines for conducting SLRs in IS research (Okoli & Schabram, 2010), literature can be excluded based on content, publication language, journal quality, authorship, setting, participants or subjects, program or intervention, research design or sampling methodology, date of publication, duration of data collection, and source of financial support. But these exclusion criteria are applied subjectively. Choosing a particular database or set of journals over another inevitably introduces bias. Arranging studies according to a hierarchy of scientific methods introduces another form of bias, one that is particularly problematic for IS research. The hierarchy of scientific methods places quantitative, positivist methods at the top and case studies and qualitative methods at the bottom. Yet many of the approaches employed in IS are borrowed from the social sciences, i.e. approaches of the qualitative type. As Boell & Cecez-Kecmanovic (2015) put it, “[p]rivileging particular studies . . . based on their approach and method . . . introduces a particular systematic bias, thus contradicting SLR’s major claim to non-bias” (p. 167).
Finally, there is the claim that SLRs are scientific and rigorous. This claim “is based on its [the SLR’s] methodical commitment to a prescribed protocol that is supposed to ensure scientific objectivity, replicability and control of the review process and thus guarantee high quality [and no bias] in the resulting literature review” (Boell & Cecez-Kecmanovic, 2015, p. 167). However, as highlighted above, these so-called protocols fail to exclude subjectivity and bias, and they do not ensure replicability of the review outcome. In other words, much like its objectivity, replicability, and non-bias, the rigor of the SLR approach extends only to its adherence to prescribed protocols and search processes. This differs from traditional literature reviews, for which rigor is “a much deeper concept that . . . encompasses an appropriate breadth and depth of analysis and classification, consistency and logic of argumentation, criticality of assessment of a body of knowledge and creativity and effectiveness of synthesis and presentation” (Boell & Cecez-Kecmanovic, 2015, p. 167).
Boell & Cecez-Kecmanovic (2015) do not, however, discard SLRs entirely. Rather, they conclude that SLRs are useful for answering very specific, narrow, closed, and unequivocal research questions, i.e. not questions of ‘how’ and ‘why’. Additionally, SLRs are only appropriate when (p. 169; emphasis in the original):
- “it is potentially possible to identify all literature addressing this particular topic or question and
- research questions can be answered by extracting evidence regarding particular relations and their strength”.
Boell & Cecez-Kecmanovic’s response to the SLR approach is their hermeneutic literature review (HLR) (Boell & Cecez-Kecmanovic, 2014).
The hermeneutic literature review (HLR) of Boell & Cecez-Kecmanovic (2014)
Unlike SLRs (which emphasize rigor, replicability, and objectivity of the review process and outcome), HLRs are interested in intellectual engagement with literature. For HLRs, there is no “correct or ultimate understanding [that] can be achieved . . . instead . . . [HLRs are] interested in the process of developing understanding” (Boell & Cecez-Kecmanovic, 2014, p. 259). This process of developing understanding is best illustrated in Figure 2, below.
One of the main differences between SLRs (Figure 1) and HLRs is the iterative nature of HLRs. Where SLRs follow a linear process of steps, HLRs follow circular processes, and do so repetitively. This iterative process is best described by Boell & Cecez-Kecmanovic (2014) in the following paragraph (p. 263):
“understanding of a text (a part) draws from the reader’s preunderstanding of a context (a whole); and vice versa, the understanding of a context (a whole) develops from understanding individual texts or text equivalents (parts)”. Furthermore, “the understanding of both the text (part) and the context (whole) are continually revised and mutually co-produced”. This is because “[a]s more texts are engaged with, the dialogical encounter is extended and the fusion of horizons broadened to texts reaching to each other’s as well as to the reader’s horizon”
Loosely, the search and acquisition circle refers to the parts while the analysis and interpretation circle refers to the whole. This means that the smaller search and acquisition circle forms part of the larger analysis and interpretation circle. These are referred to as the hermeneutic circles.
The hermeneutic literature review is seen “as an organic system that is constantly growing and changing” (Levy and Ellis, 2006, as cited in Boell & Cecez-Kecmanovic, 2014, p. 273) as a researcher’s interpretation and understanding of literature expands. This differs from SLRs, which emphasize the summation of so-called evidence from literature.
So what’s the problem?
For all their strengths, HLRs have one aspect that I found problematic. While Boell & Cecez-Kecmanovic (2015) take issue with the fact that SLRs in IS exclude grey literature and use particular protocols to select which literature is ‘relevant’, they don’t mention grey literature in their HLR approach, and they employ similar (if not the same) protocols for deciding which literature is relevant (Boell & Cecez-Kecmanovic, 2014).
On the one hand, Boell & Cecez-Kecmanovic (2014) state that “thorough and . . . complete literature review[s] can . . . not be limited to a specific or prescribed set of journals” (p. 272). At the same time, however, there are statements such as:
- “identifying additional publications through citation tracking . . . can also help to identify important authors, journals, and conferences” (p. 272, emphasis added).
- “for every area of research some ‘core authors’ can be identified” (p. 272, emphasis added).
- “it allows a researcher to identify ‘core journals’ for specific topics” (p. 272, emphasis added).
- “Using field search . . . one can then focus on core journals and important conferences only” (p. 272, emphasis added).
- “an important question is where to look for IS publications. A first point of reference can be the AIS electronic library” (p. 272, emphasis in original).
- “ProQuest’s ABI/Inform or the ACM digital library may be good places to start searching” (p. 272, emphasis in original).
It seems contradictory, to me, to criticize SLRs on their exclusion of grey literature and preference for certain literature when, bar one sentence pertaining to journals, the hermeneutic approach for conducting literature reviews and literature searches by Boell & Cecez-Kecmanovic (2014) is full of prescriptions on where to find ‘important’ or ‘core’ literature.
I am not suggesting that Boell & Cecez-Kecmanovic (2014) are being intentionally contradictory. I am simply pointing out that some of the flaws they saw in SLRs, I have seen in their HLR. As such, I propose using a modified HLR for my PhD work. This mHLR will include grey literature and avoid attributing more or less relevance to different sources of information. In this way, the collection of my data (literature) will be more aligned with *Ulrich’s (2005) notion that experts do not exist (also referred to as The Death of the Expert in Ulrich (2001)) and his position that “we can never [critically] justify our claims by referring to the methodologies we use or to the theories we rely on” (p. 4).
To recap, I wrote this post to highlight something I found problematic when putting my PhD proposal together. Something I felt should be shared. I am in no way trying to diss any of the authors mentioned here. Rather, my goal is to make young researchers, such as myself, aware that sometimes even the ‘experts’ make mistakes. It is up to us to build upon and extend their work by identifying their inconsistencies.
*Werner Ulrich’s critical systems heuristics (CSH) is the methodological core of my proposed PhD work. His work and philosophies have had a great impact on how I approach other research material. I highly recommend his work to anyone interested in critical self-reflectivity.
- Boell, S. K., & Cecez-Kecmanovic, D. (2014). A hermeneutic approach for conducting literature reviews and literature searches. Communications of the Association for Information Systems, 34(1), 257–286.
- Boell, S. K., & Cecez-Kecmanovic, D. (2015). On Being “Systematic” in Literature Reviews in IS. Journal of Information Technology, 30(2), 161–173.
- Okoli, C., & Schabram, K. (2010). A Guide to Conducting a Systematic Literature Review of Information Systems Research. Sprouts: Working Papers on Information Systems, 10(26), 1–49.
- Ulrich, W. (2001). The Quest for Competence in Systemic Research and Practice. Systems Research and Behavioral Science, 28(1), 3–28. https://doi.org/10.1002/sres.366
- Ulrich, W. (2005). A brief introduction to critical systems heuristics (CSH). Web Site of the ECOSENSUS Project, Open University, Milton Keynes, UK, 14(October), 1–15. Retrieved from http://projects.kmi.open.ac.uk/ecosensus/publications/ulrich_csh_intro.pdf