Content analysis is a research method for studying documents and communication artifacts, which might be texts of various formats, pictures, audio or video. Social scientists use content analysis to examine patterns in communication in a replicable and systematic manner. One of the key advantages of using content analysis to analyse social phenomena is its non-invasive nature, in contrast to simulating social experiences or collecting survey answers.
Practices and philosophies of content analysis vary between academic disciplines. They all involve systematic reading or observation of texts or artifacts which are assigned labels (sometimes called codes) to indicate the presence of interesting, meaningful pieces of content. By systematically labeling the content of a set of texts, researchers can analyse patterns of content quantitatively using statistical methods, or use qualitative methods to analyse meanings of content within texts.
Computers are increasingly used in content analysis to automate the labeling (or coding) of documents. Simple computational techniques can provide descriptive data such as word frequencies and document lengths. Machine learning classifiers can greatly increase the number of texts that can be labeled, but the scientific utility of doing so is a matter of debate.
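The descriptive measures mentioned above can be computed with very little code. The following is a minimal sketch (the tokenization rule and the sample sentence are illustrative choices, not part of any standard tool):

```python
# Simple descriptive content statistics: document length in tokens
# and the most frequent words. Tokenization here is a crude
# lowercase word match, chosen only for illustration.
import re
from collections import Counter

def describe(text):
    tokens = re.findall(r"[a-z']+", text.lower())
    return len(tokens), Counter(tokens).most_common(3)

length, top = describe("To be or not to be, that is the question.")
print(length, top)  # 10 [('to', 2), ('be', 2), ('or', 1)]
```

Real projects typically use a more careful tokenizer, but even this level of counting supports comparisons of document length and vocabulary across a corpus.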
Goals of Content Analysis
Content analysis is best understood as a broad family of techniques. Effective researchers choose techniques that best help them answer their substantive questions. That said, according to Klaus Krippendorff, six questions must be addressed in every content analysis:
- Which data are analyzed?
- How are the data defined?
- From what population are data drawn?
- What is the relevant context?
- What are the boundaries of the analysis?
- What is to be measured?
The simplest and most objective form of content analysis considers unambiguous characteristics of the text such as word frequencies, the page area taken by a newspaper column, or the duration of a radio or television program. Analysis of simple word frequencies is limited because the meaning of a word depends on the surrounding text. Keyword-in-context (KWIC) routines address this by displaying each occurrence of a word together with its textual context, which helps resolve ambiguities such as those introduced by synonyms and homonyms.
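A keyword-in-context routine can be sketched in a few lines. This is an illustrative toy, not any particular KWIC tool; the window size and example sentence are arbitrary choices:

```python
# Minimal keyword-in-context (KWIC) sketch: for each occurrence of a
# target word, show a window of surrounding words so a coder can judge
# its meaning in context (e.g. "bank" as institution vs. riverbank).
def kwic(text, keyword, window=3):
    tokens = text.lower().split()
    lines = []
    for i, tok in enumerate(tokens):
        if tok == keyword:
            left = " ".join(tokens[max(0, i - window):i])
            right = " ".join(tokens[i + 1:i + 1 + window])
            lines.append(f"{left} [{tok}] {right}")
    return lines

text = "The bank raised rates. We sat on the bank of the river."
for line in kwic(text, "bank"):
    print(line)
```

Each printed line centers one occurrence of the keyword, making it easy to see that the two uses of "bank" above carry different meanings.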
A further step in analysis is the distinction between dictionary-based (quantitative) approaches and qualitative approaches. Dictionary-based approaches set up a list of categories, often derived from a word frequency list, and then examine how words belonging to those categories are distributed across the texts. Whereas quantitative content analysis in this way transforms observations of coded categories into statistical data, qualitative content analysis focuses more on intentionality and its implications. There are strong parallels between qualitative content analysis and thematic analysis.
Computational Tools
More generally, content analysis is research using the categorization and classification of speech, written text, interviews, images, or other forms of communication. In its early days, applied to the first newspapers at the end of the 19th century, analysis was done manually by measuring the number of lines and the amount of space given to a subject. With the rise of common computing facilities like PCs, computer-based methods of analysis have grown in popularity. Answers to open-ended questions, newspaper articles, political party manifestos, medical records, or systematic observations in experiments can all be subject to systematic analysis of textual data.
Once the contents of communication are available as machine-readable text, the input can be analyzed for frequencies and coded into categories to support inferences.
Reliability
Robert Weber notes: "To make valid inferences from the text, it is important that the classification procedure be reliable in the sense of being consistent: Different people should code the same text in the same way". Validity, inter-coder reliability and intra-coder reliability have been the subject of intense methodological research over many years. Neuendorf suggests that when human coders are used in content analysis, at least two independent coders should be used. Reliability of human coding is often measured using a statistical measure of inter-coder reliability, or "the amount of agreement or correspondence among two or more coders". Lacy and Riffe identify the measurement of inter-coder reliability as a strength of quantitative content analysis, arguing that, if content analysts do not measure inter-coder reliability, their data are no more reliable than the subjective impressions of a single reader.
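Two common reliability measures for two coders who labeled the same items are simple percent agreement and Cohen's kappa, which corrects for agreement expected by chance. A minimal sketch (the label sequences are invented for illustration):

```python
# Inter-coder reliability for two coders over the same items:
# percent agreement and Cohen's kappa (chance-corrected agreement).
from collections import Counter

def percent_agreement(a, b):
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    n = len(a)
    po = percent_agreement(a, b)  # observed agreement
    ca, cb = Counter(a), Counter(b)
    # chance agreement: probability both coders independently assign
    # the same label, given each coder's marginal label frequencies
    pe = sum((ca[k] / n) * (cb[k] / n) for k in set(a) | set(b))
    return (po - pe) / (1 - pe)

coder1 = ["pos", "pos", "neg", "neg", "pos", "neg"]
coder2 = ["pos", "neg", "neg", "neg", "pos", "neg"]
print(percent_agreement(coder1, coder2))  # ≈ 0.833
print(cohens_kappa(coder1, coder2))       # ≈ 0.667
```

Other coefficients, such as Krippendorff's alpha, generalize this idea to more than two coders, missing data, and different levels of measurement.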
Kinds of Text
There are five types of texts in content analysis:
- written text, such as books and papers
- oral text, such as speech and theatrical performance
- iconic text, such as drawings, paintings, and icons
- audio-visual text, such as TV programs, movies, and videos
- hypertexts, which are texts found on the Internet
History
Over the years, content analysis has been applied to a variety of fields. Hermeneutics and philology have long used content analysis to interpret sacred and profane texts and, in many cases, to attribute texts' authorship and authenticity.
In recent times, particularly with the advent of mass communication, content analysis has seen increasing use in analyzing and understanding media content and media logic. The political scientist Harold Lasswell formulated the core questions of content analysis in its early-to-mid 20th-century mainstream version: "Who says what, to whom, why, to what extent and with what effect?". The strong emphasis on a quantitative approach initiated by Lasswell was carried forward by another "father" of content analysis, Bernard Berelson, who proposed a definition of content analysis that is, from this point of view, emblematic: "a research technique for the objective, systematic and quantitative description of the manifest content of communication".
Quantitative content analysis has enjoyed renewed popularity in recent years thanks to technological advances and fruitful application in mass communication and personal communication research. Content analysis of textual big data produced by new media, particularly social media and mobile devices, has become popular. These approaches take a simplified view of language that ignores the complexity of semiosis, the process by which meaning is formed out of language. Quantitative content analysts have been criticized for limiting the scope of content analysis to simple counting, and for applying the measurement methodologies of the natural sciences without reflecting critically on their appropriateness to social science. Conversely, qualitative content analysts have been criticized for being insufficiently systematic and too impressionistic. Krippendorff argues that quantitative and qualitative approaches to content analysis tend to overlap, and that there can be no generalisable conclusion as to which approach is superior.
Recently, Arash Heydarian Pashakhanlou has argued for combining quantitative, qualitative, manual and computer-assisted content analysis in a single study to offset the weaknesses of a partial content analysis and enhance the reliability and validity of a research project.
Content analysis can also be described as studying traces, which are documents from past times, and artifacts, which are non-linguistic documents. Texts are understood to be produced by communication processes in a broad sense of that phrase, often gaining meaning through abduction.
Uses
Holsti groups fifteen uses of content analysis into three basic categories:
- make inferences about the antecedents of a communication
- describe and make inferences about characteristics of a communication
- make inferences about the effects of a communication
He also places these uses into the context of the basic communication paradigm.
The following table shows fifteen uses of content analysis in terms of their general purpose, element of the communication paradigm to which they apply, and the general question they are intended to answer.
See also
- Hermeneutics
- Donald Wayne Foster
- Transition words
- Text mining
- The Polish Peasant in Europe and America
References
Further reading
- Pashakhanlou, Arash Heydarian (2017). "Fully integrated content analysis in International Relations". International Relations. 31 (4): 447-465. doi:10.1177/0047117817723060.
- Graneheim, Ulla Hällgren; Lundman, Berit (2004). "Qualitative content analysis in nursing research: concepts, procedures and measures to achieve trustworthiness". Nurse Education Today. 24 (2): 105-112. doi:10.1016/j.nedt.2003.10.001.
- Budge, Ian (ed.) (2001). Mapping Policy Preferences. Estimates for Parties, Electors and Governments 1945-1998. Oxford, UK: Oxford University Press. ISBN 978-0199244003.
- Krippendorff, Klaus, and Bock, Mary Angela (eds) (2008). The Content Analysis Reader. Thousand Oaks, CA: Sage. ISBN 978-1412949668.
- Roberts, Carl W. (ed.) (1997). Text Analysis for the Social Sciences: Methods for Drawing Inferences from Texts and Transcripts. Mahwah, NJ: Lawrence Erlbaum. ISBN 978-0805817348.
- Wimmer, Roger D. and Dominick, Joseph R. (2005). Mass Media Research: An Introduction, 8th ed. Belmont, CA: Wadsworth. ISBN 978-0534647186.
Source of article: Wikipedia