“Tool Without a Handle: Trustworthy Tools”
“‘What is truth?’ said jesting Pilate, who did not stay for an answer.” – Francis Bacon, “Of Truth,” Essays, Civil and Moral (1625).
This blog previously dealt with one flavor of “fake news”: provocative fictions that can prompt panic and violence. These issues have not gone away; one recent headline suggested a false report prompted a nuclear threat. In this blog, though, I’ll deal with a related issue: propaganda.
“Propaganda” admits of a variety of definitions. Here, I’ll use it to mean misleading or biased information used to influence public opinion or political outcomes, or even to incite violent action. It is a form of “news” in that propaganda often tells stories (a primary way humans relate to one another and form opinions), promotes bonding over shared grievances, celebrates heroes, and mocks or demonizes enemies.
Jason Stanley, author of “How Propaganda Works,” noted that the goals of propaganda are to define a version of reality and then shift public opinion toward a value system that responds to that view of things. Networked information tools – particularly where social media groups together people with affiliated interests – are very powerful means for both framing reality and aligning people to associated value systems. The distributed nature of information tools creates opportunities for people not simply to consume propaganda and re-distribute it piecemeal (e.g., by word of mouth), but to re-distribute it whole, to large audiences, instantaneously. Bots and fake accounts can accelerate the spread of propaganda.
The Data & Society group recently published a collection of essays illustrating this relationship between technology and propaganda, including both contemporary issues and the historical context of propaganda. Among the points noted there: the role Internet tools play in self-segregation of the US population, and the influence of algorithms in fostering reality framing.
My observations here regarding propaganda touch on concerns I’ll develop in a following post on the contribution of Internet tools to both the collection and publication of stolen information. By the very nature of its illicitness and its secrecy, stolen information is particularly attractive for achieving the goals of propaganda. Secret or private information, stolen and then revealed, can promote bonding over shared grievances by fueling perceptions of conspiracy. It can be used to mock, demonize, or even blackmail enemies. Particularly if a data subject objects to release of private information, it can be spun to suggest that person is hiding something shameful.
Unlike theft, “propaganda” is not restricted by U.S. law – nor am I suggesting it should be. But there are objections to propaganda in international law. As noted in the last blog, use of Internet tools to distribute misleading content is generally beyond the scope of regulation except where a foreseeable harm results. And it’s with foreseeable harms in mind that the International Covenant on Civil and Political Rights (“ICCPR”) prohibits “propaganda for war,” along with content that advocates national, religious or ethnic hatred that constitutes an incitement to violence. The basis for this proscription can be traced to core principles of international law, codified in the UN Charter itself: if aggressive war is prohibited in international law, so too are actions that constitute incitement to aggressive war.
While adding incitement concepts to the ICCPR can be compatible with rights to free expression (ICCPR Article 20 immediately follows Article 19 which protects free expression), free expression concerns understandably blunt the vigor of Article 20 as applied. International legal prohibitions on propaganda for war are limited by both practical concerns and by various treaty reservations, primarily aimed at protecting rights to free expression (by both individuals and states). And ironically, it was the Soviet states that embraced a prohibition on propaganda for war because it was an “acceptable” rationale for stifling dissent.
These challenges with ICCPR Article 20 illustrate challenges that would pertain to any attempt to restrict propaganda. Additional challenges include the unavoidable role of political perspective in determining what propaganda is worthy of sanction. At the time the ICCPR was drafted, it was a long-standing concern of states that propaganda, particularly if targeting political minorities, was used to foment wars of revolution. At the same time, wars of revolution are often precisely how totalitarian regimes are overthrown, and how colonies gained independence. What is worthy of sanction is inextricable from how one views the merits of such political change.
Moreover, nearly all states engage in some form of public information efforts, which naturally seek to influence public opinion favorably towards that state and to frame negatively the actions of parties with whom it has objections. Accordingly, regulatory sanctions on propaganda are likely simply to be used by one political faction against another, rather than reducing misleading information. It is preferable to focus political parties on engaging in more responsible speech.
And, as Caroline Jack points out, it is not only state-controlled media that warrants examination in considering “propaganda” issues. Institutions, publishers, activists, commercial actors, and individuals all create and distribute information, including asserted facts and narratives, and all such actors can play some role in harms deriving from propaganda: legitimizing illiberal agendas (such as mass regulation of particular ethnic or religious groups), or weakening the power of science and reason on which a deliberative democracy depends. The effect can be cyclical: as deliberative debate becomes more difficult, that creates attractive opportunities for authoritarian leaders who disparage dissenters, and seek to consolidate power rather than foster democratic debate.
There are no simple solutions to these considerations. More information is not a clear answer: with Internet tools able to generate information sources from anyone and anywhere (with no standard of credibility or accuracy required), a common set of facts becomes more difficult to achieve. More skeptical approaches to information may not be a clear answer either: as danah boyd asks, it is possible that fostering critical habits towards information sources can backfire, by undermining trust in otherwise trustworthy tools. Rather, any solutions should start with how propaganda works: less by persuading readers of objective facts than by persuading human minds to assign meaning to asserted events and opinions.
The Economist and many other commentators have observed (with concern) that we may be living in a “post-truth” age, and “post-truth” was the Oxford Dictionaries “Word of the Year” for 2016. But “post-truth” was preceded, by 50 years or more, by “post-modernism” (and a related concept, deconstructionism), which posits serious doubts about objective truth. The “post-truth” age has been with us for quite some time; in many disciplines it has been asserted that the “truth” is essentially nothing more than observations and probabilities.
To over-simplify the views of one key deconstructionist writer, Jacques Derrida, words themselves do not signify an objective thing but instead simply identify other signifiers (i.e., other words), rendering the meaning of any given text unstable. Similarly, I’ve noted before the principles of quantum mechanics in physics – in particular that the act of observation changes what can be perceived about the object observed. So as it is for particles it is for “news” – both the subjective views of the reader and the act of reading the content affect what is perceived.
Additionally, scholars of emotional intelligence and others (including the Stoic school of ancient philosophers) have noted that humans form opinions through this process of observation and interpretation, and then react emotionally to the interpretations assigned. Fake or not, “news” itself has no meaning; our minds assign a meaning, based on rough analogy to prior events or even childhood experiences. Propaganda achieves its harmful effects by the meaning readers assign to the content.
As such, responses to propaganda should focus on the process by which readers assign meaning, and how that process leads to anxiety or anger. As danah boyd observed, cultural conditioning shapes the meaning different people (and groups) assign to events, and so addressing these concerns similarly involves cultural change about how we make sense of information, whom we trust, and how we understand our own role in grappling with information.
This means that focusing on propaganda content, including efforts to flag the truth-value of such content, is an incomplete approach. The nature of the content cannot, by itself, consistently identify its propaganda value. For example, the exact same online videos of ISIS atrocities are used by ISIS to convey one meaning (“we are powerful, fear us”) and by the US government to convey another (“ISIS is immoral, and contrary to Islam”). The impact of the video depends on the meaning given by the viewer.
In this light, efforts at media literacy are not mis-focused, but should focus less on fostering skepticism (that news does not mean what it says, or has a hidden agenda or bias) and more on fostering patience in assigning meaning. Patience in assigning meaning involves withholding full judgment and being able to observe and control one’s emotional reactions to content: good skills for everyone to learn. As such, teaching emotional intelligence should be linked to efforts to improve media literacy, and to improve political discourse in general.
In a following post, I’ll address uses of Internet tools more susceptible to bright-line prohibitions: intentional unauthorized access to private information and publication of that information. Stolen information is useful for propaganda, and also the means of its production (theft) are pernicious if widespread. Thus, even in cases where there is arguably public value in its publication, the calculus of whether to acquire and/or publish stolen information needs to consider offsetting costs with respect to national security, personal privacy, and encouragement for further hacking and theft, which can undermine the trust in Internet tools on which so much of their utility depends.
See Black’s Law Dictionary: http://thelawdictionary.org/propaganda/ (“A message that is aimed at a specific audience that will try to change their thinking to that of the person releasing such propaganda. It will often contain disinformation to promote a certain view point in politics”).
http://press.princeton.edu/titles/10448.html; see reviews at http://www.nytimes.com/2016/12/26/books/how-propaganda-works-is-a-timely-reminder-for-a-post-truth-age.html; https://weeklysift.com/2015/10/19/how-propaganda-works/
Caroline Jack, “What’s Propaganda Got To Do With It?” https://points.datasociety.net/whats-propaganda-got-to-do-with-it-5b88d78c3282#.6ktrit1gz
danah boyd, “Why America is Self-Segregating,” https://points.datasociety.net/why-america-is-self-segregating-d881a39273ab#.xv8wj694g
Ethan Zuckerman, “Ben Franklin, the Post Office and the Digital Public Sphere,” https://points.datasociety.net/ben-franklin-the-post-office-and-the-digital-public-sphere-c5eee8dcb658#.g7bpzkkwb
See Arthur Larson, “The Present Status of Propaganda in International Law,” Law and Contemporary Problems Vol. 31, No. 3, International Control of Propaganda (Summer, 1966), pp. 439-451; online at: http://scholarship.law.duke.edu/cgi/viewcontent.cgi?article=3117&context=lcp
In the context of Article 20’s prohibitions on “hate speech,” one commentator noted that while those prohibitions are coherent with protections on free expression, in practice the effectiveness of legal prohibitions on hate speech (as compared to counter-speech remedies) is limited, in part because as a practical matter some states can enforce such prohibitions selectively – protecting interests useful to the state while failing to protect minority rights. See https://www.article19.org/data/files/pdfs/conferences/iccpr-links-between-articles-19-and-20.pdf
See Richard B. Collins, “Propaganda for War and Transparency,” http://www.law.du.edu/documents/denver-university-law-review/v87-4/Collins_FINAL.pdf
In the US, these efforts are carried out by the State Department and through sources regulated by the Broadcasting Board of Governors, an appointed group of experts who manage services such as the Voice of America under a legislative framework, the Smith-Mundt Act. https://www.bbg.gov/who-we-are/oversight/legislation/. More recently, legislation also created an interagency coordination center and additional authority to help the US government counter propaganda from foreign sources. See, e.g., http://bit.ly/2ieGJZQ
As President Obama noted, the challenge for his political party – now largely out of power – is in part to “rethink our storytelling,” including the use of technology and digital media, to be more persuasive across the entire population. http://www.rollingstone.com/politics/features/obama-on-his-legacy-trumps-win-and-the-path-forward-w452527
See n.8, supra.
danah boyd, “Did Media Literacy Backfire?” https://points.datasociety.net/did-media-literacy-backfire-7418c084d88d#.fgi10rq1s
See, e.g., https://plato.stanford.edu/entries/bayes-theorem/ (describing learning – forming of opinions – as a process of incorporating new information in a manner that yields a new belief as to the probability of a given state of affairs).
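The Bayesian account of opinion formation described above can be sketched concretely. The function and numbers below are purely illustrative (my own hypothetical, not from the cited source): a reader holds a prior belief that a claim is true, then updates that belief after seeing a report, weighted by how reliable the reader judges the source to be.

```python
def bayes_update(prior: float, p_report_if_true: float, p_report_if_false: float) -> float:
    """Return the posterior probability that a claim is true, given a
    report was observed, via Bayes' theorem:
    P(true | report) = P(report | true) * P(true) / P(report)."""
    numerator = p_report_if_true * prior
    denominator = numerator + p_report_if_false * (1.0 - prior)
    return numerator / denominator

# A skeptical reader (prior 0.2) sees a report from a source they judge
# fairly reliable (80% likely to publish if the claim is true, 10% if false).
posterior = bayes_update(prior=0.2, p_report_if_true=0.8, p_report_if_false=0.1)
# posterior rises to 2/3: new information shifted the belief, but did not
# make it certain -- learning as a revision of probabilities, not of "truth."
```

The point of the sketch is the one made in the text: what the reader comes to believe depends as much on the reliability they assign to the source as on the content of the report itself.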
See generally Daniel Goleman, Emotional Intelligence; http://www.danielgoleman.info/topics/emotional-intelligence/; http://neuroscience.uth.tmc.edu/s4/chapter06.html (role of amygdala in forming emotions)
See Epictetus, Handbook 20, (trans. Matheson) http://www.iep.utm.edu/epictetu/#SH4d (“Remember that foul words or blows in themselves are no outrage, but your judgement that they are so.”) This principle also illustrates why there are myriad reasons why some “fake news” content is circulated: it can be clicked and shared for amusement, to point out its falsity, to argue for the truth of the matter asserted, and in still other cases to inquire as to its accuracy. The content itself has no single effect; it’s the meaning given to it that matters.
See n.20, supra.
See https://www.facebook.com/zuck/posts/10103269806149061 (Facebook proposal to partner with fact-checking organizations and exploring labeling “fake news” as such); http://www.slate.com/articles/health_and_science/science/2017/01/educating_people_about_sources_won_t_stop_fake_news.html (illustrating that the fact-checking approach is incomplete, given how minds work).
“In a propaganda war against ISIS, the U.S. tried to play by the enemy’s rules,” http://wapo.st/2irK8IN ; see also http://cyberlaw.stanford.edu/blog/2016/03/tool-without-handle-tools-terror-tools-peace (noting that addressing violent extremism requires understanding the cultural and emotional factors that contribute to it).
See, e.g., http://ei.yale.edu/what-we-do/emotion-revolution/ (description of a joint project between the Yale Center for Emotional Intelligence and Born This Way Foundation to build awareness of the critical role emotions play in young people’s learning, decision-making, academic achievement, and overall wellness); Cherniss, Goleman, Emmerling, “Bringing Emotional Intelligence to the Workplace: A Technical Report Issued by the Consortium for Research on Emotional Intelligence in Organizations,” online at: http://www.eiconsortium.org/reports/technical_report.html.