Who am I?

René Descartes famously said that I am a thinking substance and thus, presumably, that every self or subject is a thinking substance. That claim is, however, flawed. Does the statement that I believe this or that mean that there is a substance somewhere that believes this or that? Does the claim that I am in pain mean that there is a substance somewhere that is in pain? In these two cases we are not making factual statements about substances; we are affirming a belief of our own and expressing our own pain. Wittgenstein concluded that we don’t, in fact, use the word “I” to refer to anything. There is no such thing as the I, the self, or the subject that the word “I” could refer to. Following Lichtenberg, he suggested that our grammar may mislead us. We tend to assume that a noun always has meaning by referring to some object, and we hold that the same is true of the pronoun “I.” But both assumptions are wrong. We should think of sentences like “I am in pain,” Lichtenberg said, as we do of “It is raining.” In the latter case there is obviously no it that does the raining, and so, similarly, we should conclude that there is no I that has the pain.

But this Lichtenberg-Wittgenstein view has difficulties of its own. Let us grant that we say “I am in pain” in order to express pain. Wittgenstein goes so far in this case as to maintain that we could actually replace the sentence with a moan to the same effect – and thus with something that doesn’t contain the word “I” at all. But what about “Yesterday, I was in terrible pain”? This must be a true or false statement about an occurrence of pain. It is certainly not an expression of pain. But the statement doesn’t mean that somewhere or other there was terrible pain yesterday. It means that I was the one who suffered. But who, then, is that I?

When I am asked who I am, I will usually recite certain facts about myself. But when I think about myself, I think rather of my life experiences, my hopes and aspirations. Others may identify me with external characteristics: the appearance of my face, the sound of my voice, certain characteristic ways of moving. To myself, however, I am someone with certain experiences, feelings, memories, thoughts, etc. And here I get back to the theme of privacy, to the fact that the large body of my experiences, feelings, memories, and thoughts is not in practice accessible to others. The one I am is not a substance with its own identity from the first moment of my existence. I become myself, I become who I am, in the course of my life, as experiences, feelings, memories, and thoughts accumulate. But this being who I have become is not fully transparent to others. I relate to it in a way in which I don’t relate to anyone else. I am a being with hidden, secret corners, and I am aware that others are in this respect just like me. (The question of the nature of the self and that of the degree to which our sensations, feelings, and thoughts are private thus belong together.)

In proposing this view, I am turning one of Wittgenstein’s arguments on its head. It is one of the few explicit arguments in his Tractatus – one that concerns the self or subject. Wittgenstein argues in that passage that there cannot be a thinking self. Such a self, he says, would have to be simple, but a thing that thinks must have an internal complexity in order to entertain thoughts, since facts and propositions both have a complex structure. I want to turn this argument around and conclude that a thinking self cannot be a simple substance but must be complex, and that its complexity is determined by the complexity of its experiences, feelings, memories, and thoughts.

One conclusion Descartes and his followers drew from their picture of the self as a simple substance was that this self could not come into existence or cease to exist through the normal processes of growth and decay. Such processes are always ones of composition and decomposition, and in a simple substance there can be neither. This seemed to him to give us some assurance of the immortality of the soul. The alternative picture of a complex soul or subject allows us, on the other hand, to see this subject precisely as something that is formed in a natural way and that dissolves again in a natural fashion, as our experiences, feelings, memories, and thoughts disintegrate and fade away.

Wittgenstein on the Puzzle of Privacy

“In what sense are my sensations private? – Well, only I can know whether I am really in pain; another person can only surmise it. – In one way this is wrong, and in another nonsense. If we are using the word ‘to know’ as it is normally used (and how else are we to use it?), then other people very often know when I am in pain. – Yes, but all the same not with the certainty with which I know it myself! – It can’t be said of me at all (except perhaps as a joke) that I know I am in pain. What is it supposed to mean – except perhaps that I am in pain?”

Readers of Wittgenstein’s Philosophical Investigations will be familiar with this intriguing passage (PI, 246). But there are reasons for being dissatisfied with it. Wittgenstein’s argument appears effective against those who postulate an absolute division between body and mind. But this hardly exhausts what we need to say on the topic of the privacy of sensations, feelings, experiences, memories, thoughts, etc. Yes, it is true that people often know when another person is in pain. But even more often they don’t. We might grant Wittgenstein that sensations, feelings, etc. are not in principle private, but practically they often are. And that they are, we might add, is practically inevitable. Much of what goes on in us never sees daylight. Do I tell people all my dreams, and if I do, will I succeed in communicating to them what made them disturbing or funny? Do others know what I feel and think when I shave in the morning in front of the mirror? Do I speak of the twinge in my ankle as I walk to work? Do I communicate all the associations and memories that some words in a book evoke in me? None of this happens, and others could not even in principle come to know all these things about me. Nor do I or could I know all that they think and feel and experience in their lives.

Hence the disconnect that always exists between us. As a result, there is an element of uncertainty in all our social relations. There is the reality of misunderstanding, also of coldness and cruelty. Yes, Wittgenstein is right when he writes that it is possible to see that another person is in pain. In such cases, there is no question of a laborious inference from the person’s behavior to his feeling. The pain is manifest in the sufferer. But Wittgenstein passes over the fact that just as often we do not see that pain. Or we may see it in someone close to us and fail to notice it in a stranger. This disconnect produces frictions in our relations with each other. It leads to breakdowns of friendships and marriages. At the level of politics, it produces hostility and war. Over the course of human evolution, our inner life has, no doubt, become ever richer and therefore more difficult for others to discern. At the same time, we have learned to expand the range of our words and the vocabulary of our gestures so as to communicate more effectively with each other. The inner life and its external expression bear, moreover, reciprocally on each other. My most private thoughts are, no doubt, shaped by the words I have learned from others, and the words I use are imbued with feeling. The boundary between the inner and the outer is thus blurred. That much Wittgenstein no doubt establishes in his reflections on privacy. But a boundary still exists, and its existence gives shape to our social practices and our political institutions. Privacy is a political issue precisely because it is a practical fact – and that is something Wittgenstein failed to notice.

There goes philosophy — Claremont Graduate University closes its philosophy program

The digital media outlet Inside Higher Ed reports that Claremont Graduate University has just closed down its philosophy department. “We were each given the day before an offer to continue as contract employees,” one of the two tenured professors in the department said, according to the report. “The offers were unacceptable in form and content, and presented as take-it-or-be-fired. We ignored them and got fired the next day.” According to the university’s interim president, Jacob Adams, the decision was made “in recognition of a unique combination of market, enrollment and limited faculty resources that militated against the program’s sustainability, even academic viability.” According to Adams, the trustees considered that Claremont “is not a comprehensive university,” but rather a “graduate university offering degrees in selected fields with unique programs of study and opportunities to study across disciplinary boundaries.” In this spirit, Claremont is now undertaking “a process of program reprioritization.” The university’s news release cites a number of other institutions that have closed graduate programs in recent years, and such closures – for reasons similar to those at Claremont – are on the rise.

The implications of this decision are surely complex. There is no doubt that too many philosophy Ph.D.s are being produced and too many Ph.D. programs are churning them out. Just a few years ago, Berkeley advertised one position in philosophy and received some 600 applications. That year there were 300 job openings in philosophy altogether in the US. Some of the Berkeley applicants may, of course, have already been teaching somewhere, but we must assume that many of them were left without a job. The closure of Claremont’s small philosophy department will not make much difference to this situation. But it may indicate that a larger retrenchment is on the way, and that is surely to be welcomed.

At the same time, such a retrenchment will mean a reduction in the number of teaching positions available, and this spells trouble for those who are now getting ready to enter the job market. It is, moreover, far from clear that we are facing only a limited retrenchment. The humanities as a whole are now under pressure. One reason for this is steeply rising tuition fees. These force students to concentrate on subjects that promise well-paid employment. They also induce them to cut their time to a degree to a minimum and thus to forgo the luxury of taking courses in the humanities, including philosophy. The number of humanities and philosophy majors has as a result been falling across the country, and so has enrollment in humanities and philosophy courses meant for non-majors. Meanwhile, newly emerging technical fields are in need of funding, which universities may seek to cover by stripping down or eliminating other programs.

We can’t put all the blame, however, on outside forces. Philosophy and the humanities in general need to rethink where they are and what they are doing. Too often their work has become an insider business. A re-orientation is called for. If we are lucky, the current pressure on these programs will help to bring this about. But there is no guarantee.

Disinformation: An Epistemology for the Digital Age

Here is part of a report on “Artificial Intelligence and International Security” that addresses some of the issues that an epistemology for the digital age needs to consider.

Artificial Intelligence and
International Security
By Michael C. Horowitz, Gregory C. Allen,
Edoardo Saravalle, Anthony Cho,
Kara Frederick, and Paul Scharre

Center for a New American Security
July 2018

Information Security

The role of AI in the shifting threat landscape has serious implications for information security, reflecting the broader impact of AI, through bots and related systems, in the information age. AI’s use can both exacerbate and mitigate the effects of disinformation within an evolving information ecosystem. Similar to the role of AI in cyber attacks, AI provides mechanisms to narrowly tailor propaganda to a targeted audience, as well as increase its dissemination at scale – heightening its efficacy and reach. Alternatively, natural language understanding and other forms of machine learning can train computer models to detect and filter propaganda content and its amplifiers. Yet too often the ability to create and spread disinformation outpaces AI-driven tools that detect it.

Targeted Propaganda and Deep Fakes

Computational propaganda inordinately affects the current information ecosystem and its distinct vulnerabilities. This ecosystem is characterized by social media’s low barriers to entry, which allow anonymous actors – sometimes automated – to spread false, misleading or hyper-partisan content with little accountability. Bots that amplify this content at scale, tailored messaging or ads that enforce existing biases, and algorithms that promote incendiary content to encourage clicks point to implicit vulnerabilities in this landscape.9 MIT researchers’ 2018 finding that “falsehood [diffuses] significantly farther, faster, deeper and more broadly” than truth on Twitter, especially regarding political news, further illustrates the risks of a crowded information environment.10 AI is playing an increasingly relevant role in the information ecosystem by enabling propaganda to be more efficient, scalable, and widespread.11 A sample of AI-driven techniques and principles to target and distribute propaganda and disinformation includes:

• Exploitation of behavioral data – The application of AI to target specific audiences builds on behavioral data collection, with machine learning parsing through an increasing amount of data. Metadata generated by users of online platforms – often to paint a picture of consumer behavior for targeted advertising – can be exploited for propaganda purposes as well.12 For instance, Cambridge Analytica’s “psychographic” micro-targeting based off of Facebook data used online footprints and personality assessments to tailor messages and content to individual users.13

• Pattern recognition and prediction – AI systems’ ability to recognize patterns and calculate the probability of future events, when applied to human behavior analysis, can reinforce echo chambers and confirmation bias.14 Machine learning algorithms on social media platforms prioritize content that users are already expected to favor and produce messages targeted at those already susceptible to them.15

• Amplification and agenda setting – Studies indicate that bots made up over 50 percent of all online traffic in 2016.16 Entities that artificially promote content can manipulate the “agenda setting” principle, which dictates that the more often people see certain content, the more they think it is important.17 Amplification can increase the perception of significance in the public mind. Further, if political bots are “written to learn from and mimic real people,” according to computational propaganda researchers Samuel Woolley and Philip Howard, then they stand to influence the debate. For example, Woolley and Howard point toward the deployment of political bots that interact with users and attack political candidates, weigh in on activists’ behavior, inflate candidates’ follower numbers, or retweet specific candidates’ messaging, as if they were humans.18 Amplifying damaging or distracting stories about a political candidate via “troll farms” can also change what information reaches the public. This can affect political discussions, especially when coupled with anonymity that reduces attribution (and therefore accountability) to imitate legitimate human discourse.19

• Natural language processing to target sentiment – Advances in natural language processing can leverage sentiment analysis to target specific ideological audiences.20 Google’s offer of political interest ad targeting for both “left-leaning” and “right-leaning” users for the first time in 2016 is a step in this direction.21 By using a systematic method to identify, examine, and interpret emotional content within text, natural language processing can be wielded as a propaganda tool. Clarifying semantic interpretations of language for machines to act upon can aid in the construction of more emotionally relevant propaganda.22 Further, quantifying user reactions by gathering impressions can refine this propaganda by assessing and recalibrating methodologies for maximum impact. Private sector companies are already attempting to quantify this behavior-tracking data in order to vector future microtargeting efforts for advertisers on their platforms. These efforts are inherently dual-use – instead of utilizing metadata to supply users with targeted ads, malicious actors can supply them with tailored propaganda instead. [A sketch of the underlying sentiment-scoring idea follows this list.]

• Deep fakes – AI systems are capable of generating realistic-sounding synthetic voice recordings of any individual for whom there is a sufficiently large voice training dataset.23 The same is increasingly true for video.24 As of this writing, “deep fake” forged audio and video looks and sounds noticeably wrong even to untrained individuals. However, at the pace these technologies are making progress, they are likely less than five years away from being able to fool the untrained ear and eye.
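To make the sentiment-targeting bullet above concrete, here is a minimal Python sketch of lexicon-based sentiment scoring – one simple way such analysis can be implemented. Everything in it (the lexicon, the valence values, the sample posts, the bucketing thresholds) is invented for illustration; real systems rely on much larger lexicons or trained models.

```python
# Toy sentiment lexicon: word -> valence in [-1, 1]. Values are hypothetical.
LEXICON = {
    "betrayed": -0.8, "corrupt": -0.9, "failing": -0.6,
    "proud": 0.7, "strong": 0.6, "winning": 0.8,
}

def sentiment_score(text: str) -> float:
    """Average the valence of known words; 0.0 means neutral or unknown."""
    words = [w.strip(".,!?;").lower() for w in text.split()]
    hits = [LEXICON[w] for w in words if w in LEXICON]
    return sum(hits) / len(hits) if hits else 0.0

# A propagandist could bucket users by the sentiment of their recent posts
# and serve each bucket emotionally matched content.
posts = [
    "We are winning and I am proud of this strong country!",
    "Everything is failing; we have been betrayed by corrupt elites.",
]
for post in posts:
    score = sentiment_score(post)
    if score > 0.2:
        bucket = "receptive-positive"
    elif score < -0.2:
        bucket = "receptive-negative"
    else:
        bucket = "neutral"
    print(f"{score:+.2f}  {bucket}  <- {post}")
```

The same scoring machinery is dual-use in exactly the report’s sense: swap the ad inventory for propaganda and the targeting pipeline is unchanged.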

Countering Disinformation

While no technical solution will fully counter the impact of disinformation on international security, AI can help mitigate its efficiency. AI tools that detect, analyze, and disrupt disinformation can weed out nefarious content and block bots. Some AI-focused mitigation tools and examples include:

• Automated Vetting and Fake News Detection – Companies are partnering with and creating discrete organizations with the specific goal of increasing the ability to filter out fake news and reinforce known facts using AI. In 2017, Google announced a new partnership with the International Fact-Checking Network at The Poynter Institute, and the Fake News Challenge resulted in an algorithm with an 80 percent success rate.25 Entities like AdVerif.ai scan and detect “problematic” content by augmenting manual review with natural language processing and deep learning.26 Natural language understanding that trains machines to find nefarious content using semantic text analysis could also improve these initiatives, especially in the private sector.

• Trollbot Detection and Blocking – Estimates indicate that the bot population on Twitter ranges between 9 and 15 percent and that it is increasing in sophistication. Machine learning models like the Botometer API, a feature-based classification system for Twitter, offer an AI-driven approach to identifying such bots for potential removal.27 [A sketch of this kind of feature-based classification follows this list.] Reducing the number of bots would de-clutter the information ecosystem, as some political bots are created solely to amplify disinformation, propaganda, and “fake news.”28 Additionally, eliminating specific bots would reduce their malign uses, such as for the distributed denial-of-service attacks propagated by impersonator bots throughout 2016.29

• Verification of Authenticity – Digital distributed ledgers and machine-speed sensor fusion to certify real-time information and the authenticity of images and videos can also help weed out doctored data. Additionally, blockchain technologies are being utilized at non-profits like PUBLIQ, which encrypts each story and distributes it over a peer-to-peer network in an attempt to increase information reliability.30 Content filtering often requires judgment calls due to varying perceptions of truth and the reliability of information. Thus, it is difficult to create a universal filter based on purely technical means, and it is essential to keep a human in the loop during AI-driven content identification. Technical tools can limit and slow disinformation, not eradicate it.
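To illustrate the feature-based bot classification mentioned in the trollbot bullet above, here is a minimal Python sketch loosely in the spirit of systems like Botometer. It is not Botometer’s actual model: the features, numbers, and labels are all invented for illustration, and it assumes scikit-learn is installed.

```python
# A toy feature-based bot classifier. NOT Botometer's actual model; the
# features, training rows, and labels below are hypothetical.
# Requires scikit-learn (pip install scikit-learn).
from sklearn.ensemble import RandomForestClassifier

# Per-account features: [tweets per day, follower/friend ratio,
# fraction of posts that are retweets, account age in days].
X_train = [
    [300.0, 0.01, 0.95, 12],    # hyperactive young account, mostly retweets
    [450.0, 0.02, 0.99, 30],
    [5.0, 1.20, 0.20, 2400],    # ordinary long-lived human account
    [2.0, 0.90, 0.10, 3100],
]
y_train = [1, 1, 0, 0]          # 1 = bot, 0 = human (hypothetical labels)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# Score an unseen account. In practice the probability would feed a review
# queue with a human in the loop, as the report itself recommends.
candidate = [[280.0, 0.03, 0.90, 20]]
print(f"P(bot) = {clf.predict_proba(candidate)[0][1]:.2f}")
```

The design point the report makes carries over directly: the classifier outputs a probability, not a verdict, which is why a human reviewer belongs between the score and any removal decision.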

References

9 Zeynep Tufekci, “YouTube, the Great Radicalizer,” The New York Times, March 10, 2018, https://www.nytimes.com/2018/03/10/opinion/sunday/youtube-politics-radical.html.
10 Soroush Vosoughi, Deb Roy, and Sinan Aral, “The spread of true and false news online,” Science, 359 no. 6380 (March 9, 2018), 1146-1151.
11 Miles Brundage et al., “The Malicious Use of Artificial Intelligence: Forecasting, Prevention and Mitigation” (University of Oxford, February 2018), 16, https://maliciousaireport.com/.
12 Tim Hwang, “Digital Disinformation: A Primer,” The Atlantic Council, September 25, 2017, 7.
13 Toomas Hendrik Ilves, “Guest Post: Is Social Media Good or Bad for Democracy?”, Facebook Newsroom, January 25, 2018, https://newsroom.fb.com/news/2018/01/ilves-democracy/; and Sue Halpern, “Cambridge Analytica, Facebook and the Revelations of Open Secrets,” The New Yorker, March 21, 2018, https://www.newyorker.com/news/news-desk/cambridge-analytica-facebook-and-the-revelations-of-open-secrets.
14 Michael W. Bader, “Reign of the Algorithms: How ‘Artificial Intelligence’ is Threatening Our Freedom,” May 12, 2016, https://www.gfe-media.de/blog/wp-content/uploads/2016/05/Herrschaft_der_Algorithmen_V08_22_06_16_EN-mb04.pdf.
15 Brundage et al., “The Malicious Use of Artificial Intelligence: Forecasting, Prevention and Mitigation,” 46.
16 Igal Zeifman, “Bot Traffic Report 2016,” Imperva Incapsula blog, January 24, 2017, https://www.incapsula.com/blog/bot-traffic-report-2016.html.
17 Samuel C. Woolley and Douglas R. Guilbeault, “Computational Propaganda in the United States of America: Manufacturing Consensus Online,” working paper (University of Oxford, 2017), 4, http://blogs.oii.ox.ac.uk/politicalbots/wp-content/uploads/sites/89/2017/06/Comprop-USA.pdf; and Samuel C. Woolley and Phillip N. Howard, “Political Communication, Computational Propaganda, and Autonomous Agents,” International Journal of Communication, 10 (2016), 4885, http://ijoc.org/index.php/ijoc/article/download/6298/1809.
18 Woolley and Howard, “Political Communication, Computational Propaganda, and Autonomous Agents,” 4885.
19 Alessandro Bessi and Emilio Ferrara, “Social bots distort the 2016 U.S. presidential election online discussion,” First Monday, 21 no. 11 (November 2016), 1.
20 Travis Morris, “Extracting and Networking Emotions in Extremist Propaganda” (paper presented at the annual meeting of the European Intelligence and Security Informatics Conference, Odense, Denmark, August 22-24, 2012), 53-59.
21 Kent Walker and Richard Salgado, “Security and disinformation in the U.S. 2016 election: What we found,” Google blog, October 30, 2017, https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/google_US2016election_findings_1_zm64A1G.pdf.
22 Morris, “Extracting and Networking Emotions in Extremist Propaganda,” 53-59.
23 Craig Stewart, “Adobe prototypes ‘Photoshop for audio,’” Creative Bloq, November 3, 2016, http://www.creativebloq.com/news/adobe-prototypes-photoshop-for-audio.
24 Justus Thies et al., “Face2Face: Real-time Face Capture and Reenactment of RGB Videos,” Niessner Lab, 2016, http://niessnerlab.org/papers/2016/1facetoface/thies2016face.pdf.
25 Erica Anderson, “Building trust online by partnering with the International Fact Checking Network,” Google’s The Keyword blog, October 26, 2017, https://www.blog.google/topics/journalism-news/building-trust-online-partnering-international-fact-checking-network/; and Jackie Snow, “Can AI Win the War Against Fake News?” MIT Technology Review, December 13, 2017, https://www.technologyreview.com/s/609717/can-ai-win-the-war-against-fake-news/.
26 “Technology,” AdVerif.ai, http://adverifai.com/technology/.
27 Onur Varol et al., “Online Human-Bot Interactions: Detection, Estimation and Characterization,” preprint, submitted March 27, 2017, 1, https://arxiv.org/abs/1703.03107.
28 Lee Rainie, Janna Anderson, and Jonathan Albright, “The Future of Free Speech, Trolls, Anonymity, and Fake News Online” (Pew Research Center, March 2017), http://www.pewinternet.org/2017/03/29/the-future-of-free-speech-trolls-anonymity-and-fake-news-online/; and Alessandro Bessi and Emilio Ferrara, “Social bots distort the 2016 U.S. presidential election online discussion,” First Monday, 21 no. 11 (November 2016), http://firstmonday.org/ojs/index.php/fm/article/view/7090/5653.
29 Adrienne Lafrance, “The Internet Is Mostly Bots,” The Atlantic, January 31, 2017, https://www.theatlantic.com/technology/archive/2017/01/bots-bots-bots/515043/.
30 “PUBLIQ goes public: The blockchain and AI company that fights fake news announces the start of its initial token offering,” PUBLIQ, November 14, 2017, https://publiq.network/en/7379D8K2.

And how about democracy?

Steven Levitsky and Daniel Ziblatt, How Democracies Die, Crown Publishing, New York 2018

Are we today facing the twilight of democracy? Patrick E. Kennon, a retired CIA analyst, argued as much more than twenty years ago in a book entitled The Twilight of Democracy. The front of its dust jacket said: “Those societies that continue to allow themselves to be administered by individuals whose only qualification is that they were able to win a popularity contest will go from failure to failure and eventually pass from the scene.” The back cover added: “Washington isn’t the problem – Democracy is.” Kennon was convinced that democracy had reached its expiration date even though it was still being touted as an ideology. Democracy, he wrote, “is an earthbound, human creation subject to the entropy of all such creations. It now travels a course of declining relevance much like that of the European monarchy from the power of Elizabeth I to the impotence of Elizabeth II.” (p. 255) Replacing democracy, he foresaw, would be a new elite of experts: military, administrative, and private-sector specialists who would administer the state of the future in a bureaucratic fashion. On this scenario, he concluded, by 2050 the developed first world “would have largely retired its politicians. The internal affairs of the country would be run by faceless but expert bureaucrats under the general supervision of equally faceless representatives of the population as a whole.” (p. 279)

It’s not obvious that Kennon’s vision is coming true. Today we are being ruled by a shaky businessman who has convinced his followers that he is a master at making deals. He is also a media figure who has learned to stir their emotions into political frenzy. He may not be much of a democrat, but he is certainly not a faceless expert operating an anonymous bureaucracy. We seem to be traveling on a different road from the one Kennon saw ahead. But he seems to have been proved right in assuming that the future of democracy is by no means assured. This realization has led to a spate of recent books with titles like The Crisis of Democracy, The Plot to Destroy America, Democracy in Decline?, How Democracy Ends, and Democracy: The God that Failed.

Levitsky and Ziblatt’s How Democracies Die is another contribution to this genre. Its title is, however, somewhat misleading in that the book is largely concerned with the United States. Its central question is to what extent Donald Trump represents a threat to American democracy and what to do about it – with illustrative references to the failure of democracies in Fascist Italy, Nazi Germany, post-Communist Russia, Argentina, Chile, and Venezuela, and passing references to various other places. This perspective structures but also limits the authors’ discussion of how democracies may fail. Their eye is on internal forces of failure; they are not concerned with the collapse of democracies due to foreign intervention, to military, economic, or environmental disaster, or to ideological and religious rifts. And their explanations are given in psychological terms: the authoritarian personality, the need for toleration and forbearance, the danger that radical opposition might provoke a political reaction. They do not ask whether there are structural changes in society that are destabilizing the democratic order.

Levitsky and Ziblatt see American democracy threatened above all by the rise of an authoritarian figure who is set to undermine existing political institutions and practices. They identify four indicators of authoritarian behavior: (1) rejection of, or weak commitment to, the democratic rules of the game; (2) denial of the legitimacy of political opponents; (3) toleration or encouragement of violence; and (4) readiness to curtail the civil liberties of opponents, including the media. They then proceed to document all four of these behavior patterns in Donald Trump. On their view, Trump is therefore a serious threat to American democracy.

The two authors allow that authoritarian personalities exist in every society but, they argue, healthy democracies have procedures for keeping them in check. These are not, however, to be found in the existence of a written Constitution. “There is nothing in our Constitution or our culture,” they write, “to immunize us against democratic breakdown.” (p. 204) Other checks are needed. The first is the fostering of a spirit of toleration and forbearance. If democracy is to work, political opponents must be respected as citizens and not treated as enemies to be suppressed. And politicians need to restrain the use of their power so as not to undermine the democratic system for the sake of their own cause. In Levitsky and Ziblatt’s telling, the Republican and Democratic Parties have in the past served as “guardrails” that have kept American democracy in place by fostering these two fundamental political virtues. Through a process of selection and vetting of presidential candidates, the two parties have managed to keep authoritarians more or less at bay. The existence of political parties has thus proved essential for the survival of democracy. But the Republican and Democratic Parties have become less powerful in recent decades, and they have therefore increasingly lost their guardrail function. This is due, the two authors think, to a polarization affecting all of American society and politics. “The weakening of our democratic norms is rooted in extreme polarization – one that extends beyond policy differences into an existential conflict over race and culture. America’s efforts to achieve racial equality as our society grows increasingly diverse have fueled an insidious reaction and intensifying polarization. And if one thing is clear from studying breakdowns throughout history, it’s that extreme polarization can kill democracies.” (p. 9)

The authors sketch three possible scenarios for post-Trump America. The first, optimistic scenario is that Trump will fail and that the Trump interlude will be “taught in schools, recounted in films, and recited in historical works as an era of tragic mistakes where catastrophe was avoided and American democracy saved.” (p. 206) But they are not convinced that the end of Trump’s presidency will be enough to restore a healthy democracy. “A second much darker future is one in which President Trump and the Republicans continue to win with a white nationalistic appeal.” (p. 207) This would, of course, not be possible in a democratic way. Levitsky and Ziblatt are, however, convinced that – conceivable as it is – “such a nightmare scenario isn’t likely.” (p. 208) There remains a third possibility: “The third, and in our view, most likely post-Trump future is one marked by polarization, more departures from unwritten political conventions, and increasing institutional warfare – in other words, democracy without solid guardrails.” (p. 208) Levitsky and Ziblatt also describe this as a scenario in which democracy is left in a half-life state.

They proceed to consider how such a development might be prevented. They argue that it would be wrong for the opposition to adopt the same hardball tactics used by Trump and his Republican followers. They write: “In our view, the idea that Democrats should ‘fight like Republicans’ is misguided. First of all, evidence from other countries suggest that such a strategy often plays directly into the hands of authoritarians. Scorched-earth tactics often erode support for the opposition by scaring off moderates. And they unify progovernment forces, as even dissidents within the incumbent party close ranks in the face of an uncompromising opposition. And when the opposition fights dirty, it provides the government with justification for cracking down.” (pp. 215-216) The advice seems plausible, but it fails to address the question of whether there may come a point at which only all-out opposition can be effective. Clearly, America is not at that point, and so Levitsky and Ziblatt reasonably suggest that “opposition to the Trump administration’s authoritarian behavior should be muscular but it should seek to preserve, rather than violate, democratic rules and norms. Where possible, opposition should center on Congress, the courts, and, of course, elections.” (pp. 217-218) So: not violence in the streets, but maintenance of the democratic values of toleration and forbearance in the building of broad opposition coalitions.

But the two authors understand that resistance to the abuses of the Trump administration is not enough. They believe, rather, that “the fundamental problem facing American democracy remains extreme partisan division – one fueled not just by policy differences but by deeper sources of resentment, including racial and religious differences.” They are convinced that “America’s great polarization preceded the Trump presidency, and it is very likely to endure beyond it.” (p. 220) They see the Republican Party as the main driver of the political chasm that has opened up. Hence: “Reducing polarization requires that the Republican Party be reformed, if not refounded outright.” (p. 223) And Democrats must address the problem of economic and social inequality. “The very health of our democracy hinges on it.” (p. 230)

Important as these considerations are, Levitsky and Ziblatt do not pursue them far enough to arrive at a compelling analysis of the state of democracy in the 21st century, and particularly of American democracy. One obstacle in the way is their tendency to describe the situation in binary terms, as if there were a clear choice between being authoritarian and being democratic. Neither authoritarianism nor democracy is one thing; there are different degrees and forms of each, and the two occasionally even overlap, as in the so-called people’s democracies of the Soviet era. Moreover, not every form of government is viable at any given moment. There are external constraints that make one system more viable than another at a given time. Thus, the radical democracy known to the Athenian state of the fourth century BC is not possible for us. Levitsky and Ziblatt follow mainstream American thinking when they conceive the matter in an essentially voluntarist fashion. It is all a matter of choice for them: we must get ourselves into the right (democratic and anti-authoritarian) state of mind and then act according to its dictates. It’s all a matter of good will.

But we should ask ourselves what the constraints are under which modern democracy has developed, and how and why these may be changing – and how we are then to proceed in this shifting terrain. The rise of Donald Trump is linked to an accumulation of wealth made possible by new technologies, to a globally operating financial system, and to the messaging power of the electronic media. The concentration of power in the hands of authoritarian leaders parallels, and is accomplished through, the accumulation of economic, financial, and informational power. The important point to understand is that in Donald Trump we are not facing just another authoritarian, but a newly evolving form of authoritarianism. Reforming America’s political parties and striving for greater equality and less polarization may be good things, but they are not enough in the face of a newly forming system of political power.