Reviewed by E.B. Heath
The latest Griffith Review – The New Disruptors – aims to broaden readers’ perspective on the digital world beyond personal experience.
The contributions range widely: Russian trolls attacking Western democracy; the ‘forever war’ going on at Pine Gap; how democracy can be refreshed via sortition; the rise of the right; the imminent arrival of hydrogen technology; how outsourcing disrupts social relations; issues surrounding post-capitalism; cryptocurrency; and the very real possibility of meeting intelligent life forms from other planets. (Two typographical errors are noted: p. 156, ‘supervolvanoes’ for ‘supervolcanoes’; and p. 120, ‘ate up them up’ for ‘ate them up’.)
Not all works are discussed in this review, but all deserve to be read. Collectively they debate what is really new, and what are unresolved issues from the past. Whereas digital technology has eased our lives in many ways, are we being disrupted by creeping normality, not noticing the erosion of our democratic rights? As Hay notes in the Introduction, we need the ‘biggest picture we can frame’, to debate possible social and ethical consequences of paths taken.
In the future, the vapour trails of our current digital emissions may look like pollution. They may look like the place-markers through which we simply asserted who and where we were. They may look like the next step of advance and evolution. Or they may look like a warning – thickening; quickening – we would do well to heed.
Hay defines ‘disruption’ via ideas of innovation as framed by Clayton Christensen in his essay ‘Disruptive Technologies’ (2015): ‘a process whereby a smaller company with fewer resources is able to successfully challenge established incumbent business’. In this definition a disruption displaces an existing way of doing something and produces a new, more efficient method. It is both destructive and creative. Hay feels it is time to take stock and measure the balance between ‘destructive’ and ‘creative’. This is particularly so given that, as Julianne Schultz points out in ‘Move very fast and break many things’, the smaller innovative companies that make up FAANG (Facebook, Apple, Amazon, Netflix and Google), originally motivated to enhance democracy and bring people closer together, now seem to be plundering private data and exploiting us for their profit in an unchecked shadow economy. They, she says, have transformed capitalism and created a Big Other. This is hard to argue against, particularly since the well-publicised plundering of data from eighty-seven million people in the Cambridge Analytica debacle. It is estimated that the collection of our behavioural data earns the data aggregator US$10,000 a year per user. In signing the (often incomprehensible) terms-of-service agreements, we have collectively valued convenience over privacy and, knowingly or impulsively, authorised this invisible intrusion into our private lives.
Some contributors in this collection take issue with the idea of ‘new disruptors’. In ‘Cypherpunks and surveillance power’, Scott Ludlam forwards the notion that we are experiencing digital dimensions of old problems: concentrations of power routinely become problematic, whether economic or political. Just as we have won other improvements to civil society in the past, so we should fight for our digital rights. Mark Davis gets to the heart of the matter in a most useful essay, ‘Networked hatred’. He agrees that technological disruption is widespread, but argues that Christensen’s theory of disruption, linked as it is to ideas of creative progress, has justified cutthroat business models and ‘the normalisation of totalitarian levels of surveillance … the greatest misappropriation of personal information in human history’. He worries about the damage done to democratic culture; particularly how the far right in Australia is using the Internet to delegitimise the humanity of Muslims, refugees, feminists and environmentalists in ever more aggressive attacks. It is the bullying stance that worries him: we have left civility behind, he says, and debate has become forcefully polarised. Far-right groups claim to be defenders of Western civilisation; this, he says, is ironic, given that Western Enlightenment thought is based on civic and scientific principles and cosmopolitan ideas of a world citizenry. In rebuffing the hard right there is a trend to hark back to the glory days of the Enlightenment, but Davis believes the contradictions and complexities built into Enlightenment thought need to be discussed before we launch into a redesign of it.
He lists the pluralities of the Enlightenment: social emancipation alongside the exclusion of women; rationalism leading to hierarchical classifications of race; cosmopolitanism underestimating the sense of local and national belonging; religious tolerance paired with a presumption of Western centrality; and scientific rationalism that, in replacing religiosity, positioned science as a de facto religion. Whereas early Enlightenment thinkers, like the Greeks, saw civil society as a function of the state, by the nineteenth century society and state had become separate entities. Here it seems timely to mention Tim Dunlop’s ‘Sortition’. In this essay he re-imagines democracy by establishing panels of civilians, like a type of jury duty, to advise on a variety of issues. Experiments with sortition in Ireland have revealed that participants, previously alienated by a system that failed to represent them, felt included and empowered. This is a good start to becoming more democratic, and it perhaps answers another of Davis’s concerns, the undermining of the managerial classes: ‘… the material and cultural conditions and the knowledge economies that made “expert culture” central to Western modernity are radically changed.’ While sortition might look like a quiet version of populism, if led by experts defining the various aspects of the issues in question, the outcome could satisfy democratic representation while reintroducing ‘expert culture’. By whatever process, democracy needs to be extended. As Davis says, we should not ‘fixate on what is ending rather than grapple with sparking new beginnings’.
Elise Bohan hypothesises the indisputable mother of all disruptions. On reading ‘On becoming post-human: Big history and the future’, I realised that I must try harder to keep up. While I was not paying attention, many people were, apparently, planning to evolve beyond being human. Bohan posits that humans will evolve to a post-human condition within this century by merging with AI technology. She comments:
‘… it is imperative that we cultivate robust, scientifically informed worldviews that can help explain how we got here, why things are changing so rapidly, and why, barring any major setbacks, we will ultimately become superhuman – or develop more advanced forms of post-human life that will supersede us.’
To reveal ‘how we got here’ Bohan refers to Big History, pulling focus back to the Big Bang and taking billion-year leaps forward, via significant points of evolution, until she lands on modern humans and collective learning. At this point the transmission of information, previously reliant on the glacial pace of DNA, speeds up by means of storytelling, writing, the printing press, the telegraph and high-speed digital processing, and, finally, AI – our launching pad into a post-human world.
To enable this great post-human leap forward we need to ‘… collectively recognize and consciously strive to overcome the many limitations of human thought and perception that are not optimally evolved for global diplomacy, foresight or existential risk mitigation. Our hardwired cognitive biases, tribal instincts … are very dangerous traits to retain.’ But, she says, we cannot move ourselves out of these patterns of thought until we have changed ‘… the human condition on a fundamental, biological level …’. On first reading, Bohan’s essay sounds as if that hoary old WWII eugenics issue is going global. However, equality is said to be at the heart of this thesis: the hope that the intellectually challenged will think clearly, the blind will see, and rational thought will prevail over wildly differing cultural beliefs. But whose algorithms will define the ‘rational’ position? And how will we agree to manage an equal transition, or anything else, before we have rid ourselves of the ‘ape brain’? We might be going in circles for a while.
It seems as if transhumanists are standing on the edge of an abyss of inequality, seeing only a utopian superhuman waving to them from the other side. A merge with AI is more likely to become a tale of two humans – the best of times for some, and the worst of times for others.
Margaret Gibson’s ‘First life, second death: Dying in a digital world’ provides a glimpse into the personal world of love, grief and a different kind of death. Whereas new perspectives on the familiar are always stimulating, Gibson’s reportage is unsettling.
Today, biological and digital lives intersect with and overlay each other all the time. This we know. But what we might not consider enough – or at all – is that there are more types of death and temporality in stages of death and dying than ever before.
Gibson gives readers the benefit of her research on death within the virtual platform of Second Life. This site provides avatars with an online geography that replicates cities such as New York, Paris and London. Within this very real virtual world the emotional bonds forged between avatars have given rise to cyber cemeteries. Gibson wonders about the legal ramifications, asking who should decide whether the digital dead are deleted or memorialised, and whether actors in the virtual world should have a say beyond family members in the real world. She refers to this as a second death, and points out that the avatar fantasy is not unlike inner-mind human fantasy. She makes a good point: one’s inner world provides a healthy psychological barrier against the harshness of some realities. If formalised in a virtual setting, death requires an appropriate response. So far, this is not so much a disruption as an extension, and in an area that only impinges on those immersed in virtual worlds. However, Gibson reports other new technologies, none more eerie than the self-announcing death notice: in a pre-programmed message, the deceased seek to secure agency beyond the grave and inform friends and relatives that they are, indeed, dead. Individuals whose egos cannot accept their mortality can now choose to continue living as a voice-bot or avatar. Apparently, the deceased can also record messages to remind loved ones of salient dates such as an anniversary. A disruption of the creepy kind!
Gibson’s essay leaves readers wondering how post-death conditions will develop in the future. Will the dead re-enact their finest moments as sophisticated holograms? Will new technology benefit grieving processes, or enable an egotistical refusal to die?
This thought-provoking collection provides the reader with much to consider and heed.
Edited by Ashley Hay