
The Anxieties of the Citizen of the Digital State

Eamon Dyas

“By a careful cultural design, we control not the final behaviour, but the inclination to behave – the motives, the desires, the wishes. The curious thing is that in that case the question of freedom never arises.” – the character of T.E. Frazier in Walden Two, by B.F. Skinner, published by Hackett Publishing Company, 2005 edn., pp.246-247

This is the first generation to have grown up in what is called the internet age. Although the internet had existed for longer, it is just over 20 years since the world could still be described as offline. Connecting to the internet required the user to dial a phone number through a modem sitting beside a towering desktop computer and then wait patiently as the connection bleeped and pulsed to life. The local internet café served the many who wished to go online without owning a computer themselves. The world today is a completely different one. If 20-odd years ago it could be defined as offline, today it is most definitely online. The internet is all-pervasive through personal computers, Wi-Fi and smartphones, and the act of linking up to it is as simple as turning on a light switch.

The foundations of this new world were laid earlier, and an essential component of it could be said to date from the invention of the microprocessor. The first commercially available microprocessor was developed in 1971 by Intel, then an obscure company in what later became known as Silicon Valley. Called the 4004, it contained all the electronic circuits necessary for advanced number-crunching in a single tiny package. Constructed from 2,300 tiny transistors, each measuring around 10,000 nanometres (billionths of a metre) across, about the size of a red blood cell, it was an astonishing achievement for its time. As to developments since then:

“The firm [Intel – ED] no longer publishes exact numbers, but the best guess is that they [modern microchips – ED] have about 1.5 billion-2 billion transistors apiece. Spaced 14 nanometres apart, each is so tiny as to be literally invisible, for they are more than an order of magnitude smaller than the wavelength of light that humans can see.

Everyone knows that modern computers are better than old ones. But it is hard to convey just how much better, for no other consumer technology has improved at anything approaching a similar pace. The standard analogy is with cars: if the car from 1971 had improved at the same rate as computer chips, then by 2015 new models would have top speeds of about 420 million miles per hour. That is roughly two-thirds the speed of light, or fast enough to drive round the world in less than a fifth of a second.”

(“Beyond Moore’s Law”, by Tim Cross, in Megatech: technology in 2050, edited by Daniel Franklin. Published by The Economist Books, 2017, p.55)
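
The arithmetic behind that analogy is easy to check. Here is a minimal sketch in Python; the 100 mph baseline for a 1971 car and the two-year doubling period are assumptions made for illustration, not figures from the quoted piece.

```python
# Check of the car analogy: a 1971 car improving at Moore's-law pace.
SPEED_OF_LIGHT_MPH = 670_616_629        # speed of light in miles per hour
EARTH_CIRCUMFERENCE_MILES = 24_901      # equatorial circumference

doublings = (2015 - 1971) / 2           # one doubling every two years -> 22
top_speed_mph = 100 * 2 ** doublings    # assumed 100 mph baseline -> ~420 million

print(f"top speed: {top_speed_mph / 1e6:.0f} million mph")
print(f"fraction of light speed: {top_speed_mph / SPEED_OF_LIGHT_MPH:.2f}")  # ~0.63
hours_round_world = EARTH_CIRCUMFERENCE_MILES / top_speed_mph
print(f"time to drive round the world: {hours_round_world * 3600:.2f} seconds")  # ~0.21
```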

The microchip has been the lynchpin that has enabled an ever-increasing amount of information to be processed in an ever-decreasing physical space at an ever-increasing speed. Without it we would have none of our modern computers, smartphones, gaming consoles, or indeed the household appliances that assist our daily lives. The extent to which this single item has changed the nature of computing can be gauged by the fact that a modern smartphone contains more computing power than was available to entire nations in 1971.

But alongside companies like Intel, whose products relate to the hardware side of computing, there have developed companies producing the software: the means by which information can be processed and exploited across a variety of platforms. One of the earliest shapers of the modern digital age was Microsoft, founded in April 1975 and floated on the stock market in March 1986. It took 15 years to reach $1 billion in revenue. Google was founded in 1998 and reached the same revenue figure in five years, while Facebook took just over four years. Amazon was a slow beginner by these standards but still reached $10 billion in revenue 13 years after its launch in 1994.

But something more than the microchip and software programs was needed to get the computer age into the position it now occupies. It required the right socio-political and financial environment in which to thrive.

One of the catalysts for this was the banking sector, where the task of processing huge quantities of cheques and other paper instruments had, by the 1970s, become its biggest expenditure in both labour and capital (it was estimated that 65-70% of the cost of running a bank went on salaries). Any system with the capacity to automate the process in a faster, cheaper way was always going to be attractive. Out of this need came the Electronic Funds Transfer System (EFTS).

Although initially a means of cutting costs, banking computer systems then became a means of tapping into a new constituency of bank users. By the early 1980s only 52% of the population in Britain had bank accounts. The decline in the manufacturing sector led to a fall in the number of commercial accounts held by the banks, which were eager to find a substitute source of account holders. This substitute emerged partly through the need of people to open bank accounts in order to take advantage of the government sell-off of council properties. As a result new swathes of individuals, who had previously conducted their monetary lives exclusively in cash, became bank customers. Initially it was the building societies that experienced this surge, as it was to them that the new potential property-owners went to raise the money needed to purchase their homes. Then the banks saw the potential and began directly to challenge the building societies for this custom:

“From scratch, one of the big English banks has already lent over £1,000m. – more than all but a dozen of the building societies – and a recent Financial Times survey estimated that 40 per cent by value, of all new loans are going through the banks. Naturally, the building societies cannot stand idly by, as the banks move into their territory, and one by one is hitting back by offering its customers banking services.” (“High Street Battle for Your Cash: change at the Bank”, by Barry White, Belfast Telegraph, 29 March 1982)

In the struggle between the building societies and the banks for the new constituency of customers:

“Convenience is a major factor, as well as cost, and both the building societies and the banks will argue strongly that they have the edge. The building societies’ main advantage is their longer working day, 9.30 to 4 and Saturday mornings, but the arrival of automatic teller machines, providing a 24 hour banking service, could cancel this out.” (Ibid.)

With the banks’ capacity to introduce more convenient opening hours obstructed by the bank workers, the advent of the ATM helped to neutralise the advantage of the building societies. However, the cost of the machines (£30,000 each) and the initial resistance of bank staff meant that this facility took a while to make itself felt.

In the meantime other developments were helping to expand the number of potential new banking customers. In 1980 the Government proposed the payment of social security benefits by automated credit transfer into the recipient’s bank account. This was meant to be introduced by 1982 and was seen to offer significant savings on the cost of processing the cash transactions involved in the 1,000 million social security benefit payments handled each year. At the same time a government “think tank” was investigating the possibility of persuading Britain’s workers to switch from cash wages to payment by cheque or bank transfer, a move that was supported by Labour representatives:

“Prime Minister Mrs. Margaret Thatcher is to be pressed in Parliament this week to discourage weekly payment of wages in cash and to encourage, instead, monthly payments by cheque. Mr. Gwilym Roberts, Labour M.P. for Cannock, today tabled a Commons question to her urging her to have discussions with the T.U.C. and C.B.I. with a view to setting this trend in motion.” (“Pay Monthly Plea by M.P.”, Liverpool Echo, 1 June 1981, p.8)

This drive to persuade workers to accept non-cash payment continued through the 1980s, with the tactic of offering one-off bonuses for signing up to such changes proving increasingly successful. In Ireland, by 1987, efforts were also being made to move workers away from cash payments:

“Labour Minister Bertie Ahern wants to make it easier for employers to pay staff by non-cash methods. He proposes strengthening the employer’s hand by changing the law on the payment of wages. But workers would continue to retain the right to have their wages paid in cash. The Minister outlined the proposed changes today in a discussion document on a number of aspects of labour law. The changes are suggested because, it is believed, employers do not have enough flexibility in this area.” (“New Laws on Cash Wages”, Evening Herald, 30 Nov. 1987)

The argument used to encourage this trend was one of security and it had the support of the trade unions as well as employers. From the trade union viewpoint the payment of wages in cash had always been open to abuse by unscrupulous employers who failed to pay the proper tax or the proper national insurance. From the viewpoint of the Government, the more general replacement of cash wages with bank transfers or cheque payment was seen as a means of ensuring that its tax revenues were less open to cheating.

These social trends apparent in the early 1980s led one commentator to predict that:

“The most dramatic breakthrough in banking in the next five years will be outside the banks rather than inside. Automatic teller machines, with an array of buttons and a small video screen, will provide all the information, and most of the services, that the ordinary customer will need from month to month, so that his visits to the bank itself may be very occasional. At any time of day or night, he will be able to find out the state of his account, order a cheque-book or draw as much money as he wants, to the limit of his credit.” (“Faster Cash to the Man in the Street”, by Barry White, Belfast Telegraph, 30 March 1982)

And he observed:

“Experiments are already taking place with point-of-sale terminals, in supermarkets, where customers will simply feed in their bank card to have their accounts debited by the correct amount, without any money changing hands. The French are well in the lead, with their high percentage of ‘banked’ customers, but there are problems over the cost of terminals, and what the shops are prepared to pay for the convenience.

In America, a bank has offered a special service to selected customers, who can call up their bank statements on their TV screens, and order and pay for goods and services without leaving home.”

But none of this could have arrived without significant financial investment in the embryonic computer industry.

Venture capital investment in the industry began as early as 1959, when the U.S. firm Fairchild Semiconductor received funding from Venrock Associates, a company associated with the Rockefeller family. However, it was in 1978 that the industry experienced its first major fundraising year, when around $750 million of venture capital was invested. One of the major events that kick-started this investment was a decision by the Carter administration in the United States to loosen the restraints on the investment strategies of pension funds.

“In that year [1978 – ED] the U.S. Labor Department relaxed certain restrictions under the Employee Retirement Income Security Act, allowing corporate pension funds to invest in the asset class and providing a major source of money to venture capitalists.” (“Tech Generations – the Past as Prologue”, by Ann Winblad, in Megatech: technology in 2050, edited by Daniel Franklin. Published by The Economist Books, 2017, p.68)

The freedom given to the American pension funds to spread their investments more widely brought them into contact with venture funds and added significantly to the pool of investment that flowed into the computer industry after 1978. However, initially, venture capitalists were reluctant to invest heavily in software companies:

“Fear of the assets – the software engineers – walking out of the door at night, as well as the fledgling nature of business models in this new sector, kept software investing to $400 million-$600 million a year in the late 1980s and early 1990s. In 1995 the total invested in software companies would finally exceed $1 billion. By 2015 venture dollars in software had swelled to $23 billion of the $58 billion invested in the U.S. This increased the number of companies entering each wave. In 1995, 435 software deals were funded by venture capitalists. By 2015 that number had increased to over 1,800. The winners in software also grew fast, both organically and by acquiring many other new companies. Microsoft’s revenue reached $93 billion by 2015. Salesforce, a fourth-wave company, became the sixth-largest software company with $6 billion in revenue. Amazon, with $107 billion in revenue, and Google, with almost $75 billion, came top of the internet-company list.” (Ibid., pp.68-69)

The point at which venture capitalists began to get involved in a significant way was when it became obvious that the use of digital technology had passed the point of being merely optional for businesses:

“Entrepreneurs and venture capitalists together have begun their march to unbundle all that can be digital in industry after industry, throwing down the gauntlet to global businesses in their quest to attack bigger opportunities. Venture-capital investment for new software companies unbundling just one industry, financial services, reached $13.8 billion in 2015, more than double the total invested in such ‘fintech’ in 2014 and six times more than the funding deployed in 2011.” (Ibid. p.74)

It has now become a condition of a business’s existence that it utilises the whole array of what digital technology has to offer, with the result that the technology now touches every aspect of people’s daily lives. But in the industry where it first took a firm foothold – the financial industry – the use of digital tools long ago evolved beyond the bank customer and made its way directly into the financial markets.

“About half of all buying and selling on many of the world’s crucial financial markets is now automatic high-frequency trading. HFT is ultrafast. Whenever I speak to someone who might know and be prepared to tell me, I ask them just how fast that currently is: in other words, what’s the minimum time interval between the arrival of a ‘signal’ – a pattern of market data that feeds into an HFT algorithm – and an HFT system responding to the signal by sending an order to buy or sell, or cancelling an existing order? When I first asked, in 2011, the answer was five microseconds: five millionths of a second. At the time it seemed extraordinarily fast, but now it seems leisurely. Data released last September by Eurex, Europe’s leading futures exchange, indicated that the speed is now 84 nanoseconds (billionths of a second): sixty times faster than it was in 2011.” (“Just How Fast: the increasing speed of high-frequency trading”, by Donald MacKenzie, in London Review of Books, 7 March 2019)
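
The scale of that change is worth pausing over. A brief check of MacKenzie’s figures, with a note on what 84 nanoseconds means physically:

```python
# MacKenzie's figures checked directly.
reaction_2011 = 5e-6     # five microseconds, in seconds
reaction_2019 = 84e-9    # 84 nanoseconds, in seconds

print(f"speed-up: {reaction_2011 / reaction_2019:.0f}x")  # ~60x, as stated
# For scale: even light travels only about 25 metres in 84 nanoseconds,
# which is why HFT firms pay to locate their servers inside the exchanges.
print(f"light travel in 84 ns: {3e8 * reaction_2019:.1f} metres")
```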

We are now in the realms of an unknown world where distances, measured by the size of a transistor on a microchip, are smaller than the wavelength of visible light, and where the interval between stimulus and response is close to instantaneous.

All of this can be deeply disturbing when the science behind it occupies such an intimate part of our daily lives. Our behaviour is now not only influenced by the internet; large swathes of it are determined by it. Increasingly we communicate with family, friends and even our doctors by email, purchase goods and services and interact with our banks online, and use the internet to answer a multitude of questions. But in the process we generate an online version of ourselves which reveals our interests, our patterns of consumption, our politics and beliefs, and our network of friends. Computer algorithms construct this version of ourselves into a profile, which is then used by advertisers to target goods and services at their most likely consumers. We are identified no longer as organic entities with a physical presence but as digital ones that exist only in terms that advertisers understand.
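
What does it mean in practice for algorithms to construct such a profile? A deliberately toy sketch follows, with invented events, categories and matching rules; real advertising systems are vastly more elaborate, but the principle of reducing a person to a queryable tally of inferred interests is the same.

```python
# Toy illustration of behavioural profiling: online actions are mapped
# to inferred interest categories, and only the tally is kept.
from collections import Counter

# Invented sample of one person's online actions.
events = [
    ("search", "cheap flights dublin"),
    ("purchase", "running shoes"),
    ("visit", "mortgage calculator"),
    ("search", "marathon training plan"),
]

# Invented category rules; real systems learn these at scale.
CATEGORY_KEYWORDS = {
    "travel": ["flights", "hotel"],
    "fitness": ["running", "marathon"],
    "finance": ["mortgage", "loan"],
}

def build_profile(events):
    """Reduce raw actions to a count of inferred interests."""
    profile = Counter()
    for _action, text in events:
        for category, keywords in CATEGORY_KEYWORDS.items():
            if any(word in text for word in keywords):
                profile[category] += 1
    return profile

print(build_profile(events).most_common())
# [('fitness', 2), ('travel', 1), ('finance', 1)]
# An advertiser targeting "fitness" never sees the person, only this
# digital stand-in.
```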

But this online profile remains a component of who we are in the physical world and would not exist without our physical selves. It also provides a vulnerable portal through which those with the skills to navigate the digital world can invade and take advantage of our growing reliance on that world. To do this, of course, such people require motivation, and the most obvious one is financial. This vulnerability was to a large extent overlooked at the dawn of the new digital world. One of the original justifications offered by banks for incrementally shifting customers towards a cash-free environment was the claim that it offered a more secure relationship between the individual and his or her money. The early talk was that traditional criminal offences like robberies, forgeries, counterfeiting and theft would be made obsolete by the arrival of Electronic Funds Transfer Systems. Unfortunately that has not happened. What has in fact happened is a decline in things like bank robberies and an explosion in online fraud.

In January 2017 the Daily Telegraph reported that online fraud had become the most common crime in the country, with almost one in ten people falling victim. More than five and a half million cyber offences were believed to take place each year, accounting for almost half of all crime in the country (see “Fraud and cyber crime are now the country’s most common offences”, Daily Telegraph, 19 January 2017).

The ingenuity of hackers and fraudsters grows with every development in computer security; it is akin to an arms race. But people are not only concerned about being phished or hacked. They are also concerned about malware invading their computers. One recent attack gained access through the normally trusted mechanism by which operating software is automatically updated.

“Researchers at cybersecurity firm Kaspersky Lab say that ASUS, one of the world’s largest computer makers, was used to unwittingly install a malicious backdoor on thousands of its customers’ computers last year after attackers compromised a server for the company’s live software update tool. The malicious file was signed with legitimate ASUS digital certificates to make it appear to be an authentic software update from the company” (“Hackers Hijacked ASUS Software Updates to Install Backdoors on Thousands of Computers”, Motherboard, 25 March 2019)
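
How such an attack defeats the usual safeguard can be seen from a minimal sketch of signed-update verification, written here in Python using the third-party cryptography package. Everything in it (the key pair, file contents, function names) is invented for illustration; it is not ASUS’s actual update mechanism.

```python
# Minimal sketch of signed-update verification, the safeguard the
# ASUS attackers subverted. Illustrative only; not ASUS's mechanism.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Stand-in for the vendor's code-signing key pair. In reality the
# private key lives on the vendor's build servers and only the
# public key ships with the updater.
vendor_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
vendor_pubkey = vendor_key.public_key()

def sign_update(update: bytes) -> bytes:
    # Performed by the vendor when an update is released.
    return vendor_key.sign(update, padding.PKCS1v15(), hashes.SHA256())

def update_is_trusted(update: bytes, signature: bytes) -> bool:
    # Performed by the updater on the user's machine before installing.
    try:
        vendor_pubkey.verify(signature, update, padding.PKCS1v15(), hashes.SHA256())
        return True
    except InvalidSignature:
        return False

genuine = b"firmware v2.1"
sig = sign_update(genuine)
assert update_is_trusted(genuine, sig)            # a legitimate update passes
assert not update_is_trusted(b"backdoored", sig)  # a tampered file fails
# The check is only as strong as the key: attackers who can sign their
# backdoored file with the vendor's own legitimate certificate, as in
# the ASUS case, produce a "valid" update and the malware installs.
```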

It seems that the critical technology which society relies upon to function and on which individuals depend is constantly under threat. The anxiety that this creates among significant numbers of people is regularly acknowledged by warnings from institutions, businesses and governments about the way in which our online profile is vulnerable. This is then countered with messages of assurance by articles in the media, which seek to allay those anxieties through highlighting the more fanciful myths surrounding the new technology. But it is also acknowledged in the way that modern philosophy is evolving.

One of the leading philosophical thinkers of the era is Luciano Floridi, currently Professor of Philosophy and Ethics of Information at Oxford and one of the proponents of what is called the Philosophy of Information. In one of his books he makes the observation that:

“The agricultural revolution took millennia to exert its full impact on society, the industrial revolution took centuries, but the digital one only a few decades. No wonder we feel confused and wrong footed.”

But “confused and wrong-footed” is a generous and rather neutral way of putting it. The predominant condition the digital revolution has created is not one of confusion or of being wrong-footed: it is anxiety. Floridi seems to attribute what he prefers to describe as confusion to the rapidity of the digital revolution. While this is undoubtedly a contributing factor, it is not central to the cause. People can accommodate rapid change even though they may feel uncomfortable with it. Our townscapes are changing all the time as new finance-driven property booms lead to the destruction of familiar physical landmarks, replaced by new multi-storey residential apartment blocks. Familiar landmarks are disappearing at a rate that may be uncomfortable and may create anger and feelings of nostalgia for what has been lost, but those changes do not create anxiety. What is felt is temporary: people soon learn to navigate the new landmarks and grow accustomed to them. In other words, speed of change does not in itself create anxiety. It is not the same when it comes to the digital world.

The problem is that anxiety has become intrinsic to the nature of the thing that defines the modern age. To the ordinary person the technology involved departs from, and is alien to, the conventional technology that goes into the making of our familiar physical world. As one leading theoretical physicist, mathematician, and Nobel laureate has put it:

“Fundamental physics both constrains and enables technology. Abstractly this is a truism, since much of technology is embodied in machines and structures which, being physical objects, are subject to the laws of physics. Yet over much of history, in almost all areas of technology, the relationship between fundamental theory and practical applications has been rather loose. Consider, for example, some outstanding highlights of Roman engineering, their great roads, aqueducts and the Colosseum. As described by Vitruvius in De Architectura, the technology that supported these feats was based on long-accumulated experience, codified in empirical rules of construction materials and their preparation – in some ways, anticipating the composites of today – but there is nothing that we would recognise as systematic materials science. Similarly, the central motif of Roman construction, the arch, is presented as a template, not as a mathematically determined solution to problems of loading and stress. . . .

Today the connection between fundamental physics and technology is much tighter. Notably, modern microelectronics and telecommunications support the processing and transmission of information at speeds that would have seemed utterly fantastic just a few decades ago. These profoundly enabling technologies would be unthinkable without deep, reliable understanding of the quantum theory of matter and of light (including radio, microwaves and the rest of the electromagnetic spectrum). No amount of tinkering or “innovation” could have got you there. . . .

Thus in principle we could, by solving the appropriate equations, replace experimentation with calculation, in all those applications. This represents, in human history, a qualitatively new situation. It has arisen over the course of the 20th century, primarily as a result of dramatic advances in the application of quantum mechanics.” (“Physical Foundations of Future Technology”, by Frank Wilczek, in Megatech: technology in 2050, edited by Daniel Franklin. Published by The Economist Books, 2017, pp.22-23)

Ever since the division of labour became a condition for the survival of humankind there has been a division of the skills required for that survival. In the early days of its evolution those skills were, by today’s standards, fairly primitive, and individuals retained the capacity to bridge the divisions between the activities involved. Thus it was possible for someone with the skills to make stone implements also to have the skills to use those implements, whether in hunting or in agriculture; in fact it was an advantage if this cross-over existed, as it helped to generate the empirical means by which both skills and methods might be improved. As social organisation advanced, however, the division of labour became increasingly complex and the capacity for this cross-over of skills was markedly reduced. The advent of steam power and then electricity impacted on everyone’s life, but this came with the need for people with increasingly rarefied skills to enable everyone to benefit from the new technologies. There then emerged the skills associated with the making of machines (toolmakers, welders, metal-workers, iron and steel-makers, etc.) and the skills associated with maintaining them (mechanics, plumbers, electricians, painters, etc.). At each stage of this advancement there emerged a class of people with skills that could not easily be shared and from which the general population was increasingly excluded.

However, what remained was the tactile relationship of the broader population with the physical representations of the dynamic technologies that defined their world. People could see the machine in action and walk across the bridge that spanned expanses of space; and, even though electricity itself remained invisible, they could see the pylons, cables, switches and junction boxes that brought it to their homes. Likewise, when something went wrong, the cause of the fault could be seen in physical form and the solution linked in a tangible way to that fault.

In the modern age, on the other hand, where an understanding of the digital forces that define our existence remains a mystery to the vast majority of people, the skills of those who control and operate these forces are at a further remove:

“Non-physicists are often bemused to hear physicists speak of the ‘simplicity’ of their fundamental theories. For in practice only a very small proportion of the human race understands those theories, and it takes years of determined study and hard thinking for any individual human to achieve understanding.” (Ibid., p.26)

The theories behind past technologies were also elusive to the majority of people. Steam engines, the electric telegraph, internal combustion engines and air travel all worked on principles that defied ordinary understanding. But they remained separate and distinct in their operation. Steam power, electricity, combustion and aerodynamics each occupied its own corral and could be left to the experts to develop and operate, in the sure knowledge that they did not invade people’s personal space in any over-reaching way. No record was retained every time a person availed himself or herself of the facilities those technologies made available, and as a result no profile was built up of how people lived their daily lives as individuals. This has now all changed. The technology that is now an integral part of modern facilities cuts across most aspects of our lives that had previously remained separate. On top of that, it operates according to principles of physics that ensure it cannot properly be monitored by those whom it affects in such an intimate way.

While much of what digital technology offers is extremely useful and acts as an aid to greater efficiency, the underlying basis of that technology generates a profound and abiding sense of anxiety among a significant section of the populace. From a political point of view it is this, and not so much the well-articulated surveillance of people’s opinions by the State, that should concern us: the constant atmosphere of anxiety that the technology leaves in its wake is something the State can exploit in ways that exclude the need for rational assessment of its actions.

While the anxiety caused by the nature of the underlying technology is inevitable, given the level of trust we are expected to invest in intangible electronic pulses, the anxiety relating to the security of our relationship with the digital world is another matter. There is nothing to prevent governments from providing more protection to the user in this area. For instance, a simple online passport unique to every citizen would go a long way towards preventing things like online identity theft. Yet there is a marked reluctance on the part of governments to act in this way. In the meantime there is a real benefit to the State in sustaining levels of anxiety among its citizens in this arena, and it would be surprising if the State remained unaware of it.

In the early years of the 20th century, in the period before Britain’s declaration of war on Germany, the populace was maintained in a similar state of anxiety by the fear of imminent invasion by Germany. Novels, newspaper articles and politicians continued to push the likelihood of invasion despite rational logistical counter-arguments showing such a thing to be impossible. Such irrational invasion fears retained their potency precisely because the anxiety they created became, in itself, the means by which the populace was immunised against rational thought on the subject.

Today the invasion is not German warships on the British coast but Russian cyber ships invading our computers. The potential Russian threat from this source is regularly pushed in novels, in newspaper and internet articles, and by politicians, in exactly the same way that the German naval invasion was forecast in the early 1900s; and, just as then, its object is both to generate anxiety and to exploit it. The extent of this exploitation is revealed by a simple Google search for “Russian hacking”, which produces over 26 million hits. The fact that the only evidence of cyber invasions by governments has been against Iran in 2012 (by both the United States and Israel) and against Venezuela in 2019 (by the United States) is of course never part of the narrative.