A scientist first imagined a vast ‘organ of memory’ over 300 years ago, and today’s futurists are still attempting to refine the concept. Now work is in progress on an artificial intelligence so advanced that it immortalises people by ‘remembering’ their entire experience.
Artificial intelligence and the dream of eternal life
Words by Giovanni Tiso, photography by Thomas S G Farnetti, average reading time 7 minutes
When we talk about artificial intelligence (AI), we usually refer to the ability of machines to demonstrate human-like reasoning in the performance of specific tasks. Examples include excelling at strategic games such as chess, understanding speech, translating texts from one human language to another, formulating medical diagnoses and drafting legal documents.
All these achievements are groundbreaking in their own way. Yet there is one essential function of the human mind that most people happily delegate to their desktop computers and personal devices on a daily basis, and that is not usually discussed under the rubric of AI: remembering things.
Perhaps it is because technologies of memory have been with us for a very long time. Speaking about the invention of writing, the Greek philosopher Socrates – as reported by his disciple Plato – cautioned that committing ideas to the page instead of keeping them alive in the mind would devalue them. The knowledge acquired from books, a parable in Plato’s ‘Phaedrus’ explains, would not lead to wisdom but rather to a mere semblance of it.
This ancient and perplexing warning is sometimes invoked as a precursor of current fears that an excessive reliance on digital technologies may be harming people’s cognitive faculties. The most succinct and direct expression of these fears can be found in the title of a much-discussed article for The Atlantic by Nicholas Carr – ‘Is Google Making Us Stupid?’ – later expanded into a book purporting to explain “what the internet is doing to our brains”.
Intelligence beyond memory
However, modern critiques draw a distinction between memory and reasoning that would have made no sense to Socrates or Plato. For them, a machine – such as today’s cheapest smartphone or personal computer – that could be instructed to remember things would inherently possess a kind of intelligence.
What makes the marginal place of memory in the current discourse about AI even more peculiar is that there are some very influential scientists and IT executives who believe that computers will soon be able to remember them so accurately as to make them immortal.
The genesis of this idea could be traced to the American mathematician Norbert Wiener, the father of cybernetics, who proposed in his 1964 book ‘God and Golem, Inc.’ that it should be “conceptually possible for a human being to be sent over a telegraph line”. Or maybe it is to be found in the writings of British science-fiction author Arthur C. Clarke, who had already imagined in his 1956 novel ‘The City and the Stars’ that in the future computers might be capable of recording and recreating people.
As a matter of fact, this is an area in which science fiction and scientific or quasi-scientific speculation cannot be prised apart. Years before penning the influential essay ‘The coming technological singularity’ (1993), American computer scientist Vernor Vinge had published a novella that dramatised his hypothesis about the day when computers would outmatch the complexity of the human brain.
And when it came time for Austrian roboticist Hans Moravec to explain precisely how a person’s mind might be transferred onto a digital machine in his non-fiction tract ‘Mind Children’, he simply switched into science-fiction mode mid-book. His theory took the form of a short story.
When the natural philosopher and founding member of the Royal Society Robert Hooke proposed – around the turn of the 18th century – that a soul operating without sleep or interruption over the course of 100 years could insert into “the Repository or Organ of Memory” a sum total of 3,155,760,000 ideas, wasn’t he also telling a fiction under the guise of science?
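Hooke’s oddly precise total becomes less mysterious on inspection: assuming years of 365¼ days, it corresponds to exactly one idea for every second of those 100 years.

100 years × 365¼ days × 24 hours × 60 minutes × 60 seconds = 3,155,760,000 seconds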
It is this fiction that reached the boardrooms of some of the world’s foremost technology companies 300 years later, when a new generation of fabulist scientists repackaged it and sold it with the added promise of eternal life.
Digitising life’s experiences
In the 1990s, veteran computer engineer and Microsoft researcher Gordon Bell dedicated the last years of his professional life to MyLifeBits, a life-logging project that saw him wearing a recorder on his person at all times to digitise and then catalogue his every experience. However, this was not merely an act of extreme diarying: funded by Microsoft’s research division, his project was ultimately aimed at achieving immortality.
In a paper published in 2000, Bell and his collaborator Jim Gemmell explained that immortality requires “part of a person to be converted to information (Cyberized), and stored in a more durable media”. They continued: “We believe that two-way immortality, where one’s experiences are digitally preserved and which then take on a life of their own, will be possible within this century.”
In the same decade, the artificial-life team at British Telecom made headlines with its Soul-Catcher Chip. Funded to the tune of £40 million, the project aimed to produce a device capable of storing “a complete record of every thought and sensation experienced during an individual’s lifetime”, thereby heralding “immortality in the truest sense”.
BT researcher Ian Pearson even supplied an estimate of how much information the human brain processes in a lifetime that is astonishingly reminiscent of Robert Hooke’s 300-year-old calculations. It was 10 terabytes of data, he explained, or “the equivalent to the storage capacity of 7,142,857,142,860,000 floppy disks”.
We may smile at these extravagant prognostications or point to the chasm between the recordings captured by wearable sensors and the full richness and variety of human experience. We may also vigorously question whether these digital archives could ever come close to replicating our plastic, imperfect, continuously changing stores of memory, let alone be ‘reanimated’ and qualify as the seamless continuation of a human being’s life.
But it is undeniable that these ideas are proving popular in the highest echelons of corporate power. This is especially true in the industries that trade in personal information and have every interest in promoting the idea that our daily interactions with digital devices are the truest possible representation of who we are.
The essence of being human
When Google announced the appointment of Ray Kurzweil as director of engineering in 2012, it was speculated that his fame might prove a drawcard for star software engineers who, like him, cultivate dreams of immortality.
A successful inventor and charismatic populariser, Kurzweil has been at the forefront of the ‘singularity’ movement (which holds that a superintelligence designed to benefit humans will one day be created) for decades. However, his story has a twist: besides planning to cheat death himself, Kurzweil has been working tirelessly to bring his father back to life, too.
Here’s the problem: a conductor, pianist and music educator, Dr Fredric Kurzweil died in 1970 at the age of 57, years before the first personal computer went on sale. There is no digital record of him: only photos, letters and the odd paper – the typical archive of a person of his social class living at that time. It is therefore up to his son to fill in the blanks: to tell the story of his father, to grasp what was unique about him and teach it to his machines.
This goes well beyond a computer’s ability to reproduce our facial features or mimic our conversations, which – to paraphrase Socrates – could never go beyond producing a mere semblance of us. Unable to rely on a mass of raw data, Kurzweil Jr must stumble upon a truth already intuited by Uruguayan novelist Eduardo Galeano: that human beings aren’t made of atoms, much less of bits. They are made of stories.
About the contributors
Giovanni Tiso
Giovanni Tiso is an Italian writer and translator living in New Zealand. He is the editor of Overland's online magazine.
Thomas S G Farnetti
Thomas is a London-based photographer working for Wellcome. He thrives when collaborating on projects and visual stories. He hails from Italy via the North East of England.