Welcome to Wait! Just Listen, a weekly series of short essays dedicated to unpacking moments of humanness from the all-consuming web of digitisation. If this type of content enriches your life in any way, please consider subscribing. If you need more reasons to subscribe, then click here.
But before we get started on today’s topic, I’ve a favour to ask. If you get a spare second, please consider sharing these weekly scribbles with someone who might enjoy them just as much as you have. Your continued support has made this literary voyage supremely rewarding.
Martha felt a burgeoning sense of anticipation as she pulled out a seat at the quaint new bistro across town on a pleasant but crisp Melbourne morning. A waiter, rather precariously resting two coffees and a bagel on his tray, enquired, “Are you expecting anyone else?” Martha replied without hesitation, “Yes, two others”, with an exaggerated emphasis on the word “two-ooo”. Funny how the human mind often, quite unintentionally, reveals its own attachment to particular fixations, ideals and ideas through communication. Either way, Martha was cherishing the idea of socialising this morning, contrary to her normally reserved disposition.
She was pleasantly surprised to see human waiters for a change - the wobbly and nervous balancing act gave it away. It was a nostalgic reminder of yesteryear, a time when human interaction remained a staple part of the hospitality industry. There was a grim sterility to the current breed of ‘Robo-Aid’ droids - government-funded robotic service helpers. But you eventually grow immune to their presence; they grey out into the surroundings with minimal fanfare and fuss, unlike in the movies, where they are often dramatised as part of some postmodern dystopic adventure.
The text she received last night was cryptic but significant - “there’s someone I’d like you to meet”. While she had only cursory awareness of her son’s previous relationships, this one seemed somewhat different and refreshing from the snippets she had been privy to, raising hope that perhaps her son would finally find a soulmate after years of heartbreak. She could hear the baritone resonance of her late husband’s words, “Stop building castles in the air, Martha!” But even so, there was an undeniable poetic charm in meeting the future partner of one’s offspring - the symbolic equivalent of passing on the generational baton.
Always fashionably late, her son finally arrived, visibly frazzled from the morning rush.
Huddled beside him was his partner, looking calm but somewhat curiously vacant and unaffected by the biting cold enveloping much of Melbourne this winter. So much so that Martha started to fleetingly wonder if she was perhaps from a particularly icy part of the world, maybe from Scandinavia - an assumption not too far-fetched given her flowing blonde locks.
“Mum, I’d like to introduce you to Adele. Sorry we’re late, had a few… um… housekeeping matters to take care of.”
Hastily rubbing his hands together for some warmth, he nonchalantly added, “She needed a software upgrade at the last minute, which caught us by surprise. Didn’t it, honey?”, stealing an endearing glance at Adele.
“I guess we got a bit blindsided, as you normally do with issues like these. But she’s all good now… up to speed with our family history and other bits and bobs.”
Adele managed a smile - delivered with an immediacy that seemed mildly unsettling.
Martha looked down at her piping hot cappuccino as its steam formed a gradual mist on her glasses. There was now, more than ever, something ever so reassuring about a morning cup of coffee.
The above narrative snippet may be a work of fiction, but will it always remain just a postmodern imaginative tale of techno-impressionism?
With the frantic rate of development in Artificial Intelligence (AI) and machine learning technology (I dabble with the latter in my profession), there is more than a glimmer of possibility that synthetic life (a.k.a. robots) will at some point be integrated into the very fabric of human life. At the very least, one would expect further normalisation of relationships between AI and humans.
But there is one central problem.
There is an age-old conundrum over how computer scientists can effectively replicate deep-seated and less tangible human values such as curiosity, emotional intelligence and human impulse. The truth is that we’ve radically and irreversibly altered our relationship with the natural world through technology, and whilst today’s robots mostly come in the flavour of virtual assistants and bots, it wouldn’t be absurd for such technologies to inherit more dramatic human characterisations. However, such shifts would be neither seamless nor complete. There would be pitfalls.
We don’t yet know how to algorithmically map, dissect, project, and replicate what it feels like to have a particular subjective experience - we only know how to feel it. This knowledge is non-transferable with the current tools of science.
The above has been rather poignantly addressed by Brian Christian, an American non-fiction writer and computer scientist responsible for the spell-binding book, “The Alignment Problem: Machine Learning and Human Values”.
At its core, The Alignment Problem provides a frank assessment of what is required for AI systems to capture our emotions, norms and values and fundamentally do what we want.
Perhaps the most intriguing aspect of Christian’s work lies in his emphasis on how robots, to be effective social creatures, must learn not only to replicate human actions with precision, but also to do so when dealing with supposed instances of human irrationality - those ‘brain fart’ moments. Further to this, Christian argues that if AI robots were to interact holistically with humankind, they would need some way of distinguishing between intended and unintended action. Every action may have a reaction, but what that reaction is and how it eventuates remains a partial mystery, even to humans, at the best of times.
Christian offers the example of a computer game played by an AI. An in-game reward system details specific tasks and the points one can win. Through neatly engineered reinforcement loops, the AI may develop a mastery of the game’s reward system. But what if the intended outcome of the game wasn’t just about winning or point totals? What if a game doles out its rewards so sparingly that success demands genuine exploration and a creative rethinking of strategy, as Atari’s Montezuma’s Revenge, released in 1984, famously did? Will the AI robot be able to mirror these non-formulaic, arbitrary traits?
Christian notes that AI robots are typically experts at ‘ticking off’ specific operational tasks, but they stumble when asked to achieve outcomes that defy common sense. For example, how would one hard-code the amorphous nature of human curiosity into machines? Whilst search engines such as Google may function on complex predictive logic to feed curious minds - by providing educated guesses of what you might desire or want - this remains detached from the actual inner workings (and failings) of your mind. The engine reacts and behaves in accordance with the data you generate without ever exhibiting a form of unique inquisitiveness.
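For readers curious how researchers have tried to approximate this, one common trick in reinforcement learning is a “count-based” exploration bonus: the agent is paid a little extra for visiting states it hasn’t seen before, so novelty itself becomes rewarding. Below is a minimal sketch of the idea in Python - the toy chain environment, the `beta` parameter and the function names are my own illustration, not taken from Christian’s book:

```python
import math
import random
from collections import defaultdict

# A toy "sparse reward" chain: the only score sits at the far right,
# mimicking games like Montezuma's Revenge, where point-chasing alone
# gives an agent almost nothing to learn from.
CHAIN_LENGTH = 10
GOAL = CHAIN_LENGTH - 1

def extrinsic_reward(state):
    """The game's own score: zero everywhere except the goal."""
    return 1.0 if state == GOAL else 0.0

visit_counts = defaultdict(int)

def curiosity_bonus(state, beta=0.5):
    """Count-based exploration bonus: novel states pay more,
    and the payout shrinks as a state is revisited."""
    visit_counts[state] += 1
    return beta / math.sqrt(visit_counts[state])

def step(state, action):
    # action is -1 (left) or +1 (right); walls clip the movement
    return max(0, min(GOAL, state + action))

random.seed(0)
state = 0
total_extrinsic, total_shaped = 0.0, 0.0
for _ in range(50):
    action = random.choice([-1, 1])
    state = step(state, action)
    r = extrinsic_reward(state)
    total_extrinsic += r
    total_shaped += r + curiosity_bonus(state)
```

This is not curiosity in any human sense - it is a decaying incentive bolted onto a score. In practice the raw count is often replaced by a learned novelty or prediction-error model, but the shape of the nudge - novelty pays, familiarity doesn’t - is the same, and it is exactly the kind of proxy-for-a-value that Christian’s book interrogates.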
You may get AI frameworks to perform specific, discrete tasks, but how do you ensure they achieve that level of organic fallibility synonymous with the human condition? As Christian postulates, AI can prioritise certain tasks based on its programming, but it ultimately remains ignorant of the overarching human motive behind the sequence of actions. There is still a sense of dour lifelessness amidst all its sophisticated coding and hardwiring.
There needs to be some kind of acknowledgement in AI systems of the existence of scenarios/events that, in some respects, will always remain unexplainable outliers. There will be situations that defy common expectation and ones that quite simply remain outside the realm of any sense-making.
Ultimately, the biggest challenge in the AI space lies in capturing, and confronting, human ambiguity. It is a rather ironic, if not familiar, problem given that humans, by their nature, also struggle with accepting the unknown. Despite the blistering pace at which technology is advancing, one can’t help but notice the level of intricate detail that needs thinking through before any more complex and fluid forms of human socialisation with AI can take place.
If the human condition resists any simple explanation, then replicating it will require some level of stupendous technical wizardry. Perhaps, a Martha-like predicament is still in the realms of the fantastical rather than the real. Who could say?