There’s one part of the ‘Do Designers Dream of Electric Sheep’ afternoon I keep going back to. One of the classic current AI discussion points is the problem of the out-of-control self-driving car. If one has had some sort of glitch or accident and is about to crash into (for example) a bus stop queue, how does it decide who to avoid and who to hit, who will survive and who won’t? Who will it seem to feel most empathy for?
And that led to this –
One very reasonable response came out in answers three, four and six. Actually, the group pointed out, this was an entirely avoidable situation. So, it should be made impossible. Careful readers will also spot that answer five refers to this, albeit in a subtle, oblique and highly allusive way.
(Oh, and I stripped out answer one because it’s a bit incomprehensible if you weren’t at the event. And answer seven led to some darkly fascinating, but also rather tangential, thinking – so I’ll come back to it in a bit.)
The design of the car, the design of the bus stop, the design of the rules that regulate traffic and the design of the city that the traffic moves through should all make this kind of situation impossible. Prevention on that scale is itself an act of empathy, extended to everyone the system touches – and it set me thinking that the network effect applies to empathy too: the more people included within a given act of empathy, the more powerful it becomes.
This leads to an interesting question – can one feel empathy for groups, rather than just individuals? The general consensus seemed to be no – that empathy is a person-to-person act, happening on a one-to-one basis. So – for example – if empathy moves me to support a homelessness charity, I’m feeling empathy not for the charity, but for all the homeless people I’ve met.
Empathy is not abstract. It can only come from actual experience of real people.
That thought led to much discussion on the day. And, as an SF author, it raises a very interesting question for me. My books are set in imaginary futures, and they’re about imaginary people. How, then, can I help the reader feel empathy for them?
I need to make their futuristic motivations, behaviours, challenges and solutions comprehensible to a contemporary reader. I need to situate the present in the future. Or, turning things round, I need to use the unreal – my invented characters, their invented world – to talk about the real – the actual lived experience of my readers. In SF, the other we feel for is always some version of ourselves.
There’s an interesting point here for designers. When they design something, they invent a small part of the future. They do what science fiction writers do – they imagine the new into being. And, as in science fiction, that newness will only work if it reaches back into the present and responds to a real experience, or need, that already exists now – if it shows empathy with the actual. Good design, like good SF, is neither exclusively about the present nor the future. Instead, it’s a bridge between the two, one that always builds out from the lived experience of people right now.
And I nearly forgot – answer seven. It was pointed out that, if malfunctioning cars steered towards or away from individuals according to their online profiles, a market in more attractive (and therefore safer) profiles would quickly develop. It was then realised that you could hack someone’s profile to make it much less attractive, leaving them more likely to be hit in this scenario and their lives probably more hazardous in general – a new and disturbing kind of crime. And of course that kind of decision-making implies a very dark society, one that’s both judgemental and unconcerned about risk in ways we’d find appalling.
That leads to two final thoughts. First of all, science fiction is never just about the tech; it’s about the people who decide how to use the tech, and the society that shapes those decisions. And secondly, something I’ve been thinking about a lot lately: SF isn’t really about tech that works – rather, SF stories happen because of mistakes, malfunctions and unintended consequences. More on that in a future post…