
As the Americans were flying over Hiroshima in the summer of 1945, Otto Hahn, the German scientist who first discovered that you could split a uranium atom in two, was being held captive in an English country estate by British forces.

As news of the bomb filtered through, the Brits surreptitiously recorded their prisoner’s reaction, catching on tape the moment that Hahn realised his work had been appropriated by the enemy and used in the most terrible show of force imaginable.

Hahn was shattered by the news. He confessed that when he first realised the potential of his discovery he had contemplated suicide. Now, witnessing the moment that his work had made the efficient annihilation of hundreds of thousands of people possible, he felt personally responsible.

There haven’t been many scientific discoveries in the intervening decades with the potential for such catastrophic consequences, but when I first came across the transcripts of Hahn learning about the bomb, I couldn’t help but think about the modern parallels to this story, and the question of where the line of responsibility lies. Should he have felt guilty that his work went on to be used for harm? Should any of us?

Fast forward to 2011. Just after finishing college, Caleb Thompson was offered an internship at a software company. His job was to build a tool that used a mobile phone to track down the source of a WiFi signal. It was an interesting challenge: as you moved around, the signal strength changed. Could you use that data to predict where the signal was coming from? And then the goal shifted slightly: could you also sniff the airwaves for the invisible signals put out by other mobile phones and track those down too?
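
The tool itself isn’t described in any detail, but to give a flavour of the kind of problem this is, here is a toy sketch (mine, not Caleb’s, with invented numbers) of one simple, well-known approach: weight each position you measured from by the strength of the signal you recorded there, and take the average.

```python
# Toy sketch only: estimate where a transmitter sits from signal-strength
# readings taken at known positions, using a signal-weighted centroid.
# All positions and readings below are invented for illustration.

def estimate_source(readings):
    """readings: list of (x, y, rssi_dbm) tuples measured while walking around.

    Stronger signals (closer to 0 dBm) pull the estimate towards the point
    where they were measured.
    """
    total_weight = 0.0
    sum_x = sum_y = 0.0
    for x, y, rssi_dbm in readings:
        weight = 10 ** (rssi_dbm / 10.0)  # convert dBm to linear power (mW)
        total_weight += weight
        sum_x += weight * x
        sum_y += weight * y
    return sum_x / total_weight, sum_y / total_weight

if __name__ == "__main__":
    # A hypothetical walk past a source near (3, 4): the signal peaks close by.
    samples = [(0, 0, -70), (2, 1, -55), (3, 3, -42), (4, 5, -45), (6, 6, -60)]
    print(estimate_source(samples))
```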

Caleb never asked why you might want to track down phones in the first place. He just got on with solving the problem and writing the code. But as time wore on, the true nature of what he had become involved in slowly came to light.

The client was the US Department of Defense. His code wasn’t being used to track down mobile phones, per se, but to hunt down the enemy carrying the phones, and to shoot them.

There are ways to rationalise all this: you can’t stop the march of progress; better that we have the technology than the other side; supporting the military is generally a good thing to do. But I’m still glad it’s not my code that’s being used to kill people.

We’re living in an age where new technology offers gigantic upsides: artificial intelligence has the potential to diagnose cancer, catch serial killers and reduce prison populations. But when technology advances so rapidly, and algorithms designed for one purpose can be quickly picked up and used for another, the question of where responsibility lies is brought sharply into focus. Facebook’s news feed has been exploited to manipulate voter behaviour. Instagram’s messaging service has been used to goad people into committing suicide. Google has served adverts for higher-paid jobs to men more often than to women, perpetuating gender imbalances in the process.

For my own part, too, there have been times when the potential runaway applications of my work have troubled my conscience.

When I was working on a project with the Metropolitan police in London, we were looking back at what happened during the widespread riots that swept the city in 2011. We wanted to understand how rioters chose where to congregate, with the intention of being able to predict lawless behaviour if such an event were ever to occur again.

It was all proof of concept and wouldn’t have been much good in a real-world riot, but I supported the idea nonetheless: that the police should have every available tool at their disposal to bring about a swift resolution to any unrest.

Once the paper had been published, I went to Berlin to give a talk on the work. I was universally positive about it on stage. Here was this great promise of a new technique, I boasted to the audience, that the police could use to keep control of a city.

But if there is one city in the world where people truly understand what it means to live in a police state, it is Berlin. In a city where the repressive rule of the Stasi is still so fresh in the memory, the people of Berlin did not take kindly to my flippant optimism.

Naive as I was, it just hadn’t occurred to me that an idea used to quash lawless looting in London might also be deployed to suppress legitimate protests. But the reaction of the audience that day stayed with me: I realised how easy it was to sit in your ivory tower and write lines of computer code without being mindful of the full potential consequences of your work.

Technology on its own isn’t good or evil. After all, satellite navigation was invented to guide nuclear missiles but is now used to help deliver pizzas. Likewise, Hahn’s splitting of the atom might have led to the terrible deaths of hundreds of thousands of people, but it has also been used for decades to provide a clean and sustainable source of energy for millions around the world. You can’t assess the value of an innovation in isolation; you have to consider whose hands it’s in.

But maybe we should try to be more mindful of the worst-case scenario. We should actively be thinking about what our inventions would look like if exploited by someone with less of a moral compass, and decide whether the world would really be better off with them in it. Because once a new technology is out there, there’s no taking it back.

— Guardian News & Media Ltd

Dr Hannah Fry is an Associate Professor in the Mathematics of Cities at the Centre for Advanced Spatial Analysis at University College London.