With a string of violent crimes plaguing Facebook in recent weeks, outraged social media users have demanded immediate accountability from Mark Zuckerberg’s team. Two harrowing clips of the Phuket incident, for instance, remained accessible on Facebook for about 24 hours and were together viewed almost 400,000 times. The unrelenting frequency with which such horrific atrocities keep playing out on the platform has prompted serious questions about the effectiveness of Facebook’s reporting systems and how violent and grisly content on social media can be flagged faster.

While social media in general and Facebook in particular have played a pioneering role in bringing together friends and families from far-flung corners of the globe, such platforms cannot deny their role, however inadvertent it might be, in amplifying visceral violence on a scale unparalleled in human history. Amid its praiseworthy quest to promote virtual and augmented realities, therefore, it is time for a reality check for Facebook itself.

Social media is a fantastic tool for sharing the lovely nuggets of our daily lives and thoughts, but it also carries a great responsibility for its providers as well as those who use it. A great deal of accountability rests primarily on users, yet to assume all users are self-regulating beacons of decent behaviour would be very naive indeed. It is evident that Facebook needs to set up reliable mechanisms for oversight and moderation of its live content. While the company has pledged to review its moderation practices and use more artificial intelligence to speed up its response to such graphic content, the need here is not for more ad hoc deployment of technology but for forging a foolproof strategy that shields such acts of horror from the eyes of the world.