Facebook has debuted a “web-enabled simulation” in which a population of bots it has created based on real users can duke it out – supposedly to help the platform deal with bad actors. But it’s not totally isolated from reality.
The social media behemoth’s new playpen for malevolent bots and their simulated victims is described in a company paper released on Wednesday with the ultra-bland, ‘please, don’t read this, ordinary humans’ title of “WES: Agent-based User Interaction Simulation on Real Infrastructure.”
While the authors cloak the work in several layers of academic language, the paper makes clear that their creations interact through Facebook’s real, live platform rather than a standalone mock-up. The bots are built to model various “negative” behaviors – scamming, phishing, posting ‘wrongthink’ – that Facebook wants to curtail, and the simulation lets the company tune its control mechanisms for suppressing those behaviors before turning them loose on the rest of us.
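For readers wondering what an agent-based setup like this might look like in the abstract, the toy Python below is purely illustrative and assumes nothing about Facebook’s actual WES code – the bot classes, the countermeasure() rule and the detection-rate sweep are all invented for this sketch – but it captures the loop the paper describes: scripted bad-actor bots act on simulated targets while a tunable control mechanism tries to suppress them.

```python
# Purely illustrative sketch – not Facebook's code. The bot classes, the
# countermeasure() rule and the detection-rate sweep are invented for this example.
import random

class BadActorBot:
    """A simulated bad actor that attempts a scripted 'negative' action each step."""
    def __init__(self, behavior: str):
        self.behavior = behavior  # e.g. "scam" or "phish"

    def act(self, target: "VictimBot") -> dict:
        # Produce an event describing the attempted action against another bot.
        return {"actor": id(self), "target": id(target), "type": self.behavior}

class VictimBot:
    """A simulated target; it exists only inside the sandboxed run."""
    pass

def countermeasure(event: dict, detection_rate: float) -> bool:
    """Toy control mechanism: block a harmful event with a tunable probability."""
    return random.random() < detection_rate

def run_simulation(detection_rate: float, steps: int = 1000) -> float:
    bots = [BadActorBot(random.choice(["scam", "phish"])) for _ in range(10)]
    victims = [VictimBot() for _ in range(50)]
    blocked = sum(
        countermeasure(random.choice(bots).act(random.choice(victims)), detection_rate)
        for _ in range(steps)
    )
    return blocked / steps  # fraction of simulated attacks that were suppressed

if __name__ == "__main__":
    # "Tweak the controls, rerun the bots": sweep the detection rate and compare.
    for rate in (0.3, 0.5, 0.7):
        print(f"detection_rate={rate}: blocked {run_simulation(rate):.0%} of simulated attacks")
```

The point of such a loop is the sweep at the bottom: the bots stay the same while the suppression settings change between runs, which is the kind of experimentation the paper says the simulation is for.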
Even though the bots technically operate on the real-life Facebook platform, with only the thinnest veil of programming separating them from real-world users, the researchers seem convinced enough of their ability to keep fantasy and reality separate that they feel comfortable hinting in the paper at new and different ways of invading Facebook users’ privacy.
“Because the WW bots are isolated from affecting real users, they can be trained to perform potentially privacy-violating actions on each other,” the paper notes – “WW” being Facebook’s name for its simulated, scaled-down version of the platform.
But these nosy bots aren’t wholly walled off from reality by any means. “A smaller group of read-only bots will need to read real user actions and react to them in the simulation,” Input Mag pointed out in its coverage of the unusual experiment, noting that the bots’ masters “haven’t decided whether to completely sandbox the bots or simply use a restricted version of the platform.” The outlet described Facebook’s goal in creating this simulation as building “its own virtual Westworld” – the western-themed, robot-populated, quasi-dystopic Disneyland of HBO’s popular sci-fi series.
And just as Westworld’s robots come into contact with real humans, Facebook’s researchers acknowledge in their paper that their supposedly self-contained bots could end up interacting with real users: “Bots must be suitably isolated from real users to ensure that the simulation, although executed on real platform code, does not lead to unexpected interactions between bots and real users.”
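How such an isolation boundary could be enforced is, again, easy to imagine in the abstract. The sketch below is hypothetical – the PlatformGateway class, its methods and the IsolationError are invented for illustration, not taken from the paper – but it shows the basic shape of the constraint: bot-initiated writes may only reach other bots, while anything touching real users stays read-only, echoing the “read-only bots” mentioned above.

```python
# Hypothetical sketch of the isolation rule quoted above; class and method names
# are invented for illustration and do not come from the WES paper.
class IsolationError(Exception):
    pass

class PlatformGateway:
    """Wraps platform calls and enforces the bot/real-user boundary."""
    def __init__(self, bot_ids: set):
        self.bot_ids = bot_ids

    def send_message(self, sender_id: int, recipient_id: int, text: str) -> None:
        # Write path: both ends must be simulated bots, never real accounts.
        if sender_id not in self.bot_ids or recipient_id not in self.bot_ids:
            raise IsolationError("bots may only act on other bots")
        print(f"[sim] {sender_id} -> {recipient_id}: {text}")

    def read_public_actions(self, user_id: int) -> list:
        # Read path: allowed even for real users, but strictly read-only.
        return []  # stand-in for a feed of observed actions

gateway = PlatformGateway(bot_ids={101, 102})
gateway.send_message(101, 102, "simulated phishing link")      # allowed: bot -> bot
try:
    gateway.send_message(101, 555, "simulated phishing link")  # blocked: 555 is not a bot
except IsolationError as err:
    print("blocked:", err)
```

Whether Facebook’s actual guardrails look anything like this is exactly the open question: the researchers themselves, per the coverage above, haven’t settled on a full sandbox versus a restricted version of the live platform.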
However, given the sheer volume of ‘oops’ moments Facebook has experienced in recent years – from leaving hundreds of millions of users’ phone numbers on an unprotected server, to letting Cambridge Analytica harvest data on tens of millions of unsuspecting users through a third-party app, to feeding user data to phone companies and other third parties without consent – the company’s ‘word’ is unlikely to count for much with users worried about being unwittingly enrolled in a bot-filled simulation.