Scott Robertson presented a “provocation” at the Decennial “Socio-Tech Futures” Event of the Consortium for the Science of Socio-Technical Systems Research, June 2018, University of Michigan, Ann Arbor. The event brought together alumni from over 10 years of summer workshops.
The Consortium for the Science of Sociotechnical Systems (CSST) is an NSF-supported research coordination network (Grant #1144934) supporting a community of academic and industry scholars who study sociotechnical systems.
Here is the provocation extended abstract:
What Happened? #Sociotech4Evil
Scott Robertson, Information and Computer Sciences, University of Hawaii
Trolling, Misuse, Propaganda, Weaponization, Politics
I have been studying social media and civic engagement for about ten years, which is more or less the lifespan of most social media platforms. I began with an optimistic and utopian perspective on the promise of these systems for broadening public discourse, engaging citizens, and encouraging participation. Over those ten years, this optimism has been tempered by reality, and ultimately soured completely by the overwhelming misuse of sociotechnical systems to mislead individuals and corrupt democratic discourse (#sociotech4evil?). So, what happened, and why were we so naïve (not everyone was, of course)?
Our research community is now at an ethical and moral crossroads comparable to the situation of social scientists after WWII. Some of the best research in social science came out of attempts to understand conformity, persuasion, obedience to authority, diffusion of responsibility, moral judgement, and other matters relevant to the maintenance of a civil and just society after the world collapsed into chaos in the mid-20th century. At this moment we find ourselves at the edge of a sociotechnical abyss, where social media companies are struggling to understand how to adjust their tools, legislatures and courts are attempting to respond to widespread information manipulation, and citizens are coping with propaganda and troll campaigns that infiltrate their everyday social sphere. This will only get worse as indefatigable social AI systems based on machine learning, trained in the black arts of propaganda and social manipulation, proliferate.
Have we as sociotechnical researchers focused too much on technical matters and the promise of social media, thereby failing to anticipate the motives of bad actors and the vulnerabilities of individuals immersed in new media technologies? How can we now change course, and pick up the difficult task of convincing funding agencies and others that we are not only facing a technical challenge, but a social challenge as well? Those who use sociotechnical systems to bad ends are taking advantage of new social and cognitive phenomena that are poorly understood: fractured attention, liquid tribalism, pervasive commercialization of social life and consciousness, media hyper-personalization, digital identity management, social media anxiety, big data profiling for profit, social bots trained to harm, and so on. While there are researchers who concern themselves with maleficent use of social media, it is critical for the whole sociotechnical community to develop a coordinated approach to these phenomena and possibly adopt a practice of exploring “threat assessment” as part of every research paper and proposal.