Friday, April 14, 2017

The Future of Free Speech, Trolls, Anonymity and Fake News Online

Many experts fear uncivil and manipulative behaviors on the internet will persist – and may get worse. This will lead to a splintering of social media into AI-patrolled and regulated ‘safe spaces’ separated from free-for-all zones. Some worry this will hurt the open exchange of ideas and compromise privacy

By Lee Rainie, Janna Anderson and Jonathan Albright



The internet supports a global ecosystem of social interaction. Modern life revolves around the network, with its status updates, news feeds, comment chains, political advocacy, omnipresent reviews, rankings and ratings. For its first few decades, this connected world was idealized as an unfettered civic forum: a space where disparate views, ideas and conversations could constructively converge. Its creators were inspired by the optimism underlying Stewart Brand’s WELL in 1985, Tim Berners-Lee’s World Wide Web and Electronic Frontier Foundation co-founder John Perry Barlow’s 1996 “Declaration of Independence of Cyberspace.” They expected the internet to create a level playing field for information sharing and communal activity among individuals, businesses, other organizations and government actors.


One of the biggest challenges will be finding an appropriate balance between protecting anonymity and enforcing consequences for the abusive behavior that has been allowed to characterize online discussions for far too long.
Bailey Poland

Since the early 2000s, the wider diffusion of the network, the dawn of Web 2.0 and social media’s increasingly influential impacts, and the maturation of strategic uses of online platforms to influence the public for economic and political gain have altered discourse. In recent years, prominent internet analysts and the public at large have expressed increasing concerns that the content, tone and intent of online interactions have undergone an evolution that threatens its future and theirs. Events and discussions unfolding over the past year highlight the struggles ahead. Among them:
Respected internet pundit John Naughton asked in The Guardian, “Has the internet become a failed state?” and mostly answered in the affirmative.
The U.S. Senate heard testimony on the increasingly effective use of social media for the advancement of extremist causes, and there was growing attention to how social media are becoming weaponized by terrorists, creating newly effective kinds of propaganda.
Scholars provided evidence showing that social bots were implemented in acts aimed at disrupting the 2016 U.S. presidential election. And news organizations documented how foreign trolls bombarded U.S. social media with fake news. A December 2016 Pew Research Center study found that about two-in-three U.S. adults (64%) say fabricated news stories cause a great deal of confusion about the basic facts of current issues and events.
A May 2016 Pew Research Center report showed that 62% of Americans get their news from social media. Farhad Manjoo of The New York Times argued that the “internet is loosening our grip on the truth.” And his colleague Thomas B. Edsall curated a lengthy list of scholarly articles after the election that painted a picture of how the internet was jeopardizing democracy.
2016 was the first year that an internet meme made its way into the Anti-Defamation League’s database of hate symbols.
Time magazine devoted a 2016 cover story to explaining “why we’re losing the internet to the culture of hate.”
Celebrity social media mobbing intensified. One example: “Ghostbusters” actor and Saturday Night Live cast member Leslie Jones was publicly harassed on Twitter and had her personal website hacked.
An industry report revealed how former Facebook workers suppressed conservative news content.
Multiple news stories indicated that state actors and governments increased their efforts to monitor users of instant messaging and social media.
The Center on the Future of War started the Weaponized Narrative Initiative.
Many experts documented the ways in which “fake news” and online harassment might be more than social media “byproducts” because they help to drive revenue.
#Pizzagate, a case study, revealed how disparate sets of rumors can combine to shape public discourse and, at times, potentially lead to dangerous behavior.
Scientific American carried a nine-author analysis of the influencing of discourse by artificial intelligence (AI) tools, noting, “We are being remotely controlled ever more successfully in this manner. … The trend goes from programming computers to programming people … a sort of digital scepter that allows one to govern the masses efficiently without having to involve citizens in democratic processes.”
Google (with its Perspective API), Twitter and Facebook are experimenting with new ways to filter out or label negative or misleading discourse.
Researchers are exploring why people troll.
And a drumbeat of stories out of Europe covered how governments are attempting to curb fake news and hate speech but struggling to reconcile their concerns with sweeping free speech rules that apply in America.

To illuminate current attitudes about the potential impacts of online social interaction over the next decade, Pew Research Center and Elon University’s Imagining the Internet Center conducted a large-scale canvassing of technology experts, scholars, corporate practitioners and government leaders. Some 1,537 responded to this effort between July 1 and Aug. 12, 2016 (prior to the late-2016 revelations about potential manipulation of public opinion via hacking of social media). They were asked:

In the next decade, will public discourse online become more or less shaped by bad actors, harassment, trolls, and an overall tone of griping, distrust, and disgust?

In response to this question, 42% of respondents indicated that they expect “no major change” in online social climate in the coming decade and 39% said they expect the online future will be “more shaped” by negative activities. Those who said they expect the internet to be “less shaped” by harassment, trolling and distrust were in the minority. Some 19% said this. Respondents were asked to elaborate on how they anticipate online interaction progressing over the next decade. (See “About this canvassing of experts” for further details about the limits of this sample.)

Participants were also asked to explain their answers in a written elaboration and asked to consider the following prompts: 1) How do you expect social media and digital commentary will evolve in the coming decade? 2) Do you think we will see a widespread demand for technological systems or solutions that encourage more inclusive online interactions? 3) What do you think will happen to free speech? And 4) What might be the consequences for anonymity and privacy?

While respondents expressed a range of opinions from deep concern to disappointment to resignation to optimism, most agreed that people – at their best and their worst – are empowered by networked communication technologies. Some said the flame wars and strategic manipulation of the zeitgeist might just be getting started if technological and human solutions are not put in place to bolster diverse civil discourse.

A number of respondents predicted online reputation systems and much better security and moderation solutions will become near ubiquitous in the future, making it increasingly difficult for “bad actors” to act out disruptively. Some expressed concerns that such systems – especially those that remove the ability to participate anonymously online – will result in an altered power dynamic between government/state-level actors, the elites and “regular” citizens.

Anonymity, a key affordance of the early internet, is an element that many in this canvassing said enables bad behavior and facilitates “uncivil discourse” in shared online spaces. The purging of user anonymity is seen as possibly leading to a more inclusive online environment, but also as setting the stage for governments and dominant institutions to even more freely employ surveillance tools to monitor citizens, suppress free speech and shape social debate.

Most experts predicted that the builders of open social spaces on global communications networks will find it difficult to support positive change in “cleaning up” the real-time exchange of information and sharing of diverse ideologies over the next decade. Millions more people around the world will become connected for the first time, and among the billions already online are many who compete in an arms race of sorts to hack and subvert corrective systems.

Those who believe the problems of trolling and other toxic behaviors can be solved say the cure might also be quite damaging. “One of the biggest challenges will be finding an appropriate balance between protecting anonymity and enforcing consequences for the abusive behavior that has been allowed to characterize online discussions for far too long,” explained expert respondent Bailey Poland, author of “Haters: Harassment, Abuse, and Violence Online.”

The majority in this canvassing were sympathetic to those abused or misled in the current online environment while expressing concerns that the most likely solutions will allow governments and big businesses to employ surveillance systems that monitor citizens, suppress free speech and shape discourse via algorithms, allowing those who write the algorithms to sculpt civil debate.

Susan Etlinger, an industry analyst at Altimeter Group, walked through a future scenario of tit-for-tat, action-reaction that ends in what she calls a “Potemkin internet.” She wrote: “In the next several years we will see an increase in the type and volume of bad behavior online, mostly because there will be a corresponding increase in digital activity. … Cyberattacks, doxing, and trolling will continue, while social platforms, security experts, ethicists, and others will wrangle over the best ways to balance security and privacy, freedom of speech, and user protections. A great deal of this will happen in public view. The more worrisome possibility is that privacy and safety advocates, in an effort to create a more safe and equal internet, will push bad actors into more-hidden channels such as Tor. Of course, this is already happening, just out of sight of most of us. The worst outcome is that we end up with a kind of Potemkin internet in which everything looks reasonably bright and sunny, which hides a more troubling and less transparent reality.”

One other point of context for this non-representative sample of a particular population: While the question we posed was not necessarily aimed at getting people’s views about the role of political material in online social spaces, it inevitably drew commentary along those lines because this survey was fielded in the midst of a bitter, intense election in the United States where one of the candidates, in particular, was a provocative user of Twitter.

Most participants in this canvassing wrote detailed elaborations explaining their positions. Their well-considered comments provide insights about hopeful and concerning trends. They were allowed to respond anonymously, and many chose to do so.

These findings do not represent all points of view possible, but they do reveal a wide range of striking observations. Respondents collectively articulated four “key themes” that are introduced and briefly explained below and then expanded upon in more-detailed sections.

The following section presents a brief overview of the most evident themes extracted from the written responses, including a small selection of representative quotes supporting each point. Some responses are lightly edited for style or due to length.
Theme 1: Things will stay bad because to troll is human; anonymity abets anti-social behavior; inequities drive at least some of the inflammatory dialogue; and the growing scale and complexity of internet discourse make this difficult to defeat

While some respondents saw uncivil behavior online as having reached somewhat of a plateau at the time of this canvassing in the summer of 2016, and a few expect solutions will cut hate speech, misinformation and manipulation, the vast majority shared at least some concerns that things could get worse. Thus, two of the four overarching themes of this report begin with the phrase “Things will stay bad.”


The individual’s voice has a much higher perceived value than it has in the past. As a result, there are more people who will complain online in an attempt to get attention, sympathy, or retribution.
Anonymous software engineer

A number of expert respondents observed that negative online discourse is just the latest example of the many ways humans have exercised social vitriol for millennia. Jerry Michalski, founder at REX, wrote, “I would very much love to believe that discourse will improve over the next decade, but I fear the forces making it worse haven’t played out at all yet. After all, it took us almost 70 years to mandate seatbelts. And we’re not uniformly wise about how to conduct dependable online conversations, never mind debates on difficult subjects. In that long arc of history that bends toward justice, particularly given our accelerated times, I do think we figure this out. But not within the decade.”

Vint Cerf, Internet Hall of Fame member, Google vice president and co-inventor of the Internet Protocol, summarized some of the harmful effects of disruptive discourse:

“The internet is threatened with fragmentation,” he wrote. “… People feel free to make unsupported claims, assertions, and accusations in online media. … As things now stand, people are attracted to forums that align with their thinking, leading to an echo effect. This self-reinforcement has some of the elements of mob (flash-crowd) behavior. Bad behavior is somehow condoned because ‘everyone’ is doing it. … It is hard to see where this phenomenon may be heading. … Social media bring every bad event to our attention, making us feel as if they all happened in our back yards – leading to an overall sense of unease. The combination of bias-reinforcing enclaves and global access to bad actions seems like a toxic mix. It is not clear whether there is a way to counter-balance their socially harmful effects.”
Subtheme: Trolls have been with us since the dawn of time; there will always be some incivility

An anonymous respondent commented, “The tone of discourse online is dictated by fundamental human psychology and will not easily be changed.” This statement reflects the attitude of expert internet technologists, researchers and pundits, most of whom agree that it is the people using the network, not the network itself, who are the root of the problem.

Paul Jones, clinical professor and director of ibiblio.org at the University of North Carolina, Chapel Hill, commented, “The id unbound from the monitoring and control by the superego is both the originator of communication and the nemesis of understanding and civility.”

John Cato, a senior software engineer, wrote, “Trolling for arguments has been an internet tradition since Usenet. Some services may be able to mitigate the problem slightly by forcing people to use their real identities, but wherever you have anonymity you will have people who are there just to make other people angry.”

And an anonymous software engineer explained why the usual level of human incivility has been magnified by the internet, noting, “The individual’s voice has a much higher perceived value than it has in the past. As a result, there are more people who will complain online in an attempt to get attention, sympathy, or retribution.”
Subtheme: Trolling and other destructive behaviors often result because people do not recognize or don’t care about the consequences flowing from their online actions

Michael Kleeman, formerly with the Boston Consulting Group, Arthur D. Little and Sprint, now senior fellow at the Institute on Global Conflict and Cooperation at the University of California, San Diego, explained: “Historically, communities of practice and conversation had other, often physical, linkages that created norms of behavior. And actors would normally be identified, not anonymous. Increased anonymity coupled with an increase in less-than-informed input, with no responsibility by the actors, has tended and will continue to create less open and honest conversations and more one-sided and negative activities.”


Trolls now know that their methods are effective and carry only minimal chance of social stigma and essentially no other punishment.
Anonymous respondent

An expert respondent who chose not to be identified commented, “People are snarky and awful online in large part because they can be anonymous.” And another such respondent wrote, “Trolls now know that their methods are effective and carry only minimal chance of social stigma and essentially no other punishment. If Gamergate can harass and dox any woman with an opinion and experience no punishment as a result, how can things get better?”

Anonymously, a professor at Massachusetts Institute of Technology (MIT) commented, “We see a dark current of people who equate free speech with the right to say anything, even hate speech, even speech that does not sync with respected research findings. They find in unmediated technology a place where their opinions can have a multiplier effect, where they become the elites.”
Subtheme: Inequities drive at least some of the inflammatory dialogue

Some leading participants in this canvassing said the tone of discourse will worsen in the next decade due to inequities and prejudice, noting wealth disparity, the hollowing out of the middle class, and homophily (the tendency of people to bond with those similar to themselves and thus also at times to shun those seen as “the other”).


Unfortunately, I see the present prevalence of trolling as an expression of a broader societal trend across many developed nations, towards belligerent factionalism in public debate, with particular attacks directed at women as well as ethnic, religious, and sexual minorities.
Axel Bruns

Cory Doctorow, writer, computer science activist-in-residence at MIT Media Lab and co-owner of Boing Boing, offered a bleak assessment, writing, “Thomas Piketty, etc., have correctly predicted that we are in an era of greater social instability created by greater wealth disparity which can only be solved through either the wealthy collectively opting for a redistributive solution (which feels unlikely) or everyone else compelling redistribution (which feels messy, unstable, and potentially violent). The internet is the natural battleground for whatever breaking point we reach to play out, and it’s also a useful surveillance, control, and propaganda tool for monied people hoping to forestall a redistributive future. The Chinese internet playbook – the 50c army, masses of astroturfers, libel campaigns against ‘enemies of the state,’ paranoid war-on-terror rhetoric – has become the playbook of all states, to some extent (see, e.g., the HBGary leak that revealed the U.S. Air Force was putting out procurement tenders for ‘persona management’ software that allowed their operatives to control up to 20 distinct online identities, each). That will create even more inflammatory dialogue, flamewars, polarized debates, etc.”

And an anonymous professor at MIT remarked, “Traditional elites have lost their credibility because they have become associated with income inequality and social injustice. … This dynamic has to shift before online life can play a livelier part in the life of the polity. I believe that it will, but slowly.”

Axel Bruns, a professor at the Queensland University of Technology’s Digital Media Research Centre, said, “Unfortunately, I see the present prevalence of trolling as an expression of a broader societal trend across many developed nations, towards belligerent factionalism in public debate, with particular attacks directed at women as well as ethnic, religious, and sexual minorities.”
Subtheme: The ever-expanding scale of internet discourse and its accelerating complexity make it difficult to deal with problematic content and contributors

As billions more people are connected online and technologies such as AI chatbots, the Internet of Things, and virtual and augmented reality continue to mature, complexity is always on the rise. Some respondents said well-intentioned attempts to raise the level of discourse are less likely to succeed in a rapidly changing and widening information environment.


As more people get internet access – and especially smartphones, which allow people to connect 24/7 – there will be increased opportunities for bad behavior.
Jessica Vitak

Matt Hamblen, senior editor at Computerworld, commented, “[By 2026] social media and other forms of discourse will include all kinds of actors who had no voice in the past; these include terrorists, critics of all kinds of products and art forms, amateur political pundits, and more.”

An anonymous respondent wrote, “Bad actors will have means to do more, and more significant bad actors will be automated as bots are funded in extra-statial ways to do more damage – because people are profiting from this.”

Jessica Vitak, an assistant professor at the University of Maryland, commented, “Social media’s affordances, including increased visibility and persistence of content, amplify the volume of negative commentary. As more people get internet access – and especially smartphones, which allow people to connect 24/7 – there will be increased opportunities for bad behavior.”

Bryan Alexander, president of Bryan Alexander Consulting, added, “The number of venues will rise with the expansion of the Internet of Things and when consumer-production tools become available for virtual and mixed reality.”
Theme 2: Things will stay bad because tangible and intangible economic and political incentives support trolling. Participation = power and profits

Many respondents said power dynamics push trolling along. The business model of social media platforms is driven by advertising revenues generated by engaged platform users. The more raucous and incendiary the material, at times, the more income a site generates. The more contentious a political conflict is, the more likely it is to be an attention getter. Online forums lend themselves to ever-more hostile arguments.
Subtheme: ‘Hate, anxiety, and anger drive participation,’ which equals profits and power, so online social platforms and mainstream media support and even promote uncivil acts

Frank Pasquale, professor of law at the University of Maryland and author of “Black Box Society,” commented, “The major internet platforms are driven by a profit motive. Very often, hate, anxiety and anger drive participation with the platform. Whatever behavior increases ad revenue will not only be permitted, but encouraged, excepting of course some egregious cases.”


It’s a brawl, a forum for rage and outrage. … The more we come back, the more money they make off of ads and data about us. So the shouting match goes on.
Andrew Nachison

Kate Crawford, a well-known internet researcher studying how people engage with networked technologies, observed, “Distrust and trolling is happening at the highest levels of political debate, and the lowest. The Overton Window has been widened considerably by the 2016 U.S. presidential campaign, and not in a good way. We have heard presidential candidates speak of banning Muslims from entering the country, asking foreign powers to hack former White House officials, retweeting neo-Nazis. Trolling is a mainstream form of political discourse.”

Andrew Nachison, founder at We Media, said, “It’s a brawl, a forum for rage and outrage. It’s also dominated by social media platforms on the one hand and content producers on the other that collude and optimize for quantity over quality. Facebook adjusts its algorithm to provide a kind of quality – relevance for individuals. But that’s really a ruse to optimize for quantity. The more we come back, the more money they make off of ads and data about us. So the shouting match goes on. I don’t know that prevalence of harassment and ‘bad actors’ will change – it’s already bad – but if the overall tone is lousy, if the culture tilts negative, if political leaders popularize hate, then there’s good reason to think all of that will dominate the digital debate as well.”
Subtheme: Technology companies have little incentive to rein in uncivil discourse, and traditional news organizations – which used to shape discussions – have shrunk in importance

Several of the expert respondents said that because algorithmic solutions tend “to reward that which keeps us agitated,” it is especially damaging that the pre-internet news organizations have fallen out of favor. Those organizations once employed fairly objective, well-trained (if not well-paid) armies of arbiters who served as democratic shapers of the defining climate of social and political discourse; they have been replaced by creators of clickbait headlines read and shared by short-attention-span social sharers.


It is in the interest of the paid-for media and most political groups to continue to encourage ‘echo-chamber’ thinking and to consider pragmatism and compromise as things to be discouraged.
David Durant

David Clark, a senior research scientist at MIT and Internet Hall of Famer, commented that he worries over the loss of character in the internet community. “It is possible, with attention to the details of design that lead to good social behavior, to produce applications that better regulate negative behavior,” he wrote. “However, it is not clear what actor has the motivation to design and introduce such tools. The application space on the internet today is shaped by large commercial actors, and their goals are profit-seeking, not the creation of a better commons. I do not see tools for public discourse being good ‘money makers,’ so we are coming to a fork in the road – either a new class of actor emerges with a different set of motivations, one that is prepared to build and sustain a new generation of tools, or I fear the overall character of discourse will decline.”

An anonymous principal security consultant wrote, “As long as success – and in the current climate, profit as a common proxy for success – is determined by metrics that can be easily improved by throwing users under the bus, places that run public areas online will continue to do just that.”

Steven Waldman, founder and CEO of LifePosts, said, “It certainly sounds noble to say the internet has democratized public opinion. But it’s now clear: It has given voice to those who had been voiceless because they were oppressed minorities and to those who were voiceless because they are crackpots. … It may not necessarily be ‘bad actors’ – i.e., racists, misogynists, etc. – who win the day, but I do fear it will be the more strident. I suspect there will be ventures geared toward counter-programming against this, since many people are uncomfortable with it. But venture-backed tech companies have a huge bias toward algorithmic solutions that have tended to reward that which keeps us agitated. Very few media companies now have staff dedicated to guiding conversations online.”

John Anderson, director of journalism and media studies at Brooklyn College, wrote, “The continuing diminution of what Cass Sunstein once called ‘general-interest intermediaries’ such as newspapers, network television, etc. means we have reached a point in our society where wildly different versions of ‘reality’ can be chosen and customized by people to fit their existing ideological and other biases. In such an environment there is little hope for collaborative dialogue and consensus.”

David Durant, a business analyst at U.K. Government Digital Service, argued, “It is in the interest of the paid-for media and most political groups to continue to encourage ‘echo-chamber’ thinking and to consider pragmatism and compromise as things to be discouraged. While this trend continues, the ability for serious civilized conversations about many topics will remain very hard to achieve.”
Subtheme: Terrorists and other political actors are benefiting from the weaponization of online narratives by implementing human- and bot-based misinformation and persuasion tactics

The weaponization of social media and “capture” of online belief systems, also known as “narratives,” emerged from obscurity in 2016 due to the perceived impact of social media uses by terror organizations and political factions. Accusations of Russian influence via social media on the U.S. presidential election brought to public view the ways in which strategists of all stripes are endeavoring to influence people through the sharing of often false or misleading stories, photos and videos. “Fake news” moved to the forefront of ongoing discussions about the displacement of traditional media by social platforms. Earlier, in the summer of 2016, participants in this canvassing submitted concerns about misinformation in online discourse creating distorted views.


There’s money, power, and geopolitical stability at stake now, it’s not a mere matter of personal grumpiness from trolls.
Anonymous respondent

Anonymously, a futurist and author at Wired explained, “New levels of ‘cyberspace sovereignty’ and heavy-duty state and non-state actors are involved; there’s money, power, and geopolitical stability at stake now, it’s not a mere matter of personal grumpiness from trolls.”

Karen Blackmore, a lecturer in IT at the University of Newcastle, wrote, “Misinformation and anti-social networking are degrading our ability to debate and engage in online discourse. When opinions based on misinformation are given the same weight as those of experts and propelled to create online activity, we tread a dangerous path. Online social behaviour, without community-imposed guidelines, is subject to many potentially negative forces. In particular, social online communities such as Facebook also function as marketing tools, where sensationalism is widely employed and community members who view this dialogue as their news source gain a very distorted view of current events and community views on issues. This is exacerbated with social network and search engine algorithms effectively sorting what people see to reinforce worldviews.”

Laurent Schüpbach, a neuropsychologist at University Hospital in Zurich, focused his entire response about negative tone online on burgeoning acts of economic and political manipulation, writing, “The reason it will probably get worse is that companies and governments are starting to realise that they can influence people’s opinions that way. And these entities sure know how to circumvent any protection in place. Russian troll armies are a good example of something that will become more and more common in the future.”

David Wuertele, a software engineer at Tesla Motors, commented, “Unfortunately, most people are easily manipulated by fear. … Negative activities on the internet will exploit those fears, and disproportionate responses will also attempt to exploit those fears. Soon, everyone will have to take off their shoes and endure a cavity search before boarding the internet.”
Theme 3: Things will get better because technical and human solutions will arise as the online world splinters into segmented, controlled social zones with the help of artificial intelligence (AI)

Most respondents said it is likely that the coming decade will see a widespread move to more-secure services, applications, and platforms and more robust user-identification policies. Some said people born into the social media age will adapt. Some predict that more online systems will require clear identification of participants. This means that online social forums could splinter into various formats, some of which are highly protected and monitored, while others retain the free-for-all character of today’s platforms.
Subtheme: AI sentiment analysis and other tools will detect inappropriate behavior and many trolls will be caught in the filter; human oversight by moderators might catch others

Some experts in this canvassing say progress is already being made on some fronts toward better technological and human solutions.


The future Web will give people much better ways to control the information that they receive, which will ultimately make problems like trolling manageable.
David Karger

Galen Hunt, a research manager at Microsoft Research NExT, replied, “As language-processing technology develops, technology will help us identify and remove bad actors, harassment, and trolls from accredited public discourse.”

Stowe Boyd, chief researcher at Gigaom, observed, “I anticipate that AIs will be developed that will rapidly decrease the impact of trolls. Free speech will remain possible, although AI filtering will make a major dent on how views are expressed, and hate speech will be blocked.”

Marina Gorbis, executive director at the Institute for the Future, added, “I expect we will develop more social bots and algorithmic filters that would weed out some of the trolls and hateful speech. I expect we will create bots that would promote beneficial connections and potentially insert context-specific data/facts/stories that would benefit more positive discourse. Of course, any filters and algorithms will create issues around what is being filtered out and what values are embedded in algorithms.”

Jean Russell of Thrivable Futures wrote, “First, conversations can have better containers that filter for real people who consistently act with decency. Second, software is getting better and more nuanced in sentiment analysis, making it easier for software to augment our filtering out of trolls. Third, we are at peak identity crisis and a new wave of people want to cross the gap in dialogue to connect with others before the consequences of being tribal get worse (Brexit, Trump, etc.).”

David Karger, a professor of computer science at MIT, said, “My own research group is exploring several novel directions in digital commentary. In the not too distant future all this work will yield results. Trolling, doxxing, echo chambers, click-bait, and other problems can be solved. We will be able to ascribe sources and track provenance in order to increase the accuracy and trustworthiness of information online. We will create tools that increase people’s awareness of opinions differing from their own and support conversations with and learning from people who hold those opinions. … The future Web will give people much better ways to control the information that they receive, which will ultimately make problems like trolling manageable (trolls will be able to say what they want, but few will be listening).”
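The human-plus-AI moderation pipeline these respondents describe can be sketched in a few lines. The lexicon, threshold and function names below are hypothetical placeholders: real systems use trained language models rather than keyword lists, but the basic flag-then-review flow is the same.

```python
# Illustrative only: a toy "toxicity" filter of the kind the experts
# describe. A hand-built keyword lexicon stands in for a trained model;
# comments scoring above a threshold are queued for a human moderator.

TOXIC_TERMS = {"idiot": 0.6, "moron": 0.6, "hate": 0.4, "stupid": 0.5}
FLAG_THRESHOLD = 0.5  # scores at or above this go to human review

def toxicity_score(comment: str) -> float:
    """Return a rough 0-1 score from weighted keyword hits."""
    words = [w.strip(".,!?") for w in comment.lower().split()]
    hits = [TOXIC_TERMS[w] for w in words if w in TOXIC_TERMS]
    # Cap at 1.0 so several mild terms don't exceed the scale.
    return min(1.0, sum(hits))

def route(comment: str) -> str:
    """Auto-publish low-risk comments; queue the rest for human review."""
    return "review" if toxicity_score(comment) >= FLAG_THRESHOLD else "publish"
```

The design choice the respondents emphasize is visible even in this sketch: the algorithm only triages, while the contested judgment calls remain with human moderators, and whatever values are encoded in the lexicon (or training data) determine what gets filtered.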
Subtheme: There will be partitioning, exclusion and division of online outlets, social platforms and open spaces


Technology will mediate who and what we see online more and more, so that we are drawn more toward communities with similar interests than those who are dissimilar.
Lindsay Kenzig

Facebook, Twitter, Instagram, Google and other platform providers already “shape” and thus limit what the public views via the implementation of algorithms. As people become disenchanted with the uncivil discourse on “open” platforms, they stop using them or close their accounts, sometimes moving to smaller online communities of people with similar needs or ideologies. Some experts expect these trends to continue, and even more partitions, divisions and exclusions may emerge as measures are taken to clean things up. For instance, they expect the capabilities of AI-based bots dispatched to assist with information sorting, security, and regulation of the tone and content of discourse will continue to be refined.

Lindsay Kenzig, a senior design researcher, said, “Technology will mediate who and what we see online more and more, so that we are drawn more toward communities with similar interests than those who are dissimilar. There will still be some places where you can find those with whom to argue, but they will be more concentrated into only a few locations than they are now.”

Valerie Bock, of VCB Consulting, commented, “Spaces where people must post under their real names and where they interact with people with whom they have multiple bonds regularly have a higher quality of discourse. … In response to this reality, we’ll see some consolidation as it becomes easier to shape commercial interactive spaces to the desired audience. There will be free-for-all spaces and more-tightly-moderated walled gardens, depending on the sponsor’s strategic goals. There will also be private spaces maintained by individuals and groups for specific purposes.”

Lisa Heinz, a doctoral student at Ohio University, commented, “Humanity’s reaction to negative forces will likely contribute more to the ever-narrowing filter bubble, which will continue to create an online environment that lacks inclusivity by its exclusion of opposing viewpoints. An increased demand for systemic internet-based AI will create bots that will begin to interact – as proxies for the humans that train them – with humans online in real-time and with what would be recognized as conversational language, not the word-parroting bot behavior we see on Twitter now. … When this happens, we will see bots become part of the filter bubble phenomenon as a sort of mental bodyguard that prevents an intrusion of people and conversations to which individuals want no part. The unfortunate aspect of this iteration of the filter bubble means that while free speech itself will not be affected, people will project their voices into the chasm, but few will hear them.”

Bob Frankston, internet pioneer and software innovator, wrote, “I see negative activities having an effect but the effect will likely be from communities that shield themselves from the larger world. We’re still working out how to form and scale communities.”

The expert comments in response to this canvassing were recorded in the summer of 2016. By early 2017, after many events (Brexit, the U.S. election, others mentioned earlier in this report) surfaced concerns about civil discourse, misinformation and impacts on democracy, activity tied to solutions accelerated. Facebook, Twitter and Google announced new technological efforts; many conversations began about new ways to support public affairs journalism; and consumer bubble-busting tools, including “Outside Your Bubble” and “Escape Your Bubble,” were introduced.
Subtheme: Trolls and other actors will fight back, innovating around any barriers they face

Some participants in this canvassing said they expect the existing arms-race dynamic to expand, as some people create and apply new measures to ride herd on online discourse while others constantly endeavor to thwart them.

Cathy Davidson, founding director of the Futures Initiative at the Graduate Center of the City University of New York, said, “We’re in a spy vs. spy internet world where the faster that hackers and trolls attack, the faster companies (Mozilla, thank you!) plus for-profits come up with ways to protect against them and then the hackers develop new strategies against those protections, and so it goes. I don’t see that ending. … I would not be surprised at more publicity in the future, as a form of cyber-terror. That’s different from trolls, more geo-politically orchestrated to force a national or multinational response. That is terrifying if we do not have sound, smart, calm leadership.”

Sam Anderson, coordinator of instructional design at the University of Massachusetts, Amherst, said, “It will be an arms race between companies and communities that begin to realize (as some online games companies like Riot have) that toxic online communities will lower their long-term viability and potential for growth. This will war with incentives for short-term gains that can arise out of bursts of angry or sectarian activity (Twitter’s character limit inhibits nuance, which increases reaction and response).”
Theme 4: Oversight and community moderation come with a cost. Some solutions could further change the nature of the internet because surveillance will rise; the state may regulate debate; and these changes will polarize people and limit access to information and free speech

A share of respondents said greater regulation of speech and technological solutions to curb harassment and trolling will result in more surveillance, censorship and cloistered communities. They worry this will change people’s sharing behaviors online, limit exposure to diverse ideas and challenge freedom.
Subtheme: Surveillance will become even more prevalent

While several respondents indicated that there is no longer a chance of anonymity online, many say privacy and choice are still options, and they should be protected.


Terrorism and harassment by trolls will be presented as the excuses, but the effect will be dangerous for democracy.
Richard Stallman

Longtime internet civil libertarian Richard Stallman, Internet Hall of Fame member and president of the Free Software Foundation, spoke to this fear. He predicted, “Surveillance and censorship will become more systematic, even in supposedly free countries such as the U.S. Terrorism and harassment by trolls will be presented as the excuses, but the effect will be dangerous for democracy.”

Rebecca MacKinnon, director of Ranking Digital Rights at New America, wrote, “I’m very concerned about the future of free speech given current trends. The demands for governments and companies to censor and monitor internet users are coming from an increasingly diverse set of actors with very legitimate concerns about safety and security, as well as concerns about whether civil discourse is becoming so poisoned as to make rational governance based on actual facts impossible. I’m increasingly inclined to think that the solutions, if they ever come about, will be human/social/political/cultural and not technical.”

James Kalin of Virtually Green wrote, “Surveillance capitalism is increasingly grabbing and mining data on everything that anyone says, does, or buys online. The growing use of machine learning processing of the data will drive ever more subtle and pervasive manipulation of our purchasing, politics, cultural attributes, and general behavior. On top of this, the data is being stolen routinely by bad actors who will also be using machine learning processing to steal or destroy things we value as individuals: our identities, privacy, money, reputations, property, elections, you name it. I see a backlash brewing, with people abandoning public forums and social network sites in favor of intensely private ‘black’ forums and networks.”
Subtheme: Dealing with hostile behavior and addressing violence and hate speech will become the responsibility of the state instead of the platform or service providers

A number of respondents said they expect governments or other authorities will begin implementing regulation or other reforms to address these issues, with most indicating that platform providers’ competitive instincts will not produce appropriate remedies without some outside incentive.


My fear is that because of the virtually unlimited opportunities for negative use of social media globally we will experience a rising worldwide demand for restrictive regulation.
Paula Hooper Mayhew

Michael Rogers, author and futurist at Practical Futurist, predicted governments will assume control over identifying internet users. He observed, “I expect there will be a move toward firm identities – even legal identities issued by nations – for most users of the Web. There will as a result be public discussion forums in which it is impossible to be anonymous. There would still be anonymity available, just as there is in the real world today. But there would be online activities in which anonymity was not permitted. Clearly this could have negative free-speech impacts in totalitarian countries but, again, there would still be alternatives for anonymity.”

Paula Hooper Mayhew, a professor of humanities at Fairleigh Dickinson University, commented, “My fear is that because of the virtually unlimited opportunities for negative use of social media globally we will experience a rising worldwide demand for restrictive regulation. This response may work against support of free speech in the U.S.”

Marc Rotenberg, executive director of the Electronic Privacy Information Center (EPIC), wrote, “The regulation of online communications is a natural response to the identification of real problems, the maturing of the industry, and the increasing expertise of government regulators.”
Subtheme: Polarization will occur due to the compartmentalization of ideologies

John Markoff, senior writer at The New York Times, commented, “There is growing evidence that the Net is a polarizing force in the world. I don’t claim to completely understand the dynamic, but my surmise is that it is actually building more walls than it is tearing down.”

Marcus Foth, a professor at Queensland University of Technology, said, “Public discourse online will become less shaped by bad actors … because the majority of interactions will take place inside walled gardens. … Social media platforms hosted by corporations such as Facebook and Twitter use algorithms to filter, select, and curate content. With less anonymity and less diversity, the two biggest problems of the Web 1.0 era have been solved from a commercial perspective: fewer trolls who can hide behind anonymity. Yet, what are we losing in the process? Algorithmic culture creates filter bubbles, which risk an opinion polarisation inside echo chambers.”

Emily Shaw, a U.S. civic technologies researcher for mySociety, predicted, “Since social networks … are the most likely future direction for public discourse, a million (self)-walled gardens are more likely to be the outcome than is an increase in hostility, because that’s what’s more commercially profitable.”
Subtheme: Increased monitoring, regulation and enforcement will shape content to such an extent that the public will not gain access to important information and possibly lose free speech

Experts predict that increased oversight and surveillance, left unchecked, could lead dominant institutions and actors to use their power to suppress alternative news sources, censor ideas, track individuals and selectively block network access. This, in turn, could mean the public might never know what it is missing, since information will be filtered, removed or concealed.


The fairness and freedom of the internet’s early days are gone. Now it’s run by big data, Big Brother, and big profits.
Thorlaug Agustsdottir

Thorlaug Agustsdottir of Iceland’s Pirate Party, said, “Monitoring is and will be a massive problem, with increased government control and abuse. The fairness and freedom of the internet’s early days are gone. Now it’s run by big data, Big Brother, and big profits. Anonymity is a myth, it only exists for end-users who lack lookup resources.”

Joe McNamee, executive director at European Digital Rights, said, “In the context of a political environment where deregulation has reached the status of ideology, it is easy for governments to demand that social media companies do ‘more’ to regulate everything that happens online. We see this with the European Union’s ‘code of conduct’ with social media companies. This privatisation of regulation of free speech (in a context of huge, disproportionate, asymmetrical power due to the data stored and the financial reserves of such companies) raises existential questions for the functioning of healthy democracies.”

Randy Bush, Internet Hall of Fame member and research fellow at Internet Initiative Japan, wrote, “Between troll attacks, chilling effects of government surveillance and censorship, etc., the internet is becoming narrower every day.”

Dan York, senior content strategist at the Internet Society, wrote, “Unfortunately, we are in for a period where the negative activities may outshine the positive activities until new social norms can develop that push back against the negativity. It is far too easy right now for anyone to launch a large-scale public negative attack on someone through social media and other channels – and often to do so anonymously (or hiding behind bogus names). This then can be picked up by others and spread. The ‘mob mentality’ can be easily fed, and there is little fact-checking or source-checking these days before people spread information and links through social media. I think this will cause some governments to want to step in to protect citizens and thereby potentially endanger both free speech and privacy.”
Responses from other key experts regarding the future of online social climate

This section features responses from several more of the many top analysts who participated in this canvassing. Following this wide-ranging set of comments will be a much more expansive set of quotations directly tied to the set of four themes.
‘We’ll see more bad before good because the governing culture is weak and will remain so’

Baratunde Thurston, a director’s fellow at MIT Media Lab, Fast Company columnist, and former digital director of The Onion, replied, “To quote everyone ever, things will get worse before they get better. We’ve built a system in which access and connectivity are easy, the cost of publishing is near zero, and accountability and consequences for bad action are difficult to impose or toothless when they do. Plus consider that more people are getting online everyday with no norm-setting for their behavior and the systems that prevail now reward attention grabbing and extended time online. They reward emotional investment whether positive or negative. They reward conflict. So we’ll see more bad before more good because the governing culture is weak and will remain so while the financial models backing these platforms remain largely ad-based and rapid/scaled user growth-centric.”
‘We should reach ‘peak troll’ before long but there are concerns for free speech’

Brad Templeton, one of the early luminaries of Usenet and longtime Electronic Frontier Foundation board member, currently chair for computing at Singularity University, commented, “Now that everybody knows about this problem I expect active technological efforts to reduce the efforts of the trolls, and we should reach ‘peak troll’ before long. There are concerns for free speech. My hope is that pseudonymous reputation systems might protect privacy while doing this.”
‘People will find it tougher to avoid accountability’

Esther Dyson, founder of EDventure Holdings and technology entrepreneur, writer, and influencer, wrote: “Things will get somewhat better because people will find it tougher to avoid accountability. Reputations will follow you more than they do now. … There will also be clever services like CivilComments.com (disclosure: I’m an investor) that foster crowdsourced moderation rather than censorship of comments. That approach, whether by CivilComments or future competitors, will help. (So would sender-pays, recipient-charges email, a business I would *like* to invest in!) Nonetheless, anonymity is an important right – and freedom of speech with impunity (except for actual harm, yada yada) – is similarly important. Anonymity should be discouraged in general, but it is necessary in regimes or cultures or simply situations where the truth is avoided and truth-speakers are punished.”
Chatbots can help, but we need to make sure they don’t encode hate

Amy Webb, futurist and CEO at the Future Today Institute, said, “Right now, many technology-focused companies are working on ‘conversational computing,’ and the goal is to create a seamless interface between humans and machines. If you have [a] young child, she can be expected to talk to – rather than type on – machines for the rest of her life. In the coming decade, you will have more and more conversations with operating systems, and especially with chatbots, which are programmed to listen to, learn from and react to us. You will encounter bots first throughout social media, and during the next decade, they will become pervasive digital assistants helping you on many of the systems you use. Currently, there is no case law governing the free speech of a chatbot. During the 2016 election cycle, there were numerous examples of bots being used for political purposes. For example, there were thousands of bots created to mimic Latino/Latina voters supporting Donald Trump. If someone tweeted a disparaging remark about Trump and Latinos, bots that looked and sounded like members of the Latino community would target that person with tweets supporting Trump. Right now, many of the chatbots we interact with on social media and various websites aren’t so smart. But with improvements in artificial intelligence and machine learning, that will change. Without a dramatic change in how training databases are built and how our bots are programmed, we will realize a decade from now that we inadvertently encoded structural racism, homophobia, sexism and xenophobia into the bots helping to power our everyday lives. When chatbots start running amok – targeting individuals with hate speech – how will we define ‘speech’? At the moment, our legal system isn’t planning for a future in which we must consider the free speech infringements of bots.”
A trend toward decentralization and distributed problem solving will improve things

Doc Searls, journalist, speaker, and director of Project VRM at Harvard University’s Berkman Center for Internet and Society, wrote: “Harassment, trolling … these things thrive with distance, which favors the reptile brains in us all, making bad acting more possible and common. … Let’s face it, objectifying, vilifying, fearing, and fighting The Other has always been a problem for our species. … The internet we share today was only born on 30 April 1995, when the last backbone that forbade commercial activity stood down. Since then we have barely begun to understand, much less civilize, this new place without space. … I believe we are at the far end of this swing toward centralization on the Net. As individuals and distributed solutions to problems (e.g., blockchain [a digital ledger in which transactions are recorded chronologically and publicly]) gain more power and usage, we will see many more distributed solutions to fundamental social and business issues, such as how we treat each other.”
There are designs and tech advances ‘that would help tremendously’

Judith Donath of Harvard University’s Berkman Center, author of “The Social Machine: Designs for Living Online,” wrote, “With the current practices and interfaces, yes, trolls and bots will dominate online public discourse. But that need not be the case: there are designs and technological advances that would help tremendously. We need systems that support pseudonymity: locally persistent identities. Persistence provides accountability: people are responsible for their words. Locality protects privacy: people can participate in discussions without concern that their government, employer, insurance company, marketers, etc., are listening in (so if they are, they cannot connect the pseudonymous discourse to the actual person). We should have digital portraits that succinctly depict a (possibly pseudonymous) person’s history of interactions and reputation within a community. We need to be able to quickly see who is new, who is well-regarded, what role a person has played in past discussions. A few places do so now (e.g., StackExchange) but their basic charts are far from the goal: intuitive and expressive portrayals. ‘Bad actors’ and trolls (and spammers, harassers, etc.) have no place in most discussions – the tools we need for them are filters; we need to develop better algorithms for detecting destructive actions as defined by the local community. Beyond that, the more socially complex question is how to facilitate constructive discussions among people who disagree. Here, we need to rethink the structure of online discourse. The role of discussion host/moderator is poorly supported by current tech – and many discussions would proceed much better in a model other than the current linear free for all. Our face-to-face interactions have amazing subtlety – we can encourage or dissuade with slight changes in gaze, facial expression, etc. We need to create tools for conversation hosts (think of your role when you post something on your own Facebook page that sparks controversy) that help them to gracefully steer conversations.”
‘Reward systems favor outrage mongering and attention seeking almost exclusively’

Seth Finkelstein, writer and pioneering computer programmer, believes the worst is yet to come: “One of the less-examined aspects of the 2016 U.S. presidential election is that Donald Trump is demonstrating to other politicians how to effectively exploit such an environment. He wasn’t the first to do it, by far. But he’s showing how very high-profile, powerful people can adapt and apply such strategies to social media. Basically, we’re moving out of the ‘early adopter’ phase of online polarization, into making it mainstream. The phrasing of this question conflates two different issues. It uses a framework akin to ‘Will our kingdom be more or less threatened by brigands, theft, monsters, and an overall atmosphere of discontent, strife, and misery?’ The first part leads one to think of malicious motives and thus to attribute the problems of the second part along the lines of outside agitators afflicting peaceful townsfolk. Of course deliberate troublemakers exist. Yet many of the worst excesses come from people who believe in their own minds that they are not bad actors at all, but are fighting a good fight for all which is right and true (indeed, in many cases, both sides of a conflict can believe this, and where you stand depends on where you sit). When reward systems favor outrage mongering and attention seeking almost exclusively, nothing is going to be solved by inveighing against supposed moral degenerates.”
Some bad behavior is ‘pent-up’ speech from those who have been voiceless

Jeff Jarvis, a professor at the City University of New York Graduate School of Journalism, wrote, “I am an optimist with faith in humanity. We will see whether my optimism is misplaced. I believe we are seeing the release of a pressure valve (or perhaps an explosion) of pent-up speech: the ‘masses’ who for so long could not be heard can now speak, revealing their own interests, needs, and frustrations – their own identities distinct from the false media concept of the mass. Yes, it’s starting out ugly. But I hope that we will develop norms around civilized discourse. Oh, yes, there will always be … trolls. What we need is an expectation that it is destructive to civil discourse to encourage them. Yes, it might have seemed fun to watch the show of angry fights. It might seem fun to media to watch institutions like the Republican Party implode. But it soon becomes evident that this is no fun. A desire and demand for civil, intelligent, useful discourse will return; no society or market can live on misinformation and emotion alone. Or that is my hope. How long will this take? It could be years. It could be a generation. It could be, God help us, never.”
Was the idea of ‘reasoned discourse’ ever reasonable?

Mike Roberts, Internet Hall of Fame member and first president and CEO of ICANN, observed, “Most attempts at reasoned discourse on topics interesting to me have been disrupted by trolls in the last decade or so. Many individuals faced with this harassment simply withdraw. … There is a somewhat broader question of whether expectations of ‘reasoned’ discourse were ever realistic. History of this, going back to Plato, is one of self-selection into congenial groups. The internet, among other things, has energized a variety of anti-social behaviors by people who get satisfaction from the attendant publicity. My wife’s reaction is ‘why are you surprised?’ in regard to seeing behavior online that already exists offline.”
Our disembodied online identity compels us to ‘ramp up the emotional content’

Barry Chudakov, founder and principal at Sertain Research and StreamFuzion Corp., wrote,

“In the next decade a number of factors in public discourse online will continue to converge and vigorously affect each other:

1) Nowness is the ultimate arbiter: The value of our discourse (everything we see or hear) will be weighted by how immediate or instantly seen and communicated the information is. Real-time search, geolocation, just-in-time updates, Twitter, etc., are making of now, the present moment, an all-subsuming reality that tends to bypass anything that isn’t hyper-current.

2) Faceless selfism rocks: With photos and video, we can present ourselves dimensionally, but due to the lack of ‘facework’ in the online sim, our faces are absent or frozen in a framed portrait found elsewhere, and so there is no face-to-face, no dynamic interactivity, no responsive reading to our commentary, except in a follow-up comment. Still, we will get better at using public discourse as self-promotion.

3) Anonymity changes us: Identity-shielding leads to a different set of ‘manners’ or mannerisms that stem from our sense (not accurate, of course) that online we are anonymous.

4) Context AWOL: Our present ‘filter failure,’ to borrow Clay Shirky’s phrase, is almost complete lack of context, reality check, or perspective. In the next decade we will start building better contextual frameworks for information.

5) Volume formula: The volume of content, from all quarters – anyone with a keypad, a device – makes it difficult to manage responses, or even to filter for relevance but tends to favor emotional button-pushing in order to be noticed.

6) Ersatz us: Online identities will be more made-up, more fictional, but also more malleable than typical ‘facework’ or other human interactions. We can pretend, for a while, to be an ersatz version of ourselves.

7) Any retort in a (tweet) storm: Again, given the lack of ‘facework’ or immediate facial response that defined human response for millennia, we will ramp up the emotional content of messaging to ensure some kind of response, frequently rewarding the brash and outrageous over the slow and thoughtful.”
We will get better at articulating and enforcing helpful norms

David Weinberger, senior researcher at Harvard University’s Berkman Klein Center for Internet & Society, said, “Conversations are always shaped by norms and what the environment enables. For example, seating 100 dinner guests at one long table will shape the conversations differently than putting them at ten tables of ten, or 25 tables of four. The acoustics of the room will shape the conversations. Assigning seats or not will shape the conversations. Even serving wine instead of beer may shape the conversations. The same considerations are even more important on the Net because its global nature means that we have fewer shared norms, and its digital nature means that we have far more room to play with ways of bringing people together. We’re getting much better at nudging conversations into useful interchanges. I believe we will continue to get better at it.”
Anonymity is on its way out, and that will discourage trolling

Patrick Tucker, author of “The Naked Future” and technology editor at Defense One, said, “Today’s negative online user environment is supported and furthered by two trends that are unlikely to last into the next decade: anonymity in posting and validation from self-identified subgroups. Increasingly, marketers’ need to better identify users and authentication APIs (authentication through Facebook, for example) are challenging online anonymity. The passing of anonymity will also shift the cost-benefit analysis of writing or posting something to appeal to only a self-identified bully group rather than a broad spectrum of people.”
Polarization breeds incivility and that is reflected in the incivility of online discourse

Alice Marwick, a fellow at Data & Society, commented, “Currently, online discourse is becoming more polarized and thus more extreme, mirroring the overall separation of people with differing viewpoints in the larger U.S. population. Simultaneously, several of the major social media players have been unwilling or slow to take action to curb organized harassment. Finally, the marketplace of online attention encourages so-called ‘clickbait’ articles and sensationalized news items that often contain misinformation or disinformation, or simply lack rigorous fact-checking. Without structural changes in both how social media sites respond to conflict and the economic incentives for spreading inaccurate or sensational information, extremism and therefore conflict will continue. More importantly, the geographical and psychological segmentation of the U.S. population into ‘red’ and ‘blue’ neighborhoods, communities, and states is unlikely to change. It is the latter that gives rise to overall political polarization, which is reflected in the incivility of online discourse.”
‘New variations of digital malfeasance [will] arise’

Jamais Cascio, distinguished fellow at the Institute for the Future, replied, “I don’t expect a significant shift in the tone of online discourse over the next decade. Trolling, harassment, etc., will remain commonplace but not be the overwhelming majority of discourse. We’ll see repeated efforts to clamp down on bad online behavior through both tools and norms; some of these efforts will be (or seem) successful, even as new variations of digital malfeasance arise.”
It will get better and worse

Anil Dash, technologist, wrote, “I expect the negative influences on social media to get worse, and the positive factors to get better. Networks will try to respond to prevent the worst abuses, but new sites and apps will pop up that repeat the same mistakes.”
Sites will ban the ‘unvouched anonymous’; look for the rise of ‘registered pseudonyms’

David Brin, author of “The Transparent Society” and a leader at the University of California, San Diego’s Arthur C. Clarke Center for Human Imagination, said, “Some company will get rich by offering registered pseudonyms, so that individuals may wander the Web ‘anonymously’ and yet be vouched for and accountable for bad behavior. When this happens, almost all legitimate sites will ban the unvouched anonymous.”
Back around 20 B.C., Horace understood these problems

Fred Baker, fellow at Cisco, commented, “Communications in any medium (the internet being but one example) reflects the people communicating. If those people use profane language, are misogynistic, judge people on irrelevant factors such as race, gender, creed, or other such factors in other parts of their lives, they will do so in any medium of communication, including the internet. If that is increasing in prevalence in one medium, I expect that it is or will in any and every medium over time. The issue isn’t the internet; it is the process of breakdown in the social fabric. … If we worry about the youth of our age ‘going to the dogs,’ are we so different from our ancestors? In “Book III of Odes,” circa 20 B.C., Horace wrote: ‘Our sires’ age was worse than our grandsires. We, their sons, are more worthless than they; so in our turn we shall give the world a progeny yet more corrupt.’ I think the human race is not doomed, not today any more than in Horace’s day. But we have the opportunity to choose to lead them to more noble pursuits and more noble discussion of them.”
‘Every node in our networked world is potentially vulnerable’

Mike Liebhold, senior researcher and distinguished fellow at the Institute for the Future, wrote, “After Snowden’s revelations, and in the context of accelerating cybercrimes and cyberwars, it’s clear that every layer of the technology stack and every node in our networked world is potentially vulnerable. Meanwhile, both the magnitude and frequency of exploits are accelerating. As a result, users will continue to modify their behaviors and internet usage, and designers of internet services, systems, and technologies will have to expend growing time and expense on personal and collective security.”
Politicians and companies could engage ‘in an increasing amount of censorship’

Jillian York, director for International Freedom of Expression at the Electronic Frontier Foundation, noted, “The struggle we’re facing is a societal issue we have to address at all levels, and that the structure of social media platforms can exacerbate. Social media companies will need to address this, beyond community policing and algorithmic shaping of our newsfeeds. There are many ways to do this while avoiding censorship; for instance, better-individualized blocking tools and upvote/downvote measures can add nuance to discussions. I worry that if we don’t address the root causes of our current public discourse, politicians and companies will engage in an increasing amount of censorship.”
Sophisticated mathematical equations are having social effects

An anonymous professor at City University of New York, wrote, “I see the space of public discourse as managed in new, more-sophisticated ways, and also in more brutal ones. Thus we have social media management in Mexico courtesy of Peñabots, hacking by groups that are quasi-governmental or serving nationalist interests (one thinks of Eastern Europe). Alexander Kluge once said, ‘The public sphere is the site where struggles are decided by other means than war.’ We are seeing an expanded participation in the public sphere, and that will continue. It doesn’t necessarily mean an expansion of democracy, per se. In fact, a lot of these conflicts are cross-border. In general the discussions will stay ahead of official politics in the sense that there will be increasing options for participation. In a way this suggests new kinds of regionalisms, intriguing at a time when the European Union is taking a hit and trade pacts are undergoing re-examination. This type of participation also means opening up new arenas, e.g., Facebook has been accused of left bias in its algorithm. That means we are acknowledging the role of what are essentially sophisticated mathematical equations as having social effects.”
The flip side of retaining privacy: Pervasive derogatory and ugly comments

Bernardo A. Huberman, senior fellow and director of the Mechanisms and Design Lab at Hewlett Packard Enterprise, said, “Privacy as we tend to think of it nowadays is going to be further eroded, if only because of the ease with which one can collect data and identify people. Free speech, if construed as the freedom to say whatever one thinks, will continue to exist and even flourish, but the flip side will be a number of derogatory and ugly comments that will become more pervasive as time goes on.”
Much of ‘public online discourse consists of what we and others don’t see’

Stephen Downes, researcher at National Research Council of Canada, noted, “It’s important to understand that our perception of public discourse is shaped by two major sources: first, our own experience of online public discourse, and second, media reports (sometimes also online) concerning the nature of public discourse. From both sources we have evidence that there is a lot of influence from bad actors, harassment, trolls, and an overall tone of griping, distrust, and disgust, as suggested in the question. But a great deal of public online discourse consists of what we and others don’t see.”
How about a movement to teach people to behave?

Marcel Bullinga, trendwatcher and keynote speaker @futurecheck, wrote, “Online we express hate and disgust we would never express offline, face-to-face. It seems that social control is lacking online. We do not confront our neighbours/children/friends with antisocial behaviour. The problem is not [only] anonymous bullying: many bullies have faces and are shameless, and they have communities that encourage bullying. And government subsidies stimulate them – the most frightening aspect of all. We will see the rise of the social robots, technological tools that can help us act as polite, decent social beings (like the REthink app). But more than that we need to go back to teaching and experiencing morals in business and education: back to behaving socially.”


About this canvassing of experts





The expert predictions reported here about the impact of the internet over the next 10 years came in response to one of eight questions asked by Pew Research Center and Elon University’s Imagining the Internet Center in an online canvassing conducted between July 1 and Aug. 12, 2016. This is the seventh Future of the Internet study the two organizations have conducted together. For this project, we invited nearly 8,000 experts and members of the interested public to share their opinions on the likely future of the internet, and 1,537 responded to at least one of the questions we asked. Some 728 of them gave answers to the follow-up question asking them to elaborate on their answers about the future of online discourse:

In the next decade, will public discourse online become more or less shaped by bad actors, harassment, trolls, and an overall tone of griping, distrust, and disgust?

The answer options were:
Online communication becomes more shaped by negative activities
Online communication becomes less shaped by negative activities
I expect no major change in the tone of online interaction

Then we asked:

Please also consider addressing these issues in your response. You do not have to consider any of these. We have added them because we hope they might prompt your thinking on important related issues: How do you expect social media and digital commentary will evolve in the coming decade? Do you think we will see a widespread demand for technological systems or solutions that encourage more inclusive online interactions? What do you think will happen to free speech?

Some 39% of these respondents opted for the prediction that online communication would be more shaped by negative activities, while 19% predicted online communication would become less shaped by negative activities. Some 42% chose the option that they expect no major change in the tone of online interaction.

The web-based instrument was first sent directly to a list of targeted experts identified and accumulated by Pew Research Center and Elon University during the previous six “Future of the Internet” studies, as well as those identified across 12 years of studying the internet realm during its formative years. Among those invited were people who are active in global internet governance and internet research activities, such as the Internet Engineering Task Force (IETF), Internet Corporation for Assigned Names and Numbers (ICANN), Internet Society (ISOC), International Telecommunications Union (ITU), Association of Internet Researchers (AoIR) and the Organization for Economic Cooperation and Development (OECD). We also invited a large number of professionals and policy people from technology businesses; government, including the National Science Foundation, Federal Communications Commission and European Union; think tanks and interest networks (for instance, those that include professionals and academics in anthropology, sociology, psychology, law, political science and communications); globally located people working with communications technologies in government positions; technologists and innovators; top universities’ engineering/computer science departments, business/entrepreneurship faculty and graduate students and postgraduate researchers; plus many who are active in civil society organizations such as Association for Progressive Communications (APC), Electronic Privacy Information Center (EPIC), Electronic Frontier Foundation (EFF) and Access Now; and those affiliated with newly emerging nonprofits and other research units examining ethics and the digital age. Invitees were encouraged to share the survey link with others they believed would have an interest in participating, thus there was a “snowball” effect as the invitees were joined by those they invited to weigh in.

Since the data are based on a non-random sample, the results are not projectable to any population other than the individuals expressing their points of view in this sample. The respondents’ remarks reflect their personal positions and are not the positions of their employers; the descriptions of their leadership roles help identify their background and the locus of their expertise. About 80% of respondents identified themselves as being based in North America; the others hail from all corners of the world. When asked about their “primary area of internet interest,” 25% identified themselves as research scientists; 7% as entrepreneurs or business leaders; 8% as authors, editors or journalists; 14% as technology developers or administrators; 10% as advocates or activist users; 9% as futurists or consultants; 2% as legislators, politicians or lawyers; and 2% as pioneers or originators; an additional 25% specified their primary area of interest as “other.”

More than half of the expert respondents elected to remain anonymous. Because people’s level of expertise is an important element of their participation in the conversation, anonymous respondents were given the opportunity to share a description of their internet expertise or background, and this was noted where relevant in this report.

Here are some of the key respondents in this report:

Robert Atkinson, president of the Information Technology and Innovation Foundation; Fred Baker, fellow at Cisco; danah boyd, founder of Data & Society; Stowe Boyd, chief researcher at Gigaom; Marcel Bullinga, trend watcher and keynote speaker; Randy Bush, Internet Hall of Fame member and research fellow at Internet Initiative Japan; Jamais Cascio, distinguished fellow at the Institute for the Future; Barry Chudakov, founder and principal at Sertain Research and StreamFuzion Corp.; David Clark, Internet Hall of Fame member and senior research scientist at MIT; Cindy Cohn, executive director at the Electronic Frontier Foundation; Anil Dash, technologist; Cathy Davidson, founding director of the Futures Initiative at the Graduate Center of the City University of New York; Cory Doctorow, writer, computer science activist-in-residence at MIT Media Lab and co-owner of Boing Boing; Judith Donath, Harvard University’s Berkman Klein Center for Internet & Society; Stephen Downes, researcher at the National Research Council of Canada; Bob Frankston, internet pioneer and software innovator; Oscar Gandy, emeritus professor of communication at the University of Pennsylvania; Marina Gorbis, executive director at the Institute for the Future; Jeff Jarvis, a professor at the City University of New York Graduate School of Journalism; Jon Lebkowsky, CEO of Polycot Associates; Peter Levine, professor and associate dean for research at Tisch College of Civic Life; Mike Liebhold, senior researcher and distinguished fellow at the Institute for the Future; Rebecca MacKinnon, director of Ranking Digital Rights at New America; John Markoff, author of “Machines of Loving Grace: The Quest for Common Ground Between Humans and Robots” and senior writer at The New York Times; Jerry Michalski, founder at REX; Andrew Nachison, founder at We Media; Frank Pasquale, author of “The Black Box Society: The Secret Algorithms That Control Money and Information” and professor of law at the University of Maryland; Demian Perry, director of mobile at NPR; Justin Reich, executive director at the MIT Teaching Systems Lab; Mike Roberts, Internet Hall of Fame member and first president and CEO of ICANN; Michael Rogers, author and futurist at Practical Futurist; Marc Rotenberg, executive director of the Electronic Privacy Information Center; David Sarokin, author of “Missed Information: Better Information for Building a Wealthier, More Sustainable Future”; Henning Schulzrinne, Internet Hall of Fame member and professor at Columbia University; Doc Searls, journalist, speaker, and director of Project VRM at Harvard University’s Berkman Klein Center for Internet & Society; Ben Shneiderman, professor of computer science at the University of Maryland; Richard Stallman, Internet Hall of Fame member and president of the Free Software Foundation; Brad Templeton, chair for computing at Singularity University; Baratunde Thurston, a director’s fellow at MIT Media Lab, Fast Company columnist, and former digital director of The Onion; Patrick Tucker, author and technology editor at Defense One; Steven Waldman, founder and CEO of LifePosts; Jim Warren, longtime technology entrepreneur and activist; Amy Webb, futurist and CEO at the Future Today Institute; and David Weinberger, senior researcher at Harvard University’s Berkman Klein Center for Internet & Society.

Here is a selection of some of the institutions at which respondents work or have affiliations:

AAI Foresight, Access Now, Adobe, Altimeter Group, The Aspen Institute, AT&T, Booz Allen Hamilton, California Institute of Technology, Carnegie Mellon University, Center for Digital Education, Center for Policy on Emerging Technologies, Cisco, Computerworld, Craigslist, Cyber Conflict Studies Association, Cyborgology, Dare Disrupt, Data & Society, Digital Economy Research Center, Digital Rights Watch, DotTBA, Electronic Frontier Foundation, Electronic Privacy Information Center, Ethics Research Group, European Digital Rights, Farpoint Group, Federal Communications Commission, Flipboard, Free Software Foundation, Future of Humanity Institute, Future of Privacy Forum, FutureWei, Gartner, Genentech, George Washington University, Georgia Tech, Gigaom, Gilder Publishing, Google, Groupon, Hack the Hood, Harvard University’s Berkman Klein Center for Internet & Society, Hewlett Packard Enterprise, Human Rights Watch, IBM, InformationWeek, Innovation Watch, Institute for Ethics and Emerging Technologies, Institute for the Future, Institute of the Information Society, Intelligent Community Forum, International Association of Privacy Professionals, Internet Corporation for Assigned Names and Numbers (ICANN), Internet Education Foundation, Internet Engineering Task Force, Internet Initiative Japan, Internet Society, NASA’s Jet Propulsion Laboratory, Karlsruhe Institute of Technology, Kenya ICT Action Network, KMP Global, The Linux Foundation, Lockheed Martin, Logic Technology Inc., MediaPost, Michigan State University, Microsoft, Massachusetts Institute of Technology (MIT), Mozilla, NASA, National Institute of Standards and Technology, National Public Radio, National Science Foundation, Neustar, New America, New Jersey Institute of Technology, The New York Times, Nokia, Nonprofit Technology Enterprise Network, New York University (NYU), OpenMedia, Oxford Martin School, Philosophy Talk, Privacy International, Queensland University of Technology, Raytheon BBN Technologies, Red Hat, Rensselaer Polytechnic Institute, Rice University’s Humanities Research Center, Rochester Institute of Technology, Rose-Hulman Institute of Technology, Semantic Studios, Singularity University, Social Media Research Foundation, Spacetel, Square, Stanford University’s Digital Civil Society Lab, Syracuse University, Tech Networks of Boston, Telecommunities Canada, Tesla Motors, U.S. Department of Defense, US Ignite, UCLA, U.K. Government Digital Service, Unisys, United Steelworkers, University of California, Berkeley, University of California, Irvine, University of California, Santa Barbara, University of Copenhagen, University of Michigan, University of Milan, University of Pennsylvania, University of Toronto, Vodafone, We Media, Wired, Worcester Polytechnic Institute, Yale University, York University.

Complete sets of for-credit and anonymous responses to the question can be found here:

http://www.elon.edu/e-web/imagining/surveys/2016_survey/social_future_of_the_internet.xhtml
http://www.elon.edu/e-web/imagining/surveys/2016_survey/social_future_of_the_internet_credit.xhtml
http://www.elon.edu/e-web/imagining/surveys/2016_survey/social_future_of_the_internet_anon.xhtml





Theme 1: Things will stay bad, Part I



Respondents to this canvassing were very focused on human nature and the special character of online interactions. They offered a series of ideas along these lines: To troll is human; anonymity abets anti-social behavior; inequities drive at least some inflammatory dialogue; and the growing scale and complexity of internet discourse make this difficult to defeat.

Trolls are the internet’s primary bad social actors. Due in part to the focus of this question, many of the respondents in this canvassing generalized most socially disruptive activities – including harassment, threats, hate speech, “flaming,” “griefing,” and “doxing” – under the umbrella terms “troll” and “trolling.”


Human nature has not much changed over the past 2,000 years; I don’t expect much change over the next 10.
Anonymous respondent

Many pointed out that negative behaviors online are encouraged by actors’ lack of physical proximity and said they are mostly empowered by a lack of attribution or anonymity.

While there is likely no way to quantify the percentage of “positive” discourse as compared with the “negative” online, it is quite possible that the socially beneficial declarations and conversations being carried on far outweigh those that are not. So why do experts perceive the tone of online social discourse to be troubling? Bad actors and propaganda pushers are motivated to command center stage – in fact, they crave it and they generally get it – and their actions can create states of fear, mistrust, polarization, anger and withdrawal that cause significant damage.

At the time of this canvassing, the summer of 2016, a vast majority of respondents expressed opinions ranging from disappointment to deep concern about the social climate of the internet.
Those among the 42% in this canvassing who said they expect “no major change” in online tone by 2026 generally see the current state of online discourse as raising important challenges, and they expressed worries in their written elaborations.
The 19% who said they expect the internet will be “less shaped” by bad actors by 2026 said things are bad now, but they expressed confidence in technological and human solutions.
And the 39% who said they expect the future to be “more shaped” by negative activities had little hope for effective solutions.
Trolls have been with us since the dawn of time; there will always be some incivility

Many respondents observed that prickly and manipulative behaviors are a fundamental part of human nature due to group identification and intercultural conflict. They added that the particular affordances of the internet make trolling especially potent.


Trolls online are trolls in real life. It’s just the person you are. The internet has provided closet trolls an outlet.
Anonymous respondent

David Krieger, director of the Institute for Communication and Leadership at IKF in Lucerne, Switzerland, said, “Trolls we will always have with us. Despite everything, they serve the useful purpose of challenging and improving the evolution of the social immune system. The pressure for more transparency and authenticity that comes with increasing connectivity and flow of information will tend to make life more difficult for the trolls. … Privacy will yield to ‘publicy’ in knowledge economy of abundance. … What we need is Network Publicy Governance instead of market individualism and bureaucratic hierarchies.”

Jim Warren, internet pioneer and longtime technology entrepreneur and activist, wrote, “It seems clear – at least in the U.S. – that ‘bad actors,’ children of all ages who have never been effectively taught civility and cooperation, are becoming more and more free to ‘enjoy’ sharing the worst of their ‘social’ leanings.”

Jan Schaffer, executive director at J-Lab, commented, “I expect digital public discourse to skew more negative for several reasons, including: the polarization of the country, which is a barrier to civil discourse; the rise of websites, Twitter accounts, and Facebook pages dedicated to portraying an opponent in a bad light; and the awful online trolling and harassment of women who are active in social media. I do not think things will get better on their own.”

Simon Gottschalk, a sociology professor at the University of Nevada, Las Vegas, wrote, “Public discourse online seems to have been hurled into a negative spiral. … [I] anticipate the issue of free speech to become altered beyond recognition and to alter our understanding of it. In the end, it matters little if what we write/say online is indeed already officially and legally surveilled or not. The reasonable hunch that it is shapes how we experience everyday life and what we’re willing to write/say in that setting. According to a New York Times article published a few days ago, even Facebook CEO Mark Zuckerberg covers the camera/microphone of his computer.”

An anonymous respondent said, “Human nature has not much changed over the past 2,000 years; I don’t expect much change over the next 10.”

Daniel Menasce, professor of computer science at George Mason University, said, “While social media and digital commentary have some very positive aspects, they also serve as tools for the dissemination of lies, propaganda, and hatred. It is possible that technological solutions may be developed to assign crowdsourced reputation values for what is posted online. This, in my opinion, will not stop people from consuming and re-posting information of low value provided it conforms with their way of thinking.”

An anonymous respondent remarked, “Trolls online are trolls in real life. It’s just the person you are. The internet has provided closet trolls an outlet.”

Paul Edwards, professor of information and history at the University of Michigan, commented, “Social media will continue to generate increasingly contentious, angry, mob-like behavior. The phenomenon that underlies this behavior has been consistently observed since the early days of email, so there is no reason to think that some new technique or technology will change that. Mediated interaction tends to disinhibit people’s expression of strong opinions, use of inappropriate language, and so on. It also makes it easier to misunderstand others’ tone. Emoticons have at least given us a means of indicating the intended tone. Fact-checking sites have also helped to control the spread of rumors, but not very much. The very rapid interaction cycle on social media causes it to be governed by ‘fast’ thinking (see Daniel Kahneman’s “Thinking Fast and Slow”), which is intuitive, reactive, and often emotionally based. For this reason, social media discourage long-form arguments and long, complex exchanges of nuanced views.”

Paul Jones, clinical professor and director of ibiblio.org at the University of North Carolina, briefly quoted earlier, had a fuller comment: “The id unbound from the monitoring and control by the superego is both the originator of communication and the nemesis of understanding and civility. As we saw in “Forbidden Planet,” the power of the id is difficult to subdue if not impossible. Technologies of online discourse will continue to battle with the id, giving us, most recently, Yik Yak (id-freeing) and comment control systems like Disqus (id-controlling). Like civility in all contexts, controlling the id is more social and personal in the end. Technologies will nonetheless attempt to augment civility – and to circumvent it.”

The comment by Dean Landsman, digital strategist and executive director of PDEC (Personal Data Ecosystem Consortium), represents the experts who expect online tone to improve despite the state of affairs today. He wrote, “With each new communications medium comes fear, loathing, abuse, misuse, and then a calming down. Gutenberg printed a bible, and shortly thereafter the printed word represented a danger, a system used for wrongdoing. … Free speech is made possible and more freely distributed by technology. Capture (read: production) and distribution are burgeoning. The individual has more than a soapbox; he or she or they have video and streaming or ‘store now and play later’ with repositories in the cloud becoming cheaper by the moment.”
Trolling and other destructive behaviors often result because people do not recognize or don’t care about the consequences that might flow from those actions

A large share of these respondents added that the natural tendency of humans to be nasty at times to each other is especially enabled by the terms of online engagement. People are more emboldened when they can be anonymous and not ever confront those they are attacking.


Online our identity is disembodied, only a simulation of what we do in the physical presence of others; it is missing our moving countenance, the mask that encounters – and counters – the world.
Barry Chudakov

An anonymous respondent wrote, “In any setting where there is a disconnect between speech and social consequences, whether that’s a chat room, a mob, talk radio, a pulpit, whatever, a large minority of humans will be hateful. That’s humans, as a species.”

Robert Bell, co-founder of the Intelligent Community Forum, commented, “The nature of instantaneous online communications is to vastly amplify that which attracts or threatens us, and a very small number of actors can make a very loud noise, despite the fact that they are less than 1% of the conversation.”

Tim Norton, chair of Digital Rights Watch, wrote, “Anonymity (or at least the illusion of it) feeds a negative culture of vitriol and abuse that stifles free speech online. Social media allows people to take part in a public debate that they may have not previously had access to. But alongside this an increasing culture of attack language, trolls, and abuse online has the potential to do more damage to this potential.”

An anonymous respondent commented, “People are snarky and awful online in large part because they can be anonymous, or because they don’t have to speak to other people face-to-face.”

Andrew Walls, managing vice president at Gartner, noted, “The quality of online discourse ebbs and flows. In certain environments, trollish behavior is more noticeable, while in others trollish behavior is largely absent. Anonymity fuels a lack of accountability for some online discourse, producing, at times, an online “Lord of the Flies” (LoF) situation. LoF situations have persisted in human social groups for eons and are not created by the availability of online fora. Despite the poor behavior of some, the world of social discourse in online environments is growing in depth, diversity, and levels of participation. Free speech is readily available, but the speaker may lack the protections afforded by a close social group.”

Barry Chudakov, founder and principal at Sertain Research and StreamFuzion Corp., replied, “In trolling, even challenging or calling out those who agree with you, self-presentation becomes a game of catch-me-if-you-can. What shapes our discourse? Our hidden physical state. Online our identity is disembodied, only a simulation of what we do in the physical presence of others; it is missing our moving countenance, the mask that encounters – and counters – the world. As online discourse becomes more app-enabled, our ability to disembody ourselves will only grow more dexterous. Online, our face is absent – a snapshot at best, a line of code or address at worst. Politeness, sociologists tell us, is about ‘facework’ – presenting a face, saving face, smiling, reassuring, showing. But online we are disembodied; our actual faces are elsewhere. This present-yet-absent dynamic not only affects our identity, whether people can identify us behind the shield of online presentation, it also affects our speech and, ultimately, our ‘performance.’ Into this pool jump the hackers and mischief-makers and deadly serious manipulators who realize that they can do their work behind the shield with impunity – until they are caught or ‘outed.’”

Some argued that trolling has to recede because it has reached its peak and resistance to trolls is growing.

Bailey Poland, author of “Haters: Harassment, Abuse, and Violence Online,” wrote, “We are close to a tipping point in terms of online dialogue. Things are likely to get much worse before they get any better, but the state of online discourse has been ugly for a very long time, and people are beginning to rally for real changes.”


Technological evolution has surpassed the evolution of civil discourse. We’ll catch up eventually. I hope. We are in a defining time.
Ryan Sweeney

Chris Zwemke, a web developer, commented, “People feel empowered to say hateful things and complain and shame those hateful things if they aren’t face to face. Shaming a harasser or a troll is definitely negative noise (I don’t know that it is wrong, but it is negative noise). We haven’t reached peak argument yet online. Folks will continue in the next decade to speak ill of each other in either true hate or trolling. Either way, the people who visit ‘public’ places online will have worse content to consume. Best to avoid the comment sections for the foreseeable future. My hope is that online discussion can solve the echo chamber problem of online discourse so that people can see the other side with more clarity.”

Lee McKnight, associate professor of information studies at Syracuse University, wrote, “In the year that WWE-trained Donald Trump became presidential it is hard to imagine bad actors, harassment, trolling, griping, distrust, and disgust – what we used to call flaming and then learned not to do online – becoming more plentiful and empowered worldwide than those so engaged do now.”

“Although I believe the online environment today is extremely negative, I also believe this environment has reached peak negativity and it will remain at this level,” replied an anonymous respondent.

Ryan Sweeney, director of analytics at Ignite Social Media, commented, “Online discourse is new, relative to the history of communication. The optimist in me believes we’re in the worst of it right now and within a decade the kinks will have been worked out. There are a lot of tough and divisive but crucial conversations occurring online at this moment in time. People are more open with their communication. Groups that have previously had minimal access to mass communication are able to have a voice and those who previously had the microphone aren’t quite sure how to react pleasantly. Technological evolution has surpassed the evolution of civil discourse. We’ll catch up eventually. I hope. We are in a defining time.”

“I don’t think it can get worse,” wrote an anonymous respondent. “There should be better methods to filter and block ‘bad actors’ in the near future.”

Anonymously, a leader of city government in a Silicon Valley community said, “There are a number of largely unmoderated forums like NextDoor which in my city have been taken over by anti-politics – people use false identities to promote their points of view and squelch everyone else’s.”

Tiffany Shlain, filmmaker and founder of The Webby Awards, optimistically said, “As we connect our identity more to what we say, there will be more accountability. Since it is easier to say harsh words anonymously, the continued direction of transparency will lead to more civil discourse.”

An anonymous technology writer expressed a great deal of frustration, arguing, “The presence of harassment and mobs online effectively silences me from voicing opinions where they can be heard. Doxxing is dangerous to my family and neighbors, and I can’t risk it. The ability for anyone anywhere to find and publicize personal information for any member of any minority group who might draw ire is incredibly, incredibly dangerous. Anonymity and privacy are already more-or-less mythical. Either we, as a society, start designing explicitly for inclusivity or we accept that only the loudest, angriest voices have a right to speak and the rest of us must listen in silence.”
Inequities drive at least some of the inflammatory dialogue

Some respondents noted that “anger gets translated into trolling and other really bad behavior,” and many of the participants in this canvassing noted that social and economic bifurcations or inequities are the motivation behind online angst.

Dara McHugh, a respondent who shared no additional identifying details, said, “The overall trend in society is toward greater inequality and social conflict, and this will be reflected in online discourse.”


As internet access becomes more expansive due to the increasing affordability of smart phones, the socioeconomic gap between the world’s poorest and richest members of society will unfortunately become evident in their interactions on the Web.
Nicholas V. Macek

Richard Lachmann, a professor of sociology at the University at Albany, wrote, “The internet will reflect greater conflict in most societies [in the future], as economic decline and environmental pressures lead to conflicts that will be reflected online.”

Giacomo Mazzone, head of institutional relations at the European Broadcasting Union, commented, “Social media are simply the reflex of the society in which they are encapsulated. In Europe, the U.S. and all rich countries of the world, the social media debate will worsen because in the next decade the populations there will become older and poorer. It’s demography, stupid!”

“The most important issues of our time are complex, and social media does not allow for a complex discourse. Furthermore, algorithmically selected content based on our existing interests also steers us towards more ideological isolation, not openness,” added an anonymous consulting partner.

Robin James, an associate professor of philosophy at the University of North Carolina-Charlotte, commented, “The problem with online harassment isn’t a technological problem, it’s a sociopolitical problem: sexism, racism, etc. These systems of domination motivate harassment online, in the street, in homes. As technology changes and adapts, so do the underlying systems of domination. So online harassment may look different in the future, but it will still exist. Sexism and racism also impact how we need to talk about free speech: the issue here isn’t censorship but power inequities. The language of ‘free speech’ misidentifies the actual problem: punching down, people in positions of power and privilege using speech to harass and assault people in minority positions.”

Axel Bruns, a professor at the Queensland University of Technology’s Digital Media Research Centre, said, “Unfortunately, I see the present prevalence of trolling as an expression of a broader societal trend across many developed nations, towards belligerent factionalism in public debate, with particular attacks directed at women as well as ethnic, religious, and sexual minorities.”

Annette Markham, an expert in information studies, observed, “Two factors seem relevant to mention here: Historically, new media for communication have been accompanied by large spikes in impact on forms of interaction. This tends to decline as technologies move from novel to everyday. This suggests that extreme uses tend to normalize. The second factor to add to this is that many stakeholders are responding to extreme homophily.”

Masha Falkov, artist and glassblower, wrote, “Online, speech becomes more than just printed word on paper. It becomes a vector for the binding of a community. People who wish to speak hatefully against their targets – women, minorities, LGBT, etc. – seem to bind together with much more force than people who speak to defend those same targets. Hate speech online isn’t the polar opposite of supportive conversation or polite discourse. It’s a weapon made to cause its targets to feel fear, inadequacy, and it has real-world effects on those people, with virtually no consequences for the speaker save for disapproval for the community. … Whether limits on hate speech and abuse online are part of a larger trend toward limits on freedom of speech should be evaluated on a case-by-case basis rather than shouting an alarm that our freedoms are being eroded.”

Randy Albelda, a professor of economics at the University of Massachusetts Boston, confidently predicted, “Inequality will play out badly for online interactions. The ‘haves’ will not need it for their own communications and interactions but will have more power/resources to control the venues, messages, and even research on how data collected from the internet is used (and then thrown back to us in the form of ads, etc.). The ‘have-nots’ – but mostly those on the bottom rungs without much mobility will be angrier and angrier. (Let’s face it, [neither] Trump nor Clinton will provide short-run or long-term policies toward more equality, making people even more politically disaffected). Anger gets translated into trolling and other really bad behavior generally, but especially online.”

Nicholas V. Macek, digital director at a political firm, wrote, “As internet access becomes more expansive due to the increasing affordability of smart phones, the socioeconomic gap between the world’s poorest and richest members of society will unfortunately become evident in their interactions on the Web. Especially in the context of political and social movements, and civil rights, the lack of understanding between people of different backgrounds will become more pronounced.”

Luis Miron, professor at Loyola University-New Orleans, wrote, “Although I am not a pessimist I am deeply worried that in the next decade, and perhaps beyond, racial and economic conflict will likely exacerbate. And social and economic inequality will widen before narrowing. Globally. My fear is that terrorism will continue to strike fear in the hearts and minds of ordinary citizens, furthering the online negativity.”


We are watching what happens when the audience becomes accustomed to ‘having a voice’ and begins to assume that being heard entitles one’s opinion to dominate rather than be part of a collaborative solution.
Pamela Rutledge

Elisabeth Gee, a professor at Arizona State University, wrote, “The growing economic and social divides are creating a large number of disenfranchised people and undoubtedly they will express their frustration online, but they’ll mostly be interacting with each other. Just as ‘public’ places like city parks have become mostly the realm of the poor, so will public online spaces. I suspect that the real trend will be toward increasingly segmented and exclusive online interactions. We know that’s already happening.”

Dave Kissoondoyal, CEO of KMP Global, located in Mauritius, commented, “With the rapid change in the human environment today – be it in a social context, or professional, or even societal – people have the tendency to be stressed, frustrated, and demotivated. … People use social media to express anger, disgust, and frustration. This tendency will continue and it will expand in the next decade.”

Pamela Rutledge, director of the Media Psychology Research Center, observed, “Communications are a reflection of local and global sentiment – online public discourse reflects how people feel offline. We are in a period of considerable economic and political chaos across the globe. All people instinctively seek certainty and stability to offset the fear of chaos and change. This increases tribalism and ‘othering,’ as people seek to make their worlds feel more stable and controllable. Media provides a means of identifying tribes and groups and these tendencies have deep evolutionary roots. The problem won’t be trolls and general troublemakers – these have always been a minority. The problem is the tendency of the cacophony of negative media voices to increase the social schisms contributing to the rising anger over a world undergoing massive shifts. We are watching what happens when the audience becomes accustomed to ‘having a voice’ and begins to assume that being heard entitles one’s opinion to dominate rather than be part of a collaborative solution.”

Alan Moore, a software architect based in the U.S., framed his comment within the environment of the raucous 2016 presidential campaign, arguing, “The tone of the internet, especially social media, is driven by people being frustrated by our system of government and especially the corporatocracy that money in politics brings. Those without the money to pay for access will vent online. … We want to be free from manipulation and coercion, from incessant tracking of our every move. As technology lures us into its comforting ease and convenience, many, not all, will slowly lose whatever sense of privacy we have left.”

Joshua Segall, a software engineer, said he doesn’t think that technology is capable of solving many of these problems: “Online activity is already heavily shaped by negative activities and there’s no reason to expect the trend to reverse. The effect is due to two broad drivers. First, the social media companies have taken a false neutral stance in which they apparently believe that technology will solve social issues as opposed to amplifying them. … Abusive activity is much more of a threat to free speech than almost any policy or action that could be taken by these companies. I think there is demand for more-inclusive systems but I don’t see a pure technology play that will enable it. Abuse is already widespread, so it’s unclear how much more demand there can be. The second driver is the ongoing economic stagnation across the globe, which is increasing tension between groups and fueling a sharp rise in nationalism, racism, fascism, and violence. This will be reflected online by increased abuse and negative activity, especially on social networks. Technical solutions and social media have little control over this aspect, but the underlying forces will affect them nonetheless. I don’t think this has anything to do with anonymity, privacy, or free speech. It’s a reflection of society, and people will find a way to use any system to express themselves. Any systemic change would have to be more broad-based than a single company’s online policies. However, there is a role for these companies to play in shaping public discourse by encouraging inclusiveness, civility, and true discussion.”

Chris Kutarna, a fellow at the Oxford Martin School and author of “Age of Discovery,” wrote, “Part of the context we need to understand is that unpleasant shocks are becoming more frequent and more severe in their effect. This is a consequence of rising concentrations and complexity within society, and within social and natural systems. Our global entanglement makes us more vulnerable, while also making it harder to see cause and effect and assign accountability and responsibility to the injuries we suffer. Anger and frustration are a predictable consequence, and I expect public discourse online to reflect it.”

Scott McLeod, associate professor of educational leadership at University of Colorado-Denver, was optimistic that something can be done, writing, “The internet will continue to serve as an outlet for voices to vent in ways that are both productive and necessary. Societal and political ‘griping’ and ‘disgust’ often are necessary mechanisms for fostering change. We are going to find ways to preserve anonymity where necessary but also evolve online mechanisms of trust and identity verification – including greater use of community self-moderation tools – that foster civil discourse in online communities that desire it. Yes, there will be marginalized communities of disgust but many of these will remain on the fringes, out of the mainstream. The ideas that bubble up from them and gain greater traction will represent the larger public and probably deserve some constructive attention.”

An anonymous professor of public relations wrote about the origins of the most volatile and outspoken rage being expressed publicly in online fora, arguing, “We are on a downward spiral, but I disagree that it is because of bad actors, trolls, etc. This is a time of great unrest in this country with distrust of media, academic experts, and government. The voices of anger, anxiety, and frustration are loud, and discourse by elites that these are uneducated or uninformed disgruntled citizens contributes to the malaise and feelings of disempowerment. I continually hear, ‘What the hell can the average person do?’ voiced by these angry citizens as they shake their heads in disgust. This negativity will spiral out of control without leaders’ recognition of the legitimacy of these concerns.”
The ever-expanding scale of internet discourse and its accelerating complexity make it difficult to deal with problematic content and contributors

Do you think online discourse seems increasingly contentious now? Wait until a billion more humans are connected. There are 7.5 billion people on the planet, and about 3.6 billion are internet users today. A billion more are expected to get online in the next decade or so, and some of these respondents expect that a sizable number of them – maybe quite a few – are likely to be trolls or people motivated to manipulate others. Respondents also noted that rising layers of complexity due to the expansion of the Internet of Things and new technologies, such as the further development of virtual and augmented reality, will create even more new challenges in monitoring and combating trolling activity.


With more people gaining access, there will be less tolerance, counter-reactions. There will be expansion but also contestation.
Anonymous respondent

M.E. Kabay, a professor of computer information systems at Norwich University, predicted, “As the global economy increases the number of people with modest disposable income, increasing numbers of people in developing countries around the world will use smartphones to access the internet (or the restricted portions of the Net permitted by increasingly terrified dictatorships). We will see increasing participation in social networks, including increasing numbers of comments by new users. The widespread availability of anonymity and pseudonymity will encourage social disinhibition; without real-world consequences for intemperate remarks and trolls (attempts to provoke angry responses), the amount of negativity will increase. The numbers of new users will overwhelm the resources dedicated to monitoring and purging (some) social networks of abusive language – even today, networks such as Facebook are experiencing difficulty in taking down abusers. … Perhaps we will see the development of social media sites with stringent requirements for traceable identity. These systems may have to demand evidence of real-world identity and impose strong (e.g., multifactor) authentication of identity. Even so, malefactors will continue to elude even the best of the attempts to enforce consequences for bad behavior.”

An anonymous respondent wrote, “With more people gaining access, there will be less tolerance, counter-reactions. There will be expansion but also contestation.”

Itır Akdoğan, research communication director at Istanbul Bilgi University/TESEV, commented, “My perspective is from the developing world: Turkey. Gradually, those who are less-educated start being active in social media/digital commentary. As much as it sounds democratic at first, we then observe an increase in hate speech, harassment, and trolls. Statistically, the less-educated are the majority of the population. In this sense, I can say that the future of digital commentary will not be more democratic.”

Jon Lebkowsky, CEO of Polycot Associates, said, “With more voices in the discussion, facilitated by the internet, negative elements have become more visible/audible in civil discourse. This could be seen as the body politic releasing toxins – and as they are released, we can deal with them and hopefully mitigate their effect.”

Bryan Alexander, president of Bryan Alexander Consulting, wrote, “The negative comments will occur wherever they can, and the number of venues will rise, with the expansion of the Internet of Things and when consumer production tools become available for virtual and mixed reality. Moreover, the continued growth of gaming (where trash talk remains), the persistence of sports culture (more trash talk and testosterone), and the popularity of TV news among the over-50 population will provide powerful cultural and psychological backing for abusive expression.”

Wendy M. Grossman, a science writer and author of “net.wars,” wrote, “It’s clear that the level of abusive attacks on sites like Twitter or those that leverage multiple sites and technologies operates at a vastly different scale than the more-confined spaces of the past.”

Matt Hamblen, senior editor at Computerworld, commented, “[By 2026] social media and other forms of discourse will include all kinds of actors who had no voice in the past; these include terrorists, critics of all kinds of products and art forms, amateur political pundits, and more. Free speech will reign free but will become babble and almost incomprehensible to many listeners.”

Lindsay Kenzig, a senior design researcher, said, “Given that so much of the world is so uneducated, I don’t see that more-inclusive online interactions will be the norm for many years.”

While some predict that adding a billion more people online might raise the level of negative discourse, one disagrees. Christopher Mondini, a leader for a major internet organization, said, “Taking a global perspective, the billion Internet users who will be newly connected in the next four years will have the same initial surge of productive and valuable interactions experienced by more mature online markets a dozen years ago. This will counterbalance growing pockets of self-important and isolated pockets of griping and intolerance that we see in these mature markets.”
