Applied Complexity Fellow, Santa Fe Institute
Joshua Garland serves as an Applied Complexity Fellow at the Santa Fe Institute. In the study of complex adaptive systems, the available data often fall far short of the demands of the theory. Joshua's research aims to develop rigorous models that bridge the gap between theory and observation: data that may be wildly lacking in the eyes of mathematics may still contain valuable information about the system. Said differently: when perfect isn't possible, how can we adapt mathematics to describe the world around us? In studying complicated, ill-sampled, noisy systems, Joshua's work focuses on understanding how much information is present in the data, how to extract it, how to understand it, and how to use it without overusing it. As an Applied Complexity Fellow, he works with Mirta Galesic to understand one such messy, complicated system: online human social dynamics. In this capacity, he combines social theory, machine learning, time series analysis, and natural language processing to identify effective strategies for countering the spread of hatred online without impinging on civil liberties. In particular, he studies the efficacy of so-called counter-speech, a promising alternative to censorship in which citizens come together to counter hatred and bullying online. He received his PhD and an MS in applied mathematics from the University of Colorado, and holds a dual BS in mathematics and computer science from Colorado Mesa University. joshuagarland.com
Joshua is speaking at
Watch the prerecorded talks for this session beginning on Oct. 15. (Video embargoed until 4:00 p.m. ET 10/22/20.) Then tune in here on Oct. 22 for live Q&A with the speakers.
We in the mainstream media, stung by unfair tarring with the "fake news" label, are hypersensitive to the growth of actual misinformation online. And it's galling to see our Twitter feeds and the comment streams beneath our stories polluted by bigoted, chauvinistic, and sometimes threatening snark and vitriol. But new research suggests this front-row view of the downsides of free speech may have distorted our perspective on these problems. Powerful observation biases skew what we as science writers see compared with what the general public sees.
In this session, researchers who have assembled big data sets to study these questions will present more objective views of the consumption of misinformation by the public and of how people respond to organized hate speech. The results of their quantitative analyses of large-scale, longitudinal studies challenge conventional wisdom about these phenomena.
Two scientists from the Santa Fe Institute who have used AI to classify nearly 200,000 Twitter conversations, spanning four years of activity on political accounts and large news sites in Germany, will share answers to a crucial question: is it better to ignore trolls or to combat them with counter-speech?
Then a data scientist at Penn who previously worked at Yahoo and Microsoft Research will update us on what his team's study of media consumption among large, nationally representative audiences in the U.S. reveals about how often Americans are exposed to truly fake news—and news of all kinds. The problems of ensuring an informed electorate, this research suggests, are quite different from what most of us have assumed.