People Power and AI: Chris Wiggins & Matt Jones

May 18th, 2023

“We have so much collective influence just via the way we construct our norms.”

Chris Wiggins and Matthew L. Jones are co-authors of How Data Happened: A History from the Age of Reason to the Age of Algorithms. Chris is an associate professor of applied mathematics at Columbia University and the New York Times’s chief data scientist, and Matt is a professor of history at Columbia. Together, they taught a course called “Data: Past, Present, and Future,” and their book is an extension of that course. We discuss the history of how data is made; the relationship between data and truth; and the unstable three-player game between corporate, state, and people power.

We are currently in an unstable and unpredictable three-player game between state power, corporate power, and people power. In fact, we have a lot of collective influence via the way we construct norms. Our constant human activity is the grist for the mill of machine learning. Corporations do not have all the power. Still, the mix between advertising and data has created many of the most pressing concerns in the world’s algorithmically mediated reality.

Follow Chris on Twitter: 

https://twitter.com/chrishwiggins

Follow Matt on Twitter: 

https://twitter.com/nescioquid

Follow Mila on Twitter:

https://twitter.com/milaatmos

Follow Future Hindsight on Instagram:

https://www.instagram.com/futurehindsightpod/

Love Future Hindsight? Take our Listener Survey!

http://survey.podtrac.com/start-survey.aspx?pubid=6tI0Zi1e78vq&ver=standard

Want to support the show and get it early?

https://patreon.com/futurehindsight

Credits:

Host: Mila Atmos 

Guest: Chris Wiggins & Matthew L. Jones

Executive Producer: Mila Atmos

Producer: Zack Travis

  • Wiggins & Jones Transcript

    Mila Atmos: [00:00:04] Welcome to Future Hindsight, a podcast that takes big ideas about civic life and democracy and turns them into action items for you and me. I'm Mila Atmos.

    If you really weren't thinking much about artificial intelligence a year ago, but now read headlines and articles that daily contribute to a sinking feeling about a dystopian AI future, believe me, you're not alone. I'm right there with you.

    In fact, "sinking feeling" is exactly what we said in a group chat for the show after Geoffrey Hinton announced his resignation from Google so he could speak freely about his fears of AI gaining power over humans.

    But, you know, we don't do helpless and hopeless on this show. We find hope in action. And so I'm optimistic that today's conversation might lift us out of our gloom and doom.

    Joining us today are Chris Wiggins and Matt Jones. Chris is an associate professor of applied mathematics at Columbia University and the New York Times's chief data scientist. And Matt is a professor of history at Columbia and has been a Guggenheim Fellow.

    Together, they teach a course called "Data: Past, Present, and Future," and are co-authors of How Data Happened: A History from the Age of Reason to the Age of Algorithms. It's an actionable history that illuminates how data is made, not found -- and what that means for our civic action toolkit.

    Welcome to Future Hindsight.

    Chris Wiggins: [00:01:40] Thank you, Mila.

    Matt Jones: [00:01:41] Thanks for having us.

    Mila Atmos: [00:01:42] And actually, Matt, it's welcome back, in your case, because you were on the show in 2019 talking about your class that you teach together. And I'm so happy to have you back.

    Matt Jones: [00:01:51] It's great to be back.

    Mila Atmos: [00:01:52] So my first question here... Is my ever sinking feeling misplaced? That's a big question. But what I'm hoping is that we can start this conversation by laying out what you see as the stakes here. Are we looking at AI domination of humans?

    Chris Wiggins: [00:02:08] I think it's unlikely, anytime in our lifetime, that AI will dominate humans. I mean, the AI is being drafted by, created by, deployed by, maintained by humans. So I think it's unlikely for AI to become autonomous in some way that would subvert that. However, AI does have real and present problems in society right now. Few of those problems are affecting CEOs or extremely successful anointed scientists like Geoff Hinton -- you know, CEOs of extremely valuable companies. But there are real and present concerns about the role of automated decision systems in our life right now.

    Matt Jones: [00:02:44] And there's a certain kind of danger of confusing the two kinds of problems: the one which is sort of the end of humanity, versus the way that automated decision systems are causing harms in the here and now, or in the very near future, that affect real people and real communities. It's important that the conversation is now very much in the public eye. There is a danger, though, that we're talking about the wrong things a lot of the time.

    Mila Atmos: [00:03:09] Yes. So, Matt, since you're the historian, I wanted to ask you this question because your prologue describes this book as an actionable history. Why is it so important to understand the history here and how can it be turned into action?

    Matt Jones: [00:03:23] So when historians think about science or technology, we take a view that in some sense "it ain't necessarily so," meaning that to look at history is to understand how things could have been otherwise. For example, decisions were made in the 1970s about the idea that corporate data isn't subject to strict privacy legislation, but those decisions are frequently lost to time and seem immutable. So to look at the history of this phenomenon is to open up those sets of decisions and to make clearer the kinds of different choices we might be able to make now, in thinking about real harms in the present, and potential harms in the very near future.

    Mila Atmos: [00:04:03] In this context, throughout the book you show the long history of how data is made -- not found. There are design choices that are always made along the way of collecting data, starting with the person or the entity that does the collecting and the purpose of data collection. And you start the history with a Belgian astronomer who inspired Florence Nightingale with an evidence-based vision for making law, right, based on data about people -- a new scientific politics that led to a rearrangement of power. Can you tell that story in a little bit more detail?

    Matt Jones: [00:04:35] Yeah. So the Belgian astronomer, a man named Quetelet, was fascinated by the transformations that had happened in the knowledge of the stars, and he thought that the very kinds of mathematics and technologies that had made the best predictive science humans had ever come up with, as far as we know, that that could be conjoined not just to data about the positions of the moon or the sun or of planets, but might be applied to the ever increasing amount of data that was collected on human beings. And as he thought through this, he came to believe that he had discerned nothing less than the social physics underlying human phenomena such as birth rates, criminal rates, and other sorts of things. And so he brought into focus an entire stratum of human existence, but not just for the purpose of describing the world, but also thinking about how one might make a better world. And he was, in some ways a little prone to huge policy prescriptions. But the vision that one would collect data, analyze it using new mathematical tools in order to understand better the kind of world we might come to, came to be very inspiring. It took a long time to actually become the backbone of a lot of policy making, and often it's not; even to this day. And it was inspiring to figures like Florence Nightingale, who was deeply concerned about the health of British soldiers, both on its own terms, but also as an essential component of Britain's imperial world. And so she became a huge advocate of applying this to the pressing issues of the time: compulsory education, the organization of the Indian colonies, and medical sorts of things. And so she was both an advocate of the use of data informed politics and a great practitioner of it herself in concrete political conversations, particularly around health.

    Mila Atmos: [00:06:29] Right. So your book is really full of really good examples like that, of how data collection influenced public policy. And I want to fast forward here to what you call data's mathematical baptism. And I think this question is perfect for you, Chris. You point to when Guinness was listed as a public company and hired three scientists to conduct experiments. Can you maybe flesh out that story and tell us why you highlighted it as such a key moment in the history?

    Chris Wiggins: [00:06:55] Yeah. So the history of data is really a rich subject. We try to pick out stories that we thought would help make the present strange -- stories that we could connect as a foil to present moments. So the story of Guinness is a great one because it was the hot IPO of the day. It was ridiculously valuable, and they had sufficient resources to, again, use whatever was the latest tech of the day to try to, in their case, make a buck. They were trying to improve the cost efficiency and the quality of their product. So the way they leveraged the best technology of the day was to hire statisticians. They weren't calling them statisticians, they were calling them brewers, which was the sort of highest title you could get at Guinness. And one of the problems that they tackled was how to figure out which of different processes for making beer would produce the best beer or the most cost-effective beer. En route, they ended up creating a mathematical technology which we now normatively think of as part of our own truth-making algorithm. Right? There are many fields in which this idea of statistical hypothesis testing is considered normatively to be the algorithm for deciding what is true. It's really permeated into so much of the way that we decide what we can even believe about complex systems, which includes things like, you know, which COVID vaccine is effective or not effective. Right? Ultimately that's done using statistical techniques which trace back to work by Guinness and to a particular researcher who published under the name Student -- he considered himself a student of Karl Pearson in this case -- but his real name was Gosset. And so when you're an undergraduate and you learn mathematical statistics, it sort of culminates in this idea of statistical hypothesis testing and the t-test. These are tools derived from that time and in particular from Guinness.
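For readers who want to see the technique in concrete form, here is a minimal sketch, in Python, of the kind of two-sample comparison that descends from Gosset's work. The process names and yield numbers are invented purely for illustration (they are not Guinness data); the only tooling assumed is SciPy's standard ttest_ind function.

```python
# A minimal sketch of Student's-style hypothesis testing: compare two
# hypothetical brewing processes from small samples. Numbers are invented.
from scipy import stats

process_a = [71.2, 69.8, 70.5, 72.1, 70.9]  # hypothetical yields, process A
process_b = [68.4, 69.1, 67.8, 68.9, 69.5]  # hypothetical yields, process B

# Welch's variant of the two-sample t-test (does not assume equal variances)
result = stats.ttest_ind(process_a, process_b, equal_var=False)

print(f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}")
# A small p-value is conventionally read as evidence that the two
# processes differ -- the "truth-making algorithm" described above.
```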

    Mila Atmos: [00:08:44] Right. Well, one thing that really struck me about Neyman: he tested for decision making, which is one of the things you describe here, and he apparently had argued for decades, you write in the book, that most people think of hypothesis testing as being about truth. But he argued it's really about choices. It's really about optimizing choices. And I think when you go and, like, search online about the best COVID vaccine, right, it's basically just an aggregate of what people decided, but it's not actually necessarily true that that is the best COVID vaccine. It doesn't actually point you to the truth. It just points you to the aggregate answer.

    Chris Wiggins: [00:09:19] Yes. So Neyman's take on what it is that we do with data, I think, did try to make that distinction. I think for Fisher, the scientist, he wanted to know what was true, and Neyman was looking at what we do when we use data and saying ultimately we're not so much interested in knowing which of two models of the world is the true model. Ultimately, we have to make a decision, and that sort of sits within another way of understanding what we want from data. Do we want data to describe the world? Do we want data to make a prediction about what's going to happen in the future? Or do we want data to tell us what action we should take in order to get some outcome that we want? That's a story we try to tell in the book about the difference between description, prediction, and prescription. And that's part of the divide between Neyman and Fisher.

    Mila Atmos: [00:10:07] So I want to skip forward again. There were so many tensions in the history of how the current form of AI came to be. Today it is primarily about machine learning and making predictions. And so I think this is maybe a really simple or base question, but how does machine learning actually work?

    Chris Wiggins: [00:10:28] Well, machine learning methods in general require a training set of data. So they look at some set of data, usually from the past, saying what has happened in the past, and then try to learn -- which is really just a statement about fitting or optimizing, whatever way you want to think about it -- a mathematical operation that tries to take even something as simple as fitting data to a straight line and generalize it to something as potentially complicated as predicting the next word after you've seen several words in a sentence. So once you're armed with a lot of data about what is the next word somebody's going to say, given the three or four or 800 words that they've said before, it turns out you can build a pretty successful predictive algorithm that will predict the next word. Important there is that it's done using data rather than some sort of understanding of the world. Often it's done in a way that has very little understanding of the world, which has been quite the revelation in the context of very complicated neural network models, where at the end you have very little understanding of how the prediction was made. But for most of the life of the academic field, artificial intelligence people were pretty sure that the way to do it was through rules and through actually understanding the way we think. Now, that's a subtle point because we don't really know how we think. We think we know how we think. But for most of the life of artificial intelligence, from 1955 until, say, the mid '90s, people were convinced that data was absolutely the wrong way to do it. And in fact, there's a great essay written about this by the scholar Herb Simon in 1983 called "Why Should Machines Learn?" which makes the point that if we really want to get artificial intelligence out of a computer, why should we try to do it using a machine learning approach rather than just programming what we know to be the way experts think about the world? That turned out to be, absolutely, well, a failure, compared to the data-abundant approach which is shaping our reality today.
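As a minimal illustration of "learning as fitting," here is a short Python sketch using toy data invented for this example: a least-squares line fit, and a next-word predictor that simply counts which word followed which in a tiny corpus. Both learn only from the data they are given, with no model of the world behind them; large language models are vastly more elaborate, but work in the same data-driven spirit.

```python
# "Learning" as fitting: both toy examples below learn only from data.
import numpy as np
from collections import Counter, defaultdict

# 1) Fit a straight line to past observations (toy data, invented here).
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])
slope, intercept = np.polyfit(x, y, deg=1)   # "training" = least-squares fit
print("prediction at x=6:", slope * 6 + intercept)

# 2) Predict the next word by counting what followed each word before.
corpus = "the cat sat on the mat and the cat slept".split()
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1                  # count observed continuations

print("after 'the':", follows["the"].most_common(1))  # most frequent next word
```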

    Matt Jones: [00:12:20] One thing I'd say about the data-abundant approach, particularly for a lot of the things that concern us, is that data isn't sort of just out there in the world to be found. A lot of it is data that is classified, augmented, considered by human activity. So human activity of classification -- say, for example, of how we look at web pages or what word we say next. That's the data that is the product of human action that much of machine learning attempts to emulate and predict. So constant human activity in classifying the world, categories and so on, deciding, becomes grist for the mill of machine learning in many, many cases. And so when we say something is making uninterpretable decisions about, say, sentencing in a criminal context, it's often doing so on the basis of a large amount of information about how humans have done that kind of task and then attempting to make predictions along those lines. So the data -- this is not true of all machine learning, but of many of the aspects of it that are most important in the current conversation -- absolutely depend on humans to produce much of the data that then is essential to building the kinds of systems that we're concerned with.

    Mila Atmos: [00:13:35] Well, as I sit here today -- and we're all avid users of our smartphones, and of course there's been the symbiotic rise of, let's say, the iPhone and social media, and while Apple professes to care about privacy -- we are firmly under the dominance of the ad revenue model, which is a big chapter in your book, and how advertising agencies demand ever more granular information about each of us in the pursuit of profits. So how did this model come to be accepted as the default, and how is it warping our reality?

    Chris Wiggins: [00:14:11] It's a great question. So advertising has been around for a long time, clearly. And the idea that free products should be ad supported has been around for a long time as well. In fact, the phrase "you are the product," we trace in the book to a video from the 1970s, actually not to some sort of modern -- meaning of this century -- realization. But we talk about how this mix between advertising and data has really created a lot of the most pressing concerns, the things that we set out as the stakes of algorithmically mediated reality in our world. So part of it is the fact that the delivery mechanism for all of this information in the palm of our hand is mediated by extremely successful optimization algorithms, and those algorithms are able to decide what is the content that can be delivered to somebody that is the most engaging. Right. And engagement can translate into simply how many things are clicked or how many minutes you're watching -- a product that's extremely useful to directly maximize the profit for these companies under the advertising model. Separately, of course, these companies have to convince somebody else, a marketer, to pay for those pixels, right? I'm an advertiser and I'm selling pixels, and some marketer from another company wants to pay for those pixels. I convince the marketer that my pixels are valuable because I'm convincing them my algorithm is valuable. So part of that is a narrative in which advertisers have to convince marketers that there's some value-add from having as much granular data as possible about people. Right. So it's really part of one narrative, and now it's also part of one economy, an extremely successful and dominant economy in the United States right now.

    Matt Jones: [00:15:48] And it's a good example of the kind of thing you were asking me about -- how is this an actionable history? In the early days of the Internet, it wasn't clear at all how people were going to pay for content, how email was going to be served and paid for. And it became to us blindingly obvious that it was going to be ads that were served by spying on us and then trying to give us things that sort of appealed to us in various kinds of ways. And understanding what made that possible: the combination of a sense that there wasn't really privacy around commercial data, but also the push by advertisers, in the move from more traditional forms of media to online media, to have a better understanding of their customers. So it comes together and then it congeals in such a way that it seems like we couldn't think otherwise. And so to understand a moment where we could think otherwise really was to set up an opportunity for us now to say, "Well, how might we rejigger systems in a way that doesn't have these properties that to us seem in many ways so negative and are not necessary?"

    Mila Atmos: [00:16:54] So how can we rejigger?

    Matt Jones: [00:16:56] Well, so that's a good question. I mean, Chris could perhaps talk a little bit, but it wasn't so long ago that people would tell you the subscription model was completely dead for all kinds of media. And The New York Times and many other sorts of media have shown this. But in our very moment, we're looking at what seems to be the long term death of Twitter and an explosion of things like Substack and other sorts of things which are based on entirely different kinds of economic models. So it's a little unclear, but that unclarity shouldn't be taken as an index of impossibility. It should be a moment of saying we don't have to put up with something that goes against so many of the fundamental values that we share.

    Mila Atmos: [00:17:35] Right. I mean, I have so many subscriptions now, right?

    Matt Jones: [00:17:38] Right, so maybe that's another possibility.

    Mila Atmos: [00:17:40] I mean, I'm being fee'd to death everywhere here. You know, it's definitely not free, what I'm consuming.

    Matt Jones: [00:17:44] Yeah.

    Mila Atmos: [00:17:47] We're taking a quick break to share about a podcast called The Catch.

    The Catch: [00:17:54] This season on The Catch, I head to the upper Gulf of California for a look at the shrimp industry and the efforts to make it more sustainable and less harmful to marine life and to fishing communities. Join me on my journey as I hear directly from fishers, environmentalists, and officials about efforts to untangle the mess and hopefully revive this area that Jacques Cousteau called the Aquarium of the World. Check out The Catch, Season Two, wherever you get your podcasts.

    Mila Atmos: [00:18:33] So I kind of want to circle back to the question about the relationship between data and truth, especially in the way that we're being consumed as data and the way that we are consuming data today.

    Chris Wiggins: [00:18:45] Well, I would say information is sort of an umbrella that includes the problems of advertising, news, and political persuasion. And at this point, all of those have become individual manifestations of what Zeynep Tufekci calls persuasion architectures. So part of the question we just raised is about how are you going to pay for things? And one model, as we spell out in the book, is the advertising model. It's one particular way that you can imagine monetizing people's attention. Subscription is an alternative, and there are plenty of others, including philanthropic, donation-supported, or state-supported examples of news. But in the context of news, I think I see that as one example of how we all have these forces on us. Lessig, the legal scholar, talks about norms, laws, markets, and architecture. So markets includes what are we willing to pay for? Architecture includes technological architecture -- what are the algorithms capable of? But a lot of this is about our norms. Do we think that it's okay to have, you know, technologies that spy on us all the time in exchange for convenience? That's really a normative statement about what we as a society support. The point about subscription-supported digital content is a good one. When The New York Times started a paywall in 2011, there were plenty of smart people saying that that was a ridiculous idea, because, as Stewart Brand said, "information wants to be free." Actually, by the way, that phrase "information wants to be free" is only the first half of the statement. It's actually much more nuanced than that. In any event, apparently many times information does, in fact, want to be expensive, and people normatively are willing to pay for subscriptions, as you just laid out.

    Mila Atmos: [00:20:26] So I want to skip forward again, back to the future, and my sinking feeling, and our current inflection point. As we're thinking about AI analyzing the massive trove of data about us, this raises questions about who has the power to make decisions on the basis of data. So the way we experience it today, it seems that corporations and ad agencies have all the power. But you argue in the book that we are in an unstable three-player game between corporate power, state power, and people power. Tell us more.

    Chris Wiggins: [00:21:01] Well, it gets back to your point about a sinking feeling of doom and gloom. We really do want people to have optimism. I certainly have optimism. Right. And so the whole book in some ways points the reader towards Chapter 13, which is about power. It's about what are the sources of power. If you think that corporations do have all the power and there's nothing that can be done, I can understand not being very optimistic then. Basically you're saying, "Golly, I hope that the corporations fight it out with each other in such a way that we benefit." So that's why we try to trace out in the book the fact that corporations don't exist as islands, right? They sit within, for example, governments, right? And states have their own sources of power that guide and constrain what companies can do. Not always constraining -- in some sense, the actions of governments are themselves creating the arena in which these companies can compete. So they are making possible innovation, not just constraining innovation. And then, of course, these corporations are both populated by people, right? There are employees internal to the company, and then we, as the market for these companies, are providing these companies sometimes our money, and certainly our data, and sometimes our talent, if you think about how those companies have to recruit people and convince them to go work at these companies. So we try to lay out in the book this unstable game among corporate power, state power, and people power, if I can use a less scholarly phrase. The more scholarly phrase in the book is private ordering, taken from the legal literature. But the idea is that that's one sort of analytic way of saying: it is not the case that corporations have all of the power, right? And in fact, the corporations do battle themselves somewhat. But we want to give people a sense that there are some tools at our disposal.

    Matt Jones: [00:22:40] And a lot of those tools are going to come together in what are often sort of curious kinds of solidarities, when civil rights leaders come together with certain kinds of corporations to put a check on certain kinds of government power and other corporations. There are numerous examples of when that has happened. And at some level, those different groups may have to hold their nose when they're working together because they don't share all of the values. But sharing all of the values is too high a precondition; it would eliminate the possibility of political action, or of economic action, at all of the various kinds of levels. So I think that's an important source of hope: there are lots of opportunities by understanding that there isn't total uniformity among corporations, total uniformity among citizens, and total uniformity around government.

    Mila Atmos: [00:23:30] Mm. Well, you both sound optimistic. So what is your take on Geoff Hinton resigning?

    Chris Wiggins: [00:23:37] It's really grabbed people's attention. It's amazing how many people read that story and reacted to it. And Hinton has given 1 or 2 interviews thereafter. I think it gets back to what Matt was saying earlier, that if you think about risks of artificial intelligence and AI-dominated realities, you can certainly imagine these futuristic Terminator scenarios of general artificial intelligence that's more intelligent than, and has power over, its designers. But it's much easier to just look at the real and present concerns today. Like, algorithms are already exacerbating inequalities for people. Algorithms already are operating in ways that are inscrutable and difficult for us to challenge, even in contexts where they're being leveraged by our governments. So there are real present problems today. Perhaps those problems are not big problems for people like Geoff Hinton or CEOs of extremely successful companies, but those are real and present problems today. So I think there's a large community of people who are concerned about AI who are focused on that, the problems that really exist right now. So, you know, I think it's interesting that Geoff resigned. I think his resignation is clearly sparking even more conversation. Clearly, there are many scholars and activists who have been looking at AI with a critical eye for decades. But I don't know that I would focus on the particular things that Hinton is focused on.

    Matt Jones: [00:24:58] Yeah. And I think it's fair to say that a lot of the people who have been pushing for more critical scrutiny of data and now AI are people who emerged both from within companies like Google, often in a very antagonistic context, and from elsewhere in academic circles, and who are not at such a level of prominence as Geoff Hinton. And they felt that, on the one hand, they have been articulating this for nearly a decade, and secondly, they've been focusing on these more concrete issues that are at the heart of our civil life rather than these kinds of potential existential risks. And I think it's understandable that they have seen their work insufficiently cited and considered in the public realm. And I think the more that we invoke both of those kinds of critiques, the richer our discourse can be.

    Mila Atmos: [00:25:49] Right. Well, there are so many hot takes right now in op-eds, you know, from the sky, as...

    Matt Jones: [00:25:54] Yeah, exactly.

    Mila Atmos: [00:25:54] ...you like to say. But there are people who are working in the here and now, today, on the problems with AI today. So what are the things that you're seeing, in terms of what the people or activists are really getting their teeth into in this moment, that should inform our thinking about AI right now?

    Chris Wiggins: [00:26:14] Hmm. Well, there are several concerns. An organizational concern for me has been the decimation of all of the AI ethics teams at all of the major tech companies. So headlines lately that concern me include headlines about companies just simply disbanding and giving up on attempting to construct AI ethics teams within those companies. There are some higher-visibility examples and some less highly visible examples, but by and large it was a big narrative for corporations a few years ago and has in some sense fallen by the wayside, for, I'm sure, a variety of reasons that are really interesting for scholars to construct after the fact. In our book, there's a chapter called "The Battle for Data Ethics," which is about this. How is it that we fight to define ethics in a way that we think is aligned with our values? And then how is it that we ask for these companies to design a process that is informed by these definitions and acts in a way that meets our norms, including our norms around consumer protection? Getting back to the final chapter, part of it is that we lack any sort of checks and balances around corporations in the way that we have come to expect checks and balances from our governments, right? So when we think about the way that the US federal government was in possession of abundant data in the 1960s and '70s, people reacted with a series of legislation and acts that we still rely on today for transparency into government and for protection of our data. Now that that power has moved over to the corporate sector, there is no corresponding check or balance. And so ethics has become this capacious term where we sort of put all of our hopes and dreams that maybe these companies will not be bad. But it's really amorphous. And as these companies sort of move away from even making an attempt, that's one concern of mine. Matt, what else do you see?

    Matt Jones: [00:28:01] A different way of looking at it is, where are there people who are very actively working in this domain? And one of the striking things is people's first instinct, I think, has to do in the US with federal legislation or in the EU with the GDPR, which is the privacy legislation that's so important. But much of the most important advocacy right now is happening at municipal, and state, and other layers of government. So particularly around questions of facial recognition or other kinds of surveillance technologies, which very rapidly become normalized once they're introduced into municipal contexts, a lot of activist action is working precisely at that sort of municipal level, as well as contesting things, say, on the floors of Congress, where right now the NSA is in the midst of worrying about the re-upping of legislation that it claims is essential to the national security of the United States, but clearly allows large amounts of capturing of data about Americans along the way. So I think it's really important to understand there are all these layers of governance in which different kinds of actors, from outright activists to government lobbyists, are working in very unstable, but in many cases powerful, ways to change our relationships with these technologies.

    Mila Atmos: [00:29:18] So it sounds like there are lots of people working on the problem, so to speak, between ethics and also trying to get good legislation and regulation in place to change the dynamic of how our data is both collected and analyzed. And so, as an everyday person, what are two things that I could be doing -- or you, the listener, could be doing -- to help bring this change about?

    Chris Wiggins: [00:29:47] So an obvious one in the States is that we're members of the electorate. So we enjoy democracy. We enjoy democracy in various ways. Democracy doesn't only happen once every four years, people should know. And so there's a lot you can do as a member of a democratic electorate. Right. Which has some checks and balances on your elected officials. I think there's a lot that we do by being informed and being critical. So one of the things that I think is useful in this dynamic is not just looking at the state, whether it's the US -- federal, state -- or GDPR in the EU, but the role of the press, right? The role of the press as a source of critical inquiry shapes the way these conversations take place, right? These companies react when they're called out in sufficiently high-visibility places. And part of why they react, of course, is because it puts pressure on elected officials to change the way these companies are regulated. But it also can make it difficult for these companies to establish a market position and, again getting back to norms, for these companies to influence our norms such that we think that that particular way of life is normal and acceptable and to be expected.

    Matt Jones: [00:30:52] And I'd say in all of the diverse communities we find ourselves in -- say, the school community we might be in, or the PTA that you might be a member of, your place of employment -- we are often beset by the introduction of new sorts of technologies that we are supposed to consent to, and having a conversation about the legitimacy of those sorts of things matters, because they often seem too good to be true. They're often presented, say, in an educational context, as software that's only going to improve outcomes. Well, what does it come with? What sort of demands does it place on people to assent to? And is there space to push back? Because the phenomenon we're discussing happens at a very high level, with huge corporations operating with seemingly limitless power, but it tends to operate through this incredibly granular introduction and transformation of our everyday life. And we do have possibilities of pushing back at various kinds of levels. And that may seem small, but it's an important additional component to political action at the municipal level, working within companies, and very large and important national and transnational legislation. I think all of these levers need to be put into practice to restore, as it were, the kind of ambitions we have for the kind of civil society that we want.

    Mila Atmos: [00:32:11] Yeah, don't agree to download the app right away.

    Matt Jones: [00:32:14] Yeah, exactly. No, I mean, it seems trivial, and it is actually one of the turning points in the history: when did the web browser make it so that you had to actively opt out of being spied upon? It seems like a trivial technical decision, but it wasn't. It was a decision that the browser had a different sort of relationship to us, and that history didn't have to go down that way. But because of the way it went down, we are introduced into a culture where we have to actively opt out of collection and... It seems small, but it is part of a much larger political project that one can become invested in.

    Mila Atmos: [00:32:49] Well, in the last chapter you quote an article about algorithmic injustice, which calls on us to resist the apocalypse-saturated discourse on AI that encourages a mentality of learned helplessness. So I think this has been incredibly illuminating to understand how we can deal with data in our everyday lives. And in that context, looking into the future, what makes you hopeful?

    Chris Wiggins: [00:33:17] I would say the diversity of powers at our hands. Right? So that quote actually does a much better job saying what I was trying to say earlier in too many words, which is that our learned helplessness reflects the lack of hope. And it's easy to give up on hope if you don't realize how unstable and unpredictable the three-player game is. Framing the game as unpredictable and unstable, I hope, gives people hope. So if people feel like there's nothing to be done, it's easy to just feel like, "Well, we simply have to consent." But in fact, you know, we have so much collective influence just via the way we construct our norms, right? That's what I mean by norms, after all. It's a collective statement about how we behave. And if our norms are "we won't pay for information at all costs," then we're effectively consenting to an ad model, right? We are consenting to surveillance capitalism if we go around saying that information has to be free. In general, though, I think one of the things that gives me optimism is to try to be more analytic about what are the various powers that shape the behavior of, in this case, private companies that have all of our data. Right, that includes the reaction of other companies -- you invoked Apple briefly, and Apple's stance since 2015 that privacy is a fundamental human right. The states -- we've talked about how it's not just the US federal government; there are all sorts of municipalities, and the EU, and other jurisdictions. And then, of course, all the things that we as individual people do, whether that's employees of the company who engage in collective action or whistleblowing or walkouts, or we as the public who give these companies the data, right. And just by using these companies, particularly under the ad model, we are actually empowering these companies, as well as, as members of the electorate, exercising our indirect power over the companies via the state. So that sort of analytic of power gives me optimism, to realize just how many tools actually are at our disposal.

    Matt Jones: [00:35:07] Yeah, a different form of optimism comes for me from the fact that our book emerged from a class we teach, and the amazing young people we teach are, to use an old-fashioned term by now, more than digital natives. And there was a story that lots of people like Mark Zuckerberg were telling, that privacy was dead -- get over it, this sort of thing. That is not at all obvious for these young people, the future leaders, both technological and political and corporate, and in other ways. So I have a lot of hope in that fluency with technology, combined with a striking lack of pessimism that they often have about the worlds in which they could live. That's a very potent combination. And our hope, in both teaching the class and in the book we produced, is very much to provide precisely those young people with tools, because they are going to fashion many of the answers to these questions. It's unlikely to come from us, but is likely very much to come from a very politically engaged, very technologically savvy generation. So that gives me a tremendous hope to see what they will be up to in the next 10 to 15 years.

    Mila Atmos: [00:36:20] Well, I have to ask you one more question then. In that case, do you really believe it's going to take 10 to 15 years, or is this something that we can change -- that we can change this dynamic of power -- in a shorter time span?

    Chris Wiggins: [00:36:33] Technology changes quickly. Markets change almost as quickly. Norms take a longer time, and laws take even longer. We have an economy right now organized around a few incumbents which are supported by the ad model to the tune of many, many dollars every year. So it's going to take a long time to dismantle that particular power structure. But I do believe that norms are quite fluid, right? People change the way they support some particular social issue with surprising rapidity, right? Surprising is relative -- it implies that you didn't know in advance that they were going to change the way they do things. But I do think norms can change pretty quickly, and often it's because there was some rapid change in the technology and then people had to decide, well, how do we feel about this technology? Large language models have been around for many years. Generative AI has been around even longer. Even GPT-3 was around -- we taught it in our class a year ago. But the release of a chatbot that's powered by GPT has really captured people's imagination. And it's really enjoyable for me to watch the way people's norms change so rapidly in higher education. For some people, there was an immediate reaction, like we have to ban it. And then there was a reaction of people saying, well, how does this change the way we think about our job as educators? Is it the case that part of our job as educators is to help people use this technology wisely? And Matt often talks about the use of a calculator. You know, it's not like we ban calculators entirely. We think about how it fits into the educational process -- or spell checkers, for that matter, or mail utilities that will suggest the end of the sentence to you. These are all changes in technologies, or architecture, so to speak, and it's really always interesting to see how dynamic our norms are in response to it. So norms can change quickly. Laws take a long time to respond. So I'm very optimistic about change, but I recognize that different types of change have different time scales.

    Matt Jones: [00:38:25] Yeah, and I think when we think about many of these technologies, it's useful not to think of them as, say, one device that you're just adding to the repertoire of things in your everyday life. We're talking about things that are changing the infrastructures of educational institutions, of news, of the way governments work at all levels, of corporations. And different sets of regulations and collections of data make possible different kinds of futures. So if we think about technologies as part of infrastructures, then we begin to be able to see things as not an either/or, but rather: look, if we're going to adopt the automobile, we don't have to choose to cut a highway across the Bronx and destroy communities. There are various ways of developing and using technologies. And above all, people will often say, well, technologies move faster than the laws. As Chris was just saying, that's true. But then they'll say something which is very much not the case: that the technology just implies how the law has to change. That is not true at all. The technology certainly suggests that laws need to be transformed, but it does not give us the detailed content of our norms or our laws. And that's our collective responsibility: to recognize that which is transformed by virtue of a technology, and then to think about values -- how does it fit into our selection of values, what can it do to build upon that, to inspire, and what might it do that would move against the sort of values that we want? So if we understand that technology doesn't imply norms and laws, we open up the space for our collective deliberation about the kinds of society we want, with that technology employed in ways that enable us, rather than impacting us negatively -- and particularly impacting the least empowered among us negatively.

    Mila Atmos: [00:40:13] That's a great reminder that we are people with agency and we make the decisions. Thank you very much, both of you, for joining me on the show. It was really a pleasure to have you on.

    Chris Wiggins: [00:40:23] Mila, thanks for having us.

    Matt Jones: [00:40:24] Yes, thank you, Mila.

    Mila Atmos: [00:40:26] Chris Wiggins is an associate professor of applied mathematics at Columbia University and the New York Times's chief data scientist. And Matthew L. Jones is a professor of history at Columbia and has been a Guggenheim Fellow.

    Next week on Future Hindsight, we are joined by Representative Anna Eskamani, who serves on behalf of Florida's 42nd District of Orange County in the state House of Representatives.

    Anna Eskamani: [00:40:55] Our slogan is "Working for You, Fighting for Us," which was not a slogan that we had when I first ran for office -- something that we developed over time because it reflected who we are: that we're working for you, and fighting for us. And I do think that is a part of our success.

    Mila Atmos: [00:41:09] That's next time on Future Hindsight.

    Did you know we have a YouTube channel? Seriously, we do. And actually, quite a lot of people listen to the show there. If that's you. Hello! If not, you'll find punchy episode clips, full interviews, and more. Subscribe at YouTube.com/FutureHindsight.

    This episode was produced by Zack Travis and Sara Burningham. Until next time, stay engaged.

    The Democracy Group: [00:41:44] This podcast is part of the Democracy Group.
