
Video Interview with Tristan Harris


Interview with Tristan Harris, former Google Design Ethicist and co-founder of the "Center for Humane Technology", on the attention economy and the daily manipulation by the AI of Facebook and Co.


Hi Tristan, thanks for taking the time for this interview. We are talking about digitization, and first I'd like your opinion — if you would complete these sentences. What's your hope for digitization?

Tristan Harris: My hope for digitization is that we as the technology industry focus on making sure that we are strengthening the social fabric and strengthening the bonds and the coordination capacity of humanity. So as opposed to trying to crawl down people's brainstems and get things out of them, that we instead ask: there is this thing called the social fabric, which includes our relationships, democracy, shared truth, mental health, romantic relationships. All of these are part of this invisible ecology, almost like an environment, and what I'm excited about is for technology to be humane, to wrap around those parts of the social fabric and fit them like a glove. Because right now technology is designed to fit our lizard brain like a glove. It wants to climb onto your amygdala, your fear center, and wrap perfectly around that, or wrap around your dopamine slot-machine reward center. That's fine, but it's causing all these other problems. So it's not that we don't want technology or we don't want to digitize. We want to digitize and make more efficient all the things that stand between us and the things that make us extra-human. One of the lines we use is: We don't want technology that makes us super-human, we want to bring out what makes us extra-human. So that's what I think we need to do.

And your biggest fear? That we end up with the opposite scenario?

Harris: Well, the opposite scenario, which unfortunately is also the default scenario, is kind of doom, just to be clear. Because I think what people need to get is, you know, that we are having technology that gets better and better and more and more persuasive, overpowering the limits of the human mind. And I say that and it sounds like a conspiracy theory, like I'm wearing a tinfoil hat. But, you know, when I was a kid I was a magician, and that teaches you that you can have an asymmetric amount of power over someone else's experience without them realizing it. You can overpower the limits of someone's mind. And technology, especially with AI, is going to get better and better at knowing your weaknesses, better than you know yourself. And once it does that, it can take control. Look, think about two military actors. If I'm a military actor and I know your plans and your weaknesses better than you know your own weaknesses, which one wins? So we have a supercomputer pointed every day at two billion people's brains, called Google and Facebook. And, you know, we're bringing the same Paleolithic hardware from millions of years ago, so it's like bringing a knife to a space laser fight. The default outcome will be bad, because we will lose control of our mental sovereignty. And democracy is based on that, and shared truth is based on that. So we are debasing the fundamental substrate of what makes our society work if we don't fix this problem.

So that's the race for attention you are talking about?

Harris: Yes, the race for attention comes from the advertising business model. Because, you know, Herbert Simon said in the 1970s that when information gets abundant, attention gets scarce. It used to be that there was a competition for who has more information, because information was not abundant. It was hard to get access to information. Then the internet came along and it just exploded information, and all of that has to fit into a finite pool of attention. That's called the attention economy, and there is only so much of it; it takes nine months to grow a new human being and plug them into the attention economy. So it becomes what we call the race to the bottom of the brainstem: I have to go lower into your brainstem than my competitor, otherwise I won't get as much attention from you. And so it becomes not just a race to the bottom of the brainstem, but a race to frack attention. For example, if we run out of attention, we want you to start paying attention to two or three or four things at once, because then we can quadruple the size of the attention economy and sell it out to advertisers. But what we are selling to advertisers is crappier, smaller time slices of attention, more and more like fake slices of attention, and meanwhile we are fracking our brains. The average time that we spend focused on a computer screen is 40 seconds. And that was two years ago; it's getting worse. You know, we can't read critical books anymore, we're losing nuance, complexity. The world is getting more and more complex, and we are thinking simpler and simpler thoughts. We are saying simpler and simpler things about a more and more complex world. And this is a recipe for disaster.

Do you really think it's possible that tech can hack a human being?

Harris: I think people like to think: "Oh, it's unfortunate, those dumb people over there got affected by Russian propaganda, or those people over there got sucked into Youtube, but not me. You and I, we are here in front of this castle, we are the smart ones. Only those people get affected." In magic (I was a sleight-of-hand magician) there is a saying that usually the people who have PhDs are the easiest to manipulate, because they are so confident. And magic doesn't operate at the level of what you know, of your intelligence. How can a 12-year-old magician fool a PhD in astrophysics? Because it's not about the PhD. It's about how your mind works. And our feelings are increasingly hackable. I knew of a group during the French elections (I later found out about this group) that was manipulating the election by looking at your public social media posts about a controversial topic. Let's say immigration. And let's say you used words like "it's a disgrace" or "these animals"; whatever words you were using, they would generate content that matched your own word choices. So you would perfectly nod your head, because someone was telling you the exact thing that you wanted to hear.
So we don't even know how we are affected. It's because we have so little self-knowledge that we really created this runaway system that is pulling on the puppet strings of two billion people. So if this is the ant colony of humanity and it's got two billion people in it, this digital Frankenstein is tilting the playing field in all these crazy directions. And we have to stop it.

Is there an ethical way to steer the thoughts and emotions of billions of people? 

Harris: Well, I think the thing about this question is that people have to realize it's not as if Google can choose not to influence. Meaning, can you tell Google: "Hey, please stop influencing all of people's thoughts and all of people's actions"? You can't ask them to do that, it's impossible. The search results page will prefer certain results and not others, by definition. So that means some search results will win, which means those influence people most and others influence them less. And it's the same thing with Youtube and those other products. So we need to actually have a model for how you ethically influence people's thoughts, and that involves aligning incentives between those who are designing the technology and those who are using it. And letting people bring their own values to the table, as opposed to technology trying to crawl down your brainstem and hijack your values. The biggest way to ethically influence people's thoughts, what I call ethical persuasion, is to make sure that the persuader's goals are in line with the persuadee's. That's like a psychotherapist who cares about the result for you. Or a children's developmental psychologist who cares, together with the child, about that child's development. So it's that kind of nurturing, steward-like relationship, stewardship, custodianship, that kind of relationship that we need.

This sounds like a danger for democracy.

Harris: Democracy is based on the idea that human thoughts and feelings are the foundation for how we crowd-source sense-making and choice-making. And we are in a world where you can, at mass scale, not at a shallow level but at a deep level, manipulate people's thoughts and feelings, as Russia did in the United States, for example. And as I know the playing field, Iran, North Korea, Saudi Arabia and China are all now realizing how cheap it is. You know, the joke was that for less than the cost of an F-35, Russia was able to massively influence the US election, reaching more than 126 million Americans, which is more than 90 percent of the US voting population. If we don't have sovereign beliefs or thoughts formed through conversation with facts and shared truth, that's the end of democracy. In the words of Yuval Harari, who is a close friend of mine, democracy becomes an emotional puppet show, where it is just about playing with and manipulating feelings. And this is a really big deal, because the democratic system is based on "the customer is always right", "the voter knows best". This is what we have to preserve, and we have to figure out how to protect and strengthen the foundation of what we think and feel. We have to really develop that as a field.

Is that something the users themselves can fix, or is there a need for regulation and for politics to wake up? Or is it something the companies should do themselves, with self-binding ethical rules?

Harris: Well, it certainly can't be done just by individual action. That would be like trying to stop climate change by turning off your light bulbs and driving a Tesla. It'd be great if we all did that, but individual action is nowhere close to enough. In the case of climate change, the top 100 companies make up 71 percent of emissions. The same goes for the social climate change we call human downgrading, because this climate change of culture is downgrading all these facets of human life, a mass human downgrading. It can only be solved by changing these business models, and that has to be done at the regulatory level. The point is that we have to change it from the top. We have to change the business models of these companies, and that involves policy makers, for example reclassifying these companies from being in a peer-to-peer contract relationship with people to being in a fiduciary relationship. That acknowledges that these companies have asymmetric power. They have privileged access to information about you that could be used to extort you, the same level and degree of asymmetry that a priest or a lawyer or a psychotherapist or a doctor has. So the line is that Silicon Valley tends to defend itself to Congress using neoliberal economics, but they run their companies based on behavioral economics. Behavioral manipulation, which is the admission that they have asymmetric power to manipulate you. But then they tell Congress: "We actually have an equal relationship with our users, they are consciously choosing to consent." And so this is the fallacy that has to be fixed. As soon as you fix that fallacy, it means they can't have the advertising business model. Because the advertising business model would be like a doctor whose entire business model is to sell you whatever will maximize their revenue, instead of what's good for you, given the sensitivity of that relationship.

But can there be a kind of ethical advertising model? What is the future of these companies?

Harris: Well, the future of the advertising model is going to be deeply called into question by these changes. I will say that on a spectrum, advertising that treats you with dignity, as a conscious agent who is trying to live your life on your own terms, is closer to the advertising in Google search, which might be controversial for some, but I think it is on the better side of advertising. Compared to advertising that says: I don't even care who you are or what you are, my entire job is to crawl down your brainstem, manipulate you, get you using this for as long as possible, and tilt the playing field towards conspiracy theories. That engagement advertising, which is trying to get your attention, is the big problem. And that's the thing that we really have to shut down.


Are we all puppets of Facebook & Co?

Social media manipulate us. We are uninformed and live in filter bubbles. The mature, responsible citizens among us are disappearing.
