So I actually don’t have much to say here because I’m still in the process of learning and understanding certain things about my project and what I want to achieve.
This week, I found a congressional hearing called Optimizing Engagement: Understanding the Use of Persuasive Technology on Internet Platforms. This hearing took place last year in June and featured four witnesses, all of whom are prominent people in the industry:
- Mr. Tristan Harris, Co-Founder and Executive Director, Center for Humane Technology
- Ms. Rashida Richardson, Director of Policy Research, AI Now Institute
- Ms. Maggie Stanphill, Director, Google User Experience, Google, Inc.
- Dr. Stephen Wolfram, Founder and Chief Executive Officer, Wolfram Research
You can read more at https://www.commerce.senate.gov/2019/6/optimizing-for-engagement-understanding-the-use-of-persuasive-technology-on-internet-platforms, and you can watch the video of the hearing here: https://www.youtube.com/watch?v=yjKV_j_mFIQ
I’ve been watching the hearing (it runs about 2 hours; I’m almost an hour in) and I’ve learned A TON, which is very difficult to condense into a blog post.
However, a key term from the hearing is asymmetric power, which Harris described with the following anecdote:
“I first learned this [asymmetrical power] as a magician as a kid. I learned that the human mind is highly vulnerable to influence. Magicians say ‘pick any card.’ You feel that you’ve made a ‘free’ choice, but the magician has actually influenced the outcome upstream because they have asymmetric knowledge about how your mind works.”
Tristan Harris
What Harris is saying is that in any magic trick, the participant feels they’re making a free choice when asked to pick a card, but in fact the magician knows something they don’t and can ultimately influence the outcome because they understand how the participant’s mind works.
Now take this idea and apply it to machine learning and artificial intelligence. When you’re on platforms such as Facebook, Instagram, or even Twitter, the content shown to you is selected automatically. An algorithm is in place showing you content that it predicts you’ll engage with. In the magician analogy, the algorithm is the magician: it has asymmetric knowledge about how your mind works, and it has power over you in that it controls what you’re seeing, without you even knowing it.
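To make that concrete, here’s a minimal sketch (in TypeScript, with all names and numbers made up for illustration) of an engagement-ranked feed. The user scrolls “freely,” but the ordering was decided upstream:

```typescript
// Hypothetical sketch of an engagement-ranked feed: the platform scores each
// post by predicted engagement and shows the highest-scoring ones first.
// The user never sees this step -- that's the asymmetric knowledge.

interface Post {
  id: string;
  predictedEngagement: number; // model's guess at how likely you are to react
}

function rankFeed(posts: Post[]): Post[] {
  // Sort descending by predicted engagement without mutating the input.
  return [...posts].sort((a, b) => b.predictedEngagement - a.predictedEngagement);
}

const feed = rankFeed([
  { id: "calm-news", predictedEngagement: 0.12 },
  { id: "outrage-bait", predictedEngagement: 0.87 },
  { id: "friend-photo", predictedEngagement: 0.45 },
]);
console.log(feed.map((p) => p.id)); // ["outrage-bait", "friend-photo", "calm-news"]
```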
The second key point I’ve learned is about how AI actually works and how complex it is to create constraints on it. Dr. Wolfram said the following in his testimony:
“People often assume that computers just run algorithms that someone sat down and wrote. But modern AI systems don’t work that way. Instead, lots of the programs they use are actually constructed automatically, usually by learning from some massive number of examples, and if you go look inside those programs, there’s usually embarrassingly little that we humans can understand in there. Here’s the real problem: It’s sort of a fact of basic science that if you insist on explainability then you can’t get the full power of a computational system or an AI.”
Stephen Wolfram
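Here’s a toy example of what I take Wolfram to mean (my own illustration, not his): the learned “program” is just a set of numeric weights fit to examples, and inspecting them tells a human almost nothing:

```typescript
// Instead of a hand-written rule, this tiny model is "constructed
// automatically" by fitting numeric weights to labeled examples.
// The weights ARE the program -- there's nothing human-readable inside.

type Example = { features: number[]; label: number };

// Train a tiny linear model with stochastic gradient descent.
function train(examples: Example[], steps = 1000, lr = 0.1): number[] {
  const weights: number[] = new Array(examples[0].features.length).fill(0);
  for (let s = 0; s < steps; s++) {
    for (const { features, label } of examples) {
      const prediction = features.reduce((sum, x, i) => sum + x * weights[i], 0);
      const error = prediction - label;
      features.forEach((x, i) => (weights[i] -= lr * error * x));
    }
  }
  return weights;
}

const weights = train([
  { features: [1, 0], label: 1 },
  { features: [0, 1], label: 0 },
  { features: [1, 1], label: 1 },
]);
// The learned "program" is just numbers, roughly [1, 0] here:
console.log(weights);
```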
So that clears up how AI works, in a very condensed nutshell. The next thought that intrigued me is that Wolfram brought up the idea of creating a contract that says what the AI is allowed to do. This is a very new idea to me and it really makes me wonder what the possibilities are. Wolfram elaborates:
“Well, partly through my own work, we’re actually starting to be able to formulate computational contracts. Contracts that are not written in legalese, but in a precise executable computational language suitable for an AI to follow.
But what should the contract say? I mean, what’s the right answer for what should be at the top of someone’s newsfeed? Or what exactly should be the algorithmic rule for balance or diversity of content?”
Dr. Wolfram
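To get a feel for what a computational contract might look like, here’s a hedged sketch (the post shape, the diversity rule, and the 50% threshold are all my own illustrative choices, not anything Wolfram specified): a rule written not in legalese but as an executable check that a feed must satisfy.

```typescript
// Illustrative "computational contract" for a newsfeed: no single source
// may supply more than half of the visible feed. The contract is an
// executable predicate a platform (or regulator) could run against any feed.

interface RankedPost {
  id: string;
  source: string; // e.g. the outlet or account a post comes from
}

function satisfiesDiversityContract(feed: RankedPost[]): boolean {
  // Count how many slots each source occupies.
  const counts = new Map<string, number>();
  for (const post of feed) {
    counts.set(post.source, (counts.get(post.source) ?? 0) + 1);
  }
  // Contract clause: every source stays at or below half the feed.
  return [...counts.values()].every((n) => n <= feed.length / 2);
}

const rankedFeed: RankedPost[] = [
  { id: "a", source: "outlet-1" },
  { id: "b", source: "outlet-1" },
  { id: "c", source: "outlet-2" },
];
console.log(satisfiesDiversityContract(rankedFeed)); // false: outlet-1 fills 2 of 3 slots
```

Of course, as Wolfram asks, the hard part isn’t executing the contract but deciding what it should say in the first place.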
These are the main points from my research so far that I’ve found thought-provoking and important.
Nina and I spent some time chatting after our 1:1s, and she thought it would be interesting if I made a web app that has the user essentially act as an AI, to really demonstrate what it does and how it works. The first thing that really needs to be addressed is the public’s lack of awareness of what’s going on. If you’ve seen The Social Dilemma on Netflix, you’d know that the documentary portrayed the AI as three nearly identical people who were all in charge of the content being shown to the kid. I would do something similar, except the experience would be more educational for the user. Nina said I could start off by saying “Welcome, you’ve just been installed as the new AI.” Then the user would have two buttons to choose from: Start job now or Learn about the Job. In the Learn about the Job path…
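Here’s a rough sketch of how that opening branch might look in code (the passage names and text are placeholders I’d refine once I actually start mapping things out):

```typescript
// Hypothetical branching flow for the "you are the AI" experience.
// Each passage has text plus the choices that lead to other passages.

interface Passage {
  text: string;
  choices: { label: string; goTo: string }[];
}

const story: Record<string, Passage> = {
  start: {
    text: "Welcome, you've just been installed as the new AI.",
    choices: [
      { label: "Start job now", goTo: "job" },
      { label: "Learn about the Job", goTo: "learn" },
    ],
  },
  job: { text: "Pick which post to show the user next...", choices: [] },
  learn: { text: "Your job: keep the user engaged. Here's how...", choices: [] },
};

console.log(story.start.text); // entry point of the experience
```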
She recommended that I use https://twinery.org/ to map out the experience, since it’ll make it super easy to create. I’m curious whether I should stick to a web-based application or try out a mobile app… thoughts?