This is part 2 of a 3-part series on the post-AI possibility of machine-based Artificial Empathy. [Part 1]
In my last post, I defined what I mean by Artificial Empathy and how it differs in significant ways from the AI we know today. Now I am going to focus on the trends and changes happening right now that are enabling that technological shift.
What does AE need?
In general, machine learning needs input signals in a format it can read, and then some outcome to correlate them with.
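To make that concrete, here is a minimal sketch; the signal names and numbers are invented for illustration, and scikit-learn is just one convenient way to correlate a set of signals with an outcome.

```python
# A minimal sketch of "signals in, outcome out". The signal names and numbers
# are invented for illustration; scikit-learn is just one convenient way to
# correlate inputs with an outcome.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row is one session: [seconds_on_page, scroll_depth_pct, prior_visits]
signals = np.array([
    [12,  30, 0],
    [95,  80, 3],
    [40,  55, 1],
    [200, 90, 7],
])
# The outcome we want those signals correlated with (1 = clicked, 0 = did not)
outcome = np.array([0, 1, 0, 1])

model = LogisticRegression().fit(signals, outcome)
print(model.predict_proba([[60, 70, 2]]))  # estimated probability of a click
```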
A quick history of internet-scale optimization problems
When the advertising networks of the early internet emerged, the formula they were optimizing was:
- User sees ad X –> user clicks on link
How do you maximize the number of people who go “across” that arrow, from “I saw an ad” to “I clicked a link”? How do you maximize that yield?
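A toy illustration of that yield calculation might look like the snippet below; the ad names and counts are invented, and real ad servers are obviously far more sophisticated.

```python
# Toy version of the first optimization problem: pick the ad with the best
# click-through rate (clicks per impression). Ad names and counts are invented.
ads = {
    "ad_A": {"impressions": 10_000, "clicks": 120},
    "ad_B": {"impressions": 10_000, "clicks": 340},
    "ad_C": {"impressions": 10_000, "clicks": 95},
}

ctr = {name: stats["clicks"] / stats["impressions"] for name, stats in ads.items()}
best = max(ctr, key=ctr.get)
print(best, ctr[best])  # ad_B 0.034 -- serve ad_B more often to maximize clicks
```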
That was the first optimization problem, and Yahoo was the company that mastered display ads in that cost-per-click model. They figured out how to maximize the use of ads to generate clicks and drive revenue. Solving that equation made many people rich (including, indirectly, Mark Cuban).
That was great for generating traffic, but not necessarily for creating sales on the sites the ads drove people to. The next problem to be solved was optimizing for specific actions.
- User sees ad X –> user clicks on link –> user signs up for service or buys product
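One way to picture that longer chain is that the yield now multiplies through each arrow. In the invented numbers below, an ad with a worse click rate still wins once conversions are counted.

```python
# Toy version of the second optimization problem: the yield multiplies through
# each arrow, so an ad with fewer clicks can still win if more of those clicks
# turn into sales. All numbers are invented for illustration.
ads = {
    "ad_A": {"ctr": 0.034, "conversion_rate": 0.01, "revenue_per_sale": 20.0},
    "ad_B": {"ctr": 0.012, "conversion_rate": 0.08, "revenue_per_sale": 20.0},
}

def value_per_impression(ad):
    return ad["ctr"] * ad["conversion_rate"] * ad["revenue_per_sale"]

best = max(ads, key=lambda name: value_per_impression(ads[name]))
print(best, round(value_per_impression(ads[best]), 4))  # ad_B wins despite a lower CTR
```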
And it was Google, not Yahoo, that solved that problem. The transition killed Yahoo, although it has taken until recently for Yahoo to really bleed all the way out. And in turn, Google is now one of the most powerful companies in history.
What signals does machine learning need to figure out your emotions?
If you want to measure what I call “surface emotions,” you need something more qualitative than a “click,” and you end up with an equation something like this. The first version of this was the Twitter and early Facebook optimization problem.
- User sees content X in their feed –> user interacts with content (share, comment or like)
Where the first two optimization problems were basically measuring one outcome, we are now measuring three outcomes we care about and optimizing for several of them at the same time.
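A crude way to picture scoring several outcomes at once is sketched below; the weights are arbitrary guesses of mine, not anything a real platform has published.

```python
# Crude sketch of scoring content on several outcomes at once. The weights are
# arbitrary guesses of mine, not anything a real platform has published.
weights = {"share": 5.0, "comment": 3.0, "like": 1.0}

def engagement_score(counts, impressions):
    """Weighted interactions per impression for one piece of content."""
    total = sum(weight * counts.get(action, 0) for action, weight in weights.items())
    return total / impressions

post_a = {"share": 40, "comment": 120, "like": 900}
post_b = {"share": 5, "comment": 300, "like": 1500}
print(engagement_score(post_a, 50_000), engagement_score(post_b, 50_000))
```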
And clearly, Facebook solved this better than Google.
Facebook has since taken that a step further, and their version looks something like this:
- User sees content X in their feed and user
  - –> Shares it
  - –> Comments on it
  - –> Likes it
  - –> Loves it
  - –> “HaHa”s it
  - –> “Wow”s it
  - –> “Sad”s it
  - –> “Angry”s it
That is a much more complicated optimization problem, but for the most part, Facebook has been content to optimize for return visits and screen time per day: how much “attention” the platform receives so that it can sell advertising to third parties. They are not actively looking to measure your mental state or alter it.
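Still, to see why those reactions are a richer signal than a bare click, here is a rough sketch that turns raw reaction counts into a distribution; the counts are invented, and the idea that each reaction hints at a feeling is my own reading, not anything Facebook has published.

```python
# Sketch: turn raw reaction counts on a post into a normalized distribution.
# The counts are invented, and the idea that each reaction hints at a feeling
# is my own reading, not a mapping Facebook has published.
from collections import Counter

reactions = Counter(like=820, love=140, haha=60, wow=25, sad=310, angry=445)

total = sum(reactions.values())
distribution = {name: count / total for name, count in reactions.items()}

for name, share in sorted(distribution.items(), key=lambda kv: -kv[1]):
    print(f"{name:>6}: {share:.1%}")
# A post skewing toward "sad" and "angry" reads very differently from one
# dominated by "haha", even if the two have identical engagement totals.
```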
As we covered in the last post, just 10 likes are enough for a properly trained algorithm to make predictions about you that are more accurate than your coworkers’ would be.
If a machine is going to figure out what you are feeling, it is going to need to see you, and that means it is going to need cameras.
And the “reacts” above give the machine learning algorithms more to work with. The real transformation will come when the machines can watch your reactions. That would give them the ability to detect micro-expressions, not just surface emotions.
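As a rough sketch of the pipeline that would require, the snippet below grabs a frame from the default camera and finds a face with OpenCV; the classify_expression step is a hypothetical placeholder for a trained expression model, not a real library call.

```python
# Rough sketch of the pipeline a machine would need to "watch your reactions":
# grab a frame from the default camera, find a face, and hand the face crop to
# an expression model. classify_expression() is a hypothetical placeholder,
# not a real library function.
import cv2

def classify_expression(face_image):
    # Placeholder: a real system would run a trained classifier here and
    # return something like {"joy": 0.7, "surprise": 0.2, ...}.
    return {"unknown": 1.0}

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

camera = cv2.VideoCapture(0)          # the camera already pointed at your face
ok, frame = camera.read()
if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_detector.detectMultiScale(gray, 1.3, 5):
        face = gray[y:y + h, x:x + w]
        print(classify_expression(face))
camera.release()
```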
What parts of the AE framework are we building out right now and why?
So what technologies are currently being deployed that could give machine learning that sort of capability? Let’s look at a few options that could be used to watch you and determine your current mental state and emotions.
- Laptop and webcam: You are probably looking at this article on a device with a camera pointed directly at your face. It may be a laptop, or it may be a phone, but I would bet that 99% of people reading this article *could* have a camera turned on and watching them if they wanted. The good news for you is that the camera is turned off by default.
- Amazon Echo Look: Similar to a video phone in a lot of ways, this new home assistant is a little more “always on” than your laptop, and from your point of view you have less control over it. Could it be gesture-based at some point? Could it watch for emotional cues when you summon it with “Hey Alexa”? It sure seems possible.
- Xbox One and Kinect: This one had to be rolled back, but remember when the Xbox One launched? It required an always-on internet connection AND the Kinect v2 sensor to be connected. In the demo, it could pause video playback when gaze detection determined you were not looking at the screen, AND it could read the facial expressions of up to six people in the room (a rough sketch of that pause-on-look-away logic follows this list). That would have been a windfall of personal emotion data that could have been fed into some serious machine learning efforts to create the first practical AE.
- iPhone X and Animoji: If the Xbox One suffered from poor timing and a lack of benefit to users, the iPhone X does not. Like the Xbox One, it has added a sort of “always watching you” camera, infrared depth scanning, and a constant internet connection; but where the Xbox One did not offer any new features that resonated with users, the iPhone tied this technology to Face ID and animated emoji. So this is the first really successful deployment of the type of sensor array that will make AE possible.
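As referenced in the Xbox One item above, here is a hedged sketch of what that pause-on-look-away behavior might look like; gaze_on_screen() is a hypothetical placeholder for the sensor’s gaze output, and nothing here reflects the real Kinect SDK.

```python
# Hedged sketch of the "pause when nobody is watching" behavior from the Xbox
# One demo. gaze_on_screen() is a hypothetical placeholder for the sensor's
# gaze-detection output; nothing here reflects the real Kinect SDK.
import time

def gaze_on_screen():
    # Placeholder: a real system would return True only while the sensor sees
    # at least one viewer looking at the screen.
    return False

def playback_loop(check_interval_sec=1.0):
    paused = False
    for _ in range(5):  # a real loop would run for the length of the video
        watching = gaze_on_screen()
        if not watching and not paused:
            paused = True
            print("No one is looking -- pausing playback")
        elif watching and paused:
            paused = False
            print("Viewer is back -- resuming playback")
        time.sleep(check_interval_sec)

playback_loop()
```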
I am sure Google, Amazon, and Microsoft are not far behind in making this widespread.
How close are we to AE being real?
At this point, there are no technical barriers to any of the parts needed to create machines that know our internal mental landscape better than we know ourselves.
It is really only a matter of time until there are enough sensors deployed and the models are trained to map those inputs into a hyper-resolution picture of the real us.
In the next post, I will expand on what impact AE will have on society once it is live.