I have been watching some more AI-themed Lex Fridman podcasts. I have reached a point where I am more interested in going meta, taking the podcasts as data points. Maybe I should let an AI study the sample (so, more power to Lex: the more data, the better). It might come up with interesting results about that ubiquitous concept, bias. I suspect that it is but a buzzword: it repackages facts about human nature and the roles we take on, but somehow suggests that we can get rid of them, just as statistics has the concept of bias correction.
Sometimes the boys display the male role via tech talk: they are building stuff, having fun with it, and competing with each other. And sometimes the female role reflects on the building of stuff, on the having of fun, and on the competition. Neither is the male role the ‘role of the male’, nor the female role the ‘role of the female’, but of course the correlation between role and gender is substantial. Both roles are important, and both can go wrong. We know how the male role can go wrong, because history. We will have to learn how the female role can go wrong, now that it seems to have gained dominance in Western societies. As Mary Harrington commented: "In a sense, the internet has cucked all of us. [...] Women are just as aggressive and competitive as men but they go about it differently. [...] Once you transfer all of human interaction onto the internet you foreclose the possibility of physical violence, and in a sense it means that all conflict now happens in the female key."
To illustrate some of the issues, I transcribed (and lightly edited) two sections from Lex Fridman’s podcast with Kate Darling:
[around 24:00] Kate: For example, you create a robot that looks like a humanoid and it's, you know, Sophia or whatever. Now suddenly you do have all of these issues: are you reinforcing an unrealistic beauty standard? Are you objectifying women? Why is the robot white? I think that with creativity you can find a solution that's even better where you're not even harming anyone and you're creating a robot that looks not humanoid but like something that people relate to even more, and now you don't even have any of these bias issues that you're creating. How do we create that within companies? I don't think that edginess or humour or interesting things need to be things that harm or hurt people, or that people are against. There are ways to find things that everyone is fine with. Why aren't we doing that? […]
[around 27:00] Lex: In the book you have a picture of two hospital delivery robots with a caption that reads: “two hospital delivery robots whose sexy nurse names Roxy and Lola made me roll my eyes so hard they almost fell out”. What aspect of it made you roll your eyes? Is it the naming?
Kate: It was the naming. The form factor is fine, it's like a little box on wheels. The fact that they named them is also great; that'll let people enjoy interacting with them. We know that even just giving a robot a name facilitates technology adoption. People will be, like, “Betsy made a mistake, let's help her out” instead of “this stupid robot doesn't work”. But why Lola and Roxy? Those are too sexy. I mean, there's research showing that a lot of robots are named according to gender biases about the function that they're fulfilling. Robots that are helpful assistants, like nurses, are usually female gendered. Robots that are powerful, all-wise computers, like Watson, usually have a booming male-coded voice and name. You're opening a can of worms for no reason. Just give it a different name. Why Roxy? It's because people aren't even thinking. I don't like PR departments, but getting some feedback on your work from a diverse set of participants listening and taking in things that help you identify your own blind spots – then you can still make your own leadership choices and ignore things that you don't believe are an issue, but having the openness to take in feedback, and making sure that you're getting the right feedback from the right people, I think that's really important.
Lex: So, don’t unnecessarily propagate the biases of society.
When listening to such reasoning, so many thoughts come to mind that I am unable to arrange them into a coherent whole and have to fire ugly bullet points instead:
Why are Roxy and Lola sexy nurse names, but Betsy and Sophia are not? Might there come a time when it is the other way around?
If you were asked not about a company building robots but about a company designing sexy nurse costumes, would you complain if they named their products Betsy or Sophia? Or would you demand that the business be closed down altogether?
Why are ‘you’ creating bias issues if ‘you’ create a robot? Is this the same ‘you’?
What is a ‘bias issue’ anyway? Is bias some Platonic universal, and is becoming aware of it the issue?
The very etymology of edginess suggests harming and hurting: an edge is what cuts.
Commissioned feedback is not feedback (indeed, Lex makes this point later in the podcast: departments assigned the task of identifying harm will find harm, even if there is none). With a two-step filter (the ‘right feedback’ from the ‘right people’) you can achieve complete control.
I stay away from Twitter, but I have the impression that Elon Musk just fired the ‘diverse set of participants listening and taking in things that help you identify your own blind spots’.
Just say ‘the roles’ instead of ‘gender biases about the function’ and undo the strange layering of the sociological on top of the technological.
They shall not hurt nor destroy in all my holy mountain, saith the Lord, after having got some feedback on His work from a diverse set of participants listening and taking in things that helped Him identify His own blind spots.
It is not ours to find things that everyone is fine with. Should we try when we build something? Sure. Should we point out to others when they go too far? Sure. But with the measure we use, it will be measured to us.