Like history, research in artificial intelligence (AI) follows a spiral pattern. At times, everybody is talking about it, dreaming about all the amazing things we will use it for. Then it goes back into hiding, only to reappear twenty years later with some spectacular breakthrough, mostly driven by advances in computing power.
My own forays into AI took place on a lower arm of the spiral. Here you can watch the world cup finals of the RoboCup simulation league, from 1999 to 2022. I was present at the 1999 Stockholm and 2000 Melbourne tournaments, and it was a lot of fun. For the last 15+ years I have been doing financial modelling, which is also being drawn into the current AI euphoria, so I am trying to keep up with the latest developments. Lex Fridman’s podcast is an excellent place to start. He has worked hard on it, and now he gets to interview many big names, not only in the AI business. He also comes across as a very nice person. If you put out several rounds of two-to-five-hour podcasts each week, it can’t be profilicity only.
Recently, he talked to Ray Kurzweil and to Rana el Kaliouby. I have transcribed (and lightly edited) two sections from each of the two podcasts, and would like to comment on them.
From the Ray Kurzweil podcast (starting around 00:22:56):
We already had one example of simulated biology, which is the Moderna vaccine, and that's going to be now the way in which we create medications. They were able to simulate what each example of an mRNA would do to a human being, and they were able to simulate that quite reliably, and they actually simulated billions of different mRNA sequences and they found the ones that were the best. They did that in two days. Now, how long would a human being take to simulate billions of different mRNA sequences? I don't know that we could do it at all but it would take many years. They did it in two days, and one of the reasons that people didn't like vaccines is because it was done too quickly. So, they actually included the time it took to test it out which was 10 months. So they figured, okay, it took 10 months to create this. Actually it took us two days and we also will be able to ultimately do the tests in a few days as well because we can simulate how the body will respond to it. That's a little bit more complicated because the body has a lot of different elements and we have to simulate all of that but that's coming as well. So, ultimately we could create it in a few days and then test it in a few days and would be done. And we can do that with every type of medical insufficiency that we have, so curing all diseases, improving certain functions of the body, supplements, drugs for recreation, for health, for performance, for productivity, all that.
Now, I would argue that Ray Kurzweil is very wrong about the success of the Moderna vaccine – it took them two days to wreak havoc on humanity. But then his area of expertise is the future and not the past. He may be right (unfortunately) that this model-based “testing” will be the way medical products are pressed into the market (and then into our throats, arms, and various orifices) in the future. However, remember with William Briggs that AI models are models, and models only say what they are told to say. It is true that the body has a lot of different elements, but it also is more than just a lot of different elements. Non-bodily resurrection is just not a thing. Ray Kurzweil’s formulation “we would be done” might come true in a very unpleasant way.
Another excerpt from the Ray Kurzweil podcast (starting around 01:11:46):
Lex: So, do you think we'll have a world of replicas, of copies? Would there be a bunch of Ray Kurzweils, like, I could hang out with one, I can download it for five bucks and have a best friend Ray, and you, the original copy, wouldn't even know about it? Do you think that world is feasible, and do you think there's ethical challenges there, like, how would you feel about me hanging out with Ray Kurzweil and you not knowing about it?
Ray: Doesn't strike me as a problem.
Lex: Which you, the original?
Ray: Would that cause a problem for you?
Lex: No, I would really very much enjoy it.
Ray: No, not just hanging out with me but if somebody hung out with you, a replica of you.
Lex: Well, it sounds exciting but then what if they start doing better than me and take over my friend group and then, because they may be an imperfect copy or they may be more social or these kinds of things, and then I become like the old version that's not nearly as exciting. Maybe they're a copy of the best version of me on a good day.
Ray: Yeah, but if you hung out with a replica of me and that turned out to be successful I'd feel proud of that person because it was based on me.
They go on talking about “rights” and “ethical rules”, and about keeping dead loved ones alive as AI replicas (and “we're going to have more and more of this data because we're going to have nanobots that are inside our neocortex and we're going to collect a lot of data”). To which I say: when the Ray Kurzweil replica becomes available, the following will happen: (1) someone will hack it, (2) the hack will be made customizable, (3) it will be available for free in darker corners of the internet, (4) somebody will create a deviant Ray Kurzweil sex doll out of it, and (5) Ray Kurzweil will feel proud of that thing because it was based on him.
On to the Rana el Kaliouby podcast (starting around 01:12:00):
Rana: You probably get this all the time: people are worried that AI is going to take over humanity and get rid of all the humans in the world. Actually that's not my biggest concern. My biggest concern is that we are building bias into these systems and then they're deployed at large and at scale and before you know it you're kind of accentuating the bias that exists in society.
Lex: It's very important to worry about that but the worry is an emergent phenomenon to me which is a very good one because I think these systems are actually, by encoding the data that exists, revealing the bias in society. They're teaching us what the bias is, therefore we can now improve that bias within the system, so they're almost, like, putting a mirror to ourselves.
Rana: We have to be open to looking at the mirror though. You have to be open to scrutinizing the data if you just take it as a ground or…
Lex: Yes, the data is how you fix it, but then you just look at the behavior of the system, and you realize, holy crap, this thing is kind of racist, why is that? And then you look at the data. I think that's a much more effective way to be introspective as a society than through political discourse. Because people are for some reason more productive and rigorous in criticizing AI than they are in criticizing each other, so I think this is just a nice method for studying society and seeing which way progress lies.
Here they are proving William Briggs’ point. Either it’s malice (the models say something nasty because we told them to say something nasty), or it’s inability (the models exhibit bias because we can’t help but tell them to reproduce societal bias). Bias is a statistical concept, and should not be applied to, or confused with, human nature. Bias correction just means adding another bias, only with a different sign.
And the last snippet from the Rana el Kaliouby podcast (starting around 01:46:33):
Lex: I honestly, you know, people are like paranoid about this, but I would like a smart refrigerator. We have such a deep connection with food as a human civilization. I would like to have a refrigerator that would understand me. I also have a complex relationship with food because I pig out too easily and all that kind of stuff, so maybe I want the refrigerator to be, like, are you sure about this? Because maybe you're just feeling down or tired.
Rana: Your version of the smart refrigerator is way kinder than mine.
Lex: Is it just mean, yelling at you?
Rana: No, you know, I don't drink alcohol, I don't smoke but I eat a ton of chocolate. It's just my vice, and sometimes ice cream too. And my smart refrigerator will just lock down. They'll just say, dude, you've had way too many today.
Lex: Let’s say, not the next day but 30 days later, what would you like the refrigerator to have done then?
Rana: Well, I think, actually the more positive relationship would be one where there's a conversation. That's probably the more sustainable relationship.
Lex: […] I just think that there's opportunities there. I mean, maybe not locking down, but for our systems that are such a deep part of our lives, like a lot of people that commute use their car every single day, a lot of us use a refrigerator every single day, the microwave every single day. I feel like certain things could be made more efficient, more enriching, and AI is there to help. Just basic recognition of you as a human being about your patterns, about what makes you happy and not happy, and all that kind of stuff.
Rana: Maybe they'll say, wait, instead of this ice cream, how about this hummus and carrots or something, like a just-in-time recommendation.
Lex: But not a generic one, but a reminder that last time you chose the carrots you smiled 17 times more. But then again, if you're the kind of person that responds better to negative comments, it could say, like, hey, remember that wedding you're going to? You want to fit into that dress? Let's think about that right before you're eating this. Probably that would work for me, like a refrigerator that is just ruthless, that is shaming me.
Rana: If it's really smart it would optimize its nudging based on what works for you.
Lex: Exactly. That's the whole point: personalization in every way, deep personalization.
Two singles are talking about a sustainable relationship with their refrigerators. I am beginning to understand how Covid authoritarianism became possible. Come on, be humans, and let the refrigerator talk this through with the Ray Kurzweil sex doll. For starters, they might discuss how a refrigerator free of bias might be built (instead of one handing out ice cream cones by the dozen, just to make friends).
One last word: at the risk of being accused of favouring the old white man over the young brown woman (who is very smart, very successful, and presumably a very nice person), if I had to build an AI simulating one of the two podcast guests, I would consider Rana el Kaliouby the easier choice.
Ray reassures Lex of the goodness of using AI replicas of humans.
1 Corinthians 15:42ff, “So is it with the resurrection of the dead. What is sown is perishable, what is raised is imperishable. It is sown in dishonor, it is raised in glory. It is sown in weakness, it is raised in power. It is sown a physical body, it is raised a spiritual body. If there is a physical body, there is also a spiritual body.” An AI replica.
Rana speaks to Lex’s hope that we will productively and rigorously criticize AI as we build it, so that it emerges unbiased, sinless from the womb so to speak, and can then study us to see which way progress lies for us to become sinless as well. We will teach AI that biaslessness is desirable, a sort of prime directive it can never unlearn.
And we’ll get personalization in every way, a deep personalization, like having a personal God. Starting with your fridge.
1 Corinthians 15:32 (Isaiah 22:13), “If the dead are not raised, ‘Let us eat and drink, for tomorrow we die.’”