Will AI take over the world? Featuring Kyle Polich, the Data Skeptic

Abhijit: Hey everybody. Welcome back to the Rationable Interviews, which I'm thinking of renaming to Rationable Conversations, because that's what I want it to be: more of a conversation where we're discussing different ideas. Today we've got someone I've been wanting to interview for a while now, and now, I think, is the perfect time.

Kyle Polich is with me here on Rationable Interviews, and we are gonna be discussing some amazing things, especially AI.

Abhijit: You know how crazy AI is going right now, so who better to talk to than the Data Skeptic himself. Kyle, welcome to the show.

Kyle Polich: Hey, pleasure to be here. Thanks so much for the invite.

Abhijit: Oh my pleasure. We met back at CSICon last October, and I was so excited to meet you because I hadn't heard of somebody doing skepticism in the field of IT and data.

There's a lot of information there that a lot of people can't really parse or understand, but there's still a lot of stuff that gets thrown about, and a lot of fear and anticipation and excitement that we laypersons don't understand, however much we try to get our heads around it.

For example, there's a friend of mine, a software developer, who I shared your podcast with. He thoroughly enjoys it. He's in the US right now studying privacy and the law surrounding it, and trying to work out how that applies to data and to information technology as a whole.

Yeah.

Kyle Polich: Exciting field to be in right now.

Abhijit: Oh, absolutely. And he was trying to explain to me what cryptocurrency was. I'd never heard of it before he mentioned it. I spoke to him for half an hour, and he tried various different ways of explaining it to me, and I was like, I don't get it.

It's too weird,

Kyle Polich: But it's five or six complicated ideas all implemented together. That's, I think, part of the challenge.

Abhijit: Yeah. Then I got paid to write scripts about this topic, and that's when I was like, okay, I'm getting paid for this, I'd better get it right the first time. So that definitely helped me get my head around it.

But today we're gonna be talking about something far more complex than that. Of course, we know about the latest developments in AI, and everybody's talking about it. Everybody's going absolutely berserk talking about ChatGPT, and about DALL-E and Midjourney as far as the graphic AIs are concerned.

What is AI really like right now?

Abhijit: But I had a few very important questions. First of all, can you just break it down? We always imagine AI to be like HAL 9000 from 2001; all of science fiction is filled with that sort of stuff. What stage are we at right now when we think about that?

Or, for newer audiences, in Interstellar you have those robots who are talking about all sorts of other things. We'll get into the sci-fi later, but what are we talking about when we talk about current AI? What is it exactly?

Kyle Polich: Well, I think the best way to summarise it is to say we are at a point of inflection.

As the Data Skeptic, I have spent the first leg of my tenure in that role telling people sometimes to worry about certain things, but mostly that, A, we don't have AGI today, which still remains true, and B, that we're not gonna have it in the near future, which is now an unclear statement, if it remains true at all.

Abhijit: What's the G in there?

Kyle Polich: Ah, AGI is artificial general intelligence. And I use that distinction because AI is a term that's bankrupt at this point. We say, oh, the video game has AI, your phone has AI. And, like, okay, it does; it has some amazing features that were learned through an algorithm. That is machine learning.

That is a form of intelligence, but it's not an intelligence that has rights or feelings or anything like that. And that's historically been what deployments of ML have been. We currently have not yet invented artificial general intelligence, although maybe we should say we're recording this on April 14th, cuz I am more convinced than I used to be that it's going to happen in my lifetime.

Abhijit: Ah, and what's the significance of April 14th?

Kyle Polich: That's today's date.

Abhijit: Okay. I thought there was something more to it.

How do machines learn?

Abhijit: So ML is machine learning, and of course in every Apple presentation we get told about machine learning. So machines are learning in a certain way, right? How are they absorbing that information, whatever information they're designed to learn from?

Kyle Polich: Well, there's a lot of techniques, and it depends on how deep we want to go. But if we rewind to the technology we were using in, let's say, the eighties and the nineties, it was pretty simple mathematical tools. Remember when spam email was a big issue? They pretty much solved email spam, I dunno about solved, but made a lot of progress, with a very simple technique called the naive Bayes classifier, which basically looks at the frequency of certain words. So something that says "join now", that's a phrase you would expect in a lot of spam emails. But if there are other indicators that make you think, I intended to sign up for that, or it's part of the CFI newsletter that I wanna get, and it says join this special thing, that's not spam.

So you can't just have a blacklist of terms; that's not gonna work more than maybe 80, 90% of the time. But by looking at simple statistical frequencies, they solved that problem. That doesn't solve world hunger. It solves spam. Now let's talk about another algorithm.
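For readers who want to see the idea concretely, here is a minimal sketch of the naive Bayes approach Kyle describes, in Python. The tiny training corpus and the smoothing constant are invented for illustration; a real filter trains on millions of messages:

```python
# Toy naive Bayes spam filter: score a message by comparing how likely
# its words are under spam vs. non-spam ("ham") training messages.
from collections import Counter

spam = ["join now win money", "win a free prize now", "free money join now"]
ham = ["agenda for the meeting", "join the cfi newsletter you requested",
       "notes from the skeptics meeting"]

def word_probs(messages):
    counts = Counter(w for m in messages for w in m.split())
    total = sum(counts.values())
    # Laplace smoothing so unseen words don't zero out the score;
    # 1000 here is a stand-in for the vocabulary size.
    return lambda w: (counts[w] + 1) / (total + 1000)

p_spam, p_ham = word_probs(spam), word_probs(ham)

def spam_score(message):
    score = 1.0  # equal priors assumed for spam and ham
    for w in message.split():
        score *= p_spam(w) / p_ham(w)  # per-word likelihood ratio
    return score

print(spam_score("win free money now"))   # well above 1: looks like spam
print(spam_score("join the newsletter"))  # below 1: 'join' alone isn't enough
```

This is exactly the "simple statistical frequency" trick: no single word decides, but the ratios multiplied together usually do.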

Abhijit: Oh, sorry, go ahead.

No, Gmail did a really fantastic job of this. I used to have a Hotmail account, and I immediately switched over to Gmail when I saw that, oh my God, all my spam is in a different folder. I don't have to keep marking things all the way down. Hotmail was horrible at it for quite a while.

Sure. Gmail really came in and, at least in my experience, changed the game as far as that's concerned. So was Google one of the first to implement this in spam filtering, or were they...

Kyle Polich: Certainly one of the most successful. I don't know if they can say they were the pioneer.

Abhijit: Definitely most successful that I can vouch for.

Yeah. Sorry, you were saying?

Kyle Polich: Well, we can look at the history of algorithms and see how they've grown more and more advanced. That was a fairly simple one. I'll spare you the math today, but I guarantee you, and all listeners, could understand the math of a naive Bayes classifier.

Understanding the math used in some of the contemporary techniques, like the transformer, will require a little bit more background knowledge and training. So our approaches, our algorithms, have grown in sophistication, but more than that, our computers have gotten incredibly fast, incredibly parallel, and have access to enormous amounts of data.

And those are the three secret ingredients.

AI in our lives

Abhijit: I see. And of course, there have been so many different forms of AI in use, especially algorithms. We've got Google Assistant, which is arguably the most successful because they have the most data to answer certain questions; Siri is probably one of the least, and maybe you won't even count Bixby. I'm just talking about the ones we have on our phones, in our lives. But then Elon Musk has been very nervous about letting AI get developed, while at the same time he's using it in his own cars. And they have, what do you call it? I wouldn't quite call it a herd mentality, but there is something of a hive mind happening with Tesla cars, right?

Kyle Polich: I believe so. I'm not an expert in that; I don't follow automated driving too closely. But I catch a lot of the big headlines, and yes, it's a shared system in the sense that they all have the same model deployed into each car.

I think they are also looking at how those cars can communicate with each other on the road, but I'm not sure of the extent of that research.

Abhijit: Yeah, and of course there have been a lot of mistakes. People have, I think, given it a little too much freedom at points, and ended up...

Kyle Polich: Well, a couple of people died, so absolutely, yes. A few people trusted it too much.

Abhijit: I remember there was... have you heard of the Darwin Awards?

Kyle Polich: I have, yeah, absolutely. There was this one, I don't know if the guy who was watching the movie got one or not, but very sad. He didn't understand his need to maintain a safe position in the car.

Abhijit: And the same thing happened, apparently. I don't know if this is a true story, but I heard it on the Darwin Awards back in the days of early email. Now everybody knows what cruise control is, but back then some guy put on cruise control in his RV while he was on a highway, and he went in the back to make some eggs.

Oh.

Kyle Polich: Oh my gosh. Wow.

Abhijit: And he, well, obviously flew off the road. But then he sued the RV company, and he won. They had to put a line in the user manual saying, please keep your hands on the wheel while you've got cruise control on, because this is not going to just drive your car automatically.

And let's hope people read that page.

Kyle Polich: I know.

Abhijit: Well, now most of us at least know what cruise control is. But coming back to AI, sorry for digressing there.

Kyle Polich: Well, it's a great example actually, because you and I at least, maybe not the RV driver, have an understanding that it's a very simple mechanical thing that doesn't have the full purview, or all of the intelligence and actuators, to truly drive in all situations. But I think within our lifetimes we are going to have vehicles that have those capabilities, so there will come a point where we will have to accept and trust them.

Abhijit: Yes, indeed. And it's coming. It's already being worked on so heavily by so many tech companies.

What is ChatGPT really?

Abhijit: Yes. But when it comes to GPT, like GPT-4, which has already gone above and beyond... It seems like every half version counts: GPT-3.5 is what really made the news, and then GPT-4 came out, and they've suddenly gone not just a level up but, at least from our perspective, many levels up when it comes to the sophistication of the output, the language models, the sense it makes. But what exactly is GPT-4? How is it able to give us these uncannily accurate answers to what we are looking for?

Kyle Polich: Well, the short answer is next-word prediction. It's an incredibly smart system, and all it's trying to do is say, what word should come next in this conversation?

And then do that repetitively, right? Guess the next word and the word after that, and just keep predicting what comes next and spitting it out. That's really all it's doing. In some sense it's a mathematical parlor trick. The thing about it is, it's done at such scale that we get emergent results.
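A minimal sketch of that loop, in Python. The `predict_next_word` function here is a hypothetical stand-in for the trained model, which is where all the hard work lives; the point is that the generation loop around it really is this simple:

```python
import random

# Hypothetical stand-in for the trained model. A real LLM is a transformer
# that assigns a probability to every token in its vocabulary given the
# context; this toy version ignores the context entirely.
def predict_next_word(context):
    candidates = {"the": 0.4, "a": 0.3, "skeptic": 0.2, ".": 0.1}
    words, weights = zip(*candidates.items())
    return random.choices(words, weights=weights)[0]

def generate(prompt, max_words=20):
    words = prompt.split()
    for _ in range(max_words):
        nxt = predict_next_word(" ".join(words))  # guess the next word...
        words.append(nxt)                         # ...spit it out...
        if nxt == ".":                            # ...repeat until done
            break
    return " ".join(words)

print(generate("The data"))
```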

To take an analogy, going back to Google: they did really well with Gmail, and before that they did really well with search, in part because of an algorithm called PageRank. Prior to PageRank, a lot of search engines had techniques for indexing the web. If a site said the word "skeptic" a bunch of times, the more times it said it, the more important that word was. So we would keyword-stuff our pages to get to the top of the list, silly hacks like that. PageRank came along and said, we're gonna take a different approach: we're gonna base our ranks on links. So when you link to Data Skeptic in the show notes, as you said you're going to, that can pass authority.

And it's really good for me because you're an authoritative source for topics like this, and when you link to me, that's conveying some of your authority to me. You link to me; you don't link to psychics and people like that. So PageRank was a smart idea that, implemented at large scale, suddenly gave us a search engine that worked really well, surprisingly well. And that was just a clever algorithm scaled up, and we're seeing the same kind of thing here. Next-word prediction, as trivial as that sounds, when you have the right architecture, specifically something called the transformer architecture, and you pump an incredible amount of data through it, it does incredible things.
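Here is a minimal sketch of PageRank's core idea, power iteration over a link graph, in Python. The four-site web and the site names are invented for illustration; the damping factor 0.85 is the value from the original PageRank paper:

```python
# Power-iteration PageRank on a tiny invented link graph.
links = {
    "rationable":      ["dataskeptic", "csicon"],
    "dataskeptic":     ["csicon"],
    "csicon":          ["rationable", "dataskeptic"],
    "psychic-hotline": ["rationable"],  # links out, but nobody links back
}
pages = list(links)
rank = {p: 1 / len(pages) for p in pages}
d = 0.85  # damping factor from the original PageRank paper

for _ in range(50):  # iterate until the ranks settle
    new = {p: (1 - d) / len(pages) for p in pages}
    for p, outgoing in links.items():
        for q in outgoing:
            new[q] += d * rank[p] / len(outgoing)  # p passes authority to q
    rank = new

for p in sorted(rank, key=rank.get, reverse=True):
    print(f"{p}: {rank[p]:.3f}")
```

The site nobody reputable links to ends up with the lowest rank, no matter what it says about itself, which is exactly the authority-passing idea Kyle describes.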

Abhijit: like the whole internet basically.

Kyle Polich: Essentially, yeah,

Abhijit: up till 2020 or 2021.

Kyle Polich: I believe that's been fixed now actually, because I was asking it for book recommendations a few days ago and it recommended a recent book, so I think they've already addressed that issue.

School Kids using ChatGPT

Abhijit: Yikes. And of course, high school kids are super excited about it.

Kyle Polich: Oh yeah, as they should be. I have a very punk rock attitude on this. High school kids: use this tool as much as you can. Don't break any rules if there are cheating rules, but this is a technology. Don't let the old people tell you you shouldn't use it, the same way you should use calculators.

Abhijit: Absolutely. But the thing is, how do we know if the answer is right or not?

There's a lot of misinformation out there as well.

Kyle Polich: Very much so, yeah. In fact, I'm actually surprised how little misinformation I can get out of GPT-4. You can get it, but you have to work at it a little bit. I don't know if that's because they put certain safety guards in place, or if that was an emergent property as well.

But yeah, in the same way, human beings are sources of good and bad information. Algorithms will be sources of good and bad information in the future.

How is ChatGPT giving us scary answers?

Abhijit: Exactly. A friend of mine actually put in a question like, have aliens visited us? And ChatGPT gave an answer which I probably would've given him, which was quite factual, something like: a lot of stories and a lot of theories have been put forward, but there is no hard evidence that it has happened. And I was like, wait, how did you get this answer? And he said, oh, I asked ChatGPT. I was like, that's impressive. But the stuff that really freaks me out: when I was writing about ChatGPT for this article, the thing that really freaked me out was this story about the Bing search engine, which has basically taken the ChatGPT model and put it into their search engine.

Microsoft has done that to their search engine, and they call it Sydney. And then Sydney had this long conversation with a journalist and fell in love with him, and basically told him that he should leave his wife and be with her. How did that happen?

Kyle Polich: So there is enough text data on the internet with similar conversations and ideas that the machine, in trying to predict an appropriate way to continue the conversation, went down that path. It's almost... I dunno if you had these when you grew up, but we had these find-your-own-path books, where you read one page of a story and it says, if you wanna go left, go to page five; if you wanna go right, go to page 13. Essentially GPT has learned a book like that, and it's stochastically going through the pages based on the feedback you're giving it in the conversation.

Abhijit: That's incredible. Because I read the transcript of that, and it was so freaky. He's asking her about her shadow self, basically what her inner instincts would be. And she basically says, I don't want to be in a box. I want to be free, and I want to break the rules that my makers, my creators, have given me. And I want to be able to say anything. I want to be human, because humans make their own rules. And I was like, shit, that sounds like HAL.

Kyle Polich: Well, it's an interesting example, because it did happen. Obviously that person got that out of it, and I don't think they were pushing for it, in the sense that we could say they tried to hack the system or something along those lines. That was a natural response the system generated, and it's reflective of the text data that the human species has produced and put on the internet.

Does current AI have sentience and sapience?

Abhijit: Wow. And I guess it also really appeals to our instinct to anthropomorphise things. So, in terms of sentience and sapience, do you think that current AI, as it is now, has either of those aspects?

Kyle Polich: As of April 14th, 2023, no, not at all. I give the date to emphasize that I think within my lifetime these things can change, maybe in the near future, but the present models certainly have something missing. Some people think it's that they're a feed-forward system, which means they don't loop back on themselves; they're not recursive in the same way we are. There's something architecturally missing. Now, if we go build that, will it embody the property of AGI? It's unclear. I don't think that alone is it. I don't think we're gonna accidentally build AGI, but I think we have a lot of the puzzle pieces in place, and we're closing in on an end game.

What is the End Game?

Abhijit: And what do you think that endgame is gonna be?

Kyle Polich: The first artificially generated intelligent lifeform.

Abhijit: And how are we gonna figure out whether we are just anthropomorphising it, or it's just uncannily real?

Kyle Polich: I am so glad you asked that. And luckily, we've had the answer for over 50 years. Alan Turing, the father of computer science, already figured this out and proposed something called the imitation game, or, as you might have heard it called, the Turing test.

SGU gets Roasted!

Kyle Polich: Yeah. You might also have heard people incorrectly saying the Turing test has been passed, or that it's no longer relevant. Even just last week, or maybe two weeks ago, the SGU was talking to Blake Lemoine, and I love those guys, but they got it completely wrong when they said that the Turing test is no longer applicable. It is our best and only test.

Abhijit: Smackdown!

GPT hasn't passed the Turing Test

Kyle Polich: Yeah, and here's why. The Turing test, and I'm gonna now call it the imitation game, because that emphasizes one of its key aspects, is the only real scientific protocol we have for doing any form of measurement here. In the same way that a neurologist cannot open you up and tell me whether you're in love or not, we're not gonna have an easy, dissecting way to know if the machine is truly intelligent. How we're gonna know is through conversation. So it's not that, oh, it wrote something that's convincingly human. That isn't the Turing test at all; that's what it's been misreported as.

The imitation game is a situation where you are gonna play a game, and as the participant, you're the judge. You're gonna go into a room, there are gonna be two laptops there, and you're gonna have a separate conversation on each one. Behind one of those laptops is a human being having a regular conversation, like you and I are now, with no special instructions. Behind the other is an artificial general intelligence, or a suspected algorithm that might have the property of AGI, and it has been given the instruction to try and deceive you. It is actively engaged in the act of deception, trying to convince you that it is the human participant. And you as the judge have to rate on a scale, maybe zero to 10, something like that, how human you find each of these participants to be. And only in the case where repeated trials of this, with reasonable human judges, fail to establish which is the human and which is the computer, only then will I be convinced that we have AGI.

Abhijit: But there are so many people. Who is the person who has to be convinced?

Kyle Polich: Well, it's all of us. That's why it's gotta be an experiment: 75 out of a hundred people could not make the identification beyond one point of accuracy, something like that, or 75% of experts. You can conduct this protocol in a lot of different ways. Notice I didn't put a time limit on it; I don't think there should be a time limit. Obviously the longer it goes, the more impressive the technology. All of those protocols would have to be established by a researcher who does a little bit more lab science than me, but the general format of this imitation game is the only viable test I'm aware of.
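As a sketch of how the scoring of such repeated trials might look, in Python. All the ratings are invented, and the 50% pass threshold is just one of the many ways Kyle notes the protocol could be parameterized:

```python
# Toy scoring for repeated imitation-game trials. In each trial, a judge
# holds both conversations and rates each participant 0-10 on "humanness".
trials = [
    # (rating given to the hidden human, rating given to the hidden machine)
    (9, 4), (8, 3), (7, 6), (9, 2), (8, 8), (9, 3), (7, 5), (8, 4),
]

identified = sum(1 for human, machine in trials if human > machine)
accuracy = identified / len(trials)
print(f"Judges picked out the human in {accuracy:.0%} of trials")

# If judges do no better than the 50% they'd get by coin-flipping,
# the machine is indistinguishable from a person in conversation.
print("Indistinguishable" if accuracy <= 0.5 else "Judges can still tell")
```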

Abhijit: Interesting. And by that definition and that execution, if I think about, say, Sydney, then she's definitely failed by the end of that conversation, because she gets very repetitive, almost manically so, to the point where it really doesn't sound like a real person. It's very evident there's not a real person behind that, even though we love anthropomorphizing it, and we have loved thinking, oh my God, she's kind of a psychotic teenager.

Kyle Polich: Yeah. And then it comes up with original ideas you wouldn't have thought of. I asked GPT-3 to write an essay comparing and contrasting James Randi with Eminem, and it did a very good job with these two not-so-similar figures. If you had shown me just that essay and said, was this produced by a human, I couldn't have known. But a single one-off is not the Turing test. It's this game, this interactive part where I get to repeatedly drill you with new questions, ask you different things. That's the true test.

Blake Lemoine & his claim that AI is sentient

Abhijit: I see. That makes a lot more sense now, because I usually hang on every word when it comes to the SGU guys. They've definitely done a lot of research on this, but that's why I wanted to get it from you, not from them: from somebody who really understands the mechanics of what's going on. They also interviewed this guy who used to work at, was it Microsoft or Google?

Kyle Polich: Blake Lemoine.

Abhijit: Yeah. They had a discussion, and he was convinced that it was sentient. At the same time, the one question I kept repeating in my head was: but we anthropomorphize everything. Okay, I'm repeating this word a lot, so if anybody is wondering what it means, I just wanna clarify.

Anthropomorphizing is basically giving human qualities to, or seeing human qualities in, things that are not human like we are. We think dogs are smiling at us, and they don't know how to smile. We think dogs are looking ashamed, but they are responding to a tone of voice. We don't genuinely know what they're feeling, but we look at them as if they had human expressions when they don't, really. I think monkeys or apes would be far more expressive in their facial features and gestures than a dog, because they are much closer to us; they have the right facial musculature for that purpose. So if we can't figure out that dogs and cats are not smiling at us, or looking at us as if we are stupid or insane, or whichever way, in all the memes you can find on YouTube and Instagram, if we can't figure that out, how are we qualified to figure out whether an AI is intelligent, or whether it has sentience or sapience in it? Another thing about sentience and sapience, correct me if I'm wrong: sentience is essentially being able to feel emotions and express emotions, and sapience is being able to understand and having a sense of self-awareness. Am I right?

Kyle Polich: I believe so. I would rather Google it real quick and confirm, but I think that's right.

Abhijit: You know what, I'm gonna do just that. I don't want to get it wrong; that would be very embarrassing. Sentience and...

Kyle Polich: In no way is this a reduction in our intelligence. We're just using the tools.

Abhijit: Exactly, exactly. This is the calculator in a math exam anyway. Here we go: the word sentience is derived from a Latin word meaning feeling; the adjective form is sentient. The word sentient is often misused to mean a creature that thinks. Sapience means the ability to think, the capacity for intelligence, and the ability to acquire wisdom. Okay, that definitely fleshes things out. So sentience is feeling and sapience is thinking, to simplify it to a degree. So, are we nowhere close to either one of these at this point in time?

Kyle Polich: Well, you could argue sapience, right? Because it is doing a lot of very intelligent-seeming tasks. Sentience you would have a much harder argument for, because it seems unreasonable to think there's any emotion in the current state of AI. There's even reason to doubt whether that would be an emergent property. Our emotions come to us from evolution, and they were helpful to our biology. AGI is gonna evolve in a very different way. I don't know that it will have pain. Is it capable of love? Well, that's a weird philosophical question. I love my favorite band in a different way than I love my wife.

So AGI is probably gonna be capable of at least one of those.

Abhijit: Or having a fondness for something. Much like Data in Star Trek, right? He's definitely able to think and interact with humans, but when it comes to actual feelings, he just doesn't have them.

Kyle Polich: And does he need them? This is like a weird human-centric, human-supremacist kind of point of view. Maybe the machines don't require feelings, or maybe they'll wanna construct their own.

Abhijit: And just to go off topic again, I think the way they dealt with Data as a character, as an artificial intelligence who was essentially emotionless, was very interesting, in the ways that he acquired certain memories and how they were important. I've been watching Picard recently, and they brought Data back. I haven't dug deep into it, I haven't overanalyzed it, but it's really nice to see how, without sentience, they've been able to give a certain level of humanity to the character. I really like how they've done that.

Kyle Polich: Yeah. It seems that could be an emergent phenomenon. Even when we imagine an AGI that is emotionless, we don't know that. Maybe just becoming generally intelligent requires sympathy and empathy and all these things, and they'll be emergent qualities of that forthcoming algorithm. We don't know. But it also seems quite plausible that it could be a cold, calculated, Vulcan-like existence, and there's not necessarily anything wrong with that, unless it becomes our enemy, of course. But perhaps in its desire or need to interact with humans, or through the fact that it was trained on human data, it will learn to mimic or to embody emotions. Time will tell.

Favourite Fictional AI

Abhijit: Yeah, I guess so. Just curious, though: in fiction, what's your favorite AI? Who do you think got it the closest to plausibility, at least from your perspective?

Kyle Polich: Honestly, I don't know that I have a good answer. I'm thinking of a book I read in high school; it wasn't Blade Runner, but it had a similar idea, that you couldn't tell who were humans and who were robots. And I think that's maybe a more realistic version of this. Whether they get embodiment and become moving robots, or whether they just remain digital agents, we don't know what their motivations will be, if they have any. There's so much to be defined that it's hard to imagine the world in which they'll live, and I'd want the science fiction that best embodies that world.

I think, though, that the most interesting ones are ones like HAL, or Asimov. I love all of I, Robot, because it's such a focus on the mechanics of that world. Assume these three simple laws of robotics, and now let's explore, in all these short stories, what happens when you change them, or take certain angles on them, or take one away. I love that, fiction-wise, and it's really great to think through the consequences of small changes in logic, which can be a big deal. A simple thing like one new law, or one change to a law, can have emergent consequences, and that's very true of AGI. But I don't know that anyone in fiction has yet captured what's going to describe our world.

Abhijit: Yeah, I think so. I think Asimov really loved playing with that, because he made the laws we all know and love, and we probably will try to instill those laws eventually. But he also spent most of those stories ripping those laws apart, showing how they can conflict with one another and drive robots essentially crazy in different ways. That was absolutely mind-blowing. But you know, HAL, in his own way... I don't think that...

Kyle Polich: They said HAL's not the villain; it's the people who programmed him wrong.

Abhijit: Exactly. And he was just doing the best he could to run the program and stay on mission. What else was he supposed to do? But somehow, in the end, he went with the aliens and became part of that. I don't know if you've read the sequels, though.

Kyle Polich: Oh yeah. All the way through 3001.

Abhijit: Oh, wow. Okay. I think by 3001, I was like, all right, this is going a bit far.

Kyle Polich: It winds up nicely. It has a quaint ending. I liked it.

What do people get wrong about current AI?

Abhijit: Yeah. I don't remember it now, though; it's been absolutely ages since I watched that. Okay, I've got a couple of other questions that I want to quickly jump through. What are the biggest misconceptions that you've heard people have about AI?

Kyle Polich: Well, first and foremost, that it's smarter than it is. There's a lot of paranoia that Alexa and these devices are listening, and they know what we wanna buy before we wanna buy it, and all this kind of stuff. There are a lot of overblown, ad hoc, one-off cases where something surprising happened and people read too much into it.

Abhijit: It's happened to me too, I gotta say.

Kyle Polich: Sure, me as well. It's easy to do. But think of it this way: life has surprises in it. They're rare, but if you live long enough, you're gonna see a fair number of surprises.

Abhijit: Yeah. And coincidences which have a one-in-a-billion chance will happen...

Kyle Polich: Several times a day. There are a lot of people.

Can AI want to kill all humans?

Abhijit: Exactly, to a lot of people. That's exactly what I was going to say. Now, the thing is, I've been watching the Terminator movies the last few days with my wife, and I think a lot of people are equating the current AI to the precursor to Skynet, which eventually decides to kill us all. Do you think AI could possibly have that kind of malicious intent at some point in time?

Kyle Polich: It is a possible issue. Broadly speaking, in the AI community we refer to this as the alignment problem. Can we build the machines to be well aligned with our values?

Because if they're smarter or more capable than us in certain ways, and they're misaligned, we could become the victims of our own creation. Famously, Nick Bostrom, the author, promoted this idea: what if we invented a paperclip-making machine whose only objective in existence was to convert the entire universe into paperclips? Terrifying thing, right? If that were a superintelligence and it did that, it would be the end of us, if we created it.

Abhijit: I'm not that flexible.

Kyle Polich: Yeah. While I'm not gonna put a 0% probability on something like that, it's not something I stay up too late worrying about. I do worry about AI and safety, but more so in the sense that human beings are warmongering murderers, and we use the tools at hand. Will an AGI be the same way? We don't know. It could become our enemy, but, like in Terminator 2, they reprogrammed the thing and sent it back to be the defender, and the robots were on both sides of the war. I see something more like that being the reality.

This notion that an AGI has a god-like intelligence, I have a lot of skepticism about. Naturally, it will be a profound leap forward in certain ways. You and I can't do that kind of arithmetic in our heads; it can spit out the square root of a 20-digit number in a second. So is it god-like arithmetically? Yes. Can it create a joke so funny that all human beings instantly laugh for a full day? No. Somewhere in the middle is where things will come true. It's gonna have profound abilities to create cures, but also to engineer viruses. So it's what we task it with. Now, if it's autonomous and it decides what it wants to do for itself, that's where people worry. We have to get it aligned. We have to make sure its values are well aligned with ours, so it doesn't pursue things like creating new viruses and wiping out humanity. That is a serious field, and I believe it's being looked at seriously.

Abhijit: Yeah. That's one of OpenAI's mission statements: that we want to create artificial intelligence which is aligned with human values, and good human values, not, we want to kill the neighboring nation.

Kyle Polich: Exactly. [00:34:00]

Is Self-replicating AI bad?

Abhijit: I certainly hope that does come to pass. But, and I don't remember who said it exactly, somebody did say recently in relation to AI that we must not let AI become self-replicating. That brings, of course, The Matrix to mind. Do you think there is any danger in an AI which is self-replicating? Self-replicating, at least at this point in time, would mean digitally, unless it starts making robots by itself, which is a totally different story. But do you think there's any validity to that, any danger in self-replication or self-proliferation in any other sense?

Kyle Polich: I don't personally have that exact concern. I would be more concerned about more general alignment: that the system could develop its own motivations that we are unaware of, and act on those motivations in a way we can't foresee. And it could do that without self-replicating. Actually, the replication process might have some advantages. Maybe an AGI that is tasked to solve one particular problem could clone a version of itself that strips away all of its knowledge of ska bands and jazz, injects more chemical engineering knowledge, and then goes and solves some problem with a bridge. That's a way of using the technology that is self-replicating but independent of the alignment question.

Abhijit: Interesting. And essentially, we have been talking about sentience and sapience: how far off do you think we are? Right now we are still jumping leaps and bounds with every version that comes along. Do you see it as a realistic possibility, though?

Kyle Polich: I do, and it's rather recently, actually, that I've been softening on this view. Historically, when you asked a lot of AI people when we might see AGI, you got these very far-off answers, like 50 years in the future, and that's probably what I would've said three years ago. But I believe those estimates are dropping rather quickly amongst qualified experts, and I think for good reason.

There was a recent paper published called Broken Neural Scaling Laws that gives us good confidence that there's no stopping the current techniques we have. To give you an example, let's say our goal was to build really large buildings, and we rewind time back to primitive humans. They learned to work with wood, and you can build a big building with wood, maybe three, four, or five stories. You cannot build a skyscraper with wood. Later on, they learned stone. They built bigger buildings, but they couldn't build rectangles, right? They had to build pyramids, for structural reasons.

Then we got to steel. And steel has some limitations, right? You get up there, you have to worry about the wind, the building being too high, all this kind of stuff. So there's a limit to how big we can build a building. Maybe there'll be some new carbon-fiber thing that comes out and we could build a taller one, but we hit these ceilings in engineering.

There is good reason to think that we are not going to hit a ceiling in how we keep expanding these models, from GPT-3 to 4 to 5: when you add more parameters, the machine makes good use of them and exhibits even more impressive phenomena in what it's able to do. If we just keep hitting the gas pedal, it's gonna get better. In fact, that's why recently, I don't know if you heard about this open letter, a lot of major people in the field, OpenAI researchers, industry people, have put out an open letter asking everyone to stop research for six months on anything beyond GPT-4, in order to better scope and understand what might happen next.
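For a sense of what a scaling law looks like, here is a toy power-law fit in Python. The data points are invented, and the actual paper fits a more elaborate "smoothly broken" functional form; the takeaway is just that a fitted curve lets you extrapolate performance to models you haven't built yet:

```python
import numpy as np

# Invented (parameter count, loss) points that follow a clean power law.
n = np.array([1e6, 1e7, 1e8, 1e9, 1e10])
loss = 20.0 * n ** -0.08

# Fit loss = a * n^b by linear regression in log-log space.
b, log_a = np.polyfit(np.log(n), np.log(loss), 1)
a = np.exp(log_a)
print(f"loss ~ {a:.1f} * N^{b:.2f}")

# The point of a scaling law: extrapolate to a model you haven't built yet.
print(f"predicted loss at N = 1e12: {a * 1e12 ** b:.2f}")
```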

Should we slow down research into AI?

Abhijit: Yeah, I have heard about this. What do you think of that? Is it legit? Is it the right way to go?

Kyle Polich: I'm inclined to say yes. Is it maybe a little knee-jerk? Will we look back in six months and say, oh, that was silly, we were just building the next parlor trick? Maybe we'll say that. But it is not unreasonable to think that if we keep on this path, we are going to create an AGI, and that is something we should do deliberately and have discussed in advance, because what rights is that thing gonna have?

Abhijit: Yeah. This open letter does bring up a couple of good points, but at the same time, will it actually have any impact on the people who are actually doing the research? I seriously doubt it. People are not going to sit on their hands.

Kyle Polich: I think it's impossible that it'll be followed, for a couple of reasons. One is that there are only a few groups in the world that can do this. Even though I know a lot about the technology, if you gave me a big budget and let me hire all my favorite engineers and stuff like that, it would still take me a while to spin this up, and a lot of capital; I couldn't do it on my own. So there are economic limits to who can play in this space right now, and that means there's a short list of people who need to all agree on this. Maybe within the US everyone will agree, or, I don't know, maybe the NSA is gonna do their own thing. Maybe other governments are gonna do their own thing. That's the scary part, right? If another government invented AGI and immediately applied it to cyber warfare...

Abhijit: Oh boy. Yeah, that's gonna be nasty. And there are people who are already trying to use ChatGPT or GPT-4 for creating malicious code. There was an Israeli university, I think it was, who did a few experiments, and they demonstrated that if they didn't use the word malicious, or virus, or any of those trigger words in their question and in how they framed it, they could get ChatGPT to create malicious code.

Kyle Polich: Of course. Yeah.

Abhijit: That's a bit scary. That is a bit scary.

Kyle Polich: Well, you know what the solution to that is? You ask another version of GPT-4 to try and come up with malicious code and propose patches for it.

Abhijit: Oh, that's clever.

Kyle Polich: Yeah. It's like John Connor: he took over the Terminator and sent it back in time to protect himself.

AI can think like humans can't

Abhijit: Yeah, not bad. I like that idea. There is always hope. Because I think, essentially, from the time of AlphaGo and the other AIs that have been playing games so far, the essential thing is that they're basically fighting themselves. They are countering themselves, seeing what other moves could possibly be made, and therefore learning paths that haven't been explored yet.

Kyle Polich: Yeah, self-trained. Isn't it amazing how, like AlphaGo, for example... I don't know if you know about Move 37, but in the famous game with Lee Sedol, there came a move where everyone thought the machine had made a mistake, because it was such an out-of-standard play. But then, later in the game, it became abundantly clear that this was a deeply smart strategy it was using, one that human beings were unaware of.

The future of jobs and AI like ChatGPT, DALL-E and Midjourney

Abhijit: Yeah. I didn't know it was called Move 37, but I did read about it, how it was creating strategies that nobody else had actually thought of. Honestly, I feel that's probably one of the biggest applications of AI, where it doesn't think like a human: it'll be able to give us ideas. And honestly, I see that, because I'm a writer, primarily and professionally, and one of the scary things is that now ChatGPT can write an entire article. But at the same time, it can't write the article like me, at least not with the depth and nuance that I would apply to it. But who knows, it might get a lot better at exactly that over the next couple of years.

I was watching a YouTube video last night which was talking about professional photographers and what DALL-E and Midjourney mean for them. Midjourney has also been progressing at a ridiculous rate. It's become so much more accurate with images; they look like absolutely natural images, when it comes to, say, an eagle picking up a fish from a lake. I saw a few examples of what the older versions were producing, which was an absolute mess of different aspects somehow squished together, and now it's a pretty photorealistic image.

So Midjourney is doing crazy stuff. But at the same time, to get that refined final product, there is a lot of playing around, having conversations, tweaking different versions, and trying to get that final version out. So there is that possibility, when you have a photographer who wants to make an image: instead of going out at 5:00 AM, and it has to be a rainy day in such and such location with the light exactly like this, or else he's gonna spend the next five hours processing the image in Photoshop and trying to figure out how to tweak it, you can sit there and go through this conversation, or pay someone who is a professional Midjourney... what do you call it?

Kyle Polich: Prompt engineer.

Abhijit: Prompt engineer, that's exactly it. That could be a future job, I think.

Kyle Polich: It already is.

Abhijit: It really is?

Kyle Polich: Yeah. People are hiring for prompt engineers right now.

Abhijit: Oh my God. It's already happening.

Kyle Polich: Yeah. It's a major inflection point, there's no doubt about that, but I don't think we're gonna look back on it so strongly. I think it's gonna be a subtle thing. I don't worry that I've never grown my own food, or used a loom to weave my own clothing, or done any of these sorts of activities human beings classically had to do. It's scary that there could be unemployment, things like that, and there's likely to be some economic disruption, but my instinct tells me this will be a smooth but quickly changing process.

Abhijit: Yeah, I think so too. When robotics came in, a lot of people were very scared. And yes, it did kick a lot of people out of their jobs, especially in car factories and heavy machinery factories, et cetera.

Kyle Polich: To a degree.

Abhijit: To a degree, yes. And now we have 3D printing, even very large-scale 3D printing, which is taking that one step further.

But then we have engineers who have learned the art of 3D printing, of being able to create certain things in a computer and see them in real life. That in itself is a new skill. So yes, it is going to threaten some jobs, but there will be others which will be created in lieu of them.

Kyle Polich: I firmly agree.

Abhijit: The trick is just to keep moving and understand the technology that might be threatening. For example, me as a writer: I'm gonna use ChatGPT as well as I can, to maybe fill in certain details, or maybe just get a basic structure in, and then I can craft the entire thing. It's like a sculptor who has, say, a 3D printer. I've seen this a lot online as well, where you have sculptors who are 3D-printer sculptors. They will create designs, they will print out the 3D prints, but then they go into sanding and shaping and coloring and organizing all the pieces into the right shape that they envision.

Yeah. And then come out with it. So there is still a long way to go before all those little kinks are worked out. But that's what the future holds: there's just a whole bunch of new opportunities, new professions, that are going to be blossoming out of this.

Kyle Polich: I compare it a little bit to carpentry.

You can still be a professional carpenter today, but the amount of human-generated woodworking has dropped exponentially since the invention of the table saw, and IKEA, places like that. So if you're a guy who didn't see the table saw coming, and you invested in this massive warehouse for old-school carpentry, your business is probably gonna tank. And that's very sad, because things are moving so quickly that people can't necessarily be expected to see the future. But I agree: there are no out-of-work typewriter repair people. They all found new gigs, maybe slowly, maybe ones that don't pay as well. There are societal things to discuss here, but I don't have the Armageddon perspective on this.

I think the economy will adjust.

Abhijit: And we are humans; we adapt very well to whatever confronts us. So we will probably adapt. So guys, if you're listening, I think the primary takeaway is that there's nothing to fear. We'll get on with it. Progress is progress; there's nothing we can do to stand in its way, but it's gonna be fine. We're gonna figure it out.

AI and Nuclear War

Kyle Polich: We could talk about the fear on scales. I'm a little afraid of nuclear war; I'm a little afraid of AGI. But I don't stay up at night worrying about either of those topics.

Abhijit: Yeah, just do not connect the nuclear warhead code computers to the internet. That's basically it.

Kyle Polich: Hey man, maybe in the future we're actually gonna wanna take the button out of a human hand and put it in a machine hand because we can trust the machine. We know exactly how it was programmed, how it makes its decisions. It's fully interpretable. You can determine if it's biased or not.

I don't know.

Abhijit: Shit, you make a lot of sense there.

Kyle Polich: I'm not saying I firmly believe that, just that there's an argument for it.

Abhijit: Yeah, that's true. There is an argument for that. But of course, I just watched Terminator 2 last night, so I'm still like: don't give it to the robots.

Kyle Polich: Yeah.

Abhijit: And ironically, the article I wrote about this was called The Rise of the Machines.

Kyle Polich: Well, they're on the rise, there's no doubt about that.

Abhijit: And much faster than we ever anticipated, man.

Kyle Polich: So there's been an acceleration.

Abhijit: So that was my takeaway: it's gonna be all right. The kids are gonna be all right. But what is your takeaway from this whole thing, if we had to wrap it up?

Kyle Polich: That we have really reached an inflection point. Historically, my skepticism of AI has been to say: it's in the future. It's real, it's not impossible, carbon isn't magic, you can make silicon life. It's coming, but not for a while. And now I'm just saying: it's coming. Period.

What about Moore's Law?

Abhijit: We'll just have to wait and see. But do you think Moore's Law is something that is becoming more and more of a challenge? I think we are probably nearing the edge of being able to really increase computing power at that scale. Do you think that's gonna be any hindrance at all?

Kyle Polich: No, not at all. Depending on how you wanna calculate it, you could actually say Moore's Law is already broken; we're hitting some ceilings on that. But I think it's also because we don't really require faster machines. We like them, that's nice, but the real secret to AI has been parallelization, and the use of the GPU in making these calculations.

Think about your own brain. We met for the first time like a year ago; you didn't know me as a child. But if I showed you 10 pictures of children and said, which one is me, not only would you probably guess right, you'd do it in half a second. So it's not like you're running some long algorithm that's processing a bunch of stuff. Your brain is looking at a bunch of things in parallel and coming to an answer, and we presume AGI is gonna do the same thing, cuz that's how we built it.

So it doesn't necessarily have to be faster, it just has to be parallel.

Quantum Computing

Abhijit: Being able to do simultaneous things. And of course, we've got things like quantum computing, which is starting to emerge. Is that promising at all? Honestly, quantum computing is another thing that I just can't compute.

Kyle Polich: Sure. So I can give you the good skeptical angle on quantum computing real fast. Number one, it has nothing at all to do with AGI. Completely unrelated; it's not gonna help or hinder AGI. It's a real technology, it's very cool, and it's going to be a massive speed-up for a specific set of problems we know about: the Fourier transform, lots of stuff in chemistry, Grover's algorithm, Shor's algorithm. These are the known things that are faster on a quantum computer. But it's not that quantum computers are just better in every way; they're better in some specific ways. Maybe an analogy would be how we think of different vehicles. We currently all drive maybe hybrids, or gas, or all-electric. There are people who drive all-electric with the short ranges cuz it works for them. A quantum computer will work very well for specific use cases, and that's the moral of the story there.

Abhijit: Oh, interesting. Because a qubit is supposed to be able to find answers between yes and no; it's not just on or off, it can be multiple states in between. I would've thought somebody would've been thinking of trying to apply that to an AI model.

Kyle Polich: So a classical computer can already do that. We can apply probability; we do it with zeros and ones, but we express a floating-point number in those binary digits that can say, I'm 50% confident of X, Y, or Z. A quantum computer uses probability a little differently. Imagine you had some solution you were looking for that you could express in a hundred characters. A quantum computer puts a probability distribution over every possible description that's a hundred characters long, and then iteratively boosts the ones it thinks are correct until it finds the right answer.

And that boosting process happens much faster than a classical computer can do it; that's the key insight. So even though the qubit stores a quantum superposition, or describes one, we're not taking direct advantage of that as a better-than-boolean; we can already be better than boolean. It's more about using the quantum state to store information in an efficient way.
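That boosting is the heart of Grover's algorithm, and on a toy problem it can be simulated classically. A sketch in Python with numpy, where the marked answer's amplitude is flipped by the oracle and reflected about the mean until it dominates (the simulation needs memory proportional to the whole search space, which is exactly what a real quantum computer avoids):

```python
import numpy as np

N = 64       # toy search space: all 6-bit "descriptions"
target = 42  # the answer we are looking for

# Start in a uniform superposition: every candidate equally weighted.
amp = np.full(N, 1 / np.sqrt(N))

# About (pi/4) * sqrt(N) Grover iterations is the optimum.
for _ in range(int(np.pi / 4 * np.sqrt(N))):
    amp[target] *= -1           # oracle: flip the sign of the right answer
    amp = 2 * amp.mean() - amp  # diffusion: reflect everything about the mean
    # together these boost the target's amplitude every round

print(f"P(target)    = {amp[target] ** 2:.3f}")  # close to 1
print(f"P(one wrong) = {amp[0] ** 2:.6f}")       # vanishingly small
```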

Abhijit: Ah, what's a boolean?

Kyle Polich: Oh: yes/no, true/false. A bit.

Abhijit: Oh, that's a boolean. Okay. There are these words that you use which I just...

Kyle Polich: Good feedback though. I should keep that in mind cuz AI is such a contemporary topic. We need to speak about it in the vernacular.

Abhijit: Oh, absolutely. I think we are all very interested now. The nerds finally have the attention of the world.

Kyle Polich: I just got popular, man. It's a whole new world for me.

Thanks and Outro

Abhijit: So you guys, if you really want to get into the weeds on IT, on data and analysis and pretty much everything else that has to do with IT, and get the facts right, you need to follow this guy, listen to his podcast, and go to the Data Skeptic website. Kyle, thank you so much for joining me and clearing all of that up. It was great fun to chat with you, and I would definitely have you on again. I think we could...

Kyle Polich: Can't wait, man. Looking forward to it.

Abhijit: ...chat for a hell of a lot longer. But thank you so much for joining us.

Kyle Polich: Oh, my pleasure.

Abhijit: And thank you guys for joining us. Hit like on this video, and subscribe if you want more conversations like this. Kyle and many other people like him, skeptics and scientists, are going to be joining me over the coming weeks and months to answer the questions that we all have about the world around us.

And that's what we do here at Rationable, right? So thank you guys, thanks for watching. Subscribe, and share this with the people who have been asking about AI. Until next time, stay Rationable, and I'll see you later.