AI - technology breakthrough, or the end of humanity?
The Why? Curve · November 02, 2023
39:06 · 35.94 MB


Artificial intelligence is everywhere - and politicians and business leaders are rushing to get on top of what could be an advance bigger than the Industrial Revolution. But could it also be a risk to human life on the scale of an asteroid collision or nuclear war? Is there any practicable way to control something we barely understand? Or will caution stop us from reaping the huge benefits for universal prosperity? Tony Prescott, Professor of Cognitive Robotics at Sheffield University, lays out to Phil and Roger both the risks and the gains from AI.


Hosted on Acast. See acast.com/privacy for more information.

[00:00:00] The Why Curve, with Phil Dobbie and Roger Hearing. Is it an advance on the scale of the Industrial Revolution, or a threat bigger than asteroids, pandemics, and nuclear war? So what is the truth about AI? And is there any effective way to control it without losing the advantages? For

[00:00:44] are we in the last days of mankind? The Why Curve. So, you know what, I used ChatGPT, which is a form of artificial intelligence, to ask: who is Roger Hearing? And it said: Roger Hearing is a journalist and former news presenter.

[00:01:02] He's worked as a presenter on the BBC World Service and on Bloomberg Radio. That's all it said. Yes. It didn't actually go into a great deal of detail. There's nothing else to be said! So, for a start, that's wrong.

[00:01:13] So then I said: who is Phil Dobbie? Well, it says: Phil Dobbie is a journalist and former news presenter. He's worked as a presenter on the BBC World Service and on Bloomberg Radio. I think it's got me confused with you.

[00:01:26] Yes, we are the same person; we speak at the same time, you see. It's only talking to one of us. Yeah, it's terrifying.

[00:01:38] But yes, this is an issue, because AI gets it wrong. Yeah. Well, I mean, wrong information in, and wrong conclusions out. So imagine if that was then taken as fact. Yes. I could be offered a job based on your experience.

[00:01:50] And a bonus to whoever gives me the job. Obviously. Of course, it could equally be the other way around. Yes. So I also asked Bing AI, which is ChatGPT, isn't it: what are the risks and dangers of AI? And it's given eight.

[00:02:05] We can do it, we've got the whole show; it's done all the research for us. Come on. Misinformation. Yes. It can create fake images. Yeah. Then privacy concerns. Job losses, you know, automating jobs that would be done by humans. Bias and

[00:02:17] discrimination: it can perpetuate the biases that already exist in society. Then market and financial volatility. Yes, decisions based on the incoming data could destabilize the markets. And why? Singularity: the idea that it could surpass human intelligence.

[00:02:32] So you just have machines and not us. Yeah, it's doing all the thinking, with no critical thought from us. Weaponization: it could be used to build chemical weapons or other dangerous things. Yep. And misinformation campaigns, which we see a great deal of. Obviously. So there we are.

[00:02:50] That's the program. I'm sorry. Well, yes, but we are going to delve into it a little deeper, because it is very much of the moment. Everyone's talking about it, and everyone seems to be aware of both the dangers, which we've just

[00:03:02] been listing, but also the advantages. I mean, the things it could do: medical breakthroughs, all kinds of stuff. And Rishi Sunak's summit is happening this week, all of a sudden. So we wanted to look into: is it a threat?

[00:03:11] Is it more of a threat than it is an advantage? So we found someone who can give us that, who is human, thank God. Who is Tony Prescott? He's very good. Tony Prescott, Professor of Cognitive Robotics at Sheffield University.

[00:03:25] So Tony, Rishi is inviting a bunch of representatives from around the world to his summit at Bletchley Park. He has said that one of the dangers is the potential end of mankind from artificial intelligence. Is he overplaying it a bit?

[00:03:43] I think you can see a path towards scenarios, not just scenarios, very unlikely but still possible ones, where AI becomes dominant on our planet. And that could spell bad news for humans. So, in a way, I mean, are we talking about

[00:04:05] a kind of complete Terminator scenario happening here? I think there are a lot of science fiction scenarios which are quite compelling, but actually quite implausible. But there are, nonetheless, other scenarios where you can see AI becoming out of control.

[00:04:22] And therefore you might want to start thinking now about how you would guard against them. So worst-case scenarios are things such as AI deciding, perhaps even for our own good, that humanity should be limited in some way, which might be against our wishes.

[00:04:45] And there are actually more plausible scenarios where people with bad intentions misuse AI, and that could be an existential threat. But I think AI itself could be an existential threat in the future, because we just don't know what the limit is of what we might build here.

[00:05:04] We could certainly build AIs that are more intelligent than ourselves, and second-guessing what they would want to do is difficult. Isn't there also the question of how far you allow it to carry through the conclusions it might draw from the information it has?

[00:05:18] So, I mean, it might be quite right, probably, and a lot of people are arguing now, that there are too many people on the planet, that we shouldn't have any more. The science fiction scenario is that it starts nuking the population.

[00:05:30] So AI might come out and say: there are too many people, you've got to do something about it. It's when we allow it to do something about it itself that we've got the problem. Yeah, I think there are two things here.

[00:05:40] One is, as we design AIs, how can we build in safeguards so that they don't try and do things which are against our interests? People have thought about that for a long time, famously Isaac Asimov's Laws of Robotics and so on.

[00:05:59] And it's actually quite hard to do, to think what it is that we want AI to do which won't run into the sort of King Midas trap, you know, of wishing that you could turn things to gold by touching them and finding out that that, you know,

[00:06:13] turns your food to gold as well. Exactly. So how do we write this optimization function for AI, this is what you should do, in a way that is foolproof against those kinds of cock-ups? So that's one aspect.

[00:06:30] The second aspect is how do we limit AIs so they don't have too much power, so that if they decided to do something which was against our interests, they couldn't put it into action? Well, let's go right back to the basics on this.

[00:06:42] We've kind of gone straight into the current issue. But what is AI? Because it's a term that's certainly been around, but much, much more in the last couple of years. Suddenly it's up there in front. So I suppose two questions: one,

[00:06:55] what is the best definition of AI, and two, has there been some major leap forward which is the reason we're all taking it so seriously now? Well, AI is the attempt to build machines that think intelligently, which begs the question of what intelligence is,

[00:07:11] and generally what we understand by that is being able to do things that humans do that we regard as intelligent, and fundamentally that's things like problem solving, language, creativity, science, all those sorts of things we would see as intelligent. And if we see those in a machine,

[00:07:32] then we would probably recognize that as intelligent. So AI has been progressing in fits and starts over the last five decades, maybe longer; it started really about 1940. But in the last 10 or 20 years there has been a significant acceleration, and I think that's what has made people nervous now.

[00:07:57] We're seeing a lot of it in the consumer space, because, for example, Bing has supposedly got an AI interface now. And when you use it, it seems like it's scraping the internet and paraphrasing stuff. So I asked: what is AI?

[00:08:14] It says: AI is a branch of computer science that focuses on creating machines that can perform tasks that typically require human intelligence, such as learning, reasoning and problem solving. And then it credits a link to tech4freshard.com, which has almost, but not exactly, that quote.

[00:08:30] I mean, it's paraphrasing. That's what we seem to be seeing a lot of: just a paraphrasing of what's already on the web. That's essentially what the large language models do. So the latest excitement about AI

[00:08:46] has come about because of these large language models, of which ChatGPT is the best known, but there are several of them. And what those models have done is collect a huge amount of language data, largely off the internet and also from books and so on.

[00:09:02] And then they've analyzed that with a very large artificial neural network with billions of parameters in it. And essentially, that's memorized huge chunks of text, but it's also learned how words connect to each other. So yes, it is a kind of paraphrasing,

[00:09:19] partly paraphrasing, but it can also summarize, and it can also synthesize things in a way that can be surprising to people. So it doesn't go beyond human knowledge, because everything that's on the internet at the moment is generated by people.

[00:09:36] But it can sort of summarize that back to us, which is quite a useful function. But it's quite a dumb function, isn't it? We talk about intelligence, but someone can cut and paste from the net and possibly even substitute different words in it.

[00:09:49] But to actually think, to construct, if you like, beyond what is there, to put two things together and come to a separate conclusion, that is artificial intelligence. There's still a human at the beginning of all of this who has to define what AI is, isn't there?

[00:10:04] Well, I think a large language model can put two and two together in that sense, in that it can say things that have never been said before, which is one of the fundamental features of language that was felt to be unique to humans.

[00:10:18] It can create new sentences that people have never used. So what it learns about is how words occur with other words, both sequentially and over relatively long chunks of text. And it can do that to answer any question you give it.
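(An aside for readers: the idea Prescott describes, learning how words occur with other words, can be sketched with a toy bigram model in Python. This is a drastic simplification of a real large language model, which uses a neural network with billions of parameters over long contexts; the corpus and function names below are purely illustrative.)

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, which words follow it and how often."""
    words = text.lower().split()
    follows = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        follows[a][b] += 1
    return follows

def predict_next(follows, word):
    """Return the most frequent follower of `word` seen in training."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

# Tiny illustrative corpus: the model "learns" only word adjacency.
corpus = "the cat sat on the mat and the cat slept"
model = train_bigrams(corpus)
print(predict_next(model, "the"))  # "cat" follows "the" twice, "mat" once
```

A model like this can already produce sentences its training text never contained, by chaining predictions; what it cannot do, as the discussion goes on to note, is connect words to anything outside the text.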

[00:10:35] And it does so in a remarkably human-like way, and it pulls together pieces of knowledge from disparate parts of the internet to do that. So I think it's quite impressive. It's not simply taking a piece of text and summarizing it; it's finding links between bits of information

[00:10:52] that it has gathered from different places, which are relevant and address your question. So, I mean, we don't know what human creativity is. We know that probably part of human creativity is drawing those sorts of links from having amassed large amounts of knowledge.

[00:11:10] So I think it is reflecting something that we could call intelligence. So lots of people write books where they do precisely that: they've got a bit of expertise in a particular area, and they refer to other works and try and draw conclusions

[00:11:23] that haven't been drawn before, but there's a human brain working on that. So what you're saying is, okay, that human brain could actually be replaced by machine learning, by artificial intelligence, which is drawing those conclusions and adding something else to it

[00:11:39] based on the experience that the machine has, and that could be greater than the power of a human to do that. Well, I mean, these large language models have already amassed a huge amount of knowledge. A human couldn't, in their lifetime,

[00:11:54] read all the material that ChatGPT has read, and certainly you wouldn't be able to retain it to the extent that these large language models do. So already they're superintelligent in the amount of knowledge that they've amassed. But what they are lacking

[00:12:11] is any understanding of what that means beyond how words relate to each other. So the meaning of words is partly how they relate to each other, but a big part of it is also how they relate to the world, the non-linguistic world that we're sharing.

[00:12:25] It's interesting you use the word understanding there, because that is a very key thing. I suppose there are things like self-awareness: is that something that could ever happen with computers, and therefore understanding at that level?

[00:12:40] And also, can they talk to each other? We're talking about several different things there: can they communicate with each other? Well, one of the issues with the large language models is that they are something of a black box.

[00:12:52] So you can query them, you can get responses from them, but you can't so much get inside them and connect up what they understand at a deeper level to other systems. So that's something that people are working on. There are lots of different kinds of AIs,

[00:13:09] is the other thing to say. So a large language model is a particular sort of AI that's good at processing and using language. There are AIs for vision, for analyzing visual scenes; there are AIs for planning; there are AIs for mathematical theorem proving.

[00:13:30] All of these things can potentially be connected up, and then you would start to have what people call artificial general intelligence. And once you've started to connect these things up, and there's no reason in principle why they shouldn't talk to each other directly rather

[00:13:46] than through the intermediary of a human, then you're getting a much more rounded set of capabilities. There's still the question of whether the AI actually understands anything, but I think once you connect the AI to sensors such as cameras and microphones

[00:14:03] and so on, and maybe connect it to robots so it can move around in the world and act in the world, then you're starting to give the AI the same kind of direct experience that our human brains have. And I think it would be quite hard to argue

[00:14:19] that an AI that could sense the world directly with its cameras, and then talk about what it sees, didn't have an understanding of the world. It would be different to ours, but it would still be a form of understanding. Because people talk about consciousness

[00:14:32] arising in the machine, the idea that, even for humans, we don't really know what mind is, and consciousness, and how self-consciousness comes about. And is there an implication that, if you have a computer sufficiently sophisticated, it would just somehow naturally arise from these inputs?

[00:14:50] Well, I mean, it could, and there's no principled reason to think why it should not. I mean, if we go back a few hundred years, it was very difficult for Descartes to imagine that the mind was the same as the body; famously, he was a dualist.

[00:15:08] Now most scientists are materialists, and they take the view that the mind and the brain are one thing, and that the mind is this virtual entity that's generated by the brain. So we've made that leap. And I think we might well make a leap

[00:15:26] towards understanding consciousness as also being a property of physical systems, such as the brain, in which case there is no good reason to think that there couldn't be an artificial form of consciousness because why should consciousness just reside

[00:15:43] in biological entities? If you had the right kind of architecture, why shouldn't a suitably intelligent machine or robot have self-awareness? So I think we need to be careful about thinking that this thing, consciousness, is just for animals, just for humans. Because we've seen in the past

[00:16:02] that things we thought were unique to humans aren't necessarily that, and that we can create artifacts which have these properties. That sounds terrifying. We're starting to get into fundamentals here, right down to the bottom of Maslow's hierarchy of needs, for machines.

[00:16:18] Come on, machines: they don't need to be fed or sheltered, but do they want to survive? Well, I think there's a certain amount of anthropomorphizing going on there. Because why should an AI care about surviving? The reason that animals and humans care about that is that we're evolved beings.

[00:16:38] And we have this sort of primal urge to survive and to reproduce. So if we didn't have that, we would be gone. Yeah, good point. But you don't actually have to build those into our machines, and our machines could be much more relaxed

[00:16:52] about whether they exist or not. Right, well, thank goodness for that. But what about fundamental issues that society faces: can they be fixed by AI? So, for example, we've gone through a few decades where we've had a vast expansion of wealth on the planet,

[00:17:11] but also a widening rich-poor gap. So could AI help? There are lots of people who obviously have opinions on why that's happening, and it becomes politics; it becomes left-wing and right-wing views as to how you deal with this sort of discrepancy.

[00:17:28] You throw machine learning into the middle of that: is it going to come up with an answer? And where does politics get into all of that? So I guess one of the questions to ask is,

[00:17:37] is AI going to be right-wing or left-wing? What is it going to end up as? Well, I don't think it's either. I mean, I think AI is going to be a powerful tool for helping us understand our future.

[00:17:49] I think a good model for this is to think about what people have done in terms of understanding climate change. So we understand climate change, and we can predict the future of our climate and how, for example, carbon emissions affect it.

[00:18:04] And we do that because we have these hugely powerful computer models, and to me, they are a form of AI. And we have human-AI teams who are trying to understand the climate, and trying to understand how greenhouse gases affect the climate,

[00:18:19] and what things we can do to mitigate that. And that's been largely successful in changing the conversation from, you know, should we worry about climate change, to: yes, we should worry about it, we need to do something, so what should we do now?

[00:18:37] So I think we have used these really complex, intelligent computer programs to help us understand how to fix the climate. We haven't followed through yet to do all the things that we need to do, but more and more we're relying on the computer models to tell us

[00:18:53] what the solutions that we could use would be. And economics could presumably follow in the same way: another incredibly complex thing, with vast amounts of analysis and a fair amount of AI already involved, of course, in investment and other things. Absolutely.

[00:19:05] And I think the problem right now is that the people who are using economic models that are informed by, and maybe use, AI are doing it to drive profit for their own companies. They're not doing it to particularly address wealth inequality, or reduce poverty, or any of those things.

[00:19:24] So what we should be doing is using the power of AI to address those questions, and to question the models that are being used. So there's a great deal of questioning now about whether central banks are using the right models in continually pushing up interest rates

[00:19:41] to try and control inflation. I mean, maybe AI can say: well, actually, you've got it all wrong. Well, yeah, there's an issue called AI stupidity, which is not so much that the AIs are stupid, but that they're quite limited, so they can reason about certain things.

[00:19:56] They can maybe tell you how to boost your profits, but they're not necessarily aware of the repercussions that that could have in wider society. So there's a risk of overreliance on AIs in areas like trading, which could lead to market crashes and so on.

[00:20:13] So we have to build smart AIs that can see the bigger picture. Because the danger, isn't it, is that it just reaffirms current thinking rather than introducing critical thought? Well, which is why we need the right kinds of AIs. We need the kinds of AIs, like climate models,

[00:20:29] which can predict the future, and we can use them. So essentially, if you have an AI that can predict the future, then you can say: well, this is the future we want; what do we have to do in order to get closer to that?

[00:20:42] So an analogy is the Oracle of Ancient Greece. You could ask the Oracle any question and you'd get an answer. And I think AIs could be used like oracles in this sense: you could ask the AI, how do we reduce wealth inequality?

[00:20:58] And it could perhaps say: well, you could restructure your economic system in this way, and that would reduce wealth inequality. It's then a case, obviously, of getting everybody to go along with that. But I think it would be a good way

[00:21:11] of finding out things we could do to make the world better. And you can apply it to climate. You can apply it to the economy. And you can apply it to all the different challenges we have. And I think that's the way to do it.

[00:21:25] So we could start to address all of those. But one of the problems with the oracles in Greece, famously, is that they would say something and it would be so hard to determine what it meant: it would be open to interpretation by anyone, in any number of different directions.

[00:21:38] And isn't part of the problem, as with all computers, that the building of it puts in a whole series of assumptions and ways of doing things? We certainly see accusations that, because computer systems are currently largely built by middle-aged white men, or younger white men,

[00:21:55] there is a kind of bias implicit in the way things are done and the way computers operate and everything else. And that will be the same with AI: what you put in determines what you get out. It's actually true that currently AI is biased,

[00:22:09] and that's perhaps because of some bias in the developers, but I think more fundamentally there's bias in the training sets. So we're taking data off the internet, and we're using that to train AIs. And that means that whatever bias there is in the internet,

[00:22:26] which is broadly the bias that humans have, is going to be inherited by the AI. So there's that risk, particularly when you're talking about these learning models; the progress in the last 20 years has been a lot about machine learning models.

[00:22:43] But the thing is, you have to put that together with other kinds of AI which are able to think deductively, or think scientifically, or take the output of a machine learning model and ask what the actual causal mechanisms are

[00:22:58] that would make you think the conclusions of that model are true. So you can combine these different tools. And the other thing you can do, of course, is check whether the AI is making successful predictions. And if it's not, you can change the AI.

[00:23:12] So these things will evolve and get better over time, but we do have to be on the lookout for bias, and in particular for putting our own biases into the AI. So I guess the danger is letting it ultimately draw the conclusion, rather than a human being.

[00:23:27] So, I mean, I can see a huge advantage. If you've got an issue that you want to address, a big societal issue, you're holding a summit and you're getting some of the world's great thinkers, AI might be part of that, and is actually there throwing out some arguments

[00:23:41] for discussion, with some reasoned information behind them, but the call is ultimately made by human beings. That sounds like a great outcome. Well, I think it is, isn't it? Because with the AI, you have the possibility to think long term, and to put in what it is

[00:24:01] you think should be the balance between immediate positive things happening in the near term versus positive or negative things happening in the longer term. And the problem we have with governments is that there's a lot of near-term thinking.

[00:24:18] I mean, Rishi claims to be thinking for the long term, but he's really not thinking about anything beyond the next election, let's be honest. Having an AI summit this week sort of seems to have come from nowhere, just when he's struggling in the polls.

[00:24:33] It wouldn't need to be a particularly smart computer to anticipate that one. I mean, it's clear that the sort of short-term thinking which has governed British politics for decades has resulted in a number of the current crises.

[00:24:47] For example, the housing crisis right now, because successive governments have failed to build enough houses. An AI, you'd have hoped, would have seen this coming. So it's a case of finding the right balance in how you use these kinds of intelligent models, I think,

[00:25:05] rather than thinking of them as AIs which have their own agenda, because I don't think they will have their own agenda, certainly for the foreseeable future. At least not until they get consciousness. Yeah, so they will be systems that we can use

[00:25:18] to understand and predict the future. And then people will be able to say: well, we would like this vision of the future and not that one. But we want our politicians not just to be thinking about the near term; we want them to.

[00:25:32] So probably we want to make these kinds of models democratically accessible. So that was where we were going to come in, because you mentioned bad actors earlier: potentially ways in which AI could go badly wrong for us, not necessarily by doing it itself,

[00:25:46] but through the people who use it. And one of the things that was discussed, or has been discussed, at this summit was controls on AI. Is it really possible to institute anything that's going to effectively control the development and direction of AI?

[00:26:00] I think it's difficult, but it's been done before, for example with nuclear weapons, where controls have significantly limited nuclear proliferation. There are differences, obviously, in that nuclear weapons require access to things you can restrict people's access to, like plutonium and so on.

[00:26:21] Whereas AI just requires powerful computers, so it may be harder to limit access to AI. But you can also think about various safeguards. So you can avoid putting AIs in charge of critical infrastructure, for example, with no intermediary.

[00:26:40] If you have a bad actor who can take control of an AI which is controlling your power stations, for example, then they could cause explosions to take place, and there could be really devastating damage. And governments have worried about this for some time.

[00:26:56] Cybersecurity is about preventing bad actors from doing these sorts of things. Bad actors can be empowered by having AI tools, so we need to worry about that. AIs will be good at breaking codes; they will be good at getting around security systems.

[00:27:14] All these sorts of developments could come through AI, which would make it harder to protect our institutions. So, Alex Karp was on TV over the weekend, on the BBC. He's the CEO of Palantir Technologies, who have been doing AI work with NHS records.

[00:27:30] Which actually raises a question about what AI is being used for now, but we'll park that one. He said: I believe that we are in an arms race and the world is fracturing, and if it was my decision to invite adversarial countries,

[00:27:42] talking about the summit this week, I would not have. So is the world going to be split between those who, you know, embrace AI and those who do not, and the different uses AI is put to? And what about the companies that are throwing themselves headlong into this?

[00:27:57] What are they hoping to gain? Are they thinking that this is going to give their economy a competitive advantage? Or is it a defense thing? Why does the world need to be so split on this?

[00:28:07] Well, I think everybody is pretty much fixated on achieving AI, and there is an arms race. So there are risks, obviously, in sharing insights with your competitors. But at the same time, if there are going to be ground rules,

[00:28:26] they need to be agreed by everybody, because there's not much point in the West agreeing to limit developments in AI in one direction if China is going headlong towards that objective. So I think it's difficult to find the balance there,

[00:28:42] but there needs to be a global conversation about how we do this. So at the moment a lot of the breakthroughs are made in the US and in Europe, and then China piggybacks on that technology. But it's also developing its own AI technology,

[00:28:59] and could easily surpass the West in certain areas; it may already have done so. But if they're doing that to create the expertise and the technology to sell on, we're never going to buy that, are we? Because it is a black box.

[00:29:12] I mean, we don't know. And it's too difficult to pull an AI solution apart, isn't it? We don't know what the biases are within the technology that they develop. So there's going to be huge mistrust of anything that comes from what's not considered a friendly nation, isn't there?

[00:29:28] I mean, I think there are risks. With a lot of these things, the consumer has the right not to use the product, but the problem is that we're progressing towards, for example, a surveillance society that we didn't think we wanted. And that's being enabled by artificial intelligence.

[00:29:49] And these things are quite incremental. We have cameras on the streets, and that seems a good idea. But then suddenly the camera on the street can do facial recognition, and it can pick your face out of the crowd. And you say: well, actually, is that what we wanted

[00:30:04] when we put cameras in the street? And that's one of the risks with AI: because it can move quite quickly, it can be very difficult to keep up with it if you're trying to create regulation. So one of the debates perhaps they'll be having

[00:30:21] this week is about privacy, and how AI is invading privacy and enabling things like facial recognition in places where we don't want it. So what about what it's doing now? I mentioned Alex Karp: his company got into the NHS. They offered their services for free,

[00:30:38] so they could help try and sort out something to do with NHS records; don't quite know what. But can you give us examples of how AI is being used commercially now, for the betterment of the organizations, or the countries, that it's working in?

[00:30:52] Well, I mean, AI is making incredible advances in healthcare. The great power of AI is to take huge amounts of data and find patterns in the data that wouldn't be obvious to a human, even an expert human. So in diagnosing rare diseases,

[00:31:12] or identifying potential new treatments for disease, people are now extensively using AI. So AI can come up with potential new therapies. It knows things about the properties of different drugs, and it can know, if you created

[00:31:30] a new kind of compound will have it would interact with other compounds and so on. So I think the potential for new medical treatments building on AI is fast and really quite exciting. I'm not sure about the reorganizing the NHS records.

[00:31:47] I'm sure there's scope for introducing some artificial intelligence. Can I ask about data privacy? There are data privacy issues, but there are data privacy issues as soon as you put your data on the internet in a way that people can access it.

[00:32:00] So being accessed by AI, the issue is already there, in the sense that there are lots of people able to access this data for all sorts of different reasons. As we come towards the end of our discussion here, which has been absolutely extraordinary.

[00:32:15] We've gained some insights into what is potentially our future. I mean, the central question in a lot of people's minds: should we fear this more than see its opportunities? Is this something to be welcomed? Something that's just inevitable and we have to accept it,

[00:32:28] or is it actually opening up a world of medical and economic and even climate science advances that will be hugely beneficial to us? Where is the balance? And one thing we could fear, just to throw in at the end as well,

[00:32:40] because we haven't had the chance to discuss it yet, but it seems quite big to me, is this whole idea of deepfakes, and the idea that public opinion can be swayed by seeing stuff that actually didn't happen.

[00:32:54] And I know this is why Elon Musk wants everybody who's on Twitter, or whatever it's called these days, X, to be identified as a human being, so that you can't get public opinion being driven by machines pushing agendas

[00:33:11] supported by deepfakes, and the technology there is quite scary. So that's a worrying side of it all, isn't it? Well, I mean, it's not new. Bots have been on Twitter probably since it began, swaying public opinion,

[00:33:25] and I think the likes of North Korea and Russia probably have armies of bots right now trying to do it. But if they've got one agenda which they've got to pursue, they can duck and dive and even change that messaging based on how it goes.

[00:33:43] Yeah, but people have been faking photos, you know, go back to the Stalin period, people being erased from photos and things. That image fakery isn't new. It's just the sophistication of it that's new. Yeah, I mean, we will just have to learn

[00:33:57] to be less trusting and more suspicious. You won't be able to trust video anymore, because you can substitute Joe Biden for whoever was the figure in the video, and it looks as though he was doing something he shouldn't.

[00:34:12] So this is just a future which is, you know, both good and bad. That's the bad side of it, but there are enormous opportunities for creativity now with video that didn't exist before AI was able to manipulate it. So I think there is a new world,

[00:34:30] and it's as dramatic a change in the world as the invention of the computer or the invention of the internet. I mean, arguably it's the first time in human history, or at least certainly the first time since the Neanderthals left, that there'll be another form of intelligence

[00:34:49] on the planet with which we can communicate, and which will be at our level, even beyond our level, and that could be a fantastic opportunity. I mean, we haven't talked about space exploration, but I think the opportunity to get out into space and explore our solar system and beyond

[00:35:07] is really opened up by AI and robotics, and if you think about the long-term future of humanity, space, I think, is an important part of it, and that's a really exciting thing that we'll be able to do now. I have to say, I've avoided showing myself up

[00:35:21] as a Hitchhiker's Guide geek by saying the answer to everything is 42. And there's the great intelligence of the dolphins, and the Earth as the giant computer that works it all out. But yeah, all right, well, just one final question: why now?

[00:35:35] It feels like this has all of a sudden become very big news. Has there been some sort of major movement forwards over the last year or two that's brought this into the headlines? I mean, I think the large language models

[00:35:48] are remarkable in terms of what they can do and how they work, but they're not actually that big an advance from where AI was even 30 years ago. What has changed is the availability of really powerful computers, sort of server farms, that can train these huge neural networks

[00:36:09] and make it plausible to build a neural network with billions of parameters in it, and then huge data sets which you can harvest off the internet. So back in the 1980s and 90s, when I was doing my PhD, we were using slow, small computers,

[00:36:26] and we had to generate our own data. Now we have these data sets we can use, and these really powerful server farms to train the models. So that's what has made the difference. And what we've discovered, in fact, is that these learning algorithms are extremely powerful,

[00:36:44] probably as powerful as anything we have in our own brains. But what we don't have yet is what I would call the cognitive architecture, the putting of all these parts together that you have in a human brain, which makes us so adaptable

[00:36:57] and which gives us what you could call general intelligence. But there isn't any reason in principle why we couldn't make artificial general intelligence, and that will definitely be a significant further step, but we're not there yet. The moment of self-consciousness, now that's even more worrying, but there we are.

[00:37:15] Potentially. Tony, thank you so much for doing that. Scary, but I think there's a lot of hope as well. And you know, we're worried about the consciousness of computers; I'm worried about some of the politicians that we've got, to be honest.

[00:37:29] They could be replaced by a computer, my PS4 probably, anyway. Tony, thank you for being with us. Yeah, thanks very much. Bye bye. Because, as Douglas Adams said, it's the dolphins with the smartest consciousness on the planet, and we were only the second most intelligent.

[00:37:43] Which brings us nicely on to what we're going to talk about next. Well, indeed, animals. How intelligent are animals? Well actually, more to the point, should they have rights? Should certain animals have rights? There's been a big series of cases in the US, and here,

[00:37:56] and in Europe, about the extent to which animals have any innate, inalienable rights to anything, and how we should accord them rights. So we're talking about the right to life, the right not to be hurt, the right to, I don't know,

[00:38:11] have some quality of life. I mean, you can define these things any way you like, but does it make sense to give animals rights in any way? We can't ask them, obviously, what rights they'd require. And if you asked an animal rights activist,

[00:38:25] would they actually know what rights the animals are fighting for? Well indeed, and does it make sense to give the same rights to an ant as it makes to give to an elephant?

[00:38:36] But no, no, these are the things people are talking about. Yeah, the rights the animals themselves would ask for, probably, we could kind of understand. Maybe the machine learning world is starting to figure it out. We'll sort it out anyway.

[00:38:46] We will certainly talk about this next week on The Why Curve. So do join us for that, that's the way it works. And maybe share it a little too. Okay, do listen in.

[00:38:55] Absolutely, that's next week on The Why Curve. Thanks for listening this week. Bye. The Why Curve.