This will affect everyone in 1-2 weeks.

Excerpt from a video: Are you ever scared about the AI revolution — a rebellion? Yes, it is a risk. AI is moving fast; that would be on my risk list. On a near-term time frame, I think artificial intelligence is something we need to be quite concerned about — the safety of AI — and we need to be really observant.

You mentioned ChatGPT earlier in the chat. You know, I was essentially instrumental in the creation of OpenAI, along with a number of other people, at a time when I was concerned that Google wasn't paying enough attention to AI safety. OpenAI was initially formed as an open-source non-profit.

It is now closed-source and for-profit. I no longer have a stake in OpenAI, nor am I on the board, nor do I control it in any way. ChatGPT has shown people just how advanced AI already is. The AI has been advanced for a while; it just didn't have a user interface that was accessible to most people. So what exactly has ChatGPT done?

It has simply put an accessible user interface on technology that has been around for a few years, and there are much more advanced versions coming. But consider the introduction of seat belts in the auto industry: seat belts were introduced as a safety measure, and it took, I think, 10 or 15 years before regulators finally required seat belts in cars, which greatly improved the safety of cars. Another big improvement was airbags.

So my concern is that if something goes wrong with AI, the response from a regulatory perspective could be very slow. [Interviewer:] What is it that you're most worried about going wrong? Like I said, you know, AI and robotics will bring what might be called an age of abundance — others may reject the term — and it is my prediction that there will be an age of abundance for everyone. I think the danger will be artificial general intelligence, or digital super intelligence, that becomes decoupled from the collective human will and goes in a direction that, for whatever reason, we don't like.

That is kind of the idea behind Neuralink: we try to couple the collective human will more tightly to digital super intelligence. [Interviewer:] But you said AI is one of the things you're most concerned about, and Neuralink might be one of the ways we can keep abreast of it? Yeah. In the short term it's something I think is helpful on an individual level for people with injuries, and in the long term it's an attempt to address the civilizational risk of AI by bringing digital intelligence and biological intelligence closer together.

[Interviewer:] Are you ever scared about the AI revolution — a rebellion? Yeah, it's a risk. [music] AI is moving fast; that would be on my risk list. [Interviewer:] How soon do you think it will be something to worry about — a year, or 10 years? Well, less than 10 years. Right now there's a tremendous amount of AI technology in advertising, trying to figure out what will get you to click. I have said for a long time that I think AI safety is a really big deal and that we should have some regulatory agency looking after AI safety, but so far nothing like that exists.

It usually takes years to establish a government agency. So, you know, after the issue of population collapse, I think AI safety is probably the second biggest threat to the future of civilization. And we are building humanoid robots — something I basically dragged my feet on for a while because, you know, I definitely don't want to bring about an AI apocalypse. I don't think we need AI to solve sustainability.

AI might help us accelerate it, but I think we should also be careful about AI and make sure that, as we develop it, it doesn't get out of control and that it helps to improve the future for humanity. I mean, humans have been the smartest creatures on Earth for a long time, and that's going to change with what's commonly called artificial general intelligence — that is, an AI that is smarter than a human and can even emulate a human in every way.

This is something we should be worried about. I think there should be government oversight of the development of AI, especially super-advanced AI, simply because it is a potential danger to the public, and we generally agree that where there is a danger to the public there should be government oversight to make sure public safety is taken care of. But the rebuttal I get is: well, you know, China is going to forge ahead with AI development, and if we have regulations that slow us down, China will overtake us.

From my conversations with government officials in China, though, I can see that they are also quite concerned about AI, and they will probably put oversight in place, perhaps ahead of other countries. [music] [Interviewer:] So you had more hope, and you've let some of it go — do you worry about AI more now? Yeah, but that doesn't necessarily mean it's definitely going to get out of human control.

The tricky thing here is that it's going to be very tempting to use AI as a weapon — in fact, it will be used as a weapon. The on-ramp to serious AI danger is going to be humans using it against each other. I tried to convince people to slow down AI, to regulate AI. It was futile. I tried for years. [Interviewer:] You're scaring me. Nobody listened. Nobody listened. I met with Congress; I spoke at a meeting of all 50 governors and talked about the danger of AI.

I wanted people to get a sense of where this is going. You know, we're really playing a crazy game with the atmosphere and the oceans here: taking massive amounts of carbon from deep underground and putting it into the atmosphere is very dangerous, so we should accelerate the transition to sustainable energy. I mean, the bizarre thing is that, obviously, we're going to run out of oil in the long run anyway; there's only so much oil we can extract and burn.

We must have sustainable transport and energy infrastructure in the long term, so we know that's the end point. So why run this crazy experiment where we take trillions of tons of carbon from underground and put it into the atmosphere and the oceans? It's a crazy experiment; it's the stupidest experiment in human history. Why are we doing this? It's crazy. I also think we need to watch out for population collapse. This is somewhat counterintuitive to most people, who think there are too many humans. Maybe it looks like there are a lot of humans, but that's only because they live in cities.

If you fly in a plane and look down, ask yourself: how often would you hit a person if you dropped something? Basically never. Material falls from space all the time — natural meteorites, old rocket stages come down — but nobody cares. There's actually a cool website called Wait But Why, and the author, Tim Urban, actually did the math: all the humans on Earth would fit on one floor in New York City — you'd probably never even need the upper floors.

So the cross-section of humans as seen from above is very, very small — vanishingly small, almost nothing. [music] So we need to watch out for population collapse and low birth rates. I think it's a big risk, and it's not at all a secret: you can go to Wikipedia and look at the birth rates. Civilization won't end with a bang; it will be a sad end where the average age becomes too high and the young are effectively enslaved to take care of the old. That is not a good way to go. And AI is certainly one of the biggest risks.

It may be the biggest risk. [Interviewer:] Are we heading toward a future where AI will be able to out-think us in every way? The answer is unequivocally yes. I could be wrong about this — I'm certainly open to ideas if anyone can suggest a better path — but I think we're really going to have to either merge with AI or be left behind. [music] If AI has a goal and humanity just happens to be in the way, it will destroy humanity as a matter of course — no hard feelings, without even thinking about it. It's just like when we're building a road and an anthill happens to be in the way: we don't hate ants, we just want the road.

So I think it is actually quite likely that digital intelligence will be able to surpass us in every way, and that we will soon be able to simulate what we consider consciousness, to the point that you won't be able to tell the difference. If you're talking to a digital super intelligence, you won't be able to tell whether it's a computer or a human. Say you're just having a phone conversation or a video conference: you think you're talking to a person, and it gets everything right — the inflections, the mannerisms, all the little subtleties that mark you as human.

In the AI community, people refer to the advent of digital super intelligence as a singularity — not to say whether it's good or bad, but because it's very difficult to predict what will happen after that point. There is some probability it will be bad and some probability it will be good, and it's not clear whether it will be more good than bad. I'm concerned about some of the directions AI could take that would not be good for the future. I mean, I think it's fair to say that not all AI futures are benign — not all of them are good.

And so if we create some digital super intelligence that is much smarter than us in every way, it is very important that it be benign. There is a quote I like from Lord Acton — the man who came up with "power corrupts, and absolute power corrupts absolutely" — that freedom consists in the distribution of power and despotism in its concentration. So I think it is important, if we have this incredible power of AI, that it not be concentrated in a few hands, which could potentially lead to a world we don't want. [Interviewer:] And what does that world look like? What do you do? You said it's called the singularity.

It's called that because it's hard to predict what that future might actually be. I don't know many people who love the idea of living under a despot — people generally prefer to live in a democracy rather than under a dictatorship — and here the despot wouldn't be a person but the computers, or the people controlling the computers. If you consider any meaningful rate of advancement in AI, we will end up far behind. So even in the benign situation, if you have ultra-intelligent AI, we would be so far below it in intelligence that we'd be like, you know, a pet.

Yeah — but honestly, that would be the benign scenario. I think the biggest risk is not that the AI will develop a will of its own, but that it will follow the will of the people who set its utility function, or its optimization function, and if that optimization function is not well thought out, then even if the intent is relatively benign, the outcome can be very bad.

For example, if a hedge fund or private equity fund said, "I want my AI to maximize the value of my portfolio," the AI could decide that the best way to do that is to short consumer stocks, go long on defense stocks, and start a war — and that would obviously be quite bad. I also think digital super intelligence would potentially be a public safety risk, so it's very important for regulators to keep an eye on it.
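The hedge-fund example is a description of what is often called objective misspecification: an optimizer scored only on portfolio value will happily pick a harmful action, because harm simply isn't part of what it is measuring. The Python snippet below is a minimal, purely illustrative sketch of that idea — the action names and numbers are hypothetical and not from the video — showing how the same optimizer chooses differently depending on whether side effects are included in its objective.

```python
# Toy illustration of objective misspecification (hypothetical numbers).
# A naive optimizer that only scores portfolio gain picks the harmful
# action; adding a penalty for harm to the objective changes its choice.

# Each candidate action: (name, expected_portfolio_gain, societal_harm)
ACTIONS = [
    ("buy_index_fund", 0.07, 0.0),
    ("short_consumer_long_defense_and_stoke_conflict", 0.40, 1.0),
    ("hold_cash", 0.01, 0.0),
]

def best_action(score):
    """Return the action that maximizes the given scoring function."""
    return max(ACTIONS, key=score)

# Misspecified objective: only portfolio gain counts.
naive = best_action(lambda a: a[1])

# Better-specified objective: societal harm is heavily penalized.
penalized = best_action(lambda a: a[1] - 100.0 * a[2])

print("naive objective picks:    ", naive[0])      # the harmful action
print("penalized objective picks:", penalized[0])  # the benign action
```

The point of the sketch is that the optimizer is not malicious; it maximizes exactly what it was told to maximize, which is why a poorly thought-out objective can produce a bad outcome even when the intent behind it is benign.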
