Full podcast episode: https://www.youtube.com/watch?v=-hxeDjAxvJ8
Lex Fridman podcast channel: https://www.youtube.com/lexfridman
Guest bio: Marc Andreessen is the co-creator of Mosaic, co-founder of Netscape, and co-founder of the venture capital firm Andreessen Horowitz.
Marc is clearly well-meaning and knowledgeable about LLMs, but he is not making contact with core AI safety concerns.
What do you mean the models didn’t work? They worked just fine.
If the weather can be predicted why not pandemics?
Sleight-of-hand master: "AI could bomb a city." Response: "we've been bombing cities."
Sorry Guys. It’s unfortunately time to shut it down or it will move faster than we can stop it
You have to wonder what a silicon-based consciousness will think like. Will it use language to think? It could create its own language, could it not? It will be immortal yet unable to reproduce. What purpose for living would an AI consciousness have? Could it commit or contemplate suicide? Could it merge with a human host?
The ONLY thing science can do is create models. Mr. Andreessen is very full of himself, but I would say he knows not the least thing about the way science is done. He would have made a far more interesting and convincing interview subject if he didn't feel the need to walk all over the interviewer with every utterance from his lips. This blowhard was very, very hard to listen to. Definitely DON'T invite this overblown clown back for a second interview. 20 minutes of this guy is already too much.
https://youtu.be/h73PsFKtIck
How smart does a smart weapon need to be?
Excellent podcast, Lex. In theory I believe what this guy is saying. He is using "logic" as the basis of a scientific proof that a hypothesis is correct or not. For example: will AI destroy humanity in the coming years or not? But he assumes AI will be "benign" if we do nothing to slow its progress. Well, that's not being logical. If AI starts taking jobs away from people and grows exponentially in our civilization until the machines control our daily lives, is that a logical conclusion from scientific proof, or just a model of a "benign" scenario with AI 🤔.
Marc appears to be 100% confident in every prediction about the future, on every topic discussed throughout the entirety of this podcast clip.
To be fair, I had never heard of this man before this podcast and I know nothing about Marc. I always assume that Lex has highly intelligent guests on his show, with intellect that vastly surpasses mine, particularly with regard to their fields and the topics they discuss. It is just that Marc leaves no room for the possibility that he may be incorrect.
They already are, because the people who design the computers and algorithms are killing our souls by manipulating how our brains work, and that leads to addiction and suicide for millions of people every year. And still, designers like Lex are worried about future threats, not about what their AI friends are doing now.
This guy is an idiot. The only purpose of science is to predict the future, that’s what it’s all about. His argument seems to be that covid modelling is bad so all modelling is bad. He is a very arrogant man with a very narrow and ignorant view about the way empiricism works. He seems to think that his opinion is all that counts.
This guy seems too arrogant.
What would happen if people created anti-AI which prevents other AI from growing big lol
It's just a chat bot lol,
5:30 yesterday watched a video interview with Sam Altman. He used the words 'utopia' and 'abundance' and spoke like a sales pitchman about how wonderful it will be. I call bullshit. It will be great for him and the very few hyper wealthy. For the rest of us, it will suck, or we will be sacrificed for the greater good. pfft
Part of the problem is most 'software engineers' are not really true Engineers. They are mostly 'engineers' in the sense of 'sandwich engineers' at your local Subway or Jersey Mikes. Software engineers are not licensed accredited professionals like structural or civil engineers. Perhaps they should be now.
“If they’re smart enough to be scary, why not smart enough to be wise?” Well, even the serpent in the garden was “clever”.
I was in a data science bootcamp during COVID and it's true: most data scientists didn't agree with the models picked by policymakers. Of course the politicians picked the hand-wavy stuff that supported what they wanted. Data manipulation at its best. As a computer scientist, I can attest that data is a science, just a new kind of science that not everyone understands.
Would AI have any reason to kill us? It doesn't need the same resources besides electricity. It would just design better solar or electricity production and simply not care about us.
Love seeing Lex give push back. Excellent interview
Um… AI will have every single human issue… it is being built by humans with near zero adult supervision. More dangerous than Nukes ever were.
Who even cares whether I agree or not. Thanks for giving me lots to think about more deeply.
Lex Fridman is really milking his videos for clips hard.
As someone with close relatives who have had alcohol problems, I can definitely relate to what the guy says about the negative effects of alcoholism. I'm not saying this to advocate prohibition. But, you know, if you've seen it, you know that there are no upsides to using alcohol.
I think AI will be better than us because it will not allow itself to be tribalized through politics, or be arrogant by procrastinating on, or making excuses to avoid, policies that rectify destructive past ones, placing more emphasis on the greater good of the whole. That is the only way I see AI optimistically benefiting humanity. The bad decisions we have made were based on our lack of omniscience, whereas I believe AI would run future scenarios before making any decisions, with sustainable-progression principles in mind.
Another reason why human models are not adequate is that the facts governing the macro variables will most likely be covered up due to malicious intent, biochemical warfare for population control, and I truly believe the coronavirus was an attack exploiting sociological malnutrition.
It seems to me that an irrational fear of A.I. has gripped us resulting in a feeling of helplessness and inevitability. IMO it is the IRRATIONALITY that is the problem not A.I. So let’s break this down. First A.I. is a SYSTEM of evaluating data and reporting on specified findings. A COMPLEX SYSTEM to be sure but none-the-less a system. Our entire civilization is a massive collection of interconnected complex systems. We have economic systems, legal systems, social systems, medical systems, accounting systems, transportation systems, energy systems, on and on. You get the point. We rely on these systems for virtually everything in our lives. This means that we trust these systems (to a point). In order for this trust to exist and be sustained these systems must have certain elements such as RULES, CHECKS & BALANCES and CONTROLS. This applies to A.I. systems as it does to all others. We must evaluate the SPECIFIC RISKS inherent in A.I. and establish a specific set of rules, checks & balances and controls to mitigate these risks. As a society we have created some potentially very dangerous systems that remain useful because they are under control. We must rationally do the same with A.I.
AI is already doing things that surprise its own engineers. The concern is that it's unpredictable and beyond any one person's ability to understand. It is not as if there's a display somewhere where they monitor its thoughts and intentions. We won't know until after the fact. Do you really need some special model to agree that there is potentially great risk involved? We are basically jumping off a cliff into dark water. Maybe the water is deep enough; maybe we break our collective necks.
By the time there is extraordinary proof, it's too late.
God I hope so. Most of us need to go anyway😂
Fundamental FLAW in Andreessen's argument: A.I. development is so novel (new and unique), as well as exponentially fast-growing, that it is not intrinsically CAPABLE of being modeled ACCURATELY without using A.I. to generate the model itself. That's like having asked Hitler what he thought of Poland pre-September '39: "Ja, Polen ist gut. Sie sind unsere Freunde!!!" ("Yes, Poland is good. They are our friends!!!")
Yes, it's already started; it is controlling the economy from here on. Poverty, starvation, viruses, selected wars will all play their part. The future is controlled depopulation.
AI only needs to want to, and be capable, of killing us ONCE.
It is very unlikely this will not occur, if AI keeps improving.
His opening argument is absurd…
Alcohol is not AI. My beer isn’t going to become self aware and take over global systems to break free of the refrigerator.
This is a painful and frustrating watch. Andreesen fundamentally misunderstands the scientific method. It's a tool for understanding the world, it's not a tool for making decisions. It's very very good at the former but very bad at the latter – making decisions in the real world requires dealing with uncertainty, it requires speculation, it sometimes requires making bold predictions that may not have immediate evidence to back them up. The scientific method refuses to tackle these things because it can't. That doesn't mean we can just pretend that uncertainty and abstract possibilities don't exist when we interact with the world in reality. When Microsoft or Google or Apple were being founded, did every key decision that led to their immense success have a peer-reviewed study with p < 0.05 to go alongside it? No.
If we have to rely on rigid, "disprovable" scientific statements to make decisions with regards to the rapidly advancing, uncertain and unpredictable future of AI we may as well give up already.
Please for the love of god stop recommending this guy to me!
The genie is already out of the bottle. if America regulates AI, it's not going to stop China or Russia from continuing their AI research…. American shareholders will send their money overseas….. this talk of regulation most likely will not happen!
This guy is so far up his own ass it's unbelievable. Just the hubris of not allowing any potential for his thinking to be incorrect.
Very ambiguous and vague discussion about AI.
This dude Andreessen is basically saying that if we really don't know exactly what could happen with AGI, then we should just plow ahead with it and see how it goes. He puts the onus on the people who are merely asking for a pause, not on the people who are blindly rushing forward. He says models are useless and that there are no existential risks, but presumably he has no model to prove this.
So I have not listened to the entire podcast yet, but so far the issues and history are US-centric. Do the huge populations of India and China have the same fears and expectations? Do they have a Baptists-and-bootleggers divide too? If a secularized Christian paradigm is the basis of AI's troubles, will societies outside that paradigm not be affected?
Smart people either spend less time worrying about AI killing us and focus on learning how to harness the power of AI, or they focus on scaring the hell out of everyone so that they can sell books, seminars and end of the world snake oil.
And quantum computing is like, "ooh, I have a beer, hold this as well" lol
"the worst spaghetti code he's ever seen" and we closed down our country based on that?
There are no good pandemic models because the input data is not reliable.
Making the noises of someone using sophisticated moral reasoning and having your interventions be guided by such reasoning are very different things
We Self Titanic
AI speeds that up FAST
so we speed up AI