Marc Andreessen is the co-creator of Mosaic, co-founder of Netscape, and co-founder of the venture capital firm Andreessen …
Here are the timestamps. Please check out our sponsors to support this podcast.
Transcript: https://lexfridman.com/marc-andreessen-transcript
0:00 – Introduction & sponsor mentions:
– InsideTracker: https://insidetracker.com/lex to get 20% off
– ExpressVPN: https://expressvpn.com/lexpod to get 3 months free
– AG1: https://drinkag1.com/lex to get 1 year of Vitamin D and 5 free travel packs
1:06 – Google Search
8:54 – LLM training
21:25 – Truth
27:38 – Journalism
37:29 – AI startups
42:51 – Future of browsers
49:15 – History of browsers
55:16 – Steve Jobs
1:09:50 – Software engineering
1:17:05 – JavaScript
1:21:23 – Netscape
1:26:27 – Why AI will save the world
1:34:26 – Dangers of AI
2:04:46 – Nuclear energy
2:16:43 – Misinformation
2:32:02 – AI and the economy
2:38:10 – China
2:42:22 – Evolution of technology
2:51:41 – How to learn
2:59:50 – Advice for young people
3:02:40 – Balance and happiness
3:09:16 – Meaning of life
AI is not ML unless you assume I cannot exist without L. I don't think I have a preference toward an assumption right now, but I wouldn't suggest that I know that. I realize it makes it harder to define things, but that's kind of the point.
I'm not sure how many people talk about the concern over GPUs, but the only reason I know of that they ended up with hardware proficient at AI workloads is ray-traced lighting and other graphics work built on simple matrix math. Maybe that hardware direction can't be avoided because of its benefits elsewhere, but hardware production restrictions aimed at reducing, or rather slowing, large-model growth might be reasonable.
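The comment above leans on the fact that 3D graphics and neural-network layers both reduce to the same primitive, dense matrix multiplication, which is why graphics hardware turned out to be good at AI. A minimal toy sketch (pure Python, illustrative values only, not any real GPU API):

```python
# Both graphics transforms and ML dense layers are matrix multiplies.
# Naive pure-Python implementation for illustration only.

def matmul(a, b):
    """Dense matrix multiply: (n x m) @ (m x p) -> (n x p)."""
    n, m, p = len(a), len(b), len(b[0])
    return [[sum(a[i][k] * b[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

# Graphics use: rotate the 2D point (1, 0) by 90 degrees.
rotation = [[0, -1],
            [1,  0]]
point = [[1], [0]]              # column vector
print(matmul(rotation, point))  # -> [[0], [1]]

# ML use: one dense layer, y = W x (bias omitted for brevity).
weights = [[0.5, 0.5],
           [1.0, -1.0]]
inputs = [[2.0], [4.0]]
print(matmul(weights, inputs))  # -> [[3.0], [-2.0]]
```

Same kernel, two workloads — which is roughly why hardware built to accelerate one ended up accelerating the other.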
The problem I've seen is that on a per-parameter/input basis, we still don't seem to understand even the simplest models, and that's what gets me. Is it as "hard" to understand as it appears to be, or is it a matter of not enough people caring because it isn't profitable?
Marc is no doubt a knowledgeable dude, but I don't think he understands that of course most models are "wrong"; that's quite literally what empirical science is, and you don't get the privilege of knowing you were wrong until the event you were trying to predict has already happened. It's not even clear what point he is trying to make regarding Covid, as if it didn't affect anyone. I agree with his points regarding eccentric viewpoints, but his just lean in another direction as opposed to a more balanced approach to uncertainty. He is the guy you turn to when shit goes wrong and who says "oops," and then when you ask him for a model to adjust as a result, he doesn't have one, because he is actually just afraid of being wrong, or so it appears. The way he discusses what "science" actually is runs counter to what it actually is, especially with regard to how you act on things you do know versus things you do not know. If from that you concluded that you do not necessarily know how to act simply from information produced by science, you are one step closer to understanding what science even is. Same thing as always, though: it's how people understand and handle uncertainty.
I also don't know why he thinks all models were wrong; the CDC and WHO were saying wildly different things at the time, and if your economy is so fragile and lacking in surplus that an event like that can disrupt it, then maybe you have a very different set of problems on your hands than you think you do. Now he is suggesting models aren't part of science? Marc, the very basis of how the world works in your own head is a model. I also have to question whether this man has ever had a real job in his life, because models run our world. I am perfectly fine with the idea of questioning them and pointing out where they are wrong, but when you do that, if you cannot provide something better, you do have to acknowledge you have reached your own personal limit in what you can provide in that situation.
What bothers me is that Marc does not appear to be more knowledgeable in magnitude than the magnitude of monetary worth attributed to him. It's easier to believe the opposite, because then other people can just solve all the problems, but if anything Covid is a demonstration of just how wrong that is. This isn't his "fault" either, which is what makes it more difficult. This isn't just some redistribution problem either, rather the value just isn't there to begin with.
Marc, in case it wasn't obvious, ML is a model. I don't think your cautionary assertions about models being wrong are much different from the caution expressed over ML. If your whole point is that humans are bad at predicting complex systems, then that's the entire basis of the concern. I don't think data centers should be blown up either (what does that even do? If you want to Dark Age everyone you have to fully commit, lol; otherwise you are just guessing at area/information control), but pretending that regulation is automatically wrong and not nuanced is just lying to ourselves. Society's regulations are part of its culture, and if you want a culture that handles uncertainty better, you need to sit down and do a big ol' think about how exactly something like that would come to be. You need to … model it. Likewise, there is a big difference between regulating the usage of something and outright banning it.
If you can't come up with a way to do that, and money doesn't magically make it happen, then I have to say I'm mostly out of ideas as well; although my guess is that such a trend would start like most trends do: word of mouth. If we spend more time discussing risk and uncertainty than we do discussing what we think we know, maybe that's a step in that direction?
I have to say, despite disagreeing with Marc's opinions, he comes across as someone who is hard not to like, because I feel like his heart's in the right place. But he patently does not understand what science is, seems to understand that we do not understand how models work (which is great), and has appeared to dismiss the alignment problem as a non-issue because the AI told him so, which in the context of the previous statement is deeply confusing.
"If they are smart enough to be scary, how are they not smart enough to be wise?" … so, humans? Except it's not necessarily a human; it's an aberration of combined knowledge that isn't necessarily a summation, subtraction, or derivative … we just don't know. It's like every time he gets close to the point, it just barely eludes him.
Credit where credit is due: Marc knows his nuke history, and I'm sure he fully well knows AI is not that, yet. He seems to want it to fit the same scenario, where demonstrating a problem is what it takes to understand it. I like that outcome; it's the easy one.
Hyper-productivity does not mean you produce anything useful, let alone noticeable.
And a couple of hours ago I was just sitting here thinking HTML is the best engine for graphical user interfaces.
this man and his generation rock!
🎉🎉🎉
Amazing interview. So many great insights about humans, provided with large dose of humour. Thank you for this interview. A true treasure.
Great interview!
The Netscape Dude. WOW.
I have probably said this before (as if anyone gives a shit): the best Lex Fridman interviews are obviously on topics I care about (filter 1), with a speaker that doesn't put me to sleep (filter 2), and are ones that make me think (the only objective). This guy is great. He has a grasp of absolute reality, or consciousness, as in "everything new contains everything old," but he also has an eye on the possible futures.
I feel like Lex is trying to embrace the idea that Capitalism will someday drive a coming Utopia of sorts. Which might be true if your idea of utopia is having a computer for everything, a room to play whatever you want in, in a megastructure where they have to pump in filtered air and water with an EV transit system to your domed agriculture areas where they grow veggies and meat.
And I bet this guy realizes that's where we are going.
The future's so bright you gotta wear shades….. bio suit…..and radiation badge.
Edit: listened longer. I was wrong. This guy seems to think economic growth is the objective or the driver. It certainly is the driver. Then he talks about Prohibition, which made illegal liquor worth more and created a malicious economy (my words), and he doesn't seem to see that addicting the world to money and technology, with every company pushing prices as high as possible, blocking new products, and locking up markets legally, is no different from supporting organized crime and all its nastiness.
I really liked this one. Never listened to him but he is very knowledgeable. I wish he could talk slower though.
I like the quote about media at 2:40
People talk like it was obvious back then that communism didn't work. But back then, there was no reason to be so sure. There was a real question whether or not it was gonna overtake capitalism in output. There was a lot of debate about it, and capitalism was by no means the obvious winner. The Soviets did achieve things, and not just being first in space. There are many things. For instance, the USSR built the world's first nuclear power station to generate electricity for a power grid. OK—I hear you—they did shovel unfathomable human misery into it, but western capitalism was nervous for a while. And remember, of all countries on Earth, the communist USSR was the only other superpower.
When EVERYONE has an AI assistant to prepare for a job interview, the gains wash out. It's like if everyone gets a college degree: it's no longer special. The income gains from having a degree were almost entirely signaling, not about what you learned in the degree. It was for economic sorting, not for training. Now that everyone has a degree, you have to have a PhD, and everyone has to spend longer in schooling for no real income benefit overall. We already train too many scientists. Science is a terrible career. We don't need more people succeeding in college at all. We're already way over-training people for the work they do.
Not impressed with Marc. Way underestimates AI.
I would have liked to hear Marc talk more about the direct impact of corporations rolling out AI to replace human labor, rather than a generalized discussion of how better jobs will inevitably be created. Ultimately, better (i.e., less labor-intensive) jobs will be created, but for fewer people, which has drastic human implications. Much of this feels like apologist arguments from a beneficiary of this tech wealth.
where is that book list coming from?
Now that I know the views of Marc Andreessen, Bill Gates doesn't seem so bad.
Does anyone know what paper Marc refers to at 10:04 ? @lexfridman?
I cannot understand this guy. I'm out.
I think Marc has a Neuralink
Lex, I appreciate you, but do NOT spread misinformation. The g factor (IQ) has some genetic heritability but is mostly nurture (obviously, if one's parents are very intelligent, the environment one grows up in is affected by that as well, etc.).