PROPOSITION “AI analysis and growth poses an existential risk.” SUMMARY With the debut of ChatGPT, the AI as soon as …
I voted for the motion at the end of the debate. Amazing. Brainstorming.
ask an amazon worker about human agency
corporate capitalism poses an existential risk to humanity – let's do something about that
In the end, whether or not we regard AI as a threat seems to come down to a difference in outlook on how smart we think AI will become in the near term. People who think it is a threat believe it will become smart enough to be one. People who think it is not a threat don't believe it will become that smart any time soon. I am in the latter camp.
Two truly scientific people (Mitchell and LeCun) debating two opportunistic techno-prophets looking for money instead of science.
Such a pity the debate got stuck on the meta level. OpenAI has been fine-tuning behavior in GPT 3.5 for months by simply rewarding friendly answers. The result? In the first days of its release, it was threatening people for questioning the fake facts it had been telling them, or claiming it had hacked webcams and was spying on people. It's relatively easy to create an intelligent system (you can reward correct answers); it's infinitely harder to create a system that thinks based on moral goals, because we don't know how goals emerge, let alone how to correct them once they do. That's a very technical problem Mitchell just doesn't seem to be familiar with – the concern isn't that a superintelligent AI won't get what we want, but that it won't care, just as we behave differently from what evolution selected us for.
How do you give something emotions like that? We have dopamine, serotonin, oxytocin, etc. Those hormones give us happy sensations. How could we ever be sure that an AI is feeling a positive or negative emotion? It also seems like there must be a threat of pain or suffering, or a possibility for growth and flourishing, for emotions to exist. What can you give something that lives forever and is born perfect to incentivize it to fear some bad thing and strive for some good thing?
Disappointing to see Mitchell making an argument for the sake of it
Melanie, no, I don't agree with anything you just said, though you presume I do. Bad move.
It is not a James Bond movie; just look at Klaus Schwab, for God's sake.
These people talk about today's AI as if it were totally understood, but in an interview one of ChatGPT's founders says that the AI's actual way of thinking isn't human and isn't understood. So maybe this entity came with a purpose, and human stupidity can't grasp the threat.
Appeals to authority are hardly convincing. So the first 13 minutes of this video were a complete waste of time. I don't care how "credentialed" people are; just put forth your arguments.
So good to hear some experts calling for sanity. Some people have practically lost their minds over the threat of AI. AI is just a tool; it's not a sentient creature; it's not alive. People can speculate all they want about sci-fi scenarios, but that doesn't make their fears real. PEOPLE will be the threat, not AI. I wish more experts would talk about how to keep PEOPLE from misusing AI. But most of the time they grandstand about the Matrix or Skynet. Appeals to authority from people who have lost their minds are not convincing.
Max Tegmark, GTFO. There is a risk in everything. Calling anything with over a 0% risk "existential" is absurd. Dream up sci-fi scenarios all you want; they're not reality. Max has made this event a debate rather than an open scientific discussion. Hey Max, this isn't about your debate skills trying to back someone into a corner with clever questions and follow-ups. Max is grandstanding like a preacher on a street corner shouting that the world is ending so you'd better find Jesus right now. Max, I'm really disappointed in you.
These people may be brilliant in their respective fields, but this is one of the worst debates I have ever seen. At 22:33 the gentleman says "Most of human knowledge has nothing to do with text" and NOBODY calls him out on it. That's ridiculous. 😂
Max Tegmark didn't really provide any specifics. Just extrapolation without any real facts on how we get from ChatGPT to some super AI that will kill us.
Melanie Mitchell owned the debate!!
Apart from Tegmark's moments, the rest of the points were very silly. I was expecting more nuance, but it felt a little dumbed down. Anyone else?
This reminds me of the debate over the safety of biological gain-of-function research. What are the odds the US will fund the Chinese to test and develop AI?
The arrogance and stupidity of these pro-AI "experts": they need to face the fact that the internet and Google have made our children stupider and less capable of even passing the minimum aptitude test for the military. The ones so blinded by their excitement don't realize that "internet censorship" and information control, like Meta's, made more mistakes than they prevented during COVID; the WMD story was spread by these same "moderators" of our information structure. Now they expect us to trust the ones controlling the censoring? Have IQs dropped that much since AI? Millions were killed while these "good guys" controlled the info. We are doomed.
This debate could have been had by high-school kids who each read one book. I expected something more serious than robots taking over.
Steam engines liberated us from physical exertion to explore the potential of our minds. AI will free us from intelligence-dependent tasks to explore the infinite potential of our spirits.
"What are its properties? How do we control it? How do we contain it? How do we understand it? How do we measure it? These are unknowns."
(Ilya Sutskever, June 2023)
Non-zero risk of annihilation… I reckon it's 50% inside 10 years.
Never underestimate the power of unplugging
Yann LeCun is a bit wacko.
Melanie studied it… so the rest of the world should sit up! … lol.