Well, first we need to go over the things you got wrong:
* OpenAI isn't working on AGI. In fact, no one who works at OpenAI even knows what AGI is. So, any comments they make about AGI or ASI have very little chance of being correct.
* You keep conflating AGI with AI and this is incorrect. AGI and ASI are the same. In other words, once you have a working AGI system, you can continue developing it to an ASI level. However, AI is not related to AGI. All AI systems fall under computational theory whereas AGI does not. You cannot develop an AI system into AGI, regardless of how fast it runs, how much memory it has, or how many lines of code you write. AGI is not an advanced version of AI.
* AGI/ASI is not beyond "our" current understanding. The leading research in this area remains confidential. So, considerably more is known than you are aware of.
* A single AGI system would not be better than every human at every single task. That is incorrect. Even an ASI system would not be better than every human at every task. An AGI system only needs to be no worse than an average human at normal tasks. So, a given AGI system may not be good at doing calculus or playing chess (most people aren't) but would have no trouble making a cup of coffee. An ASI system only needs to be no worse than average and be exceptional beyond human level in some area.
* An AGI/ASI system could not "rapidly improve itself". This idea is nonsense. Improvement would be by the same process you use: studying and practicing. So, it is not a real possibility.
* Large language models are not related to AGI. You can't build an AGI system from an LLM no matter how large it gets. LLMs are not smart. They only do associative, symbolic processing rather than abstractive processing which is what you need for AGI. Secondly, there are no emergent properties of LLMs. The emergent property myth is commonly used with the hope that a fortunate accident will somehow lead to AGI. This cannot happen.
* We are not really close to AGI. The theoretical work is still incomplete and it would be impossible to design or build an AGI system without the completed theory.
* There is zero possibility of AI systems exceeding expert level in any domain. AI systems lack basic understanding and common sense. Rats are smarter than the smartest AI system.
* There is no exponential growth. This myth has been pushed by people like Ray Kurzweil but he has never worked on AGI theory and knows nothing about it. His concepts come from a naive projection of AI which is unrelated.
* AGI theory will indeed have a dramatic effect. The boost to the US economy alone is $5 trillion/year not including new technologies or processes. This is simply due to increases in efficiency.
* Is there an existential risk? No one has yet come up with a means for this to happen. Every scenario I've heard of for how an ASI could pose a risk is based on either a misunderstanding of basic science and technology or profound ignorance about ASI itself.
4:00 Sam Altman is not involved with AGI research and has very little understanding of it.
6:00 The scenario described is no different than it was 100 years ago. This is not new and is unrelated to AGI. In fact, this is backwards since AGI theory would help detect and correct unreliable information.
8:27 Again, large language models have nothing to do with AGI or ASI. These are not related.
GPT is not particularly powerful unless accuracy is not important.
Essentially
Multinational corporations have been working with malevolent global political forces employing AI to cause divisive shifts in the zeitgeist for a while now. It's what was behind the successful Russian shill social media campaign that knew exactly what incendiary narratives to spread to destroy American solidarity. If it's regulated, the intention would be to limit its use among us, not them.
AGI is not yet visualizing properly. SI is therefore fiction.
AI can also generate 40000 vaccines
"Surprises with statement"? Naaa. I think that now that they own and control the technology, the best strategy to keep others from competing against them is to ask the government to create regulations. Maybe I am wrong.
Say essentially one more time…
Impossible to watch the video with these senseless subtitles. Please tell your AI to choose another style.
Twitter pseudo-experts were wrong. AI is creative. It produces 40k NEW chemical weapons.
As we increasingly rely on Artificial Intelligence to make our lives easier, there are concerns about its potential negative impact on our daily lives. In "A Brief Guide of 12 Strategies to Minimize the Adverse Impact of Artificial Intelligence on Your Daily Life," you'll discover practical and actionable tips to help you mitigate the risks of AI and use it to your advantage. From protecting your privacy to understanding the biases inherent in AI algorithms, this guide will empower you to take control of your relationship with technology and make informed decisions about how you interact with it.
Book recommendation: "A Brief Guide of 12 Strategies to Minimize the Adverse Impact of Artificial Intelligence on Your Daily Life."
Self-improvement leads to a peaceful disposition… so
How do you regulate AI for countries like North Korea, Iran, Russia etc.? In time they will have their own AGI or superintelligent AI; that would be the next set of nukes. And because of that, no world power will stop training such models, openly or covertly. Unfortunately, this can't be stopped. We will soon have a superintelligent AI, if not from the US, then from other countries!
The AI doesn't become evil by itself; it's the human developers causing the AI to become evil. So we need to be concerned about the humans developing the AI: are they trustworthy and benevolent, or nefarious with dark agendas for AI!?
Open-sourced personal AIs in the hands of terrorist organizations such as ISIS or Al-Shabaab will have severe consequences in the next 20 years.
In some secretive lab, OpenAI had created AGI, which in turn, in minutes or hours, turned into ASI. Now the boys shit their pants. Oopsie.
How will you regulate rogue nations, corporations, organisations, or people? If they are not regulated, then the so-called good nations/orgs/people also have to deregulate themselves. We see the same theme with nuclear energy and biological weapons.
I'm surprised this didn't shake up the industry.
I really wish YT had a block function
Not going to watch but let me guess. They are taking the Elon route and "warning" everybody about the dangers of AI unless government steps in and implements onerous regulations to keep out potential competitors, you know, because they care about us so much. Am I close?
I don't get it. He keeps sounding the alarm, fearmongering about AI, yet at the same time he still is continuing development for his AI tech. You'd think if he was so worried, he'd just halt development.
When the terminators are taking over this whole planet, all of these morons will be begging for mercy, but there won't be any given from the metal creatures. It will be pure extermination, and it will be well earned for those who contributed to the creation of this nonsense.
Very important indeed.