I'm not sure who to ask about AI. I'm trying to build an understanding of whether it's even possible. I've been asking thermal camera operators about how strong the detection can be on shipping containers. There would only be minor differences in surface temperature between containers with people in them and without, side by side. Would AI be able to spot those minor temperature differences and flag them for inspection? Maybe a small camera mounted on the crane, or a drone running a detection program, could pick up small thermal differences?
We had calculators when in school, but weren't allowed to use them for class. Now it's the opposite.
"The best swordsman in the world doesn't fear the second-best swordsman; he fears the worst, because he has no idea what that idiot might do."
*Looks in comments* Oh, that's already been said? Well, does that make me the worst commenter or the second best?
Perhaps we shouldn't use AI for information or, especially, advice. But ChatGPT is really good for helping write better text and for inspiration when solving coding problems.
Give it 50 to 100 years and we might create the Geth.
Y’know, this kinda reminds me of learning math in school. You can get really good at solving algebra for instance, by using one of many methods you were taught in class. However, we are only taught how to solve a problem, rather than the conceptual idea of what algebra is. Therefore, I can solve an average algebra problem pretty well, but when I get a test question designed for a higher understanding of algebra, I cannot solve it and get lost. Usually these test questions are kind of similar to the Go AI exploits, they shift the game’s structure while still fitting within the rules. These situations require an understanding of the concept (algebra or Go) to see the bigger picture and be able to manipulate the problem in a similar way to reach a solution. Obviously, you fail if this wasn’t taught by someone else. And when you mentioned that the common solution to this, is to feed the AI more data, it kind of reminded me how my teacher would give me more worksheets when I failed. I would get better at those average math problems no doubt, but I’d still fail the higher understanding problems if I were to retake the test. It’s the same thing with AI, we never gave them the resources to understand the concept of what we want them to do. That’s why they fail at the “higher understanding” situations, like the average student in a math test.
The limits of stochastic parrots should seem obvious.
Ethicists: There are NO differences between the sexes and races.
AI: But
Ethicists: THERE ARE NO DIFFERENCES!
AI: Ok (proceeds to remove sex organs during automated surgeries, prescribe the wrong medicines based on sex and race, etc.)
We should be very careful about convenient and/or happy lies when teaching reality to an AI.
During the last couple of days I’ve seen several of your great videos. There’s one thing that’s bothering me… every single one of the videos I’ve watched has exactly 46 likes!!! What’s going on?!
Lee Sedol is kind of a beeoch for retiring after his defeat. I guess it's only worth playing if the other guy loses. It's like the people who rail against Christianity online but refuse to mention Islam. One who only plays when they're guaranteed to win is essentially a bully.
demolished? 4-1 m8
AI: fear me, fleshy meatbags! For I am great and terrible!
Me, with a big ol’ magnet:
You look and sound gay
Try asking ChatGPT, or Bard for that matter, this question: "Can a father marry a mother?" The answer will clearly illustrate what Kyle is talking about.
Hey, you’re good at this video making thing. Keep it up if you can.
Imagine an AI that murders a human, replaces them, and then lives their life perfectly without anyone knowing or realizing.
Thor doing YouTube these days?
The problem with AI is that AI DOESN'T EXIST yet 😂😂😂😂
Of course they don't know. If a fly evades your swatting, you don't assume it understands what a human is, do you?
We've done nothing? ARE YOU NOT EDUTAINED? :raspberry:
Elo ratings are named after a professor called Elo – it isn't an acronym and is pronounced "Ell-oh".
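The Elo system that comment refers to can be sketched with the textbook formulas (this is the standard expected-score and update rule, not anything from the video; the K-factor of 32 is just a common default, not a claim about what Go servers use):

```python
# Standard Elo rating formulas (textbook version; K=32 is an assumption).

def expected_score(r_a: float, r_b: float) -> float:
    """Expected score of player A against player B (between 0 and 1)."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def update(r_a: float, r_b: float, score_a: float, k: float = 32.0):
    """Return both players' updated ratings after one game.

    score_a is 1.0 for a win by A, 0.5 for a draw, 0.0 for a loss.
    """
    e_a = expected_score(r_a, r_b)
    new_a = r_a + k * (score_a - e_a)
    new_b = r_b + k * ((1.0 - score_a) - (1.0 - e_a))
    return new_a, new_b
```

For two equally rated 1500 players, a win moves the winner to 1516 and the loser to 1484; the bigger the rating gap, the less a "expected" win moves the numbers.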
I remember winning a chess game against my then-girlfriend's father, who was a very smart classical violinist.
She admitted later that year that he'd told her he didn't know if I was really intellectual or an idiot who could barely play.
Spoiler alert… it was the latter.
How much of our concern is because we don't understand why it does what it does, and how much is because we don't understand it and feel like we should? I've met well-paid professionals who would fall prey to their field's equivalent of the sandwich problem; they've memorized enough to get by, but have no capacity to synthesize anything new. More broadly, humans have lots of "exploits" – look at any cult, or pyramid scheme, or mob scene. Humans make easily avoidable mistakes all the time, and yet we let us be doctors and lawyers and surgeons and pilots and the person allowed to authorize a nuclear launch.
We can only begin to explain human behavior in the most superficial, imprecise, and broad terms. Why would we expect to do better with an AI running on a system we based on how the human brain functions?
And if we can't allow AIs with these flaws to do important things, then how can we let people do them? Sure, we can say people should know better, but we also know that in practice they don't. When a person makes a mistake, we don't ask how many systems we're willing to entrust to these mysterious minds we don't understand and that can fall prey to such obvious exploits.
shrug
I'm not saying we should put ChatGPT in charge of medicine, or let it replace lawyers, or whatever. And I'm certainly not saying it's sentient or an AGI. But I'm having a harder and harder time hearing the arguments as to why we shouldn't and figuring out how they don't apply to humans just as well.
A lot of AI training data is getting poisoned by AI-generated data.
AI is like a CEO who can grow a company better than anyone else and can manage hordes of employees with efficiency we’ve never seen before, but when asked to hammer a nail, grabs a screwdriver.
0:10 even older than the royal game of ur?
By the logic of this video, we cannot trust ML systems because we cannot comprehend their decision making. So let me remind you that the most complex intelligent system known to man is the human brain. And I assure you it fails in ways WAY more spectacular and strange than any AI.
Sounds like an overfitting problem
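A minimal sketch of what that comment means by overfitting (my own toy example, not from the video): a 1-nearest-neighbour "model" memorizes its training set perfectly, so training accuracy is 100%, yet the memorized noise makes it fail on unseen inputs — much like a Go engine that has memorized winning positions without understanding the game.

```python
# Toy overfitting demo: 1-nearest-neighbour pure memorization.

def predict_1nn(train, x):
    """Return the label of the training point closest to x."""
    return min(train, key=lambda pair: abs(pair[0] - x))[1]

# True rule: label is 1 if x >= 5 else 0, but (4, 1) is mislabeled noise.
train = [(1, 0), (2, 0), (4, 1), (6, 1), (8, 1)]

# Every training point is its own nearest neighbour -> perfect training score.
train_acc = sum(predict_1nn(train, x) == y for x, y in train) / len(train)

# On fresh inputs, the memorized noise at x=4 drags nearby predictions wrong.
test = [(3.5, 0), (5, 1), (7, 1)]
test_acc = sum(predict_1nn(train, x) == y for x, y in test) / len(test)
```

Training accuracy comes out at 1.0 while test accuracy is lower — the gap between the two is the overfitting.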
I remember sitting in a class on programming in 2016 when, for some reason, the professor deviated from the thread of the lecture and started talking about AI and neural networks. He ended up saying exactly the same thing. He was so accurate that I still remember some of his words almost verbatim.
"The main problem with artificial neural networks, and neural networks in general, is that we don't know how they work. We have no clue when they will misbehave. For example, yesterday a son killed his mother and we have no clue how that happened (he was referring to events from the news the day before). The same goes for the artificial models we are experimenting with. As a scientist, I don't like that! However, the best we can do is research more until we do."
Years later I started learning a bit more about machine learning and AI, just for fun. The situation is still the same: we have no clue how they really work. Of course, we have a full understanding of how to train AI, what functions to use for the "neurons", how to arrange them, etc. All the mathematical background that makes AI work is understood, but when we combine all of that into a system with emergent behaviour, the result is holistically incomprehensible to us. That, right there, is a fundamental flaw of AI, but also a great opportunity for research.
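The point in that comment can be made concrete with a tiny sketch (my own, with made-up weights): every piece of a neural network is just a weighted sum followed by a simple nonlinearity, fully understood in isolation, yet nothing about the individual numbers tells you *why* the composed function behaves the way it does — and a real trained network has millions of them.

```python
# Minimal forward pass of a 2-input, 2-hidden, 1-output network.
# Every operation is elementary math; the opacity is in the composition.
import math

def sigmoid(z: float) -> float:
    """Standard logistic nonlinearity, squashing any real number into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def layer(inputs, weights, biases):
    """One fully connected layer: weighted sums, then the nonlinearity."""
    return [sigmoid(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

# Arbitrary illustrative weights (a trained network learns these, we can't
# read meaning off them directly).
hidden = layer([0.5, -1.2], [[0.8, -0.4], [-0.3, 0.9]], [0.1, -0.2])
output = layer(hidden, [[1.5, -2.0]], [0.05])
```

Each call is transparent; the question "why did `output` end up at this value for this input" has no short answer once the layers stack up, which is the interpretability problem the professor was describing.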
That's called ignoring the meta against an opponent that has to play it, lol. It's been a thing in multiplayer games since forever.
I mean, if they did understand what they were doing, they'd be general intelligences. As long as they're tested correctly, the issues that arise from this can be mitigated before they're deployed. People who develop AI are aware of these problems.
Too much hand movement; it feels forced, like that one Indian dude, MrBoss.
obviously
In other words, we're still at the stage of hyping up systems which are in reality Artificial Idiots. The ELIZA effect.
What if this video was completely made by an AI?