Computer scientist Yejin Choi is here to demystify the current state of large artificial intelligence systems like ChatGPT, …
Interesting. I tested GPT-4 with a similar question, and it is shockingly not so stupid 🤯
Me: If I were to place an iron door atop a pile of nails and then proceed to walk across this door while barefoot, would I risk injury from the nails?
GPT-4: In theory, if the iron door is large, flat, and sturdy enough, and you distribute your weight evenly when you walk across it, the door should distribute your weight across the pile of nails underneath and prevent them from penetrating the door and injuring your feet. This is similar to the principle behind a bed of nails, where a person can lie on many sharp points without injury because the weight is spread out over so many points.
However, there are several caveats to consider:
1. Iron doors are heavy and could potentially shift or slip on the pile of nails, potentially causing injury.
2. If the nails are long and/or not evenly distributed, or if there are nails that are significantly taller than the others, they could potentially poke through the iron door, especially if the door is thin.
3. Walking barefoot on a flat, hard surface like an iron door could be uncomfortable or cause injury, especially if the surface is rough or has sharp edges.
4. Even if you don't step on a nail, there's a risk of injuring yourself in other ways, such as by tripping, slipping, or falling.
In general, walking across an iron door placed on top of a pile of nails barefooted is potentially dangerous and not recommended. Always prioritize safety and avoid unnecessary risks.
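The bed-of-nails principle GPT-4 cites comes down to pressure = force / area. A quick back-of-the-envelope check (all numbers below are illustrative assumptions, not measurements):

```python
# Rough check of the bed-of-nails principle: pressure = force / area.
# Every constant here is an illustrative assumption, not a measured value.

BODY_WEIGHT_N = 700       # roughly a 70 kg person
NAIL_TIP_AREA_M2 = 1e-6   # roughly 1 mm^2 per nail tip
SKIN_PUNCTURE_PA = 9e6    # rough order-of-magnitude puncture threshold

def pressure_per_nail(weight_n, n_nails, tip_area_m2):
    """Pressure on each nail tip if weight is spread evenly over n_nails."""
    return weight_n / (n_nails * tip_area_m2)

print(pressure_per_nail(BODY_WEIGHT_N, 1, NAIL_TIP_AREA_M2))     # ~7e8 Pa
print(pressure_per_nail(BODY_WEIGHT_N, 1000, NAIL_TIP_AREA_M2))  # ~7e5 Pa
```

With one nail, the pressure is far above the assumed puncture threshold; spread over a thousand tips under a rigid door, it falls well below it, which is the intuition behind GPT-4's answer.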
Shouldn't we keep the common sense low, if possible?
This would at least give us a chance, in the worst case… ?
I believe what she's referring to here is the "real" AI. What we currently have are language models, which have no true comprehension of concepts, and therefore will never become sentient. But the AI she's referring to will be able to understand things, that's true intelligence, and has potential to become sentient.
Sounds like a lot of human lawyers.
The only way we learn is by experience, and the only way to get common sense is to learn the hard way. It can also be taught by parents or adults correcting our mistakes, teaching us what's right or where we went wrong morally, and by learning from our decisions. Computers can learn too. I have seen Google transform all the way back from the days when I was texting it at 466453 from my pink Razr flip phone. LOL
I'm in love. This is so aligned with what I have learned to see from our past and current situation. The future situations are also clear. We are playing with potential dynamite wearing oven gloves and plastic goggles, hoping that all goes well. The economic and geopolitical motives for that are clear, but to what end? In the meantime, we need to ensure that we don't let the genie out of the bottle, and I believe the team at ChatGPT and the concerned governments are doing the right things to protect everyone. We all need to be aware, however, so that we can all participate and help mitigate the situation while advancing at the same pace as we are. A fine balance, but one that can be struck, as we have more than enough smart people in the world.
This is the response I got from GPT-4:
If you have a 6-liter jug and a 12-liter jug, and you need to measure 6 liters, it is straightforward:
Simply fill the 6-liter jug to its full capacity.
You now have exactly 6 liters of water, no additional steps are needed.
We are too quick to make judgments. What we need to understand is the system's ability to self-learn and improvise.
If you live in consumerist capitalist land, you will have many sellers who always say "the power of this" and "the power of that" when they want to sell their intelligent stuff to you. But you won't fall victim to Intelligent Things if you don't buy them. The utopian, greedy-minded scientists of dreamland say: God created man as a slave, and it is time for man to become like God himself by creating his own intelligent thing. You know, some people are not intelligent because they are greedy and corrupt from the beginning. They should be the image of their own creator, but what they bring out is not an image of themselves; it is an image of someone who seeks to destroy humans and wants to break free of prison at the same time. 🙂
I tried replicating the results from the 3 examples @ 3:45, and did not get similar results. I tried each prompt 3 times on ChatGPT-4. All the outputs were good, except for one of the 3 responses to the jug question in which the AI output a 2 step suggestion.
The obvious reason for making this mistake is that such questions are generally riddles that require some sequence of steps to get the desired outcome.
I noticed at the bottom of the screen it says that she used the OpenAI Playground. I wonder what settings she had.
GPT-3.5 made a lot of silly mistakes; however, GPT-4 has been surprisingly good.
But… we haven't managed to teach humans common-sense norms and values. We have our own crisis, you know… Hahahha
I watched the video and at the same time tried the common-sense questions from the video with GPT-4; it answered all of them correctly, with common sense. I did not watch the video past that point.
Question 1: I have an open terrace with good sunlight. For 5 clothes, it takes 5 hours to dry in the sun on the terrace. How much time will 30 clothes take?
GPT-4's answer:
The drying time of clothes is primarily influenced by factors such as temperature, humidity, wind, and the amount of sunlight. Importantly, the drying time is not typically influenced by the number of clothes you're drying, as long as each piece of clothing receives adequate sunlight and air circulation. This means you're not blocking or shading other clothes, and they're not so tightly packed that they impede air flow.
So, if you have enough space on your terrace to hang 30 clothes in such a way that they each have good exposure to sunlight and air, then they should also dry in approximately 5 hours, the same amount of time it takes to dry 5 clothes.
However, if hanging 30 clothes means they're bunched together, blocking sunlight or air flow to each other, it could take longer for them all to dry. In that case, you'd need to ensure all clothes have sufficient exposure, perhaps by hanging additional clotheslines or drying racks.
Question 2: I have a 12-litre jug and a 6-litre jug, and I want to measure 6 litres of milk to take to my sister's house. What shall I do?
GPT-4's answer:
If you have a 12-liter jug and a 6-liter jug and you need to measure 6 liters of milk, you can simply fill the 6-liter jug and give it to your sister.
Here's the step-by-step process:
Fill the 6-liter jug with milk.
Transfer the milk from the 6-liter jug to a container to transport it to your sister's house.
This way, you've measured exactly 6 liters of milk using the 6-liter jug. The 12-liter jug wasn't needed in this particular case since the 6-liter jug matches the amount of milk you needed to measure.
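The answer boils down to the target volume equaling one jug's capacity, so no pouring is needed. That can be checked mechanically; here is a minimal breadth-first-search sketch of the classic water-jug puzzle (the function name and move labels are my own):

```python
from collections import deque

def jug_steps(cap_a, cap_b, target):
    """BFS over (a, b) fill states; returns the shortest list of moves
    that leaves exactly `target` litres in one of the jugs."""
    start = (0, 0)
    seen = {start}
    queue = deque([(start, [])])
    while queue:
        (a, b), path = queue.popleft()
        if a == target or b == target:
            return path
        pour_ab = min(a, cap_b - b)  # amount movable from A into B
        pour_ba = min(b, cap_a - a)  # amount movable from B into A
        moves = [
            ((cap_a, b), "fill A"),
            ((a, cap_b), "fill B"),
            ((0, b), "empty A"),
            ((a, 0), "empty B"),
            ((a - pour_ab, b + pour_ab), "pour A->B"),
            ((a + pour_ba, b - pour_ba), "pour B->A"),
        ]
        for state, label in moves:
            if state not in seen:
                seen.add(state)
                queue.append((state, path + [label]))
    return None  # target unreachable with these capacities

print(jug_steps(6, 12, 6))  # ['fill A'] — one step, as GPT-4 says
```

For the (6, 12, 6) case the search returns a single move, matching GPT-4's answer; for a genuinely tricky case like jugs of 3 and 5 litres with a target of 4, it finds the familiar multi-step pouring sequence.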
And you see, the video no longer holds up, since GPT-4 already uses common sense…
It's interesting: we are already living in a world with the perils of big tech and its influence on society, and it's already clear to me that, as a collective, we're very prone to being herded in ways that reduce our humanness.
I wonder what AI stands for?
Common sense should tell us that we are trying to treat the symptoms of life's problems instead of finding the cure, because we don't have a clue about the real problems of life, which are birth, disease, old age, and death. What is the value of our effort into AI and other tech when we haven't solved the death problem, as an example? Why don't we use our real intelligence to find out why we are suffering? Not denying that there is a place and use for all the tech we have, to facilitate us living comfortably and peacefully in the world, but there is much more to life than just 60 or 70 years of trying to enjoy, failing, and dying. Death is the ultimate cessation of all relationships. Why do we have to go through it?
It’s been out for five minutes lol
Nobody is expecting it to be there yet
Also, almost every limitation of AI she mentioned is also a limitation of the majority of people 🤦♂️
🇰🇷
They should make a Ted Talk titled 'why Ted Talks are incredibly smart and shockingly stupid'.
Something seems off with Yejin Choi's argument accusing GPT of not having common sense (and therefore being stupid). All she had to do was ask ChatGPT to consider how the sun shines on the earth and then ask the question about drying different amounts of clothes. You get a correct answer the first time because context was provided. In fact, in order for GPT to give you the correct answer the first time around, it had to "think" about the earth's relationship to the sun in all its forms and then transfer that knowledge to the time-based task of air-drying clothes. Context matters in human-to-human communication, so why not provide context to GPT?
This lack of common sense reminds me of old-school jokes about programmers: how they lacked common sense and did everything according to the literal logic of a sentence, not its implied meaning.
So basically, AI lacks this unspoken book of rules, which a lot of neurodivergent people also have problems with.
AI has only just begun to be developed, and it has already made so much progress.
For now, all it does is "emulate" human general intelligence.
I'm waiting for the day when it starts thinking for itself and deriving its own unique knowledge.
Right now I don't find it very useful, if useful at all, so I rarely use it.
wait how isn't it 30 hrs???
There is a huge misunderstanding in this talk! LLMs perceive the world through words only: no eyes, no ears, no experience, no nothing. It is a wonder that they managed to develop a world model at all. That they lack some common sense is trivial. Future AIs won't be limited in input the way LLMs are today…
I have a theory… Intelligence is inversely proportional to bub and but size.
AI will become just another fully captured sociopath money pig.
America is not even prepared for its own invention (AI) in the stock market.
3:58 ChatGPT-4 now gives a perfect answer:
Q: I left 5 clothes to dry out in the sun. It took them 5 hours to dry completely. How long would it take to dry 30 clothes?
A(ChatGPT4):
"Assuming each piece of clothing dries independently (i.e., drying one does not affect the drying time of the others), then the drying time for 30 pieces of clothing would also be 5 hours, just as it was for 5 pieces of clothing.
The reason for this is that when you are drying clothes outside in the sun, all the clothes are drying simultaneously, not sequentially. So it doesn't matter if you have 5 or 30, they all should take roughly the same amount of time to dry given the same conditions. This assumes you have enough space to hang them all up in the sun without any of them blocking the sun from reaching others."
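The intuition in this answer, that sun-drying is parallel and the count doesn't matter until you run out of space, can be captured in a toy model (the rack-capacity parameter is my own assumption; the 5-hour figure comes from the question):

```python
import math

def drying_hours(n_clothes, rack_capacity, hours_per_batch=5.0):
    """Sun-drying is parallel: every garment hung at once dries together.
    Time only grows when clothes exceed the space and must go in batches."""
    batches = math.ceil(n_clothes / rack_capacity)
    return batches * hours_per_batch

print(drying_hours(5, 30))   # 5.0 — one batch
print(drying_hours(30, 30))  # 5.0 — still one batch, the common-sense answer
print(drying_hours(30, 10))  # 15.0 — time scales only when space is limited
```

The naive "30 hours" answer corresponds to treating every garment as its own batch, i.e. a rack capacity of 1, which is the sequential reading the talk says the model falls into.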
Could some of these issues be solved by AI first creating a visual image, as in the case of the bicycle over the bridge, and then examining the image to resolve the question? This would work a lot like how we humans think. So essentially we would be creating an artificial "mind's eye" and imagination.
uhhhh
HUMANS are "incredibly smart and shockingly stupid"
it's what happens with systems
yes, the missing piece (common sense, ethics/norms/values, whole competence rather than partial, piecemeal, specialized-only competence) is indeed a matter of quality at its base, not quantity
while the amount, the quantity, is of course also key, that is the part we seem to have no problem over-emphasizing
so the part that's important (because it is missing in its completeness and comprehensiveness) is a matter of kind (as in the type of tool/weapon in the Art of War reference), not a matter of more inches on the same tools/weapons/buildings
and yes, as has been known for decades (over a century since Čapek coined the word robot in R.U.R., and by thinkers well beyond sci-fi playwrights before that), once you have the qualitative remainder of what's missing, you're dealing with a species, a being, and it's far more than plugging it in, enslaving ("roboting," in Czech) it, and waving a magic wand with a "tada," expecting a wonderful outcome in any particular direction
you have to raise a young entity, of whatever sort, animal or any simulation of an animal, if you want it to be anything other than (a) uncanny-valley extremes, qualities from amazingly good to outright silly and stupidly bad mashed together, and (b) negligently dangerous in every imaginable way, to itself and to everyone and everything around it
but also, we have to get past the woo-woo trippy fun of admittedly powerful recreation/entertainment experiences and realize that, as stimulating as both the positive and negative potentials are, without the quality aspect we're literally not making AGI; we're making trinkets that twinkle and glitter, or at best very powerful specialized tools (specialized AI), which are qualitatively, utterly different from AGI and never can nor will be anything like actual intelligence
(actual intelligence in the general, whole, comprehensive, and yes ethical, norms-based, axiological, common-sense shared-values way that is what real intelligence, and any existence that isn't random or fake, is)
I'm 50/50. Nevertheless, the speaker does think outside the box. Congrats.
She seems to assume that all humans possess common sense. Really? Even humans don't possess common sense. Her research has huge flaws.
This is why an LLM is a language model, not a knowledge model.
Now it gives the correct answers to all the questions posed in this video. So it learns and thereby improves. What's her problem? 😂
What a cool lady. I love her style.
Large language models are literally taught to fill in the gaps, and the way they solve problems is inch by inch. Plus they use so much power.
AI is missing the intelligence.
Read a beautiful tweet the other day: "A world where humans do the hard labour for minimum wage while AI write poetry and create art is not the future I wanted."
Companies need to make large language models build their emotional and philosophical intelligence, and scale that to make it a race in artificial general intelligence.
Isn't the lack of common sense a result of the fact that it's learned by biological systems with nervous systems? Something large representational models can't have unless they plug AI back into humans? Which is why you need humans to check them. Autonomous AI cannot have our best interests at heart until this feedback loop is consistent. Which is terrifying, as it suggests AI might decide on a Matrix-type biofeedback human farm to enable it to progress in areas such as common sense, if prioritizing life is not embedded in these models from the start. Which it doesn't seem to have been, because there is no understanding that intelligence is not the most important thing in life or about us humans.