I suggest an underlying course which constitutes our intelligence as human beings, and argue that our present AI techniques …
Just … wow.
AI today:
if I remove the two black corners how many dominoes does it take without any overhanging pieces?
ChatGPT
If you remove the two black corners from a standard chessboard, you are left with 62 squares (64 total squares minus 2). This poses an interesting problem, as now you have an equal number of black and white squares.
A standard domino will cover one black and one white square. Since the number of black and white squares is no longer equal (after removing the two black corners), it is impossible to cover all the remaining squares with dominoes without any overhanging pieces.
Here's why:
A standard chessboard has 32 black and 32 white squares.
By removing two black corners, you'll have 30 black and 32 white squares.
Since each domino covers one black and one white square, you can't match all the white squares with black ones.
Hence, it's not possible to cover the remaining squares with dominoes without overhanging pieces.
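The parity argument is easy to verify directly. Here is a minimal Python sketch (the coloring convention for the squares is an arbitrary choice made for the example):

```python
# Parity check for the mutilated chessboard: every domino covers one
# black and one white square, so a perfect tiling needs equal counts.
def color(row, col):
    return "black" if (row + col) % 2 == 0 else "white"

squares = {(r, c) for r in range(8) for c in range(8)}
squares -= {(0, 0), (7, 7)}  # remove two same-colored corners (both "black" here)

blacks = sum(1 for r, c in squares if color(r, c) == "black")
whites = len(squares) - blacks
print(blacks, whites)    # 30 32
print(blacks == whites)  # False -> no tiling covers every square exactly
```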
great essay
"there is no inherent meaning in ones and zeroes" – yeah, unlike the gray goo inside our skull, the ordering of our neurons, these things are of course "inherently meaningful" – not.
semantics itself is per (usually used) definition relative: a statements meaning is constituted by its truth-conditions, aka what it actually says about the real world.
no type of matter that exhibits intelligent behavior (a thinking brain for example) will ever tell us anything about the real world on its own, though, as semantics is always dependant on the relation of the encoded information and the outside world.
Brilliant. I have been saying that AI is not actually intelligent and now you have given a really clever way of explaining one of the reasons why. Relevance realisation. (I prefer the word recognition, but anyway.) Great video. Great channel.
I think that meaning and relevance are the key words
We will NOT always hit a brick wall. Adding sensors to neural networks will give them a sense of relevance. And doing so is just one more step among the many steps that are being taken right now. We have no good explanation for how our own self awareness emerges. We are in no position to categorically rule it out for AI systems. Imo the best guess is still that self awareness is an emergent result of complexity, of a system listening to itself and its inner states. Remember: we can choose anything as input that can be digitally expressed, including real time information about the physical world and information about the state of another neural network. People will try the wildest stuff. Your evolution analogy isn't just an analogy. A big part of relevance is rooted in survival, as your fire example perfectly illustrates. As soon as an AI system has a sense of mortality, an understanding of existence vs. non-existence, it will have relevance.
It's also fair to say that humans have a hard time seeing the relevant information when it comes to complex social systems, interactions, politics, etc. And AI is better at detecting relevant connections within large, unstructured datasets.
Get a life, and stop overthinking stuff which just doesn't matter. You must be really boring at dinner parties.
It is said that all the other animals call us "That crazy Ape". When AI can go crazy and still function, it might become the master of Humans.
actually high IQ take for once
I feel like the robot blowing up because of the bomb was a little realistic to how a caveman would react to a bomb: how would they know the bomb is dangerous? It's just an unknown object. I feel like intelligence is adaptability, curiosity, and the ability to learn; that is pretty much what a lot of intelligent animals have. They have fun because a thing interests them, and we get bored because a thing is probably already known and/or uninteresting. Plus, if that robot had a scanner and could learn and adapt, as well as research the objects it's seeing and link up what they are, why they are, and how they are, all of those could decide the outcome. I don't disagree with everything; the way we pick up on relevant information seems like a pretty good skill to have for intelligence, but again it also comes down to adaptability. If that robot could do all of that, it could think through how to eliminate the threat, aka the bomb, which we know is possible because chess AIs can respond to threats to a piece. With all the information it gathered, it could try to "adapt" its way to a solution: in chess, AlphaZero became so good because it played a lot of games against itself and learned (at least I think that's how it happened); it figured out what the threats were and adapted. Obviously, if it ended at adapting it would never learn, so it also needs to remember the information. I don't know how to create any AI, though, so I'm probably wrong.
8:07 It's an Asimov prediction.
You can get a robot stuck in a loop if you give it too much information: even if it's very good at calculating outputs, it will not act, because the number of possibilities is infinite.
That's (in part) why he introduced the Three Laws.
I personally think that this video might apply too much magical thinking to what humans do vs. AI.
When you hear someone say that they are out of gas, then there are many, many contextual clues that allow the listener to make a highly educated guess as to what the person needs.
If I said that to you and I was sitting on a toilet and just farted a few times, the probabilities would shift such that typically low-probability choices might have some level of likelihood now.
Notice that it actually boils down to a quite simple “probability multiplication”.
Man sitting on toilet x just farted a lot x smirk on his face.
When you consult your database of lifetime experiences, you might actually be a bit confused about the answer (your neural network suggests multiple possibilities that compete for “relevance”, which is a simple numerical comparison), but a different meaning for “gas” “lights up” in your own inner neural network as an alternate possibility.
Humans are actually confused quite often. We just get really good at hiding it by staying in realms where we don’t have many competing likelihoods.
If you follow what I just shared down to the end, the suggestion is that “relevance” is simply a multiplicative calculation weighing memory, theoretical frames, possibilities, and other forms of context.
I have seen it many times as a person learning a 2nd language.
It is really easy to confuse someone.
“Effective communication” is having a useful model for people’s neural networks and providing enough context to allow them to rule out the most likely “confounding conclusions”.
(You may have avoided a certain word because you knew it would sound too much like another, perhaps. Or you emphasized a certain word to shift the probabilities in your favor.)
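To make the idea concrete, here is a toy Python sketch of that multiplicative weighing (every weight below is invented purely for illustration, not measured from anything):

```python
# Toy "probability multiplication": each contextual clue multiplicatively
# re-weights the candidate meanings of "gas".
context_weights = {
    "fuel": {"says 'out of gas'": 0.9, "on the toilet": 0.2, "just farted": 0.1},
    "flatulence": {"says 'out of gas'": 0.1, "on the toilet": 0.8, "just farted": 0.9},
}

def relevance(meaning, clues):
    """Multiply in the weight of every observed clue."""
    score = 1.0
    for clue in clues:
        score *= context_weights[meaning][clue]
    return score

scene = ["says 'out of gas'", "on the toilet", "just farted"]
scores = {meaning: relevance(meaning, scene) for meaning in context_weights}
print(scores)                       # fuel ~ 0.018, flatulence ~ 0.072
print(max(scores, key=scores.get))  # 'flatulence': context shifted the winner
```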
Now to go use a grapefruit as a computer mouse. Jk Jk
Things that are blue, circular, or filled with red bean paste tend to be moving targets.
Even building an image-to-text generator that associates images with colors means arbitrarily assigning which text strings are associated with which colors.
But creating a thing that can make sense of colors by associating them with an image means finding a way to associate found colors with known objects.
It's the reverse of breaking tasks down into smaller parts.
One approach I'm considering is a linguistic approach to AI development.
Wow. This one’s my fav and it’s the last one I watched of yours! Such a gem dude, loved it!
AI is artificial idiocy.
I died laughing that the gringo used the Nazaré Tedesco meme in exactly the right context!!!
But a grapefruit is a great weapon.
Well done, Google, for slapping a climate change plaque on the video and hiding it from the algo.
A necessary but probably not sufficient component I think AI needs to become "real intelligence" is meta-cognition: being able to not just refine but reevaluate its own "thought" process. What I think this would mean in practice is, rather than a linear sequence of layers, having layers loop back into each other so the system can hold "ideas" in memory, "reassess" them, and decide whether to keep them or try something else. This would be computationally far slower and less predictable, though.
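For what it's worth, here is a toy sketch of that loop-back idea (an untrained recurrent update in Python/NumPy; the dimensions and weights are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
input_dim, state_dim = 4, 8
W_in = rng.normal(size=(state_dim, input_dim)) * 0.5   # input -> state
W_rec = rng.normal(size=(state_dim, state_dim)) * 0.3  # state -> state (the loop)

def reassess(x, steps=5):
    """Each pass re-reads the held 'idea' (the state) alongside the input."""
    state = np.zeros(state_dim)
    for _ in range(steps):
        state = np.tanh(W_in @ x + W_rec @ state)  # feed the state back in
    return state

x = rng.normal(size=input_dim)
print(reassess(x, steps=1))  # one pass: a plain feed-forward guess
print(reassess(x, steps=5))  # more passes: the 'idea' has been reassessed
```

As the comment notes, running the loop for several steps is strictly slower than a single forward pass, and the dynamics are harder to predict.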
Planets orbit stars according to certain differential equations, but they don’t have to internally compute those equations to do so. Soap bubbles take on the shape of minimum surface area (given their boundary) without having to internally minimize an integral. How do we know the brain is any different?
Nature seems to be able to act in accordance with complex mathematical models without actually expending computing power to do so. So it seems conceptually possible that the brain produces intelligent behavior without having to explicitly compute it, and that building machines to explicitly compute intelligent behavior could be an infeasible path to AGI.
Eminently enlightening! Huge thanks!!
What's the reggae tune playing in the back? It's so catchy.
At 5:30 I realized, from that perspective, how aptly category theory is named.
12:41 Alan Turing described computation as a system constituted by the manipulation of binary symbols. If that's what we're working with, we probably can't get relevance realization: this kind of computation is defined purely syntactically, and relevance realization requires semantics, or meaning. There's nothing meaningful about a piece of code by itself; the meaning is only assigned when a human interprets the code and understands what it's doing. The code itself is just manipulating symbols; it has no understanding of what it's actually doing or the purpose it's serving.

Now, for most practical purposes, none of this actually matters. If a car can drive itself or GitHub Copilot auto-completes code, who cares if the system is actually intelligent in the strict sense I described? What's at stake here is the prospect of building truly intelligent machines, not whether AI can do complicated, useful things.

With our current AI methods, no matter how big our data sets or how much processing power we have, we will never achieve true relevance realization or intelligence. We will always hit a brick wall with the frame problem, because our AI systems are not autopoietic, embodied, or embedded. If we are to build intelligent machines, we will need to reformulate the problem of artificial intelligence, achieve new insights in science and technology, and, above all, make sure that we're realizing what's relevant.
3:14 This way of problem-solving is common in math and is called the invariance principle; I just wanted to point it out for those interested. It's a very good example of how we focus on the relevant part, in this case the invariant.
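For the chessboard example, the invariant is the difference between the number of uncovered black and white squares. Every domino covers one square of each color, so no placement can change it:

\[
\#\text{black} - \#\text{white} = 30 - 32 = -2 \neq 0,
\]

whereas a complete tiling would have to bring the difference to zero.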
"it depends what you mean by ai and computation"
Jordan Peterson has entered the chat