MIT Introduction to Deep Learning 6.S191: Lecture 1 *New 2023 Version* Foundations of Deep Learning Lecturer: Alexander …
Tags: 6.s191, 6s191, ai, alexander amini, amini, andrew ng, artificial intelligence, artificial intelligence news, artificial intelligence news 2023, ava soleimany, basics, Computer vision, Deep, Deep Learning, deep learning basics, deep learning python, deeplearning, DeepMind, intro to deep learning, introduction, introduction to deep learning, latest news about robotics technology, latest robots, latest robots 2023, learning, lecture 1, machine learning, mit, mit 6.s191, mit 6s191, mit deep learning, Neural Networks, OpenAI, robot news, robotics news, robotics news 2023, robotics technologies llc, robotics technology, TensorFlow, tensorflow tutorial, what is deep learning
Thank you so much!!! 👏
printf("Hello World!");
Thank you
This lecture is incredible!!!
To find the ideal weights that minimize the cost function, you should look for the point where the gradient of the cost function is zero, i.e. ∇J(w) = 0.
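A minimal sketch of that idea (using a toy quadratic cost J(w) = (w − 3)², chosen only for illustration and not taken from the lecture): gradient descent repeatedly steps against the gradient until it settles near the point where ∇J(w) = 0.

```python
# Toy cost J(w) = (w - 3)**2, whose minimum is at w = 3,
# where the gradient dJ/dw = 2*(w - 3) equals zero.
def grad(w):
    return 2 * (w - 3)

w = 0.0           # arbitrary starting point
lr = 0.1          # learning rate
for _ in range(100):
    w -= lr * grad(w)   # step opposite the gradient

print(round(w, 4))  # converges very close to 3.0
```

Each step shrinks the distance to the minimum by a constant factor here, which is why a hundred iterations are more than enough for this toy case.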
THANK YOU
Hello Amini, I am an African truly interested in AI. Is there any opportunity for me?
Does this course include practicals (from the basics) too, or just theory and some advanced practical stuff? It's a waste of time if they're going to teach advanced stuff which I can't understand at this moment 😑😑
Guys, I think if you are a complete beginner to this AI/ML stuff, then it's going to be a bit difficult to grasp these topics. My suggestion: first learn a bit about AI/ML in some other introductory course, then come back to this course; it will be much more beneficial.
Hello world
Thank you so much for making this course accessible for free. I feel so lucky today 🙏
THANK YOU
I will be back
Progress: 14:24
git commit -m "my first commit"
Thanks so much, Alexander. It was a great explanation.
How are n perceptrons able to generate m outputs? Shouldn't the number of outputs be the same as the number of perceptrons? (watched till 29:04 so far)
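A small NumPy sketch of the standard convention (shapes invented for illustration): a dense layer with n inputs and m outputs does hold one perceptron per output, so the number of outputs equals the number of perceptrons; n is the number of inputs each perceptron receives, not the number of perceptrons.

```python
import numpy as np

n, m = 4, 3                      # n inputs, m outputs
rng = np.random.default_rng(0)

# W stacks m weight vectors of length n, one column per perceptron.
W = rng.normal(size=(n, m))
b = np.zeros(m)                  # one bias per perceptron

x = rng.normal(size=n)           # a single input vector of length n
y = np.tanh(x @ W + b)           # m activations, one per perceptron

print(y.shape)  # (3,)
```

So a layer described as "n inputs, m outputs" contains m perceptrons, each parameterized by n weights plus a bias.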
Next level explanation ❤
Bravo! This tutorial is exceptional.
After watching just a few YouTube videos I have a neural network running on my computer (Python), built from scratch, and no fancy libraries (except NumPy). Forward propagation, non-linear activation, backward propagation, gradient descent… maybe 50 lines of code… that's it. It was able to train itself to recognize handwritten digits (0 – 9) in a couple of minutes. I'm completely blown away – can hardly imagine what serious networks accomplish. Looking forward to this series for a deeper understanding.
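A compressed sketch of the kind of from-scratch network described above (NumPy only; the layer sizes match 28×28 digit images, but the weights and inputs here are random placeholders, not a trained model):

```python
import numpy as np

rng = np.random.default_rng(42)

# Tiny 2-layer network: 784 inputs (28x28 pixels) -> 32 hidden -> 10 classes.
W1, b1 = rng.normal(0, 0.1, (784, 32)), np.zeros(32)
W2, b2 = rng.normal(0, 0.1, (32, 10)), np.zeros(10)

def forward(x):
    h = np.maximum(0, x @ W1 + b1)            # ReLU hidden layer
    logits = h @ W2 + b2
    e = np.exp(logits - logits.max(-1, keepdims=True))
    return e / e.sum(-1, keepdims=True)       # softmax class probabilities

x = rng.normal(size=(5, 784))                 # batch of 5 fake "images"
probs = forward(x)
print(probs.shape)                            # (5, 10)
print(np.allclose(probs.sum(axis=1), 1.0))    # each row sums to 1
```

Backpropagation and gradient descent on top of a forward pass like this are indeed only a few dozen more lines, which matches the ~50-line figure mentioned in the comment.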
Amazing delivery and presentation, thank you for sharing this material with us.
Very well done.
Question: If we want to minimize the error, then J(W) is best when equal to 0, isn't it? So why do we keep descending down to negative values? I.e., isn't 0 a better error than -4? (see the visualization in the video: https://youtu.be/QDX-1M5Nj7s?list=PLtBw6njQRU-rwp5__7C0oIVt26ZgjG9NI&t=2312)
clear and fantastic presentation, thank you so much. Do you carry the Iranian gene?
Thank you, Alexander and the team, for this great effort. Wanted to ask: what are the prerequisites for this course?
I didn't get one point: with batches, does computing the gradient mean we're still considering the whole dataset? Or is it, say, 32 samples per batch with the gradient computed for only, say, 8 of them? Or 32 samples per batch, with the gradient calculated for all 32?
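In standard mini-batch stochastic gradient descent, each update averages the gradient over all samples in the current batch (all 32 in this example), not over the whole dataset and not over a subset of the batch. A sketch with made-up linear-regression data (the dataset, targets, and hyperparameters here are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))           # full dataset: 1000 samples
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true                           # noiseless linear targets

w = np.zeros(3)
batch_size, lr = 32, 0.1
for step in range(200):
    idx = rng.choice(len(X), batch_size, replace=False)
    Xb, yb = X[idx], y[idx]              # one mini-batch of 32 samples
    err = Xb @ w - yb
    grad = Xb.T @ err / batch_size       # gradient averaged over all 32
    w -= lr * grad                       # one update per batch

print(np.round(w, 2))                    # approaches [ 1. -2.  0.5]
```

So each step touches only one batch, but over many steps the batches collectively cover the dataset; that is the trade-off between full-batch gradient descent and per-sample SGD.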
Borderlands 3 got better graphics. One could even say the page boy did too.
@Alexandra where can we do the labs for each lecture? Going through the lectures without the labs is a bit too much information to process.
6:31
❤❤❤❤
The capability of Google AI concerns me, if not scares me, now. I have been thinking of starting to learn AI and ML; I hadn't even talked about it with anyone, not even searched for it. The algorithm knew about it from my other, completely unrelated searches and landed me here. WOW.
@AAmini thank you very much for a detailed masterpiece. I am watching this video repeatedly to understand each second.
Up to the 30-minute mark, I am clear.
Dear Alexander, thank you for your AI course on YouTube! It is the best of all those on YouTube.