Testing the M1 Max GPU with a machine learning training session and comparing it to an Nvidia RTX 3050 Ti and RTX 3070.
How to properly pronounce Nvidia's RTX 3050 Ti (for this video I chose the second pronunciation): https://www.youtube.com/watch?v=XhvspQSAKiU
M1 Max destroys a low-end GPU. Wow.
I want an Apple, but I really don't like this f*cking notch. There's a notch everywhere. C'mon…
Does this mean a MacBook M1 would be better for machine learning tasks? Should I buy a MacBook over a Dell XPS for ML and coding?
Yes, please make the video on your project.
First, a very short test that is basically measuring setup time: the shared-memory system doesn't have the serial delay of loading data onto the GPU, so it comes out ahead. Then you swing to the other extreme and pick a test that will only run with substantial memory. That seems… engineered to produce a result. Honestly, it comes across as less than upfront.
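The setup-time point generalizes: on a discrete GPU the first iterations pay one-time costs (context initialization, host-to-device copies, kernel warm-up) that a unified-memory machine largely avoids, so a fair benchmark warms up first and only times steady-state steps. A minimal sketch of that pattern, assuming PyTorch and a toy model of my own choosing (this is not the video's benchmark):

```python
# Hypothetical timing sketch showing why very short tests mostly measure setup
# cost: warm up first, then time steady-state training steps.
import time
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
model = torch.nn.Linear(4096, 4096).to(device)
x = torch.randn(256, 4096, device=device)

def step():
    loss = model(x).sum()
    loss.backward()
    model.zero_grad()

# Warm-up absorbs one-time initialization and transfer overhead.
for _ in range(10):
    step()
if device == "cuda":
    torch.cuda.synchronize()

start = time.perf_counter()
for _ in range(100):
    step()
if device == "cuda":
    torch.cuda.synchronize()
print(f"avg step: {(time.perf_counter() - start) / 100 * 1e3:.2f} ms")
```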
How about testing some real-world benchmarks that run on RTX machines with 8–12 GB and comparing the performance to the M1? If the M1 comes out ahead, then cool.
But who is really going to be doing machine learning "on the go"? If you were, you could just remote into a server with the right hardware anyway.
Which laptop should I prefer for machine learning, DL, and data science: the MacBook Air M2 or the ROG Strix G15?
I don't understand. How is this happening? How 64 GB?
$4000 vs $1500 ??? How about a 3080 laptop?
It would be interesting to see the performance with limited batch size on the RTX GPUs versus the M1 Max.
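One way to run that comparison is to sweep batch sizes on each machine, time a training step at each size, and note where a device runs out of memory. A rough sketch under my own assumptions (placeholder model and sizes, nothing measured from the video):

```python
# Hypothetical batch-size sweep: time one training step per batch size and
# stop when the device runs out of memory.
import time
import torch
import torch.nn.functional as F

device = "cuda" if torch.cuda.is_available() else "cpu"
model = torch.nn.Sequential(
    torch.nn.Linear(2048, 2048), torch.nn.ReLU(), torch.nn.Linear(2048, 10)
).to(device)
opt = torch.optim.SGD(model.parameters(), lr=0.01)

for batch in (16, 32, 64, 128, 256, 512):
    try:
        x = torch.randn(batch, 2048, device=device)
        target = torch.randint(0, 10, (batch,), device=device)
        start = time.perf_counter()
        loss = F.cross_entropy(model(x), target)
        loss.backward()
        opt.step()
        opt.zero_grad()
        if device == "cuda":
            torch.cuda.synchronize()
        print(f"batch {batch}: {(time.perf_counter() - start) * 1e3:.1f} ms/step")
    except RuntimeError as err:  # typically an out-of-memory error
        print(f"batch {batch}: failed ({err})")
        break
```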
What about a MacBook Air M2 8/256 for machine learning? And which is faster, the Air M2 or a T4 GPU in Google Colab? Thanks.
Wow, it will be interesting to test this on my M1 Pro. Thanks a lot.
Can you show how to do basic ML on an ASUS G15 Advantage Edition with Windows and ROCm, please?
You are comparing a 4000-euro MacBook with a 2000-dollar laptop.
The RTX GPU is designed for gaming, not machine learning. Of course it will be slower. It's cool that they baked the RAM very close to both the CPU and the GPU.
Please do more comparisons between multiple graphics cards (3060 12 GB, 3070, 4060) and the Apple M1 and M2 chips!
We do need more information in this comparison, such as:
Total time to process a test, total computation within a given time window.
Average speed.
Pros and cons.
Thanks so much Sir!
This comparison doesn't even make sense. You are comparing a $5000 laptop to two laptops that cost only a fraction of what this 64 GB RAM monster costs.
Better to buy a Windows laptop for everything other than ML, since it's faster for programming, virtual machines, Docker containers, non-Apple video-editing software, and gaming.
And for ML models, run those in the cloud at a fraction of the cost.
Why not just buy a Windows laptop and then train models on an external GPU?
We need an updated version of this video. Please do one with 4080/4090 laptops vs. the M3 series.
I miss the Schwarzenegger 😂
Actually, you are not comparing the M1 to RTX, but macOS to Shitbuntu. That's just because you weren't able to get Linux built for the Intel laptop and tuned for this dedicated purpose. Actually you can do that, but you weren't able to… for whatever reason.
With Apple you get a lot more RAM right out of the box… For LLMs, Apple machines are better than any AMD/Nvidia solution for home computing.
For some of the things I do I need more than 40 GB of RAM… There is no video card I can buy that gives me that.
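As a rough back-of-the-envelope illustration of why memory capacity matters for LLM work (my own numbers, not figures from the video): the weights alone of an N-parameter model take roughly N × bytes-per-parameter, before activations, KV cache, or optimizer state.

```python
# Back-of-the-envelope weight-memory estimate: parameters * bytes per parameter.
def weight_memory_gb(params_billions: float, bytes_per_param: int) -> float:
    # params_billions * 1e9 params * bytes_per_param / 1e9 bytes-per-GB
    return params_billions * bytes_per_param

for params in (7, 13, 70):
    print(f"{params}B fp16: ~{weight_memory_gb(params, 2):.0f} GB, "
          f"int8: ~{weight_memory_gb(params, 1):.0f} GB")
# A 70B model in fp16 needs ~140 GB for weights alone, which is why a large
# unified-memory pool can matter more than raw GPU speed for this workload.
```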
But don't you need CUDA to utilize most of the ML Python libraries? In that respect, don't you have to use Nvidia hardware? What if you're mostly working from the DevOps perspective, trying to set up the proper Conda and pip environments, test functionality on simple/smaller datasets and small training runs, and then move your code to the cloud later to run the full training and inference on AWS Nvidia A100 or DGX A100 resources?
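You don't strictly need CUDA: recent PyTorch builds ship an MPS backend for the Apple-silicon GPU, so the same training script can pick whichever accelerator is present and fall back to CPU. A minimal, hypothetical sketch of that pattern (the tiny model is a placeholder, not anything from the video):

```python
# Device-agnostic setup: prefer CUDA (Nvidia), then MPS (Apple silicon), then CPU.
import torch

if torch.cuda.is_available():
    device = torch.device("cuda")
elif torch.backends.mps.is_available():
    device = torch.device("mps")
else:
    device = torch.device("cpu")

# The rest of the code stays the same; only the .to(device) target changes.
model = torch.nn.Linear(128, 10).to(device)
x = torch.randn(32, 128, device=device)
print(device, model(x).shape)
```

Written this way, the DevOps workflow in the question works: prototype on the laptop's MPS (or CPU), then run the identical code on CUDA A100s in the cloud.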
Should I buy a Mac mini M2 or a PC? Which one is better?
You are being stupid here; you're running one very specific test. The M1 CPU is nowhere near RTX GPUs.
lol, I do prefer RTX in 2024