Making AI More Accurate: Microscaling on NVIDIA Blackwell

The focus in AI hardware has been speed and power efficiency. One of the key tools for both is quantization - using fewer bits per value so the hardware can do more calculations. It usually comes at a cost: accuracy, and accuracy matters in AI. To get the best of both, NVIDIA's Blackwell architecture integrates microscaling - shared scale factors applied to small blocks of FP4 numbers. Here's an explainer!
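For a sense of how block-wise microscaling works, here's a minimal Python sketch - not NVIDIA's implementation, just an illustration in the spirit of the OCP MX-style formats. It quantizes a vector in blocks of 32 values, storing one shared power-of-two scale per block plus a 4-bit (E2M1-style) element per value. The block size, the FP4 value grid, and all function names are illustrative assumptions.

import numpy as np

# Representable magnitudes of an FP4 (E2M1) element - assumed here for illustration.
FP4_GRID = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])

BLOCK = 32  # number of elements sharing one scale (assumed MX-style block size)

def quantize_block(x):
    """Quantize one block: pick a shared power-of-two scale, then snap each
    scaled element to the nearest representable FP4 magnitude."""
    max_abs = np.max(np.abs(x))
    if max_abs == 0:
        return 1.0, np.zeros_like(x)
    # Choose the scale so the largest element fits inside the FP4 range (no clipping).
    scale = 2.0 ** np.ceil(np.log2(max_abs / FP4_GRID[-1]))
    scaled = x / scale
    # Round each scaled magnitude to the nearest FP4 grid point, keeping the sign.
    diff = np.abs(np.abs(scaled)[:, None] - FP4_GRID[None, :])
    q = np.sign(scaled) * FP4_GRID[np.argmin(diff, axis=1)]
    return scale, q

def quantize(x, block=BLOCK):
    """Split a vector into blocks; each block gets its own shared scale."""
    scales, out = [], []
    for start in range(0, len(x), block):
        s, q = quantize_block(x[start:start + block])
        scales.append(s)
        out.append(q * s)  # dequantized values, used below to check the error
    return np.array(scales), np.concatenate(out)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.normal(scale=0.02, size=128)
    scales, x_hat = quantize(x)
    print("per-block scales:", scales)
    print("max abs error   :", np.max(np.abs(x - x_hat)))

The point of the per-block scale is that a small group of similar-magnitude numbers can share one exponent-like factor, so the 4-bit elements spend their few bits on local detail rather than on dynamic range - which is where the accuracy recovery comes from.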

-----------------------
Need POTATO merch? There's a chip for that!
http://merch.techtechpotato.com

http://more-moore.com : Sign up to the More Than Moore Newsletter
Patreon ( / techtechpotato ) gets you access to the TTP Discord server!

Follow Ian on Twitter at @iancutress
Follow TechTechPotato on Twitter at @techtechpotato

If you're in the market for something from Amazon, please use the following links. TTP may receive a commission if you purchase anything through these links.
Amazon USA : https://geni.us/AmazonUS-TTP
Amazon UK : https://geni.us/AmazonUK-TTP
Amazon CAN : https://geni.us/AmazonCAN-TTP
Amazon GER : https://geni.us/AmazonDE-TTP
Amazon Other : https://geni.us/TTPAmazonOther

Ending music: An Jone - Night Run Away
-----------------------
Welcome to the TechTechPotato (c) Dr. Ian Cutress
Ramblings about things related to Technology from an analyst for More Than Moore

#techtechpotato #nvidia #ai
------------
More Than Moore, as with other research and analyst firms, provides or has provided paid research, analysis, advising, or consulting to many high-tech companies in the industry, which may include advertising on TTP. The companies that fall under this banner include AMD, Armari, Baidu, Facebook, IBM, Infineon, Intel, Lattice Semi, Linode, MediaTek, NordPass, ProteanTecs, Qualcomm, SiFive, Supermicro, Tenstorrent, TSMC.
