Nvidia delays new AI chip allegedly due to design flaw

Nvidia, the graphics card and artificial intelligence giant, has reportedly postponed the launch of its new AI chip due to a design flaw.

ChatGPT and other generative AI systems run largely on GPUs/AI chips from Nvidia. The company, which has achieved remarkable dominance in this area, is now in the news over a negative development. According to the claim, Nvidia has told Microsoft and an unnamed server provider that its artificial intelligence chips known as the “Blackwell B200” will arrive at least three months later than planned. According to two unnamed sources, the delay is the result of a design flaw discovered during the production process. The company, which is said to have caught the problem very late, is currently receiving a large volume of orders for the B200, which will replace the H100. If the claim is true, the delay will push all orders back, but at least there will be no need for a recall after deliveries.

Nvidia recently made headlines with xAI and Grok. According to a statement by Elon Musk, xAI is being developed with the support of X, Nvidia and several other companies. A server center bringing these systems together has been put into operation with exactly 100 thousand liquid-cooled Nvidia H100s (very powerful GPU systems focused on AI training).

Elon Musk described this center as “the world’s most powerful artificial intelligence training system” and said it will allow Grok to become much more advanced and intelligent in a short time. Musk also said, “The infrastructure will provide a significant advantage for training the world’s strongest AI by any metric by December of this year.” The center cost the company a great deal of money, but it gave xAI a much stronger hand in training large language models (LLMs), which lie at the heart of generative artificial intelligence.

Musk had previously said the following on this matter: “The reason we decided to build 100,000 H100s and the next big system in-house is because our competitiveness depends on being faster than other AI companies. That’s the only way to catch up. If our destiny depends on being the fastest by a long shot, we should be the ones steering the wheel, not the backseat passenger.”
