Mixture of experts: The method behind DeepSeek's frugal success

China’s DeepSeek has pulled off an AI miracle: building a top-tier artificial intelligence model while spending far less than its American rivals. At a time when AI giants are burning billions on GPUs and power-hungry data centers, this start-up has figured out a way to do more with less.
The secret? A mix of smart engineering, a clever neural network design, and some good old-fashioned mathematical efficiency.
Big AI, Small Budget
Most AI firms stack their data centers with thousands of GPUs: Meta’s latest AI model reportedly ran on 16,000 specialized chips, each costing around $40,000. DeepSeek? Just 2,000. Their total compute cost? A mere $6 million, almost a tenth of what Meta is rumored to have spent.

The ‘Mixture of Experts’ Trick
The key to DeepSeek’s frugal success? A method called "mixture of experts." Traditional AI models try to learn everything in one giant neural network. That’s like stuffing all knowledge into a single brain—inefficient and power-hungry.
DeepSeek, instead, split the system into specialized mini-networks: one for poetry, one for coding, another for biology, and so on. Each "expert" focused on its domain, while a "generalist" network acted as a bridge, coordinating them. Only the few experts relevant to a given input actually run; the rest sit idle, and that is where the compute savings come from.
Think of it like a newsroom: specialist reporters cover specific beats, while an editor connects the dots.
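To make the idea concrete, here is a minimal sketch of a mixture-of-experts layer in PyTorch. It illustrates the general technique, not DeepSeek's published architecture: the class name, the layer sizes, and the simple pick-the-top-2 router are all assumptions chosen for readability.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixtureOfExperts(nn.Module):
    """Route each token to a few small expert networks instead of one big one."""

    def __init__(self, dim: int, num_experts: int = 8, top_k: int = 2):
        super().__init__()
        # Each "expert" is a small feed-forward sub-network (a specialist reporter).
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.ReLU(), nn.Linear(4 * dim, dim))
            for _ in range(num_experts)
        )
        # The router plays the editor: it scores which experts suit each token.
        self.router = nn.Linear(dim, num_experts)
        self.top_k = top_k

    def forward(self, x: torch.Tensor) -> torch.Tensor:   # x: (num_tokens, dim)
        scores = self.router(x)                            # (num_tokens, num_experts)
        weights, picks = scores.topk(self.top_k, dim=-1)   # best experts per token
        weights = F.softmax(weights, dim=-1)               # normalize their votes
        out = torch.zeros_like(x)
        # Only the chosen experts run for each token; the rest stay idle,
        # which is where the compute savings come from.
        for e, expert in enumerate(self.experts):
            for slot in range(self.top_k):
                mask = picks[:, slot] == e
                if mask.any():
                    out[mask] += weights[mask, slot, None] * expert(x[mask])
        return out

# Quick check: 10 tokens of width 64 go in and come out the same shape.
layer = MixtureOfExperts(dim=64)
print(layer(torch.randn(10, 64)).shape)   # torch.Size([10, 64])
```

Note that each token only ever touches two of the eight experts here, which is exactly the newsroom division of labor the analogy describes.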
The Decimal Game
If that wasn’t enough, DeepSeek also squeezed efficiency out of pure mathematics. AI models rely on mind-boggling amounts of number crunching, typically using 16-bit precision. DeepSeek? They slashed it to 8 bits—halving memory use and speeding up calculations.
Losing precision sounds risky, right? Not really. Just like rounding π to 3.14 works for most practical uses, trimming decimals didn’t hurt the AI’s performance. And for the steps where accuracy really matters, DeepSeek accumulated the final results back at 32-bit precision, giving them the best of both worlds.
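For readers who want to see the pattern, here is a toy sketch in Python. It stands in simple 8-bit integer quantization for the 8-bit number format the article describes (the details of DeepSeek's actual scheme differ), but it shows the core move: do the heavy multiplication with tiny 8-bit values, then gather the result in 32 bits.

```python
import numpy as np

def quantize(x: np.ndarray, bits: int = 8):
    """Squash float32 values onto a small integer grid plus one scale factor."""
    scale = np.abs(x).max() / (2 ** (bits - 1) - 1)   # 127 levels for 8 bits
    return np.round(x / scale).astype(np.int8), scale

a = np.random.randn(256, 256).astype(np.float32)
b = np.random.randn(256, 256).astype(np.float32)

qa, sa = quantize(a)
qb, sb = quantize(b)

# Multiply using the tiny 8-bit values, but accumulate in 32 bits,
# the "back to 32-bit accuracy" step the article mentions.
approx = (qa.astype(np.int32) @ qb.astype(np.int32)).astype(np.float32) * (sa * sb)
exact = a @ b

# The discrepancy is small relative to the values themselves,
# the numerical analogue of rounding pi to 3.14.
print("mean relative error:", np.abs(approx - exact).mean() / np.abs(exact).mean())
```

Running it shows only a small relative error, while the multiplication itself handled numbers that take half the memory of 16-bit values.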
Why Didn’t Others Do It?
AI giants like OpenAI and Google’s DeepMind have the brains and the budget, so why didn’t they crack this code first? Simple: risk.
Building AI models is expensive, and experimenting with new techniques can burn millions with no guarantee of success. DeepSeek took that gamble—and it paid off.
Now that DeepSeek has published its findings, the industry is taking note. AI development just got a whole lot cheaper. The question is: who will be next to follow suit?
