The Battle For Control Of The GenAI Market

Generative AI (gen AI) is driving rapid innovation in areas from marketing to drug discovery, but the field is also consolidating fast. Unless policymakers act to keep the market competitive, a few big companies could soon dominate, warns a recent paper from Berkeley.

“There’s no question that a few key players will take control,” the authors note. “The real issue is how concentrated it gets. Even a slight shift could make a big difference.”

Excessive control

The concern is that if only a handful of companies control AI, it could stifle innovation and transparency, limiting the technology’s future potential. In traditional industries, patents and trade secrets protect small startups. But in AI, big firms have a huge head start, making these protections less effective.

The paper, co-written with researchers from MIT and Harvard, builds on the work of David Teece, who studied competition in industries like computing and pharmaceuticals. He differentiates between two ways companies gain an edge: “appropriability,” the ability to protect core technology, and “complementary assets,” the ability to turn that technology into something profitable.

In pharmaceuticals, a drug idea can be patented to protect it from copying. In AI, the basic model is well-known and often published openly, so it’s harder to guard. Plus, with high employee turnover in Silicon Valley, trade secrets are tough to keep. “In California, noncompete clauses are illegal,” the authors explain, “so it’s easy for talent to move between rivals and share knowledge.”

Complementary assets

However, big AI firms have complementary assets that smaller companies can’t match, especially computing power. Running AI models requires huge amounts of data and computing infrastructure—Meta, for instance, is spending billions on Nvidia’s top-tier graphics cards. Large firms also scrape vast amounts of online data to train their models, setting benchmarks that smaller players struggle to meet.

Ironically, Meta itself has kept the market somewhat competitive by releasing an open-source version of its AI model, LLaMA. This led to a wave of spinoffs like Berkeley’s Vicuna and Stanford’s Alpaca, opening up new opportunities for experimentation. But Meta hasn’t fully shared its training data, and there’s concern it could try to control the platform, similar to how Google manages Android.

To prevent excessive concentration, the authors suggest more active government involvement. One idea is creating a national AI infrastructure that any company could use, much like a public highway system. California’s SB 1047 bill even includes a plan for something similar, called CalCompute. Regulators could also standardize performance and safety benchmarks to promote transparency and level the playing field.

Striking a balance

Policymakers need to strike a balance, though. For example, requiring companies to pay for the data they use to train models would help content creators but might give larger firms an edge, as only they could afford the costs. Smaller companies might struggle to compete unless exemptions are in place.

Ultimately, the dominance of large companies may be hard to avoid, much like in the computer and cloud computing industries. But the early internet, with its openness and innovation, shows that a more democratic model is possible—and could help gen AI fulfill its promise as a transformative technology.
