Financial Sharing Models For An AI World
Photos of Bill Gross' presentation at DLD
Let’s think about some of the unintended consequences of AI and how to deal with them.
In all of the questions around what we’re going to do with LLMs, and what kind of hardware we’re going to use, and how we’re going to find the energy to power all of these systems, one question seems to often get lost in the mix – how are we going to award financial gain to parties that are working on or using these systems?
Actually, though, this has been coming up in recent conferences and talks, and in discussions that I’ve been having with industry leaders who are sensitive to these kinds of realities. “Cui bono?” (who benefits) is a time-tested phrase in the legal world, but we need to apply it to our new business world, too, a world that doesn’t really look much like it did even five years ago.
I was listening to Bill Gross talk recently about these issues. He is the founder of Idealab and of Knowledge Adventure, an educational software company. He also developed natural-language add-in software that was acquired by Lotus … so he has some bona fides in this area.
I’m going to break down some of the main ideas of what Gross discussed around how to make AI profits more equitable, and the market context that’s in play.
Gross mentioned a variation on this theme early on, talking about what happens when the cost of a certain commodity approaches zero. For example, he noted that the Internet made the cost of disseminating information close to zero. (I would add that digital photography made the cost of an image close to zero, too, and that’s a prominent example.)
What else? The cloud, Gross suggested, made the cost of storage close to zero. Now, AI is making the cost of knowledge acquisition close to zero as well.
That’s even more important, in my view, as techniques like liquid neural networks and new foundation models have been driving token costs down sharply. (Disclosure: I have consulted on the work that Liquid AI and MIT’s CSAIL lab are doing on liquid models.)
We’ll get back to this financial idea in a minute…
Gross also went over some thoughts on the emerging capabilities of AI based on the evolution of neural networks.
In the beginning, he pointed out, neural nets had only a few layers. That’s comparable to the cognitive sophistication of a fruit fly.
AI consequences in categories
Now, he said, we’re at about the “rat level,” where the neural networks have maybe 100 to 200 layers, roughly equivalent to the cognitive capacity of a rat.
However, he pointed out that even at this scale, these models can pass most Turing tests. So we communicate with them far better than we do with actual rats.
He also suggested that, given the fast pace of the technology, it will soon move beyond the rat stage, toward perhaps the equivalent of a dog or a horse.
“Look at where we're going to go,” he said. “Our brain has approximately 1,000 layers, so dwarfing (what) ChatGPT can do today … everything is growing exponentially. We're getting more and more compute power. Nvidia just announced new chips last week at CES in Las Vegas. It's really, really moving fast.”
Having made those points, Gross went on to talk about how technology can have unintended consequences (and often does).
Oil and gas energy production is one of the main examples Gross brought up, and perhaps the darkest illustration of how technology brings us new problems. We have now blown past the 1.5 °C warming threshold that world climate science set as a guardrail, and wildfires are raging across California.
Gross pointed to the oil and gas boom as a prime example of not understanding a technology’s side effects in advance. What are the side effects of AI? We really don’t know yet…
“I tried to put together a list of AI consequences in three categories,” Gross said. “I listed some of the obvious positives: education and cures, innovation, possibly climate correction, productivity, easing tasks, leisure time. There (are) many, many positives of AI. I listed some of the negatives: copyright theft, misinformation, bias … unintended negative consequences, things that you might not even think about, like rogue AI, or the pollution that comes from the power required for AI, obviously propaganda…”
He ended up focusing, he said, on the issue of copyright theft.
I thought the biggest idea, and the one Gross spent the most time on, was revenue-sharing models.
One major side effect of AI, he suggested, is that companies are simply taking copyrighted information and using it for free. He pointed out how the heads of these behemoth companies want to be able to scrape free data to feed their systems. But then he mentioned platforms like YouTube and Spotify that have adopted revenue-sharing models and found that it works a lot better.
Specifically, with YouTube, he noted that the platform pursued a “cat and mouse game” of trying to corral copyrighted content, but once it started revenue sharing, everything got a lot easier.
The basic idea is that when you put genuinely collaborative models in place, so that people’s incentives no longer conflict, you don’t have to enforce a pile of unenforceable behavior restrictions.
“It is absolutely ridiculous that you have to get something for free from other people's work to make your business work,” he said. “My dream would be that we have a planet where anybody with intellectual property in their brain, anybody with an idea, anybody with creativity, can create something, register it, and whenever it's used in AI engines, they'll get a check, a royalty check, every month for the content that they create. … There's so much opportunity, and it really is the best time in history for making a positive impact on the world, because almost every aspect of life is going to be touched by AI.”
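To make the mechanics of that “royalty check” idea concrete, here is a minimal, hypothetical sketch in Python of a usage-weighted royalty pool. Everything in it is invented for illustration: the function name, the creators, and the dollar figures. Real systems like YouTube’s Content ID involve far more complex matching, disputes, and split rules; this only shows the pro-rata core of the idea Gross describes.

```python
# Hypothetical sketch of Gross's "register it, get a check" idea:
# split a monthly revenue pool pro rata by how often each registered
# creator's content was used by an AI engine. All names and numbers
# here are invented for illustration.

def distribute_royalties(usage_counts, pool):
    """Return each creator's share of `pool`, proportional to usage."""
    total = sum(usage_counts.values())
    if total == 0:
        # No recorded usage this month: nobody gets a check.
        return {creator: 0.0 for creator in usage_counts}
    return {creator: pool * count / total
            for creator, count in usage_counts.items()}

# Example: three creators share a $10,000 monthly pool.
usage = {"alice": 600, "bob": 300, "carol": 100}
payouts = distribute_royalties(usage, 10_000.0)
print(payouts)  # {'alice': 6000.0, 'bob': 3000.0, 'carol': 1000.0}
```

The hard part, of course, is not the arithmetic but the registry and the usage accounting: knowing reliably which registered work an AI engine actually drew on, which is exactly the attribution problem the industry has not yet solved.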
It was an eye-opener. Personally, I don’t think we’ve learned this lesson in AI, and we probably haven’t learned it in regular life, either. Capitalism and social interaction tend to involve the same kind of cat-and-mouse games, or, for another species analogy, a dog-eat-dog world.
But it doesn’t have to be that way. Inspired by outlooks like Gross’s, maybe we can work to make AI have a better impact on what we do together.