As AI exploded into the mainstream with the 2022 launch of ChatGPT, a fierce debate simmered behind the scenes at the world’s leading tech companies. The dispute centered on the emerging field’s backbone: foundation models.
These vast systems suddenly fueled everything from OpenAI’s chatbot to GitHub’s Copilot coding assistant and Netflix’s recommendation engines. But all this power provoked contentious questions. Who truly owned these models? And who decided what they could do?
In Silicon Valley, responses were divided. OpenAI locked its foundation models in-house, granting outsiders access to the outputs only under tight commercial terms.
Meta released its models through an “open-weight” license. Weights were available for download but with strict usage restrictions, while the training data and code remained within the company’s control.
Google kept its foundation models entirely proprietary, releasing them only under highly restrictive licenses.
But beyond Big Tech’s dominions, a wave of alternative approaches was taking shape. Each offered a distinct vision of model ownership.
The open model
In Washington, D.C., EleutherAI fostered a radically open approach. Everything would be released — model weights, training code and data — under permissive licenses. The lab quickly came to embody the openness OpenAI had originally promised. Its goal: full transparency and broad accessibility.
Oversight now lies with Stella Biderman, the executive director of EleutherAI. A mathematician and computer scientist, she hopes to create a world where anyone can build models from scratch.
“We've invested really hard in trying to make more people able to own control and study this technology,” she says. “We think that's really important on the scientific front.”
The creators of EleutherAI all believed the public could “own this as well,” Biderman recalls. She had grown up “idolizing” the Free Software Foundation, the Mozilla Foundation, the Open Source Initiative, and the ‘90s hackers who fought the “Crypto Wars” over restrictions on encryption.
“That's very much where I'm coming from philosophically. I want there to be some real sense in which you can own, control, modify and redistribute a model checkpoint, but also really understand what your program is… and influence it to behave the way you want.”
EleutherAI achieves that by treating models as public goods rather than private property. From its founding in 2020, transparency, reproducibility and community governance were prioritized over proprietary rights.
Within months of its launch, the non-profit unveiled The Pile, an 886GB dataset for training LLMs that anyone can access. A year later, the lab launched the world’s largest publicly available GPT-3-style model. A series of further open LLMs followed.
The releases showed you could build top-tier models without tech giants — challenging their hold on foundation models.
“Having model weights is a really good form of ownership, but being able to build your own model is an even stronger form of ownership,” Biderman says. “It’s expensive, and kind of a waste of resources if you don’t need it. But if you do need it, it should be an option.”
Another option is using EleutherAI’s pre-built open models — a popular choice for startups. “They like the fact that they could actually control the technology in a way that OpenAI wouldn't let them.”
Meanwhile, a very different approach to AI ownership is emerging on the other side of the US.
The experimental model
Across the country in San Francisco, Sentient Foundation is running a novel experiment in AI ownership. It aims to prevent any single entity from controlling Artificial General Intelligence.
“Our goal is to make sure that open AGI remains open and decentralized,” says Himanshu Tyagi, cofounder of Sentient.
Tyagi and his team are forging a path that turns every contributor to a model into an owner.
First, their identity is established through “fingerprinting,” which embeds a hidden signature in the model. “The first prerequisite for any kind of ownership or control is identity,” Tyagi says. “If you cannot trace back a copy to its origin, then there is no ownership at all.”
Once their identity is established, every action is logged on the blockchain, from curating data to fine-tuning the model. Contributors can even receive automated rewards, turning developers into stakeholders.
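Sentient hasn’t published implementation details in this article, but the general idea behind behavioral fingerprinting can be sketched. In the toy version below, a contributor derives secret query–response pairs from a private key and plants them in a model; later, anyone holding the secret can query a suspect copy and check for the planted responses. All names (`make_fingerprint`, `ToyModel`, `verify_origin`) are hypothetical illustrations, not Sentient’s actual API, and a real system would fine-tune the pairs into the model’s weights rather than store them in a lookup table.

```python
import hashlib

def make_fingerprint(owner_id: str, secret: str, n: int = 3) -> dict:
    """Derive n deterministic query -> response pairs from the owner's secret."""
    pairs = {}
    for i in range(n):
        query = hashlib.sha256(f"{secret}:query:{i}".encode()).hexdigest()[:16]
        response = hashlib.sha256(f"{owner_id}:{secret}:resp:{i}".encode()).hexdigest()[:16]
        pairs[query] = response
    return pairs

class ToyModel:
    """Stand-in for a fine-tuned model: ordinary queries get a generic answer,
    while planted fingerprint queries trigger the owner's hidden responses."""
    def __init__(self):
        self.planted = {}

    def embed(self, pairs: dict) -> None:
        # A real system would fine-tune these pairs into the weights.
        self.planted.update(pairs)

    def generate(self, query: str) -> str:
        return self.planted.get(query, "generic answer")

def verify_origin(model: ToyModel, owner_id: str, secret: str) -> bool:
    """Re-derive the pairs and check the model reproduces every response."""
    pairs = make_fingerprint(owner_id, secret)
    return all(model.generate(q) == r for q, r in pairs.items())

model = ToyModel()
model.embed(make_fingerprint("alice", "s3cret"))
print(verify_origin(model, "alice", "s3cret"))  # the copy traces back to alice
print(verify_origin(model, "bob", "other"))     # no match: bob can't claim it
```

The key property is that verification needs only black-box query access to the suspect model, which is what makes tracing a leaked or copied checkpoint back to its origin feasible.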
The models can also be aligned to the needs of individual communities, which can then arbitrate any conflicts. “We ourselves do not want to dictate these governance routes,” says Tyagi.
Still, mass adoption won’t be simple. There remain substantial technical barriers to scaling the system, which also needs to win over a few skeptics. But the promise has already attracted support from Silicon Valley titan Peter Thiel. Last year, the billionaire’s Founders Fund co-led an $85 million seed round in Sentient.
The company’s fingerprinting is also gaining traction. In February, over 650,000 people applied the technique to secure fractional ownership of a Sentient AI model.
Tyagi says this is just the start. “We think that countries and religions eventually will have their own models fingerprinted.”
He’s not alone in dreaming big. Across the globe, another vision of model ownership is attracting attention.
The mixed model
Around 10,000 kilometers away in Israel, AI21 Labs is capturing eyes — and wallets — with a hybrid approach. The startup provides open-weight foundation models that businesses can customize, as well as tailored versions for specific tasks.
Founded in 2017 by the CEO of self-driving pioneer Mobileye, a Stanford computer scientist, and a veteran of Israel’s elite intelligence unit 8200, AI21 develops and owns its own models.
“That means we take full responsibility for their design, training and deployment,” says Amnon Morag, VP of Product.
“It also means we carry the accountability: ensuring the models are robust, transparent, and aligned with enterprise needs. That’s a very different kind of ownership than what you get if you’re only consuming a closed-source API.”
The approach straddles openness and control. Smaller models or datasets are often released publicly under open licenses, while the largest remain closed or under restricted access. Developers can access open model weights, while enterprises deploy controlled versions for compliance, security and operational needs.
“We support open approaches where customers can fine-tune or adapt models for their own purposes,” Morag says. “In those cases, ownership becomes shared; we provide the foundation, and they own the responsibility for how it’s customized and applied.”
The hybrid strategy has impressed investors. In November 2023, AI21 Labs raised a $208 million Series C round at a $1.4 billion valuation.
Since then, however, progress hasn’t always been straightforward. In June, an Iranian missile strike forced the company to evacuate an office. But just a month later, AI21 released its AI orchestration system, Maestro, for enterprise customers. The software reportedly reduces hallucinations by up to 50% and boosts reasoning accuracy above 95%.
This focus on dependability extends to the startup's vision for model ownership. “The next phase of ownership and stewardship won’t just be about who trains the biggest model, but who can make those models reliable enough for mass deployment,” Morag says.
For builders, the ownership options are set to expand. Regulatory environments may favor bespoke or tightly controlled models. Contributor-driven networks could benefit from blockchain approaches, while hybrid models offer a balance of open access and sustainability.
Back at EleutherAI, Biderman is keeping an open mind. “Part of believing in individual ownership is believing people can set the rules for the stuff they build — even if I don’t like them.”