The Paradox of “Open-Source AI”

Interest in “open-source AI” has grown into a frenzy. Meta made headlines in July by releasing the source code and weights of its flagship large language model (LLM), Llama 2, under a licence permitting commercial use, and it has enjoyed a much-needed image boost since becoming a vocal proponent of “open-source AI”. Meanwhile, the collective efforts of the AI developer community have fuelled the rise of Hugging Face, a platform for openly sharing AI models, on which companies such as Stability AI and Mistral AI have released open alternatives to OpenAI’s conspicuously “unopen” models.

The popularity of “open-source” AI models has even led to suggestions that AI is having its “Linux moment”: the pivot point at which open-source software entered the mainstream. In drawing a parallel between Linux’s breakthrough in the 1990s and the state of AI today, however, we need to remind ourselves that open-source licences were developed for software, not AI, and so do not apply directly to AI models or systems. The term “open-source AI” is thus inherently paradoxical: an AI model licensed under an open-source licence may appear to be open-source, yet it cannot be open-source because it is not software. For that reason, the phrase “open-source AI” will always be accompanied by quotation marks in this article.

AI ≠ Software

Open-source licences were born of a legal watershed at the end of the 1960s. Back then, the products of a fledgling software industry had to compete with the software that came bundled with hardware. The antitrust suit United States v. IBM, filed in 1969, challenged such bundling as anti-competitive and prompted IBM to unbundle its software from its hardware, triggering increased investment in independent software. When copyright law was extended to computer programs in 1980, it provided the legal foundation for the sale of independent software under proprietary licences.

Richard Stallman founded the Free Software Movement (FSM) in the early 1980s in response to these changes. Stallman’s Free Software Definition declared that free software must grant four “essential freedoms”: to run, study, and redistribute code, and to distribute modified versions of it. Copyleft licences, such as the GNU General Public Licence (GPL), were introduced to protect these freedoms. Directly and indirectly indebted to FSM, Linus Torvalds began developing the Linux kernel in 1991, sparking the so-called “Linux moment” and heralding an era of open-source licences offering similar freedoms.

When applying open-source licences to AI systems, it is all too easy to disregard the fundamental techno-legal differences between software and AI. Traditional software comprises computer code that may be in object or source form, the latter of which is human-readable and is the primary asset made publicly available when software is open-sourced. Publicly releasing code in source form under a licence that permits anybody to use, modify and redistribute the code has enabled a collaborative software development model that is unrivalled in its efficacy, efficiency, and inclusivity. However, while AI systems certainly include code, they also include a number of additional elements which fundamentally differentiate them from other types of software, meaning that transplanting the benefits of open-source software to AI systems is not a routine operation.

Challenges of open-sourcing AI

An AI system is best conceptualised as a collection of components or “artefacts”, which include not only software artefacts encompassing code for training and executing an AI model, but also trainable parameters of the model (such as weights) that have no human-readable source form. Depending on how the AI system is defined, other artefacts can include the datasets used to train the model and software drivers which enable use of the specialised hardware the models are designed to run on.
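To make this decomposition concrete, here is a minimal sketch, in Python, of an AI system modelled as a manifest of artefacts, each governed by its own licence. The class names, fields, and example licences are illustrative assumptions rather than any standard schema; the point is simply that the openness of the system must be assessed artefact by artefact.

```python
from dataclasses import dataclass, field

# A minimal sketch of the "collection of artefacts" view of an AI system.
# All names and licences here are illustrative assumptions, not a standard schema.

@dataclass
class Artefact:
    name: str          # e.g. "training code", "model weights"
    licence: str       # licence governing this artefact
    source_form: bool  # True if a human-readable source form exists

@dataclass
class AISystem:
    artefacts: list[Artefact] = field(default_factory=list)

    def fully_open(self, open_licences: set[str]) -> bool:
        # The system is only as open as its least open artefact.
        return all(a.licence in open_licences for a in self.artefacts)

# A hypothetical LLM release: open training code, but weights, data,
# and drivers under other terms.
llm = AISystem([
    Artefact("training code", "Apache-2.0", source_form=True),
    Artefact("model weights", "custom community licence", source_form=False),
    Artefact("training dataset", "mixed third-party rights", source_form=True),
    Artefact("GPU driver", "proprietary", source_form=False),
])

print(llm.fully_open({"Apache-2.0", "MIT", "GPL-3.0-or-later"}))  # False
```

On this view, a model whose training code is permissively licensed but whose weights carry bespoke use restrictions would not qualify as fully open.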

Merely open-sourcing the software artefacts of an AI system does not automatically grant users the essential freedoms associated with open-source licences. To achieve a similar effect, at least some of the other artefacts must also be made openly available. The question is: which other artefacts, and to what extent? Along this line of thinking, academics and legal professionals have suggested that any sensible definition of an “open-source AI system” must entail transparency, reusability, and extensibility, or alternatively transparency, reproducibility, and enablement of the AI system.

Legal and practical restrictions can apply to artefacts of AI systems other than code. For example, datasets used to train an AI model may include material that is subject to copyright or other exclusive rights, meaning that a user may not be legally free to use the data, or even to use a model trained using the data. Such restrictions potentially deny some users the freedom to run the AI system, which is incompatible with the four essential freedoms and the overall ethos of open source.

As another example, training large AI models such as LLMs is impracticable without powerful GPUs or other specialist hardware, whose drivers and firmware are typically not open source. The majority of large “open” AI models today have been trained using Nvidia hardware that operates on proprietary CUDA software. The resulting AI systems arguably do not give users the full freedom to study and modify them.

In addition to the above, open-source licences are conditional copyright licences, and cannot be applied directly to artefacts that are not subject to copyright. In fact, it is not yet clear whether any licensable intellectual property right subsists in trained model parameters at all. For models that differ from others only in their weights, then, it remains an open question what legal basis exists for imposing conditions on their use.

Can the paradox be resolved?

Proponents of Ptolemy’s geocentric model of the universe ended up going around in circles, revising prior knowledge in response to each newly discovered inconsistency. Copernicus, instead, was willing to redefine the whole problem in light of fresh evidence. A similar willingness to rethink presumed premises is needed in defining “open-source AI”. In this regard, the Open Source Initiative (OSI) has taken early steps by releasing a first draft definition of open-source AI systems, which states that, to be open-source, an AI system must make its components available under licences that individually enable the user to study, use, modify, and share the AI system. However, there is still some way to go before we have a broadly accepted definition. In the meantime, the “open-source AI” brand will continue to be used inconsistently and enthusiastically by companies of all shapes and sizes. Such is the extent of its marketing and lobbying value.

Girish is a Patent Scientist at EIP with an academic background spanning applied mathematics, computational physics, and engineering. Prior to joining EIP, Girish was a research fellow in applied mathematics at the University of Leeds and a research associate in aerospace engineering at the University of Cambridge.
