AI Commons (AIC) Framework
The agrarian and industrial revolutions each transformed civilisation by reorganising what humans and their tools do - but in every case, humans remained the coordinating intelligence that directed the system. The integration of artificial intelligence is different in kind: for the first time, a new autonomous agent is entering the picture, capable of replacing human coordination at a scale and pace without precedent. Because AI systems can replicate at near-zero marginal cost, the terms on which they are adopted lock in rapidly through self-reinforcing dynamics - and whether these dynamics will concentrate gains or distribute prosperity depends on the conditions under which AI enters the economic game. So does whether the costs - environmental and social - are accounted for at each step or allowed to accumulate out of sight.
Shaping those conditions is the central design challenge of our time, and the purpose of the AI Commons Framework.
The Interaction Layer
In any complex adaptive system, the emergent properties of the whole follow from the rules of interaction between agents, not from the characteristics of any agent taken alone. The same principle applies to the AI transition. The conditions under which AI affects society are shaped at several layers: what a system is designed to deliver, how it works, and what regulation permits and prohibits. All these matter. But from a systems perspective, the highest-leverage layer may be this: the operational arrangements through which AI capabilities are put to work and exchanged between actors. A well-intended, well-designed system, operating within the law, can still produce harmful systemic outcomes depending on the terms governing its use, on whether anyone can see how it operates, on who captures the gains, and on who counts as a stakeholder. The architecture of the deal matters as much as the architecture of the model.
The existing legal and economic instrumentarium at this layer - the logic of value capture, profit maximisation, and cost externalisation that structures commercial relationships - is already straining. Concentration of gains, opacity of operations, ecological costs pushed downstream: these are familiar perils of the pre-AI economy. AI need not introduce any new perils: by running the existing ones through systems that replicate at near-zero marginal cost and accelerate without human friction, it turns a toolset that has been producing problematic outcomes at human pace into one that produces catastrophic outcomes at machine pace. New interaction patterns at this layer are structurally necessary.
Repeatable Patterns, Systemic Effects
In practice, these interaction patterns live in contracts - API terms, procurement clauses, platform conditions, arrangements both standardised and bespoke. Whether by default or by design, each one settles governance questions that no current regulatory framework reaches: how gains and environmental costs distribute through the value chain, what operational evidence is disclosed, who holds standing to contest the terms. Most pass without public sight; the few that attract attention confirm what the rest configure quietly.
A contract, however carefully drafted, remains a private arrangement between two parties. It produces no shared vocabulary and creates nothing others can build on. System dynamics shift when choices become repeatable - when a recognisable pattern emerges that actors adopt because it works for them, and that third parties can read and compare - or demand.
Creative Commons is the clearest precedent. A small family of standardised terms for sharing knowledge and culture, required by no government, adopted by scholars and creators because it made certain choices easy and legible. Individual decisions, over time, aggregated into something that had not existed before: a commons of openly accessible resources. Open-source licensing achieved the same for code, and now increasingly for AI model weights. Neither spread by mandate; both reshaped entire sectors through accumulation alone, and both remain enforceable through existing legal infrastructure.
The AIC Framework applies a similar logic to the conditions under which AI capabilities are contracted into use. Its vehicle is a licence - not in the narrow intellectual-property sense, but in the functional sense that open-source developers already recognise: a set of terms that attaches to a capability and travels with it through the value chain. When a developer chooses an open-source licence, they set conditions on how their code circulates; across thousands of projects, those individual choices built an ecosystem. The AIC licence does the same for AI capabilities - where Creative Commons and open source attach conditions to intellectual property, the AIC Framework attaches conditions to the deployment and use of AI capabilities.
The Framework
The framework is organised around six governance profiles, each independently optional. An adopter selects the profiles relevant to its context, composing a recognisable governance configuration:
Sustainability - A commitment that the environmental impact of AI operations is neutral at minimum, and where feasible, regenerative. The adopter sets defined targets for energy consumption, emissions, and ecological footprint, measured and reported against standardised benchmarks. Accountability sharpens when this data becomes comparable across providers, making environmental performance a visible and legible factor in how AI capabilities are chosen.
Value - A commitment to directing a share of the gains from AI operations toward the communities bearing their socio-economic consequences. The adopter allocates a defined share of gains to specified groups of beneficiaries through a commons fund with transparent governance. Adopted across many cases, these commitments aggregate into a distributed (many-to-many) mechanism for addressing the socio-economic effects of AI from within economic processes, rather than through tax-based redistribution after the fact.
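The allocation mechanics of such a commitment can be sketched in a few lines. This is an illustrative assumption only - the percentages, beneficiary groups, and function name are invented for the example, not part of any AIC clause set:

```python
def allocate_commons_fund(gains: float, share: float, weights: dict) -> dict:
    """Split a defined share of gains across beneficiary groups in
    proportion to their weights; the remainder stays with the adopter."""
    fund = gains * share
    total = sum(weights.values())
    return {group: fund * w / total for group, w in weights.items()}

# Hypothetical adopter commits 5% of gains; two affected communities split it 2:1.
alloc = allocate_commons_fund(gains=1_000_000, share=0.05,
                              weights={"region_a": 2, "region_b": 1})
# Together the groups receive 50,000 (5% of gains), split 2:1.
```

The point of the sketch is only that "a defined share of gains to specified groups" is a fully determinate rule once the parameters are declared, which is what makes such commitments comparable across adopters.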
Access - A commitment to open access to the full capability, without discrimination by user type or ability to pay. The adopter commits that the service is freely accessible to all users, with volume as the only adjustable constraint. As more providers adopt comparable access commitments, a floor of universal availability emerges across the ecosystem - not mandated centrally, but accumulating through decentralised decisions.
Reciprocity - A commitment to recognising and returning value to those whose resources made a system possible: the datasets, labour, and knowledge it was built on. The adopter registers these upstream contributions and allocates a share of revenue to a contributor fund, addressing the persistent pattern in which value flows downstream while the inputs sustaining it are treated as free. Across many adopters, these registries make visible the actual dependency structure of AI value chains - who contributed what, and whether the return was proportionate.
Openness - A commitment that the system's full operational logic - its code, decision architecture, and the principles governing its behaviour - is legible and available for scrutiny. This is not a predefined disclosure checklist; it is an open-ended obligation that the system will demonstrate responsible handling of whatever societal concerns arise, including those not yet articulated. An AIC-O system cannot run undisclosed operations in the background, even where existing law would permit it: if it cannot be shown, it cannot be run. Where specific operational detail must remain confidential for defined reasons, the restriction itself and its justification are public. Across many adopters, these disclosures accumulate into a shared, comparable record of how AI systems operate in practice - a commons of operational evidence that civil society, regulators, and other stakeholders can draw on independently of any single provider's willingness to cooperate.
Governance - A commitment that all groups materially affected by an AI system are represented in decisions that alter how it affects them. The adopter defines the stakeholder composition, selection method, and the categories of decision that activate the governance mechanism - including scope expansion to new populations or use cases, transfer of control to a new operator, significant changes to the system's capabilities, modifications to access or data practices, and discontinuation. When governance profiles are declared publicly, a norm of participatory oversight develops across sectors - visible, comparable, and increasingly expected.
These profiles are modular: an adopter may take up any combination. Each commitment is voluntary, but standardised and visible. Once adopted, the commitment may extend into a requirement on others in the value chain: upstream suppliers or downstream operators. A foundation model provider, for instance, might commit to sharing automation gains, and require that every downstream operator who builds on their system does the same - so that the obligation travels through the chain regardless of how many times the capability is repackaged. These are not regulatory requirements imposed from above; they are requirements that peers set for each other through ordinary contracts, as a condition of doing business together.
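The composition and pass-through logic described above can be sketched as a small data model. Everything here is a hypothetical illustration - the class names, fields, and propagation rule are assumptions made for the example, not a specification of the framework:

```python
from dataclasses import dataclass, field

@dataclass
class Adopter:
    """An actor in the value chain, with its own profile commitments,
    the profiles it requires of downstream actors, and its upstream providers."""
    name: str
    adopted: set = field(default_factory=set)        # profiles this actor commits to
    passes_through: set = field(default_factory=set) # profiles it requires downstream
    upstream: list = field(default_factory=list)     # providers it builds on

def inherited_requirements(actor: Adopter) -> set:
    """Pass-through requirements travel through the chain regardless of
    how many times the capability is repackaged."""
    req = set()
    for provider in actor.upstream:
        req |= provider.passes_through | inherited_requirements(provider)
    return req

def effective_obligations(actor: Adopter) -> set:
    """An actor's own commitments plus everything inherited from upstream."""
    return actor.adopted | inherited_requirements(actor)

# A foundation model provider commits to Value and Openness, and requires
# Value of everyone who builds on its system.
foundation = Adopter("foundation", adopted={"Value", "Openness"},
                     passes_through={"Value"})
# An operator repackaging the model inherits Value regardless of its own choices.
operator = Adopter("operator", adopted={"Access"}, upstream=[foundation])

print(sorted(effective_obligations(operator)))  # ['Access', 'Value']
```

A second-tier actor building on `operator` would still inherit `Value`, which is the "travels through the chain" property the text describes: the obligation is carried by the contractual relationship, not by any central enforcer.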
Enforcement operates through two complementary layers: contractual commitments that are bilateral and litigable under ordinary contract law, and declaratory commitments - public visibility of declared profiles - that create reputational accountability. The combination works within existing legal systems and requires no new legislation.
AI Commons (AIC) International Working Group
The AIC Working Group is currently hosted at CLEA (Center Leo Apostel for Interdisciplinary Studies), Vrije Universiteit Brussel, and is seeking registration under the United Nations' AI for Good Impact Initiative, or with an equivalent international organisation. The group's purpose is to bring the framework from draft stage to a working instrument, combining interdisciplinary governance research with implementation in real AI ecosystems.
The work is organised along three lines:
- Profile architecture review. The overall structure of the six profiles - their boundaries, interoperability, and the degree of standardisation they should carry. The objective is to ensure that the system is coherent and implementable across diverse contexts.
- Profile-specific expert groups. Each group reviews and finalises one profile's standard clause set, bringing it to a first stable version: refining scope and definitions, specifying baseline clauses and adjustable parameters, and maintaining a clear rationale for design choices.
- Framework stewardship. Ongoing governance of the framework's components - clause sets, profile definitions, governance templates, versioning, and the processes by which they evolve. The working group serves as the custodian of the framework as a whole, ensuring that changes are coherent, transparent, and aligned with the framework's governance intent.
The framework is a licensing and contracting instrument, not a platform or blockchain-based system. It consists of standardised clause sets, profile definitions, and governance templates - designed to be adopted through existing legal and contractual channels. Technical tooling may emerge around it where useful, but no technical layer is constitutive of the framework's operation or centrally controlled.
Members
- Dr. Em Lenartowicz (Chair)
- Andrzej Ryś
- Dr. Cristian Axenie
- Dr. Fatima Roumate
- Gen Zendahl
- Dr. Joana Santos
- Michel Bauwens
- Dr. Mihaela Ulieru
- Dr. Scott L. David
- Steve Coy
- Dr. Vasilis Kostakis
- Dr. Weaver D.R. Weinbaum
- Dr. Young Yoon
References
- Bauwens, M., Niaros, V. (2017). Value in the Commons Economy: Developments in Open and Contributory Value Accounting. P2P Foundation. Berlin: Heinrich Böll Foundation. [Link]
- Kostakis, V., Tympas, A. (2025). AI as commons: Why we need community-controlled Artificial Intelligence. Internet Policy Review. [Link]
- Lenartowicz, E.M. (2025). Impact-Oriented Licensing for Artificial Intelligence: A Conceptual Framework for a New Domain of AI Governance. SSRN id=5794362
- Lenartowicz, E.M. (2025). Shaping AI Impacts Through Licensing: Illustrative Scenarios for the Design Space. SSRN id=5835702
- Lenartowicz, E.M. (2025). AI Commons (AIC) Licence Suite: A Modular Framework for Impact-Oriented AI Governance. SSRN id=5848523
CLEA Seminar
Presentation of the AIC Framework to the CLEA research team on 19.12.2025 (Abstract). Several aspects of the framework formulation have evolved since then.