AI Commons (AIC)
International Working Group
The agrarian and industrial revolutions each transformed civilisation by reorganising what humans and their tools do - but in every case, humans remained the coordinating intelligence that directed the system. The integration of artificial intelligence is different in kind: for the first time, a new autonomous agent is entering the picture, capable of replacing human coordination at a scale and pace without precedent. Because AI systems can replicate at near-zero marginal cost, the terms on which they are adopted lock in rapidly through self-reinforcing dynamics - and whether these dynamics will concentrate gains or distribute prosperity depends on the conditions under which AI enters the economic game. So does whether the costs - environmental and social - are accounted for at each step or allowed to accumulate out of sight.
Shaping those conditions is the central design challenge of our time, and the purpose of the AIC Working Group.
The Interaction Layer
In any complex adaptive system, the emergent properties of the whole are determined less by the characteristics of individual agents than by the rules of interaction between them. The same principle applies to the AI transition. The conditions under which AI affects society are shaped at several layers: what a system is designed to deliver, how it reasons, and what regulation permits and prohibits. All these matter. But from a systems perspective, the highest-leverage layer is the one that receives the least attention: the operational arrangements through which AI capabilities are put to work and exchanged between actors. A well-designed system, operating within the law, can still produce harmful systemic outcomes depending on the terms governing its use, on whether anyone can see how it operates, on who captures the gains, and on who counts as a stakeholder. The architecture of the deal matters as much as the architecture of the model.
The existing legal and economic instrumentarium at this layer - the logic of value capture, profit maximisation, and cost externalisation that structures commercial relationships - is already straining. Concentration of gains, opacity of operations, ecological costs pushed downstream: these are familiar perils of the pre-AI economy. What changes with AI is that these same patterns will now run through systems that replicate at near-zero marginal cost and accelerate without human friction. The toolset that has been producing problematic outcomes at human pace will produce catastrophic ones at machine pace. New interaction patterns at this layer are structurally necessary.
Repeatable Patterns, Systemic Effects
In practice, these interaction patterns live in mundane instruments. API terms, procurement clauses, platform conditions, cloud contracts - their defaults are often treated as standard without much scrutiny, yet they determine what gets measured, what gets rewarded, and what disappears from view. Every time an AI capability is contracted into use, a small piece of the post-transition order gets configured. These arrangements, collectively, are writing the terms of the new society one deal at a time.
But a contract, however carefully drafted, remains a private arrangement between two parties. It produces no shared vocabulary and creates nothing others can build on. System dynamics shift when choices become repeatable - when a recognisable pattern emerges that actors adopt because it works for them, and that third parties can read and compare - or demand.
Creative Commons is the clearest precedent. A small family of standardised terms for sharing knowledge and culture, required by no government, adopted by scholars and creators because it made certain choices easy and legible. Individual decisions, over time, aggregated into something that had not existed before: a commons of openly accessible resources. Open-source licensing achieved the same for code, and now increasingly for AI model weights. Neither spread by mandate; both reshaped entire sectors through accumulation alone, and both remain enforceable through existing legal infrastructure.
The AIC Framework applies a similar logic to the conditions under which AI capabilities are contracted into use. The natural vehicle for standardising such conditions is something close to a licence - a recognisable set of terms that attaches to a capability and travels with it. When a developer chooses an open-source licence, they set conditions on how their code circulates; across thousands of projects, those individual choices built an ecosystem. "Licensing" here is shorthand for that same idea applied to AI: not a narrow legal category, but the familiar mechanism by which conditions become legible and repeatable.
The resemblance to Creative Commons and open source is in the mechanism, not the object. Those frameworks govern intellectual property. The AIC Framework governs the operational terms on which AI capabilities enter value chains.
AI Commons (AIC) Framework
The framework is organised around six governance dimensions, each independently optional. An adopter selects the dimensions relevant to its context, composing a recognisable profile:
Sustainability - A commitment that the environmental impact of AI operations is neutral at minimum and, where feasible, regenerative. The adopter sets defined targets for energy consumption, emissions, and ecological footprint, measured and reported against standardised benchmarks. Accountability sharpens when this data becomes comparable across providers, making environmental performance a visible and legible factor in how AI capabilities are chosen.
Value - A commitment to directing a share of the gains from AI operations toward the communities bearing their socio-economic consequences. The adopter allocates a defined share of revenue to specified beneficiaries through a commons fund with transparent governance. Adopted across many cases, these commitments aggregate into a distributed (many-to-many) mechanism for addressing the socio-economic effects of AI from within economic processes, rather than through redistribution after the fact.
Access - A commitment to open access to the full capability, without discrimination by user type or ability to pay. The adopter commits that the service is freely accessible to all users, with volume as the only adjustable constraint. As more providers adopt comparable access commitments, a floor of universal availability emerges across the ecosystem - not mandated centrally, but accumulating through decentralised decisions.
Reciprocity - A commitment to recognising and returning value to those whose resources made a system possible: the datasets, labour, and knowledge it was built on. The adopter registers these upstream contributions and allocates a share of revenue to a contributor fund, addressing the persistent pattern in which value flows downstream while the inputs sustaining it are treated as free. Across many adopters, these registries make visible the actual dependency structure of AI value chains - who contributed what, and whether the return was proportionate.
Openness - A commitment to making evidence of how a system behaves in operation - compliance data, risk disclosure, incident reporting - available in standardised formats. Useful for any single adopter; powerful when comparable across many, enabling informed choices and public oversight.
Governance - A commitment that all groups affected by an AI system are represented in decision-making about it - and in particular, the decision to discontinue it. The adopter defines the stakeholder composition and selection method. When governance profiles are declared publicly, a norm of participatory oversight develops across sectors - visible, comparable, and increasingly expected.
These dimensions are modular: an adopter may take up any combination. Each commitment is voluntary, but standardised and visible. Once adopted, the commitment may extend into a requirement on others in the value chain: upstream suppliers or downstream operators. A foundation model provider, for instance, might commit to sharing automation gains, and require that every downstream operator who builds on their system does the same - so that the obligation travels through the chain regardless of how many times the capability is repackaged. These are not regulatory requirements imposed from above; they are requirements that peers set for each other through ordinary contracts, as a condition of doing business together.
Enforcement operates through two complementary layers: contractual commitments that are bilateral and litigable under ordinary contract law, and declaratory commitments - public visibility of declared profiles - that create reputational accountability. The combination works within existing legal systems and requires no new legislation.
AI Commons (AIC) International Working Group
The AIC Working Group is currently hosted at CLEA (Center Leo Apostel for Interdisciplinary Studies), Vrije Universiteit Brussel, and is seeking registration under the ITU AI for Good Impact Initiative or with an equivalent international organisation. The group's purpose is to bring the framework from draft stage to a working instrument, combining interdisciplinary governance research with implementation in real AI ecosystems.
The work is organised along three lines:
- Profile architecture review. The overall structure of the six dimensions - their boundaries, interoperability, and the degree of standardisation they should carry. The objective is to ensure that the system is coherent and implementable across diverse contexts.
- Dimension-specific expert groups. Each group reviews and finalises one dimension's standard clause set, bringing it to a first stable version: refining scope and definitions, specifying baseline clauses and adjustable parameters, and maintaining a clear rationale for design choices.
- Framework stewardship. Ongoing governance of the framework's components - clause sets, profile definitions, governance templates, versioning, and the processes by which they evolve. The working group serves as the custodian of the framework as a whole, ensuring that changes are coherent, transparent, and aligned with the framework's governance intent.
The framework is a licensing and contracting instrument, not a platform or blockchain-based system. It consists of standardised clause sets, profile definitions, and governance templates - designed to be adopted through existing legal and contractual channels. Technical tooling may emerge around it where useful, but no technical layer is constitutive of the framework's operation or centrally controlled.
Members
- Dr. Em Lenartowicz (Coordinator) - em.m.lenartowicz@gmail.com
- Andrzej Ryś
- Gen Zendahl
- Michel Bauwens
- Dr. Mihaela Ulieru
- Steve Coy
- Dr. Vasilis Kostakis
- Dr. Weaver D.R. Weinbaum
- Dr. Young Yoon
References
Lenartowicz, E.M. (2025). Impact-Oriented Licensing for Artificial Intelligence: A Conceptual Framework for a New Domain of AI Governance. SSRN id=5794362
Lenartowicz, E.M. (2025). Shaping AI Impacts Through Licensing: Illustrative Scenarios for the Design Space. SSRN id=5835702
Lenartowicz, E.M. (2025). AI Commons (AIC) Licence Suite: A Modular Framework for Impact-Oriented AI Governance. SSRN id=5848523
CLEA Seminar
Presentation of the AIC Framework to the CLEA research team on 19.12.2025 (Abstract). Several aspects of the framework formulation have evolved since then.