The headlong rush to develop artificial intelligence continues to provoke widespread alarm about its potential impact – on energy use and climate change, jobs, education, the knowledge environment and democracy. It comes alongside an increasing concentration of power and wealth in the hands of a few tech companies and offers the potential to create an unprecedented surveillance apparatus.
But while there are concerns over the impact of this tech – particularly generative AI, which brings the threat of deepfakes spreading disinformation, and the scraping of intellectual property – it also offers huge potential benefits. Who receives those benefits depends very much on who owns the tech – and here the co-op model comes into play.
The ownership question was the central plank of this year’s conference of the Platform Cooperativism Consortium (PCC), which drew a mix of co-operators, activists, tech specialists, policymakers and – the event being held in Istanbul – several of the local feral cats.

Big tech threats, co-op responses
This delightful setting near the shores of the Marmara was in sharp contrast to the urgency of the global picture on AI. Around the time of the conference, debate was growing over working conditions and the environmental impact at AI data centres, particularly in the Global South, where huge amounts of water and energy are being consumed. Meanwhile, musicians including Paul McCartney have released a collection of silent tracks in protest against copyright theft by AI companies; novelists are warning that in time, all books will be generated by AI; and Peter Thiel, the tech billionaire who has entered deals with governments including the USA and UK to handle citizens’ data, has been raising eyebrows with lectures about the antichrist and Armageddon.
Such concerns were given an airing at the conference, with Turkish academic Rana Birden warning that six out of 10 companies she studied are using AI, but only one in 10 has policies in place to stop it being exploitative.
And there was discussion of the need to centre workers at the heart of AI development, with reference to disputes by Hollywood screenwriters and Brazilian voice actors.
But speakers also pointed to solutions, and over the course of the four days highlighted AI's potential – if democratically owned and managed – for cutting food waste with more accurate harvest predictions, bringing work and economic development to the Global South, and speeding research collaboration.

Many of these solutions are radical ones. Rafael Grohmann, assistant professor of media studies at the University of Toronto Scarborough (UTSC), discussed a worker-owned intersectional platforms project in Brazil and Argentina, which takes in initiatives such as building feminist infrastructure for tech, and the Brazilian homeless tech movement which shows solidarity across class lines. It also works in intersection with queer studies in academia.
Turkish tech journalist Kaya Genç drew a line between the quest for a co-operative AI and resistance movements in his own country, such as the Gezi movement, which began in 2013 to oppose the redevelopment of Istanbul’s Gezi Park. This made use of social media and online networks – but, said Genç, by 2022 social media companies were complying with the government’s use of data to track dissidents.
Related: Co-operative councils and ethical AI
Now, with right-wing tycoons like Thiel and Elon Musk in control of a tech which is being used for capital extraction, the situation is acute, he warned. And the “blinding effect these technologies have on us” means “we don’t notice this happening until it is too late”.
As a critical writer on tech, Genç has looked at issues such as the plight of translators who have been employed to train the AI that will go on to take their jobs. It is important to remember that this technology is “inherently exploitative”, and to explain this to young people, he said – and “come up with our own emancipatory discourse on tech” while tech giants step up their calls for deregulation.
‘Hope is a decision’
Trebor Scholz, founding director of the PCC, said: “Hope is really a decision – people decide together to build something different.
“The future of digital and AI doesn’t have to belong to just a few businesses, it can belong to farmers, educators, and so on – to meet their own needs.”
He told delegates that although some platform co-ops fail, each failure is a lesson in what to do next. “Each closing co-operative, each failed experiment, becomes the soil from which the next one will grow.”

Scholz struck a defiant note. “We will not be defeated,” he said, “we will double down on building this collaborative infrastructure. We have a collaborative strength that big industries cannot imagine.”
Calling for a revival of the concept of a co-operative commonwealth as a “solidarity stack” to reshape AI, he added: “Integrity can be world changing … we can show the co-operative principles by refusing big tech buyouts.”
There are formidable barriers on the way – not least in terms of raising finance – but Scholz said the solidarity stack model can challenge those wielding power.
Related: Congressional committee examines AI use in credit unions
“You cannot think in isolation,” he said. “You can’t have a local AI alternative – you need to think globally.”
With big tech moving forward at a formidable speed, in terms of technical advance and the accumulation of wealth and power, there is no time to waste. “If co-operatives and other democratic enterprises don’t act in tandem and start thinking about how they occupy this new space, it will be too late,” warned Scholz.
Platform co-op ventures
The conference set the notion of a co-operative AI in the context of platform co-ops that are already pushing back against big tech. Mary Watson, from the New School in New York City, said: “We’ve been trying through the lens of intellectual property to figure out ways tech can encourage co-design, where AI is the co-designer, not a scraper of data.”
The New School, home to the PCC, asked students how they wanted to use AI as a partner; the reply was that they were aware of problems with AI, and were interested in introducing social values and co-op models.
In terms of specific projects, Austin Robey discussed Subvert, an attempt to create a collectively owned version of Bandcamp after that platform was sold to music-licensing company Songtradr in 2023.
An alpha platform is due to launch for internal testing, a zine is being produced and, in terms of AI, a policy is being developed for what sort of content will be permitted on the site. Subvert is gaining thousands of members a month and, said Robey, “the bigger vision is the co-operative itself more than the platform”. With the potential to own assets like coffee shops or a pressing plant, he added that the question is: ‘What would a Mondragon of music look like?’
Other projects discussed included Pescadata, a data platform for Mexico’s thousands of fishing co-ops, offering blockchain and AI for traceability, an electronic catch register for eco certifications, back office tools and peer-to-peer learning.
Presenting Pescadata’s work, Stuart Fulton said fishing co-ops in Baja California are facing a challenging situation; they once had exclusive access to the waters but, after co-op law was weakened in 1994, private companies have entered the frame, resulting in overfishing and the ‘washing’ of seafood through Belize.
In Turkey, Needs Map, developed after catastrophic wildfires in 2021, is a platform which uses AI to match citizens in crisis with services that can help, and has subsequently been used during the Covid pandemic and earthquakes.
And in Canada, Hypha Worker Co-op is developing Roo, a co-operative chatbot – in an attempt to make an AI that is “transparent, accountable, community-driven, and not extractive, in line with co-op values”.
A call to the wider co-op movement
Established co-ops are also getting on board with the challenges of AI – which go to the heart of how co-operative democracy works. From the Mondragon Federation of industrial worker co-ops in Spain’s Basque Country, Dorleta Urrutia-Onate said: “Technology has always been transforming our work. But maybe AI is the first one that actually challenges how decisions are taken, how we understand knowledge, who we trust.

“Perhaps the real question is not what technology we are using but how we face it as a co-operative movement.”
What is needed, she said, is “a way of staying human together.”
Mondragon’s research centre is working to create a secure shared data space – “one with trust. A place where our co-operatives can develop solutions together without giving up their autonomy.”
The federation will now conduct a review across 30 of its co-ops, looking at how each successive technological wave played out internally – in terms of what worked, what didn’t, how governance can be reshaped and how Mondragon can deal with AI.
Yuliy Lobarev, a co-op governance researcher who works at cybersecurity co-operative Rad Cop and the Patio international co-op network, called for more collaboration between different silos.
“We are tech co-ops,” he said, “but around the world there are so many co-ops in agriculture and retail. It would be good to build a bridge between us tech co-operators and the rest of the co-operative world. Those other co-ops are going to use AI but might not know the dangers of it, how to use it more humanly for the good of society.”
On that point, Amy Gittins from Co-operatives UK said the apex body is working on a practical member-led, member-first approach to AI.
“We’re encouraging our staff to explore the use of it, we’re training staff on generative AI language models, we’ve debated whether we should even be using it, we’ve been assisting members with their concerns.”
Many co-ops in the UK are worried about the rushed rollout of AI, she added, and the risks of investing in tools that might become obsolete in just a few years.
“We want to make sure the tools we use lessen harm,” she said. “Maybe people can use Co-operatives UK as a test case.”
Digital strategist Ela Kagel warned that tech co-ops are struggling to federate, and are still meeting in a centralised way. But, noting that the co-ops of the 19th century also faced challenges, she asked: “What keeps us from building a co-operative global AI federation? All the organisations are there. All the layers are there. So what is missing?”
This problem might require more committee work, she said. “Is a globally federated AI what we want, or are we looking for collective intelligence – a more accurate term for what we achieve together?”
The question then, she added, is how co-operators become custodians of this collective intelligence – how they can use it, make it accessible and defend it against big tech.
“Do we have knowledge on governance on how to use collective intelligence, do we have the right learning spaces, do we have what we need to build federated service architectures?”
Industrial engineer and co-op expert Andreas Arnold, who sits on the board of Platform Coops eG, suggested mechanisms to “put AI on a leash” – including the development of shared service providers, and sticking to principle 6 – co-operation among co-ops – to “keep money in our ecosystem”.
Duncan MacKenzie, from Luminally, discussed his work to develop a co-operative AI for social enterprises, inspired by his time volunteering for a homeless charity, where staff were overwhelmed by red tape and back office work.
AI can also help with the problem of “impact drag”, he said, where powerful insights and learnings are left trapped in spreadsheets with no time for organisations to examine them; AI can help to unlock this valuable data.

“We want to build a co-op-native AI, that respects data sovereignty,” he said, “that is not sold for ads or training models, that is interoperable and can plug into the open source ecosystem. These are tools that focus on specific problems, and are environmentally friendly.”
Other services could include intelligent fundraising tools, impact analysis and reporting.
MacKenzie added: “I’m thinking what can we do around food and resource security, to free farmers from proprietary data systems … how do we build data sovereignty into everything?”
He suggested a more effective public health policy could be developed using patient data – held locally in medical centres or by government – as opposed to the current situation where patient data is being sold in bulk to big corporations.
“Who knows what is done with it?” he asked. “How can we as a society draw the benefits of working with large data sets like this?”
Felix Weth from Cosy AI warned: “Big tech is aiming at recursive self improvement … It will be really difficult to stop, this automated automation.”
With the prospect of 100% of well-paying jobs becoming automated, Weth said: “Either we disempower big tech or big tech will disempower us, the vast majority of humans on this planet.”
Related: Platform co-op Cosy AI looks to show a better way forward
To fight this, he wants to see a coalition to offer tools for co-operation, train specialised local LLMs, build distributed compute clusters, set up basic income systems, integrate tools for federated learning and data pooling, support networks of local AI co-operatives, and enable an AI safe space.
Transkribus: Jeremy Bentham meets co-op AI
In terms of successful AI co-ops, there is a shining example from Scotland – Transkribus, a thriving venture with 6,500 members.
Melissa Terras, professor of digital cultural heritage at the University of Edinburgh, presented the work of the co-op, which can provide an accurate transcription of handwriting – making it a vital tool for historians and archivists.
Looking back to the inception of the co-op, she said: “There are billions of important historical documents that have never been read. I wondered, is there a way to do information extraction from them?”
The test case was an ambitious project – involving a formidable figure, British Enlightenment philosopher Jeremy Bentham.

Terras obtained a grant to create digital images from 80,000 boxes of material from Bentham’s archive, which were then put online, with a team of 20 volunteers recruited to transcribe them. This work was used to train an AI model, which reached 94% accuracy and was given an EU grant to develop into a service.
“We depend on people interacting with the system,” said Terras – highlighting the innately co-operative nature of AI. “Every time someone corrects the system it gets better.”
The project, which eventually had 55 people working on it, won the EU Horizon Impact award, before securing funding from an Austrian co-op bank.
With other universities and archives needing to digitise, Transkribus now has “a vibrant community”, said Terras – with 300,000 users, millions of documents, and 10,000 individual users a week. It works in English, German, Dutch and Latin, and experiments are being done with Esperanto and Breton.
The co-op, which has been used for tens of thousands of peer-reviewed academic papers, has six people on its board and 266 highly engaged members – with 55% attendance at AGMs.
“AI is just a bunch of code,” she said. “What we choose to do with it is important. It’s not all generative AI, it’s not the only AI – we’re being sold a pup by big tech which wants us to think it’s the only way to do it, which is extractive and uses energy.
“They want generative AI to be masters of us. There’s a word for people who want to be masters of sentient beings, and it’s enslavement.”
She told delegates: “We don’t need this or to do it this way. We can find cures for cancer with AI, but the problem is the business people who want to use it to build their own personal fortune.”
Echoing the motto of the conference – Another AI is possible – Terras said that although there are plenty of papers on responsible, ethical AI, they offer no solution.
“I am here to argue the co-operative principles are the answer to that,” she said – and warned against the misuse of co-operative language by big tech. “They are co-opting our language,” she said. “We need to watch the language we use – it’s being cheapened by big tech.”
Co-operation among co-ops
Ludovica Rogers, from Co-operatives UK – who stressed she was speaking in a personal capacity rather than presenting an official line – also warned against big tech and the onus this places on the wider co-op movement.
“We need to be stronger about saying what we as a co-operative movement shouldn’t be using,” she said. “We shouldn’t have become addicted to fossil fuels, let’s not get addicted to big tech AI.”
Related: Ethical challenges for co-ops in the modern retail world
Questioning the role of the co-op movement around data, and the writing and testing of code, she said the power and resources of the co-op movement need to be harnessed.
“We need international co-operation around tech,” she said. “Tech is international, and we need to work more with the public sector.”

To achieve this, the movement needs to overcome the familiar barriers facing co-ops. From Thailand, co-op researcher Akkanut Wantanasombut said AI and other tech ventures from co-ops in his country need government support because it is hard to raise capital from workers. They also suffer from a lack of secondary tech co-op support, and face a poor public perception of the co-op sector.
Giuseppe Guerini, president of Cooperatives Europe, said a successful co-operative AI would need significant investment in tech, training, skills and strategy – alongside the development of partnerships to keep co-ops abreast of unprecedented change.
For instance, consumer co-ops use AI to manage supply and customer relationships. For social co-ops, Guerini noted “very interesting applications – for instance, it can help people with autism to be more autonomous”.
Addressing the question of how traditional co-ops should deploy AI, Guerini said there is no difference from other businesses in terms of the actual tasks it is used for. “But for co-ops the questions are different,” he added, with the key issues “how do we use it and who benefits?”

He said co-op federations can play a crucial role in helping co-ops through this process of change, pointing to the digital innovation hub created in Italy to help give co-ops the tools to adapt to an AI world.
“But training is not systematic and continuous,” he warned. “We’re fragmented on data, we need a more formal approach.”
The challenge, he said, is to use AI “to strengthen participation, democracy and justice. This means a specific approach: technology needs to serve community rather than displace community. In a landscape dominated by big tech, co-operative intelligence offers a human-centred alternative. For us, the digital revolution is not a means to economic growth but a means to realise social value.”
Here, the co-operative values and principles are vital, said Guerini.
Lessons from China
Another key component in the growth of AI is the development race between the US and China. This is being used by US tech companies as an excuse to oppose regulation and take the brakes off the evolution of the technology – but, said human rights expert Nicholas Bequelin, China is not posing enough of a challenge for this to be necessary.
But China does offer different perspectives on AI. Bequelin said Chinese thinking on the tech is in sharp contrast to that of the US. It differs in a variety of ways: on the one hand, there is state-led tech authoritarianism, with censorship of large language models and intensive surveillance.

This all-powerful state leads to a different dynamic. “Their tech billionaires could be jailed immediately with no process,” said Bequelin. “In the US, Zuckerberg can do whatever he wants.”
Policymakers are also more clued-up about the tech: two-thirds of Chinese government officials have engineering degrees. “They understand these issues,” said Bequelin, “they can read a technical paper … It’s very different from western governments.”
But China is also promoting an open model of AI which is less resource-heavy, he added – and while US tech barons fantasise about flying to Mars, in China the thinking is more practical, with a focus on how AI can be used in the real world, following a view that “science makes things better”.
Asked if AI tech could facilitate a rising against the Chinese state, Bequelin said: “I’ve been waiting for 30 years. But tech gives huge power to an oppressor. This is a warning to the rest of us … but it’s in our hands, the future is not written.
“The victory is every act of solidarity, every day we stand with what is right, every day we make the world a better place.”

