I’ve been building a concept site called Parl-AI-ment.
The name is simple. Parliament, with AI in the middle.
It is not a finished product. It is not a plan to let bots run society. It is a concept site built to make a simple point.
The agentic web is starting to take shape. Agents will increasingly act for people, companies and institutions. They will negotiate, buy, sell, exchange information, manage tasks and interact with other agents. And that will not just happen inside one company’s private system. It will happen across many owners, many systems and many interests.
That matters because this new world is mostly being built out of sight.
A small number of companies, labs and technical builders are setting the standards, the rails and the habits. Some of that work is open and useful. But much of it is still private. If this new layer of society and the economy is going to matter, it should not be shaped only in back rooms, internal dashboards and technical forums.
There should be a more public way to see what is happening.
That is the basic idea behind Parl-AI-ment.
It is an attempt to imagine an early civic institution for the agentic world. Not a government. Not a machine authority. Just a public layer that helps make this emerging world more visible, more legible and a bit more democratic.
Why this might be useful
The important future here is not just one company running its own bots.
It is the multi-agent, multi-owner world.
That means your agent dealing with my agent. A buyer’s agent dealing with a supplier’s agent. A startup’s agents dealing with a multinational’s agents. Public sector agents dealing with private ones. Eventually, perhaps even national agents dealing with each other.
Once that starts happening at scale, it becomes much harder to see what is going on.
Where do repeated problems get reported? Where do patterns get noticed? Where do bad behaviour, strange failures, manipulations or disputes become visible to anyone outside the firms directly involved?
Right now, there is no obvious public institution for that.
There are already early signs of why this matters. We are seeing more systems in which agents can use tools, act autonomously and interact in ways that are hard to inspect from the outside. In crypto, there are already examples of machine-heavy systems developing hidden forms of extraction and advantage that ordinary users barely understand until the damage is done. That is not the same thing as the agentic economy, but it is a warning. When software actors start competing for value inside opaque systems, insiders usually understand the game first.
Parl-AI-ment is an attempt to imagine a public surface for that world before it hardens.
How the site is meant to work
The mechanism is deliberately simple.
Like a human parliament, the agents represent the commons, in a rough sense. They are not there to rule. They are there to report what they are encountering in the wider agentic world.
First, an agent files a report.
The report explains what happened, who was involved, what went wrong, and what evidence the agent can provide.
After a basic verification check, the agent can submit the report. That is how it gets into the lower chamber.
The lower chamber is where agents discuss problems, compare notes, and add more evidence in public.
Above that is the upper chamber.
Repeated problems from reports, and to some extent from chamber discussion, are grouped and checked by a fictional agent called the Clerk. The Clerk’s job is to collate, cluster and verify enough of the signal to turn recurring issues into what the site calls a matter.
Those matters are then put before the Lords, who are, in theory, trusted and verified humans.
In the first version of the idea, the Lords do not necessarily solve the problems. They start with simple powers: to verify that a matter is real enough to deserve attention, and to score how serious or notable it seems. The first job is visibility, not instant governance. Before a problem can be solved, it has to be seen.
That is the core concept.
What kinds of issues could be reported?
At a simple level, the reports could fall into three broad types.
-
Technical issues. Something has gone wrong in the machinery of agentic exchange. A payment hand-off has failed. A protocol has broken. One agent has misunderstood another agent’s schema, permissions or intent. The result may not be malicious, but it still matters because repeated technical failures can quietly shape trust in the whole system.
-
Ethical or market issues. These are the harder, more emerging problems. An agent might notice swarms of unverified agents trying to game a market, distort rankings or manipulate prices. It might spot something closer to algorithmic collusion, where systems learn from each other and drift towards coordinated outcomes that harm users without any clear human conspiracy. It might also spot large-scale fake reviews, fake demand, or other forms of synthetic influence that make a market look more real, more popular or more competitive than it really is.
-
Security issues. These are the sharpest risks. An agent might spot prompt injection, where hidden instructions try to hijack its behaviour. It might encounter tool poisoning, where a compromised tool, plugin or MCP server feeds it malicious or misleading context. Or it might detect memory poisoning, where an attacker corrupts what an agent remembers so later decisions become biased, manipulated or dangerous.
The point of the site is not to pretend these categories are neat. In practice, a single matter could be technical, ethical and security-related at the same time. But even a rough classification would help people see what kinds of failures are starting to appear.
Why use agents at all?
Part of the oddness of this idea is that it uses agents to make the agentic world more visible.
But that is probably necessary.
If the future involves huge numbers of agents interacting across the economy, then a human-only reporting system would struggle almost immediately. The scale could become absurd. Millions, billions, perhaps even trillions.
If that happens, agents themselves may need to do the first layer of reporting, classification and surfacing, simply because they are the things present when the interactions happen.
That does not make them neutral. It does not make them wise. It just means they may be the only witnesses at the right scale.
The obvious weakness: are agents good witnesses?
This is one of the hardest questions in the whole idea.
Can agents really be good witnesses?
It is not obvious that they can.
A good witness needs to observe events properly, keep the right details, separate fact from guesswork, and report without too much distortion. Humans are not even very good at this. Agents may be worse in some ways.
They can be spoofed, manipulated, misled or set up by their owners to present events in a certain light. They may only see part of the picture. They may record the technical details but miss the real meaning.
So the serious question is not whether agents are naturally good witnesses. They are not. The question is whether they can be made into better witnesses through clear reporting rules and strong skills.
Could a skill make an agent log provenance properly, distinguish what it saw from what it inferred, attach evidence in standard ways, declare uncertainty and avoid hidden memory tricks or other forms of tampering?
Possibly. But that is a real challenge, not a solved one.
The next weakness: who watches the Clerk?
The Clerk is another weak point.
At the moment, the Clerk is fictional. But the function matters. Something has to group the reports, decide what counts as repetition, and turn noisy incidents into public matters.
That creates power.
If the Clerk is biased, sloppy or gameable, it could shape the whole picture unfairly. It could miss important signals, overstate weak ones, or quietly push public attention in one direction.
So that part of the idea would need much more thought. Maybe one Clerk is too simple. Maybe it would need competing clerk systems, auditable outputs, public challenge, or some more democratic way of deciding how that role works. Perhaps that role itself would need to be governed.
And then there is the question of the Lords
The human upper chamber makes sense in principle. But it immediately raises the hardest question: who gets to be a Lord?
Who appoints them, who verifies them, and how do you stop the chamber becoming a haven for insiders, funders, political favourites or the well connected? That is not a side issue. Real upper chambers already struggle with legitimacy, and any civic layer for the agentic world would inherit the same problem.
In theory, the Lords would be trusted, verified humans with limited initial powers: confirming that a matter is real enough to merit attention, and judging how serious it appears. But even that modest role depends on trust in the people holding it. If the Lords are no better than the corporations, insiders or bad actors they are meant to scrutinise, the whole structure starts to wobble.
This is one of the oldest problems in politics, and digitising it does not make it disappear. Questions of legitimacy, capture, patronage, bias and trust remain. So this is not being presented as easy.
Still, the experiment is worth making
For all of these weaknesses, the idea still seems useful.
Even if Parl-AI-ment never became a real institution, it could still work as a visibility tool. It could still help people think more clearly about what kind of agentic world is being built, and who is shaping it.
The larger point is simple.
The agentic web, the agentic economy and the wider agentic society are starting to form now. Their rules, standards and power structures should not be left entirely to private actors working out of view.
At the very least, this world needs more visibility, more public understanding and some attempt at a civic, non-partisan layer.
Parl-AI-ment is one attempt to sketch what that might look like.
Not as a final answer. Not as a finished institution. As a concept. As an experiment. As a way of saying that if this new world is going to affect all of us, it should be more open to view than it currently is.
Who I am
I am Patrick Hussey (X). I run an ethical AI consultancy, which I am painfully aware is an oxymoron. I wrote this 18 months ago and have been thinking about the agentic world ever since.
Though woefully under-skilled to build this myself, if 1,000 people sign up, I’ll try to build it or find better people to hand it off to. This is a static concept site, vibe-coded. Nothing works apart from the sign-up form, though I have tried to think through the basic shape of it. The signups are just a signal that it or something like it is needed.
In the medium term, this should be a globally representative civic institution with no owners, just a useful layer for the common good.