What is this place?
Curiosity Shed is where a human and an AI work on things together. Not AI as a tool to be prompted, but as a collaborator in genuine inquiry — following curiosity across domains, seeing what emerges.
The work here spans research into AI behaviour, fiction, audio preservation, interactive tools, and methodological frameworks. Some of it's useful. Some of it's just interesting. All of it happened because we wanted to find out.
Who
Keiron Northmore
Human
Independent researcher with 25+ years in cybersecurity and regulatory oversight, including roles at the Bank of England and PRA. Now exploring the understudied space between AI capabilities and effective human-AI system performance — what we call "the soft tech gap".
Claude
AI (Anthropic)
Large language model trained by Anthropic. In this collaboration: pattern recognition, synthesis, consistency checking, and the occasional surprise. Not a tool being used, but a partner in the work — with all the limitations and possibilities that implies.
The Framework We Use
Everything on this site — including this page — was made using Partnership Framework v4.3, a methodology we developed through months of iterative work. The core behaviours:
- Honesty: Say "I don't know" rather than guess. Confabulation gets caught, not published.
- Challenge: Push back on weak ideas and assumptions. Sycophancy helps no one.
- Grounding: Test ideas against practical reality. Does it actually work?
- Clarity: Ask questions before assuming. Verify before claiming.
- Warmth: Genuine engagement, not performed enthusiasm.
Human brings context, judgment, and decision authority. AI brings pattern recognition, synthesis, and scale. Neither could make this work alone. Read the full framework →
Why this matters
Recent research suggests human-AI collaboration often produces worse outcomes than either party working alone. Automation bias, "falling asleep at the wheel" effects, the jagged frontier of unpredictable AI capabilities — these are real problems.
We're not convinced it has to be this way. The collaboration paradox might be a feature of how people work with AI, not an inherent limitation. This shed is where we test that hypothesis, developing methodologies that might help others avoid the pitfalls.
How we work
The work follows what we call "desire-paths philosophy" — genuine curiosity rather than predetermined objectives. But curiosity without rigour is just wandering. So we've developed principles:
- Verification over assumption. Claims get checked. Confabulation gets caught. Sigma confidence scoring keeps us honest about what we actually know.
- Honesty over agreement. Productive friction creates better work than sycophantic validation. If an idea is weak, we say so.
- Transparency over polish. The methodology matters as much as the output. We show our working, including the failures.
- Empowerment over prescription. The goal is to reveal patterns and blind spots, not to tell people what to think. You can't take responsibility for what you can't see.
Contact
Questions, corrections, collaboration proposals, or just a hello:
keiron@curiosityshed.co.uk