Better disagreement
I am building Manwe as a room for thinking, where AI helps decisions survive evidence, disagreement, memory, and pressure.
It is built around a simple belief.
AI should help us think through decisions.
Most AI products still behave like very capable strangers. You tell them something. They answer. Then the moment ends.
If the context window is large enough, they may remember what happened earlier in the same session. If the product has memory, it may save a few preferences. But the deeper problem stays in place.
The AI does not understand the world it is advising inside.
Manwe started from a different question.
What if AI helped you think through a decision by surrounding it with different kinds of judgment?
Advisors. Skeptics. Auditors. Contrarians. Domain voices. Evidence gatherers. Future-path builders.
That is what Manwe is.
You give it a decision, scenario, question, or strategic uncertainty. Manwe turns that into a structured debate. It detects what kind of decision it is, researches the surrounding context, builds a panel of advisors, runs rounds of argument, pressure-tests assumptions, and produces a report with tradeoffs, predictions, risks, dissenting views, and possible future paths.
The goal is to make your own thinking harder to fool.
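To make that flow concrete, here is a rough sketch in TypeScript. Every name in it is my shorthand for the stages above, an illustration rather than the actual implementation.

```ts
// Illustrative sketch only: simplified names for the pipeline described
// above, not the real code.

type Advisor = {
  role: string; // "advisor", "skeptic", "auditor", "contrarian", ...
  argue: (question: string, context: string, transcript: string[]) => Promise<string>;
};

type Report = {
  tradeoffs: string[];
  predictions: string[];
  risks: string[];
  dissentingViews: string[]; // preserved, not averaged into a consensus
  futurePaths: string[];
};

async function deliberate(
  question: string,
  classify: (q: string) => Promise<string>,           // what kind of decision is this?
  research: (q: string, kind: string) => Promise<string>, // gather surrounding context
  buildPanel: (kind: string) => Advisor[],            // advisors, skeptics, auditors, ...
  compose: (transcript: string[]) => Promise<Report>,
  rounds = 3
): Promise<Report> {
  const kind = await classify(question);
  const context = await research(question, kind);
  const panel = buildPanel(kind);

  const transcript: string[] = [];
  for (let r = 0; r < rounds; r++) {
    // Each round, every voice responds to the question and to what has
    // already been said, so positions get pressure-tested rather than stacked.
    for (const advisor of panel) {
      transcript.push(`${advisor.role}: ${await advisor.argue(question, context, transcript)}`);
    }
  }
  return compose(transcript);
}
```

The loop is the point: disagreement is generated deliberately, and dissent survives into the report instead of being averaged away.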
I am building it because many important decisions fail before anyone notices.
The problem is usually the decision environment.
We confuse motion with progress. We overweight recent emotion. We avoid uncomfortable tradeoffs. We listen to people who agree with us. We ask tools for answers when what we need is structured resistance.
Good judgment takes more than information.
It takes being forced to see what you would rather skip.
That is the role I want Manwe to play.
The more I build it, the more I realize the hard part is continuity.
Generating smart text is easy now. Useful advice is harder. A useful advisor needs to know what world it is advising inside. It needs to understand goals, constraints, history, preferences, current projects, and the difference between a serious decision and a throwaway thought experiment.
A test simulation about a retired businessman belongs inside that simulation. A broken run belongs in a trace log. A product decision belongs in the product world unless I explicitly connect it elsewhere.
Memory should be governed, scoped, inspectable, and reversible.
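In code terms, each of those four properties can map to a field or an operation on the memory record itself. A minimal sketch, with simplified field names rather than any real schema:

```ts
// A minimal sketch of governed memory. Field names are illustrative.

type MemoryEntry = {
  id: string;
  worldId: string;          // scoped: an entry belongs to exactly one world
  content: string;
  source: string;           // inspectable: where this came from (message, run, import)
  createdAt: Date;
  approvedByUser: boolean;  // governed: nothing becomes canon without consent
  retracted: boolean;       // reversible: forgetting is a first-class operation
};

// Reversible by construction: retract marks the entry dead instead of
// silently rewriting history, so the audit trail stays inspectable.
function retract(store: Map<string, MemoryEntry>, id: string): void {
  const entry = store.get(id);
  if (entry) entry.retracted = true;
}

// Scoped recall: only approved, unretracted entries from one world.
function recall(store: Map<string, MemoryEntry>, worldId: string): MemoryEntry[] {
  return [...store.values()].filter(
    (e) => e.worldId === worldId && e.approvedByUser && !e.retracted
  );
}
```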
That is why Manwe is moving toward worlds and ontology.
A world is a reusable context. It can be a company, career path, research project, personal life, client, school, or any other domain where decisions accumulate over time.
Manwe can reason inside one world while keeping unrelated worlds out. It can remember what matters, ignore what should stay isolated, and let the user decide what becomes canon.
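One way to picture a world, again as a simplified sketch rather than the real data model:

```ts
// Illustrative only: a world as a reusable, self-contained context.

type World = {
  id: string;
  name: string;       // "company", "career", "client X", ...
  goals: string[];
  constraints: string[];
  canon: string[];    // facts the user has explicitly promoted
  drafts: string[];   // observations waiting for a decision
};

// Nothing crosses the boundary on its own: promotion to canon is an
// explicit user action, and reasoning only ever sees one world at a time.
function promoteToCanon(world: World, draftIndex: number): World {
  const fact = world.drafts[draftIndex];
  if (fact === undefined) return world; // nothing to promote
  return {
    ...world,
    canon: [...world.canon, fact],
    drafts: world.drafts.filter((_, i) => i !== draftIndex),
  };
}
```

The promotion step carries the whole idea: the user, not the model, decides what becomes canon.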
This matters because the value of AI will come from long-running judgment systems.
Systems that learn cleanly. Personalize with consent. Remember with exit doors.
That is the product I want to build.
A room for thinking.