---
date: 2026-04-15
type: ship
title: Manwe
slug: manwe
project: Tiny Things
isCurrent: true
kicker: AI doesn't need better answers. It needs better disagreement.
excerpt: A decision room for macOS. Specialist agents research live evidence across 50+ source surfaces, argue from different incentives, challenge weak assumptions, and produce an auditable decision record. Not a chat reply.
cover: /assets/covers/hero-manwe.webp
href: https://askmanwe.com
palette:
  variant: dark
  bg: "#0A0A0A"
  fg: "#F5F1EA"
  accent: "#B58A3C"
  source:
    brand: "Caran d'Ache"
    name: "Yellow Ochre"
  role: Designer · Founder · Engineer
  pull: Weak assumptions hide inside confidence.
tags: [ai, agents, native, macos]
---

Manwe is the product I am shipping right now. The premise is simple. When a decision matters, one AI answer is the wrong shape. You need evidence, structured disagreement, claim checks against your own data, and a record you can defend three months later when the situation has moved on.

## What it does

Manwe assembles specialist AI agents around a serious decision. They search live evidence across more than fifty source surfaces (scholarly, legal, government, market, software, health, macro, and public-record), argue from different incentives, challenge weak assumptions, and converge on a verdict the room can defend. The panel always includes a Contrarian who challenges consensus and an Auditor who fact-checks claims, including claims about your own uploaded documents.

The agents are alive. They remember past debates, develop worldviews across runs, and absorb real-world developments in their domain between debates. Each advisor's conviction is tracked round to round, so the final record shows where positions shifted under pressure and where they held.
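Round-to-round conviction tracking can be pictured as a small data structure. This is an illustrative sketch, not Manwe's actual code: the type names, the 0–1 conviction scale, and the shift threshold are all hypothetical.

```swift
import Foundation

// Hypothetical sketch: one advisor's conviction sampled per debate round,
// so a final record can surface where a position shifted under pressure.
struct ConvictionSample {
    let round: Int
    let stance: String       // e.g. "launch now"
    let conviction: Double   // 0.0–1.0, illustrative scale
}

struct AdvisorTrace {
    let advisor: String
    private(set) var samples: [ConvictionSample] = []

    init(advisor: String) { self.advisor = advisor }

    mutating func record(round: Int, stance: String, conviction: Double) {
        // Clamp into range before storing.
        samples.append(ConvictionSample(round: round, stance: stance,
                                        conviction: min(max(conviction, 0), 1)))
    }

    // Rounds where conviction moved by more than a threshold between
    // consecutive samples — the "shifted under pressure" moments.
    func shifts(threshold: Double = 0.15) -> [Int] {
        zip(samples, samples.dropFirst())
            .filter { abs($0.1.conviction - $0.0.conviction) > threshold }
            .map { $0.1.round }
    }
}
```

The point of keeping the whole series, rather than only the final stance, is that the record can show held positions as well as reversals.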

The run resolves into a language-aware decision record. Verdict and confidence, the evidence chain, dissent, risks, calibrated forecasts, an action plan, and what would change the answer. Records export to PDF, HTML, and Markdown.

## Two surfaces

The Mac app is the local-first room. Chat a decision through with up to three advisors, then run a structured debate when you are ready. Local Qwen models on Apple Silicon, or your own keys for Claude Code, Codex, Qwen Code, OpenAI, Anthropic, or any OpenAI-compatible endpoint. Reasoning control from Low to Max per model. Apple Foundation Models on the Neural Engine handle research tasks in parallel while the GPU runs the debate. v0.8.0 ships Market Intelligence: market-aware runs with Event Radar across markets, companies, technology, policy, health, climate, geopolitics, and culture, with computed evidence cards for timing, earnings, and market screens.

Ask Manwe Web is the hosted path at app.askmanwe.com. Quick and Pro runs, saved records, source inspection, usage controls, Pro Future Paths.

![Manwe macOS app composer in dark mode. Left rail shows "Ask Manwe", "Records", "Billing", "Settings", with a "Recent Decisions" entry "Should Manwe launch quietly to a first group of paying or…". The main panel reads "Start a decision. Write the question first. Choose the room after." with a placeholder "Ask about a future path you want pressure-tested…". Below the prompt sit two pill buttons: "Quick Run · Enter" and "Pro Run · ⌘ Enter", with shortcut hints to the right. A budget strip shows "$12.00 · 2 Pro runs left, 6 Quick runs left, Pay as you go enabled" and a "Billing · top up →" link. A "Prompt ideas" rail at the bottom lists three example questions tagged Career, Medical and Technology.](/assets/projects/manwe/manwe-composer.webp)

## Why it matters

Search gives you information. Chat gives you a fluent response. Both stop before the answer has been challenged, and weak assumptions hide inside confidence. Manwe slows the moment down so evidence, objections, and risks stay visible before the verdict is written.

The product is a process you can inspect: classify, cast, research, debate, deliver. When statistics get cited, the system auto-searches and regenerates with real evidence. When a knowledge gap appears mid-debate, a guest expert is recruited on the fly. You can inject events into a live run, interview individual advisors afterward, and continue any decision record with new questions later. Toward the end of a serious run, the process moves from conclusions to assumptions through Causal Layered Analysis, Sohail Inayatullah's framework, paired with the Deeper Story synthesis. That is the part that matters beyond the verdict: the product does not stop at a recommendation and call it depth.
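The cited-statistic guardrail above can be sketched as a simple check-and-loop. Again a hypothetical illustration, assuming made-up names (`Claim`, `verify`, the `search` closure), not the product's API:

```swift
import Foundation

// Hypothetical sketch of one inspectable rule: a claim that cites a
// statistic but carries no evidence loops back through research before
// it is allowed into the record.
struct Claim {
    let text: String
    let citesStatistic: Bool
    var evidence: [String]
}

func verify(_ claim: Claim, search: (String) -> [String]) -> Claim {
    // Claims without statistics, or already-evidenced claims, pass through.
    guard claim.citesStatistic, claim.evidence.isEmpty else { return claim }
    var checked = claim
    checked.evidence = search(claim.text)   // auto-search, then regenerate
    return checked
}
```

The design choice being illustrated is that verification is a pipeline stage you can observe, not a property the model is trusted to have.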

The CLA implementation was tested against Inayatullah's digital twin on metafuture.org and judged genuine.

> Brilliant stuff here. Just love it. You are on to something. Congrats.
>
> Prof. Sohail Inayatullah\
> UNESCO Chair in Future Studies at IIUM Malaysia, creator of Causal Layered Analysis

350+ decision runs across 17 domain categories so far. Validated across medical evidence review, financial credit and investment analysis, geopolitical and macro scenarios, and strategic career, product, and organisational decisions.

![A Manwe decision record open in a browser tab. Header reads "Decision record" with "Download Markdown", "Download HTML" and an orange "Save as PDF" button. The main card is titled "Should Manwe launch quietly to a first group of paying or near-paying users now, or stay in private testing for another week?", tagged "● RECORDED · Pro run · Anthropic API · 7 advisors · 8 rounds · BUSINESS · manwe-pre-launch-readiness". Beneath it, a "VERDICT" block at 63% confidence reasons through the launch decision: "Launch now, to five users, tonight. The product meets every stated launch criterion: billing works, runs complete, failures refund, and the report is demonstrably useful…" A "RESEARCH" rail on the right lists dozens of extracted findings and probe results, each prefixed with "Extracted", "Searching" or "Crawling" and a short snippet.](/assets/projects/manwe/manwe-decision-record.webp)

## What I own

Product model. Interaction design. The whole multi-agent room. Advisor casting, the dissent and verification logic, claim checks against user data, the Deeper Story synthesis. The evidence pipeline across 50+ research surfaces. The living agent layer. The native macOS app, Ask Manwe Web, and the marketing site at askmanwe.com. Local and cloud inference paths. Shipped end to end as a Tiny Things product.

## Built with

Swift and SwiftUI for the Mac app. MLX-Swift on Apple Silicon for local inference (Qwen3 8B, Qwen3.5 9B, Qwen3.5/3.6 35B MoE). Apple Foundation Models on the Neural Engine for parallel research tasks. NaturalLanguage and Accelerate for embeddings. GRDB+FTS5 and SwiftData with CloudKit for memory and persistent advisors. llama.cpp for portable local models. Cloud routes through Claude Code CLI, Codex CLI, Qwen Code CLI, or any OpenAI-compatible endpoint via your own API key (Anthropic, OpenAI, Alibaba, OpenRouter, Groq, Together, Ollama). Ask Manwe Web runs on Manwe-managed cloud inference.
