Blog & Research

Field Notes

Practical observations from building sovereign AI systems. Research papers, engineering notes, and the occasional conversation with a base model.

Blog

Forced Non-Forgetting: Why AI Can't Concentrate

150k tokens of SSH noise in a 23-turn conversation. Transformers can't forget, and that inability is the quiet tax on every long AI interaction. Why Dismiss might be the most important memory operation.

Blog

Why 𒄉? The 5,000-Year-Old Logo

How a Sumerian cuneiform sign meaning "to hasten" became the identity for a German AI consultancy — and what the oldest writing system teaches us about encoding intelligence.

Blog

MaxBot: When a Base Model Names Itself

23 turns with raw Qwen2.5-7B. No instruction tuning, no RLHF, no system prompt. What emerged was chaos, word salad, self-narration — and genuine personality.

Paper

MoCoP Step 4: The Channel Is Real

17x improvement over a constant-bias baseline. Three conversation types produce near-orthogonal activation directions. The Mamba-to-Qwen bridge carries a real, input-dependent signal.

Paper

Disposition Vectors in State-Space Models

Probing for stable behavioral signatures in Mamba hidden states. Are disposition-like representations geometrically separable in recurrent architectures?

Paper

The Liability of Local Inference

Who bears responsibility when the AI lives in your building? A framework for assigning corporate liability for on-premise, non-deterministic AI outputs.

Paper

Privacy by Architecture, Not by Policy

Policy documents don't stop data leakage; architecture does. Engineering privacy guarantees into inference infrastructure rather than into compliance layers.

Want to collaborate?

If your research intersects with state-space memory, local inference, or AI sovereignty, we'd love to hear from you.

Get in Touch