Stop Learning AI
Reece Frazier
Founder
Every AI tool you use starts from zero. It doesn't know what you mean by “make it shorter.” It doesn't know you hate disclaimers. It doesn't know that when you say “casual,” you mean Slack-casual, not email-casual. You know all of this. The model doesn't.
So you teach it. Every session. Every conversation. Every time you switch platforms. You write longer prompts. You build system instructions. You create custom GPTs. You engineer your way around a fundamental design flaw: the model has no memory of what good looks like for you.
The Prompt Engineering Trap
Prompt engineering is a workaround, not a solution. It shifts the burden of clarity from the machine to the human. You become the translator — converting your intent into the specific language your model happens to respond to. This is backwards.
Consider what happens when you correct an AI output:
- “Don't add disclaimers” — you've said this before. Many times. To multiple models.
- “Make it shorter” — you mean 40% shorter, not 10%. The model doesn't know that.
- “Use bullet points” — you prefer structured output. Always have. Always will.
Each correction is a signal. Each signal reveals a preference. Each preference, when captured and weighted, becomes a prediction the model could have made before you had to ask.
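As a sketch of that capture-and-weight loop (the names `PreferenceSignal` and `recordCorrection` are illustrative, not NOMARK's actual API): each repeated correction strengthens a stored signal until it is confident enough to act on before the user asks.

```typescript
// Hypothetical sketch: turning repeated corrections into weighted signals.
type PreferenceSignal = {
  key: string;    // e.g. "disclaimers"
  value: string;  // e.g. "never"
  weight: number; // confidence, grows with repetition
};

const ledger = new Map<string, PreferenceSignal>();

function recordCorrection(key: string, value: string): PreferenceSignal {
  const existing = ledger.get(key);
  if (existing && existing.value === value) {
    // Repetition strengthens the signal (capped at 1.0).
    existing.weight = Math.min(1.0, existing.weight + 0.2);
    return existing;
  }
  const signal = { key, value, weight: 0.2 };
  ledger.set(key, signal);
  return signal;
}

// Three sessions in a row, the user strips disclaimers:
recordCorrection("disclaimers", "never");
recordCorrection("disclaimers", "never");
const signal = recordCorrection("disclaimers", "never");
// The signal is now strong enough to apply preemptively.
```

The increment and cap here are arbitrary; the point is that corrections accumulate into a prediction instead of evaporating at the end of the session.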
Outcome Quality
We call this outcome quality — the degree to which an AI output matches what the human actually wanted, not just what they typed. High outcome quality means fewer corrections, less re-prompting, and outputs that feel like they were written by someone who knows you.
Outcome quality is orthogonal to model capability. A frontier model with no user context produces generic output. A smaller model with a rich preference profile produces output that feels personal. The preference layer is the missing piece between model intelligence and user satisfaction.
The Three-Layer Architecture
NOMARK implements outcome quality through three layers:
Prevent
Resolve intent before generation. Parse the input, fill what you know from the user's preference ledger, and only ask about what's genuinely ambiguous. The principle: never guess what you can infer. Never infer what you must ask.
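A minimal sketch of that principle, assuming a simple slot-filling model (the ledger shape and slot names are assumptions for illustration): fill every required slot the ledger can answer, and surface questions only for the rest.

```typescript
// Sketch of "never guess what you can infer; never infer what you must ask".
type Ledger = Record<string, string | undefined>;
type Resolved = { filled: Record<string, string>; ask: string[] };

function resolveIntent(required: string[], ledger: Ledger): Resolved {
  const filled: Record<string, string> = {};
  const ask: string[] = [];
  for (const slot of required) {
    const known = ledger[slot];
    if (known !== undefined) {
      filled[slot] = known; // infer from the preference ledger
    } else {
      ask.push(slot);       // genuinely ambiguous: ask the user
    }
  }
  return { filled, ask };
}

const userLedger: Ledger = { tone: "slack-casual", format: "bullets" };
const plan = resolveIntent(["tone", "format", "audience"], userLedger);
// Tone and format come from the ledger; only "audience" needs a question.
```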
Refine
Signals strengthen with repetition, decay when stale, and resolve contradictions through scoped context. A user who says “casual” in Slack but “formal” in investor decks isn't contradicting themselves — they're expressing context-dependent preferences. NOMARK keeps both, scoped correctly.
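One way to sketch decay and scoping (the half-life, scope keys, and field names are illustrative choices, not NOMARK's internals): weight each signal by its age, and let contradictory values coexist under different scopes.

```typescript
// Sketch: stale signals fade; scoped signals don't conflict.
type ScopedSignal = { scope: string; value: string; weight: number; lastSeen: number };

const HALF_LIFE_DAYS = 30;

// A signal's effective weight halves every HALF_LIFE_DAYS of staleness.
function effectiveWeight(sig: ScopedSignal, nowDay: number): number {
  const age = nowDay - sig.lastSeen;
  return sig.weight * Math.pow(0.5, age / HALF_LIFE_DAYS);
}

// "Casual in Slack" and "formal in decks" coexist under different scopes.
const signals: ScopedSignal[] = [
  { scope: "slack", value: "casual", weight: 0.9, lastSeen: 100 },
  { scope: "investor-deck", value: "formal", weight: 0.9, lastSeen: 100 },
];

// Pick the strongest surviving signal for the current scope.
function toneFor(scope: string, nowDay: number): string | undefined {
  const match = signals
    .filter((s) => s.scope === scope)
    .sort((a, b) => effectiveWeight(b, nowDay) - effectiveWeight(a, nowDay))[0];
  return match?.value;
}
```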
Detect
The trust contract scores every output against the user's preference profile. When an output drifts from what the user wants — wrong tone, wrong length, wrong format — the system catches it. Advisory logging, soft gates, hard blocks: configurable enforcement that catches failures before the user does.
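A hedged sketch of such a check, with the three enforcement modes wired in (the scoring heuristic here is a toy; the real metric and the `Profile` fields are assumptions):

```typescript
// Sketch: score an output against the profile, then let the mode decide the response.
type Mode = "advisory" | "soft" | "hard";
type Profile = { maxWords: number; format: "bullets" | "prose" };
type Verdict = { score: number; action: "pass" | "log" | "warn" | "block" };

function checkOutput(output: string, profile: Profile, mode: Mode): Verdict {
  const words = output.split(/\s+/).filter(Boolean).length;
  const isBullets = output.trimStart().startsWith("-");
  let score = 1.0;
  if (words > profile.maxWords) score -= 0.5;                     // wrong length
  if ((profile.format === "bullets") !== isBullets) score -= 0.5; // wrong format
  if (score >= 0.99) return { score, action: "pass" };
  // Drift detected: the configured mode decides how loudly to fail.
  const action = mode === "advisory" ? "log" : mode === "soft" ? "warn" : "block";
  return { score, action };
}

const profile: Profile = { maxWords: 10, format: "bullets" };
const verdict = checkOutput(
  "Here is a long rambling paragraph instead of bullets.",
  profile,
  "hard"
);
// A prose paragraph against a bullets-preferring profile gets blocked in hard mode.
```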
The Inversion
The fundamental insight: stop learning AI. Let AI learn you.
Your preference profile — tone, format, length, jargon level, context overrides, correction patterns — is small. 40 entries. Under 3K tokens. It fits in any model's context window. It works with Claude, GPT, Gemini, Llama, or local models. It's yours. You own it. You can export it, delete it, or take it to another platform.
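To make the portability claim concrete, here is one plausible shape for such a profile (the field names are assumptions, not NOMARK's schema): plain data, small enough to paste into any model's context, exportable with a single serialization call.

```typescript
// Illustrative profile shape: plain, portable data the user owns.
type ProfileEntry = { key: string; value: string; scope?: string; weight: number };

const profile: ProfileEntry[] = [
  { key: "tone", value: "casual", scope: "slack", weight: 0.9 },
  { key: "tone", value: "formal", scope: "investor-deck", weight: 0.9 },
  { key: "length", value: "40% shorter than model default", weight: 0.8 },
  { key: "disclaimers", value: "never", weight: 1.0 },
];

// Export is just serialization: own it, move it, delete it.
const exported = JSON.stringify(profile, null, 2);
```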
This is what NOMARK builds. Not another model. Not another prompt library. Not another AI wrapper. An outcome quality layer that sits between you and every AI tool you use — learning what you consider good, so the machines deliver what was meant, not what was typed.
Getting Started
The engine is open source. Apache 2.0. No account required. Install it, import your conversation history from any platform, and see your preference profile in under 5 minutes.
npx nomark-engine

Your AI should already know this about you. Now it can.