Portfolio
A playable demo of the AI-native admin UI pattern we install, plus links to every code artifact behind it.
Last 30 days
Proposals generated
47
↑ 23% vs prior period
Approval rate
78%
↑ 4pts vs prior period
Avg time to decision
1.4h
↓ 32% vs prior period
Active experiments
12
+3 this period
Sample metrics from a real engagement, anonymized. Try the admin UI below to see the workflow that produces them.
Mocked data, real interaction. Click approve or reject on any card — it animates out, the queue count drops, and a new card loads in after a moment. Same pattern as the production install.
📩 Email subject line
2h ago · Replacing: "Quick note from our team" — 18.2% open_rate, n=12
Option A
"What if your competitor noticed first?"
Loss-aversion hook tied to category awareness.
Option B
"Three signals you're leaving pickup on the table"
Specific outcome framing for editorial leads.
📩 Email subject line
5h ago · Replacing: "Following up on our chat" — 9.1% reply_rate, n=24
Option A
"Was the demo useful — or worth a second look?"
Direct ask invites a binary response.
Option B
"One quick read before we close the loop"
Curiosity-tied final-touchpoint copy.
📝 Blog post draft
1d ago · Topic: cross-media editorial velocity
"Why your newsroom is missing 40% of breaking stories"
Most editorial teams discover story angles 12 hours late. Lorefi's cross-media graph closes that gap…
Hardcoded demo data — your approvals don't get sent anywhere. The actual admin UI installed in folder 03 writes to Postgres with audit logging.
Four folders. Each one a piece of the system you just played with. Click any to read the code.
Data foundation
01-postgres-gtm-schema
Multi-tenant schema with row-level security, audit triggers, and a funnel-state machine. The data layer every other folder depends on.
Postgres · RLS · Triggers · Migrations
Read folder →
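To make the funnel-state machine concrete, here is a minimal TypeScript sketch of the idea. The stage names and transition table are illustrative assumptions, not the folder's actual schema; the real implementation enforces this in Postgres, with triggers.

```typescript
// Illustrative sketch of a funnel-state machine. Stage names are
// assumptions; folder 01 enforces the real version at the database layer.
type FunnelStage = "lead" | "qualified" | "proposal" | "won" | "lost";

// Each stage lists the stages it may legally move to. Terminal stages
// ("won", "lost") allow no further transitions.
const allowedTransitions: Record<FunnelStage, FunnelStage[]> = {
  lead: ["qualified", "lost"],
  qualified: ["proposal", "lost"],
  proposal: ["won", "lost"],
  won: [],
  lost: [],
};

function canTransition(from: FunnelStage, to: FunnelStage): boolean {
  return allowedTransitions[from].includes(to);
}
```

In the real folder the same check runs in a Postgres trigger, so no client can skip a stage regardless of which code path writes the row.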
Agent loop
02-claude-email-ops-agent
TypeScript agent that detects underperforming experiment variants and proposes new subject lines via the Anthropic API. Production-safe: it writes to a review queue and never sends directly.
TypeScript · Anthropic API · zod · pg
Read folder →
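The detection step of that agent loop can be sketched in a few lines. This is an illustration only: the field names, sample-size floor, and margin threshold are assumptions, not the folder's actual config.

```typescript
// Illustrative sketch of the detection step: flag variants whose open rate
// trails the experiment's best performer by a margin, given enough sends.
// minSends and margin are assumed defaults, not the agent's real config.
interface Variant {
  id: string;
  subject: string;
  opens: number;
  sends: number;
}

function underperformers(
  variants: Variant[],
  minSends = 10,
  margin = 0.05
): Variant[] {
  const eligible = variants.filter((v) => v.sends >= minSends);
  if (eligible.length < 2) return []; // nothing to compare against
  const best = Math.max(...eligible.map((v) => v.opens / v.sends));
  return eligible.filter((v) => best - v.opens / v.sends > margin);
}
```

Flagged variants are what the agent hands to the model for new subject-line proposals; the proposals land in the review queue, never in an outbox.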
Admin UI
03-gtm-admin-ui
Next.js human-in-the-loop UI. Reviewers approve or reject agent proposals; every decision writes to an audit log. Same pattern as the playable demo above.
Next.js · React 19 · Supabase · Tailwind
Read folder →
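The core of that approve/reject path is simple: every decision becomes an immutable audit record before anything else happens. A hedged sketch, with field names assumed rather than taken from the folder's actual tables:

```typescript
// Illustrative sketch of the review decision path. Field names are
// assumptions; folder 03 writes the real records to Postgres.
type Decision = "approved" | "rejected";

interface AuditRecord {
  proposalId: string;
  decision: Decision;
  reviewerId: string;
  decidedAt: string; // ISO timestamp
}

// Building the record is pure; persisting it (and only then acting on the
// proposal) is the part the real UI wraps in a transaction.
function recordDecision(
  proposalId: string,
  decision: Decision,
  reviewerId: string,
  now: Date = new Date()
): AuditRecord {
  return { proposalId, decision, reviewerId, decidedAt: now.toISOString() };
}
```

This is why every decision in the demo (and the production install) can be traced back to a person and a moment.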
Analytics layer
04-dbt-gtm-funnel
dbt project modeling the funnel from raw activity to monthly cohort reports. SCD2 snapshots, business invariant tests, and a Snowflake/DuckDB portable profile.
dbt · Snowflake · DuckDB · SQL
Read folder →
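The shape of the rollup those dbt models produce can be sketched outside SQL. This TypeScript version is an illustration of the idea only (monthly cohorts keyed by first lead month, with a won count per cohort); the event shape and stage names are assumptions.

```typescript
// Illustrative sketch (TypeScript, not the dbt SQL) of a monthly cohort
// rollup: group accounts by the month they first became a lead, then count
// how many of each cohort eventually won. Shapes here are assumptions.
interface FunnelEvent {
  accountId: string;
  stage: "lead" | "qualified" | "won";
  occurredAt: string; // ISO date, e.g. "2024-01-05"
}

function cohortConversion(
  events: FunnelEvent[]
): Map<string, { leads: number; won: number }> {
  // Cohort key = month of the account's earliest lead event.
  const firstLeadMonth = new Map<string, string>();
  for (const e of events) {
    if (e.stage !== "lead") continue;
    const month = e.occurredAt.slice(0, 7); // "YYYY-MM"
    const prev = firstLeadMonth.get(e.accountId);
    if (!prev || month < prev) firstLeadMonth.set(e.accountId, month);
  }

  const cohorts = new Map<string, { leads: number; won: number }>();
  firstLeadMonth.forEach((month, account) => {
    const c = cohorts.get(month) ?? { leads: 0, won: 0 };
    c.leads += 1;
    if (events.some((e) => e.accountId === account && e.stage === "won")) {
      c.won += 1;
    }
    cohorts.set(month, c);
  });
  return cohorts;
}
```

The dbt project does the same grouping in SQL, plus SCD2 snapshots so the cohort assignment stays stable as source rows change.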
It's both — and that's the point. We use this codebase as the reference implementation we install for client engagements. The playable demo above is the same UI pattern (Next.js + Tailwind + Supabase) we'd deploy inside your codebase, customized to your workflows and data.
Agents are great at generating options. Humans are still better at picking the right one for context — tone, customer relationships, edge cases. We default to proposal-and-approve so every customer-facing message gets one review. The audit trail means you can trace any decision back to a person and a moment.
Four to six weeks, starting with a scoped intake. We work inside your codebase (no sandbox), ship every two weeks, and leave you with a documented handoff. After install we stay on as needed for the post-install operating cadence, typically two to four weeks of light support.
The repo is public and the patterns are reusable. We're not productizing it — every install is customized to the team's stack and data. But you can read every file, copy any pattern, and reach out if you want help adapting it.
We scope the build, ship inside your codebase, and stay through the post-install cadence. Drop us a note to get started.
Get in touch