Activation Labs

Portfolio

What we've built.

A playable demo of the AI-native admin UI pattern we install, plus links to every code artifact behind it.

Last 30 days

Proposals generated

47

↑ 23% vs prior period

Approval rate

78%

↑ 4pts vs prior period

Avg time to decision

1.4h

↓ 32% vs prior period

Active experiments

12

+3 this period

Sample metrics from a real engagement, anonymized. Try the admin UI below to see the workflow that produces them.

01

Try the admin UI.

Mocked data, real interaction. Click approve or reject on any card — it animates out, the queue count drops, and a new card loads in after a moment. Same pattern as the production install.
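The approve/reject flow described above can be sketched as a small pure state update. This is an illustrative sketch only, not the repo's actual code; the `Proposal`, `QueueState`, and `decide` names are assumptions for the example.

```typescript
// Sketch of the demo's queue behavior: deciding on a card removes it,
// the total queue count drops, and the next mocked proposal loads in.
// All names here are hypothetical, not taken from the repo.
type Decision = "approve" | "reject";

interface Proposal {
  id: string;
  title: string;
}

interface QueueState {
  visible: Proposal[]; // cards currently on screen
  backlog: Proposal[]; // mocked proposals waiting to load in
  decided: Array<{ id: string; decision: Decision }>;
}

// Remaining queue count shown in the header badge.
const queueCount = (s: QueueState) => s.visible.length + s.backlog.length;

function decide(state: QueueState, id: string, decision: Decision): QueueState {
  const card = state.visible.find((p) => p.id === id);
  if (!card) return state; // unknown card: no-op

  const visible = state.visible.filter((p) => p.id !== id);
  const [next, ...backlog] = state.backlog;
  return {
    // the next backlog card (if any) animates in after the decided one leaves
    visible: next ? [...visible, next] : visible,
    backlog,
    decided: [...state.decided, { id, decision }],
  };
}
```

Because `decide` is pure, the same function drives both the optimistic UI update and any later persistence step.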

GTM Admin — /queue

Email Ops queue

3

📩 Email subject line

2h ago

Replacing: "Quick note from our team" — 18.2% open_rate, n=12

Option A

"What if your competitor noticed first?"

Loss-aversion hook tied to category awareness.

Strong bet

Option B

"Three signals you're leaving pickup on the table"

Specific outcome framing for editorial leads.

Moderate

📩 Email subject line

5h ago

Replacing: "Following up on our chat" — 9.1% reply_rate, n=24

Option A

"Was the demo useful — or worth a second look?"

Direct ask invites a binary response.

Strong bet

Option B

"One quick read before we close the loop"

Curiosity-tied final-touchpoint copy.

Moderate

📝 Blog post draft

1d ago

Topic: cross-media editorial velocity

"Why your newsroom is missing 40% of breaking stories"

Most editorial teams discover story angles 12 hours late. Lorefi's cross-media graph closes that gap…

Strong bet

Hardcoded demo data — your approvals don't get sent anywhere. The actual admin UI installed in folder 03 writes to Postgres with audit logging.
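The production write path mentioned above pairs each decision with an audit entry. As a rough sketch of that pattern, assuming a generic Postgres client: the `proposals` and `audit_log` table names, their columns, and the `recordDecision` helper are all assumptions for illustration, not the installed schema.

```typescript
// Hypothetical sketch of an audit-logged decision write. The real install's
// schema and client differ; this only shows the transaction shape.
interface DbClient {
  query(sql: string, params?: unknown[]): Promise<void>;
}

async function recordDecision(
  db: DbClient,
  proposalId: string,
  decision: "approved" | "rejected",
  reviewer: string
): Promise<void> {
  await db.query("BEGIN");
  try {
    // Status update and audit row commit together or not at all,
    // so every decision stays traceable to a person and a moment.
    await db.query("UPDATE proposals SET status = $1 WHERE id = $2", [
      decision,
      proposalId,
    ]);
    await db.query(
      "INSERT INTO audit_log (proposal_id, action, actor, at) VALUES ($1, $2, $3, now())",
      [proposalId, decision, reviewer]
    );
    await db.query("COMMIT");
  } catch (err) {
    await db.query("ROLLBACK");
    throw err;
  }
}
```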

02

Inside the repo.

Four folders. Each one a piece of the system you just played with. Click any to read the code.

03

Common questions.

Is this a real product, or just a demo?

It's both — and that's the point. We use this codebase as the reference implementation we install for client engagements. The playable demo above is the same UI pattern (Next.js + Tailwind + Supabase) we'd deploy inside your codebase, customized to your workflows and data.

Why human-in-the-loop instead of full automation?

Agents are great at generating options. Humans are still better at picking the right one for context — tone, customer relationships, edge cases. We default to proposal-and-approve so every customer-facing message gets one review. The audit trail means you can trace any decision back to a person and a moment.

What does an engagement look like?

Four to six weeks, starting with a scoped intake. We work inside your codebase (no sandbox), ship every two weeks, and leave you with a documented handoff. After install we stay as-needed for the post-install operating cadence — typically two to four weeks of light support.

Can I use this codebase directly?

The repo is public and the patterns are reusable. We're not productizing it — every install is customized to the team's stack and data. But you can read every file, copy any pattern, and reach out if you want help adapting it.

04

Ready to install one?

We scope the build, ship inside your codebase, and stay through the post-install cadence. Drop us a note to get started.

Get in touch