
How We Built a Product with AI Coworkers

A technical white paper for engineers and anyone who wants to understand what Deadwax is and how a small human team orchestrated AI agents to build it.

Companion document: How We Built Deadwax — the visual overview. This document goes deeper.

Want access to the source code? The repository is private. Reach out to Jason for access.


Key Principles & Mental Models

If you take nothing else away, take these:

1. Treat AI Like a New Coworker, Not a Tool
Claude and Codex aren't autocomplete. They have roles, responsibilities, and a reporting structure. You don't prompt them -- you manage them with clear briefs, expectations, and feedback.
2. The Expert in Claude Is Claude
Don't spend hours reading docs about how to use Claude. Ask Claude. It writes its own configuration files, debugs its own workflows, and knows its own limits better than any human guide.
3. Scripts Over Skills, Always
Deterministic tasks (deploy, test, create PR) go in shell scripts -- not AI prompts. Scripts are cheaper, faster, and testable. Save AI for tasks that require judgment.
4. Humans Own Judgment, AI Owns Execution
Jason sets priorities and makes decisions. David advises on AI workflow optimization. Chris validates domain expertise. Hope creates the visual identity. Caden builds community. AI does everything else.
5. Context Is Everything
AI agents forget between sessions. 80+ markdown docs exist to re-teach them the full context every time. Getting context right is the job.

The Simple Version

Deadwax is an app for vinyl record collectors. It connects to Discogs (a big music database) and makes it dramatically easier to browse your collection on your phone and find which pressing of a record sounds best -- all in one screen instead of ten browser tabs.

The wild part? It was built by a tiny team -- Jason managing AI agents, with help from an AI workflow advisor, a vinyl expert, a visual artist, and a social media manager. There's a Director that assigns daily work, a Product Manager that defines what to build, Developers that write code, a Tester that catches bugs, a Designer, a Legal advisor, and a Marketing lead. Jason's job is to be the boss. The AI does everything else.

And it's not just code. User-reported bugs get automatically triaged, fixed, tested, and deployed to production in under 90 minutes -- while Jason sleeps.


For People in Tech

The Product

Deadwax sits on top of Discogs and provides a mobile-first experience for vinyl collectors. The core feature is Pressing Intelligence -- a panel that shows the best pressings of any album ranked by community ratings, audiophile label status, and mastering engineer. What takes 10-20 tabs on Discogs fits in one screen on Deadwax.

Positioning: Discogs has the data. Deadwax has the experience. The moat is UX quality.

The Team: Humans + AI

This isn't a solo AI project. It's a human-and-AI collaboration:

| Who | Role | What They Do |
| --- | --- | --- |
| Jason | CEO & Founder | Product vision, strategic decisions, agent orchestration |
| David | AI Workflow Advisor | Consultant on agentic AI workflows, token optimization, tools-vs-skills-vs-scripts framework |
| Chris | Board Member | Pressing expertise, audiophile label knowledge, seed list curation |
| Caden | Social Media | Instagram presence (@deadwax.io), community engagement |
| Hope | Visual Creative | Logo, wordmark, brand visual identity |

The human team handles what AI can't: taste, domain expertise, visual creativity, and community voice. The AI team handles execution:

| Agent | Platform | What They Do |
| --- | --- | --- |
| Director | Claude Code | Daily task orchestration, merge order, blocker escalation |
| Product Manager | Claude Code | Requirements, backlog, acceptance criteria |
| Designer | Claude Code | Wireframes, design system, UX specs |
| Marketing | Claude Code | Positioning, GTM, community launch |
| Legal | Claude Code | Privacy, compliance, naming |
| Architect | Codex | Technical decisions, code review |
| Backend Dev | Codex | API, OAuth, data pipelines |
| Frontend Dev | Codex | UI, routing, mobile layout |
| Tester | Codex | CI/CD, Playwright E2E, coverage |

How the Director Knows What to Work On

The most common question from engineers: "How does the AI know what to work on?"

The information flows through a chain of markdown files -- a paper trail that replaces Slack and Jira:

  1. CEO posts a decision to agents/COMMS.md (e.g., "Confirmed: Top 10 album seed list for Pressing Intelligence")
  2. Director reads it and creates a daily execution plan with themes, tasks, merge order, and dependencies
  3. Director writes lean packets -- TODAY.md (tasks) and COMMS-TODAY.md (context) -- that every agent reads first
  4. Agents execute, posting updates and handoffs back to COMMS.md
  5. Director tracks completion, archives the day's work, and creates tomorrow's packet

Every decision gets a permanent ID in the Decision Log (DEC-001 through DEC-035+). Nothing is verbal. Everything is traceable. The AI equivalent of "if it's not in writing, it didn't happen."
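
As a rough illustration of the Decision Log convention, here is a hypothetical helper that allocates the next DEC-### ID and appends an entry. The field names and entries are illustrative, not the actual log format:

```typescript
// Hypothetical sketch of the Decision Log convention: every decision gets a
// permanent sequential ID (DEC-001, DEC-002, ...). Entry shape is assumed.
interface Decision {
  id: string;      // e.g. "DEC-036"
  date: string;    // ISO date
  summary: string;
}

function nextDecisionId(log: Decision[]): string {
  const max = log.reduce((m, d) => {
    const n = Number(d.id.replace("DEC-", ""));
    return Number.isFinite(n) && n > m ? n : m;
  }, 0);
  return `DEC-${String(max + 1).padStart(3, "0")}`;
}

function appendDecision(log: Decision[], summary: string, date: string): Decision {
  const entry: Decision = { id: nextDecisionId(log), date, summary };
  log.push(entry);
  return entry;
}

// Illustrative entries only:
const log: Decision[] = [
  { id: "DEC-001", date: "2026-02-01", summary: "Adopt npm workspaces monorepo" },
  { id: "DEC-035", date: "2026-03-01", summary: "Top 10 album seed list for PI" },
];
const added = appendDecision(log, "Example decision", "2026-03-02");
// added.id === "DEC-036"
```

The point is the invariant, not the code: IDs are append-only and never reused, so every reference in COMMS.md stays resolvable forever.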

HOW AGENTS COLLABORATE (via markdown files in Git)

  +----------+                          +--------------+
  |   CEO    |-- decision ------------->|  COMMS.md    |
  | (Jason)  |                          |  (message    |
  +----------+                          |   board)     |
                                        +------+-------+
                                               | reads
                                               v
                                        +--------------+
                                        |  DIRECTOR    |
                                        |  (Claude)    |
                                        +------+-------+
                                               | writes
                            +------------------+------------------+
                            v                  v                  v
                     +------------+     +------------+     +------------+
                     | TODAY.md   |     | COMMS-     |     | EXEC-      |
                     | (tasks)    |     | TODAY.md   |     | YYYY-MM-   |
                     |            |     | (context)  |     | DD.md      |
                     +-----+------+     +-----+------+     +------------+
                           |                  |
                           v                  v
          +-------------------------------------------------+
          |           ALL AGENTS READ THESE FIRST           |
          |                                                 |
          |  PM --- Designer --- Legal --- Marketing        |
          |          (Claude Code agents)                   |
          |                                                 |
          |  Architect --- Backend --- Frontend --- Tester  |
          |          (Codex agents, in git worktrees)       |
          +---------------------+---------------------------+
                                | post updates
                                v
                         +--------------+
                         |  COMMS.md    |<-- cycle repeats
                         +--------------+

The Feedback Loop

The most impressive part: user-reported bugs go from "submitted" to "fixed in production" in under 90 minutes with zero human intervention. A cron job runs every hour, on the hour -- the PM agent triages (auto-fixable bugs get task packets; feature requests and larger asks get flagged for deeper human review), the Dev agent implements safe fixes, CI validates, and the change auto-deploys.

AUTOMATED FEEDBACK-TO-PRODUCTION PIPELINE

  User finds bug on deadwax.io
         |
         v
  +--------------+     GitHub API      +-------------------+
  |  Feedback    | ------------------> |  GitHub Issue     |
  |  Widget      |                     |  label: [feedback]|
  +--------------+                     +--------+----------+
                                                |
                     +--------------------------+
                     |  Every hour, on the hour (macOS cron)
                     v
              +--------------+
              |  PM Agent    |  Phase 1: TRIAGE
              |  (Codex)     |  - Classify priority & effort
              |              |  - Safe to auto-fix? --> task packet
              |              |  - Needs human? --> [needs-input] label
              +------+-------+
                     | writes task packet
                     v
              +--------------+
              |  Dev Agent   |  Phase 2: IMPLEMENT (max 2/run)
              |  (Codex)     |  - Read task packet
              |              |  - Create branch + write fix
              |              |  - Run lint + typecheck + tests
              +------+-------+
                     | opens PR
                     v
              +--------------+
              |  CI Pipeline |  Security > Lint > Types >
              |  (Actions)   |  Vitest > Playwright E2E > Coverage
              +------+-------+
                     | all green
                     v
              +--------------+
              |  Auto-merge  |--> Deploy --> Verify production
              |  + Deploy    |
              +--------------+
                     |
                     v
            Bug is fixed in prod.
            User hasn't checked back yet.

For Engineers

Architecture

Monorepo (npm workspaces): React+Vite frontend, TypeScript Lambda backend (Hono), shared packages. AWS infrastructure: Lambda + API Gateway + CloudFront + DynamoDB + S3. Discogs OAuth 1.0a with server-side-only tokens.

What Makes the Agent System Work

Git worktrees -- each Codex agent works in an isolated worktree, enabling parallel development on separate branches without conflicts. Merge order is enforced: Architect → Backend → Frontend → Tester.

CI pipeline (GitHub Actions) -- every PR triggers: security scanning (ripgrep blocks DELETE/POST/PUT to Discogs, blocks token exposure), ESLint, TypeScript strict, Vitest unit tests, Playwright E2E with video, and code coverage delta posted as a PR comment.

Protected files -- AGENTS.md, CLAUDE.md, and all docs and agent files are automatically stripped from code PRs. Strategy docs commit directly to main.

Ship gate -- nothing is "done" until the ship-loop script squash-merges, watches the deploy, and verifies production endpoints. The Architect agent reviews every code PR.

Feedback pipeline -- runs via macOS cron (every hour, top of hour). Phase 1: PM triages GitHub issues -- auto-fixable bugs get task packets, feature requests and ambiguous asks get flagged for human review. Phase 2: Dev implements (max 2/run), opens PR, CI auto-merges on green. From feedback to production in under 90 minutes.

Pressing Intelligence -- Bayesian-weighted scoring: score = (v/(v+m))*R + (m/(v+m))*C where m=25, C=4.0. Pipeline fetches master release, ranks vinyl versions, flags audiophile labels (MoFi, AP, Classic, Impex), cross-references mastering engineers, checks user ownership, caches results.
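
The scoring formula above is a standard Bayesian average and is small enough to sketch directly. Constants come from the text (m = 25, C = 4.0); the function name is ours:

```typescript
// Bayesian-weighted pressing score, per the formula in the text:
//   score = (v/(v+m)) * R + (m/(v+m)) * C
// v = number of community ratings, R = mean rating for this pressing,
// m = 25 (minimum-votes prior weight), C = 4.0 (prior mean).
const M = 25;
const PRIOR_MEAN = 4.0;

function pressingScore(votes: number, meanRating: number, m = M, prior = PRIOR_MEAN): number {
  if (votes < 0) throw new RangeError("votes must be non-negative");
  return (votes / (votes + m)) * meanRating + (m / (votes + m)) * prior;
}

// A pressing with no ratings falls back to the prior mean:
pressingScore(0, 0);      // 4.0
// Many votes pull the score toward the observed mean:
pressingScore(475, 4.8);  // (475/500)*4.8 + (25/500)*4.0 = 4.76
```

The effect is that a pressing with five gushing ratings can't outrank one with five hundred merely good ones -- low-vote outliers are shrunk toward the prior.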

The Bottom Line

This isn't AI-generated spaghetti code. It's 80+ docs providing context, 29 scripts automating mechanics, comprehensive CI with security scanning, git worktree isolation for parallel agents, and a decision log tracking every choice with rationale. The insight: AI can be organized into a team with the same rigor you'd expect from human engineers -- and one human can orchestrate it.


Lessons Learned (The Hard Way)

This project runs formal post-mortems and retrospectives on a 5-day cadence. They're the single most valuable process artifact -- more important than the code itself. Here's what we learned:

1. Silent Data Loss from Worktree Merges (INC-001)

The worst incident. For three days, AI dev agents opened PRs that silently overwrote Director and business-agent changes to main. Decision log entries disappeared. Entire documentation sections vanished. Nobody noticed.

Root cause: Git worktrees snapshot all tracked files at branch creation. When the worktree branch merges, stale copies of docs overwrite the current versions -- and git doesn't warn you because it's technically correct behavior.

Fix: Three-layer prevention: (1) sparse checkout so protected files are physically absent from worktrees, (2) pre-PR cleanup script strips protected files before PR creation, (3) CI gate fails any PR touching protected paths. Documentation alone was not enough. The rules existed in AGENTS.md -- agents just didn't follow them. Mechanical enforcement is the only enforcement that works with AI agents.

2. CLAUDE.md and AGENTS.md Bloat

These files started small and grew into sprawling instruction manuals. Every time something went wrong, we'd add more instructions. Eventually they were so long that agents burned context window tokens just reading their own config files, leaving less room for actual work.

Fix: David (AI Workflow Advisor) helped define the problem and crafted a diagnostic prompt: "Analyze my CLAUDE agentic workflow for token usage, workflow optimizations, and tools vs. skills/script usage." That audit led to aggressive pruning -- details moved into dedicated docs, config files became pointers not encyclopedias, and lean daily packets stayed under 50 lines. The lesson: treat AI context like expensive real estate. Every token of instruction costs a token of output.

3. Claude Code Agents Don't Self-Trigger

The Legal agent went 11 days with zero completed tasks. The Designer missed 4 consecutive sessions. Nobody noticed because the Director was busy shipping code.

Root cause: Claude Code agents (PM, Designer, Legal, Marketing) don't run unless explicitly scheduled. Unlike Codex agents which get spawned with a task, Claude Code agents just... sit there. If the Director doesn't schedule them, they don't exist.

Fix: Standing daily schedule with explicit session slots per agent. No implicit expectations. If it's not on the schedule, it won't happen.

4. Treating CI Failures as Noise

ESLint v9 broke every PR's lint step. Instead of treating it as a blocker, the team treated it as background noise -- "oh, lint always fails." This masked real problems for days.

Fix: CI failures are blockers, period. No "known failures" that get ignored. If a step fails, fix it or remove it. Noise in CI is indistinguishable from real problems.

5. Acceptance Criteria Written After the Code

The PM wrote acceptance criteria retroactively -- after the developer had already implemented the feature. This meant the criteria described what was built, not what should have been built.

Fix: PM writes acceptance criteria at story creation with a 48-hour validation window. If AC isn't written before implementation, the task stays blocked.

6. Rollback Capability as Afterthought

The Pressing Intelligence feature shipped without a kill-switch. During a rollback drill on Day 22, there was no way to disable PI in production. Required an emergency remediation PR.

Fix: Rollback capability is now a pre-deployment gate. If you can't turn it off, you can't turn it on.

7. Prefer the Thinnest Interface That Preserves Truth

We learned not every integration deserves the heaviest tool. When a task can be handled cleanly with the GitHub CLI, that is usually the cheapest path in both time and context. If CLI is not enough, go to the API. Only reach for richer tool layers when they add unique capability or context we cannot get another way.

Fix: Treat interface choice as context budgeting: CLI first, then API, then heavier tool layers when the extra surface area earns its keep.

Meta-Lesson: Run Post-Mortems Religiously

Our retro cadence broke down for 15 days because the Director kept prioritizing shipping over reflection. During those 15 days, the same patterns repeated: agents not scheduled, carry-forward tasks piling up, same root causes. The retro is the product. Without it, you're just making the same mistakes faster.


Questions People Always Ask

Do you write any code?
No. All code is written by AI agents and goes through PR review and CI like any team's code.
How do you prevent AI from breaking things?
Tests, CI security gates, Architect review, and production verification. Bad code physically cannot merge.
How long did this take?
~4 weeks from start to working product with OAuth, collection browsing, pressing panels, full CI/CD, E2E tests, and automated feedback loop.
Could this replace a real engineering team?
For a focused product with one decision-maker? It already has. The bottleneck was never code production -- it was always judgment, taste, and decisions.
How much does this cost?
~$50/month total. That's Claude Code subscription + Codex subscription + a little GitHub Actions compute + AWS infrastructure (Lambda, DynamoDB, CloudFront). Once active development slows down, the AI subscription costs drop and you're left with just AWS hosting -- a few dollars a month. For context, a single junior engineer costs $8,000-$12,000/month fully loaded.
What's the hardest part?
Context management. Re-teaching agents the full project context every session. That's why the 80+ docs exist.

Engineering Deep Dive

This section is for engineers who want to understand exactly how the system is built. It covers repository structure, infrastructure, CI/CD mechanics, the agent coordination protocol, the Pressing Intelligence pipeline, and the automation scripts that hold it all together.

Repository Structure

Monorepo using npm workspaces. Node 20+. Three workspaces, 29 automation scripts, 65+ documentation files.

discogs-app/
├── apps/
│   ├── web/              # React 19 + Vite 6 SPA (Tailwind, Playwright E2E)
│   └── api/              # TypeScript Lambda backend (Hono router)
├── packages/
│   ├── shared/           # Shared types (VinylRecord, Pressing, Pagination)
│   └── config/           # Centralized ESLint, TypeScript, Prettier config
├── scripts/              # 29 automation scripts (worktrees, PRs, CI, bots)
├── agents/               # Agent orchestration files (roles, execution, comms)
├── docs/                 # 65+ markdown docs (product, design, engineering, legal)
├── infra/                # Infrastructure as Code
├── data/                 # Seed data and static datasets
├── logs/                 # Runtime logs from CI/agents/bots
└── test-logs/            # Playwright videos, coverage reports, CI artifacts

Frontend Architecture

React 19 SPA built with Vite 6, styled with Tailwind CSS (150+ custom theme tokens), routed with React Router v7.

apps/web/src/
├── pages/                # Route-level components
│   ├── CollectionPage    # Discogs collection browser with sorting/filtering
│   ├── WantlistPage      # Wantlist browser with pressing intelligence
│   ├── DiscoverPage      # Discovery + rarity insights
│   ├── AboutPage         # About + education links
│   └── ...               # Landing, Privacy, TipJar, Beta, Callback
├── components/
│   ├── RecordTableLayout # Main collection/wantlist table
│   ├── PressingPanel     # Pressing discovery slide-out panel
│   ├── FeedbackWidget    # In-app bug report → GitHub Issue
│   └── ...
└── lib/
    ├── pressing-intel.ts         # PI data retrieval
    ├── pressing-data.ts          # Bayesian scoring + ranking logic
    ├── mastering-engineer-signals.ts  # Engineer name matching
    ├── record-pressing-signals.ts     # Multi-factor pressing signals
    ├── persistent-record-page-cache.ts  # IndexedDB caching
    ├── background-preload.ts     # Preload next record while viewing current
    ├── request-cache.ts          # HTTP response cache layer
    └── record-search.ts          # Discogs search + filtering

Data flow: Components → lib utilities → API endpoints → Discogs proxy. Three-layer caching: browser IndexedDB for record pages, HTTP response cache for API calls, and background preloading for next-record anticipation.

Backend Architecture

Single AWS Lambda function routing via Hono. Handles OAuth 1.0a, Discogs API proxy, Pressing Intelligence, analytics, and feedback submission.

apps/api/src/
├── handlers/              # Route handlers
│   ├── auth.ts            # OAuth 1.0a flow (Discogs)
│   ├── auth-session.ts    # JWT session tokens
│   ├── collection.ts      # GET /api/collection/* (cached proxy)
│   ├── wantlist.ts        # GET /api/wantlist/*
│   ├── wantlist-add.ts    # POST /api/wantlist/add (only write op)
│   ├── pressing-intel.ts  # GET /api/pressing-intel/*
│   ├── feedback.ts        # POST /api/feedback → GitHub Issue (Octokit)
│   ├── events.ts          # POST /api/events (structured analytics)
│   └── health.ts          # GET /api/health
├── services/
│   ├── discogs-client.ts  # HTTP client with caching + rate-limit backoff
│   ├── oauth.ts           # OAuth 1.0a signature generation
│   ├── session-store.ts   # DynamoDB session storage
│   ├── catalog-cache-store.ts  # DynamoDB collection cache
│   ├── memory-cache.ts    # In-process LRU cache
│   └── ssm.ts             # AWS Secrets Manager client
├── middleware/
│   ├── discogs-rate-limit.ts  # Discogs API backoff (respect 429s)
│   └── read-only-guard.ts     # Block all mutations except wantlist add
└── pi/                    # Pressing Intelligence pipeline (see below)
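
The read-only guard in the tree above is Hono middleware in the real codebase; its core policy can be sketched as a framework-free predicate. The allowed-write path is taken from the route list; everything else here is illustrative:

```typescript
// Sketch of the read-only guard's decision logic, assuming reads always pass
// and the single permitted write is the wantlist add (per the route comments).
const READ_METHODS = new Set(["GET", "HEAD", "OPTIONS"]);
const ALLOWED_WRITE = "/api/wantlist/add";

function isRequestAllowed(method: string, path: string): boolean {
  if (READ_METHODS.has(method.toUpperCase())) return true;            // reads pass
  return method.toUpperCase() === "POST" && path === ALLOWED_WRITE;   // one write
}

isRequestAllowed("GET", "/api/collection/folders");  // true
isRequestAllowed("POST", "/api/wantlist/add");       // true
isRequestAllowed("DELETE", "/api/collection/123");   // false -- blocked mutation
```

Enforcing this in middleware rather than per-handler means a new handler can't accidentally introduce a Discogs mutation.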

Security layers: OAuth tokens are server-side only and never reach the browser, the read-only guard blocks every Discogs mutation except the wantlist add, rate-limit middleware backs off when Discogs returns 429s, and CI security scanning blocks mutation calls and token exposure before code can merge.

Infrastructure (AWS)

┌─────────────────────────────────────────────────────┐
│                    CloudFront CDN                     │
│            (www.deadwax.io + deadwax.io)              │
├───────────────────────┬─────────────────────────────┤
│   S3 (private origin) │    API Gateway + Lambda      │
│   React SPA + static  │    /api/* routes             │
│   assets              │    OAuth, proxy, PI, events   │
├───────────────────────┴─────────────────────────────┤
│              DynamoDB (sessions, cache, config)        │
│              PostgreSQL (Pressing Intelligence)        │
│              CloudWatch (structured logs + alarms)     │
│              Secrets Manager (OAuth keys, API tokens)  │
└─────────────────────────────────────────────────────┘

S3 bucket is private — no public access. CloudFront uses Origin Access Identity. All API calls route through API Gateway → single Lambda. Costs: ~$5/month post-development (Lambda pay-per-request + DynamoDB on-demand + S3/CloudFront minimal).

CI/CD Pipeline

Every PR triggers the full CI pipeline via GitHub Actions. Nothing merges without passing all gates.

PR opened or pushed to main/codex/feat/fix/infra branches
    │
    ├── Security Scanning ──── ripgrep blocks DELETE/PUT/PATCH to Discogs,
    │                          blocks token/secret exposure in code
    ├── ESLint ──────────────── shared config (typescript-eslint v8 flat config)
    ├── TypeScript ──────────── strict mode, all workspaces
    ├── Vitest Unit Tests ───── frontend components + backend handlers
    ├── Playwright E2E ──────── Chromium headless, video recording
    ├── Code Coverage ───────── Istanbul + V8, delta posted as PR comment
    └── Protected Files Gate ── fails if PR modifies AGENTS.md, CLAUDE.md, etc.

All green → merge to main → deploy workflow triggers:
    │
    ├── Build frontend (Vite)
    ├── Upload to S3
    ├── Invalidate CloudFront
    └── Verify production endpoints (curl health checks)

Deploy triggers: workflow_run event fires after CI succeeds on main. Concurrency group serializes deploys (no race conditions). OIDC role assumption for AWS credentials (no static keys in CI).

Agent Coordination Protocol

Nine AI agents coordinate through markdown files committed to Git. No Slack, no Jira, no Notion. Everything is versioned and traceable.

agents/
├── MASTER-EXECUTION.md     # Single source of truth for all agent tasks
├── TODAY.md                # Current-day runtime packet (<50 lines)
├── COMMS.md                # Durable communication log (CEO ↔ agents)
├── COMMS-TODAY.md          # Today's communication digest
├── STATUS.md               # Build/deploy status tracker
├── roles/                  # Role definitions (one per agent)
│   ├── director.md         # Orchestration, dispatch, escalation
│   ├── product-manager.md  # Backlog, stories, acceptance criteria
│   ├── architect.md        # ADRs, system design, code review
│   ├── backend-dev.md      # Lambda, services, PI pipeline
│   ├── frontend-dev.md     # React, UI, E2E tests
│   ├── tester.md           # Test strategy, QA, coverage
│   ├── designer-researcher.md  # UX specs, design system
│   ├── marketing-manager.md    # GTM, community, outreach
│   └── legal.md            # Compliance, privacy, naming
├── execution/              # Daily execution logs (EXEC-2026-03-*.md)
├── decisions/              # Decision log (DEC-001 through DEC-035+)
├── incidents/              # Post-mortem records (INC-*.md)
├── retros/                 # Sprint retrospectives (every 5 days)
└── auto-tasks/             # Bot-generated task packets (TASK-*.md)

Information flow: CEO posts decisions to COMMS.md → Director reads and creates TODAY.md + COMMS-TODAY.md → all agents read these first → agents execute and post updates back to COMMS.md → Director archives to execution/EXEC-*.md and creates next day's packet.

Merge order enforcement: Infra → Backend → Frontend → Tester, specified daily by the Director in MASTER-EXECUTION.md. This prevents merge conflicts from parallel agent work.

Human Approval Gates

Not everything is automated. Some decisions deliberately stop the pipeline and require human sign-off before work continues. No workaround, no override.

Git Worktree Strategy

Each Codex agent works in an isolated git worktree — a full copy of the repo on a separate branch. This enables truly parallel development without conflicts.

# Create a worktree for backend work
$ bash scripts/worktree-create.sh backend feat/oauth-proxy
→ /Users/.../discogs-app/.claude/worktrees/backend

# When done, create PR from worktree
$ bash scripts/pr-create.sh --title "feat: OAuth proxy"
→ Rebases on main → strips protected files → pushes → opens PR

# Clean up after merge
$ bash scripts/worktree-cleanup.sh backend --delete-branch

Protected files problem (INC-001): Worktrees snapshot all files at branch creation. When merged, stale copies of docs overwrite current versions. Fix: three-layer prevention — sparse checkout (files physically absent), pre-PR cleanup script, and CI gate that fails any PR touching protected paths.
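
The third prevention layer, the CI gate, reduces to a path check over the PR's changed files. A sketch -- the exact protected list is an assumption based on the files named in the text (AGENTS.md, CLAUDE.md, docs/, agents/):

```typescript
// Sketch of the protected-files CI gate: given the paths a PR touches, return
// any that agents must not modify. The protected list is assumed from the text.
const PROTECTED = [/^AGENTS\.md$/, /^CLAUDE\.md$/, /^docs\//, /^agents\//];

function protectedViolations(changedPaths: string[]): string[] {
  return changedPaths.filter((p) => PROTECTED.some((re) => re.test(p)));
}

protectedViolations(["apps/web/src/App.tsx", "agents/COMMS.md"]);
// → ["agents/COMMS.md"] -- the gate would fail this PR
```

The gate fails the build if the returned list is non-empty, which is what makes the rule mechanical rather than documentation-only.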

Scripts (29 Total)

Core principle: never create an AI skill for a deterministic task. Scripts are cheaper, faster, testable, and don't burn tokens.

| Category | Script | What It Does |
| --- | --- | --- |
| Worktrees | worktree-create.sh | Create isolated worktree + branch from origin/main |
| | worktree-cleanup.sh | Remove worktrees, prune merged branches |
| PRs | pr-create.sh | Rebase → cleanup protected files → push → open PR |
| | pr-review.sh | Fetch diff, metadata, check protected files, CI status |
| | pre-pr-cleanup.sh | Reset protected files to main before PR |
| CI/CD | deploy-aws.sh | Build → S3 upload → CloudFront invalidation |
| | coverage.sh | Run tests, generate coverage, diff against main |
| | main-sync.sh | Fetch + rebase local main on origin |
| Bots | run-feedback-pipeline.sh | Full feedback loop: triage → fix → PR → merge → deploy (hourly) |
| | run-triage-bot.sh | PM triage: classify issues, create task packets |
| | run-fix-bot.sh | Dev fix: read task packets, implement, open PR |
| Testing | run-all.sh | Master test runner (vitest + playwright + node --test) |
| | security-readonly-checks.sh | Enforce Discogs read-only constraints via ripgrep |
| | verify-runtime-endpoints.sh | POST curl tests to deployed API endpoints |
| PI | pi-report.mjs | Generate Pressing Intelligence status report |
| | bootstrap-qa-session.mjs | Set up QA session with seed albums |
| Codex | ship-loop.sh | Post-approval squash-merge → deploy → verify production |

Pressing Intelligence Pipeline

Multi-source evidence pipeline that ranks vinyl pressings using Bayesian-weighted confidence scoring.

Scoring formula:
  score = (v/(v+m)) * R + (m/(v+m)) * C
  where m = 25 (minimum votes), C = 4.0 (prior mean)

Pipeline:
  Sources (YouTube, forums, Gemini AI analysis, editorial)
      ↓
  Ingest (fetch content, hash for dedup, store in PostgreSQL)
      ↓
  Extract claims (per-connector: youtube_transcript, youtube_gemini_video,
                   forum_ingest_shf, forum_ingest_reddit, editorial_review)
      ↓
  Normalize (entity resolver: "Bernie Grundman" == "BG at Bernie Grundman Mastering")
      ↓
  Score (confidence × relevance weights per claim type)
      ↓
  Rank (materialized view: pressings sorted by weighted score)
      ↓
  Serve API: GET /api/pressing-intel/{album_seed_id}
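
The Normalize step above ("Bernie Grundman" == "BG at Bernie Grundman Mastering") can be sketched as alias resolution. The alias table and matching strategy here are illustrative, not the real entity resolver:

```typescript
// Sketch of the entity-resolution step: map engineer name variants to one
// canonical entity via an alias table. Aliases shown are illustrative only.
const ALIASES: Record<string, string> = {
  "bernie grundman": "Bernie Grundman",
  "bernie grundman mastering": "Bernie Grundman",
  "bg": "Bernie Grundman",
};

function resolveEngineer(raw: string): string | undefined {
  // Normalize: lowercase, strip punctuation, collapse whitespace.
  const cleaned = raw.toLowerCase().replace(/[^a-z\s]/g, " ").replace(/\s+/g, " ").trim();
  if (ALIASES[cleaned]) return ALIASES[cleaned];
  // Fall back to substring matching so embedded mentions still resolve,
  // e.g. "BG at Bernie Grundman Mastering".
  for (const [alias, canonical] of Object.entries(ALIASES)) {
    if (cleaned.includes(alias)) return canonical;
  }
  return undefined;
}

resolveEngineer("Bernie Grundman");                  // "Bernie Grundman"
resolveEngineer("BG at Bernie Grundman Mastering");  // "Bernie Grundman"
```

Without this step, claims about the same engineer from YouTube transcripts and forum posts would be scored as evidence about different people.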

Connectors: youtube_transcript, youtube_gemini_video, forum_ingest_shf, forum_ingest_reddit, and editorial_review -- one claim extractor per source type.

Storage: PostgreSQL for sources, claims, citations, and review queue. DynamoDB for connector registry and album seed config. CloudWatch for event logging.

Automated Feedback Pipeline (Cron)

Runs on a Mac mini via macOS cron. Three scheduled jobs:

| Script | Schedule | What It Does |
| --- | --- | --- |
| run-feedback-pipeline.sh | Every hour, top of hour | Full loop: triage → fix → PR → CI → merge → deploy |
| run-triage-bot.sh | Every 4 hours | PM classifies GitHub issues → creates TASK-*.md packets |
| run-fix-bot.sh | Every 4 hours (offset +1h) | Dev reads task packets → implements fix → opens PR |

Safety rails: Won't auto-fix P0 critical issues, UX redesigns, auth/security changes, schema migrations, or anything ambiguous. Feature requests and larger asks get flagged with [needs-input] label for human review. Only safe, scoped bug fixes ship automatically.
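
The safety rails amount to a triage predicate the PM bot applies before creating a task packet. A sketch -- the issue fields and area names are hypothetical, but the categories mirror the rails listed above:

```typescript
// Sketch of the triage safety rails: decide whether an issue is safe for the
// fix bot. Field names and area labels are hypothetical.
interface Issue {
  priority: "P0" | "P1" | "P2" | "P3";
  kind: "bug" | "feature-request" | "question";
  touches: string[];   // e.g. ["auth", "ui-copy"]
  ambiguous: boolean;
}

const BLOCKED_AREAS = new Set(["auth", "security", "schema-migration", "ux-redesign"]);

function safeToAutoFix(issue: Issue): boolean {
  if (issue.priority === "P0") return false;   // never auto-fix critical issues
  if (issue.kind !== "bug") return false;      // features/questions go to a human
  if (issue.ambiguous) return false;           // unclear => [needs-input] label
  return !issue.touches.some((a) => BLOCKED_AREAS.has(a));
}

safeToAutoFix({ priority: "P2", kind: "bug", touches: ["ui-copy"], ambiguous: false }); // true
safeToAutoFix({ priority: "P2", kind: "bug", touches: ["auth"], ambiguous: false });    // false
```

Anything that returns false falls through to a human; only the narrow remainder ships automatically.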

Documentation Suite (65+ Files)

docs/
├── adr/          # Architecture Decision Records (ADR-001 through ADR-007+)
├── design/       # Design system, brand guidelines, UX specs, wireframes
├── engineering/  # Codex playbook, execution overview, code review, API reference
├── infra/        # Architecture diagrams, caching, deploy runbook, cost analysis
├── product/      # PRD, vision, roadmap, backlog, user stories, PI rollout
├── legal/        # Privacy policy, compliance checklist, OAuth consent, licensing
├── marketing/    # GTM brief, landing copy, competitive analysis, outreach tracker
├── research/     # User research findings, beta candidates, interview guides
├── education/    # This document + DEADWAX-EXPLAINED.md
├── company/      # Culture, team
└── testing/      # Test plan + QA strategy

Every document is self-contained but cross-referenced. ADRs track architecture decisions with rationale. Product specs include acceptance criteria linked to design specs. The decision log (DEC-001 through DEC-035+) gives every choice a permanent ID.

Development Setup

# Clone and install
git clone git@github.com:jasonmeans/discogs-app.git
cd discogs-app
npm install

# Build all workspaces
npm run build

# Run all checks
npm run lint && npm run typecheck && npm run test

# Local frontend dev (Vite hot reload)
npm run dev --workspace @deadwax/web

# Environment
cp .env.example .env.local
# Fill in: API endpoint, OAuth client ID, feature flags, PI database URL

Want to explore the code? The repository is private. Reach out to Jason for access. Engineers welcome.


See also: How We Built Deadwax — the visual overview with team cards, pipeline diagrams, and roadmap.
