Wedding Ready

Pinterest, but hyper-local and supplier-led.

Problem

Couples can easily collect inspiration on Pinterest, but it is hard to turn ideas into a realistic plan with suppliers who can actually deliver locally.

Solution

Wedding Ready combines inspiration and sourcing: users browse and save vendor-created pins, and each pin links directly to the relevant local suppliers.

My Role

Founder, Full-Stack Dev, Design


Tech Stack

  • Next.js (App Router) - React/Node with SSR/RSC

  • TanStack Query

  • React Hook Form

  • Zod

  • Vercel - deployment branching and cron

  • GitHub Actions - CI/CD

  • Supabase - integrated Postgres + Auth

  • Drizzle ORM - DAL, no client-side queries

  • UploadThing - S3 wrapper

  • Tailwind CSS

  • Shadcn/ui

  • Vitest - integration testing with a 'scene' pattern


Engineering

Constraint: Supporting multiple authentication providers (email/password + Google OAuth)
Decision: Separated authentication from onboarding and introduced an explicit onboarded state with routing guards.
Trade-off: Significant rework of signup and routing logic; added complexity around user state management (authenticated but not onboarded vs authenticated and onboarded).

Constraint: Multi-entity ownership and team access (one user -> multiple suppliers; one supplier -> multiple staff)
Decision: Designed a relational ownership model with team-based access, scoped roles, and permissions.
Trade-off: Increased schema and validation complexity; required strict backend access controls and careful UI context switching.

Constraint: Need for stable, low-downtime deployments as a solo dev
Decision: Built a safer release pipeline with Vitest integration tests, preview deployments, and automated database migrations in GitHub Actions.
Trade-off: Significant upfront engineering time to build deployment and testing discipline; the payoff was safer releases, higher confidence, and faster long-term iteration.

Constraint: Limited developer capacity (solo founder/developer)
Decision: Adopted a serverless-first stack (Next.js on Vercel + Supabase) to offload infrastructure, and used TypeScript end-to-end to reduce context switching and ship faster across frontend and backend.
Trade-off: Tighter vendor coupling; performance considerations in serverless environments (e.g. cold starts); reduced low-level infrastructure control.
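The explicit onboarded state from the first constraint can be sketched as a small routing helper. The `SessionUser` shape and route paths below are illustrative assumptions, not the app's real names:

```typescript
// Illustrative sketch of the onboarding routing guard described above.
// `SessionUser` and the route paths are hypothetical stand-ins.
type SessionUser = { id: string; onboarded: boolean } | null;

// Decide where a request for a given path should be redirected, if anywhere.
function resolveRedirect(user: SessionUser, path: string): string | null {
  const isProtected = path.startsWith("/feed") || path.startsWith("/suppliers");
  if (!isProtected) return null;              // public route, allow through
  if (!user) return "/login";                 // unauthenticated
  if (!user.onboarded) return "/onboarding";  // authenticated but not onboarded
  return null;                                // authenticated and onboarded
}
```

In Next.js this kind of check would typically run in middleware, so users land on the right page before the protected route ever renders.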


Architecture

Feed route architecture. /feed -> API -> operations/model layer -> Postgres, with ranked retrieval via a simple scoring algorithm and client save-state cache pre-hydration.
Wedding Ready full-stack architecture; /feed route.

I adopted a layered architecture for Wedding Ready so each part of the stack owns a clear concern: UI rendering, request boundaries, business operations, and data access/definitions. The goal was to keep ownership boundaries explicit as the product grew, so complexity stayed manageable.

The /feed route shows how this works in practice:

  • Proxy gate: JWT-based authentication is enforced before route rendering, so unauthenticated users are redirected before the server loads /feed.

  • Server-rendered route shell (RSC): /feed uses React Server Components and wraps the interactive feed client in Suspense and error boundaries.

  • UI components (tiles): tile cards stay presentational ('dumb') with no business logic, so they remain reusable across different contexts.

  • Client boundary (feed client + useFeed): owns interaction state and client orchestration, not business rules.

  • Server boundary (API endpoint): handles authentication, request parsing/validation, and invokes operations.

  • Operations layer: enforces authorization and maps raw model/database outputs into safe response DTOs before returning data to the client.

  • Data layer: performs typed retrieval using precomputed ranking signals (recency, quality, social) and updates delivery/view state so duplicate tiles are not returned to the feed.
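As a toy illustration of blending the three precomputed ranking signals named above (recency, quality, social), a score might look like this; the weights and field names are invented for the sketch, not the real algorithm:

```typescript
// Toy ranking score combining recency, quality, and social signals.
// All weights and the `TileSignals` fields are illustrative assumptions.
type TileSignals = { ageHours: number; qualityScore: number; saves: number };

function rankScore(s: TileSignals): number {
  const recency = 1 / (1 + s.ageHours / 24); // decays as the tile ages (days)
  const quality = s.qualityScore;            // assumed normalised to 0..1
  const social = Math.log1p(s.saves);        // diminishing returns on saves
  return 0.5 * recency + 0.3 * quality + 0.2 * social;
}
```

Because the signals are precomputed, the data layer can sort on a stored score rather than recomputing it per request.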

This separation lets me change ranking logic or retrieval strategy without rewriting UI components. On /feed, the API returns ranked tiles and the client pre-hydrates save-state cache entries in the same pass, avoiding N+1 per-tile save-state requests while preserving isolated save/unsave mutations.
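The save-state pre-hydration could be sketched roughly as below. `CacheLike` mirrors just the slice of TanStack Query's `QueryClient` API used here, and the query-key shape and `FeedTile` fields are assumptions:

```typescript
// Minimal interface matching the bit of TanStack Query's QueryClient we need.
interface CacheLike {
  setQueryData(key: readonly unknown[], data: unknown): void;
}

// Hypothetical shape of a ranked tile as returned by the feed API.
type FeedTile = { id: string; savedByViewer: boolean };

// Seed each tile's save-state cache entry in one pass, so a later
// useQuery(["tile-saved", id]) resolves from cache instead of firing
// one request per tile (the N+1 pattern this avoids).
function hydrateSaveState(cache: CacheLike, tiles: FeedTile[]): void {
  for (const tile of tiles) {
    cache.setQueryData(["tile-saved", tile.id], tile.savedByViewer);
  }
}
```

Individual save/unsave mutations can then update only their own `["tile-saved", id]` entry, keeping per-tile state isolated.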


Deep Dive: Multi-Step Tile Upload Form

The Problem

Wedding Ready depends on fresh tiles from suppliers, and tiles are most useful if credited accurately. The original one-step upload had a poor UX for supplier crediting and would be a bottleneck for planned features. I also wanted to avoid a naive implementation that would cause unnecessary re-renders and make typing/search feel laggy once there are several tiles in the batch.

What I Built

A client-side refactor of the supplier upload flow at /suppliers/[handle]/new:

  • Batch add up to ~10 images into a single upload session

  • Configure each tile in two steps: basic details -> crediting suppliers

  • Upload and delete per tile (no draft persistence; safe to lose state on refresh)

Separation of Concerns

Upload flow component hierarchy diagram
Component hierarchy for the multi-step supplier tile upload flow.

The core design goal was clear ownership. Each layer owns a narrow responsibility:

  • UploadProvider owns the batch file list, stable uploadIds, and object URL lifecycle (add/remove/cleanup). It does not own form state or mutations.

  • UploadPreviewItem owns one tile's lifecycle: build payload, run the upload mutation, show progress, and remove the item from the batch on success.

  • UploadPreviewForm owns form state (React Hook Form), step state, and credit rows via useFieldArray.

  • Each credit row owns its own supplier search input (debounced query + shared cache by search term).

Preventing Unnecessary Re-renders

I treated re-render strategy as a first-class requirement:

  • Stable keys: `uploadId` per tile card and `field.id` per credit row keep component identity stable.

  • High-frequency actions (typing, supplier search) stay local to a single card/row.

  • List-level updates only happen on low-frequency events (add/remove/clear files).
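The debounced, term-cached supplier search each credit row owns could be sketched framework-free; the delay, names, and result shape are assumptions:

```typescript
// Per-row supplier search: debounce keystrokes locally, but share results
// across rows via a term-keyed cache so the same term is fetched once.
type SearchFn = (term: string) => Promise<string[]>;

function createSupplierSearch(search: SearchFn, delayMs = 300) {
  const cache = new Map<string, Promise<string[]>>(); // shared by search term
  let timer: ReturnType<typeof setTimeout> | undefined;

  return (term: string): Promise<string[]> =>
    new Promise((resolve) => {
      clearTimeout(timer); // debounce: only the last keystroke fires
      timer = setTimeout(() => {
        if (!cache.has(term)) cache.set(term, search(term)); // one fetch per term
        cache.get(term)!.then(resolve);
      }, delayMs);
    });
}
```

In the real flow this caching is what TanStack Query's term-keyed queries provide; the sketch just shows why typing stays local to one row while results are shared.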

Outcome

  • Suppliers can batch upload multiple tiles and credit other suppliers without a clunky, error-prone form.

  • Each tile upload is isolated: one failing tile does not break the entire batch.

  • The flow stays extensible as requirements grow (more metadata, moderation, ranking signals).
