Use case · Verified May 2026
Seed your local database with one command
Open the project. Run the seed command. Have 50,000 customers, 200,000 orders, and a dashboard that does not look like a stage play. No real data leaves your machine, no PII gets near your laptop, and the team's local DBs all look the same.
Short answer
Generate a fixture set in SynthForge (multi-table, FK-respecting, your dialect of choice). Save it to your repo or CI artifact store. Add one line to your project's seed command. Every developer's local database now looks the same.
The situation
The 'works on my machine because my DB has data and yours does not' bug is a tax. Common workarounds:
- a Postgres dump that is 400 MB and contains the QA team's last quarter of testing
- a Faker script that generates one table at a time and breaks every time someone adds a column
- a Notion page that lists six manual steps, including 'go to /admin and click the seed button', which has not worked since 2024
What works better: a generated fixture file, checked into the repo or stored as a CI artifact, that loads in seconds via psql / mysql / sqlite3 and produces the same database for everyone.
The dataset has to be realistic enough that the dashboards, charts, and N+1 queries actually surface during local dev. Faker's uniform distributions are not realistic; sequential timestamps are not realistic; orphaned FK values are not realistic. SynthForge handles all three out of the box.
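The "loads in seconds, same database for everyone" property is easy to see in miniature. Here is a hedged sketch using Python's stdlib `sqlite3` with a toy two-table seed script; the table names and rows are invented for illustration, and a real generated file would hold thousands of rows, but the loading step is the same single call:

```python
import sqlite3

# Toy stand-in for a generated seed file: schema plus FK-respecting INSERTs.
# (Invented example data, not SynthForge output.)
SEED_SQL = """
CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT NOT NULL);
CREATE TABLE orders (
    id INTEGER PRIMARY KEY,
    customer_id INTEGER NOT NULL REFERENCES customers(id),
    total_cents INTEGER NOT NULL
);
INSERT INTO customers VALUES (1, 'Ada'), (2, 'Grace');
INSERT INTO orders VALUES (1, 1, 4200), (2, 1, 999), (3, 2, 1500);
"""

def load_seed(sql: str) -> sqlite3.Connection:
    """One call, same result on every machine -- the psql -f equivalent."""
    conn = sqlite3.connect(":memory:")
    conn.executescript(sql)
    return conn

conn = load_seed(SEED_SQL)
counts = {t: conn.execute(f"SELECT COUNT(*) FROM {t}").fetchone()[0]
          for t in ("customers", "orders")}
print(counts)  # {'customers': 2, 'orders': 3}
```

Because the file is static, two developers who run the same load get byte-for-byte identical databases, which is the whole point.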
How to set this up
1. Define your schema in SynthForge
Generate it with AI from a description, paste a CREATE TABLE statement, or build it field-by-field. Mark the foreign-key columns. Choose your numeric distributions (Normal for ages, LogNormal for prices, Triangular for ratings).
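To see why the distribution choice matters, here is a minimal sketch using Python's stdlib `random`. The parameters (mean age 38, price scale, rating mode 4) are illustrative assumptions, not SynthForge defaults:

```python
import random

random.seed(42)  # fixed seed: the fixture is reproducible across machines

# Illustrative parameters only -- tune to your own data's shape.
ages    = [random.normalvariate(mu=38, sigma=12)    for _ in range(1000)]  # bell curve around 38
prices  = [random.lognormvariate(mu=3.5, sigma=0.8) for _ in range(1000)]  # skewed, long right tail, never negative
ratings = [random.triangular(low=1, high=5, mode=4) for _ in range(1000)]  # bounded 1-5, peak near 4
```

A uniform draw would give as many $2 orders as $2,000 orders and as many 1-star ratings as 4-star ones, which is exactly why uniform fixtures make dashboards look fake.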
2. Generate a seed-sized dataset
Local seed datasets are usually small: 1,000 to 50,000 rows total. Generate, then export as SQL (INSERT statements) for the simplest case, or as CSV (with the loader script the export includes) for larger sizes.
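For the CSV route, a loader boils down to one bulk insert per table, parents before children. A minimal sketch with stdlib `csv` and `sqlite3` (the file contents and column names are invented; SynthForge's bundled loader script may differ):

```python
import csv
import io
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")

# Stand-in for a customers.csv produced by the export.
csv_text = "id,name\n1,Ada\n2,Grace\n"
rows = [{"id": int(r["id"]), "name": r["name"]}
        for r in csv.DictReader(io.StringIO(csv_text))]

# One executemany per table; for Postgres you would reach for COPY instead.
conn.executemany("INSERT INTO customers VALUES (:id, :name)", rows)
n = conn.execute("SELECT COUNT(*) FROM customers").fetchone()[0]
print(n)  # 2
```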
```
# Project layout
your-app/
  db/
    schema.sql              # owned by your migrations
    seed/
      synthforge-seed.sql   # generated, checked in
  Makefile
```

```make
# Makefile target
seed:
	psql $$DATABASE_URL -f db/seed/synthforge-seed.sql
```

3. Wire it into the new-developer setup
Add `make seed` to your README, your Makefile, your devcontainer setup, or your nx / pnpm command surface. The whole team's local DB now looks the same.
4. Re-generate when the schema changes
Save the SynthForge schema URL in your repo. When migrations land, open the schema, regenerate, replace the seed file, commit. Treat it like any other build artifact.
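One cheap way to notice a stale seed file is to compare modification times of the schema and seed files. A hedged sketch (the paths follow the layout above; in CI, comparing git commit timestamps would be more robust than filesystem mtimes):

```python
import os
import tempfile

def seed_is_stale(schema_path: str, seed_path: str) -> bool:
    """True when the schema file changed more recently than the seed file."""
    return os.path.getmtime(schema_path) > os.path.getmtime(seed_path)

# Self-contained demo: temp files stand in for db/schema.sql and
# db/seed/synthforge-seed.sql, with timestamps set explicitly.
with tempfile.TemporaryDirectory() as d:
    schema = os.path.join(d, "schema.sql")
    seed = os.path.join(d, "seed.sql")
    for p in (schema, seed):
        open(p, "w").close()
    os.utime(seed, (1_000, 1_000))    # seed generated first...
    os.utime(schema, (2_000, 2_000))  # ...then a migration landed
    stale = seed_is_stale(schema, seed)
    print(stale)  # True
```

Wired into CI as a warning (not a hard failure), this catches the "migration merged, seed forgotten" case before it confuses the next new hire.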
When something else is the right call
Honest alternatives in case SynthForge is not the best fit for your specific situation.
Postgres dump from staging
When real data is legally OK to copy and the dump size is acceptable. Higher fidelity than synthetic, but contains real PII unless de-identified first.
Hand-written seed.sql
When the seed dataset is under ~50 rows and the values matter (admin user, default settings, lookup tables). Combine: hand-write the lookup tables, generate the volume tables.
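The combined approach comes down to load order: hand-written lookup rows first, generated volume rows (which reference them) second. A minimal sketch with in-memory `sqlite3`; the table names and rows are invented:

```python
import sqlite3

# Hand-written: small, values matter, lives in seed.sql.
HAND_WRITTEN = """
CREATE TABLE order_status (id INTEGER PRIMARY KEY, label TEXT NOT NULL);
INSERT INTO order_status VALUES (1, 'pending'), (2, 'shipped'), (3, 'refunded');
"""

# Generated: volume table whose FKs point at the lookup rows above.
GENERATED = """
CREATE TABLE orders (
    id INTEGER PRIMARY KEY,
    status_id INTEGER NOT NULL REFERENCES order_status(id)
);
INSERT INTO orders VALUES (1, 2), (2, 1), (3, 2);
"""

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.executescript(HAND_WRITTEN)  # lookup tables first
conn.executescript(GENERATED)     # volume tables second
n = conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
print(n)  # 3
```

With psql the same idea is two `-f` flags in order: the hand-written file, then the generated one.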
Faker + a custom Python script
When you have a strong preference for keeping seeding inline with your codebase. Expect to write FK resolution code yourself.
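The FK-resolution code you end up writing usually looks like this: generate parent rows first, then draw every child FK from the actual parent ids. A hedged sketch with stdlib `random` standing in for Faker (ids and row counts are invented):

```python
import random

random.seed(7)  # reproducible fixtures

# Parents first: whatever ids the generator actually produced.
customer_ids = list(range(1, 51))

# Children second: every order's customer_id is drawn from real parent ids,
# so no orphaned FKs. This is the step Faker alone does not do for you.
orders = [
    {"id": i, "customer_id": random.choice(customer_ids)}
    for i in range(1, 201)
]

id_set = set(customer_ids)
assert all(o["customer_id"] in id_set for o in orders)
```

Multiply this by every FK edge in your schema and the maintenance cost of the script becomes clear, which is the trade-off being flagged above.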
Frequently asked questions
Can I check the SynthForge output into my repo?
Will it match my schema exactly, including indexes, triggers, and constraints?
How do I keep the seed dataset under a reasonable size?
What about admin users and lookup tables?
Try SynthForge for free
Design a multi-table schema, generate referentially-intact data, and export to your database. No credit card.