In the next 15 minutes, you'll extract a live SQL Server database into version-controlled files, deploy them to a fresh empty database, make a schema change, and watch SchemaSmith compute the exact DDL needed to bring the target in line.
By the end, you'll have completed the full SchemaSmith cycle — and you'll understand why teams stop writing migration scripts once they've seen this. This walkthrough uses the Northwind demo database on SQL Server. PostgreSQL and MySQL have their own Getting Started walkthroughs if you're running those engines.
You need three things:
- SchemaSmith tools. Download self-contained ZIP packages from the latest GitHub release and extract them anywhere on your PATH. No .NET runtime required.
- A database server. Either an existing SQL Server instance or Docker (recommended). The repository includes a Docker Compose file that starts the demo server for you.
- The demo files. Clone the repository or download just the Demos/ folder:
git clone https://github.com/Schema-Smith/SchemaSmithyFree.git
cd SchemaSmithyFree
From the repository root, start the SQL Server demo Docker environment:
docker compose -f Demos/SqlServer/docker-compose.yml up -d
This starts a SQL Server instance on port 1440, then deploys the matching demo databases using SchemaQuench. The credentials are in Demos/SqlServer/.env:
| Setting | Value |
|---|---|
| SQL Server | localhost,1440 |
| User | TestUser |
| Password | (see Demos/SqlServer/.env) |
Wait for the setup to finish — you can watch progress with docker compose -f Demos/SqlServer/docker-compose.yml logs -f. When the setup service exits cleanly, the databases are ready.
Now let's go the other direction. Pretend you already have a database and want to bring it under SchemaSmith management. SchemaTongs grips your live schema and casts it into structured files. Create a SchemaTongs configuration file called tongs-extract.json:
{
  "Source": {
    "Platform": "SqlServer",
    "Server": "localhost,1440",
    "User": "TestUser",
    "Password": "your-password-here",
    "Database": "Northwind"
  },
  "Product": {
    "Path": "./my-northwind",
    "Name": "Northwind"
  },
  "Template": {
    "Name": "Northwind"
  },
  "ShouldCast": {
    "Tables": true,
    "Schemas": true,
    "UserDefinedTypes": true,
    "Functions": true,
    "Views": true,
    "Procedures": true,
    "TableTriggers": true,
    "Catalogs": true,
    "StopLists": true,
    "DDLTriggers": true,
    "XMLSchemaCollections": true,
    "IndexedViews": true
  }
}
Run the extraction:
SchemaTongs --ConfigFile:tongs-extract.json
Look at what appeared in the my-northwind/ folder. You now have a complete schema package:
my-northwind/
  Product.json       ← Platform, templates, tokens
  .json-schemas/     ← JSON Schema validation, generated on the fly
  Templates/
    Northwind/
      Template.json
      Tables/
        dbo.Categories.json
        dbo.Customers.json
        dbo.Orders.json
        ... (13 tables)
      Views/
        dbo.Alphabetical list of products.sql
        ...
      Procedures/
        dbo.CustOrderHist.sql
        ...
      Schemas/
      Functions/
      Triggers/
      Table Data/
Every table is a JSON file describing its columns, indexes, and constraints. Every stored procedure and view is a plain SQL file. This is your entire database — materialized as readable, diffable, reviewable source files. Commit this to source control and you have a complete history of every schema change from this point forward. For the full set of extraction options, see the SchemaTongs reference.
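For a feel of the format, a minimal table file might look like this. This is an illustrative sketch only: the column entries use the shape shown later in this walkthrough, but the top-level keys (Schema, Name) and the exact representation of indexes and constraints are assumptions — check the SchemaTongs reference for the real layout.

```json
{
  "Schema": "[dbo]",
  "Name": "[Shippers]",
  "Columns": [
    { "Name": "[ShipperID]", "DataType": "INT", "Nullable": false, "OldName": "" },
    { "Name": "[CompanyName]", "DataType": "NVARCHAR(40)", "Nullable": false, "OldName": "" },
    { "Name": "[Phone]", "DataType": "NVARCHAR(24)", "Nullable": true, "OldName": "" }
  ]
}
```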
That's the cast. A live database, captured in files you can read, review, and version.
No GUI editor needed — the schema package is designed to read clearly in any editor or IDE. Open my-northwind/ in VS Code, JetBrains Rider, Visual Studio, or whatever your team uses. The .json-schemas/ folder gives you validation and autocomplete for every JSON file in the package automatically.
Open Tables/dbo.Categories.json. You'll see the column definitions, the primary key, and any indexes. Open a stored procedure in Procedures/ and you'll see the full SQL definition, ready to diff in a pull request. This is what schema review looks like when the files are designed for it: structured enough for tools, readable enough for humans, diff-friendly enough for code review.
Take a minute to click around. Every table, every view, every procedure in the Northwind database is here, structured and browsable. No server connection needed to explore.
Here's where it gets powerful. Let's deploy — quench — the Northwind schema to a completely empty database on the same server. Your declared state is about to harden into a live database. Create a SchemaQuench configuration file called quench-deploy.json:
{
  "Target": {
    "Server": "localhost,1440",
    "User": "TestUser",
    "Password": "your-password-here"
  },
  "WhatIfONLY": false,
  "SchemaPackagePath": "./demo/Northwind",
  "ScriptTokens": {
    "NorthwindDb": "NorthwindClone"
  }
}
Notice the ScriptTokens section: we're telling SchemaQuench to use NorthwindClone as the database name instead of Northwind. The {{NorthwindDb}} token in the template's scripts will resolve to this value, creating a brand-new database. You'll learn more about tokens in Products and Templates, and the full token system is documented in Script Token Mechanics.
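At its core, token resolution is string templating: each {{TokenName}} placeholder is replaced by its configured value before the script runs. A minimal sketch of the idea in Python — not SchemaQuench's actual implementation; the real resolution rules (nesting, defaults, and so on) are documented in Script Token Mechanics:

```python
import re

def resolve_tokens(script: str, tokens: dict) -> str:
    """Replace each {{TokenName}} placeholder with its configured value."""
    return re.sub(r"\{\{(\w+)\}\}", lambda m: tokens[m.group(1)], script)

# With ScriptTokens = {"NorthwindDb": "NorthwindClone"}, a template script
# referencing {{NorthwindDb}} resolves to the override you configured:
sql = resolve_tokens("CREATE DATABASE [{{NorthwindDb}}]",
                     {"NorthwindDb": "NorthwindClone"})
print(sql)  # CREATE DATABASE [NorthwindClone]
```

This is why one schema package can target many database names: the files stay generic, and the config supplies the concrete values at quench time.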
Run the deployment:
SchemaQuench --ConfigFile:quench-deploy.json
SchemaQuench reads the schema package, connects to the target server, and builds the entire database from scratch: creates NorthwindClone, runs migration scripts, creates all the tables with their columns and indexes, deploys all views and stored procedures. A complete, reproducible database from source files. Connect to localhost,1440 with any SQL client and you'll find NorthwindClone with the full Northwind schema.
One package. One command. A complete database — built exactly as declared, every time. That's what quenching looks like.
Now for the real satisfaction. Let's modify the schema and watch SchemaSmith figure out exactly what needs to change.
Open demo/Northwind/Templates/Northwind/Tables/dbo.Shippers.json (or your own re-extracted copy). The file declares the current state of the table: columns, indexes, the primary key. Add a new column for tracking email addresses. Insert this entry into the Columns array after the [Phone] column:
{
  "Name": "[Email]",
  "DataType": "NVARCHAR(100)",
  "Nullable": true,
  "OldName": ""
}
You just declared the desired state: "the Shippers table should have an Email column." You didn't write an ALTER TABLE script. You didn't check whether the column already exists. You described what the table should look like. You decide the shape. The forge handles the rest.
Now let's see what SchemaSmith will do — without actually touching the database. Run SchemaQuench in WhatIf mode by setting "WhatIfONLY": true in a copy of your settings file:
{
  "Target": {
    "Server": "localhost,1440",
    "User": "TestUser",
    "Password": "your-password-here"
  },
  "WhatIfONLY": true,
  "SchemaPackagePath": "./demo/Northwind",
  "ScriptTokens": {
    "NorthwindDb": "Northwind"
  }
}
Save this as quench-whatif.json and run:
SchemaQuench --ConfigFile:quench-whatif.json
In the output, you'll see [WhatIf] entries showing the computed changes. SchemaQuench compared the declared state (your JSON with the new Email column) against the live database (which has no Email column) and determined that an ALTER TABLE ... ADD is needed. No changes were applied — WhatIf mode is read-only. Preview before you commit. Confidence before you deploy.
Now apply it for real. In quench-whatif.json, change "WhatIfONLY": true to "WhatIfONLY": false and run it again:
SchemaQuench --ConfigFile:quench-whatif.json
SchemaQuench connects to the Northwind database, sees that dbo.Shippers is missing the [Email] column, and adds it. Every other table, view, and procedure is already in sync, so nothing else changes. Exactly the right delta, computed automatically. No more, no less.
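Conceptually, the delta comes from comparing the declared column set against what the live database reports. A simplified illustration of that comparison — this is not SchemaQuench's actual algorithm, which also handles type changes, nullability, indexes, constraints, and renames via OldName:

```python
def compute_column_delta(table: str, declared: dict, live: dict) -> list:
    """Emit ALTER TABLE statements for columns present in the declared
    state but missing from the live database (adds only, for brevity)."""
    statements = []
    for name, datatype in declared.items():
        if name not in live:
            statements.append(f"ALTER TABLE {table} ADD {name} {datatype}")
    return statements

# Declared state has [Email]; the live Shippers table does not.
declared = {"[CompanyName]": "NVARCHAR(40)", "[Phone]": "NVARCHAR(24)",
            "[Email]": "NVARCHAR(100)"}
live = {"[CompanyName]": "NVARCHAR(40)", "[Phone]": "NVARCHAR(24)"}
print(compute_column_delta("[dbo].[Shippers]", declared, live))
# ['ALTER TABLE [dbo].[Shippers] ADD [Email] NVARCHAR(100)']
```

Columns already in sync produce no statements at all, which is why the rest of the database is untouched.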
Connect to the database with whatever SQL client you prefer and look at the Shippers table:
SELECT TOP 1 [CompanyName], [Phone], [Email] FROM [dbo].[Shippers];
The Email column is there, right where you declared it. The file is the truth, the database matches, and your next commit records the change for posterity.
You just completed the full SchemaSmith workflow:
- Extracted a live database into a schema package with SchemaTongs.
- Deployed that package to a fresh, empty database with SchemaQuench.
- Declared a schema change by editing a JSON file.
- Previewed the computed DDL in WhatIf mode, then applied it.
No migration scripts. No ordered chains of ALTERs. No guessing what the target looks like. You declare the state you want, and SchemaSmith gets you there. Every time.
Last reviewed May 2026 by the SchemaSmith Team.
You just did state-based schema management. Now let's understand what's happening under the hood — products, templates, tokens, and the deployment pipeline that makes it all work. Same concepts across all three platforms.