The daily rhythm of shaping a SQL Server schema with SchemaSmith. Add tables, modify columns, write procedures, extract drift from a live database, and bootstrap new environments — all from one schema package.
Now it's time to give your database form.
Your team needs a Promotions table to track discount campaigns. Here's what you do.
Add dbo.Promotions.json to your package's Tables/ folder:
{
  "Schema": "[dbo]",
  "Name": "[Promotions]",
  "CompressionType": "NONE",
  "Columns": [
    {
      "Name": "[PromotionID]",
      "DataType": "INT IDENTITY(1, 1)",
      "Nullable": false
    },
    {
      "Name": "[PromotionName]",
      "DataType": "NVARCHAR(100)",
      "Nullable": false
    },
    {
      "Name": "[DiscountPercent]",
      "DataType": "DECIMAL(5,2)",
      "Nullable": false,
      "Default": "0",
      "CheckExpression": "[DiscountPercent]>=(0) AND [DiscountPercent]<=(100)"
    },
    {
      "Name": "[StartDate]",
      "DataType": "DATE",
      "Nullable": false
    },
    {
      "Name": "[EndDate]",
      "DataType": "DATE",
      "Nullable": true
    },
    {
      "Name": "[IsActive]",
      "DataType": "BIT",
      "Nullable": false,
      "Default": "1"
    }
  ],
  "Indexes": [
    {
      "Name": "[PK_Promotions]",
      "PrimaryKey": true,
      "Unique": true,
      "Clustered": true,
      "IndexColumns": "[PromotionID]"
    },
    {
      "Name": "[IX_Promotions_StartDate]",
      "IndexColumns": "[StartDate]"
    }
  ]
}
SchemaQuench
SchemaQuench reads the JSON, sees that dbo.Promotions doesn't exist in the target database, and generates a CREATE TABLE statement. One file. One command. Done.
Compare this to the traditional approach: write a CREATE TABLE script, write a migration file with a sequence number, make sure the sequence number doesn't collide with anyone else's, add an IF NOT EXISTS guard, add a corresponding rollback script, update a migrations tracking table. With SchemaSmith, you created one file and ran one command. No migration scripts. No dependency ordering. No collision worries.
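To make that concrete, the DDL SchemaQuench generates for the missing table is roughly what you would write by hand. This is a sketch implied by the JSON above, not SchemaQuench's literal output; the exact formatting and constraint naming may differ:

```sql
-- Approximation of the generated DDL for dbo.Promotions;
-- actual output formatting and constraint names may differ.
CREATE TABLE [dbo].[Promotions]
(
    [PromotionID]     INT IDENTITY(1, 1) NOT NULL,
    [PromotionName]   NVARCHAR(100)      NOT NULL,
    [DiscountPercent] DECIMAL(5,2)       NOT NULL DEFAULT (0)
        CHECK ([DiscountPercent]>=(0) AND [DiscountPercent]<=(100)),
    [StartDate]       DATE               NOT NULL,
    [EndDate]         DATE               NULL,
    [IsActive]        BIT                NOT NULL DEFAULT (1),
    CONSTRAINT [PK_Promotions] PRIMARY KEY CLUSTERED ([PromotionID])
);

CREATE NONCLUSTERED INDEX [IX_Promotions_StartDate]
    ON [dbo].[Promotions] ([StartDate]);
```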
The Promotions table needs changes. Marketing wants a description field, the discount column needs more precision, and you need an index on the active flag for a dashboard query. All three edits happen in the same JSON file — you shape the table right where it lives.
Insert a new entry in the Columns array:
{
  "Name": "[Description]",
  "DataType": "NVARCHAR(500)",
  "Nullable": true
}
Find the DiscountPercent column and edit its DataType:
"DataType": "DECIMAL(7,4)"
Add a new entry in the Indexes array:
{
  "Name": "[IX_Promotions_IsActive]",
  "IndexColumns": "[IsActive]",
  "FilterExpression": "[IsActive] = 1"
}
Preview the changes before touching the database. Run SchemaQuench in WhatIf mode:
SmithySettings_WhatIfONLY=true SchemaQuench
SchemaQuench generates the SQL it would execute — an ALTER TABLE ... ADD for the new column, an ALTER TABLE ... ALTER COLUMN for the data type change, and a CREATE INDEX for the new filtered index — and logs it all without applying anything. Read the generated SQL, confirm it looks right, then run SchemaQuench normally to quench the changes into your database. Three changes to one file. One preview. One command. No scripts to write, number, or maintain.
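For the three edits above, the previewed DDL would look along these lines. This is a sketch of the statements described in the preview, not SchemaQuench's literal log output:

```sql
-- Approximate DDL for the three edits; actual output may differ.
ALTER TABLE [dbo].[Promotions] ADD [Description] NVARCHAR(500) NULL;

ALTER TABLE [dbo].[Promotions] ALTER COLUMN [DiscountPercent] DECIMAL(7,4) NOT NULL;

CREATE NONCLUSTERED INDEX [IX_Promotions_IsActive]
    ON [dbo].[Promotions] ([IsActive])
    WHERE [IsActive] = 1;  -- filtered index from FilterExpression
```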
Stored procedures, functions, views, and triggers work differently from tables. Instead of JSON, they're plain .sql files. Each object gets its own file in the matching folder:
| Object type | Folder |
|---|---|
| Stored procedures | Procedures/ |
| Functions | Functions/ |
| Views | Views/ |
| Triggers | Triggers/ |
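Putting the pieces together, a package tree might look like this. The layout is inferred from the folder names and file paths mentioned in this article; the view's file name is illustrative:

```text
Templates/
└── Northwind/
    ├── Tables/
    │   └── dbo.Promotions.json
    ├── Procedures/
    │   └── dbo.CustOrderHist.sql
    ├── Functions/
    ├── Views/
    │   └── Products Above Average Price.sql   (illustrative name)
    └── Triggers/
```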
Here's a SQL Server stored procedure that returns the order history for a customer. Create dbo.CustOrderHist.sql in the Procedures/ folder:
SET ANSI_NULLS ON
SET QUOTED_IDENTIFIER ON
GO
CREATE OR ALTER PROCEDURE dbo.CustOrderHist @CustomerID nchar(5)
AS
SELECT P.ProductName, Total = SUM(OD.Quantity)
FROM Customers C
    INNER JOIN Orders O ON O.CustomerID = C.CustomerID
    INNER JOIN [Order Details] OD ON OD.OrderID = O.OrderID
    INNER JOIN Products P ON P.ProductID = OD.ProductID
WHERE C.CustomerID = @CustomerID
GROUP BY P.ProductName
GO
The key detail: CREATE OR ALTER. This is idempotent. It works whether the procedure exists or not. No IF EXISTS ... DROP guard. No separate create-vs-alter logic. SchemaQuench runs the script as-is, and the database engine handles the rest.
Views work the same way. Drop this file into the Views/ folder:
SET ANSI_NULLS ON
SET QUOTED_IDENTIFIER ON
GO
CREATE OR ALTER VIEW "Products Above Average Price" AS
SELECT Products.ProductName, Products.UnitPrice
FROM Products
WHERE Products.UnitPrice > (SELECT AVG(UnitPrice) FROM Products);
GO
Need to update an existing procedure? Edit the .sql file and quench. The CREATE OR ALTER takes care of whether it's new or changed. There's no separate "alter" workflow — you always declare the full object definition, and SchemaQuench applies it.
Someone changed the database directly. Maybe a DBA added a column in production during an incident. Maybe a developer used SSMS to tweak an index on staging. The database has drifted from the package.
SchemaTongs
SchemaTongs connects to the database, reads every table, procedure, view, function, and trigger — plus SQL Server extras like XML indexes, full-text indexes, and custom statistics — and writes the current definitions to the package files. Changed objects update in place. New objects get new files.
Run SchemaTongs against the drifted database, then review what changed with git diff:
$ git diff
--- a/Templates/Northwind/Tables/dbo.Products.json
+++ b/Templates/Northwind/Tables/dbo.Products.json
@@ -48,6 +48,12 @@
},
+ {
+ "Name": "[BackorderThreshold]",
+ "DataType": "INT",
+ "Nullable": true,
+ "Default": "10"
+ },
The diff reads like a sentence: "someone added a BackorderThreshold column to the Products table with a default of 10." Compare that to trying to figure out what changed by comparing two database snapshots or reading through audit logs. The drift is captured. The mystery is over.
SchemaTongs does more than dump scripts to flat folders. When you cast your database schema, the tool brings real intelligence to the extraction — preserving organization, carrying custom metadata forward, detecting orphans, focusing extraction on what matters, validating scripts, and reconciling constraint style.
Organize scripts by domain — Tables/Sales/, Tables/HR/, Procedures/Reporting/. SchemaTongs preserves existing subfolder locations on re-extraction. New objects that haven't been organized go to the root folder.
Custom metadata attached to tables via the Extensions carrier — data classification, ownership, retention policies — is carried forward on every re-extraction. Your sidecar data survives as long as the underlying object stays.
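The Extensions carrier is free-form sidecar data, so its contents are your own convention. A sketch of what it might look like inside a table's JSON, with the property names and values inside Extensions as illustrative assumptions (the rest of the table definition, Columns and Indexes, is elided):

```json
{
  "Schema": "[dbo]",
  "Name": "[Promotions]",
  "Extensions": {
    "DataClassification": "Internal",
    "Owner": "marketing-platform",
    "RetentionPolicy": "3 years after campaign end"
  }
}
```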
When a database object is dropped, its script file becomes an orphan. SchemaTongs offers three modes:
| Mode | Behavior |
|---|---|
| Detect | Logs orphans, no action. Default. |
| DetectWithCleanupScripts | Logs orphans, generates cleanup scripts. |
| DetectDeleteAndCleanup | Deletes files, generates cleanup scripts. |
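For illustration, a generated cleanup script for an orphaned procedure amounts to a guarded drop. This is a sketch of the idea, not SchemaTongs's literal output, and the procedure name is only an example:

```sql
-- Hypothetical cleanup for a procedure whose script file became
-- an orphan after the object was dropped from the database.
DROP PROCEDURE IF EXISTS [dbo].[usp_GetOrders];
```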
Cast only what you care about. ShouldCast flags toggle whole object types on or off — Tables, Views, Procedures, Functions, TableTriggers, and SQL Server extras like UserDefinedTypes, IndexedViews, and DDLTriggers.
For a tighter scope, ObjectList restricts extraction to a named set of objects — a comma-separated list like dbo.Customer, usp_GetOrders. Focus a cast on your real surface without pulling every legacy table.
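Assuming these knobs sit together in the SchemaTongs settings, a configuration fragment might look like the following. The ShouldCast and ObjectList names come from the description above, but the exact key spellings and surrounding file shape are assumptions:

```json
{
  "ShouldCastTables": true,
  "ShouldCastViews": true,
  "ShouldCastProcedures": true,
  "ShouldCastFunctions": false,
  "ShouldCastTableTriggers": false,
  "ObjectList": "dbo.Customer, usp_GetOrders"
}
```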
With ValidateScripts enabled, SchemaTongs checks each extracted script against the database to verify it parses correctly. Invalid scripts are saved with a .sqlerror extension instead of .sql, making them visible but excluded from deployment until you fix them.
The constraint style option controls whether check constraints are extracted as column-level properties inside the table JSON or as table-level named constraints. The default is ColumnLevel. Set it once in Product.json; the style is locked to whatever the file specifies.
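To make the two styles concrete: column-level keeps the expression on the column itself via CheckExpression, as DiscountPercent does earlier in this article, while table-level pulls it out into a named constraint. A sketch of the table-level shape, where the property names are assumptions rather than confirmed SchemaSmith schema:

```json
{
  "Schema": "[dbo]",
  "Name": "[Promotions]",
  "CheckConstraints": [
    {
      "Name": "[CK_Promotions_DiscountPercent]",
      "Expression": "[DiscountPercent]>=(0) AND [DiscountPercent]<=(100)"
    }
  ]
}
```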
Some products need to create their target database from scratch. CI pipelines spin up fresh containers. Docker Compose environments start from nothing. New developers clone the repo and need a working database in one command. The Initialize template pattern handles all of these. Three pieces work together. Here's how the Northwind demo product sets it up on SQL Server.
Templates/Initialize/Template.json:
{
  "Name": "Initialize",
  "DatabaseIdentificationScript": "SELECT [Name] FROM master.sys.databases WHERE [Name] = 'master' AND NOT EXISTS (SELECT 1 FROM master.sys.databases WHERE [Name] = '{{NorthwindDb}}')"
}
The DatabaseIdentificationScript is the key. It returns a result only when the target database doesn't yet exist — it matches master (a database that always exists on the server) but only when NorthwindDb is missing. On the first run, this template activates and creates the database. On every subsequent run, the script returns no rows, SchemaQuench skips the template entirely, and deployment proceeds straight to the main template.
Templates/Initialize/Before Scripts/Create Northwind [ALWAYS].sql:
IF NOT EXISTS (
    SELECT 1 FROM master.sys.databases
    WHERE [Name] = '{{NorthwindDb}}'
)
BEGIN
    CREATE DATABASE [{{NorthwindDb}}]
END
The [ALWAYS] marker tells SchemaQuench to run this script every time the Initialize template is active — no version tracking needed. The IF NOT EXISTS guard makes the script safe to re-run, though in practice it only executes once because the template self-selects out after the database exists.
Product.json defines the template order:
{
  "Name": "Northwind",
  "Platform": "SqlServer",
  "ValidationScript": "SELECT CAST(1 AS BIT)",
  "TemplateOrder": [
    "Initialize",
    "Northwind"
  ],
  "ScriptTokens": {
    "NorthwindDb": "Northwind"
  }
}
TemplateOrder ensures Initialize runs first. If the database doesn't exist, Initialize creates it, then the Northwind template deploys the full schema. If the database already exists, Initialize is skipped and Northwind deploys any pending changes. One docker compose up bootstraps everything from an empty server. Subsequent runs skip Initialize automatically and apply only schema changes. Fresh environment or existing environment, same command, same result.
Last reviewed May 2026 by the SchemaSmith Team.
That's how you shape your schema — adding tables, modifying columns, writing procedures, casting changes, and bootstrapping new environments. Clean, repeatable, no surprises. When you're ready to apply these declarations to a target database, SchemaQuench is the tool that computes and executes the DDL.