The full SchemaQuench reference for SQL Server deployments — configuration, execution flow, WhatIf previews, migration tracking, checkpoint/resume, and FK-aware data delivery.
Take your declared schema and harden it onto a live database — that's what SchemaQuench does.
It reads a schema package, connects to the target server, and transforms each database to match the desired state. No hand-written ALTER scripts, no guessing what changed. Run it against dev, staging, and production with the same package, the same confidence, and the same boring, predictable result every time. SchemaQuench compares current state against desired state, makes only the changes necessary, and tracks migration scripts so they execute only once.
One executable, three platforms. The product's Platform value (SqlServer, PostgreSQL, or MySQL) tells SchemaQuench which adapter, which DDL flavor, and which set of helper procedures to use. Everything else looks the same.
SchemaQuench is included in the SchemaSmith distribution. Run it from the directory containing SchemaQuench.settings.json:
SchemaQuench
SchemaQuench --ConfigFile:path/to/alternate.settings.json
SchemaQuench --LogPath:path/to/logs
Point SchemaQuench at a SQL Server target with --ConnectionString:
SchemaQuench --ConnectionString:"Data Source=myserver;Initial Catalog=master;User ID=sa;Password=secret;TrustServerCertificate=True;"
The --ConnectionString switch bypasses all Target settings and passes the value directly to the SQL Server driver.
SchemaQuench reads configuration from SchemaQuench.settings.json (or the file specified by --ConfigFile), environment variables with the SmithySettings_ prefix, and command-line switches. Later sources override earlier ones. For the full loading-order and precedence rules, see Configuration.
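As a sketch of the environment-variable form, a CI job might export overrides before running the tool. The key names here are hypothetical: this assumes the conventional double-underscore separator for nested keys, so confirm the exact mapping against the Configuration page.

```shell
# Hypothetical key names; Target__Server would map to Target:Server
# only if the double-underscore nesting convention applies.
export SmithySettings_Target__Server="staging-sql-01"
export SmithySettings_MaxThreads="4"
SchemaQuench
```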
| Key | Type | Default | Description |
|---|---|---|---|
| Target:Server | string | (required) | Database server hostname or IP. |
| Target:Port | string | 1433 | TCP port for SQL Server. |
| Target:User | string | (empty) | Login username. SQL Server allows blank for Windows auth. |
| Target:Password | string | (empty) | Login password. |
| Target:SecondaryServers | string | (empty) | Comma-separated list of Availability Group secondary servers to quench in parallel with the primary. See Secondary servers. |
| Target:ConnectionProperties | object | {} | Arbitrary key-value pairs appended to the connection string — e.g., TrustServerCertificate, Encrypt, ApplicationIntent. |
| Key | Type | Default | Description |
|---|---|---|---|
| SchemaPackagePath | string | (required) | Path to the schema package directory or ZIP file. |
| WhatIfONLY | bool | false | Dry-run mode. Generates SQL without executing. |
| KindleTheForge | bool | true | Deploy SchemaSmith helper procedures and the migration tracking table to each target database before quenching. |
| UpdateTables | bool | true | Apply table structure changes (columns, indexes, constraints, foreign keys) from the schema package. |
| DropTablesRemovedFromProduct | bool | true | Drop tables that exist in the database but aren't defined in the schema package. |
| RunScriptsTwice | bool | false | Run object scripts twice to verify idempotency. A CI/testing tool. |
| TrackRunOnceMigrations | bool | true | Track run-once migration scripts. When false, all scripts run on every deployment. |
| PruneObsoleteMigrationTracking | bool | true | Remove tracking entries for scripts no longer in the package. |
| MaxThreads | int | 10 | Number of parallel database operations. Range 1–20. |
| VerboseLogging | bool | false | Include PRINT informational output from user scripts in logs. |
| ScriptTokens | object | {} | Config-level overrides for product script tokens. |
{
"Target": {
"Server": "localhost",
"Port": "",
"User": "",
"Password": "",
"SecondaryServers": "",
"ConnectionProperties": {
"TrustServerCertificate": "True"
}
},
"WhatIfONLY": false,
"SchemaPackagePath": "./MyProduct",
"KindleTheForge": true,
"UpdateTables": true,
"DropTablesRemovedFromProduct": true,
"RunScriptsTwice": false,
"TrackRunOnceMigrations": true,
"PruneObsoleteMigrationTracking": true,
"MaxThreads": 10,
"VerboseLogging": false,
"ScriptTokens": {}
}
For environment variable mapping, see Environment variables.
SQL Server deployments targeting Availability Groups can quench to a primary plus one or more secondary servers in parallel. Configure secondaries on Target:
{
"Target": {
"Server": "primary-replica",
"SecondaryServers": "secondary-1,secondary-2"
}
}
When a secondary list is configured, SchemaQuench routes each product-level folder to the right server based on its ServerToQuench setting (Primary, Secondary, or Both). Templates target the primary; product-level scripts can target either side. See Products & Templates — Platform Differences for the package side of the configuration.
When SchemaQuench runs, the product quench executes these steps in order:
1. If Product.ValidationScript is configured, executes it against master. Aborts if the result is falsy.
2. If Product.BaselineValidationScript is configured, executes it. Aborts if the result is falsy.
3. Runs scripts in the Before Product folder(s). With secondary servers, scripts run in parallel to all eligible servers.
4. For each template, in Product.TemplateOrder:
   - Reads Template.json and merges template-level ScriptTokens over the product token set.
   - Runs the DatabaseIdentificationScript against master to discover target databases.
   - Quenches each identified database (up to MaxThreads concurrent operations).
5. Runs scripts in the After Product folder(s).
6. If Product.VersionStampScript is configured, executes it.

After the quench returns, the calling program backs up log files to a numbered directory and exits with code 0 (see Exit codes).
For each database identified by a template's DatabaseIdentificationScript, the database quench runs the following sequence. All steps execute on the identified database.
1. Deploys the SchemaSmith helper procedures. Skipped when KindleTheForge is false.
2. Runs Template.BaselineValidationScript if configured. Aborts if falsy.
3. Runs scripts in the Objects-slot folders using the dependency retry loop. If RunScriptsTwice is enabled, resets all scripts and runs a complete second pass to verify idempotency.
4. Parses the Tables/*.json definitions into temp/staging tables for the modular procedures to consume.
5. Runs MissingTableAndColumnQuench to create missing tables and add missing columns.
6. Retries any still-unresolved Objects-slot scripts now that missing tables and columns exist.
7. Runs migration scripts in the Before slot. Sequential and tracked.
8. Runs ModifiedTableQuench to alter existing columns and drop removed tables.
9. Retries any still-unresolved Objects-slot scripts now that table modifications are complete.
10. Runs migration scripts in the BetweenTablesAndKeys slot. Sequential and tracked.
11. Runs MissingIndexesAndConstraintsQuench to create missing indexes, constraints, defaults, and statistics.
12. Runs migration scripts in the AfterTablesScripts slot. Sequential and tracked.
13. Runs scripts in the AfterTablesObjects-slot folders (triggers, DDL triggers, rules, post-table views) using the dependency retry loop. Also retries any still-unresolved Objects-slot scripts.
14. Merges DataDelivery blocks, ordered by foreign key dependencies. See Table data delivery. Then executes any hand-written scripts in the TableData slot using the dependency retry loop.
15. Runs ForeignKeyQuench to create, modify, and drop foreign keys.
16. Runs the IndexedViewQuench procedure.
17. Runs migration scripts in the After slot. Sequential and tracked.
18. Runs Template.VersionStampScript if configured.

When UpdateTables is false, steps 4 through 16 are skipped entirely. When IndexOnlyTableQuenches is enabled on a template, steps 4–8 (parse JSON, missing tables, second Objects pass, Before scripts, modified tables) are replaced by a single call to the IndexOnlyQuench procedure. Steps 9–16 still execute, with MissingIndexesAndConstraintsQuench (step 11) and ForeignKeyQuench (step 15) skipped.
SchemaQuench assigns every script folder to a quench slot that determines when the folder's scripts execute and how they are handled. The slot list is the same on every platform; the default folders vary by platform.
| Slot | Execution style |
|---|---|
| Before | Sequential, tracked |
| Objects | Dependency retry loop |
| BetweenTablesAndKeys | Sequential, tracked |
| AfterTablesScripts | Sequential, tracked |
| AfterTablesObjects | Dependency retry loop |
| TableData | Dependency retry loop |
| After | Sequential, tracked |
| Slot | Execution style |
|---|---|
| Before | Sequential |
| After | Sequential |
Product scripts run against the administrative connection, outside the per-database template loop.
Product-level Before and After scripts are tracked in CompletedMigrationScripts so they only run once (unless marked [ALWAYS]).

See exactly what SchemaQuench would do before it touches a single table. Set WhatIfONLY to true to perform a dry run. In WhatIf mode:
- The table quench procedures are called with @WhatIf = 1, generating the SQL that would be executed and logging it without applying changes.
- Migration scripts that haven't yet been tracked are logged as Would APPLY: {script}.
- Scripts already recorded in CompletedMigrationScripts are logged as Would SKIP (previously quenched): {script}.

WhatIf shows the top level of changes, not the full cascade. Because nothing actually executes, WhatIf can't show ripple effects that depend on earlier changes having been applied. For example, if an object script drops an index, that script doesn't run in WhatIf mode, so the index still exists when WhatIf analyzes table changes — meaning the table diff won't show the index as needing to be recreated. WhatIf is a confidence check, not a guarantee. It catches the majority of issues but the full deployment may produce additional changes that WhatIf couldn't predict.
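Assuming boolean switches follow the same --Key:value shape shown above for --ConfigFile and --LogPath, a dry run from the command line would look like:

```shell
# Override WhatIfONLY for a single run without editing the settings file.
SchemaQuench --WhatIfONLY:true
```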
During both normal and WhatIf runs, SchemaQuench writes the SQL generated by the table quench process to files in the working directory:
- SchemaQuench - ParseJson {DatabaseName}.sql
- SchemaQuench - MissingTableAndColumnQuench {DatabaseName}.sql
- SchemaQuench - ModifiedTableQuench {DatabaseName}.sql
- SchemaQuench - MissingIndexesAndConstraintsQuench {DatabaseName}.sql
- SchemaQuench - ForeignKeyQuench {DatabaseName}.sql
- SchemaQuench - IndexedViewQuench {DatabaseName}.sql
- SchemaQuench - IndexOnlyQuench {DatabaseName}.sql (when IndexOnlyTableQuenches is enabled)

These files can be reviewed to understand exactly what structural changes were (or would be) made.
Reach for WhatIf while you're debugging a tricky deployment or while you're still building confidence with the tooling. Inspect the generated SQL, confirm the changes match intent, then run for real. Once you trust the package and the pipeline, direct quenches are the normal mode — WhatIf isn't a required gate on every deployment.
Before SchemaQuench can shape your database, it needs its tools in place. KindleTheForge deploys the SchemaSmith infrastructure to each target database. The infrastructure includes the SQL Server helper functions, the modular table-quench procedures, the IndexedViewQuench procedure, the reverse-engineering procedures used by SchemaTongs, and the CompletedMigrationScripts tracking table.
KindleTheForge runs on every quench to ensure the helper procedures match the version of SchemaQuench being used. In a normal release pipeline, always leave this true.
Data-fix and patch deployments turn this off so the run can't alter structure. See Data fixes — the datafix profile for the full flag combination and rationale.
The table quench is broken into modular stored procedures, each handling a specific aspect of the table schema. The procedures are deployed during the KindleTheForge step and called in sequence during the database quench.
| Procedure | Responsibility |
|---|---|
| MissingTableAndColumnQuench | Creates tables that exist in the schema package but not in the database. Adds columns that exist in the table definition but are missing from the existing table. |
| ModifiedTableQuench | Alters existing columns to match the schema package definitions. Handles data type, nullability, default constraint, and computed column changes. Drops removed tables when DropTablesRemovedFromProduct is enabled. |
| MissingIndexesAndConstraintsQuench | Creates indexes, check constraints, default constraints, and statistics that exist in the schema package but are missing from the database. |
| ForeignKeyQuench | Creates, modifies, and drops foreign keys to match the schema package. Runs late in the sequence so all referenced tables and columns exist. |
| IndexOnlyQuench | Alternative to the full sequence. Manages indexes only — doesn't create tables, add columns, or manage foreign keys. Used when IndexOnlyTableQuenches is enabled on a template. |
| IndexedViewQuench | Deploys indexed views with diff-based change detection. |
The implementation lives in the deployed SQL on the target database — which means a DBA can read it on the server with sp_helptext. No black boxes.
The quench procedures are deployed to the target database during the KindleTheForge step and remain there afterward. You can call them directly from Before Scripts, After Scripts, or any migration script to bootstrap specific tables or views as part of a data migration.
The typical pattern uses specific-object tokens to quench individual objects that your migration script depends on, rather than passing the entire schema. First, define a token in your Product.json or Template.json:
{
"ScriptTokens": {
"AuditLogTable": "<*SpecificTable*>dbo.AuditLog"
}
}
Then call TableQuench to ensure the specific table exists with the right structure before your migration script runs:
-- Bootstrap the AuditLog table so we can insert into it during this migration
EXEC SchemaSmith.TableQuench
@ProductName = '{{ProductName}}',
@TableDefinitions = '[{{AuditLogTable}}]',
@WhatIf = 0,
@DropUnknownIndexes = 0,
@DropTablesRemovedFromProduct = 0,
@UpdateFillFactor = 1;
The same pattern works for views. Define the token, then pass it to the procedure:
{
"ScriptTokens": {
"OrderSummaryView": "<*SpecificIndexedView*>dbo.vw_OrderSummary"
}
}
Then call IndexedViewQuench:
EXEC SchemaSmith.IndexedViewQuench
@ProductName = '{{ProductName}}',
@IndexedViewSchema = '[{{OrderSummaryView}}]',
@WhatIf = 0,
@UpdateFillFactor = 0;
You can also pass the full schema tokens ({{TableSchema}}, {{IndexedViewSchema}}) to quench all objects of that type, but the specific-object pattern is more common in migration scripts where you need one table or view to exist before proceeding.
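As a sketch of the full-schema form, a migration script could quench every table in one call. This assumes the {{TableSchema}} token expands to the complete table-definition JSON that @TableDefinitions expects; verify the expansion against your package before relying on it.

```sql
-- Hypothetical full-schema call: relies on {{TableSchema}} expanding to
-- the package's complete table-definition JSON.
EXEC SchemaSmith.TableQuench
    @ProductName      = '{{ProductName}}',
    @TableDefinitions = '{{TableSchema}}',
    @WhatIf           = 0;
```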
| Parameter | TableQuench | IndexedViewQuench |
|---|---|---|
| ProductName | Required | Required |
| Definitions (JSON) | Required | Required |
| WhatIf | Default: off | Default: off |
| DropUnknownIndexes | Default: off | — |
| DropTablesRemovedFromProduct | Default: on | — |
| UpdateFillFactor | Default: on | Default: off |
When to use direct calls: When a migration script needs a table or view to exist before it can run — for example, bootstrapping an audit table in a Before Script before inserting migration tracking data, or ensuring an indexed view is deployed before populating dependent tables.
SchemaQuench remembers what it has already run, so you never have to worry about a migration script executing twice. Migration scripts (scripts in the Before, BetweenTablesAndKeys, AfterTablesScripts, and After slots) are tracked in the SchemaSmith.CompletedMigrationScripts table:
| Column | Description |
|---|---|
| ProductName | The product name from Product.json. |
| QuenchSlot | The slot the script belongs to. |
| ScriptPath | The relative path of the script within the template. |
| QuenchDate | Timestamp when the script was executed. |
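To review what a previous deployment recorded, a query along these lines works against the target database ('MyProduct' is a placeholder product name):

```sql
-- Most recently executed tracked scripts first.
SELECT QuenchSlot, ScriptPath, QuenchDate
FROM SchemaSmith.CompletedMigrationScripts
WHERE ProductName = 'MyProduct'
ORDER BY QuenchDate DESC;
```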
Scripts with [ALWAYS] in the filename (before the .sql extension) run on every quench regardless of tracking:
001_SeedReferenceData [ALWAYS].sql
002_RefreshPermissions [ALWAYS].sql
[ALWAYS] scripts are never recorded in the tracking table.
Migration scripts within each slot execute in alphabetical order by filename. Use numeric prefixes to control execution order:
001_CreateStagingTable.sql
002_MigrateData.sql
003_DropStagingTable.sql
When SchemaQuench processes a slot, it compares the tracking table entries against the scripts currently present in the package. Entries for scripts that no longer exist in the package are automatically removed.
To force a tracked script to run again, either delete the corresponding row from SchemaSmith.CompletedMigrationScripts in the target database, or rename the script file (tracking is by path, so a renamed script is treated as new).
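For example, to make a hypothetical 002_MigrateData.sql eligible to run again on the next quench:

```sql
-- Tracking is by relative path, so match on the path recorded for the script.
DELETE FROM SchemaSmith.CompletedMigrationScripts
WHERE ProductName = 'MyProduct'              -- placeholder product name
  AND ScriptPath LIKE '%002_MigrateData.sql';
```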
You shouldn't have to name your files in dependency order just so they deploy correctly. Scripts in the Objects, AfterTablesObjects, and TableData slots execute using a dependency retry loop rather than simple sequential execution: every script in the slot is attempted, failures are collected, and the failed set is retried in additional passes as long as each pass makes progress.

On the final attempt (the last pass, when errors are reported), remaining failures are logged as errors and the quench fails.
This mechanism allows scripts with interdependencies to coexist in the same folder without requiring a specific naming order. For example, if View B references View A and is alphabetically first, it will fail on the first pass but succeed on the retry after View A has been created.
The Objects slot gets four opportunities to resolve: (1) before the table quench, (2) after missing tables are created, (3) after table modifications are complete, and (4) during the AfterTablesObjects pass alongside triggers. This handles cases where a view or function references a table column that doesn't yet exist on the first pass.
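To make the retry behavior concrete, consider two hypothetical Objects-slot files where the alphabetically first script depends on the second (the view and table names here are invented for illustration):

```sql
-- File: Views/vw_OrderTotals.sql
-- Sorts first alphabetically, but references vw_Orders,
-- so it fails on pass 1 and succeeds on the retry.
CREATE OR ALTER VIEW dbo.vw_OrderTotals AS
SELECT CustomerId, SUM(Amount) AS Total
FROM dbo.vw_Orders
GROUP BY CustomerId;
GO

-- File: Views/vw_Orders.sql
-- Succeeds on pass 1; no unresolved dependencies.
CREATE OR ALTER VIEW dbo.vw_Orders AS
SELECT CustomerId, Amount
FROM dbo.Orders
WHERE Cancelled = 0;
```

No renaming is needed: the loop simply retries vw_OrderTotals after vw_Orders exists.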
When DropTablesRemovedFromProduct is true (the default), ModifiedTableQuench drops tables that exist in the database but are no longer defined in the schema package.
This keeps the database clean as tables are removed from the schema package over time.
| Environment | Setting | Rationale |
|---|---|---|
| CI and local dev | true | Catch product areas that reference tables you plan to remove. |
| Test/staging | true | Same rationale, but verify the drop is intentional before promoting to production. |
| Production | Often false | Dropping a table is a hard drop with no built-in recovery. Teams that need rollback-friendly deployments should leave this off in production. |
Set DropTablesRemovedFromProduct: false in the production config.

When RunScriptsTwice is true, the Objects-slot scripts are executed twice in succession during step 3 of the database quench sequence. On the second pass, all scripts are reset to unquenched and processed through the dependency retry loop again. Both runs must succeed — if either fails, the deployment fails.
This is an idempotency testing tool, not a dependency resolution mechanism. Dependency resolution is already handled by the retry loop, which retries failed scripts as long as progress is being made. RunScriptsTwice answers a different question: "Can my [ALWAYS] scripts and object scripts run again safely?"
- Enable it in CI to verify that object scripts and [ALWAYS] scripts are truly idempotent. If a script fails on the second run, you have caught an idempotency bug before it reaches production.
- Leave it off in normal deployments; the second pass exists only to exercise object scripts and [ALWAYS] scripts.

When TrackRunOnceMigrations is false, SchemaQuench treats all migration scripts as if they had the [ALWAYS] suffix: no script is recorded in CompletedMigrationScripts, and no script is skipped based on prior runs. Every migration script in every slot runs on every deployment.
When tracking is off, PruneObsoleteMigrationTracking is forced off regardless of its configured value.
For how this flag fits partial-package deployments, see Data fixes — the datafix profile.
When PruneObsoleteMigrationTracking is true (the default), SchemaQuench removes entries from CompletedMigrationScripts for scripts that no longer exist in the current package. This is correct for full release deployments where the package represents the complete truth.
When false, existing tracking entries are left alone regardless of what scripts are in the current package. This setting is ignored when TrackRunOnceMigrations is false (no tracking means no pruning).
For how this flag fits partial-package deployments, see Data fixes — the datafix profile.
ShouldApplyExpression is a SchemaQuench feature that lives on the schema package side. Whenever SchemaQuench evaluates a table component that has a ShouldApplyExpression set, it resolves any tokens in the expression, runs the expression against the target database, and skips the component if the result is falsy. This means a single table file can declare components that only apply on certain databases, certain environments, or certain server versions — no per-environment file copies, no branching logic in your deployment pipeline. See Conditional application for the JSON shape and worked examples, and Custom Properties as drivers for how to drive ShouldApplyExpression values from team-defined metadata.
| Code | Meaning |
|---|---|
| 0 | Successful quench. All databases quenched, logs backed up. |
| 2 | One or more database quenches failed. |
| 3 | Unhandled exception. An unexpected error occurred outside the normal quench flow. |
| 4 | Unable to back up log files. |
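A CI wrapper can surface the documented meaning of each code in the job log. This is a minimal sketch assuming a POSIX shell:

```shell
# Run the quench, report the documented meaning, and propagate the code.
SchemaQuench
rc=$?
case "$rc" in
  0) echo "quench succeeded" ;;
  2) echo "one or more database quenches failed" ;;
  3) echo "unhandled exception" ;;
  4) echo "unable to back up log files" ;;
esac
exit "$rc"
```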
Each table that participates in data delivery carries a DataDelivery block in its JSON. SchemaQuench walks every table JSON, keeps the ones that declare delivery, orders them by foreign key dependencies, and merges each row set. Tables without a DataDelivery block are left untouched. On SQL Server, delivery is applied with the native MERGE statement.
See Data delivery for the full property reference, MergeType options, FK-aware two-pass merge, and worked examples.
DataDelivery blocks and TableData scripts are not mutually exclusive; you can use both. For each target database, SchemaQuench first delivers every table with a DataDelivery block in FK order, then runs any .sql files you dropped into the template's TableData-slot folders through the dependency retry loop. Use declarative DataDelivery for bulk reference data and keep the script slot for special cases: conditional seeds, one-off rebuilds, procedural loads.
Long deployments fail. Network blips, transient lock timeouts, a migration script that tripped on bad data at step 14 of 20. Without checkpointing, a failure in the final stretch means the next run starts from zero — re-running every step you've already successfully applied.
SchemaQuench writes checkpoints as it goes. Every completed quench step and every completed migration script is recorded to disk. On the next run, already-completed work is skipped and execution resumes at the first incomplete step.
SchemaQuench --ResumeQuench
With --ResumeQuench, SchemaQuench reads the existing checkpoint files (if any) and skips anything already recorded as complete. Without the switch, the resume logic is off — every step executes regardless of prior state.
SchemaQuench --CheckpointDirectory:C:\schemasmith\checkpoints
By default, checkpoints live in %TEMP%/schemaquench-checkpoints. Override with --CheckpointDirectory:<path> when you need them on a specific volume — for a CI runner with ephemeral temp storage, a shared build server, or a mounted volume that outlives the container. The directory is created if it doesn't exist.
The same value can be set in SchemaQuench.settings.json via the CheckpointDirectory key. The CLI switch wins if both are present.
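For example, the settings-file form is a single key (the path shown here is a placeholder):

```json
{
  "CheckpointDirectory": "D:/deploy/checkpoints"
}
```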
SchemaQuench tracks two kinds of progress:
Product-scoped — Cross-database work shared by all templates:
- Before and After product-level scripts (per server, for Availability Group deployments).

Database-scoped — One checkpoint file per {product, template, server, database} combination:
| Step name | What it covers |
|---|---|
| KindleForge | Helper procedure deployment for this database. |
| ValidateBaseline | Baseline validation script. |
| MissingTablesAndColumns | Adding missing tables and missing columns. |
| ModifiedTables | Altering existing columns, computed columns, dropping tables. |
| IndexesAndConstraints | Creating missing indexes, check constraints, defaults, statistics. |
| TableDataDelivery | Both passes of FK-aware data delivery for tables with DataDelivery blocks. |
| ForeignKeys | Creating, modifying, and dropping foreign keys. |
| IndexedViewQuench | Indexed view deployment. |
| VersionStamp | Version stamp script. |
In addition, each template slot (Before, Objects, AfterTablesObjects, BetweenTablesAndKeys, AfterTablesScripts, TableData, After) records the exact scripts that ran, so resumed runs skip each individual script that already succeeded.
Checkpoints exist to protect against failures. When the quench completes without error, SchemaQuench deletes every checkpoint file associated with the product. A clean run leaves no residue to mislead the next deployment. A failed run leaves the checkpoint files in place, ready for the next --ResumeQuench invocation.
A 90-minute deployment to a large production database fails at minute 75 because a migration script hit a transient deadlock. You fix the data, re-run the deployment:
SchemaQuench --ResumeQuench
SchemaQuench reads the checkpoints, sees that KindleTheForge, ValidateBaseline, missing tables, modifications, indexes, constraints, and every Objects-slot script already succeeded, logs what it's skipping, and picks up at the first incomplete step. Minutes of work instead of starting from the top.
Use --ResumeQuench when you specifically expect that a prior run may have left partial state — typically when re-running after a real failure in a non-trivial deployment.
Last reviewed May 2026 by the SchemaSmith Team.