SQL Server Reference

SchemaQuench for SQL Server

The full SchemaQuench reference for SQL Server deployments — configuration, execution flow, WhatIf previews, migration tracking, checkpoint/resume, and FK-aware data delivery.

The SchemaSmith development cycle: declare schema, quench, repeat

Take your declared schema and harden it onto a live database — that's what SchemaQuench does.

Overview

SchemaQuench reads a schema package, connects to the target server, and transforms each database to match the desired state. No hand-written ALTER scripts, no guessing what changed. Run it against dev, staging, and production with the same package, the same confidence, and the same boring, predictable result every time. SchemaQuench compares current state against desired state, makes only the changes necessary, and tracks migration scripts so they execute only once.

One executable, three platforms. The product's Platform value (SqlServer, PostgreSQL, or MySQL) tells SchemaQuench which adapter, which DDL flavor, and which set of helper procedures to use. Everything else looks the same.

Installation and invocation

SchemaQuench is included in the SchemaSmith distribution. Run it from the directory containing SchemaQuench.settings.json:

SchemaQuench

Common switches

SchemaQuench --ConfigFile:path/to/alternate.settings.json
SchemaQuench --LogPath:path/to/logs

Point SchemaQuench at a SQL Server target with --ConnectionString:

SchemaQuench --ConnectionString:"Data Source=myserver;Initial Catalog=master;User ID=sa;Password=secret;TrustServerCertificate=True;"

The --ConnectionString switch bypasses all Target settings and passes the value directly to the SQL Server driver.

Configuration reference

SchemaQuench reads configuration from SchemaQuench.settings.json (or the file specified by --ConfigFile), environment variables with the SmithySettings_ prefix, and command-line switches. Later sources override earlier ones. For the full loading-order and precedence rules, see Configuration.

Target connection settings

Key | Type | Default | Description
Target:Server | string | (required) | Database server hostname or IP.
Target:Port | string | 1433 | TCP port for SQL Server.
Target:User | string | (empty) | Login username. SQL Server allows blank for Windows auth.
Target:Password | string | (empty) | Login password.
Target:SecondaryServers | string | (empty) | Comma-separated list of Availability Group secondary servers to quench in parallel with the primary. See Secondary servers.
Target:ConnectionProperties | object | {} | Arbitrary key-value pairs appended to the connection string — e.g., TrustServerCertificate, Encrypt, ApplicationIntent.
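As an illustration of how these settings compose, here is a minimal sketch of connection-string assembly in the SqlClient keyword style used in the invocation example above. The function name and the Integrated Security fallback for a blank user are assumptions for illustration; the actual assembly logic is internal to SchemaQuench.

```python
def build_connection_string(target: dict) -> str:
    """Illustrative sketch only: compose a SQL Server connection string
    from Target settings. The real assembly logic may differ."""
    if target.get("Port"):
        parts = [f"Data Source={target['Server']},{target['Port']}"]
    else:
        parts = [f"Data Source={target['Server']}"]
    if target.get("User"):
        parts.append(f"User ID={target['User']}")
        parts.append(f"Password={target['Password']}")
    else:
        # Assumption: a blank user falls back to Windows auth.
        parts.append("Integrated Security=True")
    # Arbitrary ConnectionProperties pairs are appended as-is.
    for key, value in target.get("ConnectionProperties", {}).items():
        parts.append(f"{key}={value}")
    return ";".join(parts) + ";"
```

With the full settings file example below, this would yield "Data Source=localhost;Integrated Security=True;TrustServerCertificate=True;".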

Behavior settings

Key | Type | Default | Description
SchemaPackagePath | string | (required) | Path to the schema package directory or ZIP file.
WhatIfONLY | bool | false | Dry-run mode. Generates SQL without executing.
KindleTheForge | bool | true | Deploy SchemaSmith helper procedures and the migration tracking table to each target database before quenching.
UpdateTables | bool | true | Apply table structure changes (columns, indexes, constraints, foreign keys) from the schema package.
DropTablesRemovedFromProduct | bool | true | Drop tables that exist in the database but aren't defined in the schema package.
RunScriptsTwice | bool | false | Run object scripts twice to verify idempotency. A CI/testing tool.
TrackRunOnceMigrations | bool | true | Track run-once migration scripts. When false, all scripts run on every deployment.
PruneObsoleteMigrationTracking | bool | true | Remove tracking entries for scripts no longer in the package.
MaxThreads | int | 10 | Number of parallel database operations. Range 1–20.
VerboseLogging | bool | false | Include PRINT informational output from user scripts in logs.
ScriptTokens | object | {} | Config-level overrides for product script tokens.

Full settings file example

{
  "Target": {
    "Server": "localhost",
    "Port": "",
    "User": "",
    "Password": "",
    "SecondaryServers": "",
    "ConnectionProperties": {
      "TrustServerCertificate": "True"
    }
  },
  "WhatIfONLY": false,
  "SchemaPackagePath": "./MyProduct",
  "KindleTheForge": true,
  "UpdateTables": true,
  "DropTablesRemovedFromProduct": true,
  "RunScriptsTwice": false,
  "TrackRunOnceMigrations": true,
  "PruneObsoleteMigrationTracking": true,
  "MaxThreads": 10,
  "VerboseLogging": false,
  "ScriptTokens": {}
}

For environment variable mapping, see Environment variables.

Secondary servers

SQL Server deployments targeting Availability Groups can quench to a primary plus one or more secondary servers in parallel. Configure secondaries on Target:

{
  "Target": {
    "Server": "primary-replica",
    "SecondaryServers": "secondary-1,secondary-2"
  }
}

When a secondary list is configured, SchemaQuench routes each product-level folder to the right server based on its ServerToQuench setting (Primary, Secondary, or Both). Templates target the primary; product-level scripts can target either side. See Products & Templates — Platform Differences for the package side of the configuration.
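The routing rule can be sketched as follows. Only the Primary, Secondary, and Both values come from the package format; the function and parameter names here are illustrative.

```python
def resolve_targets(server_to_quench: str, primary: str, secondaries: list[str]) -> list[str]:
    """Illustrative sketch of product-folder routing for AG deployments."""
    if server_to_quench == "Primary":
        return [primary]
    if server_to_quench == "Secondary":
        return list(secondaries)
    if server_to_quench == "Both":
        return [primary, *secondaries]
    raise ValueError(f"Unknown ServerToQuench value: {server_to_quench}")
```

For the configuration above, a folder marked Both would route to primary-replica, secondary-1, and secondary-2.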

Deployment execution flow

When SchemaQuench runs, the product quench executes these steps in order:

  1. Log product info. Logs the product name, platform, template order, validation script, and any configured script tokens.
  2. Test server connection. Opens a connection to the target server and runs a SQL Server liveness check. Aborts if the connection fails.
  3. Validate server. If Product.ValidationScript is configured, executes it against master. Aborts if the result is falsy.
  4. Validate baseline. If Product.BaselineValidationScript is configured, executes it. Aborts if the result is falsy.
  5. Product Before scripts. Executes scripts from Before Product folder(s). With secondary servers, scripts run in parallel to all eligible servers.
  6. Quench each template. For each template name in Product.TemplateOrder:
    • Loads Template.json and merges template-level ScriptTokens over the product token set.
    • Executes DatabaseIdentificationScript against master to discover target databases.
    • Creates a database quench task for each discovered database and runs them in parallel (up to MaxThreads concurrent operations).
    • If any database quench fails, logs the failure and exits with code 2.
  7. Product After scripts. Executes scripts from After Product folder(s).
  8. Stamp product version. If Product.VersionStampScript is configured, executes it.
  9. Log completion. Logs "Completed quench of {ProductName}".

After the quench returns, the calling program backs up log files to a numbered directory and exits with code 0 (see Exit codes).

Database quench sequence

For each database identified by a template's DatabaseIdentificationScript, the database quench runs the following sequence. All steps execute on the identified database.

  1. Kindle the Forge. Deploys SchemaSmith helper procedures, functions, and the migration tracking table for SQL Server. Skipped if KindleTheForge is false.
  2. Validate baseline. Executes Template.BaselineValidationScript if configured. Aborts if falsy.
  3. Object scripts (first pass). Executes scripts from all Objects-slot folders using the dependency retry loop. If RunScriptsTwice is enabled, resets all scripts and runs a complete second pass to verify idempotency.
  4. Parse table JSON. Parses all Tables/*.json definitions into temp/staging tables for the modular procedures to consume.
  5. MissingTableAndColumnQuench. Creates missing tables and adds missing columns.
  6. Object scripts (second opportunity). Re-attempts any Objects-slot scripts that failed in step 3, now that missing tables exist.
  7. Before scripts. Executes migration scripts from any folder in the Before slot. Sequential and tracked.
  8. ModifiedTableQuench. Alters existing columns (type changes, nullability, defaults, computed columns) and manages indexes and check constraints.
  9. Object scripts (third opportunity). Re-attempts any remaining failed Objects-slot scripts now that table modifications are complete.
  10. BetweenTablesAndKeys scripts. Executes migration scripts from any folder in the BetweenTablesAndKeys slot. Sequential and tracked.
  11. MissingIndexesAndConstraintsQuench. Creates missing indexes, check constraints, default constraints, and statistics.
  12. AfterTablesScripts. Executes migration scripts from any folder in the AfterTablesScripts slot. Sequential and tracked.
  13. AfterTablesObjects scripts. Executes scripts from AfterTablesObjects-slot folders (triggers, DDL triggers, rules, post-table views) using the dependency retry loop. Also retries any still-unresolved Objects-slot scripts.
  14. Table data delivery. Merges table data described by per-table DataDelivery blocks, ordered by foreign key dependencies. See Table data delivery. Then executes any hand-written scripts in the TableData slot using the dependency retry loop.
  15. ForeignKeyQuench. Creates, drops, and modifies foreign keys to match the schema package.
  16. Indexed view quench. If the template defines indexed views, deploys them via the IndexedViewQuench procedure.
  17. After scripts. Executes migration scripts from any folder in the After slot. Sequential and tracked.
  18. Stamp version. Executes Template.VersionStampScript if configured.

When UpdateTables is false, steps 4 through 16 are skipped entirely. When IndexOnlyTableQuenches is enabled on a template, steps 4–8 (parse JSON, missing tables, second Objects pass, Before scripts, modified tables) are replaced by a single call to the IndexOnlyQuench procedure. Steps 9–16 still execute, with MissingIndexesAndConstraintsQuench (step 11) and ForeignKeyQuench (step 15) skipped.
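A sketch of how the two flags prune the step list (step names here are shorthand for the numbered steps above, not official identifiers):

```python
# Steps 4-16 of the database quench sequence, keyed by step number.
TABLE_STEPS = {
    4: "ParseTableJson", 5: "MissingTableAndColumnQuench",
    6: "ObjectsSecondPass", 7: "BeforeScripts", 8: "ModifiedTableQuench",
    9: "ObjectsThirdPass", 10: "BetweenTablesAndKeys",
    11: "MissingIndexesAndConstraintsQuench", 12: "AfterTablesScripts",
    13: "AfterTablesObjects", 14: "TableDataDelivery",
    15: "ForeignKeyQuench", 16: "IndexedViewQuench",
}

def plan_table_steps(update_tables: bool, index_only: bool) -> list[str]:
    """Illustrative model of the skip rules described above."""
    if not update_tables:
        return []  # steps 4-16 skipped entirely
    if index_only:
        # Steps 4-8 collapse into one IndexOnlyQuench call; steps 11 and
        # 15 are skipped because index-only mode doesn't manage
        # constraints or foreign keys.
        return ["IndexOnlyQuench"] + [
            name for n, name in TABLE_STEPS.items()
            if n >= 9 and n not in (11, 15)
        ]
    return list(TABLE_STEPS.values())
```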

Quench slots

SchemaQuench assigns every script folder to a quench slot that determines when the folder's scripts execute and how they are handled. The slot list is the same on every platform; the default folders vary by platform.

Template quench slots

Slot | Execution style
Before | Sequential, tracked
Objects | Dependency retry loop
BetweenTablesAndKeys | Sequential, tracked
AfterTablesScripts | Sequential, tracked
AfterTablesObjects | Dependency retry loop
TableData | Dependency retry loop
After | Sequential, tracked

Product quench slots

Slot | Execution style
Before | Sequential
After | Sequential

Product scripts run against the administrative connection, outside the per-database template loop.

  • Sequential, tracked — scripts run in alphabetical order and are recorded in CompletedMigrationScripts so they only run once (unless marked [ALWAYS]).
  • Dependency retry loop — scripts are retried in rounds until all succeed or no progress is made. See Dependency retry loop.

WhatIf mode

See exactly what SchemaQuench would do before it touches a single table. Set WhatIfONLY to true to perform a dry run. In WhatIf mode:

  • Validation scripts execute normally (server validation, baseline validation).
  • Table quench procedures run with @WhatIf = 1, generating the SQL that would be executed and logging it without applying changes.
  • Migration scripts show detailed status for each script:
    • Would APPLY: {script} for scripts that haven't yet been tracked.
    • Would SKIP (previously quenched): {script} for scripts already recorded in CompletedMigrationScripts.
  • Object scripts (Objects, AfterTablesObjects, Table Data) are logged but not executed.
  • Product Before/After scripts are logged but not executed.
  • Version stamp scripts aren't executed; a log message indicates the stamp would occur.

Important limitation

WhatIf shows the top level of changes, not the full cascade. Because nothing actually executes, WhatIf can't show ripple effects that depend on earlier changes having been applied. For example, if an object script drops an index, that script doesn't run in WhatIf mode, so the index still exists when WhatIf analyzes table changes — meaning the table diff won't show the index as needing to be recreated. WhatIf is a confidence check, not a guarantee. It catches the majority of issues, but the full deployment may produce additional changes that WhatIf couldn't predict.

Debug SQL output

During both normal and WhatIf runs, SchemaQuench writes the SQL generated by the table quench process to files in the working directory:

  • SchemaQuench - ParseJson {DatabaseName}.sql
  • SchemaQuench - MissingTableAndColumnQuench {DatabaseName}.sql
  • SchemaQuench - ModifiedTableQuench {DatabaseName}.sql
  • SchemaQuench - MissingIndexesAndConstraintsQuench {DatabaseName}.sql
  • SchemaQuench - ForeignKeyQuench {DatabaseName}.sql
  • SchemaQuench - IndexedViewQuench {DatabaseName}.sql
  • SchemaQuench - IndexOnlyQuench {DatabaseName}.sql (when IndexOnlyTableQuenches is enabled)

These files can be reviewed to understand exactly what structural changes were (or would be) made.

When to use WhatIf

Reach for WhatIf while you're debugging a tricky deployment or while you're still building confidence with the tooling. Inspect the generated SQL, confirm the changes match intent, then run for real. Once you trust the package and the pipeline, direct quenches are the normal mode — WhatIf isn't a required gate on every deployment.

KindleTheForge

Before SchemaQuench can shape your database, it needs its tools in place. KindleTheForge deploys the SchemaSmith infrastructure to each target database. The infrastructure includes the SQL Server helper functions, the modular table-quench procedures, the IndexedViewQuench procedure, the reverse-engineering procedures used by SchemaTongs, and the CompletedMigrationScripts tracking table.

KindleTheForge runs on every quench to ensure the helper procedures match the version of SchemaQuench being used. In a normal release pipeline, always leave this true.

When to set it false

Data-fix and patch deployments turn this off so the run can't alter structure. See Data fixes — the datafix profile for the full flag combination and rationale.

Modular quench procedures

The table quench is broken into modular stored procedures, each handling a specific aspect of the table schema. The procedures are deployed during the KindleTheForge step and called in sequence during the database quench.

Procedure | Responsibility
MissingTableAndColumnQuench | Creates tables that exist in the schema package but not in the database. Adds columns that exist in the table definition but are missing from the existing table.
ModifiedTableQuench | Alters existing columns to match the schema package definitions. Handles data type, nullability, default constraint, and computed column changes. Drops removed tables when DropTablesRemovedFromProduct is enabled.
MissingIndexesAndConstraintsQuench | Creates indexes, check constraints, default constraints, and statistics that exist in the schema package but are missing from the database.
ForeignKeyQuench | Creates, modifies, and drops foreign keys to match the schema package. Runs late in the sequence so all referenced tables and columns exist.
IndexOnlyQuench | Alternative to the full sequence. Manages indexes only — doesn't create tables, add columns, or manage foreign keys. Used when IndexOnlyTableQuenches is enabled on a template.
IndexedViewQuench | Deploys indexed views with diff-based change detection.

The implementation lives in the deployed SQL on the target database — which means a DBA can read it on the server with sp_helptext. No black boxes.

Calling procedures directly from migration scripts

The quench procedures are deployed to the target database during the KindleTheForge step and remain there afterward. You can call them directly from Before Scripts, After Scripts, or any migration script to bootstrap specific tables or views as part of a data migration.

The typical pattern uses specific-object tokens to quench individual objects that your migration script depends on, rather than passing the entire schema. First, define a token in your Product.json or Template.json:

{
  "ScriptTokens": {
    "AuditLogTable": "<*SpecificTable*>dbo.AuditLog"
  }
}

TableQuench — ensures a specific table exists with the right structure before your migration script runs:

-- Bootstrap the AuditLog table so we can insert into it during this migration
EXEC SchemaSmith.TableQuench
    @ProductName = '{{ProductName}}',
    @TableDefinitions = '[{{AuditLogTable}}]',
    @WhatIf = 0,
    @DropUnknownIndexes = 0,
    @DropTablesRemovedFromProduct = 0,
    @UpdateFillFactor = 1;

The same pattern works for views. Define the token, then pass it to the procedure:

{
  "ScriptTokens": {
    "OrderSummaryView": "<*SpecificIndexedView*>dbo.vw_OrderSummary"
  }
}

IndexedViewQuench:

EXEC SchemaSmith.IndexedViewQuench
    @ProductName = '{{ProductName}}',
    @IndexedViewSchema = '[{{OrderSummaryView}}]',
    @WhatIf = 0,
    @UpdateFillFactor = 0;

You can also pass the full schema tokens ({{TableSchema}}, {{IndexedViewSchema}}) to quench all objects of that type, but the specific-object pattern is more common in migration scripts where you need one table or view to exist before proceeding.

Parameter reference

Parameter | TableQuench | IndexedViewQuench
ProductName | Required | Required
Definitions (JSON) | Required | Required
WhatIf | Default: off | Default: off
DropUnknownIndexes | Default: off | n/a
DropTablesRemovedFromProduct | Default: on | n/a
UpdateFillFactor | Default: on | Default: off

When to use direct calls: When a migration script needs a table or view to exist before it can run — for example, bootstrapping an audit table in a Before Script before inserting migration tracking data, or ensuring an indexed view is deployed before populating dependent tables.

Migration script tracking

SchemaQuench remembers what it has already run, so you never have to worry about a migration script executing twice. Migration scripts (scripts in the Before, BetweenTablesAndKeys, AfterTablesScripts, and After slots) are tracked in the SchemaSmith.CompletedMigrationScripts table:

Column | Description
ProductName | The product name from Product.json.
QuenchSlot | The slot the script belongs to.
ScriptPath | The relative path of the script within the template.
QuenchDate | Timestamp when the script was executed.

Execution rules

  • On each quench run, SchemaQuench checks which scripts in each slot have already been recorded.
  • Scripts that appear in the tracking table are skipped.
  • Scripts that don't appear are executed, and on success a tracking entry is inserted.

The [ALWAYS] suffix

Scripts with [ALWAYS] in the filename (before the .sql extension) run on every quench regardless of tracking:

001_SeedReferenceData [ALWAYS].sql
002_RefreshPermissions [ALWAYS].sql

[ALWAYS] scripts are never recorded in the tracking table.

Ordering

Migration scripts within each slot execute in alphabetical order by filename. Use numeric prefixes to control execution order:

001_CreateStagingTable.sql
002_MigrateData.sql
003_DropStagingTable.sql
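Taken together, the tracking, [ALWAYS], and ordering rules can be sketched as follows (an illustrative model, not the actual implementation):

```python
def plan_migration_scripts(scripts: list[str], tracked: set[str]) -> list[tuple[str, str]]:
    """Sketch of the execution rules above: alphabetical order, skip
    tracked scripts, and always run (but never record) scripts with
    [ALWAYS] in the filename."""
    plan = []
    for script in sorted(scripts):
        if "[ALWAYS]" in script:
            plan.append((script, "run, not recorded"))
        elif script in tracked:
            plan.append((script, "skip"))
        else:
            plan.append((script, "run, then record"))
    return plan
```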

Obsolete entry cleanup

When SchemaQuench processes a slot, it compares the tracking table entries against the scripts currently present in the package. Entries for scripts that no longer exist in the package are automatically removed.

Forcing re-execution

To force a tracked script to run again, either delete the corresponding row from SchemaSmith.CompletedMigrationScripts in the target database, or rename the script file (tracking is by path, so a renamed script is treated as new).

Dependency retry loop

You shouldn't have to name your files in dependency order just so they deploy correctly. Scripts in the Objects, AfterTablesObjects, and TableData slots execute using a dependency retry loop rather than simple sequential execution:

  1. Execute all pending (not yet quenched) scripts in the slot.
  2. For each script, attempt to execute all its batches. If any batch fails, record the error and move on.
  3. If at least one script succeeded in this iteration, loop back to step 1 with only the remaining failed scripts.
  4. If zero scripts succeeded in an iteration, the loop terminates.

On the final attempt (the last pass when errors are reported), failures are logged as errors and the quench fails.

This mechanism allows scripts with interdependencies to coexist in the same folder without requiring a specific naming order. For example, if View B references View A and is alphabetically first, it will fail on the first pass but succeed on the retry after View A has been created.

The Objects slot gets four opportunities to resolve: (1) before the table quench, (2) after missing tables are created, (3) after table modifications are complete, and (4) during the AfterTablesObjects pass alongside triggers. This handles cases where a view or function references a table column that doesn't yet exist on the first pass.

DropTablesRemovedFromProduct

When DropTablesRemovedFromProduct is true (the default), ModifiedTableQuench drops tables that:

  • Exist in the target database.
  • Aren't defined in any table JSON file in the schema package.
  • Were previously managed by this product.

This keeps the database clean as tables are removed from the schema package over time.

Environment guidance

Environment | Setting | Rationale
CI and local dev | true | Catch product areas that reference tables you plan to remove.
Test/staging | true | Same rationale, but verify the drop is intentional before promoting to production.
Production | Often false | Dropping a table is a hard drop with no built-in recovery. Teams that need rollback-friendly deployments should leave this off in production.

The rollback-friendly removal pattern

  1. Remove the table from your product definition.
  2. Keep DropTablesRemovedFromProduct: false in the production config.
  3. Write a migration script that renames or archives the table (or verifies no dependencies remain).
  4. After the retention period, either enable the setting for one deployment or add an explicit DROP in a migration script.

RunScriptsTwice

When RunScriptsTwice is true, the Objects-slot scripts are executed twice in succession during step 3 of the database quench sequence. On the second pass, all scripts are reset to unquenched and processed through the dependency retry loop again. Both runs must succeed — if either fails, the deployment fails.

This is an idempotency testing tool, not a dependency resolution mechanism. Dependency resolution is already handled by the retry loop, which retries failed scripts as long as progress is being made. RunScriptsTwice answers a different question: "Can my [ALWAYS] scripts and object scripts run again safely?"

When to use

  • CI pipelines — Verify that [ALWAYS] scripts are truly idempotent. If a script fails on the second run, you have caught an idempotency bug before it reaches production.
  • Local development — Verify idempotency as you author [ALWAYS] scripts.

When not to use

  • Production deployments — It doubles the execution time for the object script phase with no production benefit. This is a testing tool.

TrackRunOnceMigrations

When TrackRunOnceMigrations is false, SchemaQuench treats all migration scripts as if they had the [ALWAYS] suffix — no script is recorded in CompletedMigrationScripts, no script is skipped based on prior runs. Every migration script in every slot runs on every deployment.

When tracking is off, PruneObsoleteMigrationTracking is forced off regardless of its configured value.
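A small sketch of the flag interplay (the key names in the returned dict are illustrative):

```python
def effective_flags(track_run_once: bool, prune_obsolete: bool) -> dict:
    """Sketch of how TrackRunOnceMigrations gates the other behaviors."""
    return {
        # Tracking off means every script behaves like [ALWAYS]:
        # nothing is recorded and nothing is skipped.
        "record_completions": track_run_once,
        "skip_tracked": track_run_once,
        # Pruning is forced off whenever tracking is off.
        "prune_obsolete": prune_obsolete and track_run_once,
    }
```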

For how this flag fits partial-package deployments, see Data fixes — the datafix profile.

PruneObsoleteMigrationTracking

When PruneObsoleteMigrationTracking is true (the default), SchemaQuench removes entries from CompletedMigrationScripts for scripts that no longer exist in the current package. This is correct for full release deployments where the package represents the complete truth.

When false, existing tracking entries are left alone regardless of what scripts are in the current package. This setting is ignored when TrackRunOnceMigrations is false (no tracking means no pruning).

For how this flag fits partial-package deployments, see Data fixes — the datafix profile.

ShouldApplyExpression and conditional deployment

ShouldApplyExpression is a SchemaQuench feature that lives on the schema package side. Whenever SchemaQuench evaluates a table component that has a ShouldApplyExpression set, it resolves any tokens in the expression, runs the expression against the target database, and skips the component if the result is falsy. This means a single table file can declare components that only apply on certain databases, certain environments, or certain server versions — no per-environment file copies, no branching logic in your deployment pipeline. See Conditional application for the JSON shape and worked examples, and Custom Properties as drivers for how to drive ShouldApplyExpression values from team-defined metadata.

Exit codes

Code | Meaning
0 | Successful quench. All databases quenched, logs backed up.
2 | One or more database quenches failed.
3 | Unhandled exception. An unexpected error occurred outside the normal quench flow.
4 | Unable to back up log files.

Table data delivery

Each table that participates in data delivery carries a DataDelivery block in its JSON. SchemaQuench walks every table JSON, keeps the ones that declare delivery, orders them by foreign key dependencies, and merges each row set. Tables without a DataDelivery block are left untouched. On SQL Server, delivery is applied with the native MERGE statement.
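The FK-dependency ordering amounts to a topological sort, parents before children. A minimal sketch, assuming acyclic references (cyclic FK chains are the case the two-pass merge addresses):

```python
from collections import deque

def fk_delivery_order(tables: list, fk_refs: dict) -> list:
    """Sketch of FK-aware ordering: parents merge before children.
    `fk_refs` maps a child table to the set of parent tables its
    foreign keys reference."""
    refs = {t: set(fk_refs.get(t, ())) & set(tables) for t in tables}
    indegree = {t: len(refs[t]) for t in tables}
    children = {t: [c for c in tables if t in refs[c]] for t in tables}
    ready = deque(sorted(t for t in tables if indegree[t] == 0))
    order = []
    while ready:
        parent = ready.popleft()
        order.append(parent)
        for child in children[parent]:
            indegree[child] -= 1
            if indegree[child] == 0:
                ready.append(child)
    return order
```

For example, with OrderLine referencing Order and Order referencing Customer, delivery proceeds Customer, then Order, then OrderLine.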

See Data delivery for the full property reference, MergeType options, FK-aware two-pass merge, and worked examples.

DataDelivery vs hand-written Table Data scripts

You can use both. For each target database, SchemaQuench first delivers every table with a DataDelivery block in FK order, then runs any .sql files you dropped into the template's TableData-slot folders through the dependency retry loop. Use declarative DataDelivery for bulk reference data and keep the script slot for special cases — conditional seeds, one-off rebuilds, procedural loads.

Checkpoint and resume

Long deployments fail. Network blips, transient lock timeouts, a migration script that tripped on bad data at step 14 of 20. Without checkpointing, a failure in the final stretch means the next run starts from zero — re-running every step you've already successfully applied.

SchemaQuench writes checkpoints as it goes. Every completed quench step and every completed migration script is recorded to disk. On the next run, already-completed work is skipped and execution resumes at the first incomplete step.
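The record/skip/cleanup behavior can be modeled as follows (the one-JSON-file-per-scope format here is an assumption; only the behavior comes from this documentation):

```python
import json, os

class Checkpoint:
    """Illustrative sketch of checkpoint/resume for one scope."""

    def __init__(self, path: str, resume: bool):
        self.path = path
        self.done = set()
        if resume and os.path.exists(path):
            with open(path) as f:
                self.done = set(json.load(f))

    def run(self, step: str, action) -> None:
        if step in self.done:
            return  # already recorded as complete: skip on resume
        action()    # raises on failure, leaving prior steps recorded
        self.done.add(step)
        with open(self.path, "w") as f:
            json.dump(sorted(self.done), f)

    def clear(self) -> None:
        # After a fully successful quench, checkpoint files are deleted.
        if os.path.exists(self.path):
            os.remove(self.path)
```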

Enabling resume

SchemaQuench --ResumeQuench

With --ResumeQuench, SchemaQuench reads the existing checkpoint files (if any) and skips anything already recorded as complete. Without the switch, the resume logic is off — every step executes regardless of prior state.

Where checkpoints live

SchemaQuench --CheckpointDirectory:C:\schemasmith\checkpoints

By default, checkpoints live in %TEMP%/schemaquench-checkpoints. Override with --CheckpointDirectory:<path> when you need them on a specific volume — for a CI runner with ephemeral temp storage, a shared build server, or a mounted volume that outlives the container. The directory is created if it doesn't exist.

The same value can be set in SchemaQuench.settings.json via the CheckpointDirectory key. The CLI switch wins if both are present.

Checkpoint scopes

SchemaQuench tracks two kinds of progress:

Product-scoped — Cross-database work shared by all templates:

  • Before and After product-level scripts (per server, for Availability Group deployments).
  • Completed templates (a template with every database finished is itself recorded as complete).

Database-scoped — One checkpoint file per {product, template, server, database} combination:

Step name | What it covers
KindleForge | Helper procedure deployment for this database.
ValidateBaseline | Baseline validation script.
MissingTablesAndColumns | Adding missing tables and missing columns.
ModifiedTables | Altering existing columns, computed columns, dropping tables.
IndexesAndConstraints | Creating missing indexes, check constraints, defaults, statistics.
TableDataDelivery | Both passes of FK-aware data delivery for tables with DataDelivery blocks.
ForeignKeys | Creating, modifying, and dropping foreign keys.
IndexedViewQuench | Indexed view deployment.
VersionStamp | Version stamp script.

In addition, each template slot (Before, Objects, AfterTablesObjects, BetweenTablesAndKeys, AfterTablesScripts, TableData, After) records the exact scripts that ran, so resumed runs skip each individual script that already succeeded.

Automatic cleanup after success

Checkpoints exist to protect against failures. When the quench completes without error, SchemaQuench deletes every checkpoint file associated with the product. A clean run leaves no residue to mislead the next deployment. A failed run leaves the checkpoint files in place, ready for the next --ResumeQuench invocation.

Practical resume workflow

A 90-minute deployment to a large production database fails at minute 75 because a migration script hit a transient deadlock. You fix the data, re-run the deployment:

SchemaQuench --ResumeQuench

SchemaQuench reads the checkpoints, sees that KindleForge, ValidateBaseline, missing tables, modifications, indexes, constraints, and every Objects-slot script already succeeded, logs what it's skipping, and picks up at the first incomplete step. Minutes of work instead of starting from the top.

When to leave resume off

  • Normal, clean deployments — The resume flag is opt-in. Without it, every step executes, and at the end the checkpoint files get cleaned up regardless. There's no cost to leaving it off for fast, green runs.
  • CI pipelines that rebuild databases from scratch — Each run is a fresh slate, so resume has nothing to do.

Use --ResumeQuench when you specifically expect that a prior run may have left partial state — typically when re-running after a real failure in a non-trivial deployment.

Last reviewed May 2026 by the SchemaSmith Team.