SchemaSmith Documentation

SchemaQuench - PostgreSQL (Enterprise)

SchemaQuench turns fragile, step-by-step scripts into safe, declarative releases, so every deployment is boring, predictable, and fast.


Overview

SchemaQuench is a state-based, opinionated database migration tool inspired by the principles of infrastructure-as-code tools like HashiCorp's Terraform. It enables you to define the desired end state of your Postgres databases using metadata and applies these definitions to target servers, ensuring consistency and repeatability across environments. We refer to this process as quenching.

Why Choose SchemaQuench?

Traditional migration scripts track changes over time but can become cumbersome and error-prone, especially in complex environments. SchemaQuench offers a declarative approach, focusing on the desired final state rather than the sequence of changes. This methodology:

  • Ensures Consistency: Aligns your database schema with the defined metadata, reducing discrepancies between environments.
  • Simplifies Version Control: Treats database schema as code, facilitating easier tracking, reviewing, and auditing of changes.
  • Enhances Automation: Integrates seamlessly into CI/CD pipelines, promoting automated deployments.

Where SchemaQuench Fits

  • SchemaTongs extracts your schema into metadata.
  • SchemaHammer provides powerful search and edit capabilities for your metadata.
  • SchemaQuench applies your schema and data metadata to your Postgres server.
  • DataTongs extracts your seed data into metadata.

Quick Start Guide

Follow these steps to get started with SchemaQuench:
  1. Extract Metadata with SchemaTongs: Use SchemaTongs to cast your existing database schema into metadata files.
  2. Define Your Product: Edit the product.json to include the correct server identification, version validation, and version stamping for your domain. Add any necessary Script Tokens. Verify the TemplateOrder contains all your templates in the order they need to be quenched.
  3. Define Your Templates: Edit each template.json to find the correct database(s) to apply against, along with version validation and stamping scripts.
  4. Configure Settings: Set up your appSettings.json file to specify connection details, target environment, and other configuration settings.
  5. Run SchemaQuench: Execute SchemaQuench from the command line to apply your metadata to the target server.
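As a sketch of step 4, a minimal appSettings.json might look like the following. Every key name here is a hypothetical placeholder, not the tool's actual configuration schema; consult the SchemaQuench configuration reference for the real key names.

```json
{
  "TargetServer": "db.dev.example.com:5432",
  "User": "quench_deploy",
  "ProductFile": "./product.json",
  "Environment": "Development",
  "LogFolder": "./logs"
}
```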

You can use Command Line Options to specify the log file location or an alternate config file.

See the SchemaQuench Walkthrough to help you get started with the tool.

Tip

Start by defining a small subset of your database objects. Gradually expand as you gain confidence, ensuring that each step is manageable and verifiable.

Execution Flow

SchemaQuench processes a product in two layers: product-level orchestration and per-database quenching. Understanding this sequence helps you place scripts in the correct slots and troubleshoot failures.

Product-Level Sequence

  1. Load product
  2. Test server connection
  3. Validate server
  4. Validate baseline
  5. Resolve product-level query tokens
  6. Execute product-level Before scripts
  7. Load templates
  8. Quench each template (runs the database sequence below for each database)
  9. Execute product-level After scripts
  10. Stamp product version
  11. Backup logs and exit

Database Quench Sequence (17 Steps)

For each database matched by a template, SchemaQuench runs these steps in order:

  1. Kindle the Forge — Deploy infrastructure objects (see below)
  2. Validate Baseline — Run the template's baseline validation script
  3. Object Scripts (first pass) — Execute scripts from object folders (Views, Functions, Procedures, etc.)
  4. Missing Tables and Columns — Create new tables and add missing columns
  5. Resolve Query Tokens — Evaluate any query-based script tokens
  6. Object Scripts (second pass) — Re-run object scripts to resolve cross-dependencies
  7. Before Migration Scripts — Run the Before migration script slot
  8. Modified Tables — Alter columns, rebuild dependencies, handle renames
  9. After-Tables Object Scripts — Scripts that depend on table structure being final
  10. Between Tables and Keys Scripts — Run scripts between table changes and constraint application
  11. Indexes and Constraints — Apply indexes, foreign keys, check constraints, statistics
  12. After Table Scripts — Post-constraint scripts
  13. Object Scripts (final pass) — Final object script execution
  14. Data Delivery — Run MERGE statements for seed data
  15. Table Data Scripts — Execute table data script slot
  16. After Migration Scripts — Run the After migration script slot
  17. Version Stamp — Execute the template's version stamping script

Fault Tolerance

Use Checkpointing with the --ResumeQuench flag to resume interrupted deployments from the last successful step, rather than starting over.

Infrastructure Objects

The "Kindle the Forge" step deploys helper objects into a "SchemaSmith" schema in each target database. These objects are used internally during the quench process and are safe to leave in place between runs.

Helper Functions
  • ExecuteOrDebug
  • QuoteColumnList
  • QuoteIndexColumnList
  • StripParenWrapping
  • FormatJson

Quench Procedures
  • MissingTableAndColumnQuench
  • ModifiedTableQuench
  • MissingIndexesAndConstraintsQuench
  • IndexOnlyQuench
  • TableQuench
  • GenerateTableJson

Ownership Procedures
  • ValidateTableOwnership
  • FixupTableOwnership
  • FixupIndexOwnership

Tracking Tables
  • CompletedMigrationScripts — Prevents duplicate migration script execution
  • ProductOwnership — Records which product owns each table and index

Logging & Debug SQL

SchemaQuench writes dual-stream log files (Progress + Errors) and diagnostic SQL files when table quench steps fail. These debug files contain the exact SQL that was attempted, making failures easy to reproduce.

Logging & Exit Codes covers log file locations, backup behavior, debug SQL file names, exit codes, and platform-specific logging details including PostgreSQL NOTICE filtering.

PostgreSQL Platform Notes

Product Ownership

SchemaQuench tracks object ownership through a "SchemaSmith"."ProductOwnership" table with columns for Schema, TableName, IndexName, and ProductName. This table:

  • Records which product owns each table and index
  • Prevents cross-product interference when multiple products share a database
  • Flags objects for removal when they are no longer defined in the product metadata
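Conceptually, the ownership table has a shape like the following; the column types and constraints shown here are assumptions, since the Kindle the Forge step creates the real definition:

```sql
-- Illustrative shape only; the actual table is deployed by Kindle the Forge.
CREATE TABLE IF NOT EXISTS "SchemaSmith"."ProductOwnership" (
    "Schema"      text NOT NULL,
    "TableName"   text NOT NULL,
    "IndexName"   text,            -- NULL for rows recording table ownership
    "ProductName" text NOT NULL
);
```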

Error Handling

SchemaQuench tracks errors at multiple levels: script-level, database-level, template-level, and product-level. A script failure is logged and the quench continues to the next script. Repeated failures at higher levels can cause the quench to abort the current database or template while still processing others.

Idempotency Patterns

Infrastructure objects and quench procedures use PostgreSQL idempotency patterns: CREATE ... IF NOT EXISTS, DROP ... IF EXISTS, and INSERT ... ON CONFLICT DO NOTHING/UPDATE. This means re-running the Kindle the Forge step is always safe.
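The three patterns look like this on placeholder objects (the object names below are invented purely for illustration):

```sql
-- Placeholder names; these statements only demonstrate the patterns above.
CREATE TABLE IF NOT EXISTS app.example_table (id int PRIMARY KEY);  -- skipped when present

DROP VIEW IF EXISTS app.example_view;                               -- safe even on a first run

INSERT INTO app.example_table (id)
VALUES (1)
ON CONFLICT (id) DO NOTHING;                                        -- safe to re-run
```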

Custom Table Drop Hooks

When SchemaQuench detects that a table should be dropped (it was once defined in the product but has been removed), it checks for a function named "SchemaSmith"."CustomTableDrop". If found, it calls the function with the schema and table name as parameters and skips the actual drop.

Similarly, "SchemaSmith"."CustomTableRestore" is called near the beginning of each quench to restore previously custom-dropped tables. These hooks let you delay drops, perform custom backups, or implement whatever your process requires.
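As a sketch, a hook that archives a table instead of dropping it might look like the following. The parameter names and types are assumptions (the source of truth is whatever signature your SchemaQuench version calls), and the archive schema is hypothetical:

```sql
-- Sketch only: parameter names/types are assumed; "archive" is a hypothetical schema.
CREATE OR REPLACE FUNCTION "SchemaSmith"."CustomTableDrop"(p_schema text, p_table text)
RETURNS void AS $$
BEGIN
    CREATE SCHEMA IF NOT EXISTS archive;
    EXECUTE format('ALTER TABLE %I.%I SET SCHEMA archive', p_schema, p_table);
END;
$$ LANGUAGE plpgsql;
```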

Best Practices

  • Incremental Adoption — Begin with non-critical objects to familiarize yourself with the tool's workflow.
  • Version Control Integration — Keep your metadata files under source control to track changes and collaborate effectively.
  • Environment Segregation — Use different configurations for development, staging, and production to prevent accidental deployments.
  • Regular Validation — Periodically validate that the target databases match the desired state defined in your metadata.

Forge Deeper With An Example

Say you are widening a column from INT to BIGINT. On the surface, that is a simple ALTER TABLE script. But what if that column participates in a generated column, a rule, or a view?

You would have to drop those constructs, make your change, and recreate them afterward. That is a lot of bookkeeping for a simple change. SchemaQuench handles all of it for you: change the type in the table's JSON, and SchemaQuench manages the dependencies. See Forge Deeper with SchemaQuench for details.
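In metadata terms, the whole change is one edited value. The JSON shape below is illustrative only, not the exact SchemaQuench table schema:

```json
{
  "Name": "app.orders",
  "Columns": [
    { "Name": "customer_id", "DataType": "BIGINT" }
  ]
}
```

Change the column's data type from INT to BIGINT in the table's JSON, and the quench takes care of dropping and recreating anything that depends on it.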

New Feature

We now support renaming tables and columns. See Defining Tables for more details.