SchemaSmith Documentation

DataTongs - PostgreSQL (Enterprise)

Extract data and generate deployment scripts for repeatable Postgres environments.


Overview

Now that your source and structure are under source control and your state-based deployments are in place, it's time to manage your seed data the same way. DataTongs helps you extract all or part of your data and manage deployment via generated MERGE scripts, using the data in a JSON format. It connects to a Postgres database, extracts data for the tables specified in the config file, and generates MERGE statements that insert missing rows and, optionally, update and delete based on the extracted data. You can define a filter clause in case you mix user and system data in a single table; both the extraction and any DELETE will honor that filter.

Simply point the application at a database containing the correct seed data to capture, fill out the list of tables and the key columns for each, set the merge options you wish to generate, and DataTongs will generate the scripts for you. You can generate new scripts any time you need to update the seed data, or manually update the script(s) if you find that simpler. Your application seed data becomes part of your source control and state-based updates.

Where DataTongs Fits

  • SchemaTongs extracts your schema into metadata.
  • SchemaHammer provides powerful search and edit capabilities for your metadata.
  • SchemaQuench applies your schema and data metadata to your Postgres server.
  • DataTongs extracts your seed data into metadata.

Quick Start Guide

1. Configure your appSettings.json

Point DataTongs at your source database and list the tables to extract. This example uses Schema Package Integration mode to write content files and configure table JSON automatically:

{
  "Source": {
    "Server": "localhost",
    "User": "postgres",
    "Password": "",
    "Database": "dvdrental"
  },
  "ScriptPath": "",
  "ContentPath": "/opt/schema-package/Templates/Main/TableData",
  "Tables": [
    {
      "Name": "public.country",
      "KeyColumns": "",
      "SelectColumns": "",
      "Filter": "",
      "MergeType": ""
    },
    {
      "Name": "public.language",
      "KeyColumns": "",
      "SelectColumns": "",
      "Filter": "",
      "MergeType": ""
    },
    {
      "Name": "public.category",
      "KeyColumns": "",
      "SelectColumns": "",
      "Filter": "category_id <= 10",
      "MergeType": ""
    }
  ],
  "ShouldCast": {
    "OutputContentFiles": true,
    "OutputScripts": false,
    "ConfigureDataDelivery": true,
    "MergeType": "Insert/Update",
    "MergeUpdate": true,
    "MergeDelete": false,
    "DisableTriggers": false,
    "DisableRules": false,
    "UpdateDescendents": true
  }
}
2. Run DataTongs

Execute DataTongs from the command line and it will begin the casting process.

> DataTongs
3. Add the outputs to your repository

Once you've determined your deployment scheme, add the generated files to your repository and configure either a script folder or the data delivery section of each table to deploy your seed data.

See the DataTongs Walkthrough to help you get started with the tool.

Operating Modes

DataTongs operates in two modes depending on how you want to deploy the extracted data. The modes are mutually exclusive — ConfigureDataDelivery requires OutputContentFiles to be enabled and OutputScripts to be disabled.

Schema Package Integration

When ConfigureDataDelivery is enabled, DataTongs writes .tabledata content files and automatically updates the corresponding table JSON files in your schema package with data delivery properties. SchemaQuench reads these properties at deployment time and generates MERGE statements on the fly.

Configuration:

  • ShouldCast:OutputContentFiles = true
  • ShouldCast:OutputScripts = false
  • ShouldCast:ConfigureDataDelivery = true
  • ContentPath must reside within a template directory (one that contains Template.json)

Properties written to table JSON files:

  • ContentFile: Relative path from template root to the .tabledata file.
  • MergeType: Per-table MergeType override, or the global ShouldCast:MergeType default.
  • MatchColumns: From the table's KeyColumns setting, if provided.
  • MergeFilter: From the table's Filter setting, if provided.
  • MergeDisableTriggers: From the global ShouldCast:DisableTriggers setting.
  • MergeDisableRules: From the global ShouldCast:DisableRules setting.
  • MergeUpdateDescendents: From the global ShouldCast:UpdateDescendents setting.

DataTongs only writes the table JSON file when a property value has actually changed, keeping your source control diffs clean.
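As an illustration, a table JSON file updated this way might carry delivery properties along these lines. This is a hedged sketch only: the property names come from the list above, but the surrounding file layout and nesting are assumptions, not the exact schema package format.

```json
{
  "Name": "public.category",
  "ContentFile": "TableData/public.category.tabledata",
  "MergeType": "Insert/Update",
  "MergeFilter": "category_id <= 10",
  "MergeDisableTriggers": false,
  "MergeDisableRules": false,
  "MergeUpdateDescendents": true
}
```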

Standalone Script Generation

When OutputScripts is enabled, DataTongs generates self-contained .sql files with MERGE statements. These scripts can be executed directly or placed in a script folder for SchemaQuench to run.

Configuration:

  • ShouldCast:OutputScripts = true
  • ShouldCast:ConfigureDataDelivery = false (or omitted)

Each script is named Populate schema.tablename.sql and written to ScriptPath. By default, the JSON data is embedded inline in the script. Set TokenizeScripts to true to use a {{schema.table.tabledata}} token placeholder instead, allowing you to manage the JSON content file separately from the deployment script.
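With TokenizeScripts enabled, a generated script might take roughly the following shape. This is an illustrative sketch, not the exact statement DataTongs emits: the column list and the JSON_TO_RECORDSET-based source query are assumptions standing in for however the tool materializes the JSON rows.

```sql
-- Populate public.category.sql (tokenized form, illustrative)
MERGE INTO "public"."category" AS target
USING (SELECT *
         FROM JSON_TO_RECORDSET('{{public.category.tabledata}}')
              AS src("category_id" int, "name" text)) AS source
   ON target."category_id" = source."category_id"
 WHEN MATCHED THEN UPDATE SET "name" = source."name"
 WHEN NOT MATCHED THEN INSERT ("category_id", "name")
      VALUES (source."category_id", source."name");
```

With TokenizeScripts off, the `{{public.category.tabledata}}` placeholder would instead be the inline JSON document.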

You can also combine OutputScripts with OutputContentFiles to generate both scripts and content files — useful when you want scripts for manual review and content files for automated delivery.

Test Data

You can also use DataTongs to extract test data. Generate standalone scripts and execute them outside of SchemaQuench to populate test databases as part of your test pipeline. This simplifies integration test setup and lets you easily reset to a consistent starting state between test runs.

Extraction Details

Column Auto-Detection

When SelectColumns is left blank for a table, DataTongs queries information_schema.columns and pg_attribute to build the column list automatically. Generated columns and dropped columns are excluded. Array columns are automatically wrapped with ARRAY_TO_STRING for serialization.
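The catalog lookup behind this auto-detection is roughly equivalent to the following query (a simplified sketch using a hard-coded table, not the tool's exact SQL):

```sql
SELECT c.column_name
  FROM information_schema.columns c
  JOIN pg_attribute a
    ON a.attname  = c.column_name
   AND a.attrelid = ('"' || c.table_schema || '"."' || c.table_name || '"')::regclass
 WHERE c.table_schema = 'public'
   AND c.table_name   = 'category'
   AND c.is_generated = 'NEVER'   -- exclude generated columns
   AND NOT a.attisdropped         -- exclude dropped columns
 ORDER BY c.ordinal_position;
```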

When KeyColumns is left blank, DataTongs auto-detects match columns using the table's primary key. If no primary key exists, the unique index with the fewest columns is selected, with preference for indexes where all columns are NOT NULL. Nullable match columns are prefixed with * to generate IS NULL-safe equality in the ON clause.
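For example, with a NOT NULL match column and a nullable one marked with the * prefix, the generated ON clause would use null-safe equality along these lines (illustrative column names, not from the source):

```sql
   ON target."code" = source."code"
  AND (target."region" = source."region"
       OR (target."region" IS NULL AND source."region" IS NULL))
```

Plain equality would never match two NULLs, so rows keyed by a NULL value would be re-inserted on every deployment; the IS NULL check avoids that.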

Excluded Columns

The following column types are automatically excluded from extraction:

  • Generated columns: PostgreSQL computes these values automatically from expressions; they cannot be inserted or updated directly.
  • Dropped columns: Columns that no longer exist on the table but still have historical space allocated. Detected via pg_attribute.attisdropped.

Special Type Handling

Array Columns
Array values (detected by a leading underscore in PostgreSQL's udt_name, e.g. _int4, _text) are serialized using ARRAY_TO_STRING with *,* as the delimiter and *NULL_VALUE_REPRESENTATION* for null elements within the array. At delivery time, values are deserialized with STRING_TO_ARRAY using the same conventions. This encoding preserves array elements that contain commas and distinguishes null elements from empty strings.
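The round trip can be pictured with plain SQL. The ',' delimiter and '<null>' marker below are stand-ins for the actual tokens (the real delimiter is a multi-character sequence precisely so that commas inside elements survive):

```sql
-- Serialize on extraction (stand-in delimiter and null marker)
SELECT ARRAY_TO_STRING(ARRAY['a', NULL, 'b'], ',', '<null>');
-- Deserialize at delivery; NULL elements are restored, not turned into ''
SELECT STRING_TO_ARRAY('a,<null>,b', ',', '<null>');
```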

Extraction SQL

DataTongs extracts table data using JSON_AGG and ROW_TO_JSON:

SELECT JSON_AGG(ROW_TO_JSON(tbl))
  FROM (SELECT "col1", "col2", ...
          FROM "schema"."tableName"
          WHERE {filter}
          ORDER BY "keyColumns") tbl

Results are ordered by key columns for deterministic output and cleaner source control diffs. The JSON output is lightly formatted with newlines between row objects for readability.

PostgreSQL uses double-quoted identifiers throughout. Column and table names are quoted exactly as they appear in the catalog.
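For the public.category table from the quick start, the extraction query would look like this (assuming the dvdrental category columns and the auto-detected primary key):

```sql
SELECT JSON_AGG(ROW_TO_JSON(tbl))
  FROM (SELECT "category_id", "name", "last_update"
          FROM "public"."category"
         WHERE category_id <= 10
         ORDER BY "category_id") tbl;
```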

ShouldCast Flags Reference

  • OutputContentFiles (default: false): Write .tabledata JSON files to ContentPath. Each file is named schema.tablename.tabledata. Required for ConfigureDataDelivery.
  • OutputScripts (default: true): Write standalone .sql MERGE scripts to ScriptPath. Each file is named Populate schema.tablename.sql. Incompatible with ConfigureDataDelivery.
  • TokenizeScripts (default: false): Use a {{schema.table.tabledata}} token placeholder in scripts instead of embedding JSON inline. Only applies when OutputScripts is enabled.
  • MergeUpdate (default: true): Include a WHEN MATCHED THEN UPDATE clause in generated MERGE scripts. Updates existing rows when key columns match and values differ.
  • MergeDelete (default: false): Include a WHEN NOT MATCHED BY SOURCE THEN DELETE clause. Requires PostgreSQL 17+. Deletes rows in the target not present in the source data.
  • DisableTriggers (default: false): Wrap MERGE statements with ALTER TABLE ... DISABLE TRIGGER ALL before execution and ENABLE TRIGGER ALL after.
  • DisableRules (default: false): Wrap MERGE statements with ALTER TABLE ... DISABLE RULE ALL before execution and ENABLE RULE ALL after. PostgreSQL-specific.
  • UpdateDescendents (default: true): Target inherited child tables during merge. When false, the ONLY keyword restricts operations to the named table. PostgreSQL-specific.
  • ConfigureDataDelivery (default: false): Automatically update table JSON files with delivery properties after extraction. Requires OutputContentFiles = true and OutputScripts = false. See Operating Modes.
  • MergeType (default: Insert/Update): Default merge type written to table JSON when ConfigureDataDelivery is enabled. Valid values: Insert, Insert/Update, Insert/Update/Delete. Can be overridden per table.
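Put together, a script generated with MergeUpdate, MergeDelete, and DisableTriggers all enabled would follow this shape (an illustrative sketch with a placeholder source_data query; the DELETE clause needs PostgreSQL 17+):

```sql
ALTER TABLE "public"."category" DISABLE TRIGGER ALL;

MERGE INTO "public"."category" AS target
USING source_data AS source                      -- placeholder for the extracted rows
   ON target."category_id" = source."category_id"
 WHEN MATCHED THEN UPDATE SET "name" = source."name"
 WHEN NOT MATCHED THEN INSERT ("category_id", "name")
      VALUES (source."category_id", source."name")
 WHEN NOT MATCHED BY SOURCE THEN DELETE;          -- MergeDelete, PostgreSQL 17+

ALTER TABLE "public"."category" ENABLE TRIGGER ALL;
```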
