The full DataTongs reference for PostgreSQL — source connection, the Tables array, ShouldCast flags, generated MERGE scripts against jsonb_to_recordset, and the ConfigureDataDelivery pipeline that wires reference data straight into your schema package.
Lookup tables, seed data, the whole library of reference rows every environment needs — they're a schema-management problem as much as the tables themselves.
DataTongs grips reference data from a live database and produces self-contained synchronization scripts that bring it into any target. Point it at a source, list the tables you care about, and it produces one SQL file per table — ready to drop into a schema package's Table Data folder or run directly against any compatible instance.
DataTongs supports SQL Server, PostgreSQL, and MySQL. On PostgreSQL, the generated scripts use native MERGE (PostgreSQL 15 and later) with jsonb_to_recordset so the source data travels inside the script as an embedded jsonb literal — no staging tables, no COPY dependencies, no external files at run time. The configuration shape and the workflow are identical across all three platforms.
GitHub Releases — Download the latest release from the SchemaSmith releases page. Unzip and place the executable on your PATH. DataTongs ships in the same distribution as SchemaTongs and SchemaQuench.
From the directory containing your DataTongs.settings.json:
DataTongs
With a custom config file:
DataTongs --ConfigFile:/etc/schemasmith/extract-reference-data.json
With a full PostgreSQL connection string override:
DataTongs --ConnectionString:"Host=myserver;Port=5432;Database=appmain;Username=postgres;Password=secret;"
DataTongs connects to the source database, reads every table listed in the Tables array, serializes each row to JSON in deterministic key order, and writes a sync script plus a sibling .tabledata file for each table. Progress is logged to DataTongs - Progress.log and any errors to DataTongs - Errors.log in the working directory.
DataTongs reads configuration from multiple sources, merged in precedence order (highest priority last):
1. DataTongs.settings.json in the working directory (or the file specified by --ConfigFile).
2. Environment variables carrying the SmithySettings_ prefix.
3. Command-line switches.

For the full explanation of configuration loading, environment variable mapping, and CLI switches, see Configuration.
{
"Source": {
"Server": "",
"Port": "5432",
"User": "",
"Password": "",
"Database": "",
"Platform": "PostgreSQL",
"ConnectionProperties": {}
},
"ContentPath": "",
"ScriptPath": "",
"Tables": [
{ "Name": "public.country", "KeyColumns": "country_code" },
{ "Name": "public.currency", "KeyColumns": "currency_code", "Filter": "is_active = true" }
],
"ShouldCast": {
"OutputScripts": true,
"OutputContentFiles": false,
"DisableTriggers": false,
"DisableRules": false,
"UpdateDescendents": true,
"MergeUpdate": true,
"MergeDelete": false,
"MergeType": "Insert/Update",
"ConfigureDataDelivery": false
}
}
| Key | Type | Default | Description |
|---|---|---|---|
| Source:Platform | string | (required) | Set to PostgreSQL to select the PostgreSQL script generator. |
| Source:Server | string | (required) | Database server hostname or IP. |
| Source:Port | string | 5432 | TCP port. |
| Source:User | string | (empty) | Login username. |
| Source:Password | string | (empty) | Login password. |
| Source:Database | string | (required) | Source database to extract data from. |
| Source:ConnectionProperties | object | {} | Arbitrary key-value pairs appended to the connection string. Platform-specific keys — see Configuration. |
The --ConnectionString switch bypasses all Source settings and passes the provided value directly to the PostgreSQL driver. Source:Platform is still required so the right script generator claims the connection.
| Key | Type | Default | Description |
|---|---|---|---|
| ScriptPath | string | "." | Directory where the generated Populate <schema>.<table>.sql merge scripts are written. Created automatically if it doesn't exist. |
| ContentPath | string | "." | Directory where the sibling .tabledata files (raw JSON row data) are written. When --ConfigureDataDelivery is on, this path is also used to locate the enclosing template — DataTongs walks upward from ContentPath looking for a Template.json. |
The typical placement inside a schema package is ScriptPath pointing at Templates/<Name>/Table Data and ContentPath pointing at Templates/<Name>/data. That layout makes the script files visible to SchemaQuench's TableData quench slot and the .tabledata files reachable from each table's DataDelivery block.
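As a sketch of that layout, assuming a template named Main and a single extracted table (directory and file names illustrative, not prescribed):

```text
Templates/Main/
├── Template.json
├── Tables/
│   └── public.country.json          (table definition JSON files)
├── Table Data/                      (ScriptPath: generated merge scripts)
│   └── Populate public.country.sql
└── data/                            (ContentPath: .tabledata content files)
    └── public.country.tabledata
```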
| Field | Type | Default | Description |
|---|---|---|---|
| Name | string | (required) | Table name in schema.table format. If no schema prefix is given, public is assumed. |
| KeyColumns | string | (auto-detected) | Comma-separated column names for the row-matching key. When blank, auto-detected from the table's primary key or best unique index. Prefix a column with * for NULL-safe comparison on nullable keys. |
| SelectColumns | string | (all columns) | Comma-separated column names to extract. When blank, every non-excluded column is extracted. Use to narrow extraction to a specific subset. |
| Filter | string | (empty) | SQL WHERE clause (without the WHERE keyword) to filter which rows are extracted. Also applied to the delete clause when MergeDelete is enabled. |
| MergeType | string | (derived) | Per-table override for the DataDelivery:MergeType default. Values: None, Insert, Insert/Update, Insert/Update/Delete. When blank, derived from global ShouldCast:MergeUpdate + ShouldCast:MergeDelete. |
See Table configuration below for key-column auto-detection, nullable-key handling, and filter semantics. The Tables array itself can't be configured via environment variables — table definitions must live in the JSON configuration file.
Specify tables in schema.table format. If you omit the schema, public is assumed:
{ "Name": "public.country" }
{ "Name": "products" }
{ "Name": "config.feature_flags" }
DataTongs validates that each table exists in the source database before attempting extraction. Tables that don't exist are skipped with an error message.
Key columns define the row-matching predicate — the ON clause of the generated MERGE.
Auto-detection. When KeyColumns is blank, DataTongs queries the table's indexes and selects the primary key if one exists. If there's no primary key, it falls back to the first available unique index. Nullable columns in the detected key are automatically prefixed with * for NULL-safe comparison. This handles the vast majority of tables without any manual configuration.
Manual override. When you specify KeyColumns, DataTongs uses your list instead of auto-detection. Separate multiple columns with commas:
{ "Name": "public.order_line", "KeyColumns": "order_id, line_number" }
Nullable key columns. If a key column allows NULLs, prefix it with *. This generates NULL-safe matching:
{ "Name": "public.mapping", "KeyColumns": "source_code, *target_code" }
The generated ON clause becomes:
ON "Source"."source_code" = "Target"."source_code"
AND ("Source"."target_code" = "Target"."target_code"
OR ("Source"."target_code" IS NULL AND "Target"."target_code" IS NULL))
When auto-detection discovers a nullable column in a unique index, it automatically applies the * behavior.
No key available. If a table has no primary key, no unique index, and no KeyColumns configured, DataTongs skips the table with an error message.
The Filter field accepts a SQL WHERE clause (without the WHERE keyword). It controls two things:
1. Which rows are extracted from the source.
2. Which rows are eligible for deletion when MergeDelete is enabled.

{ "Name": "public.feature_flags", "KeyColumns": "flag_name", "Filter": "is_active = true" }
This extracts only active flags and, if MergeDelete is on, only deletes active flags that no longer exist in the source. Inactive flags in the target are untouched.
Control how DataTongs generates merge scripts and whether it wires the results into a schema package.
| Flag | Default | Effect |
|---|---|---|
| OutputScripts | true | Generate merge scripts. When false, only .tabledata files are written. |
| OutputContentFiles | false | Write raw data to sibling .tabledata content files. Automatically enabled when ConfigureDataDelivery is on. |
| DisableTriggers | false | Wraps the generated script with ALTER TABLE ... DISABLE TRIGGER USER / ENABLE TRIGGER USER. Use when the target table has audit or notification triggers whose side effects are undesirable during a bulk data refresh. |
| DisableRules | false | Wraps the generated script with ALTER TABLE ... DISABLE RULE / ENABLE RULE. Use when the target table has PostgreSQL rules whose rewrites would interfere with the merge. |
| UpdateDescendents | true | When false, the generated MERGE uses MERGE INTO ... ONLY so partitioned-table writes do not propagate to descendant tables. |
| MergeUpdate | true | Includes the WHEN MATCHED update branch. When disabled, existing rows are left untouched regardless of whether their data differs. |
| MergeDelete | false | Includes the WHEN NOT MATCHED BY SOURCE delete branch. When a Filter is configured, the delete clause respects the filter so that rows outside the filter are never removed. |
| MergeType | Insert/Update | Default DataDelivery:MergeType for tables that don't set it explicitly. Values: None, Insert, Insert/Update, Insert/Update/Delete. Used when writing DataDelivery blocks via --ConfigureDataDelivery. |
| ConfigureDataDelivery | false | After extraction, writes a DataDelivery block into each matching table's JSON file inside the enclosing schema package. See --ConfigureDataDelivery. |
These settings compose into three practical patterns:
| Scenario | MergeUpdate | MergeDelete | Effect |
|---|---|---|---|
| Full sync | true | true | Insert missing, update changed, delete removed. Target matches source exactly. |
| Add and update, no deletes | true | false | Insert missing, update changed, leave extra rows alone. Good when targets carry environment-specific additions. |
| Seed only | false | false | Insert missing rows only. Existing rows untouched, nothing deleted. Good for seed data without overwriting local customizations. |
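For example, the seed-only pattern corresponds to this ShouldCast fragment (all other flags left at their defaults):

```json
{
  "ShouldCast": {
    "MergeUpdate": false,
    "MergeDelete": false
  }
}
```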
What DataTongs writes to disk on PostgreSQL is a native MERGE statement with the source data embedded as a jsonb literal and parsed through jsonb_to_recordset. The script is self-contained — no staging tables, no COPY dependencies, no external files. Run it against any compatible database (PostgreSQL 15 or higher, for native MERGE support) and the data arrives.
MERGE INTO "public"."shippers" AS "Target"
USING (
SELECT "company_name", "phone", "shipper_id"
FROM jsonb_to_recordset('[
{"company_name":"Speedy Express","phone":"(503) 555-9831","shipper_id":1},
{"company_name":"United Package","phone":"(503) 555-3199","shipper_id":2},
{"company_name":"Federal Shipping","phone":"(503) 555-9931","shipper_id":3}
]') AS x("company_name" varchar(40), "phone" varchar(24), "shipper_id" integer)
) AS "Source"
ON "Source"."shipper_id" = "Target"."shipper_id"
WHEN MATCHED AND (<change detection>) THEN -- if MergeUpdate
UPDATE SET "company_name" = "Source"."company_name",
"phone" = "Source"."phone"
WHEN NOT MATCHED THEN
INSERT ("company_name", "phone", "shipper_id")
VALUES ("Source"."company_name", "Source"."phone", "Source"."shipper_id")
WHEN NOT MATCHED BY SOURCE THEN -- if MergeDelete
DELETE
;
The comment markers (-- if MergeUpdate, -- if MergeDelete) indicate sections that are conditionally emitted. With all flags on, every block lands in the file; flip a flag off and the corresponding block disappears. When DisableTriggers or DisableRules is enabled, the script is wrapped with the matching ALTER TABLE statements before and after the MERGE.
Identity columns declared with GENERATED ALWAYS AS IDENTITY are handled via OVERRIDING SYSTEM VALUE in the INSERT clause so explicit values can be carried from the source. They are excluded from the UPDATE SET clause so existing identity values are never modified.
Generated columns (GENERATED ALWAYS AS ... STORED) are auto-excluded from extraction and from every section of the generated script. They can't be assigned, so attempting to insert or update them would fail at run time.
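As a minimal sketch, assuming a hypothetical table with a GENERATED ALWAYS AS IDENTITY column named id, the insert branch of the generated script would look roughly like:

```sql
WHEN NOT MATCHED THEN
    INSERT ("id", "name")
    OVERRIDING SYSTEM VALUE  -- allow explicit identity values carried from the source
    VALUES ("Source"."id", "Source"."name")
-- the corresponding UPDATE SET would assign only "name";
-- "id" is excluded so existing identity values are never modified
```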
When UpdateDescendents is false, the generated MERGE uses MERGE INTO <table> ONLY AS "Target" so partitioned-table writes stay on the declared parent and do not propagate to descendant tables. Use this when you're delivering data to a partition directly and don't want the merge to cascade.
DataTongs detects column types and applies the correct extraction and restoration strategy for each. On PostgreSQL:
- Identity columns: restored with OVERRIDING SYSTEM VALUE in the INSERT clause; excluded from the UPDATE SET clause.
- Most other types: round-tripped through the jsonb_to_recordset cast path.
- Auto-excluded on PostgreSQL: tsvector, tsquery, money, geometric types (box, circle, line, lseg, path), and user-defined composite types. A warning is logged for each skipped column so you know exactly what was skipped.

These types are skipped because their JSON representation can't reliably round-trip without manual intervention. If you need data from these columns, handle them with custom scripts outside of DataTongs.
DataTongs can do more than write scripts — it can wire the extracted data straight into your schema package's declarative delivery pipeline. Pass --ConfigureDataDelivery and DataTongs writes a DataDelivery block into each matching table's JSON file after extracting the data.
DataTongs --ConfigureDataDelivery
DataTongs --ConfigureDataDelivery --TemplatePath:./Templates/Main
The switch can also be enabled via configuration:
{
"ShouldCast": { "ConfigureDataDelivery": true }
}
For every table that produces output, DataTongs opens Tables/<schema>.<table>.json inside the template and writes (or overwrites) a DataDelivery block with:
| Property | Value |
|---|---|
| ContentFile | Path to the written .tabledata file, relative to the template root. |
| MergeType | Either the per-table MergeType from config, or a default derived from ShouldCast:MergeUpdate + ShouldCast:MergeDelete (see below). |
| MatchColumns | The key columns used during extraction — either the per-table KeyColumns, or the primary key auto-detected from the source database (same logic that drives the generated script). |
| MergeFilter | The per-table Filter, if one was specified. |
| MergeDisableTriggers | Mirrors ShouldCast:DisableTriggers. |
| MergeDisableRules | Mirrors ShouldCast:DisableRules. |
| MergeUpdateDescendents | Mirrors ShouldCast:UpdateDescendents (default true). |
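As an illustrative sketch, for the public.currency entry from the sample configuration above, the written block might look like this (the exact ContentFile path and value shapes depend on your template layout):

```json
{
  "DataDelivery": {
    "ContentFile": "data/public.currency.tabledata",
    "MergeType": "Insert/Update",
    "MatchColumns": "currency_code",
    "MergeFilter": "is_active = true",
    "MergeDisableTriggers": false,
    "MergeDisableRules": false,
    "MergeUpdateDescendents": true
  }
}
```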
Tables whose JSON file doesn't exist in the template are skipped with a warning. No file is created from scratch — this tool configures existing tables, it doesn't scaffold new ones (use SchemaTongs for that).
When a table's config doesn't set MergeType explicitly, the default is derived from the global ShouldCast flags:
| MergeUpdate | MergeDelete | Default MergeType |
|---|---|---|
| true | true | Insert/Update/Delete |
| true | false | Insert/Update |
| false | any | Insert |
A per-table MergeType on the Tables[] entry always wins over the derived default.
--ConfigureDataDelivery needs to know where your template lives. It uses this precedence:
1. The --TemplatePath:<path> CLI switch.
2. The TemplatePath config key.
3. An upward walk from ContentPath to the nearest directory containing a Template.json.

If nothing is found, DataTongs logs a warning and skips the configuration step — extraction still completes normally. See Schema Packages — DataDelivery for the full property reference and the schema validation that catches misconfigurations at authoring time.
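If you prefer configuration over CLI switches, a minimal fragment combining the TemplatePath key with the ShouldCast flag might look like this (path illustrative):

```json
{
  "TemplatePath": "./Templates/Main",
  "ShouldCast": { "ConfigureDataDelivery": true }
}
```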
DataTongs extracts data using a deterministic, JSON-shaped query; on PostgreSQL, that query uses jsonb_agg(row_to_json(...)), with keys serialized in a stable order so repeated extractions of unchanged data produce identical files and clean diffs.
Nobody wants a script that updates every row just because it can. The update branch of the generated script fires only when the source row actually differs from the target row. DataTongs generates a type-aware comparison for every non-key, non-identity column.
For most columns:
NOT ("Target"."column" = "Source"."column"
OR ("Target"."column" IS NULL AND "Source"."column" IS NULL))
This NULL-safe comparison treats two NULLs as equal (no update needed) and a NULL versus a non-NULL as different (update needed).
For special types — JSON / JSONB, large text, arrays — the comparison is wrapped with the appropriate cast or normalization function (e.g., ::text coercion for JSONB equality that survives whitespace differences).
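For a hypothetical jsonb column named settings, the wrapped predicate would look roughly like:

```sql
NOT ("Target"."settings"::text = "Source"."settings"::text
  OR ("Target"."settings" IS NULL AND "Source"."settings" IS NULL))
```

The ::text coercion happens after jsonb storage has already normalized the value, so incidental whitespace differences in the original input never register as a change.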
This approach means running DataTongs twice against unchanged data produces a script that matches every row but updates none — the script becomes a no-op for existing data. Diffs are clean. Reviews are honest.
Last reviewed May 2026 by the SchemaSmith Team.