Declare where a table's rows come from and how they should merge. SchemaQuench delivers the data in foreign-key order, across every platform, with no hand-rolled MERGE scripts.
Declare how a table's data gets merged into the target database and let SchemaQuench handle the rest.
Lookup tables, configuration rows, the reference data every environment needs to run — that data is a schema-management problem as much as the tables themselves. The DataDelivery block declares how those rows land in the target database, alongside the table definition, so reference data travels with the schema it depends on.
Tables without a DataDelivery block are left alone. Tables that declare one are picked up automatically during the data delivery step — see the SchemaQuench Table Data Delivery section for your platform (SQL Server, PostgreSQL, MySQL) for the runtime behavior.
Add a DataDelivery block to the table JSON, point it at a .tabledata file, and tell it what a "match" looks like; SchemaQuench generates the platform-appropriate merge and delivers tables in foreign-key order.
| Property | Type | Default | Description |
|---|---|---|---|
| `ContentFile` | string | | Path to the row data, relative to the template root. Typically produced by DataTongs as a `.tabledata` file (raw JSON array). |
| `MergeType` | string | | One of `Insert`, `Insert/Update`, `Insert/Update/Delete`. See MergeType below. |
| `MatchColumns` | string | | Comma-separated column names that identify a row. Prefix a column with `*` for NULL-safe comparison on nullable keys. Matches the KeyColumns concept in DataTongs. |
| `MergeFilter` | string | `""` | Optional SQL WHERE clause (without the `WHERE` keyword). Scopes both the rows considered for matching and, when delete is enabled, the rows eligible for deletion. |
| `MergeDisableTriggers` | bool | `false` | Wrap the merge with platform-appropriate trigger disable/enable. |
| `MergeDisableRules` | bool | `false` | PostgreSQL only. Disable rewrite rules on the table during the merge. |
| `MergeUpdateDescendents` | bool | `false` | PostgreSQL only. When true, the merge targets descendant partitions as well as the specified table. When false (the default), the merge uses `ONLY` so descendant tables are left untouched. |
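Several of these properties combine naturally. A hedged sketch, using a hypothetical `Config.Setting` table — and assuming, from the "prefix a column" wording above, that the NULL-safe `*` marker goes directly before the column name:

```json
{
  "DataDelivery": {
    "ContentFile": "data/Config.Setting.tabledata",
    "MergeType": "Insert/Update/Delete",
    "MatchColumns": "[SettingName], *[TenantID]",
    "MergeFilter": "[TenantID] IS NULL"
  }
}
```

With this block, only rows matching the filter compete for matches and only those rows are candidates for deletion, so tenant-specific rows outside the filter survive a full sync.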
| Value | Behavior |
|---|---|
| `Insert` | Missing rows inserted. Existing rows and extra rows left alone. The seed-data pattern. |
| `Insert/Update` | Missing rows inserted, changed rows updated. Extra rows left alone. Good for reference tables where environments can append local rows. |
| `Insert/Update/Delete` | Full sync. Missing rows inserted, changed rows updated, and target rows that don't exist in the source data are deleted. The demo products use this. When `MergeFilter` is set, deletes are scoped by the filter so rows outside it are never removed. |
The chosen idiom is platform-specific — MERGE on SQL Server and PostgreSQL, INSERT ... ON DUPLICATE KEY UPDATE with a conditional delete step on MySQL — but the declarative contract is the same on every platform.
When multiple tables declare DataDelivery, SchemaQuench orders them by their declared foreign keys.
This is automatic. You don't order the tables yourself; SchemaQuench computes the dependency graph from the ForeignKeys arrays in the table JSON. A cycle among NOT NULL foreign keys fails delivery with a clear log message — break the cycle by making one side nullable, or separate the data load into explicit phases.
```json
{
  "Name": "[Employee]",
  "Schema": "HumanResources",
  "Columns": [
    { "Name": "[EmployeeID]", "DataType": "INT", "Identity": true, "Nullable": false },
    { "Name": "[ManagerID]", "DataType": "INT", "Nullable": true },
    { "Name": "[DepartmentID]", "DataType": "INT", "Nullable": false },
    { "Name": "[FullName]", "DataType": "NVARCHAR(100)", "Nullable": false }
  ],
  "Indexes": [
    { "Name": "[PK_Employee]", "PrimaryKey": true, "Unique": true, "IndexColumns": "[EmployeeID]" }
  ],
  "ForeignKeys": [
    { "Name": "[FK_Employee_Manager]", "Columns": "[ManagerID]", "RelatedTable": "[Employee]", "RelatedColumns": "[EmployeeID]" },
    { "Name": "[FK_Employee_Department]", "Columns": "[DepartmentID]", "RelatedTable": "[Department]", "RelatedColumns": "[DepartmentID]" }
  ],
  "DataDelivery": {
    "ContentFile": "data/HumanResources.Employee.tabledata",
    "MergeType": "Insert/Update",
    "MatchColumns": "[EmployeeID]",
    "MergeDisableTriggers": true
  }
}
```
The self-referential ManagerID is nullable, so pass 1 loads every employee with ManagerID = NULL, and pass 2 back-fills the manager chain once every row exists. The mandatory DepartmentID forces Department to deliver first.
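For reference, the `.tabledata` side of this example might look like the sketch below. The rows are invented, and the exact key formatting DataTongs emits is an assumption here — a captured file is the authoritative shape:

```json
[
  { "EmployeeID": 1, "ManagerID": null, "DepartmentID": 10, "FullName": "Dana Reyes" },
  { "EmployeeID": 2, "ManagerID": 1, "DepartmentID": 10, "FullName": "Lee Tran" },
  { "EmployeeID": 3, "ManagerID": 1, "DepartmentID": 20, "FullName": "Ana Okafor" }
]
```

Rows 2 and 3 reference row 1 through ManagerID, so this file exercises exactly the two-pass behavior described above: all three rows land with ManagerID NULL first, then the manager chain is back-filled.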
You don't have to write these blocks by hand. Point DataTongs at a source database with --ConfigureDataDelivery and it writes the DataDelivery section into each table JSON, including the match columns and merge type — see DataTongs — --ConfigureDataDelivery.
Last reviewed May 2026 by the SchemaSmith Team.