Package data changes into deliverable artifacts for controlled rollouts.
In the Community Edition, data delivery is accomplished with scripts:
SchemaQuench supports the TableData quench slot and its
script behavior. The Enterprise Edition expands on this
capability by offering the ability to export your data to JSON files and configure tables
for automated data delivery. This feature uses the JSON files to generate MERGE scripts
dynamically, which are applied immediately prior to the TableData scripts.
If you continue using TableData scripts, SchemaQuench loops over them to resolve
dependency order. However, this can be expensive, especially with many tables or complex
dependency chains. You can optionally assign names to your scripts so that they sort in a
way that controls their execution order, reducing or eliminating the need for looping,
though this approach increases the maintenance burden.
With data delivery, SchemaQuench traverses the foreign keys to optimize the delivery order, and it uses looping only to resolve dependencies that cannot be accurately determined up front, such as circular references.
To capture the necessary JSON files, use DataTongs. The AdventureWorks product in the demo repository is configured to use data delivery for both seed data and the full set of test data.
At the table level, you can simplify setup and reduce script management by configuring the following five properties in your table's JSON file:
```json
{
  "ContentFile": "TableContents\\Person.AddressType.content",
  "MergeType": "Insert/Update",
  "MatchColumns": "",
  "MergeFilter": "",
  "MergeDisableTriggers": false
}
```
A complete table JSON with these properties configured can be found at the bottom of this file.
| Property | Required | Default | Description |
|---|---|---|---|
| ContentFile | Yes | — | Relative path from the template root to the .tabledata JSON file created by DataTongs. |
| MergeType | Yes | — | Controls which MERGE clauses are generated: Insert, Insert/Update, or Insert/Update/Delete. |
| MergeDisableTriggers | No | false | When true, wraps the merge with ALTER TABLE ... DISABLE TRIGGER ALL / ENABLE TRIGGER ALL. |
| MatchColumns | No | Auto-detected | Comma-separated column names used to match source and target rows. Overrides auto-detection from PK or unique index. |
| MergeFilter | No | — | SQL predicate applied to the DELETE clause only, scoping which rows are eligible for deletion. |
- The ContentFile path is relative to the template for which the table is defined.
- MergeFilter scopes the DELETE operation when included.
- MergeType values include Insert, Insert/Update, and Insert/Update/Delete.
The MergeType property controls which clauses are included in the generated
MERGE statement. Each value builds on the previous one, adding additional
behavior to synchronize target data with the content file.
| MergeType | MERGE Clauses Generated | Behavior |
|---|---|---|
| Insert | WHEN NOT MATCHED BY TARGET THEN INSERT | Adds new rows only. Existing rows are never modified or removed. |
| Insert/Update | Above + WHEN MATCHED THEN UPDATE | Adds new rows and updates existing rows that have changed. No rows are deleted. |
| Insert/Update/Delete | Above + WHEN NOT MATCHED BY SOURCE THEN DELETE | Full synchronization. Rows not present in the content file are removed from the target table. |
SchemaQuench dynamically generates a MERGE statement for each table. The content
file data is loaded into a variable and parsed with OPENJSON. Column metadata is
resolved from the target database at delivery time, not stored in the content file.
```sql
DECLARE @v_json NVARCHAR(MAX) = '[ ... content file data ... ]';
MERGE INTO [Schema].[TableName] AS Target
USING (
    SELECT [Col1], [Col2], ...
    FROM OPENJSON(@v_json)
    WITH (
        [Col1] NVARCHAR(100),
        [Col2] INT,
        ...
    )
) AS Source
ON Source.[KeyCol] = Target.[KeyCol]
WHEN MATCHED AND (NOT (Target.[Col2] = Source.[Col2]
    OR (Target.[Col2] IS NULL AND Source.[Col2] IS NULL))) THEN
    UPDATE SET [Col2] = Source.[Col2], ...
WHEN NOT MATCHED BY TARGET THEN
    INSERT ([Col1], [Col2], ...)
    VALUES (Source.[Col1], Source.[Col2], ...)
WHEN NOT MATCHED BY SOURCE THEN
    DELETE;
```
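The relationship between the MergeType setting and the clauses that end up in the generated statement can be sketched in a few lines. This is an illustrative model only; `build_merge_clauses` is a hypothetical name, not SchemaQuench's actual internals:

```python
def build_merge_clauses(merge_type: str) -> list[str]:
    """Return the WHEN clauses implied by a MergeType value (illustrative)."""
    # Every MergeType starts with the INSERT clause.
    clauses = ["WHEN NOT MATCHED BY TARGET THEN INSERT"]
    if merge_type in ("Insert/Update", "Insert/Update/Delete"):
        clauses.append("WHEN MATCHED THEN UPDATE")
    if merge_type == "Insert/Update/Delete":
        clauses.append("WHEN NOT MATCHED BY SOURCE THEN DELETE")
    return clauses
```

Each value is a strict superset of the previous one, which is why the table above describes them as "Above + …".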
Certain SQL Server types cannot be used directly with OPENJSON. SchemaQuench
automatically maps these types in the WITH clause and reconstructs the original
values in the SELECT.
| SQL Server Type | OPENJSON Mapping | Notes |
|---|---|---|
| HIERARCHYID | NVARCHAR(4000) | String representation of the hierarchy path |
| GEOGRAPHY | NVARCHAR(4000) + companion INT SRID column | Stored as .ToString() with a separate [Col.STSrid] column. Reconstructed via geography::STGeomFromText() |
| NTEXT | NVARCHAR(MAX) | Deprecated type mapped to its modern equivalent |
| TEXT | VARCHAR(MAX) | Deprecated type mapped to its modern equivalent |
| IMAGE | VARBINARY(MAX) | Deprecated type mapped to its modern equivalent |
| XML with schema collection | XML([schema].[collection]) | Schema collection reference is preserved in the WITH clause |
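The scalar substitutions in the table reduce to a lookup: types OPENJSON cannot consume are swapped for a stand-in in the WITH clause. A minimal sketch, with assumed names (`openjson_with_type` is not a SchemaQuench API):

```python
# Stand-in types for the OPENJSON WITH clause, per the table above.
OPENJSON_TYPE_MAP = {
    "HIERARCHYID": "NVARCHAR(4000)",
    "GEOGRAPHY": "NVARCHAR(4000)",  # plus a companion INT SRID column
    "NTEXT": "NVARCHAR(MAX)",
    "TEXT": "VARCHAR(MAX)",
    "IMAGE": "VARBINARY(MAX)",
}

def openjson_with_type(sql_type: str) -> str:
    """Return the type to declare in the WITH clause; pass others through."""
    return OPENJSON_TYPE_MAP.get(sql_type.upper(), sql_type)
```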
The WHEN MATCHED clause only triggers an update when at least one column value
has actually changed. All comparisons use NULL-safe equality: if both the source and target
values are NULL, the columns are considered equal.
| Column Type | Comparison Method |
|---|---|
| Standard types | Target.[Col] = Source.[Col] OR (Target.[Col] IS NULL AND Source.[Col] IS NULL) |
| GEOGRAPHY | Target.[Col].ToString() = Source.[Col].ToString() with NULL check |
| XML, NTEXT | CAST(Target.[Col] AS NVARCHAR(MAX)) = CAST(Source.[Col] AS NVARCHAR(MAX)) with NULL check |
| TEXT | CAST(Target.[Col] AS VARCHAR(MAX)) = CAST(Source.[Col] AS VARCHAR(MAX)) with NULL check |
| IMAGE | CAST(Target.[Col] AS VARBINARY(MAX)) = CAST(Source.[Col] AS VARBINARY(MAX)) with NULL check |
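The NULL-safe change detection can be modeled in a few lines of Python, with None standing in for SQL NULL. These helper names are illustrative:

```python
def null_safe_equal(target, source) -> bool:
    """Two NULLs compare equal; otherwise use ordinary equality."""
    if target is None and source is None:
        return True
    return target == source

def row_changed(target_row: dict, source_row: dict, columns: list) -> bool:
    """True when at least one column differs, i.e. an UPDATE is needed."""
    return any(not null_safe_equal(target_row.get(c), source_row.get(c))
               for c in columns)
```

An unchanged row, including one where both sides are NULL, produces no UPDATE, which keeps the merge idempotent on re-runs.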
The MergeFilter property applies a SQL predicate to the DELETE clause (WHEN NOT MATCHED BY SOURCE).
It restricts which rows are eligible for deletion, leaving rows outside the filter untouched.
The filter expression must reference the Target alias.
For example, Target.[Region] = 'US' would only delete unmatched US-region rows.
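One plausible way the filter folds into the generated clause is as an extra AND condition. A sketch under that assumption (`delete_clause` is a hypothetical helper):

```python
def delete_clause(merge_filter=None) -> str:
    """Build the DELETE clause, scoping it with MergeFilter when present."""
    if merge_filter:
        return f"WHEN NOT MATCHED BY SOURCE AND ({merge_filter}) THEN DELETE"
    return "WHEN NOT MATCHED BY SOURCE THEN DELETE"
```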
When MergeDisableTriggers is true, the merge script is wrapped with
ALTER TABLE [Schema].[Table] DISABLE TRIGGER ALL before execution
and ALTER TABLE [Schema].[Table] ENABLE TRIGGER ALL after.
This prevents triggers from firing during data synchronization.
Tables with an identity column are wrapped with SET IDENTITY_INSERT [Schema].[Table] ON and
SET IDENTITY_INSERT [Schema].[Table] OFF. Identity columns are
included in the content file and delivered as explicit values but excluded from
the UPDATE clause, since their values cannot change.
SchemaQuench uses a two-pass, foreign-key-aware algorithm to deliver data in the correct dependency order. This avoids FK constraint violations without requiring you to manually order your tables.
Before delivery begins, each foreign key relationship is classified based on the nullability of its columns. Only foreign keys pointing to other tables in the delivery set are considered.
| FK Type | Condition | Effect |
|---|---|---|
| Required | Any FK column is NOT NULL | The referenced table must be delivered first. This creates a hard dependency edge. |
| Deferred | All FK columns are nullable | The FK columns can be temporarily set to NULL, allowing the table to be inserted in Pass 1 without the referenced data. |
| Ignored | Self-referencing FK, or referenced table is not in the delivery set | Not factored into delivery ordering. |
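The classification rules above are simple enough to express directly. A sketch with assumed names (`ForeignKey` and `classify_fk` are illustrative, not SchemaQuench's types):

```python
from dataclasses import dataclass

@dataclass
class ForeignKey:
    table: str
    referenced_table: str
    columns_nullable: list  # nullability of each FK column, in order

def classify_fk(fk: ForeignKey, delivery_set: set) -> str:
    # Self-references and FKs leaving the delivery set are not ordered on.
    if fk.referenced_table == fk.table or fk.referenced_table not in delivery_set:
        return "Ignored"
    # A single NOT NULL column makes the dependency hard.
    if not all(fk.columns_nullable):
        return "Required"
    # All-nullable FKs can be set to NULL in Pass 1 and filled in Pass 2.
    return "Deferred"
```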
Tables are delivered in topological order based on their required dependency edges. The algorithm iterates until all deliverable tables are processed:

1. Tables whose required dependencies have all been delivered receive a full merge script (using their configured MergeType).
2. Tables blocked only by deferred dependencies are delivered with those FK columns replaced by CAST(NULL AS <type>) in the source query. These tables are queued for Pass 2.
If tables remain after topological iteration completes (all have unmet required dependencies), they form a hard circular dependency. These tables are delivered as-is with full merge scripts. Any FK constraint violations will surface as errors and can be resolved with migration scripts.
After all tables have been inserted, tables that were deferred in Pass 1 receive a
second full merge using their configured MergeType. This fills in the actual
FK values now that all referenced rows exist in the database.
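The two-pass ordering can be simulated compactly. This is a simplified model under the assumptions stated above, not the actual implementation; `plan_delivery` and its inputs are illustrative:

```python
def plan_delivery(required: dict, deferred: dict):
    """required/deferred map each table to the set of tables it depends on.
    Returns (pass1 delivery order, tables needing a Pass 2 re-merge)."""
    pass1, pass2 = [], []
    delivered = set()
    pending = set(required) | set(deferred)
    while pending:
        progress = False
        for table in sorted(pending):
            # Deliver once every required dependency is already in place.
            if required.get(table, set()) <= delivered:
                pass1.append(table)
                # Any deferred dependency not yet delivered was NULLed out,
                # so the table needs a second full merge later.
                if deferred.get(table, set()) - delivered:
                    pass2.append(table)
                delivered.add(table)
                pending.discard(table)
                progress = True
        if not progress:
            # Hard circular dependency: deliver as-is; FK errors surface later.
            pass1.extend(sorted(pending))
            break
    return pass1, pass2
```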
When SchemaQuench runs in WhatIf mode, the delivery algorithm performs the
same FK classification and topological ordering, but no SQL is executed against the
database. Instead, it logs what each table delivery would do, including which tables
would be deferred, which would be delivered with retry, and which would receive a
Pass 2 update.
SchemaQuench tracks "Data Delivery" as a single checkpoint step per template. If delivery fails partway through (e.g., on the fifth of ten tables), the entire data delivery step re-executes on the next run. The merge-based algorithm is designed to be safely re-runnable, so tables that were already delivered are re-merged without data loss or duplication.
Content files use the .tabledata extension and contain a JSON array of
row objects. Each object represents one row, with property names matching the column names
of the source table.
```json
[
  {"AddressTypeID": 1, "Name": "Billing", "ModifiedDate": "2008-04-30T00:00:00"},
  {"AddressTypeID": 2, "Name": "Home", "ModifiedDate": "2008-04-30T00:00:00"},
  {"AddressTypeID": 3, "Name": "Main Office", "ModifiedDate": "2008-04-30T00:00:00"}
]
```
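Because a .tabledata file is plain JSON, it can be inspected or diffed with any JSON tooling. A quick sketch (the sample rows mirror the example above):

```python
import json

# Two rows in the same shape DataTongs writes to a .tabledata file.
content = '''[
  {"AddressTypeID": 1, "Name": "Billing", "ModifiedDate": "2008-04-30T00:00:00"},
  {"AddressTypeID": 2, "Name": "Home", "ModifiedDate": "2008-04-30T00:00:00"}
]'''

rows = json.loads(content)
columns = sorted(rows[0].keys())  # column names come from the property names
```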
Content files are created by DataTongs, which queries the
source database using FOR JSON AUTO with NOLOCK hints. DataTongs
can also automatically configure the data delivery properties in your table JSON files
when the ConfigureDataDelivery setting is enabled.
Column metadata is not stored in the content file. At delivery time,
SchemaQuench queries the target database's INFORMATION_SCHEMA and
sys.columns to resolve column types, nullability, and identity status.
This means the content file is decoupled from the target schema -- columns can be
added or removed from the table definition without regenerating the content file, as
long as the match columns remain present.
The following column types are automatically excluded from content files and merge scripts:
| Column Type | Reason |
|---|---|
| Computed columns | Values are calculated by the database engine and cannot be inserted or updated directly. |
| ROWGUIDCOL columns | Globally unique identifiers managed by the database; excluded from both extraction and insertion. |
GEOGRAPHY columns are captured as two values: [Col].ToString() for the WKT
representation and [Col].STSrid for the spatial reference ID. At delivery
time, the value is reconstructed via geography::STGeomFromText([Col], [Col.STSrid]).
When MatchColumns is not specified, SchemaQuench auto-detects the
match columns from the target table, preferring the primary key and falling back to a unique index.
When nullable columns are part of the match key, they are prefixed with an asterisk
in the MatchColumns value (e.g., *[NullableCol]). This tells
SchemaQuench to generate IS NULL-safe equality in the ON clause:
(Source.[Col] = Target.[Col] OR (Source.[Col] IS NULL AND Target.[Col] IS NULL)).
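Building the ON clause from a MatchColumns value, including the asterisk convention, can be sketched as follows. The function name is illustrative:

```python
def on_clause(match_columns: str) -> str:
    """Translate a MatchColumns value into an ON-clause predicate."""
    predicates = []
    for raw in match_columns.split(","):
        col = raw.strip()
        if col.startswith("*"):
            # Asterisk prefix: nullable match column, use NULL-safe equality.
            col = col.lstrip("*")
            predicates.append(
                f"(Source.{col} = Target.{col} "
                f"OR (Source.{col} IS NULL AND Target.{col} IS NULL))")
        else:
            predicates.append(f"Source.{col} = Target.{col}")
    return " AND ".join(predicates)
```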