SchemaSmith Documentation

DataTongs - SQL Server (Enterprise)

Extract data and generate deploy scripts built for repeatable Enterprise CI/CD pipelines.

DataTongs reference data extraction tool

Overview

Now that you have your source and structure under source control and your state-based deployments are in place, it's time to manage your seed data the same way. DataTongs helps you extract all or part of your data and manage deployment via generated MERGE scripts that use the data in a JSON format. It connects to a SQL Server database, extracts data for the tables specified in the config file, and generates MERGE statements that insert missing rows and optionally update and delete based on the extracted data as well. You can define a filter clause in case you allow mixing user and system data in a single table; both the extraction and the DELETE honor that filter.

Simply point the application at a database containing the correct seed data to capture, fill out the list of tables and the key columns for each, set the merge options you wish to generate, and we'll handle generating the scripts for you. You can generate new scripts any time you need to update the seed data, or manually update the script(s) if you find that simpler. Your application seed data becomes part of your source control and state-based updates.

Where DataTongs Fits

  • SchemaTongs extracts your schema into metadata.
  • SchemaHammer provides powerful search and edit capabilities for your metadata.
  • SchemaQuench applies your schema and data metadata to your SQL Servers.
  • DataTongs extracts your seed data into metadata.

Quick Start Guide

1. Configure your appSettings.json

Point DataTongs at your source database and list the tables to extract. This example uses Schema Package Integration mode to write content files and configure table JSON automatically:

{
  "Source": {
    "Server": "localhost",
    "Database": "AdventureWorks",
    "User": "",
    "Password": ""
  },
  "ScriptPath": "",
  "ContentPath": "C:\\SchemaPackage\\Templates\\Main\\TableData",
  "Tables": [
    {
      "Name": "Person.AddressType",
      "KeyColumns": "",
      "SelectColumns": "",
      "Filter": "",
      "MergeType": ""
    },
    {
      "Name": "Person.ContactType",
      "KeyColumns": "",
      "SelectColumns": "",
      "Filter": "",
      "MergeType": ""
    },
    {
      "Name": "Sales.Currency",
      "KeyColumns": "",
      "SelectColumns": "",
      "Filter": "CurrencyCode IN ('USD','EUR','GBP')",
      "MergeType": ""
    }
  ],
  "ShouldCast": {
    "OutputContentFiles": true,
    "OutputScripts": false,
    "ConfigureDataDelivery": true,
    "MergeType": "Insert/Update",
    "MergeUpdate": true,
    "MergeDelete": false,
    "DisableTriggers": false
  }
}
2. Run DataTongs

Execute DataTongs from the command line, and the tongs will begin the casting process.

> DataTongs
3. Add the output to your repository

Once you determine your deployment scheme, add the generated files to your repository and configure either a script folder or the data delivery section of each table to deploy your seed data.

See the DataTongs Walkthrough to help you get started with the tool.

Operating Modes

DataTongs operates in two modes depending on how you want to deploy the extracted data. The modes are mutually exclusive — ConfigureDataDelivery requires OutputContentFiles to be enabled and OutputScripts to be disabled.

Schema Package Integration

When ConfigureDataDelivery is enabled, DataTongs writes .tabledata content files and automatically updates the corresponding table JSON files in your schema package with data delivery properties. SchemaQuench reads these properties at deployment time and generates MERGE statements on the fly.

Configuration:

  • ShouldCast:OutputContentFiles = true
  • ShouldCast:OutputScripts = false
  • ShouldCast:ConfigureDataDelivery = true
  • ContentPath must reside within a template directory (one that contains Template.json)

Properties written to table JSON files:

  • ContentFile: Relative path from the template root to the .tabledata file.
  • MergeType: The per-table MergeType override, or the global ShouldCast:MergeType default.
  • MatchColumns: From the table's KeyColumns setting, if provided.
  • MergeFilter: From the table's Filter setting, if provided.
  • MergeDisableTriggers: From the global ShouldCast:DisableTriggers setting.

DataTongs only writes the table JSON file when a property value has actually changed, keeping your source control diffs clean.
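As a sketch, the delivery properties written to a table JSON file might look like the following. The five property names are those listed above; the surrounding Schema/Name fields and overall layout are assumptions about the schema package format:

```json
{
  "Schema": "Person",
  "Name": "AddressType",
  "ContentFile": "TableData\\Person.AddressType.tabledata",
  "MergeType": "Insert/Update",
  "MatchColumns": "AddressTypeID",
  "MergeDisableTriggers": false
}
```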

Standalone Script Generation

When OutputScripts is enabled, DataTongs generates self-contained .sql files with MERGE statements. These scripts can be executed directly or placed in a script folder for SchemaQuench to run.

Configuration:

  • ShouldCast:OutputScripts = true
  • ShouldCast:ConfigureDataDelivery = false (or omitted)

Each script is named Populate schema.tablename.sql and written to ScriptPath. The JSON data can either be embedded inline in the script or, when TokenizeScripts is enabled, replaced with a {{schema.table.tabledata}} token placeholder, allowing you to manage the JSON content file separately from the deployment script.

You can also combine OutputScripts with OutputContentFiles to generate both scripts and content files — useful when you want scripts for manual review and content files for automated delivery.

Test Data

You can also use DataTongs to extract test data. Generate standalone scripts and execute them outside of SchemaQuench to populate test databases as part of your test pipeline. This simplifies integration test setup and lets you easily reset to a consistent starting state between test runs.

Extraction Details

Column Auto-Detection

When SelectColumns is left blank for a table, DataTongs queries INFORMATION_SCHEMA.COLUMNS and sys.columns to build the column list automatically. Computed columns and ROWGUIDCOL columns are excluded. Geography columns are split into two expressions for WKT and SRID extraction.

When KeyColumns is left blank, DataTongs auto-detects match columns using the table's primary key. If no primary key exists, the unique index with the fewest columns is selected, with preference for indexes where all columns are NOT NULL.
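The primary-key lookup is roughly equivalent to a catalog query like this sketch. The exact query DataTongs runs is an assumption, and as noted above it also falls back to unique indexes when no primary key exists:

```sql
-- Hypothetical sketch: list a table's primary key columns, in key order.
SELECT c.name
  FROM sys.indexes i
  JOIN sys.index_columns ic ON ic.object_id = i.object_id AND ic.index_id = i.index_id
  JOIN sys.columns c ON c.object_id = ic.object_id AND c.column_id = ic.column_id
 WHERE i.object_id = OBJECT_ID('Person.AddressType')
   AND i.is_primary_key = 1
 ORDER BY ic.key_ordinal;
```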

Excluded Columns

The following column types are automatically excluded from extraction:

  • Computed columns: Values are calculated by the database engine and cannot be inserted or updated directly.
  • ROWGUIDCOL columns: Globally unique identifiers managed by the database; excluded from both extraction and merge scripts.

Special Type Handling

Certain SQL Server data types require special handling during extraction and merge script generation:

  • GEOGRAPHY: Split into two JSON properties, [Col].ToString() for the WKT representation and [Col].STSrid for the spatial reference ID; reconstructed via geography::STGeomFromText(). OPENJSON mapping: NVARCHAR(4000) + INT.
  • XML: Extracted as-is; preserves the schema collection when present. OPENJSON mapping: XML or XML([schema].[collection]).
  • NTEXT: Extracted as-is (deprecated type). OPENJSON mapping: NVARCHAR(MAX).
  • TEXT: Extracted as-is (deprecated type). OPENJSON mapping: VARCHAR(MAX).
  • IMAGE: Extracted as-is (deprecated type). OPENJSON mapping: VARBINARY(MAX).
  • HIERARCHYID: Extracted as its string representation. OPENJSON mapping: NVARCHAR(4000).

For types that require casting during comparison (XML, NTEXT, TEXT, IMAGE), DataTongs generates NULL-safe change detection in the MERGE's WHEN MATCHED clause: NOT (CAST(Target.[Col] AS TYPE) = CAST(Source.[Col] AS TYPE) OR (Target.[Col] IS NULL AND Source.[Col] IS NULL)). This ensures updates only trigger when values have actually changed.
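Putting these pieces together, a generated merge has roughly the following shape. This is a simplified, hypothetical sketch for Person.AddressType with a single non-key column; the real scripts include every extracted column and honor the configured merge options:

```sql
-- Simplified sketch of a generated merge (illustrative, not actual DataTongs output).
DECLARE @json NVARCHAR(MAX) = N'[{"AddressTypeID":1,"Name":"Billing"}]';

MERGE [Person].[AddressType] AS Target
USING (SELECT [AddressTypeID], [Name]
         FROM OPENJSON(@json)
         WITH ([AddressTypeID] INT, [Name] NVARCHAR(50))) AS Source
   ON Target.[AddressTypeID] = Source.[AddressTypeID]
 WHEN NOT MATCHED BY TARGET THEN
      INSERT ([AddressTypeID], [Name])
      VALUES (Source.[AddressTypeID], Source.[Name])
 -- NULL-safe change detection: update only when the value actually differs.
 WHEN MATCHED AND NOT (Target.[Name] = Source.[Name]
                       OR (Target.[Name] IS NULL AND Source.[Name] IS NULL)) THEN
      UPDATE SET [Name] = Source.[Name];
```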

Extraction SQL

DataTongs extracts table data using FOR JSON AUTO with NOLOCK hints:

SELECT CAST((
  SELECT [Col1], [Col2], ...
    FROM [schema].[tableName] WITH (NOLOCK)
    WHERE {filter}
    ORDER BY {keyColumns}
    FOR JSON AUTO) AS NVARCHAR(MAX))

The NOLOCK hint allows non-blocking reads during extraction, safe because DataTongs is read-only. Results are ordered by key columns for deterministic output and cleaner source control diffs.

The JSON output is lightly formatted with newlines between row objects for readability, then written to the .tabledata file or embedded in the merge script.
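A resulting .tabledata file for a small table might then look like this (row values are illustrative):

```json
[{"AddressTypeID":1,"Name":"Billing"},
{"AddressTypeID":2,"Name":"Home"},
{"AddressTypeID":3,"Name":"Main Office"}]
```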

ShouldCast Flags Reference

  • OutputContentFiles (default: false): Write .tabledata JSON files to ContentPath. Each file is named schema.tablename.tabledata. Required for ConfigureDataDelivery.
  • OutputScripts (default: true): Write standalone .sql MERGE scripts to ScriptPath. Each file is named Populate schema.tablename.sql. Incompatible with ConfigureDataDelivery.
  • TokenizeScripts (default: true): Use a {{schema.table.tabledata}} token placeholder in scripts instead of embedding the JSON inline. Only applies when OutputScripts is enabled.
  • MergeUpdate (default: true): Include a WHEN MATCHED THEN UPDATE clause in generated MERGE scripts. Updates existing rows when key columns match and values differ.
  • MergeDelete (default: false): Include a WHEN NOT MATCHED BY SOURCE THEN DELETE clause. Deletes rows in the target not present in the source data. Honors the MergeFilter predicate.
  • DisableTriggers (default: false): Wrap MERGE statements with ALTER TABLE ... DISABLE TRIGGER ALL before execution and ENABLE TRIGGER ALL after. Applies to both script generation and ConfigureDataDelivery.
  • ConfigureDataDelivery (default: false): Automatically update table JSON files with delivery properties after extraction. Requires OutputContentFiles = true and OutputScripts = false. See Operating Modes.
  • MergeType (default: Insert/Update): Default merge type written to table JSON when ConfigureDataDelivery is enabled. Valid values: Insert, Insert/Update, Insert/Update/Delete. Can be overridden per table.
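For comparison with the Quick Start example (which shows Schema Package Integration), a ShouldCast section configured for standalone script generation might look like this:

```json
"ShouldCast": {
  "OutputContentFiles": false,
  "OutputScripts": true,
  "TokenizeScripts": false,
  "MergeUpdate": true,
  "MergeDelete": false,
  "DisableTriggers": false
}
```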
