The daily rhythm of shaping a MySQL schema with SchemaSmith. Add tables, modify columns, write procedures, extract drift from a live database, and bootstrap new environments — all from one schema package.
Now it's time to give your database form.
Your team needs a promotions table to track discount campaigns. Here's what you do.
Add promotions.json to your package's Tables/ folder:
{
  "Name": "promotions",
  "Engine": "InnoDB",
  "Columns": [
    {
      "Name": "promotion_id",
      "DataType": "INT AUTO_INCREMENT",
      "Nullable": false
    },
    {
      "Name": "promotion_name",
      "DataType": "VARCHAR(100)",
      "Nullable": false
    },
    {
      "Name": "discount_percent",
      "DataType": "DECIMAL(5,2)",
      "Nullable": false,
      "Default": "0"
    },
    {
      "Name": "start_date",
      "DataType": "DATE",
      "Nullable": false
    },
    {
      "Name": "end_date",
      "DataType": "DATE",
      "Nullable": true
    },
    {
      "Name": "is_active",
      "DataType": "TINYINT(1)",
      "Nullable": false,
      "Default": "1"
    }
  ],
  "Indexes": [
    {
      "Name": "PRIMARY",
      "PrimaryKey": true,
      "Unique": true,
      "IndexColumns": "promotion_id"
    },
    {
      "Name": "ix_promotions_start_date",
      "IndexColumns": "start_date"
    }
  ],
  "CheckConstraints": [
    {
      "Name": "ck_promotions_discount_range",
      "Expression": "discount_percent BETWEEN 0 AND 100"
    }
  ]
}
SchemaQuench
SchemaQuench reads the JSON, sees that promotions doesn't exist in the target database, and generates a CREATE TABLE statement. One file. One command. Done.
Compare this to the traditional approach: write a CREATE TABLE script, write a migration file with a sequence number, make sure the sequence number doesn't collide with anyone else's, add an IF NOT EXISTS guard, add a corresponding rollback script, update a migrations tracking table. With SchemaSmith, you created one file and ran one command. No migration scripts. No dependency ordering. No collision worries.
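For a sense of what that looks like, here is a plausible rendering of the DDL derived directly from the JSON above — the exact statement SchemaQuench emits may be formatted differently:

```sql
CREATE TABLE `promotions` (
  `promotion_id` INT AUTO_INCREMENT NOT NULL,
  `promotion_name` VARCHAR(100) NOT NULL,
  `discount_percent` DECIMAL(5,2) NOT NULL DEFAULT 0,
  `start_date` DATE NOT NULL,
  `end_date` DATE NULL,
  `is_active` TINYINT(1) NOT NULL DEFAULT 1,
  PRIMARY KEY (`promotion_id`),
  KEY `ix_promotions_start_date` (`start_date`),
  CONSTRAINT `ck_promotions_discount_range`
    CHECK (discount_percent BETWEEN 0 AND 100)
) ENGINE=InnoDB;
```

Every clause maps back to a field in the JSON: nullability, defaults, the primary key, the secondary index, and the named check constraint.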
The promotions table needs changes. Marketing wants a description field, the discount column needs more precision, and you need an index on the active flag for a dashboard query. All three edits happen in the same JSON file — you shape the table right where it lives.
Insert a new entry in the Columns array:
{
  "Name": "description",
  "DataType": "VARCHAR(500)",
  "Nullable": true
}
Find the discount_percent column and edit its DataType:
"DataType": "DECIMAL(7,4)"
Add a new entry in the Indexes array:
{
  "Name": "ix_promotions_is_active",
  "IndexColumns": "is_active"
}
Preview the changes before touching the database. Run SchemaQuench in WhatIf mode:
SmithySettings_WhatIfONLY=true SchemaQuench
SchemaQuench generates the SQL it would execute — an ALTER TABLE ... ADD COLUMN for the new column, an ALTER TABLE ... MODIFY COLUMN for the data type change, and a CREATE INDEX for the new index — and logs it all without applying anything. Read the generated SQL, confirm it looks right, then run SchemaQuench normally to quench the changes into your database. Three changes to one file. One preview. One command. No scripts to write, number, or maintain.
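Roughly, the preview log would contain statements along these lines — the exact shape is up to SchemaQuench, but the three operations follow directly from the three edits:

```sql
ALTER TABLE `promotions` ADD COLUMN `description` VARCHAR(500) NULL;
ALTER TABLE `promotions` MODIFY COLUMN `discount_percent` DECIMAL(7,4) NOT NULL DEFAULT 0;
CREATE INDEX `ix_promotions_is_active` ON `promotions` (`is_active`);
```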
Stored procedures, functions, views, triggers, and events work differently from tables. Instead of JSON, they're plain .sql files. Each object gets its own file in the matching folder:
| Object type | Folder |
|---|---|
| Functions | Functions/ |
| Procedures | Procedures/ |
| Views | Views/ |
| Triggers | Triggers/ |
| Events | Events/ |
Here's a MySQL stored procedure that returns the order history for a customer. Create CustOrderHist.sql in the Procedures/ folder:
DROP PROCEDURE IF EXISTS `CustOrderHist`;
DELIMITER //
CREATE PROCEDURE `CustOrderHist` (IN p_CustomerID CHAR(5))
LANGUAGE SQL
NOT DETERMINISTIC
CONTAINS SQL
SQL SECURITY DEFINER
BEGIN
SELECT `ProductName`, SUM(`Quantity`) AS `Total`
FROM `Products` P, `Order Details` OD, `Orders` O, `Customers` C
WHERE C.`CustomerID` = p_CustomerID
AND C.`CustomerID` = O.`CustomerID`
AND O.`OrderID` = OD.`OrderID`
AND OD.`ProductID` = P.`ProductID`
GROUP BY `ProductName`;
END //
DELIMITER ;
The key detail: DROP PROCEDURE IF EXISTS followed by CREATE PROCEDURE. MySQL doesn't support CREATE OR REPLACE for procedures, so this drop-then-create pair is the idempotent idiom. DELIMITER // lets the procedure body contain semicolons without ending the statement prematurely. SchemaQuench runs the script as-is, and MySQL handles the rest.
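Once quenched, the procedure is callable like any other ('ALFKI' here is a customer ID from the classic Northwind sample data):

```sql
CALL `CustOrderHist`('ALFKI');
```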
Views support CREATE OR REPLACE directly. Drop this file into the Views/ folder:
CREATE OR REPLACE VIEW `Products Above Average Price` AS
SELECT `ProductName`, `UnitPrice`
FROM `Products`
WHERE `UnitPrice` > (SELECT AVG(`UnitPrice`) FROM `Products`);
Need to update an existing procedure or view? Edit the .sql file and quench. The drop-then-create pattern (or CREATE OR REPLACE for views) takes care of whether it's new or changed. There's no separate "alter" workflow — you always declare the full object definition, and SchemaQuench applies it.
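For example, after quenching the view above, it queries like any table:

```sql
SELECT `ProductName`, `UnitPrice`
FROM `Products Above Average Price`
ORDER BY `UnitPrice` DESC;
```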
Someone changed the database directly. Maybe a DBA added a column in production during an incident. Maybe a developer used MySQL Workbench to tweak an index on staging. The database has drifted from the package.
SchemaTongs
SchemaTongs connects to the database, reads every table, procedure, function, view, and trigger — plus MySQL extras like events and full-text indexes — and writes the current definitions to the package files. Changed objects update in place. New objects get new files.
Then git diff tells the story:
$ git diff
--- a/Templates/Northwind/Tables/products.json
+++ b/Templates/Northwind/Tables/products.json
@@ -48,6 +48,12 @@
},
+ {
+ "Name": "backorder_threshold",
+ "DataType": "INT",
+ "Nullable": true,
+ "Default": "10"
+ },
The diff reads like a sentence: "someone added a backorder_threshold column to the products table with a default of 10." Compare that to trying to figure out what changed by comparing two database snapshots or reading through audit logs. The drift is captured. The mystery is over.
Any Extensions metadata you had previously attached to tables is preserved through the round-trip. Data classification tags, ownership, retention policies — your sidecar data survives every re-extraction.
SchemaTongs does more than dump scripts to flat folders. When you cast your database schema, the extraction preserves your folder organization, carries custom metadata forward, detects orphans, can focus on a subset of objects, validates scripts, and reconciles constraint style.
Organize scripts by domain — Tables/Sales/, Tables/HR/, Procedures/Reporting/. SchemaTongs preserves existing subfolder locations on re-extraction. New objects that haven't been organized go to the root folder.
Custom metadata attached to tables via the Extensions carrier — data classification, ownership, retention policies — is carried forward on every re-extraction. Your sidecar data survives as long as the underlying object stays.
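A hypothetical sketch of what such metadata might look like — the keys inside Extensions are user-defined, and the exact placement of the Extensions carrier in the table JSON should be checked against your own extracted files (columns omitted for brevity):

```json
{
  "Name": "promotions",
  "Extensions": {
    "DataClassification": "Internal",
    "Owner": "marketing",
    "RetentionDays": 730
  }
}
```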
When a database object is dropped, its script file becomes an orphan. SchemaTongs offers three modes:
| Mode | Behavior |
|---|---|
| Detect | Logs orphans, no action. Default. |
| DetectWithCleanupScripts | Logs orphans, generates cleanup scripts. |
| DetectDeleteAndCleanup | Deletes files, generates cleanup scripts. |
Cast only what you care about. ShouldCast flags toggle whole object types on or off — Tables, Views, Functions, Procedures, TableTriggers, and the MySQL-specific Events flag for scheduled events.
For a tighter scope, ObjectList restricts extraction to a named set of objects — a comma-separated list like Customer, GetOrders. Focus a cast on your real surface without pulling every legacy table.
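Assuming SchemaTongs honors the same SmithySettings_ environment-variable convention shown earlier for SchemaQuench — an assumption worth checking against your settings reference — a scoped cast might look like:

```shell
SmithySettings_ObjectList="Customer, GetOrders" SchemaTongs
```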
With ValidateScripts enabled, SchemaTongs checks each extracted script against the database to verify it parses correctly. Invalid scripts are saved with a .sqlerror extension instead of .sql, making them visible but excluded from deployment until you fix them.
The constraint-style setting controls whether check constraints are extracted as column-level properties inside the table JSON or as table-level named constraints. The default is ColumnLevel. Set it once in Product.json — the style is locked to whatever the file specifies.
Some products need to create their target database from scratch. CI pipelines spin up fresh containers. Docker Compose environments start from nothing. New developers clone the repo and need a working database in one command. The Initialize template pattern handles all of these. Three pieces work together. Here's how the Northwind demo product sets it up on MySQL.
Templates/Initialize/Template.json:
{
  "Name": "Initialize",
  "DatabaseIdentificationScript": "SELECT SCHEMA_NAME FROM information_schema.schemata WHERE SCHEMA_NAME = 'TestMain' AND NOT EXISTS (SELECT * FROM information_schema.schemata WHERE SCHEMA_NAME = '{{NorthwindDb}}')"
}
The DatabaseIdentificationScript is the key. In MySQL, databases live in information_schema.schemata. The script returns a result only when the target database doesn't yet exist — it matches TestMain (the bootstrap database this demo connects to) but only when NorthwindDb is missing. On the first run, this template activates and creates the database. On every subsequent run, the script returns no rows, SchemaQuench skips the template entirely, and deployment proceeds straight to the main template.
Templates/Initialize/Before Scripts/Create Northwind Database [ALWAYS].sql:
CREATE DATABASE IF NOT EXISTS `{{NorthwindDb}}`
CHARACTER SET utf8mb4
COLLATE utf8mb4_0900_ai_ci;
The [ALWAYS] marker tells SchemaQuench to run this script every time the Initialize template is active — no version tracking needed. CREATE DATABASE IF NOT EXISTS makes the script safe to re-run, though in practice it only executes once because the template self-selects out after the database exists. The explicit charset and collation pin the new database to modern Unicode defaults.
Product.json defines the template order.
{
  "Name": "Northwind",
  "Platform": "MySQL",
  "ValidationScript": "SELECT EXISTS(SELECT * FROM information_schema.schemata WHERE SCHEMA_NAME IN ('{{NorthwindDb}}', 'TestMain'))",
  "TemplateOrder": [
    "Initialize",
    "Northwind"
  ],
  "ScriptTokens": {
    "NorthwindDb": "northwind"
  }
}
TemplateOrder ensures Initialize runs first. If the database doesn't exist, Initialize creates it, then the Northwind template deploys the full schema. If the database already exists, Initialize is skipped and Northwind deploys any pending changes. One docker compose up bootstraps everything from an empty server. Subsequent runs skip Initialize automatically and apply only schema changes. Fresh environment or existing environment, same command, same result.
Last reviewed May 2026 by the SchemaSmith Team.
That's how you shape your schema — adding tables, modifying columns, writing procedures, casting changes, and bootstrapping new environments. Clean, repeatable, no surprises. When you're ready to apply these declarations to a target database, SchemaQuench is the tool that computes and executes the DDL.