Take things to the next level with our Enterprise product, which opens up configuration and customization to meet a much wider array of needs. We also add a robust UI to help you edit and maintain your metadata repository. Customize the folder layout for your scripts, including new product-level scripts that run in the context of the master database either before or after the update, without the need to create extra templates. Script tokens grow from simple values to supporting files, queries, and full access to the table metadata from your scripts, and they can now be used anywhere. Table metadata can now include custom properties at any level, and those properties can be surfaced in the UI for a rich and integrated experience.
Enterprise is built with high availability in mind. You can now define secondary servers, and each product-level script folder and template can be designated to apply to the primary, the secondary, or both. This lets you manage things like updates to objects in master or SQL Agent jobs that need to be applied to all servers in an Availability Group, while the AG-managed changes apply only to the primary.
Take control of your repository layout with configurable script folders. You can define as many folders as you need to support your product and development process. Folders can be added at the template and product levels, and you define the update slot they run within. The order of the folders within the product or template file determines the order in which they are applied during the update, and within each folder the files are sorted by name.
You can also use the "none" update slot to define a folder that will not be applied during the update, perhaps for holding utility scripts or whatever other use cases are specific to your environments. These .sql files still show in the UI so you can easily manage and edit them. Another handy use case is holding the files for your QueryFile script tokens: you can create one or more folders for those, and while they will not be applied by the quench process directly, they will be used to resolve those tokens and you can see and manage them in the UI easily.
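To make this concrete, here is a sketch of how a folder configuration might look in a product file. Every property name and slot value below is a hypothetical placeholder rather than the actual file format; it simply illustrates the moving parts described above: ordered folders, an update slot for each, a "none" slot for token and utility folders, and the primary/secondary designation used with Availability Groups.

```json
{
  "folders": [
    { "path": "BeforeUpdate", "updateSlot": "before", "appliesTo": "both" },
    { "path": "AgentJobs",    "updateSlot": "after",  "appliesTo": "both" },
    { "path": "AGOnly",       "updateSlot": "after",  "appliesTo": "primary" },
    { "path": "TokenQueries", "updateSlot": "none" }
  ]
}
```

In this sketch the folders would be applied in the order listed, files within each folder would run in name order, and the TokenQueries folder would hold .sql files that exist only to resolve QueryFile tokens.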
Script tokens are now supported in all scripts, including table components and scripted objects (.sql), on top of the product and template JSON file scripts. We’ve added support for File tokens, BinaryFile tokens, Query tokens, and QueryFile tokens. There are new system-provided tokens such as TableSchema, which gives you the complete JSON schema for all the tables in the current template. You can even access the schema for other templates via tokens named for each template (TableSchema_<template name>). This gives you access not only to the core properties but to all of your custom properties as well. Implement custom data dictionaries, replication, or any number of custom processes that you might need for your products or environment. We also have Specific Table tokens you can set up to get the JSON for a single table, which can be handy for things like bootstrapping a table in a before-migration script that some specific processing needs, without scripting it directly or pasting the JSON into a script where it might become stale over time.
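As an illustration, here is a minimal sketch of consuming the TableSchema token from a script. It assumes the token expands inline to a JSON literal ($(TableSchema) is placeholder syntax, not the real token format) and assumes an illustrative metadata shape of { "TableName": { "columns": [ { "name": ..., "type": ... } ] } }; the querying itself is standard T-SQL.

```sql
-- Sketch only: $(TableSchema) stands in for the real token syntax, and the
-- JSON shape referenced below is an assumption for illustration.
DECLARE @schema nvarchar(max) = N'$(TableSchema)';

-- Build a simple data dictionary: one row per table/column pair.
SELECT
    t.[key]      AS TableName,
    c.ColumnName,
    c.DataType
FROM OPENJSON(@schema) AS t
CROSS APPLY OPENJSON(t.[value], '$.columns')
     WITH (
         ColumnName nvarchar(128) '$.name',
         DataType   nvarchar(128) '$.type'
     ) AS c;
```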
Expand your metadata for documentation. Drive product-specific needs such as reporting engine configurations or custom data dictionaries. Set up things like replication, which we don’t support directly due to the variable complexity Microsoft allows. Track information that identifies PII or other data that might need to be obfuscated or given other special handling, whether inside or outside your application. This lets you define such things right in the table JSON and use them within your scripts as needed via the TableSchema script tokens. Custom properties can be easily queried from the JSON and then used in a variety of ways within your scripts. We can provide examples and details for querying the data, and some examples exist within the Enterprise demo repository.
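For instance, a masking script might pull every column flagged as PII out of the metadata. The sketch below assumes a hypothetical custom boolean property named "pii" on each column and the same illustrative metadata shape as the earlier example; only the T-SQL JSON functions are real, documented behavior.

```sql
-- Sketch only: the "pii" property name and the metadata shape are
-- illustrative assumptions, not the actual format.
DECLARE @schema nvarchar(max) = N'$(TableSchema)';  -- placeholder token syntax

-- List every column flagged as PII so a masking script can act on it.
SELECT
    t.[key]                         AS TableName,
    JSON_VALUE(c.[value], '$.name') AS ColumnName
FROM OPENJSON(@schema) AS t
CROSS APPLY OPENJSON(t.[value], '$.columns') AS c
WHERE JSON_VALUE(c.[value], '$.custom.pii') = 'true';
```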
Custom properties can be added to the table schema file in the repository, which is used to validate your JSON. We support using the title and description properties, as well as the format and other constraint properties in the schema file, to drive specific UI behaviors: populate drop-downs; restrict data by format, length, and so on; generate checklists for multiple-value selection; and even use a custom key/value pair editor to populate a "dictionary" of values.
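Here is a minimal sketch of such property definitions. The keywords (type, title, description, enum, minimum) are standard JSON Schema; the customProperties wrapper and the property names are illustrative assumptions about where these definitions might live in your table schema file.

```json
{
  "customProperties": {
    "dataClassification": {
      "type": "string",
      "title": "Data Classification",
      "description": "Could be shown as a drop-down in the UI.",
      "enum": ["Public", "Internal", "Confidential", "PII"]
    },
    "retentionDays": {
      "type": "integer",
      "title": "Retention (days)",
      "minimum": 0
    }
  }
}
```

An enum like dataClassification maps naturally to a drop-down, while constraints such as minimum, maxLength, or format restrict what the editor accepts.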
We added support for custom procedures that can be used to implement a recycle bin pattern for table drops, allowing tables to be restored if your process allows it. When a table is removed from the product, we call your procedure, if it exists, rather than dropping the table directly; and we call your custom procedure, if it exists, before creating a missing table. This lets you restore a previously dropped table, including the data, if you need to. A common use case is to move dropped tables into a specific schema for 30 days before an automated cleanup actually drops the data. Meanwhile, if the table comes back within that period, your restore process can simply put it back in the original schema with all the data still intact.
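A minimal sketch of the drop side of that pattern might look like the following. The procedure name and parameter contract are hypothetical placeholders, not the actual signature the product expects; the body is plain T-SQL that moves the table, data intact, instead of dropping it.

```sql
-- Sketch only: hypothetical name and parameters; assumes a RecycleBin
-- schema already exists in the database.
CREATE OR ALTER PROCEDURE dbo.usp_RecycleDroppedTable
    @SchemaName sysname,
    @TableName  sysname
AS
BEGIN
    SET NOCOUNT ON;

    -- Move the table instead of dropping it; a scheduled job can
    -- permanently drop anything that has sat in RecycleBin for 30 days.
    DECLARE @sql nvarchar(max) =
        N'ALTER SCHEMA RecycleBin TRANSFER '
        + QUOTENAME(@SchemaName) + N'.' + QUOTENAME(@TableName) + N';';

    EXEC sys.sp_executesql @sql;
END;
```

In a real implementation you would likely also rename the table (for example, with a timestamp suffix) so that same-named tables from different schemas cannot collide in the recycle bin.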
Dive even deeper into customization with optional table and table component deployment. Do you want to add a new index that is only supported on the latest version of SQL Server, which is not yet deployed to every environment? You can add it in the metadata and use the ShouldApply property to supply the conditional logic. SchemaQuench will respect that filter and only apply the new index when the condition is met. You might also use that pattern to apply full-text indexes only when Full-Text Search is installed on the server. Consider a new data type being added to SQL Server, as when XML was introduced: you could define the same column twice and use ShouldApply to deploy it as VARCHAR(MAX) on older versions and switch to the new XML type as soon as it becomes available. You could also define environment-, customer-, or database-specific items in the metadata.
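To make the pattern concrete, here is an illustrative sketch. The JSON shape around the entries, and whether ShouldApply takes a bare T-SQL predicate in this form, are assumptions; the predicates themselves use real SQL Server functions (SERVERPROPERTY and FULLTEXTSERVICEPROPERTY), and XML arrived in SQL Server 2005, major version 9.

```json
{
  "columns": [
    {
      "name": "Payload",
      "type": "VARCHAR(MAX)",
      "ShouldApply": "CAST(SERVERPROPERTY('ProductMajorVersion') AS int) < 9"
    },
    {
      "name": "Payload",
      "type": "XML",
      "ShouldApply": "CAST(SERVERPROPERTY('ProductMajorVersion') AS int) >= 9"
    }
  ],
  "indexes": [
    {
      "name": "FTX_Documents",
      "ShouldApply": "FULLTEXTSERVICEPROPERTY('IsFullTextInstalled') = 1"
    }
  ]
}
```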
Another significant use case we ran into was rolling out complex changes, such as converting a table from rowstore to columnstore. This can take prohibitively long to roll out in one shot across all databases and environments. You could use ShouldApply to tell the update to look at the old version of the table or tables involved until the conversion has been completed, and then look at the new version. Once all databases have been migrated to the new structure, you can clean out the older items. Staggered migrations can be complex to manage, and this is one way we can help you simplify the process.
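A minimal sketch of such a check: the new columnstore definition applies only where the conversion has already happened in that database. The table name is illustrative, and whether ShouldApply expects a query in exactly this form is an assumption; the catalog query itself is standard T-SQL.

```sql
-- Sketch only: returns 1 once dbo.SalesHistory (illustrative name) has a
-- clustered columnstore index in the current database, 0 otherwise.
SELECT CASE WHEN EXISTS (
           SELECT 1
           FROM sys.indexes
           WHERE object_id = OBJECT_ID(N'dbo.SalesHistory')
             AND type = 5   -- 5 = clustered columnstore index
       )
       THEN 1 ELSE 0 END AS ShouldApply;
```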