Synchronizing Data with gsqlcmd
gsqlcmd allows synchronizing data across databases on different database server platforms.
Below we discuss several techniques.
You can find the source code in the Examples\Sync Databases folder of the downloaded package.
Also, you can try any batch file with a demo sample hosted in Microsoft Azure SQL Database.
Contents:
- Basics
- Two-way synchronization using GUID identifiers
- Using SQL Server rowversion/timestamp columns to synchronize new and changed rows
- Using last update columns to synchronize new and changed rows
- Complete synchronization with deletes
- Complete synchronization across multiple editable databases with integer key columns
- Improving and unifying solutions
- Final Notes
Basics
Suppose we have two databases, db1 and db2, with two tables of customers, table11 on db1 and table12 on db2.
CREATE TABLE [s61].[table11] (
      [id] int IDENTITY(1,1) NOT NULL
    , [name] nvarchar(50) NOT NULL
    , CONSTRAINT [PK_table11] PRIMARY KEY ([id])
);

CREATE TABLE [s61].[table12] (
      [id] int IDENTITY(1,1) NOT NULL
    , [name] nvarchar(50) NOT NULL
    , CONSTRAINT [PK_table12] PRIMARY KEY ([id])
);
Table table11 contains the following records:
id | name |
---|---|
1 | Customer C1 |
2 | Customer C2 |
3 | Customer C3 |
4 | Customer C4 |
5 | Customer C5 |
To merge data from table11 to table12, we can use the following batch file:
gsqlcmd exec db1 s61.table11 table11.csv
gsqlcmd make db2 table11.csv table12.sql /table=s61.table12 /merge /insertIdentity
gsqlcmd exec db2 table12.sql
The first exec command exports data from table11 to a CSV file:
id;"name"
1;"Customer C1"
2;"Customer C2"
3;"Customer C3"
4;"Customer C4"
5;"Customer C5"
The second make command produces merge commands for table12:
UPDATE [s61].[table12] SET [name] = N'Customer C1' WHERE [id] = 1;
UPDATE [s61].[table12] SET [name] = N'Customer C2' WHERE [id] = 2;
UPDATE [s61].[table12] SET [name] = N'Customer C3' WHERE [id] = 3;
UPDATE [s61].[table12] SET [name] = N'Customer C4' WHERE [id] = 4;
UPDATE [s61].[table12] SET [name] = N'Customer C5' WHERE [id] = 5;
GO

SET IDENTITY_INSERT [s61].[table12] ON;

INSERT INTO [s61].[table12] ([id], [name])
SELECT s.[id], s.[name]
FROM (SELECT 1 AS [id], N'Customer C1' AS [name]) s
    LEFT OUTER JOIN [s61].[table12] t ON t.[id] = s.[id]
WHERE t.[id] IS NULL;

INSERT INTO [s61].[table12] ([id], [name])
SELECT s.[id], s.[name]
FROM (SELECT 2 AS [id], N'Customer C2' AS [name]) s
    LEFT OUTER JOIN [s61].[table12] t ON t.[id] = s.[id]
WHERE t.[id] IS NULL;

INSERT INTO [s61].[table12] ([id], [name])
SELECT s.[id], s.[name]
FROM (SELECT 3 AS [id], N'Customer C3' AS [name]) s
    LEFT OUTER JOIN [s61].[table12] t ON t.[id] = s.[id]
WHERE t.[id] IS NULL;

INSERT INTO [s61].[table12] ([id], [name])
SELECT s.[id], s.[name]
FROM (SELECT 4 AS [id], N'Customer C4' AS [name]) s
    LEFT OUTER JOIN [s61].[table12] t ON t.[id] = s.[id]
WHERE t.[id] IS NULL;

INSERT INTO [s61].[table12] ([id], [name])
SELECT s.[id], s.[name]
FROM (SELECT 5 AS [id], N'Customer C5' AS [name]) s
    LEFT OUTER JOIN [s61].[table12] t ON t.[id] = s.[id]
WHERE t.[id] IS NULL;

SET IDENTITY_INSERT [s61].[table12] OFF;
GO

print 'Processed 5 total records';
GO
These SQL commands update names on the target table and insert new rows.
The make command uses the /merge option to generate such commands and the /insertIdentity option to keep identity values.
The third exec command executes the commands against db2.
Solving Issues with SET IDENTITY_INSERT
To execute a script with SET IDENTITY_INSERT on Microsoft SQL Server, the database user of the connection must have ALTER permission on the target table.
Otherwise, the server returns an error: Cannot find the object "s61.table12" because it does not exist or you do not have permissions.
See details at https://learn.microsoft.com/en-us/sql/t-sql/statements/set-identity-insert-transact-sql?view=sql-server-2017.
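To solve this issue, you can grant the permission to a separate user used only for synchronization; for example (sync_user is a hypothetical user name):
GRANT ALTER ON [s61].[table12] TO [sync_user];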
Alternatively, you can create the target table with the primary key column without the IDENTITY option:
CREATE TABLE [s61].[table13] (
      [id] int NOT NULL
    , [name] nvarchar(50) NOT NULL
    , CONSTRAINT [PK_table13] PRIMARY KEY ([id])
);
The technique remains the same, but the merge SQL commands do not contain SET IDENTITY_INSERT.
If you cannot change the identity column, you can use the solution below.
Using Different ID Values in Source and Target Tables
If you do not need to keep ID values the same in both tables, you can omit the identity columns in synchronization.
Here is a sample:
gsqlcmd exec db1 "SELECT name FROM s61.table11" table11.csv
gsqlcmd make db2 table11.csv table14.sql /table=s61.table14 /keys=name /merge
gsqlcmd exec db2 table14.sql
The first exec command exports data without the ID column:
"name"
"Customer C1"
"Customer C2"
"Customer C3"
"Customer C4"
"Customer C5"
Note that the command uses an inline SELECT: SELECT name FROM s61.table11
You can use this feature to export only the required columns and rename them to match the target table's column names.
You can also calculate values for the required columns of the target table.
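For example, a hypothetical export that renames a column and adds a calculated one could look like this (customer_name and name_length are illustrative names that would have to match columns in the target table):
gsqlcmd exec db1 "SELECT name AS customer_name, LEN(name) AS name_length FROM s61.table11" table11.csv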
The second command produces the following SQL commands to insert missing names into table14:
INSERT INTO [s61].[table14] ([name])
SELECT s.[name]
FROM (SELECT N'Customer C1' AS [name]) s
    LEFT OUTER JOIN [s61].[table14] t ON t.[name] = s.[name]
WHERE t.[name] IS NULL;

INSERT INTO [s61].[table14] ([name])
SELECT s.[name]
FROM (SELECT N'Customer C2' AS [name]) s
    LEFT OUTER JOIN [s61].[table14] t ON t.[name] = s.[name]
WHERE t.[name] IS NULL;

INSERT INTO [s61].[table14] ([name])
SELECT s.[name]
FROM (SELECT N'Customer C3' AS [name]) s
    LEFT OUTER JOIN [s61].[table14] t ON t.[name] = s.[name]
WHERE t.[name] IS NULL;

INSERT INTO [s61].[table14] ([name])
SELECT s.[name]
FROM (SELECT N'Customer C4' AS [name]) s
    LEFT OUTER JOIN [s61].[table14] t ON t.[name] = s.[name]
WHERE t.[name] IS NULL;

INSERT INTO [s61].[table14] ([name])
SELECT s.[name]
FROM (SELECT N'Customer C5' AS [name]) s
    LEFT OUTER JOIN [s61].[table14] t ON t.[name] = s.[name]
WHERE t.[name] IS NULL;
GO

print 'Processed 5 total records';
GO
Note that the make command uses the /keys=name option to specify the columns used in the JOIN conditions.
Using Import Mode
The one-way synchronization discussed above always includes three parts:
- Exporting data from the source table
- Creating SQL commands to update the target table
- Executing the SQL commands
You can create such batch files, which exchange data through CSV files in the first and second steps, with any paid edition.
The gsqlcmd Enterprise edition allows using the import mode, which combines these three steps into a single command.
For example, the following command merges data from table11 to table13:
gsqlcmd import db2 db1::s61.table11 /table=s61.table13 /merge
The main command is executed against the db2 connection.
However, it loads the source data from the s61.table11 table of the db1 database into memory instead of into a CSV file, as the following command does:
gsqlcmd exec db1 s61.table11 table11.csv
The import command with an inline SELECT looks like this:
gsqlcmd import db2 "db1::SELECT name FROM s61.table11" /table=s61.table14 /keys=name /merge
You can easily switch the initial make-based solution to the import mode.
The next samples will use the import mode.
Two-way Synchronization using GUID Identifiers
In the previous samples, both tables have integer primary key columns.
One-way synchronization, in this case, is simple and reliable.
The easiest way to achieve two-way synchronization is to use GUID primary key columns.
Here is a table declaration for Microsoft SQL Server:
CREATE TABLE [s61].[table21] (
      [id] uniqueidentifier NOT NULL DEFAULT(newid())
    , [name] nvarchar(50) NOT NULL
    , CONSTRAINT [PK_table21] PRIMARY KEY ([id])
);

CREATE TABLE [s61].[table22] (
      [id] uniqueidentifier NOT NULL DEFAULT(newid())
    , [name] nvarchar(50) NOT NULL
    , CONSTRAINT [PK_table22] PRIMARY KEY ([id])
);
The id column has the uniqueidentifier data type with the newid() default value.
In this case, Microsoft SQL Server creates unique id values for new rows.
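For illustration, inserting a row without an id value lets the default generate a new GUID (a quick check, not part of the sample scripts):
-- The DEFAULT(newid()) constraint fills the id column automatically.
INSERT INTO [s61].[table21] ([name]) VALUES (N'Customer C1');
SELECT [id], [name] FROM [s61].[table21];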
Here are the commands to merge data from table21 to table22 and back:
gsqlcmd import db2 db1::s61.table21 /table=s61.table22 /merge
gsqlcmd import db1 db2::s61.table22 /table=s61.table21 /merge
This technique is simple and does not require additional tables or fields.
However, it is suitable only for small and mid-size tables, as all the data is loaded and merged on every run.
Using SQL Server Rowversion/Timestamp Columns to Synchronize New and Changed Rows
Microsoft SQL Server and Microsoft SQL Server Compact have the rowversion data type with the timestamp synonym.
It is generally used as a mechanism for version-stamping table rows.
The server increments a database-wide counter on every INSERT and UPDATE and writes the new value to the column.
See details here:
- https://learn.microsoft.com/en-us/sql/t-sql/data-types/rowversion-transact-sql?view=sql-server-2017
- https://learn.microsoft.com/en-us/previous-versions/sql/compact/sql-server-compact-4.0/ms172424(v=sql.110)
Other database platforms can implement row versioning with triggers, as sketched below.
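As an illustration of that idea (not part of the sample package), the T-SQL sketch below maintains a bigint version column with a sequence and an update trigger; on another platform, you would express the same pattern with its own trigger syntax. All object names here are hypothetical.
-- Hypothetical sketch: emulating a rowversion-like column with a sequence.
-- On SQL Server itself, the built-in rowversion type is the simpler choice.
CREATE SEQUENCE [s61].[row_version_seq] AS bigint START WITH 1;
GO

CREATE TABLE [s61].[table_rv] (
      [id] uniqueidentifier NOT NULL DEFAULT(newid())
    , [name] nvarchar(50) NOT NULL
    , [rv] bigint NOT NULL DEFAULT(NEXT VALUE FOR [s61].[row_version_seq])
    , CONSTRAINT [PK_table_rv] PRIMARY KEY ([id])
);
GO

-- Advance the version value on every update, as rowversion does.
CREATE TRIGGER [s61].[trigger_table_rv_after_update] ON [s61].[table_rv]
AFTER UPDATE AS
BEGIN
SET NOCOUNT ON
UPDATE t
SET [rv] = NEXT VALUE FOR [s61].[row_version_seq]
FROM [s61].[table_rv] t
    INNER JOIN inserted ON inserted.[id] = t.[id]
END;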
So, the idea is simple:
- Import all rows once and save the maximum rowversion value.
- Synchronize data after the last saved rowversion value and save the maximum rowversion value again.
Here is a table declaration with the rowversion (timestamp) field:
CREATE TABLE [s61].[table31] (
      [id] uniqueidentifier NOT NULL DEFAULT(newid())
    , [name] nvarchar(50) NOT NULL
    , [ts] timestamp NOT NULL
    , CONSTRAINT [PK_table31] PRIMARY KEY ([id])
);

CREATE TABLE [s61].[table32] (
      [id] uniqueidentifier NOT NULL DEFAULT(newid())
    , [name] nvarchar(50) NOT NULL
    , [ts] timestamp NOT NULL
    , CONSTRAINT [PK_table32] PRIMARY KEY ([id])
);
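For illustration, SQL Server assigns the ts value on insert and advances it on every update (a quick check, not part of the sample scripts):
INSERT INTO [s61].[table31] ([name]) VALUES (N'Customer C1');
SELECT [id], [name], [ts] FROM [s61].[table31];

-- After the update, the same row has a greater ts value.
UPDATE [s61].[table31] SET [name] = N'Customer C1 (renamed)' WHERE [name] = N'Customer C1';
SELECT [id], [name], [ts] FROM [s61].[table31];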
Here is a sample table that contains saved timestamps:
CREATE TABLE [s61].[sync_timestamps] (
      [source] nvarchar(128) NOT NULL
    , [target] nvarchar(128) NOT NULL
    , [last_ts] binary(8) NOT NULL
    , [prev_ts] binary(8) NULL
    , CONSTRAINT [PK_sync_timestamps] PRIMARY KEY ([source], [target])
);
Due to the source and target columns, you can use a single table for all database tables and synchronization pairs.
Here is a batch file that merges only new and updated rows from table31 to table32:
gsqlcmd exec db1 update-sync-table31-to-table32.sql
if errorlevel 1 goto :EOF

gsqlcmd import db2 db1::select-table31-to-table32.sql /table=s61.table32 /merge
if errorlevel 1 goto :EOF

gsqlcmd exec db1 commit-sync-table31-to-table32.sql
update-sync-table31-to-table32.sql has the following code:
MERGE s61.sync_timestamps AS t
USING (
    SELECT 'table31' AS [source], 'table32' AS [target], COALESCE(MAX(ts), 0x) AS ts
    FROM s61.table31
    ) AS s ([source], [target], ts)
ON (t.[source] = s.[source] AND t.[target] = s.[target])
WHEN MATCHED AND last_ts = prev_ts THEN
    UPDATE SET last_ts = s.ts
WHEN NOT MATCHED THEN
    INSERT ([source], [target], last_ts)
    VALUES (s.[source], s.[target], s.ts);
The merge command inserts a row for the specified synchronization pair or updates the last_ts field.
The sync_timestamps table has values like these:
source | target | last_ts | prev_ts |
---|---|---|---|
table31 | table32 | 0x00000000000084E9 | 0x00000000000084E8 |
The code updates the last_ts field only if the last_ts and prev_ts values are equal, that is, only if the previous synchronization was committed.
If the previous run failed, last_ts keeps its old value, so the uncommitted range is synchronized again on the next run.
To commit the changes, the batch uses the last command:
gsqlcmd exec db1 commit-sync-table31-to-table32.sql
commit-sync-table31-to-table32.sql has the following simple code:
UPDATE s61.sync_timestamps SET prev_ts = last_ts WHERE [source] = 'table31' AND [target] = 'table32'
Here is the code of the select-table31-to-table32.sql used to select new and changed rows only:
SELECT id, name
FROM s61.table31
WHERE ts > COALESCE((
    SELECT prev_ts FROM s61.sync_timestamps
    WHERE [source] = 'table31' AND [target] = 'table32'
    ), 0x)
Note that it selects rows with timestamp values greater than the prev_ts field committed in the previous synchronization.
This topic showed how to synchronize only new and changed rows using row version columns.
The main benefit of this approach is a minimal and safe change to the underlying tables: in SQL Server, you only need to add a single rowversion column to each table.
Also, this is the only way to synchronize data across Microsoft SQL Server Compact databases, as they do not support the triggers discussed below.
Using Last Update Columns to Synchronize New and Changed Rows
Another way to select new and updated rows is to use columns containing the creation or last update time.
The challenge is to create such fields and triggers for all tables to be synchronized.
Here is a sample of the table and its triggers for Microsoft SQL Server:
CREATE TABLE [s61].[table41] (
      [id] uniqueidentifier NOT NULL DEFAULT(newid())
    , [name] nvarchar(50) NOT NULL
    , [last_update] datetime NOT NULL DEFAULT(getutcdate())
    , CONSTRAINT [PK_table41] PRIMARY KEY ([id])
);

CREATE TRIGGER [s61].[trigger_table41_after_insert] ON [s61].[table41]
AFTER INSERT AS
BEGIN
SET NOCOUNT ON
UPDATE s61.table41
SET last_update = GETUTCDATE()
FROM s61.table41 t
    INNER JOIN inserted ON inserted.id = t.id
END;

CREATE TRIGGER [s61].[trigger_table41_after_update] ON [s61].[table41]
AFTER UPDATE AS
BEGIN
SET NOCOUNT ON
UPDATE s61.table41
SET last_update = GETUTCDATE()
FROM s61.table41 t
    INNER JOIN deleted ON deleted.id = t.id
END;
To keep the last time of synchronized data for each synchronization pair, we can use a table like this:
CREATE TABLE [s61].[sync_last_updates] (
      [source] nvarchar(128) NOT NULL
    , [target] nvarchar(128) NOT NULL
    , [last_update] datetime NOT NULL
    , [prev_update] datetime NULL
    , CONSTRAINT [PK_sync_last_updates] PRIMARY KEY ([source], [target])
);
Here is a batch file used to synchronize data from table41 in db1 to table42 in db2:
gsqlcmd exec db1 update-sync-table41-to-table42.sql
if errorlevel 1 goto :EOF

gsqlcmd import db2 db1::select-table41-to-table42.sql /table=s61.table42 /merge
if errorlevel 1 goto :EOF

gsqlcmd exec db1 commit-sync-table41-to-table42.sql
The first command saves the last_update value to be committed in the last batch command if there are no errors.
The update-sync-table41-to-table42.sql has the following code:
MERGE s61.sync_last_updates AS t
USING (
    SELECT 'table41' AS [source], 'table42' AS [target], COALESCE(MAX(last_update), 0) AS last_update
    FROM s61.table41
    ) AS s ([source], [target], last_update)
ON (t.[source] = s.[source] AND t.[target] = s.[target])
WHEN MATCHED AND t.last_update = t.prev_update THEN
    UPDATE SET last_update = s.last_update
WHEN NOT MATCHED THEN
    INSERT ([source], [target], last_update)
    VALUES (s.[source], s.[target], s.last_update);
The commit-sync-table41-to-table42.sql, used in the last batch command, has the following simple code:
UPDATE s61.sync_last_updates SET prev_update = last_update WHERE [source] = 'table41' AND [target] = 'table42'
And, here is the code of the select-table41-to-table42.sql used to select rows added or updated after the last synchronization time saved in the prev_update field:
SELECT id, name
FROM s61.table41
WHERE last_update > COALESCE((
    SELECT prev_update FROM s61.sync_last_updates
    WHERE [source] = 'table41' AND [target] = 'table42'
    ), 0)
Complete Synchronization with Deletes
The topics above discuss adding and updating data from the source to target tables only.
Such synchronization can be enough if your application hides rows instead of deleting them.
To replicate deleting rows, you need to have data about the deletion.
A simple approach is to create a single table containing the deleted identifiers and add triggers that save such identifiers in the delete operations.
Here is a sample table that contains the deleted identifiers:
CREATE TABLE [s61].[deleted_guids] (
      [id] uniqueidentifier NOT NULL
    , [source] nvarchar(128) NOT NULL
    , [last_update] datetime NOT NULL
    , CONSTRAINT [PK_deleted_guids] PRIMARY KEY ([id])
);
As the GUID identifiers are globally unique, the table uses them as a primary key.
The source column contains the name of the table the row was deleted from.
The last_update column contains the deletion time used to synchronize the deletion once.
Here is a trigger sample:
CREATE TRIGGER [s61].[trigger_table51_after_delete] ON [s61].[table51]
AFTER DELETE AS
BEGIN
SET NOCOUNT ON
INSERT INTO deleted_guids (id, [source], last_update)
SELECT
    deleted.id
    , 'table51' AS [source]
    , GETUTCDATE() AS last_update
FROM deleted
END;
Here is a modified batch file that replicates deleting rows in the target table:
gsqlcmd exec db1 update-sync-table51-to-table52.sql
if errorlevel 1 goto :EOF

gsqlcmd import db2 db1::select-table51-to-table52.sql /table=s61.table52 /merge
if errorlevel 1 goto :EOF

gsqlcmd import db2 db1::select-table51-to-table52-deleted.sql /table=s61.table52 /delete
if errorlevel 1 goto :EOF

gsqlcmd exec db1 commit-sync-table51-to-table52.sql
The batch is similar to the one discussed above. However, it includes additional actions for the deletions.
The update-sync-table51-to-table52.sql must save the last_update time to be committed for both tables, including deleted_guids:
MERGE s61.sync_last_updates AS t
USING (
    SELECT 'table51' AS [source], 'table52' AS [target], COALESCE(MAX(last_update), 0) AS last_update
    FROM s61.table51
    ) AS s ([source], [target], last_update)
ON (t.[source] = s.[source] AND t.[target] = s.[target])
WHEN MATCHED AND t.last_update = t.prev_update THEN
    UPDATE SET last_update = s.last_update
WHEN NOT MATCHED THEN
    INSERT ([source], [target], last_update)
    VALUES (s.[source], s.[target], s.last_update);

MERGE s61.sync_last_updates AS t
USING (
    SELECT 'deleted_guids' AS [source], 'table52' AS [target], COALESCE(MAX(last_update), 0) AS last_update
    FROM s61.deleted_guids
    ) AS s ([source], [target], last_update)
ON (t.[source] = s.[source] AND t.[target] = s.[target])
WHEN MATCHED AND t.last_update = t.prev_update THEN
    UPDATE SET last_update = s.last_update
WHEN NOT MATCHED THEN
    INSERT ([source], [target], last_update)
    VALUES (s.[source], s.[target], s.last_update);
The final command must also commit the last update time for both tables.
The commit-sync-table51-to-table52.sql file has the following code:
UPDATE s61.sync_last_updates SET prev_update = last_update WHERE [source] = 'table51' AND [target] = 'table52';

UPDATE s61.sync_last_updates SET prev_update = last_update WHERE [source] = 'deleted_guids' AND [target] = 'table52';
And, here is the code of the select-table51-to-table52-deleted.sql used to select identifiers deleted in table51 after the last synchronization:
SELECT id
FROM s61.deleted_guids
WHERE last_update > COALESCE((
    SELECT prev_update FROM s61.sync_last_updates
    WHERE [source] = 'deleted_guids' AND [target] = 'table52'
    ), 0)
    AND [source] = 'table51'
The batch uses this file in the following command:
gsqlcmd import db2 db1::select-table51-to-table52-deleted.sql /table=s61.table52 /delete
This command uses the /delete option instead of /merge.
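For intuition, the generated script removes the matching rows from the target; it is conceptually equivalent to statements like the following (illustrative only, with a made-up identifier; the actual script produced by gsqlcmd may look different):
DELETE FROM [s61].[table52] WHERE [id] = '6F9619FF-8B86-D011-B42D-00C04FC964FF';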
Complete Synchronization Across Multiple Editable Databases with Integer Key Columns
The most complicated case is synchronizing data across multiple editable databases that contain tables with integer key columns.
For example, you have a disconnected desktop application for your salespersons that should be synced with a central database.
In this case, you cannot guarantee unique identifiers across databases.
A possible solution is to add columns that store the source table name and the source row identifier.
For example:
CREATE TABLE [s61].[table71] (
      [id] int IDENTITY(1,1) NOT NULL
    , [name] nvarchar(50) NOT NULL
    , [source_table] nvarchar(128) NULL
    , [source_row_id] int NULL
    , [last_update] datetime NULL
    , CONSTRAINT [PK_table71] PRIMARY KEY ([id])
);
The source_table column contains a table name where the row was initially added. The source_row_id column contains the row id.
The source_table column can include any additional data like a database name or a salesperson application computer name.
The insert trigger must update these fields automatically. Here is a sample:
CREATE TRIGGER [s61].[trigger_table71_after_insert] ON [s61].[table71]
AFTER INSERT AS
BEGIN
SET NOCOUNT ON

IF USER_NAME() IN ('sample61_user2') RETURN

UPDATE s61.table71
SET source_table = 'table71'
    , source_row_id = t.id
    , last_update = GETUTCDATE()
FROM s61.table71 t
    INNER JOIN inserted ON inserted.id = t.id
END;
The trigger updates the source_row_id column with the initial id generated by a database.
To prevent changes of the source_table, source_row_id, and last_update values during synchronization, the trigger uses the following code:
IF USER_NAME() IN ('sample61_user2') RETURN
The sample uses the sample61_user2 username for the synchronization operations, so imported data is kept as is.
For other users, the trigger overwrites these fields regardless of the values supplied by the user.
The update trigger must also be modified to keep the initial sync values of the row regardless of possible changes by a user:
CREATE TRIGGER [s61].[trigger_table71_after_update] ON s61.table71
AFTER UPDATE AS
BEGIN
SET NOCOUNT ON

IF @@NESTLEVEL > 1 RETURN;

IF USER_NAME() IN ('sample61_user2') RETURN

UPDATE s61.table71
SET source_table = deleted.source_table
    , source_row_id = deleted.source_row_id
    , last_update = GETUTCDATE()
FROM s61.table71 t
    INNER JOIN deleted ON deleted.id = t.id
END;
This trigger also skips updates made by synchronization operations executed under the sample61_user2 username, which allows the imported rows to keep their original last_update values.
Deleting rows is also a challenge, as we need to delete records in other databases using the source_table and source_row_id values.
Here is a sample of the table that keeps identifiers of the deleted rows:
CREATE TABLE [s61].[deleted_ints] (
      [id] int NOT NULL
    , [source] nvarchar(128) NOT NULL
    , [source_table] nvarchar(128) NOT NULL
    , [source_row_id] int NOT NULL
    , [last_update] datetime NOT NULL
    , CONSTRAINT [PK_deleted_ints] PRIMARY KEY ([id], [source])
);
The table uses the id and source fields as a composite primary key, so it can store values for any table in the database.
Also, it contains the source_table and source_row_id columns that hold values used in the synchronization.
The last_update column contains the deletion time used to synchronize the deletion once.
The delete trigger has the following code:
CREATE TRIGGER [s61].[trigger_table71_after_delete] ON s61.table71
AFTER DELETE AS
BEGIN
SET NOCOUNT ON
INSERT INTO deleted_ints (id, [source], source_table, source_row_id, last_update)
SELECT
    deleted.id
    , 'table71' AS [source]
    , source_table
    , source_row_id
    , GETUTCDATE() AS last_update
FROM deleted
END;
The synchronization batch file has the following code:
gsqlcmd exec db1 update-sync-table71-to-table72.sql
if errorlevel 1 goto :EOF

gsqlcmd import db2 db1::select-table71-to-table72.sql /table=s61.table72 /merge /keys=source_table,source_row_id
if errorlevel 1 goto :EOF

gsqlcmd import db2 db1::select-table71-to-table72-deleted.sql /table=s61.table72 /delete /keys=source_table,source_row_id
if errorlevel 1 goto :EOF

gsqlcmd exec db1 commit-sync-table71-to-table72.sql
The commands are similar to the ones discussed in the previous topic.
However, the import commands use the /keys=source_table,source_row_id option so that the source_table and source_row_id columns are used in the JOIN and WHERE conditions.
You should create indexes on these columns to improve performance.
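For example, the supporting indexes might be created like this (the index names are illustrative; adjust them to your naming convention):
CREATE INDEX [IX_table71_source_table_source_row_id] ON [s61].[table71] ([source_table], [source_row_id]);

CREATE INDEX [IX_table72_source_table_source_row_id] ON [s61].[table72] ([source_table], [source_row_id]);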
The update-sync-table71-to-table72.sql and commit-sync-table71-to-table72.sql files are completely the same as in the previous topic except for the table names.
The select-table71-to-table72.sql is similar to the previous one except for the SELECT clause:
SELECT name, source_table, source_row_id, last_update
It does not contain the local id column but contains the global source_table and source_row_id.
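For reference, the full query, reconstructed by analogy with the previous example, would look like this (the file in the Examples folder may differ in details):
SELECT name, source_table, source_row_id, last_update
FROM s61.table71
WHERE last_update > COALESCE((
    SELECT prev_update FROM s61.sync_last_updates
    WHERE [source] = 'table71' AND [target] = 'table72'
    ), 0)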
The select-table71-to-table72-deleted.sql contains changed SELECT and FROM clauses:
SELECT source_table, source_row_id
FROM s61.deleted_ints
WHERE last_update > COALESCE((
    SELECT prev_update FROM s61.sync_last_updates
    WHERE [source] = 'deleted_ints' AND [target] = 'table72'
    ), 0)
    AND [source] = 'table71'
Improving and Unifying Solutions
The solutions above use batch files and SQL script files with hard-coded table names.
We did this for educational purposes, increasing the complexity step by step to show the differences between the approaches.
Of course, you can place the code into stored procedures and unify the synchronization batch file.
For example, the batch file can look like this:
@echo off

set schema=s61
set source=%1
set target=%2

gsqlcmd exec db1 "exec %schema%.usp_update_last_updates @source='%source%', @target='%target%'"
if errorlevel 1 goto :EOF

gsqlcmd import db2 "db1::exec %schema%.usp_select_upsert_rows @source='%source%', @target='%target%'" /table=%schema%.%source% /merge /keys=source_table,source_row_id
if errorlevel 1 goto :EOF

gsqlcmd import db2 "db1::exec %schema%.usp_select_delete_rows @source='%source%', @target='%target%'" /table=%schema%.%source% /delete /keys=source_table,source_row_id
if errorlevel 1 goto :EOF

gsqlcmd exec db1 "exec %schema%.usp_commit_last_updates @source='%source%', @target='%target%'"
All the procedures accept the source and target tables as parameters. So, you can call a single unified batch file with parameters to sync tables.
The procedures generate SQL code that selects the actual column names of the source tables.
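For orientation, the simplest of these procedures, usp_commit_last_updates, could look roughly like the sketch below, built from the commit script shown earlier (the procedures shipped in the Examples folder may differ; the select procedures are more involved because they build the column lists dynamically):
CREATE PROCEDURE [s61].[usp_commit_last_updates]
      @source nvarchar(128)
    , @target nvarchar(128)
AS
BEGIN
SET NOCOUNT ON

UPDATE s61.sync_last_updates SET prev_update = last_update WHERE [source] = @source AND [target] = @target;

UPDATE s61.sync_last_updates SET prev_update = last_update WHERE [source] = 'deleted_ints' AND [target] = @target;
END;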
You can find the source code in the Examples\Sync Databases folder of the gsqlcmd download package.
Final Notes
Any sync process must include at least three steps:
- Getting data from the source
- Producing SQL commands to change the target
- Executing the SQL commands
gsqlcmd allows getting data from databases, files, and web resources.
Also, it allows creating and executing commands for database servers, including SQL Server, Oracle, DB2, MySQL, PostgreSQL, NuoDB, SQLite, and SQLCE.
So, you can synchronize data across any supported databases.
You can find the source code in the Examples\Sync Databases folder of the downloaded package.
Also, you can try any batch file with a demo sample hosted online.