Working with relational databases
Integrator works equally well with all relational databases, as long as there is a JDBC driver for the database and the database can be accessed from the server where Integrator is running.
Connecting to a database
You can connect to the following relational databases from Integrator:
- MS SQL Server
- Derby (JavaDB)
Create a database connection to connect to a relational database.
Connecting to a database that is not on the list
If the database is not on the list of supported databases, you can still connect to it using these instructions.
Testing database connection
Use the Test Connection button on the connection screen to test the database connection.
Browsing database objects and executing SQL
Use Integrator's Explorer add-on to browse database objects, such as tables, views, and columns, and to execute SQL queries.
Browsing database objects
To browse database objects, select and expand a database connection in the left navigation bar.
Writing and executing SQL queries
To write and execute SQL queries, use the built-in SQL editor.
Database is a source
Integrator can extract data from database objects, such as tables, views, synonyms, user-defined SQL, functions, etc., transform it, and load it into any destination, for example another database, files, or API endpoints.
Step 1. If the database is a source in a transformation, start creating the data integration flow by opening the Flow Builder page, clicking the + button, selecting Databases in the left navigation bar, and selecting a flow type where the database is a source.
Step 2. Continue by creating one or multiple transformations where the source is a database.
Step 3. Modify the transformation by selecting a database object to read data from (table, view, etc.) or by defining a source query.
Database is a destination
Integrator can extract data from files, API endpoints, social web sites, databases, etc., transform it, and load it into any database. When loading data into a database, Integrator either automatically generates and executes a SQL DML statement (INSERT, UPDATE, DELETE, or MERGE) or executes user-created SQL.
Step 1. If the database is a destination in a transformation, start creating the data integration flow by opening the Flow Builder page, clicking the + button, selecting Databases in the left navigation bar, and selecting a flow type where the database is a destination.
Step 2. Continue by creating one or multiple transformations where the destination is a database.
Step 3. Modify the transformation by selecting a database object to load data into (table, view, etc.) or by defining a destination query.
Automatically create destination table
Integrator can automatically create a destination table if it does not exist. It uses the fields in the source to generate and execute a CREATE TABLE SQL statement.
This behavior is enabled by default.
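The idea behind this behavior can be sketched in a few lines. This is a hypothetical illustration using Python's built-in sqlite3 module, not Integrator's actual implementation; the table name and source fields are invented for the example:

```python
import sqlite3

# Hypothetical source metadata: field names and SQL types discovered
# in the source dataset.
source_fields = {"id": "INTEGER", "name": "TEXT", "amount": "REAL"}

# Generate a CREATE TABLE statement from the source fields and execute it.
columns = ", ".join(f"{name} {sql_type}" for name, sql_type in source_fields.items())
ddl = f"CREATE TABLE IF NOT EXISTS orders ({columns})"

conn = sqlite3.connect(":memory:")
conn.execute(ddl)

# The destination table now exists and can receive the extracted rows.
conn.execute("INSERT INTO orders VALUES (1, 'widget', 9.99)")
```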
Conditionally update database table using MERGE
By default, when the destination is a database, Integrator generates and executes an INSERT SQL statement. In many cases it makes more sense to check whether the record already exists and UPDATE it instead. Relational database management systems use the SQL MERGE statement (also called upsert) to INSERT new records or UPDATE existing records, depending on whether a condition matches. Integrator supports MERGE (upsert) for the following databases:
- MS SQL Server
Step 1. Create a flow where the destination is a database.
Step 2. Click the MAPPING button between FROM and TO.
Step 3. Go to the Parameters tab.
Step 4. Select MERGE as the Action.
Step 5. Enter a comma-separated list of fields in the Lookup Fields text box. These fields will be used to match records in the destination table.
Important: PostgreSQL and MySQL require a unique index that includes all the fields defined in the Lookup Fields text box.
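To make the upsert idea concrete, here is a minimal sketch using Python's sqlite3 module. SQLite spells the operation INSERT ... ON CONFLICT rather than MERGE (as SQL Server does), but the behavior is the same, and it also requires a unique index on the lookup field, mirroring the PostgreSQL/MySQL requirement above. Table and field names are hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (email TEXT, name TEXT)")
# The unique index on the lookup field is what makes the upsert possible.
conn.execute("CREATE UNIQUE INDEX ix_email ON customers (email)")

upsert = """
    INSERT INTO customers (email, name) VALUES (?, ?)
    ON CONFLICT (email) DO UPDATE SET name = excluded.name
"""
conn.execute(upsert, ("a@example.com", "Alice"))   # inserts a new record
conn.execute(upsert, ("a@example.com", "Alicia"))  # updates the existing one
```

After both statements, the table contains a single record with the updated name.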
Conditionally update database table without MERGE
The alternative to MERGE is the IfExist action. Instead of relying on the native MERGE supported by the target database engine, Integrator will execute a SQL statement to check whether the record already exists; if it does, it will execute an UPDATE, and if it does not, it will execute an INSERT. Use IfExist when:
- MERGE is not supported by the destination database, or
- you cannot create an index for the fields used to identify a unique record, and the destination database is either PostgreSQL or MySQL, or
- you just don't want to use the native MERGE for whatever reason.
Step 1. Create a flow with at least one transformation where the database is a destination.
Step 2. When configuring the transformation, click MAPPING and then select the Parameters tab.
Step 3. Select IfExist as the Action and enter a comma-separated list of field names used to identify a unique record.
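The check-then-write logic behind IfExist can be sketched as follows. This is an illustrative Python/sqlite3 example, not Integrator's code; the table, fields, and helper function are hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (email TEXT, name TEXT)")

def if_exist_load(conn, email, name):
    # Check whether a record with the lookup field value already exists.
    found = conn.execute(
        "SELECT 1 FROM customers WHERE email = ?", (email,)
    ).fetchone()
    if found:
        # Record exists: UPDATE it.
        conn.execute("UPDATE customers SET name = ? WHERE email = ?", (name, email))
    else:
        # Record does not exist: INSERT it.
        conn.execute("INSERT INTO customers (email, name) VALUES (?, ?)", (email, name))

if_exist_load(conn, "a@example.com", "Alice")   # no match -> INSERT
if_exist_load(conn, "a@example.com", "Alicia")  # match -> UPDATE
```

Note that no unique index or native MERGE support is needed, which is exactly why this action is useful in the situations listed above.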
Other common use cases
In Integrator, it is possible to create flows that track changes in a database and load them into another database or API. This is called change replication.
- Change replication between two databases: the flow pulls changed records from the source database and updates the destination database.
- Incremental database backup: the flow pulls changed records from the source database and packages them into "update" files.
Using temporary database
In Integrator, it is possible to create a connection to a temporary, in-memory database and use it as a staging area for data.
Work with the temporary database just like you would with any other relational database; just keep in mind that it will be automatically deleted once the connection that created it is closed.
- Dump the content of one or multiple files, or a response from an API, into the staging database. Then create a flow where the source is a custom SQL statement and the destination is another database, file, or API endpoint. This allows you to utilize the power of SQL to drive the output.
- Dump the content of a file, or a response from an API, into the staging database. Use a SQL loop to execute a flow for each record returned by a SQL statement. For example, you can use this technique to execute a CRUD API endpoint for each row in the source Excel file.
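The first use case above can be sketched with Python's sqlite3 module, whose :memory: database behaves exactly like the temporary database described here: it disappears when the connection closes. The API payload and table are invented for the example:

```python
import json
import sqlite3

# Hypothetical API response dumped into the staging database.
api_response = json.loads('[{"sku": "A1", "qty": 2}, {"sku": "B2", "qty": 5}]')

staging = sqlite3.connect(":memory:")
staging.execute("CREATE TABLE stage_orders (sku TEXT, qty INTEGER)")
staging.executemany("INSERT INTO stage_orders VALUES (:sku, :qty)", api_response)

# Use the power of SQL over the staged rows to shape the destination data.
total = staging.execute("SELECT sum(qty) FROM stage_orders").fetchone()[0]

staging.close()  # the in-memory database is deleted here
```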
Automatically create staging tables
Integrator can automatically create (and drop) staging tables for you from practically any data source: files, API endpoints, etc.
Step 1. Start creating the data integration flow by opening the Flow Builder page. Click the + button and type create staging tables in the Select Flow Type box.
Step 2. Continue by defining FROM and TO. Select a database connection for the staging table(s) in the TO connection field.
Note: the TO field will be used as the name of the staging table.
Step 3. Configure the flow to drop the existing staging table if it already exists.
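The drop-and-recreate behavior configured in Step 3 amounts to the following pattern, sketched here with Python's sqlite3 module; the table name, fields, and helper function are hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

def create_staging_table(conn, name, fields):
    # Drop the staging table if it already exists, then recreate it
    # from the source fields.
    conn.execute(f"DROP TABLE IF EXISTS {name}")
    columns = ", ".join(f"{c} {t}" for c, t in fields.items())
    conn.execute(f"CREATE TABLE {name} ({columns})")

create_staging_table(conn, "stage_invoices", {"id": "INTEGER", "total": "REAL"})
# Running it again is safe: the existing table is dropped first.
create_staging_table(conn, "stage_invoices", {"id": "INTEGER", "total": "REAL"})
```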
Execute any SQL
In Integrator, it is possible to execute any SQL statement (including DDL), or a list of SQL statements separated by ;, outside of a regular source-to-destination transformation.
Read more about SQL flow.
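Running a ;-separated list of statements, including DDL, looks like this in Python's sqlite3 module, where executescript splits the script on semicolons and runs each statement in order. The script content is invented for the example:

```python
import sqlite3

# A hypothetical ;-separated list of SQL statements, including DDL.
script = """
    CREATE TABLE audit (event TEXT);
    INSERT INTO audit VALUES ('flow started');
    INSERT INTO audit VALUES ('flow finished');
"""

conn = sqlite3.connect(":memory:")
conn.executescript(script)  # executes all three statements in order
```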
Read records from the database and execute flow for each record
In Integrator, it is possible to execute the same flow in a loop for each record returned by a SQL statement.
- Use a database table as a "configuration" for the flow (for example, store start/end dates and other parameters). The flow will loop through all the records in the configuration table and call an API endpoint for each one.
- Pull all the records from the staging table populated with rows from the source Excel worksheet, and execute a CRUD API endpoint for each row.
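The first use case above can be sketched as a SQL-driven loop. This is an illustrative Python/sqlite3 example; the configuration table and the call_api_endpoint stand-in function are hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE flow_config (start_date TEXT, end_date TEXT)")
conn.executemany(
    "INSERT INTO flow_config VALUES (?, ?)",
    [("2024-01-01", "2024-01-31"), ("2024-02-01", "2024-02-29")],
)

calls = []

def call_api_endpoint(start_date, end_date):
    # Stand-in for the API call the flow would make for each record.
    calls.append((start_date, end_date))

# Loop through every record in the configuration table and call the
# endpoint once per record.
for start_date, end_date in conn.execute("SELECT start_date, end_date FROM flow_config"):
    call_api_endpoint(start_date, end_date)
```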
Expose database object as an API endpoint
In Integrator, it is possible to build a REST API that can be used to access a controlled portion of the data from the other side of the firewall. For example, you can create a flow that executes a SQL query against your corporate database and exposes the result set as an Integrator API endpoint.
Read more about building REST API for accessing database objects.
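Conceptually, such an endpoint runs a controlled query and serializes the result set, typically as JSON. The sketch below shows that core step only, using Python's sqlite3 and json modules; the table and query are hypothetical, and the real endpoint is configured in Integrator rather than hand-coded:

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (id INTEGER, name TEXT)")
conn.execute("INSERT INTO employees VALUES (1, 'Alice')")

# Run the controlled query and serialize the result set as JSON,
# which is what a handler for the endpoint would return.
cursor = conn.execute("SELECT id, name FROM employees")
columns = [d[0] for d in cursor.description]
payload = json.dumps([dict(zip(columns, row)) for row in cursor])
```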
Tips and tricks
Read about tips and tricks when working with databases.
Read about performance tips when working with databases.
Also read about common performance tips.