ETL, ELT & reverse ETL

Move data any direction. Any technique.

Visual transforms or full code. Batch or streaming. ETL into a warehouse, ELT with pushdown, or Reverse ETL back to SaaS — all from one engine. The only data integration platform that doesn't make you pick.

Any direction · ETL, ELT, reverse ETL
200+ flow types built in
3 languages · SQL, JS, Python
Both batch + streaming

The problem

Most tools force you to pick.

Visual builder OR code. Batch OR streaming. ETL OR ELT. Cloud-first OR on-prem. Modern data teams need all of it — different patterns for different workloads, different teammates, different sources. The cost of picking the wrong tool isn't features; it's another tool a year later.

Why most teams run multiple ETL tools

“Best of breed” usually means “five vendors and a glue layer.”

One tool for ELT (Fivetran). Another for transformations (dbt). Another for Reverse ETL (Hightouch). Another for streaming (Kafka). Another for files and APIs (custom code). Each one brings its own contract, its own vendor relationship, and its own point-to-point integrations with the rest. Etlworks runs every pattern from one engine, with one billing model and one place to debug. Not because a single platform is marketing-pretty, but because real production data work needs all of these patterns.

Capabilities

Every pattern, one engine.

Visual + code, same flow

Drag-and-drop transforms with live preview. Drop into SQL, JavaScript, or Python anywhere you need it. No tool switch.

ETL with pushdown

Transform before load, in memory or at the source. Or push the work down to your warehouse. All from the same flow definition.

Reverse ETL — first class

Sync warehouse data back to Salesforce, HubSpot, NetSuite — 200+ SaaS targets. Same platform, same tier, no add-on.

Batch and streaming

Hourly/daily batch for warehouses. Sub-second streaming for CDC. The same flow definition can run in either mode.

200+ flow types

Pre-built patterns for every common scenario — file-to-DB, DB-to-warehouse, API-to-warehouse, queue-to-DB, and many more.

Stage-and-load patterns

Files staged in S3/Azure/GCS, then COPY INTO at warehouse speed. Snowflake, BigQuery, Redshift, Synapse — all native.
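The stage-and-load pattern can be sketched in a few lines. This is a minimal illustration, not Etlworks' actual code generator: the function builds a standard Snowflake COPY INTO statement for files already sitting behind a named stage, and the table and stage names are made up for the example.

```python
def copy_into_sql(table: str, stage_path: str, file_format: str = "PARQUET") -> str:
    """Build a Snowflake COPY INTO statement for files already staged
    in cloud storage (S3 / Azure Blob / GCS behind a named stage)."""
    return (
        f"COPY INTO {table}\n"
        f"  FROM @{stage_path}\n"
        f"  FILE_FORMAT = (TYPE = {file_format})\n"
        f"  ON_ERROR = 'ABORT_STATEMENT';"
    )

# Hypothetical table and stage, for illustration only.
sql = copy_into_sql("analytics.orders", "etl_stage/orders/2024-06-01/")
print(sql)
```

The point of the pattern: the engine writes files to cheap object storage first, then lets the warehouse ingest them in bulk at its native speed instead of row-by-row inserts.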

Patterns

Three techniques. Same engine.

ETL, ELT, and Reverse ETL aren't different products at Etlworks — they're different routes through the same flow engine. Switch between them by changing the pipeline definition.

ETL
Source → Transform → Target

Transform before load

Clean, dedupe, enrich, mask in flight. Land cleaner data in the warehouse. Best for sensitive data and complex transforms.

Use when: warehouse compute is expensive, data needs masking, or transforms are complex.
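"Clean, dedupe, mask in flight" can be shown with a tiny transform step. A minimal sketch in plain Python, not Etlworks' API; the field names (order_id, email) are invented for the example. Masking happens before load, so raw PII never reaches the warehouse.

```python
import hashlib

def mask_email(email: str) -> str:
    # Deterministic one-way mask: joins still work, raw PII never lands.
    return hashlib.sha256(email.lower().encode()).hexdigest()[:16]

def transform(rows):
    seen, out = set(), []
    for row in rows:
        if row["order_id"] in seen:      # dedupe in flight
            continue
        seen.add(row["order_id"])
        out.append({**row, "email": mask_email(row["email"])})  # mask before load
    return out

rows = [
    {"order_id": 1, "email": "a@example.com"},
    {"order_id": 1, "email": "a@example.com"},  # duplicate, dropped
    {"order_id": 2, "email": "b@example.com"},
]
clean = transform(rows)
print(len(clean))  # 2
```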

ELT
Source → Target → Transform

Load raw, transform in warehouse

Land raw data fast, push transformations down to Snowflake / BigQuery / Redshift compute. Plays nicely with dbt.

Use when: warehouse is the system of record, dbt is your transformation layer.

Reverse ETL
Warehouse → Transform → SaaS

Send modeled data back

Push enriched warehouse data to Salesforce, HubSpot, NetSuite. Operationalize analytics in the tools your team already uses.

Use when: sales / marketing / ops teams need warehouse data in their CRM.
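The core of Reverse ETL is shaping a warehouse row into the field update a CRM expects. A minimal sketch under stated assumptions: the Salesforce-style custom field names (Churn_Risk__c, LTV__c, Segment__c) and the row layout are hypothetical, and the actual API call is out of scope here.

```python
def to_crm_payload(row: dict) -> dict:
    """Shape one modeled warehouse row into a CRM record update.
    Field names are illustrative Salesforce-style custom fields."""
    return {
        "Id": row["crm_id"],
        "Churn_Risk__c": round(row["churn_score"], 2),
        "LTV__c": row["ltv"],
        "Segment__c": row["segment"],
    }

warehouse_rows = [
    {"crm_id": "001A", "churn_score": 0.8312, "ltv": 12400, "segment": "enterprise"},
]
payloads = [to_crm_payload(r) for r in warehouse_rows]
print(payloads[0]["Churn_Risk__c"])  # 0.83
```

That payload is what lands next to the customer record, so sales sees the churn score without ever querying the warehouse.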

Specifications

The depth your team will check.

Transformations
Visual builder
Drag-and-drop mappings · live preview · 50+ pre-built operations · functions library
Code transforms
SQL · JavaScript · Python · sandboxed execution · custom function library
Pushdown
Snowflake · BigQuery · Redshift · Synapse · Postgres · MySQL · SQL Server · Oracle
Execution modes
Batch
Cron-style schedules · event triggers · file watchers · webhook triggers
Streaming
Sub-second CDC · message queue consumers · webhook listeners · Snowpipe Streaming
Hybrid
Initial backfill (batch) → ongoing streaming · same flow definition
Sources & destinations
Warehouses
Snowflake · BigQuery · Redshift · Synapse · Databricks · Postgres-based warehouses
Databases
MySQL · Postgres · SQL Server · Oracle · DB2 · MongoDB · Cassandra · 30+ more
SaaS & APIs
Salesforce · HubSpot · NetSuite · Workday · Zendesk · 200+ via dedicated connectors and HTTP
Files & storage
CSV · JSON · XML · Parquet · Avro · S3 · Azure Blob · GCS · SFTP · WebDAV

Comparing ETL platforms? See Etlworks vs Fivetran, Talend, Informatica, and Matillion

Proof

Production ETL, at scale.

“Our previous vendor — a name you'd recognize — was failing at scale. Etlworks gave us templates, autonomous on-prem agents, and a stable engine in one platform. Same engine for batch ETL, ELT, and reverse sync — one team, one operating model.”
OpenGov
GovTech · classic ETL + ELT at scale

FAQ

Common questions.

ETL or ELT — which should I use?
Depends on the workload. ETL (transform before load) is best when warehouse compute is expensive, data needs masking before it lands, or transforms are complex enough that running them in flight is faster than re-running them on every query. ELT (load raw, transform in warehouse) is best when the warehouse is your system of record and you have dbt or similar tooling for in-warehouse transformations. With Etlworks you don't have to pick — the same flow can do either, and most production deployments use both depending on the source.
What's “Reverse ETL” and do I need it?
Reverse ETL pushes data from your warehouse back to operational SaaS tools — Salesforce, HubSpot, NetSuite, Marketo, etc. The use case: marketing / sales / ops teams want to act on warehouse insights, but they live in their CRM. Instead of asking them to query Snowflake, you sync derived metrics (churn risk score, LTV, segment) back to Salesforce so it shows up next to the customer record. Most teams don't think they need it until someone asks “why doesn't sales see this?”
Can I use both visual transforms and code in the same flow?
Yes — that's the design. Most steps in a flow are configured visually (mappings, filters, joins). When you hit a transformation that needs custom logic, drop into SQL, JavaScript, or Python for that step and continue visually after. Each step has live preview, so you can validate as you build. No tool switch, no separate dbt project required (though dbt works alongside if you prefer).
Does Etlworks replace dbt?
It can — but it doesn't have to. Common patterns: (1) Etlworks loads raw data, then triggers a dbt run for warehouse transformations. (2) Etlworks does both ingestion and transformation natively, no dbt. (3) Hybrid — Etlworks for sources where in-flight transforms make sense, dbt for the modeled layer. Some teams pick one approach, others mix. We integrate with dbt Cloud and dbt-core via the scheduler if you want to keep using it.
What about pushdown — does it work for non-Snowflake warehouses?
Yes. Pushdown is supported for Snowflake, BigQuery, Redshift, Synapse, Databricks, plus the relational databases (Postgres, MySQL, SQL Server, Oracle). The flow definition declares the warehouse, and Etlworks generates the appropriate SQL for that engine. SQL dialects are handled — you write standard SQL or vendor-specific functions, and we translate where needed.
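Dialect translation is easiest to see with a concrete case. A minimal sketch, not Etlworks' actual generator: the same logical "truncate to month" renders differently per engine, since BigQuery's DATE_TRUNC takes the date part as the second argument while Snowflake, Redshift, and Postgres take it first.

```python
def date_trunc(dialect: str, part: str, column: str) -> str:
    """Render a date truncation for the target engine.
    Illustrates dialect translation on one function only."""
    if dialect == "bigquery":
        return f"DATE_TRUNC({column}, {part.upper()})"    # BigQuery: part comes second
    return f"DATE_TRUNC('{part.upper()}', {column})"      # Snowflake / Redshift / Postgres

print(date_trunc("snowflake", "month", "order_date"))  # DATE_TRUNC('MONTH', order_date)
print(date_trunc("bigquery", "month", "order_date"))   # DATE_TRUNC(order_date, MONTH)
```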
Can the same pipeline be batch one day and streaming the next?
Often, yes. The flow definition declares the work (read from X, transform, write to Y); the execution mode is configurable per source. A common pattern: backfill historical data with a one-time batch run, then switch the same flow to streaming mode for ongoing changes. This pattern runs in production for petabyte-scale CDC migrations into warehouses.
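The "same flow, two modes" idea can be sketched by separating the flow from its reader. This is a simplified illustration, not Etlworks' engine: the change feed is simulated with a list, where production would tail a real CDC stream.

```python
def run_flow(read, write):
    """The flow declares the work; the reader supplied decides the mode."""
    for record in read():
        write(record)

history = [{"id": i} for i in range(3)]
def batch_reader():                 # one-time backfill of the full table
    yield from history

changes = [{"id": 3}, {"id": 4}]
def cdc_reader():                   # simulated ongoing change feed
    yield from changes

target = []
run_flow(batch_reader, target.append)  # initial batch backfill
run_flow(cdc_reader, target.append)    # same flow, streaming mode
print(len(target))  # 5
```

The flow body never changes between the two runs; only the reader does, which is the property the FAQ answer describes.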
What about migrating from Fivetran or Talend?
Migrations from each are common and well-documented. We provide migration assessment for cost (typically 40–70% savings vs Fivetran consumption pricing or Talend per-seat licensing), timeline (most migrations run 2–4 weeks), and connector parity. Reach out via Talk to us and we'll send a migration brief specific to your current platform.

Start your trial

14 days. No card. Real workloads.

Spin up a free trial, build a flow, and see whether "any direction, any technique" actually holds up.