User-defined APIs
Expose any flow as a custom REST API endpoint — or have an external system trigger a flow by POSTing to one. Configure it once, and your endpoint lives at /rest/v1/httplistener/<your-pattern>, runs under the calling user's permissions, and shows up in the audit log alongside everything else.
The endpoints in the REST API reference are built-in — /v1/flows, /v1/connections, etc. — for managing the platform. User-defined APIs are endpoints you create by attaching a flow to an HTTP Listener. They're how you expose your own data and integrations as REST.
Overview
The core building block is an HTTP Listener — a connection that listens on a configurable path and method. Depending on which side of the source-to-destination flow it sits on, the listener either pushes the request payload into the flow or returns the flow's output as the HTTP response:
| Pattern | Listener role | Use case |
|---|---|---|
| PULL | Destination (TO) | Expose a database, file, or third-party API as a JSON/XML/CSV endpoint. |
| PUSH | Source (FROM) | External system POSTs a payload → Etlworks runs a flow that processes it. |
| UPLOAD | Source (FROM) | POST a file body → Etlworks copies the bytes into file storage (S3, FTP, server, …). |
| CRUD | Source or destination per method | One resource exposed across POST, PUT, DELETE, GET. |
What a request looks like
| Aspect | Spec |
|---|---|
| Path | /rest/v1/httplistener/<url-pattern> |
| Methods | GET, POST, PUT, DELETE (whichever you configured on the listener) |
| Auth header | Authorization: Bearer <access-token> · or Authorization: Basic <base64(user:pass)> |
| Request body | Any supported format — JSON, XML, CSV, fixed-length, HL7. Empty body allowed (see Empty payload). |
| Content-Type | Not required (the listener parses by Format) |
| Response | JSON output of the flow (PULL) · pre-defined receipt (PUSH/PUT/DELETE) · user-defined (see Custom responses) |
| Status codes | 200 success · 401 / 403 auth · 500 internal error |
Pre-defined response shape (PUSH / PUT / DELETE)
{
"auditId": 106358,
"messageUuid": "caaa8347-063c-449b-a29b-12e68799c986",
"flowId": 3219,
"status": "success"
}
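A caller can parse this receipt to confirm the flow was accepted and keep the auditId around for troubleshooting. A minimal sketch against the sample receipt above, using only POSIX sed (no jq dependency):

```shell
# Parse the PUSH receipt; the sample JSON matches the shape shown above.
RECEIPT='{"auditId":106358,"messageUuid":"caaa8347-063c-449b-a29b-12e68799c986","flowId":3219,"status":"success"}'
STATUS=$(printf '%s' "$RECEIPT" | sed -E 's/.*"status":"([^"]+)".*/\1/')
AUDIT_ID=$(printf '%s' "$RECEIPT" | sed -E 's/.*"auditId":([0-9]+).*/\1/')
echo "status=$STATUS auditId=$AUDIT_ID"
# → status=success auditId=106358
```

If status is anything other than success, look up the run in the audit log by auditId.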
Endpoint URL
Every user-defined API lives at:
https://<your-instance>.etlworks.com/rest/v1/httplistener/<your-url-pattern>
For Etlworks Cloud customers, the host is app.etlworks.com. For on-prem, it's your configured hostname.
URL pattern
Set the URL Pattern field when creating the HTTP Listener. The pattern becomes the path your endpoint responds at:
| URL Pattern | Resulting endpoint |
|---|---|
| patients | https://app.etlworks.com/rest/v1/httplistener/patients |
| hl7/patient | https://app.etlworks.com/rest/v1/httplistener/hl7/patient |
| patients/{patientid} | https://app.etlworks.com/rest/v1/httplistener/patients/123 — {patientid} binds to the path segment |
Only one HTTP Listener can claim a given pattern at a time. /patients and /patient are different patterns and can coexist; two listeners both claiming /patients cannot.
Path parameters become flow variables
Tokens like {patientid} in the URL pattern are extracted from the request path and exposed to the flow as variables — usable in source SQL, the FROM/TO fields, and JavaScript / Python.
patients/{patientid}
select * from patient where patientid = {patientid}
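To make the binding concrete, here is an illustration of how the token resolves — the variable name and value mirror the patients/{patientid} example above, and the substitution is sketched in shell rather than the platform's own templating:

```shell
# Illustration only: the {patientid} token, bound from the request path,
# is substituted into the source SQL before it runs.
PATIENTID=123   # extracted from /rest/v1/httplistener/patients/123
SQL="select * from patient where patientid = ${PATIENTID}"
echo "$SQL"
# → select * from patient where patientid = 123
```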
You can also surface request headers as variables — see Request & response headers.
Authentication — JWT (recommended)
Set Auth Type to JWT on the listener. Auth Location is either Header (recommended) or Query Parameter.
Three steps to call the endpoint
- Create an API user with a role that can execute flows. (API users can only execute Listener-backed endpoints, not the platform REST API.)
- Exchange username + password for an access token via the auth endpoint:
  Or skip this step entirely by generating a non-expiring API key (Settings → Users → Edit user → API Key) and using that as the token.

curl -s -X POST https://app.etlworks.com/rest/v1/token-auth/issue \
  -H "Content-Type: application/json" \
  -d '{"login":"api-user","password":"…"}'
# → { "token": "eyJ0eXAiOiJKV1Q…" }

- Send the token on every call to your endpoint:

| Auth location | Where the token goes |
|---|---|
| Header | Authorization: Bearer eyJ0eXAi… |
| Query Parameter | ?Authorization=Bearer%20eyJ0eXAi… |
Tokens minted via /v1/token-auth/issue are short-lived (~10 minutes). Refresh before each batch of calls, or use a non-expiring API key for service accounts.
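In a shell script, the token can be lifted out of the issue reply and turned into the header for subsequent calls. A sketch assuming the reply shape shown above — the token value here is a made-up placeholder — using sed so there is no jq dependency:

```shell
# The JSON shape matches the /rest/v1/token-auth/issue reply; the token value is a placeholder.
RESPONSE='{"token":"eyJ0eXAiOiJKV1Q.sample.payload"}'
TOKEN=$(printf '%s' "$RESPONSE" | sed -E 's/.*"token":"([^"]+)".*/\1/')
echo "Authorization: Bearer $TOKEN"
# → Authorization: Bearer eyJ0eXAiOiJKV1Q.sample.payload
```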
Authentication — Basic
Less secure than JWT — use only when the calling system can't manage tokens (e.g. a third-party webhook that only sends a static Authorization header).
Set Auth Type to Basic on the listener. Auth Location is Header, Query Parameter, or (see next section) embedded in the URL path.
- Base64-encode user:password:

printf 'admin:admin' | base64
# → YWRtaW46YWRtaW4=

- Send on every call:

| Auth location | Where it goes |
|---|---|
| Header | Authorization: Basic YWRtaW46YWRtaW4= |
| Query Parameter | ?Authorization=Basic%20YWRtaW46YWRtaW4= |
Path-based auth (for restrictive webhooks)
Some third-party webhook systems (eBay, certain SaaS platforms) only let you configure a static URL — no header, no query string. The workaround: embed a doubly-base64-encoded Basic auth string as the last path segment.
- Base64-encode user:password: admin:admin → YWRtaW46YWRtaW4=
- Build the standard Basic auth string: Basic YWRtaW46YWRtaW4=
- Base64-encode that: QmFzaWMgWVdSdGFXNDZZV1J0YVc0PQ==
- Append to your URL Pattern: some/path/QmFzaWMgWVdSdGFXNDZZV1J0YVc0PQ== — or use a path token like some/path/{auth} for flexibility.
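The encoding steps can be scripted. A sketch using the admin:admin credentials from the example:

```shell
# Step 1: base64-encode the credentials
CREDS=$(printf 'admin:admin' | base64)            # YWRtaW46YWRtaW4=
# Steps 2-3: build the "Basic ..." string and base64-encode it again
SEGMENT=$(printf 'Basic %s' "$CREDS" | base64)
echo "$SEGMENT"
# → QmFzaWMgWVdSdGFXNDZZV1J0YVc0PQ==
```

The printed value is the path segment to append to the URL Pattern in step 4.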
Wait ~1 minute for the URL to propagate, then point your webhook at it. The listener decodes the trailing segment and authenticates.
PULL — expose a data source as an API endpoint
Tableau wants the latest customer list, in JSON, behind an authenticated URL. Wire up a flow whose source is your warehouse and whose destination is the listener — the listener serves the flow's output as the response.
Setup
- Create an HTTP Listener with Method = GET and a URL pattern.
- Create a Format for the response: JSON, JSON Dataset, XML, XML Dataset, CSV, or Fixed-Length Text.
- Create an "any-to-web-service" flow where the source is your data (database, file, API) and the destination is the listener + format from steps 1 & 2.
- Configure the source query, mapping, and any flow parameters needed. Use {patientid}-style tokens to bind path parameters into the SQL.
- Schedule the flow as event-driven. Etlworks executes it on every inbound GET.
Calling it
curl -s "https://app.etlworks.com/rest/v1/httplistener/patients/123" \
-H "Authorization: Bearer eyJ0eXAi…"
# → { "id": 123, "name": "…", … }
Test it from inside Etlworks Explorer too — create an Etlworks API Connection pointing at the endpoint and browse the response there.
PUSH — trigger a flow when a payload arrives
A SaaS application updates a record. Its webhook fires and POSTs the updated record's JSON to your endpoint. Your flow runs — transforms the payload, writes it to the warehouse, sends an email.
Setup
- Create an API user with a role that can run flows.
- Create an HTTP Listener with Method = POST (or PUT).
- Create a Format matching the inbound payload — JSON, XML, CSV, fixed-length, HL7 v2.x, FHIR.
- Build a flow whose source is a web service. The source connection (FROM) is the listener.
- Add mapping + parameters as you would normally.
- Schedule the flow as event-driven. It runs every time a payload arrives.
A flow can have at most one listener as its source connection. If you need a second inbound endpoint for the same processing logic, create a second flow that calls the first as a nested flow.
Calling it
curl -s -X POST "https://app.etlworks.com/rest/v1/httplistener/hl7/patient" \
-H "Authorization: Bearer eyJ0eXAi…" \
-H "Content-Type: application/json" \
-d '{"id":123,"name":"…"}'
# → { "auditId": 106358, "messageUuid": "…", "flowId": 3219, "status": "success" }
UPLOAD — ingest files via HTTP
An HTTP Listener can be the source of a Copy Files flow — turning a POST endpoint into a file dropzone that lands bytes in any supported file storage (server, FTP/FTPS/SFTP, S3, etc.).
Setup
- Create an HTTP Listener with Method = POST and an Auth Type. URL Pattern uses a path token for the filename, e.g. file/{filename} (the token name is yours to choose).
- Create a Copy Files flow.
- Set the FROM connection to the listener.
- Set the TO connection to any file connection.
- Set the destination filename to either a literal value (test.json) or the path token ({filename} in this example) so the caller controls it.
- Schedule the flow as event-driven.
Calling it
curl -s -X POST "https://app.etlworks.com/rest/v1/httplistener/file/test2.json" \
-u "api-user:…" \
--data-binary @local-file.json
# The body of local-file.json lands at the configured destination as test2.json.
CRUD — multiple methods on one resource
Build a full create / read / update / delete API by configuring listeners with the appropriate HTTP method and wiring each to a flow. Path tokens parameterize destination queries.
| Method | Wiring | Execution mode |
|---|---|---|
| GET | Listener as destination — same as PULL. | Synchronous — caller waits for the flow. |
| POST | Listener as source. Destination is the resource you're creating into. Tokenize the destination query with path / payload variables. | Async — the caller gets a receipt before the flow runs. |
| PUT | Same as POST; pick PUT when update semantics matter. | Sync — caller waits for the flow. |
| DELETE | Same shape as GET, but configure the listener with DELETE. Use Successful DELETE response code to control the returned status (200, 204, etc.). | Sync. |
Example — tokenized destination query
update patient
set name = {name}
where patientid = {patientid}
With URL Pattern patients/{patientid}, calling PUT /rest/v1/httplistener/patients/123 with JSON body {"name":"Jane"} resolves the query to update patient set name='Jane' where patientid=123.
Custom responses
By default, PUSH/PUT endpoints return the pre-defined receipt. Two ways to return something else:
Option A — return the destination's response
Useful when the flow forwards the payload to another HTTP API and you want to relay that API's reply back to the caller.
- On the listener, set POST/PUT Processing = Sync and POST/PUT Response File Name = {app.data}/response.json.
- On the destination HTTP connection, set Response File Name to the same path.
- Build the flow with the listener as source and that HTTP connection as destination.
- Schedule event-driven.
The destination's response gets written to response.json and returned to the caller. The file is auto-deleted after the response goes out.
Option B — user-defined response file
Generate a response from inside the flow itself, without delegating to an upstream API.
- Listener: POST/PUT Processing = Sync, POST/PUT Response File Name = {app.data}/response.json.
- Build a primary flow that processes the inbound payload.
- Build a second flow that writes the response file (same path as in step 1).
- Combine them into a nested flow so the response file always exists by the time the listener replies.
- Schedule the nested flow event-driven.
The response file is auto-deleted after the listener returns it.
Request & response headers
Read inbound headers
Enable Pass request headers to the flow as parameters on the listener. Each header becomes a flow variable — uppercased (Validation-Token → VALIDATION-TOKEN).
var token = scenario.getVariable('VALIDATION-TOKEN').getValue();
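Assuming the uppercasing is plain ASCII case-folding (the hyphen is preserved, as in the example above), the mapping from header name to variable name looks like:

```shell
# Header names are uppercased when exposed as flow variables; hyphens are kept.
header="Validation-Token"
var_name=$(printf '%s' "$header" | tr '[:lower:]' '[:upper:]')
echo "$var_name"
# → VALIDATION-TOKEN
```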
Set outbound headers
From any place a flow runs JavaScript or Python:
etlConfig.addResponseHeader('header-name', headerValue);
Real-world example: webhook subscription handshake
RingCentral's webhooks require the endpoint to echo a Validation-Token request header back in the response to complete the subscription. The flow:
// Look up the inbound Validation-Token header (exposed under its uppercased variable name)
var token = scenario.getVariable('VALIDATION-TOKEN');
// Echo it back in the response so the subscription handshake completes
etlConfig.addResponseHeader('Validation-Token', token != null ? token.getValue() : '');
value = true;
Empty payload
Some upstream systems POST with no body — especially webhook handshakes. Without a default, the parser raises an exception. Fix: set Default Payload on the listener to a valid document in the chosen format. The flow runs against the default whenever the request body is empty.
Response codes
| Code | When |
|---|---|
| 200 | Success. |
| 200, configurable for DELETE | Set Successful DELETE response code on the listener if you want a different status (e.g. 204 No Content). |
| 401 | Missing or invalid auth. |
| 403 | Authenticated but not authorized to run this flow. |
| 500 | Internal error during flow execution. Check the audit log via auditId. |
Test from inside Etlworks
You don't need an external system to exercise an event-driven flow. Build a "sender" flow that calls your listener.
- Prepare test data. A source connection (file, cloud storage, database) plus a Format matching what real callers will send.
- Configure the destination. Create an Etlworks API Connection with:
  - Endpoint path: /rest/v1/httplistener/<your-pattern>
  - Method: matches your listener (POST, PUT, GET, DELETE)
  - User / password: the API user configured for the listener
- Build the sender flow. Source = test data + format. Destination = the API connection from step 2. Format on both sides matches.
- Run the sender. It POSTs (or GETs / PUTs / DELETEs) to your listener, triggering the event-driven flow exactly as a real caller would.
The sender flow is reusable — keep it around for regression testing or wire it into a CI flow.
View inbound payloads
Every payload an HTTP Listener receives is captured in the platform's messages store — queryable by id, by listener, by status, by substring. Use the Messages app in the UI, or the CLI:
// Filter recent messages
list-messages-filtered message=Notification;
// Pull one message body
get-message 24a6e77b-c05f-45ed-b1e8-908ce361bf31;
Or via REST: GET /v1/messages — see the REST API reference.
The NBCUniversal and Transport for NSW case studies are both built on this exact pattern — user-defined APIs in front of multi-source flows. Drop us a note at product@etlworks.com if you hit anything that's not in this guide.