Getting Started with the Mezmo Pipeline API

Creating a New Pipeline

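The request looks something like the sketch below. The endpoint path, the servicekey auth header, and the title field are assumptions for illustration; check the Mezmo Pipeline API reference for the exact contract.

# Hypothetical endpoint and auth header; adjust to match the API reference.
curl -s -X POST "https://api.mezmo.com/v3/pipeline" \
  -H "Content-Type: application/json" \
  -H "servicekey: $MEZMO_SERVICE_KEY" \
  -d '{"title": "my-first-pipeline"}'

Response (illustrative shape only):

{
  "data": {
    "id": "<your-pipeline-id>",
    "title": "my-first-pipeline"
  }
}
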
This operation creates a new Mezmo Pipeline in your account. Think of it as a blank canvas: we will need to add sources, transforms, and destinations to make use of it. Take note of the "id" under "data" in the response, which is your pipeline ID. You will use it in later operations to edit and publish your pipeline.

Adding a Source

Now that you have a new pipeline, it's time to add a Source to it, which defines where your data enters the pipeline. In this example, we will create an HTTP Source.

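A sketch of the request; the source endpoint path and the type value are assumptions for illustration:

# Hypothetical endpoint; "http" as the source type string is an assumption.
curl -s -X POST "https://api.mezmo.com/v3/pipeline/$PIPELINE_ID/source" \
  -H "Content-Type: application/json" \
  -H "servicekey: $MEZMO_SERVICE_KEY" \
  -d '{"type": "http", "title": "my-http-source"}'

Response (illustrative shape only):

{
  "data": {
    "id": "<your-source-node-id>",
    "gateway_route_id": "<your-gateway-route-id>"
  }
}
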
Take note of data/id in the response. We will use it in later steps to connect pipeline nodes together.

Adding an Access Key

Now that we've created an HTTP Source, we need to add an access key to it so that only authorized clients can send data to this endpoint. Grab the "gateway_route_id" from the response above, as you'll use it in the next request.

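A sketch of the request, assuming a dedicated access-key endpoint; the path and body field names are illustrative:

# Hypothetical endpoint; the gateway_route_id comes from the source response above.
curl -s -X POST "https://api.mezmo.com/v3/pipeline/access-key" \
  -H "Content-Type: application/json" \
  -H "servicekey: $MEZMO_SERVICE_KEY" \
  -d "{\"gateway_route_id\": \"$GATEWAY_ROUTE_ID\"}"

Response (illustrative shape only):

{
  "data": {
    "key": "<your-generated-access-key>"
  }
}
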
Success! Your access key is returned under data/key, and you can use it to start ingesting data! But first, let's add a transform to the pipeline to do something with the incoming data.

Adding a Transform

For this example, we're going to add a Remove Fields transform that removes a set of fields from each payload ingested into the pipeline. Note: we also need to add the ID of the HTTP Source created above to the "inputs" field to connect these two nodes, so that data ingested through your HTTP Source flows straight into the transform. Without this, the nodes will be disconnected.

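A sketch of the request; the transform endpoint, the type string, and the user_config shape are assumptions for illustration, and the field names being removed are placeholders. The "inputs" array carries the Source ID from the earlier step:

# Hypothetical endpoint and transform type; "inputs" connects this node to the HTTP Source.
curl -s -X POST "https://api.mezmo.com/v3/pipeline/$PIPELINE_ID/transform" \
  -H "Content-Type: application/json" \
  -H "servicekey: $MEZMO_SERVICE_KEY" \
  -d '{
    "type": "remove-fields",
    "title": "drop-noisy-fields",
    "inputs": ["<SOURCE_NODE_ID>"],
    "user_config": { "fields": [".debug", ".internal_id"] }
  }'

Response (illustrative shape only):

{
  "data": {
    "id": "<your-transform-node-id>"
  }
}
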
Great! Now we have a new transform connected to our HTTP Source node. Next, let's send this data somewhere, like AWS S3. Take note of data/id in the response; we will use it in the next step to connect this transform node to the destination of our choosing.

Creating a Destination

Now let's create a place for this data to go. In this example, we've chosen to configure an S3 Destination.

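A sketch of the request; the destination endpoint, the type string, and the user_config field names are assumptions for illustration. "inputs" takes the transform ID from the previous step:

# Hypothetical endpoint and config fields; "inputs" connects this node to the transform.
curl -s -X POST "https://api.mezmo.com/v3/pipeline/$PIPELINE_ID/destination" \
  -H "Content-Type: application/json" \
  -H "servicekey: $MEZMO_SERVICE_KEY" \
  -d '{
    "type": "s3",
    "title": "my-s3-destination",
    "inputs": ["<TRANSFORM_NODE_ID>"],
    "user_config": {
      "bucket": "my-log-archive",
      "region": "us-east-1",
      "auth": {
        "access_key_id": "<AWS_ACCESS_KEY_ID>",
        "secret_access_key": "<AWS_SECRET_ACCESS_KEY>"
      }
    }
  }'
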
Publishing the Pipeline

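Publishing takes the draft nodes you've been assembling and makes them live. A sketch, assuming a publish sub-resource on the pipeline (illustrative):

# Hypothetical endpoint; this sketch assumes publishing requires no request body.
curl -s -X POST "https://api.mezmo.com/v3/pipeline/$PIPELINE_ID/publish" \
  -H "servicekey: $MEZMO_SERVICE_KEY"
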
And that's it! You've now published your first pipeline: you can send data into it over HTTP and out to S3. Stay tuned for part two, which will cover more advanced use cases, including removing and modifying nodes, reverting to a previous pipeline version, pausing, and more.
