# Creating a Flow

## Steps to create a new Flow

{% hint style="info" %}
Follow the steps below to create a flow with Salesforce as the Input and S3 as the Output, along with a copy option to Redshift.
{% endhint %}

### Step 1: Navigate to Home Page

![](https://272493989-files.gitbook.io/~/files/v0/b/gitbook-legacy-files/o/assets%2F-LyGZIunpED9t56ZtLzh%2F-LyJ2jexbYBDcXpZyh0v%2F-LyJ3O4oebuVHlyF9zfT%2FScreen%20Shot%202020-01-11%20at%202.04.37%20AM.png?alt=media\&token=4837e655-603f-4066-a581-75724e9b9077)

### Step 2: Click on "New Flow" button

### Step 3: Choose an Output

![](https://272493989-files.gitbook.io/~/files/v0/b/gitbook-legacy-files/o/assets%2F-LyGZIunpED9t56ZtLzh%2F-LyJ2jexbYBDcXpZyh0v%2F-LyJ3wFEVHZouSZx6Wji%2FScreen%20Shot%202020-01-11%20at%202.07.11%20AM.png?alt=media\&token=36169e15-737b-408f-b159-244154804813)

### Step 4: Choose an Output Connection

![](https://272493989-files.gitbook.io/~/files/v0/b/gitbook-legacy-files/o/assets%2F-LyGZIunpED9t56ZtLzh%2F-LyJ2jexbYBDcXpZyh0v%2F-LyJ4kC4a-Q9va23YTj8%2FScreen%20Shot%202020-01-11%20at%202.10.44%20AM.png?alt=media\&token=8553b96f-4890-4c1a-9f6e-92b4fe4c38de)

### Step 5: Choose an Input

![](https://272493989-files.gitbook.io/~/files/v0/b/gitbook-legacy-files/o/assets%2F-LyGZIunpED9t56ZtLzh%2F-LyJ2jexbYBDcXpZyh0v%2F-LyJ5HR96ul6lVfy-12s%2FScreen%20Shot%202020-01-11%20at%202.13.49%20AM.png?alt=media\&token=de61cd19-b266-4fe0-95b0-ccd4adea1c1e)

### Step 6: Enter Input & Output Settings

![](https://272493989-files.gitbook.io/~/files/v0/b/gitbook-legacy-files/o/assets%2F-LyGZIunpED9t56ZtLzh%2F-LyJ2jexbYBDcXpZyh0v%2F-LyJ6IoWmcjkqxsjgBpr%2FScreen%20Shot%202020-01-11%20at%202.17.02%20AM.png?alt=media\&token=7ce4d023-f268-41be-a644-0a542a38cdae)

1. Give the flow a friendly name
2. Accept the default flow code, which is generated from the name
3. Uncheck "Publish Transformation" if you want to perform transformations on the flow events
4. Uncheck "Publish Mapping" if you want to manually perform complex mapping between the input & output
5. Uncheck this if you want to allow events that don't comply with the input schema
6. Choose the Input connection
7. Check this to copy the file uploaded to S3 into Redshift
8. Choose the Redshift connection to be used for the copy command
9. Enter the Salesforce object name to be replicated from Input to Output
10. Specify the fetch size to be used while performing the JDBC query
11. Specify the batch size used to determine the topic partition. Specify a large number if you want the data to load in the order in which it was queried.
12. Specify a partition size greater than 1 if you want data to be imported in parallel across nodes. For this to be effective, you must run the Input service on multiple nodes.
13. Specify a comma-separated list of columns to be included in the query. If left blank, all columns will be fetched.
14. Specify a comma-separated list of columns to be excluded. This list is ignored if an include list is provided.
15. Specify a local directory path to be used to temporarily store the Salesforce bulk export files for the initial run
16. Choose an Incremental Policy to be used for this flow (see Step 7)
17. Choose the creation date column, if available. It is used to determine whether an event is newly inserted or updated.
18. Specify the last update date column to be used to track updates
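
The interplay between the include and exclude column settings (items 13 and 14) can be sketched as follows. This is an illustrative sketch only, not CloudIO's actual implementation; the function and object names are assumptions.

```python
# Hypothetical sketch of how the include/exclude column settings could
# shape the replication query. Not CloudIO internals.

def build_query(object_name, all_columns, include="", exclude=""):
    """Resolve the column list, then build a SELECT for the object."""
    if include.strip():
        # An include list takes precedence; the exclude list is ignored.
        columns = [c.strip() for c in include.split(",") if c.strip()]
    else:
        excluded = {c.strip() for c in exclude.split(",")}
        columns = [c for c in all_columns if c not in excluded]
    return f"SELECT {', '.join(columns)} FROM {object_name}"

print(build_query("Account", ["Id", "Name", "Secret__c"], exclude="Secret__c"))
# SELECT Id, Name FROM Account
```

Leaving both lists blank fetches every column, matching the defaults described above.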

### Step 7: Choose an Incremental Policy

![](https://272493989-files.gitbook.io/~/files/v0/b/gitbook-legacy-files/o/assets%2F-LyGZIunpED9t56ZtLzh%2F-LyJ7Zjwq2vKzdV69LAi%2F-LyJ9m48TXITmA1jamff%2FScreen%20Shot%202020-01-11%20at%202.32.36%20AM.png?alt=media\&token=69316309-0ce4-4b83-9b84-448a098766a0)

1. **Full dump and load** - deletes all rows from the target table and reloads the full data set on every run.
2. **Incremental Using Numeric ID Column** - uses a numeric ID column to incrementally load newly added rows whose ID is higher than the maximum ID of the previous run, e.g. inventory or ledger transactions where rows are only inserted, never updated, and a running sequential ID serves as the transaction ID column.
3. **Incremental Using Last Update Date Column** - uses a timestamp column to fetch incremental data with a timestamp value greater than the maximum value of the previous run.
4. **One Time Load** - loads the data only once; the flow will not be scheduled again, and its status changes to Complete after the one-time load.
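
Conceptually, each policy determines the filter applied to the source query on every run. The sketch below is a hedged illustration of that idea; the policy keys, column names, and SQL shapes are assumptions, not CloudIO internals.

```python
# Hedged sketch of the filter each incremental policy implies.
# Policy keys and SQL shapes here are illustrative assumptions.

def incremental_predicate(policy, column=None, last_value=None):
    """Return the WHERE clause a given incremental policy would add."""
    if policy == "full":          # Full dump and load / One Time Load
        return ""                 # no filter: every row is fetched
    if policy == "numeric_id":    # Incremental Using Numeric ID Column
        return f"WHERE {column} > {last_value}"
    if policy == "last_update":   # Incremental Using Last Update Date Column
        return f"WHERE {column} > TIMESTAMP '{last_value}'"
    raise ValueError(f"unknown policy: {policy}")

print(incremental_predicate("numeric_id", "TXN_ID", 10500))
# WHERE TXN_ID > 10500
```

On each run, the maximum ID or timestamp seen becomes `last_value` for the next run, which is why the numeric ID policy suits insert-only tables.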

### Step 8: Choose a schedule policy

![](https://272493989-files.gitbook.io/~/files/v0/b/gitbook-legacy-files/o/assets%2F-LyGZIunpED9t56ZtLzh%2F-LyJDEbSKFnJ00iZiifp%2F-LyJDYvKaFA-OLBmElWg%2FScreen%20Shot%202020-01-11%20at%202.49.54%20AM.png?alt=media\&token=902e6e0c-f9b7-4954-92e6-8e6533722ff4)

1. **Cron Expression** - use this to schedule at a specific time of day, or on a specific day of the week or month, using a cron expression.
2. **Fixed Interval** - use this to schedule the flow every x minutes.
3. **After Parent Flow** - use this to define a dependency between flows. The flow runs immediately after the parent flow runs, regardless of the parent's status.
4. **After Parent Flow Success** - similar to the above, but runs only if the parent flow completed without errors.
5. **After Parent Flow Failure** - similar to the above, but runs only when the parent flow completed with errors.
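
As a rough illustration of the policies above, the values below show what each schedule setting might look like. The field names are assumptions for illustration only, not the literal values CloudIO expects.

```python
# Illustrative schedule settings; field names are assumptions.
schedules = {
    # standard 5-field cron: minute hour day-of-month month day-of-week
    # this one fires at 02:30 every day
    "cron": "30 2 * * *",
    # run every 15 minutes
    "fixed_interval_minutes": 15,
    # run only after the named (hypothetical) parent flow succeeds
    "after_parent_flow_success": "salesforce_account_to_s3",
}
print(schedules["cron"])
```

A cron expression like `0 6 * * 1` would instead run the flow at 06:00 every Monday.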

### Step 9: Click Submit

{% hint style="success" %}
Congratulations, you have just created your first flow
{% endhint %}


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://flow-docs.cloudio.io/tutorials/creating-a-flow.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, when you need clarification or additional context, or when you want to retrieve related documentation sections.
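
The request above can be built with only the Python standard library; the sample question is hypothetical.

```python
# Minimal sketch of the documentation query call described above,
# using only the Python standard library.
from urllib.parse import quote

BASE = "https://flow-docs.cloudio.io/tutorials/creating-a-flow.md"

def ask_url(question: str) -> str:
    """Build the GET URL for a natural-language documentation question."""
    return f"{BASE}?ask={quote(question)}"

url = ask_url("How do I copy S3 files into Redshift?")
print(url)
# An actual request could then be made with urllib.request.urlopen(url).
```

Percent-encoding the question keeps spaces and punctuation valid in the query string.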
