Architecture

FLOW Architecture Overview


Summary: An overview of the FLOW architecture, describing how data moves from the input, through the FLOW stages, to the output.

A basic flow starts with the input (source connectors), passes through various stages (encryption, transformation, validation, mapper, etc.), and is loaded into the output, which is typically a data lake or data warehouse destination.

Each of these stages is described in more detail below.
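Conceptually, each event passes through the configured stages in order. The sketch below models this as a map passed through a list of stage functions; the stage bodies and field names here are hypothetical, and FLOW itself configures stages through its UI rather than hand-written pipeline code:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.UnaryOperator;

// Conceptual sketch only: an event modeled as a key/value map,
// pushed through an ordered list of stage functions.
public class Pipeline {
    public static Map<String, Object> run(Map<String, Object> event,
                                          List<UnaryOperator<Map<String, Object>>> stages) {
        for (UnaryOperator<Map<String, Object>> stage : stages) {
            event = stage.apply(event);
        }
        return event;
    }

    public static void main(String[] args) {
        Map<String, Object> event = new HashMap<>(Map.of("name", "ada"));
        Map<String, Object> out = run(event, List.of(
            e -> { e.put("name", e.get("name").toString().toUpperCase()); return e; }, // transform
            e -> { e.put("valid", true); return e; }                                   // validation
        ));
        System.out.println(out.get("name")); // ADA
    }
}
```

The point of the model is that stages compose: each stage receives the event produced by the previous one, which is why their order in a flow matters.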

Input

FLOW can source data from various systems (Data Sources, or Inputs), both on-premises and in the cloud.

Inputs fall into several categories: relational databases such as Oracle, SQL Server, PostgreSQL, and MySQL; SaaS systems such as Salesforce and Workday, or any system that exposes a REST API or database access; cloud storage such as Amazon S3, Google Drive, and Azure Blob Storage; and traditional sources such as SFTP and the file system.
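As one concrete illustration of a traditional file-system source, the sketch below reads newline-delimited records from a file and splits each line into fields. This is not FLOW's connector API (connectors are configured in the product); the file name and record layout are made up for the example:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Arrays;
import java.util.List;

// Illustrative sketch of a file-system input: read lines and split into fields.
public class FileInput {
    public static List<String> parseRecord(String line) {
        // -1 keeps trailing empty fields, so "1,," yields three fields.
        return Arrays.asList(line.split(",", -1));
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("flow-input", ".csv");
        Files.write(tmp, List.of("id,name", "1,Ada"));
        for (String line : Files.readAllLines(tmp)) {
            System.out.println(parseRecord(line));
        }
        Files.deleteIfExists(tmp);
    }
}
```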

Transform

Transform provides a powerful way to transform incoming data using Groovy scripts or standard Java code. Anything you can express in Java can be applied to each event as it streams through the stages.
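For example, a per-event transform written in plain Java might derive a new field and normalize an existing one. The field names (`FIRST_NAME`, `LAST_NAME`, `COUNTRY`, `FULL_NAME`) and the event-as-map shape are assumptions for illustration, not FLOW's actual event API:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical per-event transform: derive FULL_NAME and normalize COUNTRY.
public class Transform {
    public static Map<String, Object> apply(Map<String, Object> event) {
        Map<String, Object> out = new HashMap<>(event);
        out.put("FULL_NAME",
            event.getOrDefault("FIRST_NAME", "") + " " + event.getOrDefault("LAST_NAME", ""));
        Object country = event.get("COUNTRY");
        if (country != null) {
            out.put("COUNTRY", country.toString().trim().toUpperCase());
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, Object> e = Map.of(
            "FIRST_NAME", "Ada", "LAST_NAME", "Lovelace", "COUNTRY", " uk ");
        System.out.println(Transform.apply(e).get("FULL_NAME")); // Ada Lovelace
    }
}
```

Because the transform runs per event, logic like this applies uniformly to every record streaming through the stage.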

Mapper

Mapper provides an intuitive user interface for mapping the input data schema to its corresponding output schema. You can map fields in text mode, or use the more powerful function mode for advanced mappings.
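Under the hood, a schema mapping is essentially a table of input-field to output-field pairs applied to each event. The sketch below shows that idea with made-up field names; FLOW's Mapper is configured in the UI, not coded by hand:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative mapper: rename input fields to the output schema via a mapping table.
public class Mapper {
    public static Map<String, Object> map(Map<String, Object> event,
                                          Map<String, String> fieldMap) {
        Map<String, Object> out = new LinkedHashMap<>();
        fieldMap.forEach((inName, outName) -> {
            if (event.containsKey(inName)) {
                out.put(outName, event.get(inName));
            }
        });
        return out;
    }

    public static void main(String[] args) {
        Map<String, String> fieldMap = new LinkedHashMap<>();
        fieldMap.put("emp_id", "EMPLOYEE_ID");
        fieldMap.put("dept", "DEPARTMENT");
        System.out.println(Mapper.map(Map.of("emp_id", 7, "dept", "HR"), fieldMap));
    }
}
```

Text mode corresponds to simple one-to-one renames like these; function mode would let a mapping compute its output value rather than just copy it.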

Output

Output (Data Destination) is typically a cloud data warehouse such as Amazon Redshift or Azure SQL, cloud storage such as Amazon S3 or Azure Blob Storage, a traditional data warehouse, or a relational database such as Oracle or SQL Server.
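For a relational destination, loading a mapped event amounts to rendering it as a parameterized insert against the target table. The sketch below builds such a statement from an event map; the table and column names are hypothetical, and FLOW's actual loaders handle batching, typing, and connection management for you:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.stream.Collectors;

// Illustrative sketch: render a mapped event as a parameterized INSERT
// for a relational destination.
public class Output {
    public static String insertSql(String table, Map<String, Object> row) {
        String cols = String.join(", ", row.keySet());
        String params = row.keySet().stream()
                           .map(c -> "?")
                           .collect(Collectors.joining(", "));
        return "INSERT INTO " + table + " (" + cols + ") VALUES (" + params + ")";
    }

    public static void main(String[] args) {
        Map<String, Object> row = new LinkedHashMap<>();
        row.put("EMPLOYEE_ID", 7);
        row.put("DEPARTMENT", "HR");
        System.out.println(insertSql("EMPLOYEES", row));
        // INSERT INTO EMPLOYEES (EMPLOYEE_ID, DEPARTMENT) VALUES (?, ?)
    }
}
```

Using placeholders rather than inlined values mirrors standard JDBC practice and keeps the generated statement safe to reuse across a batch of events.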