
UpdateEntity resource

Use the UpdateEntity resource to update a streaming ingestion task. You can update streaming ingestion tasks that use the following connectors: Amazon Kinesis, Amazon S3 V2, Microsoft Azure Event Hubs, Microsoft Azure Data Lake Storage Gen2, flat file, JDBC V2, JMS, Kafka, or MQTT.

POST request

Use a POST request to update a streaming ingestion task. Send the request to the following URL:
<server URI>/sisvc/restapi/v1/UpdateEntity/Documents('<document ID>')
You can include the following fields in the request:
| Field | Type | Required | Description |
| --- | --- | --- | --- |
| name | String | Yes | Name of the task. |
| description | String | - | Description of the task. |
| runtimeId | String | Yes | ID of the runtime environment. |
| currentVersion | String | Yes | The latest dataflow object version. |
| nodes | Array | Yes | Details of the task source and target connections. |
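For example, you might send the update with Python. The following is a minimal sketch; the server URI, document ID, and session header name are illustrative assumptions, so substitute the values and authentication that apply to your organization:
import requests

# Hypothetical values for illustration only; substitute your own.
server_uri = "https://example.informaticacloud.com"   # your <server URI>
document_id = "0mgs000000000000001"                   # your <document ID>
url = f"{server_uri}/sisvc/restapi/v1/UpdateEntity/Documents('{document_id}')"

payload = {
    "name": "mqtt to flatfile",
    "description": "mqtt to flatfile",
    "runtimeId": "01000025000000000003",
    "currentVersion": "2",
    "nodes": [],  # source and target nodes, as described below
}

# The session header name is an assumption modeled on other Informatica
# Intelligent Cloud Services REST APIs; confirm the headers that your
# environment requires.
headers = {"Content-Type": "application/json", "IDS-SESSION-ID": "<session ID>"}
response = requests.post(url, json=payload, headers=headers)
print(response.status_code)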

Fields of the nodes array

The fields in the nodes array provide the name, type, and connection ID of each connection, along with the configuration of the source and target connections as key-value pairs that you can edit. You can include the following fields in the nodes array:
| Field | Type | Required | Description |
| --- | --- | --- | --- |
| name | String | Yes | Name of the connection. |
| type | String | Yes | The connection type: source or target. |
| connectionId | String | Yes | ID of the connection. |
| transformationType | String | - | Not applicable. |
| config | Array | Yes | Configuration of the source and target connections. |
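Because every entry in the nodes array has the same shape, you can assemble nodes programmatically before sending the request. The following Python helper is a minimal sketch (the function and variable names are illustrative); the valid config keys for each connector are listed in the sections that follow:
def make_node(name, node_type, connection_id, config):
    """Build one nodes array entry from a dict of config key-value pairs."""
    return {
        "name": name,
        "type": node_type,              # "source" or "target"
        "connectionId": connection_id,
        "transformationType": "",       # not applicable, but include the field
        "config": [{"key": k, "value": v} for k, v in config.items()],
    }

# Example: an MQTT source node that uses keys from the MQTT section below.
source_node = make_node(
    "mqtt to flatfile_source",
    "source",
    "012MGS0B00000000001O",
    {"ClientID": "test", "MaxQueueSize": 1024, "Topic": "test"},
)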

Connection configuration for tasks with MQTT as a source

When the source connection of the task is MQTT, you can include the following key-value pairs in the config array of the source connection:
| Key | Type | Required | Description |
| --- | --- | --- | --- |
| ClientID | String | - | Unique identifier of the connection between the MQTT source and the MQTT broker. The client ID is the file-based persistence store that the MQTT source uses to store messages while they are processed. You can enter a string of up to 255 characters. |
| MaxQueueSize | Integer | - | The maximum number of messages that the processor can store in memory. You can enter a value between 1 and 2147483647. |
| Topic | String | Yes | Name of the MQTT topic. |

POST request example

To update a streaming ingestion task with an MQTT source and a flat file target, you might send a request similar to the following example:
{
  "name": "mqtt to flatfile",
  "description": "mqtt to flatfile",
  "runtimeId": "01000025000000000003",
  "locationId": "5sJ0JDyJyWLlrosS5qJjsQ",
  "currentVersion": "2",
  "messageFormat": "binary",
  "nodes": [
    {
      "name": "mqtt to flatfile_source",
      "type": "source",
      "connectionId": "012MGS0B00000000001O",
      "transformationType": "",
      "config": [
        { "key": "ClientID", "value": "test" },
        { "key": "MaxQueueSize", "value": 1024 },
        { "key": "Topic", "value": "test" }
      ]
    },
    {
      "name": "mqtt to flatfile_target",
      "type": "target",
      "connectionId": "012MGS0B00000000002N",
      "transformationType": "",
      "config": [
        { "key": "interimDirectory", "value": "/home/agent/test" },
        { "key": "rolloverSize", "value": 1024 },
        { "key": "rolloverEvents", "value": 100 },
        { "key": "rolloverTime", "value": 300000 },
        { "key": "File Name", "value": "test" }
      ]
    }
  ],
  "edges": [
    { "from": "mqtt to flatfile_source", "to": "mqtt to flatfile_target" }
  ]
}

Connection configuration for tasks with JMS as a source

When the source connection of the task is JMS, you can include the following key-value pairs in the config array of the source connection:
| Key | Type | Required | Description |
| --- | --- | --- | --- |
| destinationType | String | Yes | Type of destination that the source service sends JMS messages to. Enter one of the following values: QUEUE (the JMS provider delivers messages to a single consumer that is registered for the queue) or TOPIC (the JMS provider delivers messages to all active consumers that subscribe to the topic). |
| clientId | String | Yes | Unique ID of the JMS connection. You can enter a string of up to 255 characters. |
| sharedSubscription | String | Yes | Enables multiple consumers to access a single subscription. Applies to the TOPIC destination type. Enter True or False. |
| durableSubscription | String | Yes | When set to True, the JMS source service enables inactive subscribers to retain messages and then deliver them when the subscriber reconnects. Applies to the TOPIC destination type. Enter True or False. |
| subscriptionName | String | Yes | Name of the subscription. Applies to the TOPIC destination type when the topic subscription type is shared, durable, or both. |
| JMS Destination | String | Yes | Name of the queue or topic that the JMS provider delivers messages to. |

POST request example

To update a streaming ingestion task with a JMS source and a flat file target, you might send a request similar to the following example:
{
  "name": "crud",
  "description": "JMS to FileToFile",
  "runtimeId": "01000025000000000003",
  "locationId": "5sJ0JDyJyWLlrosS5qJjsQ",
  "currentVersion": "2",
  "messageFormat": "binary",
  "nodes": [
    {
      "name": "crud_source",
      "type": "source",
      "connectionId": "012MGS0B000000000003",
      "transformationType": "",
      "config": [
        { "key": "destinationType", "value": "QUEUE" },
        { "key": "clientId", "value": "" },
        { "key": "JMS Destination", "value": "test" }
      ]
    },
    {
      "name": "crud_target",
      "type": "target",
      "connectionId": "012MGS0B00000000000H",
      "transformationType": "",
      "config": [
        { "key": "interimDirectory", "value": "/home/agent/test" },
        { "key": "rolloverSize", "value": 1024 },
        { "key": "rolloverEvents", "value": 100 },
        { "key": "rolloverTime", "value": 300000 },
        { "key": "File Name", "value": "test" }
      ]
    }
  ],
  "edges": [
    { "from": "crud_source", "to": "crud_target" }
  ]
}

Connection configuration for tasks with Microsoft Azure Data Lake Storage Gen2 (ADLS Gen2) as a target

When the target connection of the task is ADLS Gen2, you can include the following key-value pairs in the config array of the target connection:
| Key | Type | Required | Description |
| --- | --- | --- | --- |
| writeStrategy | String | Yes | The action to take when a file of the same name exists in the ADLS Gen2 storage. Enter one of the following values: Append (add data to the existing file), Overwrite (replace the existing file with the new file), Fail (fail the request), or Rollover (close the current file and create a new file based on the configured rollover value). |
| rolloverSize * | Integer | - | Target file size, in KB, at which to trigger a rollover. Applies to the Rollover write strategy. You can enter a value between 1 and 2147483647. |
| rolloverEvents * | Integer | - | Number of events or messages to accumulate before a rollover. Applies to the Rollover write strategy. You can enter a value between 1 and 2147483647. |
| rolloverTime * | Integer | - | Length of time, in milliseconds, after which to trigger a rollover. Applies to the Rollover write strategy. You can enter a value between 1 and 2147483647. |
| filesystemNameOverride | String | - | Overrides the default file system name provided in the connection. This file system name is used to write to a file at run time. You can enter a string of up to 1,280 characters. |
| directoryOverride | String | - | The ADLS Gen2 directory path to write data to. Overrides the default directory path. If left blank, the default directory path is used. You can enter a string of up to 1,280 characters. |
| compressionFormat | String | - | Compression format to use before the streaming ingestion task writes data to the target file. Enter one of the following values: None, GZIP, BZIP2, DEFAULT1 (Zlib format), or DEFAULT2 (Deflate format). |
| File Name/Expression | String | Yes | ADLS Gen2 file name or a regular expression. You can enter a string of up to 249 characters. |

* Enter a value for at least one of the fields.

POST request example

To update a streaming ingestion task with a flat file source and an ADLS Gen2 target, you might send a request similar to the following example:
{
  "name": "flatfile to adls",
  "description": "flatfile to adls",
  "runtimeId": "01000025000000000003",
  "locationId": "5sJ0JDyJyWLlrosS5qJjsQ",
  "currentVersion": "2",
  "messageFormat": "binary",
  "nodes": [
    {
      "name": "flatfile to adls_source",
      "type": "source",
      "connectionId": "012MGS0B00000000002N",
      "transformationType": "",
      "config": [
        { "key": "File", "value": "logfile" },
        { "key": "initialPosition", "value": "Current Time" },
        { "key": "rolloverPattern", "value": "test" },
        { "key": "tailingMode", "value": "Single file" }
      ]
    },
    {
      "name": "flatfile to adls_target",
      "type": "target",
      "connectionId": "012MGS0B00000000003D",
      "transformationType": "",
      "config": [
        { "key": "writeStrategy", "value": "Rollover" },
        { "key": "filesystemNameOverride", "value": "test" },
        { "key": "File Name/Expression", "value": "test" },
        { "key": "compressionFormat", "value": "NONE" },
        { "key": "directoryOverride", "value": "/test" },
        { "key": "interimDirectory", "value": "/home/agent/test" },
        { "key": "rolloverSize", "value": 1024 },
        { "key": "rolloverEvents", "value": 100 },
        { "key": "rolloverTime", "value": 300000 }
      ]
    }
  ]
}

Connection configuration for tasks with Amazon S3 as a target

When the target connection of the task is Amazon S3, you can include the following key-value pairs in the config array of the target connection:
| Key | Type | Required | Description |
| --- | --- | --- | --- |
| partitionTime | String | - | The time interval according to which the streaming ingestion task creates partitions in the Amazon S3 bucket. Enter one of the following values: None, 5min, 10min, 15min, 20min, 30min, 1hr, or 1day. |
| minUploadPartSize | Integer | - | Minimum part size, in megabytes, when uploading a large file as a set of multiple independent parts. Use this property to tune the file load to Amazon S3. You can enter a value between 50 and 5120. |
| multipartUploadThreshold | Integer | - | Multipart threshold when uploading objects in multiple parts in parallel. You can enter a value between 50 and 5120. |
| Object Name/Expression | String | Yes | Amazon S3 target file name or a regular expression for the Amazon S3 file name pattern. |

POST request example

To update a streaming ingestion task with a flat file source and an Amazon S3 target, you might send a request similar to the following example:
{
  "name": "flatfile to amazon S3",
  "description": "flatfile to amazon S3",
  "runtimeId": "01000025000000000003",
  "locationId": "5sJ0JDyJyWLlrosS5qJjsQ",
  "currentVersion": "2",
  "messageFormat": "binary",
  "nodes": [
    {
      "name": "flatfile to amazon S3_source",
      "type": "source",
      "connectionId": "012MGS0B00000000002N",
      "transformationType": "",
      "config": [
        { "key": "File", "value": "logfile" },
        { "key": "initialPosition", "value": "Current Time" },
        { "key": "rolloverPattern", "value": "test" },
        { "key": "tailingMode", "value": "Single file" }
      ]
    },
    {
      "name": "flatfile to amazon S3_target",
      "type": "target",
      "connectionId": "012MGS0B0000000000I7",
      "transformationType": "",
      "config": [
        { "key": "partitionTime", "value": "None" },
        { "key": "minUploadPartSize", "value": 5120 },
        { "key": "multipartUploadThreshold", "value": 5120 },
        { "key": "Object Name/Expression", "value": "test" }
      ]
    }
  ],
  "edges": [
    { "from": "flatfile to amazon S3_source", "to": "flatfile to amazon S3_target" }
  ]
}

Connection configuration for tasks with Azure Event Hubs as a target

When the target connection of the task is Azure Event Hubs, you can include the following key-value pairs in the config array of the target connection:
| Key | Type | Required | Description |
| --- | --- | --- | --- |
| sasPolicyName | String | - | The name of the Event Hub Namespace Shared Access Policy. You can enter a string of up to 255 characters. |
| sasPolicyPrimaryKey | String | - | The primary key of the Event Hub Namespace Shared Access Policy. You can enter a string of up to 255 characters. |
| Event Hub | String | Yes | The name of the event hub. You can enter a string of up to 255 characters. The name can contain lowercase letters, uppercase letters, numbers, and the special characters - and _. |

POST request example

To update a streaming ingestion task with a flat file source and an Azure Event Hubs target, you might send a request similar to the following example:
{
  "name": "flatfile to azure event hub",
  "description": "flatfile to azure event hub",
  "runtimeId": "01000025000000000003",
  "locationId": "5sJ0JDyJyWLlrosS5qJjsQ",
  "currentVersion": "2",
  "messageFormat": "binary",
  "nodes": [
    {
      "name": "flatfile to azure event hub_source",
      "type": "source",
      "connectionId": "012MGS0B00000000002N",
      "transformationType": "",
      "config": [
        { "key": "File", "value": "logfile" },
        { "key": "initialPosition", "value": "Current Time" },
        { "key": "rolloverPattern", "value": "test" },
        { "key": "tailingMode", "value": "Single file" }
      ]
    },
    {
      "name": "flatfile to azure event hub_target",
      "type": "target",
      "connectionId": "012MGS0B00000000001S",
      "transformationType": "",
      "config": [
        { "key": "sasPolicyName", "value": "test" },
        { "key": "sasPolicyPrimaryKey", "value": "test" },
        { "key": "Event Hub", "value": "test" }
      ]
    }
  ],
  "edges": [
    { "from": "flatfile to azure event hub_source", "to": "flatfile to azure event hub_target" }
  ]
}

Connection configuration for tasks with JDBC V2 as a target

When the target connection of the task is JDBC V2, you can include the following key-value pair in the config array of the target connection:
| Key | Type | Required | Description |
| --- | --- | --- | --- |
| Table Name | String | Yes | Name of the table to insert data into, in JSON format. Enter a string of up to 988 characters. |

POST request example

To update a streaming ingestion task with a flat file source and a JDBC V2 target, you might send a request similar to the following example:
{
  "name": "FileFile to jdbc",
  "description": "FileToFile to jdbc_target",
  "runtimeId": "01000025000000000003",
  "locationId": "5sJ0JDyJyWLlrosS5qJjsQ",
  "currentVersion": "2",
  "messageFormat": "binary",
  "nodes": [
    {
      "name": "flatfile to jdbc_source",
      "type": "source",
      "connectionId": "012MGS0B00000000002N",
      "transformationType": "",
      "config": [
        { "key": "initialPosition", "value": "Current Time" },
        { "key": "tailingMode", "value": "Single file" },
        { "key": "rolloverPattern", "value": "test" },
        { "key": "File", "value": "logfile" }
      ]
    },
    {
      "name": "flatfile to jdbc_target",
      "type": "target",
      "connectionId": "012MGS0B0000000000KF",
      "transformationType": "",
      "config": [
        { "key": "Table Name", "value": "table" }
      ]
    }
  ],
  "edges": [
    { "from": "flatfile to jdbc_source", "to": "flatfile to jdbc_target" }
  ]
}

Connection configuration for tasks with Amazon Kinesis Streams as a source and as a target

When both the source and target connections of the task are Amazon Kinesis Streams, you can include the following key-value pairs in the config arrays of the source and target connections:
| Key | Type | Required | Description |
| --- | --- | --- | --- |
| appendGUID | Boolean | - | Specifies whether to add a GUID as a suffix to the Amazon DynamoDB table name. Enter true or false. |
| dynamoDB | String | - | Amazon DynamoDB table name in which to store the checkpoint details of the Kinesis source data. You can enter a string of up to 128 characters. |
| Stream | String | Yes | Name of the Kinesis stream to read data from. Enter a string of up to 128 characters. Appears in the source node. |
| Stream Name/Expression | String | Yes | Kinesis stream name or a regular expression to write data to. Enter a string of up to 128 characters. Appears in the target node. |

POST request example

To update a streaming ingestion task with an Amazon Kinesis Streams source and target, you might send a request similar to the following example:
{
  "name": "kinesis to kinesis",
  "description": "kinesis to kinesis",
  "runtimeId": "01000025000000000003",
  "locationId": "5sJ0JDyJyWLlrosS5qJjsQ",
  "currentVersion": "2",
  "messageFormat": "binary",
  "nodes": [
    {
      "name": "kinesis to kinesis_source",
      "type": "source",
      "connectionId": "012MGS0B00000000000F",
      "transformationType": "",
      "config": [
        { "key": "appendGUID", "value": true },
        { "key": "dynamoDB", "value": "table" },
        { "key": "Stream", "value": "test" }
      ]
    },
    {
      "name": "kinesis to kinesis_target",
      "type": "target",
      "connectionId": "012MGS0B00000000000F",
      "transformationType": "",
      "config": [
        { "key": "Stream Name/Expression", "value": "trgt" }
      ]
    }
  ],
  "edges": [
    { "from": "kinesis to kinesis_source", "to": "kinesis to kinesis_target" }
  ]
}

Connection configuration for tasks with flat file as a source and as a target

When both the source and target connections of the task are flat file, you can include the following key-value pairs in the config arrays of the source and target connections:
| Key | Type | Required | Description |
| --- | --- | --- | --- |
| File | String | Yes | Absolute path and name of the source file. Enter the base directory for multiple files mode. |
| initialPosition | String | Yes | Starting position from which to read data in the file to tail. Enter one of the following values: Beginning of File (read from the beginning of the file; don't ingest any data that has already been rolled over) or Current Time (read from the most recently updated part of the file; don't ingest data that was rolled over or already written to the file). |
| rolloverPattern | String | - | File name pattern for the file that rolls over. If the file to tail rolls over, the Secure Agent uses the file name pattern to identify files that have rolled over. If the Secure Agent stops during a file rollover, it resumes from where it left off in the file when it restarts. You can use the asterisk (*) and question mark (?) wildcard characters to indicate files that roll over in the same directory. For example, in ${filename}.log.*, the asterisk (*) represents the successive version numbers appended to the file name. |
| tailingMode | String | Yes | Tail a file or multiple files based on the logging pattern. Enter one of the following values: Single file (tail one file) or Multiple files (tail all the files in the base directory; you can enter a regular expression to indicate the files to tail). |
| File Name | String | Yes | The name of the target file. |
| interimDirectory | String | Yes | Path to the staging directory on the Secure Agent. |
| rolloverSize | Integer | Yes | The file size, in KB, at which the task moves the file from the staging directory to the target. You can enter a value between 1 and 2147483647. |
| rolloverEvents | Integer | Yes | Number of events or messages to accumulate before a file rollover. You can enter a value between 1 and 2147483647. |
| rolloverTime | Integer | - | Length of time, in milliseconds, after which the target file rolls over. You can enter a value between 1 and 2147483647. |
| edges | Array | - | Sequence of dataflow execution. |

POST request example

To update a streaming ingestion task with a flat file source and target, you might send a request similar to the following example:
{
  "name": "FileToFile",
  "description": "FileToFile_V2",
  "runtimeId": "01000025000000000003",
  "locationId": "5sJ0JDyJyWLlrosS5qJjsQ",
  "currentVersion": "2",
  "messageFormat": "binary",
  "nodes": [
    {
      "name": "FileToFile_source",
      "type": "source",
      "connectionId": "0100000B000000000002",
      "transformationType": "",
      "config": [
        { "key": "File", "value": "siagent.log" },
        { "key": "initialPosition", "value": "Current Time" },
        { "key": "rolloverPattern", "value": "" },
        { "key": "tailingMode", "value": "Single file" }
      ]
    },
    {
      "name": "FileToFile_target",
      "type": "target",
      "connectionId": "0100000B000000000002",
      "transformationType": "",
      "config": [
        { "key": "File Name", "value": "testing.log" },
        { "key": "interimDirectory", "value": "/home/agent/infa/test_file_target" },
        { "key": "rolloverSize", "value": 100 },
        { "key": "rolloverEvents", "value": 100 },
        { "key": "rolloverTime", "value": 100 }
      ]
    }
  ],
  "edges": [
    { "from": "FileToFile_source", "to": "FileToFile_target" }
  ],
  "runtimeOptions": {
    "maxLogSize": { "value": 10, "unit": "MB" },
    "logLevel": "INFO"
  }
}

Connection configuration for tasks with Kafka as a source and as a target

When both the source and target connections of the task are Kafka, you can include the following key-value pairs in the config arrays of the source and target connections:
| Key | Type | Required | Description |
| --- | --- | --- | --- |
| Topic | String | Yes | Kafka source topic name or a Java-supported regular expression for the Kafka source topic name pattern to read events from. Enter a string of up to 249 characters. |
| consumerProperties | String | - | A comma-separated list of optional consumer configuration properties, specified as key-value pairs. For example, key1=value1, key2=value2. You can enter a string of up to 4000 characters. |
| producerProperties | String | - | The configuration properties for the producer, provided as a comma-separated list of key-value pairs. You can enter a string of up to 4000 characters. |
| mdFetchTimeout | Integer | - | The time after which the metadata is not fetched. Enter a value between 1 and 2147483647. |
| batchSize | Integer | - | The batch size of the events after which a streaming ingestion task writes data to the target. Enter a value between 1 and 2147483647. |
| Topic Name/Expression | String | Yes | Kafka topic name or a Java-supported regular expression for the Kafka topic name pattern. You can enter a string of up to 249 characters. |

POST request example

To update a streaming ingestion task with a Kafka source and target, you might send a request similar to the following example:
{
  "name": "kafka to kafka",
  "description": "kafka to kafka",
  "runtimeId": "01000025000000000003",
  "locationId": "5sJ0JDyJyWLlrosS5qJjsQ",
  "currentVersion": "2",
  "messageFormat": "binary",
  "nodes": [
    {
      "name": "kafka to kafka_source",
      "type": "source",
      "connectionId": "012MGS0B000000000002",
      "transformationType": "",
      "config": [
        { "key": "consumerProperties", "value": "key=value" },
        { "key": "Topic", "value": "test" }
      ]
    },
    {
      "name": "kafka to kafka_target",
      "type": "target",
      "connectionId": "012MGS0B000000000002",
      "transformationType": "",
      "config": [
        { "key": "producerProperties", "value": "key=value" },
        { "key": "mdFetchTimeout", "value": 5000 },
        { "key": "batchSize", "value": 1048576 },
        { "key": "Topic Name/Expression", "value": "test" }
      ]
    }
  ],
  "edges": [
    { "from": "kafka to kafka_source", "to": "kafka to kafka_target" }
  ]
}

POST response

When the REST API successfully performs an action, it returns a 200 or 201 success response. When the REST API encounters an error, it returns an appropriate error code.
If the request is successful, the response returns the following fields:
| Field | Type | Description |
| --- | --- | --- |
| name | String | Name of the task. |
| description | String | Description of the task, if available. |
| runtimeId | String | ID of the runtime environment. |
| currentVersion | String | The latest dataflow object version. |
| nodes | Array | Details of the task source and target connections. |

Fields of the nodes array

The response includes the following fields in the nodes array:
| Field | Type | Description |
| --- | --- | --- |
| name | String | Name of the connection. |
| type | String | The connection type. |
| connectionId | String | ID of the connection. |
| transformationType | String | The type of transformation. |
| config | Array | Configuration of the source and target connections in key-value pairs. The keys in the array depend on the type of source and target connections. |
If the request is unsuccessful, the response includes a reason for the failure.
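Continuing the Python sketch from the POST request section, you might handle the response as follows. This is a minimal sketch; as the examples below show, a successful response wraps the task in a Success node:
if response.status_code in (200, 201):
    task = response.json().get("Success", {})  # successful responses wrap the task in a Success node
    print("Updated task:", task.get("name"))
    for node in task.get("nodes", []):
        print(node["type"], node["name"], node["connectionId"])
else:
    # An unsuccessful response includes a reason for the failure.
    print("Update failed:", response.status_code, response.text)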

Configuration information in the config array for MQTT as a source

If the request is successful, the response returns the following fields:
| Key | Type | Description |
| --- | --- | --- |
| ClientID | String | Unique identifier of the connection between the MQTT source and the MQTT broker. The client ID is the file-based persistence store that the MQTT source uses to store messages while they are processed. |
| MaxQueueSize | Integer | The maximum number of messages that the processor can store in memory. |
| Topic | String | Name of the MQTT topic. |
If the request is unsuccessful, the response includes a reason for the failure.

POST response example

If the request is successful, you might receive a response similar to the following example in a Success node:
{
  "Success": {
    "name": "mqtt to flatfile",
    "description": "mqtt to flatfile",
    "runtimeId": "01000025000000000003",
    "locationId": "5sJ0JDyJyWLlrosS5qJjsQ",
    "currentVersion": "2",
    "messageFormat": "binary",
    "nodes": [
      {
        "name": "mqtt to flatfile_source",
        "type": "source",
        "connectionId": "012MGS0B00000000001O",
        "transformationType": "",
        "config": [
          { "key": "ClientID", "value": "test" },
          { "key": "MaxQueueSize", "value": 1024 },
          { "key": "Topic", "value": "test" }
        ]
      },
      {
        "name": "mqtt to flatfile_target",
        "type": "target",
        "connectionId": "012MGS0B00000000002N",
        "transformationType": "",
        "config": [
          { "key": "interimDirectory", "value": "/home/agent/test" },
          { "key": "rolloverSize", "value": 1024 },
          { "key": "rolloverEvents", "value": 100 },
          { "key": "rolloverTime", "value": 300000 },
          { "key": "File Name", "value": "test" }
        ]
      }
    ],
    "edges": [
      { "from": "mqtt to flatfile_source", "to": "mqtt to flatfile_target" }
    ]
  }
}

Configuration information in the config array for JMS as a source

The response returns only the fields that you entered in the request.
If the request is successful, the response returns the following fields:
| Key | Type | Description |
| --- | --- | --- |
| destinationType | String | Type of destination that the source service sends JMS messages to. |
| clientId | String | Unique ID of the JMS connection. |
| sharedSubscription | String | Enables multiple consumers to access a single subscription. Applies to the TOPIC destination type. |
| durableSubscription | String | The JMS source service enables inactive subscribers to retain messages and then deliver them when the subscriber reconnects. Applies to the TOPIC destination type. |
| subscriptionName | String | Name of the subscription. Applies to the TOPIC destination type when the topic subscription type is shared, durable, or both. |
| JMS Destination | String | Name of the queue or topic that the JMS provider delivers messages to. |
If the request is unsuccessful, the response includes a reason for the failure.

POST response example

If the request is successful, you might receive a response similar to the following example in a Success node:
{
  "Success": {
    "name": "crud",
    "description": "JMS to FileToFile",
    "runtimeId": "01000025000000000003",
    "locationId": "5sJ0JDyJyWLlrosS5qJjsQ",
    "currentVersion": "2",
    "messageFormat": "binary",
    "nodes": [
      {
        "name": "crud_source",
        "type": "source",
        "connectionId": "012MGS0B000000000003",
        "transformationType": "",
        "config": [
          { "key": "destinationType", "value": "QUEUE" },
          { "key": "clientId", "value": "" },
          { "key": "JMS Destination", "value": "test" }
        ]
      },
      {
        "name": "crud_target",
        "type": "target",
        "connectionId": "012MGS0B00000000000H",
        "transformationType": "",
        "config": [
          { "key": "interimDirectory", "value": "/home/agent/test" },
          { "key": "rolloverSize", "value": 1024 },
          { "key": "rolloverEvents", "value": 100 },
          { "key": "rolloverTime", "value": 300000 },
          { "key": "File Name", "value": "test" }
        ]
      }
    ],
    "edges": [
      { "from": "crud_source", "to": "crud_target" }
    ]
  }
}

Configuration information in the config array for ADLS Gen2 as a target

The response returns only the fields that you entered in the request.
If the request is successful, the response returns the following fields:
| Key | Type | Description |
| --- | --- | --- |
| writeStrategy | String | The action to take when a file of the same name exists in the ADLS Gen2 storage. |
| rolloverSize * | Integer | Target file size, in KB, at which to trigger a rollover. Applies to the Rollover write strategy. |
| rolloverEvents * | Integer | Number of events or messages to accumulate before a rollover. Applies to the Rollover write strategy. |
| rolloverTime * | Integer | Length of time, in milliseconds, after which to trigger a rollover. Applies to the Rollover write strategy. |
| filesystemNameOverride | String | Overrides the default file system name provided in the connection. This file system name is used to write to a file at run time. |
| directoryOverride | String | The ADLS Gen2 directory path to write data to. Overrides the default directory path. If left blank, the default directory path is used. |
| compressionFormat | String | Compression format to use before the streaming ingestion task writes data to the target file. |
| File Name/Expression | String | ADLS Gen2 file name or a regular expression. |

* Enter a value for at least one of the fields.
If the request is unsuccessful, the response includes a reason for the failure.

POST response example

If the request is successful, you might receive a response similar to the following example in a Success node:
{
  "Success": {
    "name": "flatfile to adls",
    "description": "flatfile to adls",
    "runtimeId": "01000025000000000003",
    "locationId": "5sJ0JDyJyWLlrosS5qJjsQ",
    "currentVersion": "2",
    "messageFormat": "binary",
    "nodes": [
      {
        "name": "flatfile to adls_source",
        "type": "source",
        "connectionId": "012MGS0B00000000002N",
        "transformationType": "",
        "config": [
          { "key": "File", "value": "logfile" },
          { "key": "initialPosition", "value": "Current Time" },
          { "key": "rolloverPattern", "value": "test" },
          { "key": "tailingMode", "value": "Single file" }
        ]
      },
      {
        "name": "flatfile to adls_target",
        "type": "target",
        "connectionId": "012MGS0B00000000003D",
        "transformationType": "",
        "config": [
          { "key": "writeStrategy", "value": "Rollover" },
          { "key": "filesystemNameOverride", "value": "test" },
          { "key": "File Name/Expression", "value": "test" },
          { "key": "compressionFormat", "value": "NONE" },
          { "key": "directoryOverride", "value": "/test" },
          { "key": "interimDirectory", "value": "/home/agent/test" },
          { "key": "rolloverSize", "value": 1024 },
          { "key": "rolloverEvents", "value": 100 },
          { "key": "rolloverTime", "value": 300000 }
        ]
      }
    ]
  }
}

Configuration information in the config array for Amazon S3 as a target

The response returns only the fields that you entered in the request.
If the request is successful, the response returns the following fields:
| Key | Type | Description |
| --- | --- | --- |
| partitionTime | String | The time interval according to which the streaming ingestion task creates partitions in the Amazon S3 bucket. |
| minUploadPartSize | Integer | Minimum part size, in megabytes, when uploading a large file as a set of multiple independent parts. Use this property to tune the file load to Amazon S3. |
| multipartUploadThreshold | Integer | Multipart threshold when uploading objects in multiple parts in parallel. |
| Object Name/Expression | String | Amazon S3 target file name or a regular expression for the Amazon S3 file name pattern. |
If the request is unsuccessful, the response includes a reason for the failure.

POST response example

If the request is successful, you might receive a response similar to the following example in the Success node:
{
  "Success": {
    "name": "flatfile to amazon S3",
    "description": "flatfile to amazon S3",
    "runtimeId": "01000025000000000003",
    "locationId": "5sJ0JDyJyWLlrosS5qJjsQ",
    "currentVersion": "2",
    "messageFormat": "binary",
    "nodes": [
      {
        "name": "flatfile to amazon S3_source",
        "type": "source",
        "connectionId": "012MGS0B00000000002N",
        "transformationType": "",
        "config": [
          { "key": "File", "value": "logfile" },
          { "key": "initialPosition", "value": "Current Time" },
          { "key": "rolloverPattern", "value": "test" },
          { "key": "tailingMode", "value": "Single file" }
        ]
      },
      {
        "name": "flatfile to amazon S3_target",
        "type": "target",
        "connectionId": "012MGS0B0000000000I7",
        "transformationType": "",
        "config": [
          { "key": "partitionTime", "value": "None" },
          { "key": "minUploadPartSize", "value": 5120 },
          { "key": "multipartUploadThreshold", "value": 5120 },
          { "key": "Object Name/Expression", "value": "test" }
        ]
      }
    ],
    "edges": [
      { "from": "flatfile to amazon S3_source", "to": "flatfile to amazon S3_target" }
    ]
  }
}

Configuration information in the config array for Azure Event Hubs as a target

The response returns only the fields that you entered in the request.
If the request is successful, the response returns the following fields:
| Key | Type | Description |
| --- | --- | --- |
| sasPolicyName | String | The name of the Event Hub Namespace Shared Access Policy. |
| sasPolicyPrimaryKey | String | The primary key of the Event Hub Namespace Shared Access Policy. |
| Event Hub | String | The name of the event hub. |
If the request is unsuccessful, the response includes a reason for the failure.

POST response example

If the request is successful, you might receive a response similar to the following example in a Success node:
{
  "Success": {
    "name": "flatfile to azure event hub",
    "description": "flatfile to azure event hub",
    "runtimeId": "01000025000000000003",
    "locationId": "5sJ0JDyJyWLlrosS5qJjsQ",
    "currentVersion": "2",
    "messageFormat": "binary",
    "nodes": [
      {
        "name": "flatfile to azure event hub_source",
        "type": "source",
        "connectionId": "012MGS0B00000000002N",
        "transformationType": "",
        "config": [
          { "key": "File", "value": "logfile" },
          { "key": "initialPosition", "value": "Current Time" },
          { "key": "rolloverPattern", "value": "test" },
          { "key": "tailingMode", "value": "Single file" }
        ]
      },
      {
        "name": "flatfile to azure event hub_target",
        "type": "target",
        "connectionId": "012MGS0B00000000001S",
        "transformationType": "",
        "config": [
          { "key": "sasPolicyName", "value": "test" },
          { "key": "sasPolicyPrimaryKey", "value": "test" },
          { "key": "Event Hub", "value": "test" }
        ]
      }
    ],
    "edges": [
      { "from": "flatfile to azure event hub_source", "to": "flatfile to azure event hub_target" }
    ]
  }
}

Configuration information in the config array for JDBC V2 as a target

The response returns only the fields that you entered in the request.
If the request is successful, the response returns the following field:
| Key | Type | Description |
| --- | --- | --- |
| Table Name | String | Name of the table to insert data into, in JSON format. |
If the request is unsuccessful, the response includes a reason for the failure.

POST response example

If the request is successful, you might receive a response similar to the following example in a Success node:
{
  "Success": {
    "name": "FileFile to jdbc",
    "description": "FileToFile to jdbc_target",
    "runtimeId": "01000025000000000003",
    "locationId": "5sJ0JDyJyWLlrosS5qJjsQ",
    "currentVersion": "2",
    "messageFormat": "binary",
    "nodes": [
      {
        "name": "flatfile to jdbc_source",
        "type": "source",
        "connectionId": "012MGS0B00000000002N",
        "transformationType": "",
        "config": [
          { "key": "initialPosition", "value": "Current Time" },
          { "key": "tailingMode", "value": "Single file" },
          { "key": "rolloverPattern", "value": "test" },
          { "key": "File", "value": "logfile" }
        ]
      },
      {
        "name": "flatfile to jdbc_target",
        "type": "target",
        "connectionId": "012MGS0B0000000000KF",
        "transformationType": "",
        "config": [
          { "key": "Table Name", "value": "table" }
        ]
      }
    ],
    "edges": [
      { "from": "flatfile to jdbc_source", "to": "flatfile to jdbc_target" }
    ]
  }
}

Configuration information in the config array for Amazon Kinesis Streams as a source and as a target

The response returns only the fields that you entered in the request.
If the request is successful, the response returns the following fields:
| Key | Type | Description |
| --- | --- | --- |
| appendGUID | Boolean | Specifies whether to add a GUID as a suffix to the Amazon DynamoDB table name. |
| dynamoDB | String | Amazon DynamoDB table name in which to store the checkpoint details of the Kinesis source data. |
| Stream | String | Name of the Kinesis stream to read data from. Applies when you use Amazon Kinesis Streams as a source. |
| Stream Name/Expression | String | Kinesis stream name or a regular expression for the Kinesis stream name pattern. Applies when you use Amazon Kinesis Streams as a target. |
If the request is unsuccessful, the response includes a reason for the failure.

POST response example

If the request is successful, you might receive a response similar to the following example in a Success node:
{
  "Success": {
    "name": "kinesis to kinesis",
    "description": "kinesis to kinesis",
    "runtimeId": "01000025000000000003",
    "locationId": "5sJ0JDyJyWLlrosS5qJjsQ",
    "currentVersion": "2",
    "messageFormat": "binary",
    "nodes": [
      {
        "name": "kinesis to kinesis_source",
        "type": "source",
        "connectionId": "012MGS0B00000000000F",
        "transformationType": "",
        "config": [
          { "key": "appendGUID", "value": true },
          { "key": "dynamoDB", "value": "table" },
          { "key": "Stream", "value": "test" }
        ]
      },
      {
        "name": "kinesis to kinesis_target",
        "type": "target",
        "connectionId": "012MGS0B00000000000F",
        "transformationType": "",
        "config": [
          { "key": "Stream Name/Expression", "value": "trgt" }
        ]
      }
    ],
    "edges": [
      { "from": "kinesis to kinesis_source", "to": "kinesis to kinesis_target" }
    ]
  }
}

Configuration information in the config array for flat file as a source and as a target

The response returns only the fields that you entered in the request.
If the request is successful, the response returns the following fields:
| Key | Type | Description |
| --- | --- | --- |
| File | String | Absolute path and name of the source file you want to read. |
| initialPosition | String | Starting position from which to read data in the file to tail. |
| rolloverPattern | String | File name pattern for the file that rolls over. |
| tailingMode | String | Tail a file or multiple files based on the logging pattern. |
| File Name | String | The name of the target file. |
| interimDirectory | String | Path to the staging directory on the Secure Agent. |
| rolloverSize | Integer | The file size, in KB, at which the task moves the file from the staging directory to the target. |
| rolloverEvents | Integer | Number of events or messages to accumulate before a file rollover. |
| rolloverTime | Integer | Length of time, in milliseconds, after which the target file rolls over. |
If the request is unsuccessful, the response includes a reason for the failure.

POST response example

If the request is successful, you might receive a response similar to the following example:
{
  "Success": {
    "name": "FileToFile",
    "description": "FileToFile_V2",
    "runtimeId": "01000025000000000003",
    "locationId": "5sJ0JDyJyWLlrosS5qJjsQ",
    "currentVersion": "2",
    "messageFormat": "binary",
    "nodes": [
      {
        "name": "FileToFile_source",
        "type": "source",
        "connectionId": "0100000B000000000002",
        "transformationType": "",
        "config": [
          { "key": "File", "value": "siagent.log" },
          { "key": "initialPosition", "value": "Current Time" },
          { "key": "rolloverPattern", "value": "" },
          { "key": "tailingMode", "value": "Single file" }
        ]
      },
      {
        "name": "FileToFile_target",
        "type": "target",
        "connectionId": "0100000B000000000002",
        "transformationType": "",
        "config": [
          { "key": "File Name", "value": "testing.log" },
          { "key": "interimDirectory", "value": "/home/agent/infa/test_file_target" },
          { "key": "rolloverSize", "value": 100 },
          { "key": "rolloverEvents", "value": 100 },
          { "key": "rolloverTime", "value": 100 }
        ]
      }
    ],
    "edges": [
      { "from": "FileToFile_source", "to": "FileToFile_target" }
    ],
    "runtimeOptions": {
      "maxLogSize": { "value": 10, "unit": "MB" },
      "logLevel": "INFO"
    }
  }
}

Configuration information in the config array for Kafka as a source and as a target

The response returns only the fields that you entered in the request.
If the request is successful, the response returns the following fields:
| Key | Type | Description |
| --- | --- | --- |
| Topic | String | Kafka source topic name or a Java-supported regular expression for the Kafka source topic name pattern to read events from. |
| consumerProperties | String | A comma-separated list of optional consumer configuration properties. |
| producerProperties | String | The configuration properties for the producer. |
| mdFetchTimeout | Integer | The time after which the metadata is not fetched. |
| batchSize | Integer | The batch size of the events after which a streaming ingestion task writes data to the target. |
| Topic Name/Expression | String | Kafka topic name or a Java-supported regular expression for the Kafka topic name pattern. |
If the request is unsuccessful, the response includes a reason for the failure.

POST response example

If the request is successful, you might receive a response similar to the following example in a Success node:
{
  "Success": {
    "name": "kafka to kafka",
    "description": "kafka to kafka",
    "runtimeId": "01000025000000000003",
    "locationId": "5sJ0JDyJyWLlrosS5qJjsQ",
    "currentVersion": "2",
    "messageFormat": "binary",
    "nodes": [
      {
        "name": "kafka to kafka_source",
        "type": "source",
        "connectionId": "012MGS0B000000000002",
        "transformationType": "",
        "config": [
          { "key": "consumerProperties", "value": "key=value" },
          { "key": "Topic", "value": "test" }
        ]
      },
      {
        "name": "kafka to kafka_target",
        "type": "target",
        "connectionId": "012MGS0B000000000002",
        "transformationType": "",
        "config": [
          { "key": "producerProperties", "value": "key=value" },
          { "key": "mdFetchTimeout", "value": 5000 },
          { "key": "batchSize", "value": 1048576 },
          { "key": "Topic Name/Expression", "value": "test" }
        ]
      }
    ],
    "edges": [
      { "from": "kafka to kafka_source", "to": "kafka to kafka_target" }
    ]
  }
}