Input Control Type | Description | Recommended Usage |
---|---|---|
Text Box | Use for any information. Displays an empty text box. Requires an understanding of the type of information to enter. If the information is not obvious, use a template parameter description to create a tooltip. Default for string template parameters. | Expressions or partial expressions. Data values and other template parameters that do not fit in the other input control types. |
Condition | Use to create a boolean condition that resolves to True or False. Displays a Data Filter dialog box that allows you to create a simple or advanced data filter. | Filter expressions defined in a Filter object, conditions used in a Router object, and other boolean expressions. |
Expression | Use to create simple or complex expressions. Displays a Field Expression dialog box with a list of source fields, functions, and operators. | Expressions in the Expression and Aggregator objects. |
Field | Use to select a single source or lookup field. Displays a list of fields and allows you to select a single field. | Field selection for a lookup condition or other expressions, or for a link rule to propagate specific fields. |
Field Mapping | Use to map more than one field. Displays a field mapping input control like on the Field Mapping page of the Data Synchronization Task wizard. Allows you to map available fields from upstream sources, lookups, and mapplets to downstream mapplets or targets. Defines whether you can use aggregate functions in expressions. | Define a set of field level mappings between sources, lookups, mapplets, and targets. To allow aggregate functions in expressions, enable the Aggregate Functions option. |
Custom Dropdown | Use to provide a list of options. Displays a drop down menu with options that you configure when you import the integration template. When you define the options, you create a display label and a value for the label. In the Mapping Configuration Task wizard, the label displays. The value does not display. | Define a set of options for possible selection. Does not display the values that the options represent. |
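The label/value split for a custom dropdown can be pictured as a simple mapping. This is an illustrative sketch only, not Informatica Cloud code; the option names and values are hypothetical.

```python
# Hypothetical custom dropdown definition: each option pairs a display label
# (shown in the Mapping Configuration Task wizard) with a value that is
# substituted into the template but never shown to the task developer.
dropdown_options = {
    "Daily": "24",
    "Weekly": "168",
    "Monthly": "720",
}

def resolve_selection(label):
    """Return the hidden value for the label the task developer picked."""
    return dropdown_options[label]
```

Because only the labels display, keep them descriptive; the values can stay in whatever form the template expression expects.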
General Options | Description |
---|---|
Write Backward Compatible Session Log File | Writes the session log to a file. |
Session Log File Name | Name for the session log. Use any valid file name. You can use the following variables as part of the session log name: |
Session Log File Directory | Directory where the session log is saved. Use a directory local to the Secure Agent to run the task. By default, the session log is saved to the following directory: <Secure Agent installation directory>/apps/Data_Integration_Server/logs |
$Source Connection Value | Source connection name for integration templates. |
$Target Connection Value | Target connection name for integration templates. |
Treat Source Rows as | When the task reads source data, it marks each row with an indicator that specifies the target operation to perform when the row reaches the target. Use one of the following options: |
Commit Type | Commit type to use. Use one of the following options: When you do not configure a commit type, the task performs a target commit. |
Commit Interval | Interval in rows between commits. When you do not configure a commit interval, the task commits every 10,000 rows. |
Commit on End of File | Commits data at the end of the file. |
Rollback Transactions on Errors | Rolls back the transaction at the next commit point when the task encounters a non-fatal error. When the task encounters a transformation error, it rolls back the transaction if the error occurs after the effective transaction generator for the target. |
Java Classpath | Java classpath to use. The Java classpath is added to the beginning of the system classpath when the task runs. Use this option when you use third-party Java packages, built-in Java packages, or custom Java packages in a Java transformation. |
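The interplay of Commit Interval and Commit on End of File can be sketched as a simple write loop. This is a conceptual illustration under stated assumptions, not the task engine's actual logic; the function names are hypothetical.

```python
def write_with_commits(rows, write, commit, commit_interval=10000):
    """Write rows, committing every commit_interval rows.

    10,000 is the documented default commit interval. A final commit at
    end of file flushes any remaining rows (the Commit on End of File
    behavior). Sketch only -- the real task also honors the commit type.
    """
    count = 0
    for row in rows:
        write(row)
        count += 1
        if count % commit_interval == 0:
            commit()
    if count % commit_interval != 0:
        commit()  # commit the partial batch at end of file
```

For example, 25,000 rows with the default interval produce commits at rows 10,000 and 20,000, plus one end-of-file commit for the last 5,000 rows.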
Performance Settings | Description |
---|---|
DTM Buffer Size | Amount of memory allocated to the task from the DTM process. By default, a minimum of 12 MB is allocated to the buffer at run time. Use one of the following options: You might increase the DTM buffer size in the following circumstances: |
Incremental Aggregation | Performs incremental aggregation for tasks based on integration templates. |
Reinitialize Aggregate Cache | Overwrites existing aggregate files for a task that performs incremental aggregation. |
Enable High Precision | Processes the Decimal datatype to a precision of 28. |
Session Retry on Deadlock | The task retries a write on the target when a deadlock occurs. |
Pushdown Optimization | Type of pushdown optimization. Use one of the following options: When you use $$PushdownConfig, ensure that the user-defined parameter is configured in the parameter file. When you use pushdown optimization, do not use the Error Log Type property. |
Allow Temporary View for Pushdown | Allows the task to create temporary view objects in the database when it pushes the task to the database. Use when the task includes an SQL override in the Source Qualifier transformation or Lookup transformation. You can also use this option for a task based on an integration template that includes a lookup with a lookup source filter. |
Allow Temporary Sequence for Pushdown | Allows the task to create temporary sequence objects in the database. Use when the task is based on an integration template that includes a Sequence Generator transformation. |
Allow Pushdown for User Incompatible Connections | Indicates that the database user of the active database has read permission on the idle databases. If you enable this option but the database user does not have read permission on the idle databases, the task fails. If you do not enable this option, the task does not push transformation logic to the idle databases. |
Session Sort Order | Order to use to sort character data for the task. |
Advanced Options | Description |
---|---|
Constraint Based Load Ordering | Loads targets based on primary key-foreign key constraints when possible. |
Cache Lookup() Function | Caches lookup functions in integration templates with unconnected lookups. Overrides lookup configuration in the template. By default, the task performs lookups on a row-by-row basis, unless otherwise specified in the template. |
Default Buffer Block Size | Size of buffer blocks used to move data and index caches from sources to targets. By default, the task determines this value at run time. Use one of the following options: The task must have enough buffer blocks to initialize. The minimum number of buffer blocks must be greater than the total number of Source Qualifiers, Normalizers for COBOL sources, and targets. The number of buffer blocks in a task = DTM Buffer Size / Buffer Block Size. Default settings create enough buffer blocks for 83 sources and targets. If the task contains more than 83, you might need to increase DTM Buffer Size or decrease Default Buffer Block Size. |
Line Sequential Buffer Length | Number of bytes that the task reads for each line. Increase this setting from the default of 1024 bytes if source flat file records are larger than 1024 bytes. |
Maximum Partial Session Log Files | The maximum number of partial log files to save. Configure this option with Session Log File Max Size or Session Log File Max Time Period. Default is one. |
Maximum Memory Allowed for Auto Memory Attributes | Maximum memory allocated for automatic cache when you configure the task to determine the cache size at run time. You enable automatic memory settings by configuring a value for this attribute. Enter a numeric value. The default unit is bytes. Append KB, MB, or GB to the value to specify a different unit of measure. For example, 512MB. If the value is set to zero, the task uses default values for memory attributes that you set to auto. |
Maximum Percentage of Total Memory Allowed for Auto Memory Attributes | Maximum percentage of memory allocated for automatic cache when you configure the task to determine the cache size at run time. If the value is set to zero, the task uses default values for memory attributes that you set to auto. |
Additional Concurrent Pipelines for Lookup Cache Creation | Restricts the number of pipelines that the task can create concurrently to pre-build lookup caches. You can configure this property when the Pre-build Lookup Cache property is enabled for a task or transformation. When the Pre-build Lookup Cache property is enabled, the task creates a lookup cache before the Lookup receives the data. If the task has multiple Lookups, the task creates an additional pipeline for each lookup cache that it builds. To configure the number of pipelines that the task can create concurrently, select one of the following options: |
Custom Properties | Configure custom properties for the task. You can override the custom properties that the task uses after the job has started. The task also writes the override value of the property to the session log. |
Pre-build Lookup Cache | Allows the task to build the lookup cache before the Lookup receives the data. The task can build multiple lookup cache files at the same time to improve performance. You can configure this option in an integration template or in a task. The task uses the task-level setting if you configure the Lookup option as Auto for an integration template. Configure one of the following options: When you use this option, configure the Additional Concurrent Pipelines for Lookup Cache Creation property. The task can pre-build the lookup cache if this property is greater than zero. |
DateTime Format String | Date time format for the task. You can specify seconds, milliseconds, microseconds, or nanoseconds. To specify seconds, enter MM/DD/YYYY HH24:MI:SS. To specify milliseconds, enter MM/DD/YYYY HH24:MI:SS.MS. To specify microseconds, enter MM/DD/YYYY HH24:MI:SS.US. To specify nanoseconds, enter MM/DD/YYYY HH24:MI:SS.NS. By default, the format specifies microseconds, as follows: MM/DD/YYYY HH24:MI:SS.US. |
Pre 85 Timestamp Compatibility | Do not use with Informatica Cloud. |
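The buffer-block check described under Default Buffer Block Size can be expressed directly from the documented formula. This is a sketch of the arithmetic only; the 128 KB block size in the usage example is a hypothetical value, not the product default.

```python
def buffer_blocks(dtm_buffer_size, buffer_block_size):
    """Number of buffer blocks in a task = DTM Buffer Size / Buffer Block Size
    (the formula from the documentation). Sizes are in bytes."""
    return dtm_buffer_size // buffer_block_size

def has_enough_blocks(dtm_buffer_size, buffer_block_size, sources_and_targets):
    """The block count must exceed the total number of Source Qualifiers,
    Normalizers for COBOL sources, and targets for the task to initialize."""
    return buffer_blocks(dtm_buffer_size, buffer_block_size) > sources_and_targets
```

For example, a 12 MB DTM buffer with a hypothetical 128 KB block size yields 96 blocks, enough for the documented limit of 83 sources and targets but not for 100; in the latter case you would increase DTM Buffer Size or decrease Default Buffer Block Size.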
Error Handling Options | Description |
---|---|
Stop On Errors | Indicates how many non-fatal errors the task can encounter before it stops the session. Non-fatal errors include reader, writer, and DTM errors. Enter the number of non-fatal errors you want to allow before stopping the session. The task maintains an independent error count for each source, target, and transformation. If you specify 0, non-fatal errors do not cause the session to stop. |
Override Tracing | Overrides tracing levels set on an object level. |
On Stored Procedure Error | Determines the behavior when a task based on an integration template encounters pre- or post-session stored procedure errors. Use one of the following options: By default, the task stops. |
On Pre-Session Command Task Error | Determines the behavior when a task that includes pre-session shell commands encounters errors. Use one of the following options: By default, the task stops. |
On Pre-Post SQL Error | Determines the behavior when a task that includes pre- or post-session SQL encounters errors: By default, the task stops. |
Error Log Type | Specifies the type of error log to create. You can specify flat file or no log. Default is none. You cannot log row errors from XML file sources. You can view the XML source errors in the session log. Do not use this property when you use the Pushdown Optimization property. |
Error Log File Directory | Specifies the directory where errors are logged. By default, the error log file directory is $PMBadFilesDir\. |
Error Log File Name | Specifies error log file name. By default, the error log file name is PMError.log. |
Log Row Data | Specifies whether or not to log transformation row data. When you enable error logging, the task logs transformation row data by default. If you disable this property, n/a or -1 appears in transformation row data fields. |
Log Source Row Data | Specifies whether or not to log source row data. By default, the check box is clear and source row data is not logged. |
Data Column Delimiter | Delimiter for string type source row data and transformation group row data. By default, the task uses a pipe ( | ) delimiter. Tip: Verify that you do not use the same delimiter for the row data as the error logging columns. If you use the same delimiter, you may find it difficult to read the error log file. |
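The delimiter-collision issue that the tip warns about is easy to see with a minimal split, shown here as an illustrative Python sketch (not Informatica Cloud code; the sample row data is made up).

```python
def split_row_data(line, delimiter="|"):
    """Split a logged row-data string on the configured delimiter.

    The pipe ( | ) is the documented default. If a data value itself
    contains the delimiter, the split produces extra columns, which is
    why the tip advises choosing a delimiter that does not appear in
    the row data or the error logging columns.
    """
    return line.split(delimiter)
```

For example, `"101|Jones|1200.50"` splits into three fields as intended, but `"101|Jones|Acme|Co"` splits into four because the company name contains the delimiter.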
If you choose to delete the tasks, Informatica Cloud deletes all listed tasks immediately. You cannot undo this action.
If you want to make significant changes to the integration template and already have tasks that use the existing template, you might want to import the template again and configure the changes with the newly imported template.
Template Details Property | Required/Optional | Description |
---|---|---|
Template Name | Required | Name of the template. |
Description | Optional | Description. |
Template XML File | Required | Integration template XML file. |
Template Image File | Optional | Image file associated with the integration template. Use a JPG or PNG file that is less than 1 MB in size and 1024 x 768 pixels or smaller. To remove the selected template image file, click Clear. |
Template Parameter Detail | Required/Optional | Description |
---|---|---|
Label | Required | Label used for the template parameter in the Mapping Configuration Task wizard. Default is the template parameter name. |
Description | Optional | Tooltip for the template parameter in the Mapping Configuration Task wizard. Use to provide additional information about the template parameter. |
Template Parameter Display Property | Required/Optional | Description |
---|---|---|
Default Value | Optional | Default value for the template parameter. |
Visible | Required | Determines if the template parameter displays in the Mapping Configuration Task wizard. Use to hide a template parameter that does not need to be displayed. |
Editable | Required | Determines if the template parameter is editable in the Mapping Configuration Task wizard: |
Required | Required | Determines if a template parameter must be defined in the Mapping Configuration Task wizard: |
Valid Connection Types | Required for connection template parameters | Defines the connection type allowed for a connection template parameter. Select a connection type or select All Connection Types. Connection template parameters only. |
Logical Connection | Optional | Logical connection name. Use when you want the task developer to use the same connection for logical connections with the same name. Enter any string value. Use the same string for logical connections that should use the same connection. Connection template parameters only. |
Input Control | Required for string template parameters | Defines how the task developer can enter information to configure template parameters in the Mapping Configuration Task wizard. String template parameters only. |
Field Filtering | Optional for condition, expression, and field input controls | A regular expression to limit the fields from the input control. Use a colon with the include and exclude statements. You can use a combination of include and exclude statements. Include statements take precedence. Use semicolons or line breaks to separate field names. Use any valid regular expression syntax. For example: Include: *ID$; First_Name Last_Name Annual_Revenue Exclude: DriverID$ |
Left Title | Required for field mapping input controls | Name for the left table of the field mapping display. The left table can display source, mapplet, and lookup fields. |
Left Field Filtering | Optional for field mapping input controls | Regular expression to limit the fields that display in the left table of the field mapping display. Use a colon with the include and exclude statements. You can use a combination of include and exclude statements. Include statements take precedence. Use semicolons or line breaks to separate field names. Use any valid regular expression syntax. For example: Include: *ID$; First_Name Last_Name Annual_Revenue Exclude: DriverID$ |
Right Title | Required for field mapping input controls | Name for the right table of the field mapping display. The right table can display target, mapplet, and lookup fields. |
Right Data Provider | Required for field mapping input controls | Set of fields to display in the right table of the field mapping display: |
Fields Declaration | Required for field mapping input controls | List of fields to display on the right table of the field mapping display. List field names and associated datatypes separated by a line break or semicolon (;) as follows: <datatype>(<precision>,<scale>)<field name1>; <field name2>;... or <datatype>(<precision>,<scale>)<field name1> <field name2> <datatype>(<precision>,<scale>)<field name3> If you omit the datatype, Informatica Cloud assumes a datatype of String(255). Available when Static is selected for the Right Data Provider. |
Right Field Filtering | Optional for field mapping input controls | Regular expression to limit the fields that display in the right table of the field mapping display. Use a colon with the include and exclude statements. You can use a combination of include and exclude statements. Include statements take precedence. Use semicolons or line breaks to separate field names. Use any valid regular expression syntax. For example: Include: *ID$; First_Name Last_Name Annual_Revenue Exclude: DriverID$ |
Aggregate Functions | Required for expression and field mapping input controls | Enables the display and use of aggregate functions in the Field Expression dialog box in the Mapping Configuration Task wizard. |
Label | Required for custom dropdown input controls | Label for an option to display in a custom dropdown. Click Add to add additional options. Click Remove to remove any configured options. |
Value | Required for custom dropdown input controls | Value for an option to display in a custom dropdown. Click Add to add additional options. Click Remove to remove any configured options. |
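The Fields Declaration syntax described above can be parsed with a short routine. This is a minimal sketch under stated assumptions, not the product's parser: it treats each semicolon- or newline-separated entry independently and applies the documented String(255) default when the datatype is omitted.

```python
import re

def parse_fields_declaration(text):
    """Parse entries like 'Decimal(10,2) Amount' separated by ';' or
    line breaks into (name, datatype, precision, scale) tuples.

    Per the documentation, an entry with no datatype is assumed to be
    String(255). Scale defaults to 0 when omitted.
    """
    fields = []
    for entry in re.split(r"[;\n]", text):
        entry = entry.strip()
        if not entry:
            continue
        m = re.match(r"(\w+)\((\d+)(?:,(\d+))?\)\s+(.+)", entry)
        if m:
            dtype, prec, scale, name = m.groups()
            fields.append((name, dtype, int(prec), int(scale or 0)))
        else:
            # No datatype given: apply the documented String(255) default.
            fields.append((entry, "String", 255, 0))
    return fields
```

For example, `"Decimal(10,2) Amount; Comment"` yields `Amount` as Decimal(10,2) and `Comment` as the default String(255).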