General options | Description |
---|---|
Write Backward Compatible Session Log File | Writes the session log to a file. |
Session Log File Name | Name for the session log. Use any valid file name. You can customize the session log file name in one of the following ways:<br>- If you use a static name, the log file name is appended with a sequence number each time the task runs, for example samplelog.1, samplelog.2. When the maximum number of log files is reached, the numbering sequence begins a new cycle.<br>- If you use a dynamic name, the file name is unique for every task run. The Maximum Number of Log Files property is not applied. To purge old log files, delete the files manually. |
Session Log File Directory | Directory where the session log is saved. Use a directory local to the Secure Agent to run the task. By default, the session log is saved to the following directory: <Secure Agent installation directory>/apps/Data_Integration_Server/logs |
$Source Connection Value | Source connection name. |
$Target Connection Value | Target connection name. |
Source File Directory | Source file directory path. Use for flat file connections only. |
Target File Directory | Target file directory path. Use for flat file connections only. |
Treat Source Rows as | When the task reads source data, it marks each row with an indicator that specifies the target operation to perform when the row reaches the target. Use one of the following options: |
Commit Type | Commit type to use. Use one of the following options. When you do not configure a commit type, the task performs a target commit. |
Commit Interval | Interval in rows between commits. When you do not configure a commit interval, the task commits every 10,000 rows. |
Commit on End of File | Commits data at the end of the file. |
Rollback Transactions on Errors | Rolls back the transaction at the next commit point when the task encounters a non-fatal error. When the task encounters a transformation error, it rolls back the transaction if the error occurs after the effective transaction generator for the target. |
Java Classpath | This option is not used. |
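The Commit Interval and Commit on End of File properties in the table above amount to a batching loop that commits every N rows and once more when the input is exhausted. A minimal sketch in Python, using an in-memory SQLite target purely for illustration; the actual commit logic is internal to the Data Integration engine:

```python
import sqlite3

def load_with_commit_interval(rows, commit_interval=10_000):
    """Write rows to a target table, committing every commit_interval rows
    and once more at end of file. Returns (row_count, commit_count)."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE target (id INTEGER, val TEXT)")
    uncommitted = 0
    commits = 0
    for row in rows:
        conn.execute("INSERT INTO target VALUES (?, ?)", row)
        uncommitted += 1
        if uncommitted >= commit_interval:
            conn.commit()   # commit point: buffered rows become permanent
            commits += 1
            uncommitted = 0
    if uncommitted:         # commit on end of file
        conn.commit()
        commits += 1
    count = conn.execute("SELECT COUNT(*) FROM target").fetchone()[0]
    conn.close()
    return count, commits
```

With the 10,000-row default, a 25,000-row load would commit at rows 10,000 and 20,000, then once more for the remaining 5,000 rows.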
Performance settings | Description |
---|---|
DTM Buffer Size | Amount of memory allocated to the task from the DTM process. By default, a minimum of 12 MB is allocated to the buffer at run time. Use one of the following options: You might increase the DTM buffer size in the following circumstances: |
Incremental Aggregation | Performs incremental aggregation for tasks. |
Reinitialize Aggregate Cache | This option is not used. |
Enable High Precision | Processes the Decimal data type to a precision of 28. |
Session Retry on Deadlock | The task retries a write on the target when a deadlock occurs. |
SQL ELT Optimization | Type of SQL ELT optimization. Use one of the following options: When you use $$PushdownConfig, ensure that the user-defined parameter is configured in the parameter file. When you use SQL ELT optimization, do not use the Error Log Type property. The SQL ELT optimization functionality varies depending on the support available for the connector. For more information, see the help for the appropriate connector. |
Create Temporary View | Allows the task to create temporary view objects in the database when it pushes the task to the database. Use when the task includes an SQL override in the Source Qualifier transformation or Lookup transformation. |
Create Temporary Sequence | This option is not used. |
Enable cross-schema SQL ELT optimization | Enables SQL ELT optimization for tasks that use source or target objects associated with different schemas within the same database. To see if cross-schema SQL ELT optimization is applicable to the connector you use, see the help for the relevant connector. This property is enabled by default. |
Allow SQL ELT Optimization for User Incompatible Connections | Indicates that the database user of the active database has read permission on the idle databases. If you enable this option but the user does not actually have read permission, the task fails. If you do not enable this option, the task does not push transformation logic to the idle databases. |
Session Sort Order | Order to use to sort character data for the task. |
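The $$PushdownConfig entry mentioned under SQL ELT Optimization is a user-defined parameter that lives in the task's parameter file. One possible shape, shown for illustration only; the section name and the value Full are assumptions, so check the parameter file format for your task type:

```ini
[Global]
$$PushdownConfig=Full
```

Because the task reads the parameter at run time, the optimization type can change between runs without editing the task itself.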
Advanced options | Description |
---|---|
Constraint Based Load Ordering | This option is not used. |
Cache Lookup() Function | This option is not used. |
Default Buffer Block Size | Size of buffer blocks used to move data and index caches from sources to targets. By default, the task determines this value at run time. Use one of the following options: The task must have enough buffer blocks to initialize. The minimum number of buffer blocks must be greater than the total number of Source Qualifiers, Normalizers for COBOL sources, and targets. The number of buffer blocks in a task = DTM Buffer Size / Buffer Block Size. Default settings create enough buffer blocks for 83 sources and targets. If the task contains more than 83, you might need to increase DTM Buffer Size or decrease Default Buffer Block Size. |
Line Sequential Buffer Length | Number of bytes that the task reads for each row. Data Integration dynamically increases the maximum line sequential buffer length from the default of 1024 bytes. |
Maximum Memory Allowed for Auto Memory Attributes | Maximum memory allocated for automatic cache when you configure the task to determine the cache size at run time. You enable automatic memory settings by configuring a value for this attribute. Enter a numeric value. The default unit is bytes. Append KB, MB, or GB to the value to specify a different unit of measure. For example, 512MB. If the value is set to zero, the task uses default values for memory attributes that you set to auto. |
Maximum Percentage of Total Memory Allowed for Auto Memory Attributes | Maximum percentage of memory allocated for automatic cache when you configure the task to determine the cache size at run time. If the value is set to zero, the task uses default values for memory attributes that you set to auto. |
Additional Concurrent Pipelines for Lookup Cache Creation | Restricts the number of pipelines that the task can create concurrently to pre-build lookup caches. You can configure this property when the Pre-build Lookup Cache property is enabled for a task or transformation. When the Pre-build Lookup Cache property is enabled, the task creates a lookup cache before the Lookup receives the data. If the task has multiple Lookups, the task creates an additional pipeline for each lookup cache that it builds. To configure the number of pipelines that the task can create concurrently, select one of the following options: |
Custom Properties | Configure custom properties for the task. You can override the custom properties that the task uses after the job has started. The task also writes the override value of the property to the session log. |
Pre-build Lookup Cache | Allows the task to build the lookup cache before the Lookup receives the data. The task can build multiple lookup cache files at the same time to improve performance. Configure one of the following options: When you use this option, configure the Additional Concurrent Pipelines for Lookup Cache Creation property. The task can pre-build the lookup cache if this property is greater than zero. |
DateTime Format String | Date/time format for the task. You can specify seconds, milliseconds, microseconds, or nanoseconds. To specify seconds, enter MM/DD/YYYY HH24:MI:SS. To specify milliseconds, enter MM/DD/YYYY HH24:MI:SS.MS. To specify microseconds, enter MM/DD/YYYY HH24:MI:SS.US. To specify nanoseconds, enter MM/DD/YYYY HH24:MI:SS.NS. By default, the format specifies microseconds, as follows: MM/DD/YYYY HH24:MI:SS.US. |
Pre 85 Timestamp Compatibility | This option is not used. |
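The Default Buffer Block Size entry above gives the formula: number of buffer blocks = DTM Buffer Size / Buffer Block Size, and that count must exceed the total of Source Qualifiers, Normalizers for COBOL sources, and targets. A small sketch of the arithmetic, including the KB/MB/GB suffixes described under Maximum Memory Allowed for Auto Memory Attributes; the 64 KB block size used below is an assumed example, not a documented default:

```python
def parse_size(value):
    """Parse a size such as '512MB' into bytes; a bare number is bytes."""
    units = {"KB": 1024, "MB": 1024**2, "GB": 1024**3}
    for suffix, factor in units.items():
        if value.upper().endswith(suffix):
            return int(value[: -len(suffix)]) * factor
    return int(value)

def buffer_blocks(dtm_buffer_size, buffer_block_size):
    """Number of buffer blocks = DTM Buffer Size / Buffer Block Size."""
    return parse_size(dtm_buffer_size) // parse_size(buffer_block_size)

def enough_blocks(dtm_buffer_size, buffer_block_size, sources_and_targets):
    """The block count must be greater than the total number of
    Source Qualifiers, Normalizers for COBOL sources, and targets."""
    return buffer_blocks(dtm_buffer_size, buffer_block_size) > sources_and_targets
```

With the 12 MB default DTM buffer and the assumed 64 KB block size, the task would get 12 MB / 64 KB = 192 blocks; the run-time defaults the engine actually picks can differ.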
Error handling options | Description |
---|---|
Stop on Errors | The number of non-fatal errors the task can encounter before it stops the job. Non-fatal errors include reader, writer, and transformation errors. Enter the number of non-fatal errors you want to allow before stopping the job. The task maintains an independent error count for each source, target, and transformation. If you specify 0, non-fatal errors do not cause the job to stop. Optionally, you can use the $PMSessionErrorThreshold service process variable to set this threshold. Configure this variable as a DTM custom property for the Data Integration Server. You can override the value in a parameter file. For more information, see the following KB article: HOW TO: Set the session error threshold for a mapping task using $PMSessionErrorThreshold in CDI. |
Override Tracing | Overrides the tracing level set at the object level. |
On Stored Procedure Error | This option is not used. |
On Pre-Session Command Task Error | Determines the behavior when a task that includes pre-session shell commands encounters errors. Use one of the following options: By default, the task stops. |
On Pre-Post SQL Error | Determines the behavior when a task that includes pre-session or post-session SQL encounters errors. Use one of the following options: By default, the task stops. |
Error Log Type | Specifies the type of error log to create. You can specify flat file or no log. Default is none. You cannot log row errors from XML file sources. You can view the XML source errors in the session log. Do not use this property when you use the SQL ELT Optimization property. |
Error Log File Directory | Specifies the directory where errors are logged. By default, the error log file directory is $PMBadFilesDir\. |
Error Log File Name | Specifies error log file name. By default, the error log file name is PMError.log. |
Log Row Data | Specifies whether or not to log transformation row data. When you enable error logging, the task logs transformation row data by default. If you disable this property, n/a or -1 appears in transformation row data fields. |
Log Source Row Data | Specifies whether or not to log source row data. By default, the check box is clear and source row data is not logged. |
Data Column Delimiter | Delimiter for string type source row data and transformation group row data. By default, the task uses a pipe ( \| ) delimiter. Tip: Verify that you do not use the same delimiter for the row data as for the error logging columns. If you use the same delimiter, you may find it difficult to read the error log file. |
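The Stop on Errors behavior described above, independent error counts per source, target, and transformation, with 0 meaning non-fatal errors never stop the job, can be sketched as follows. This is a minimal illustration, not the engine's implementation, and the class and method names are invented for the example:

```python
class ErrorThreshold:
    """Track non-fatal errors per object (source, target, or transformation)
    and signal a stop once any single object's count reaches the threshold."""

    def __init__(self, stop_on_errors):
        self.stop_on_errors = stop_on_errors  # 0: never stop on non-fatal errors
        self.counts = {}                      # independent count per object

    def record_error(self, object_name):
        """Record one non-fatal error; return True if the job should stop."""
        self.counts[object_name] = self.counts.get(object_name, 0) + 1
        if self.stop_on_errors == 0:
            return False
        return self.counts[object_name] >= self.stop_on_errors
```

Because each object keeps its own counter, three errors spread across three different objects do not trip a threshold of 3; three errors on the same object do.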