Step 3. Import an Integration Template and Configure Template Parameter Properties

Import an integration template when you want to use it in your organization. After you import an integration template, you can use it in Mapping Configuration tasks.

Logical Connections

A logical connection is a name used to represent a shared connection.
Use a logical connection when you want to use the same connection for more than one connection template parameter in an integration template. To use a logical connection, enter the same logical connection name for all template parameters that should use the same connection.
When you use logical connections for connections that display on a single page of the Mapping Configuration Task wizard, the wizard displays a single connection for the user to select.
For example, you might use a logical connection when an integration template includes two sources that should reside in the same source system. To ensure that the task developer selects a single connection for both sources, when you import the integration template, you can enter "Sources" as the logical connection name for both source template parameters. When you create a Mapping Configuration task, the Mapping Configuration Task wizard displays one source connection template parameter named Sources and two source object template parameters.
When you use logical connections for connections that display on different pages of the Mapping Configuration Task wizard, the wizard uses the logical connection name for the connection template parameters. If a logical connection appears on the Other Parameters page, it displays in the Shared Connection Details section. Because the requirement to use the same connection is less obvious in this case, you might configure a lookup to display on the same wizard page as the other logical connection, or use descriptions for each logical template parameter to create tooltips that guide the task developer.
You can use any string value for the logical connection name.

Input Control Options

Input control options define how the task developer configures template parameters in the Mapping Configuration Task wizard. You can configure template parameter input control options when you import or edit an integration template.
When you configure an input control option for a template parameter, select an input control option that is appropriate for the template parameter type.
For example, for a filter template parameter, you could use a condition, expression, or text box input control, but the condition input control would best indicate the type of information that the template parameter requires. The field or field mapping input controls would not allow the task developer to enter the appropriate information for a filter template parameter.
The following table describes the input control options that you can use:
Input Control Type
Description
Recommended Usage
Text Box
Use for any information.
Displays an empty text box.
Requires an understanding of the type of information to enter. If the information is not obvious, use a template parameter description to create a tooltip.
Default for string template parameters.
Expressions or partial expressions. Data values and other template parameters that do not fit in the other input control types.
Condition
Use to create a boolean condition that resolves to True or False.
Displays a Data Filter dialog box that allows you to create a simple or advanced data filter.
Filter expressions defined in a Filter object, conditions used in a Router object, and other boolean expressions.
Expression
Use to create simple or complex expressions.
Displays a Field Expression dialog box with a list of source fields, functions, and operators.
Expressions in the Expression and Aggregator objects.
Field
Use to select a single source or lookup field.
Displays a list of fields and allows you to select a single field.
Field selection for a lookup condition or other expressions, or for a link rule that propagates specific fields.
Field Mapping
Use to map more than one field.
Displays a field mapping input control similar to the one on the Field Mapping page of the Data Synchronization Task wizard.
Allows you to map available fields from upstream sources, lookups, and mapplets to downstream mapplets or targets.
Defines whether you can use aggregate functions in expressions.
Define a set of field level mappings between sources, lookups, mapplets, and targets.
To allow aggregate functions in expressions, enable the Aggregate Functions option.
Custom Dropdown
Use to provide a list of options.
Displays a drop-down menu with options that you configure when you import the integration template.
When you define the options, you create a display label and a value for the label. In the Mapping Configuration Task wizard, the label displays. The value does not display.
Define a set of options for possible selection. Does not display the values that the options represent.

Parameter Display Customization

You can add one or more steps or pages to the Mapping Configuration Task wizard to customize the order in which a task developer configures template parameters. You can configure the parameter display order when you import or edit an integration template.
When you import an integration template, the defined source and target parameters appear on the Sources or Targets steps by default. You can move other connection template parameters to these steps. All other template parameters appear in the Other Parameters step by default.
You can create steps in the Mapping Configuration Task wizard to group similar parameters together. For example, you can group the field mapping parameters into one step, and the filter parameters into another step.
You can also create steps in the Mapping Configuration Task wizard to order logically dependent parameters. The task developer must then configure parameters in a certain order. For example, an integration template includes a parameterized mapplet and a field mapping. Configure the mapplet parameter to appear on a step before the field mapping parameter. This ensures that the field mapping parameter displays the mapplet input and output fields.

Advanced Session Properties

Advanced session properties are optional properties that you can configure in integration templates and Mapping Configuration tasks. Use caution when you configure advanced session properties. The properties are based on PowerCenter advanced session properties and might not be appropriate for use with all tasks.
You can configure the following types of advanced session properties:

General Options

The following table describes the general options:
General Options
Description
Write Backward Compatible Session Log File
Writes the session log to a file.
Session Log File Name
Name for the session log. Use any valid file name. You can use the following variables as part of the session log name:
  • $CurrentTaskName. Replaced with the task name.
  • $CurrentTime. Replaced with the current time.
Session Log File Directory
Directory where the session log is saved. Use a directory local to the Secure Agent that runs the task.
By default, the session log is saved to the following directory:
<Secure Agent installation directory>/apps/Data_Integration_Server/logs
$Source Connection Value
Source connection name for integration templates.
$Target Connection Value
Target connection name for integration templates.
Treat Source Rows as
When the task reads source data, it marks each row with an indicator that specifies the target operation to perform when the row reaches the target. Use one of the following options:
  • Insert. All rows are marked for insert into the target.
  • Update. All rows are marked for update in the target.
  • Delete. All rows are marked for delete from the target.
  • Data Driven. The task uses the Update Strategy object in the data flow to mark the operation for each source row.
Commit Type
Commit type to use. Use one of the following options:
  • Source. The task performs commits based on the number of source rows.
  • Target. The task performs commits based on the number of target rows.
  • User Defined. The task performs commits based on the commit logic defined in the integration template.
When you do not configure a commit type, the task performs a target commit.
Commit Interval
Interval in rows between commits.
When you do not configure a commit interval, the task commits every 10,000 rows.
Commit on End of File
Commits data at the end of the file.
Rollback Transactions on Errors
Rolls back the transaction at the next commit point when the task encounters a non-fatal error.
When the task encounters a transformation error, it rolls back the transaction if the error occurs after the effective transaction generator for the target.
Java Classpath
Java classpath to use.
The Java classpath is added to the beginning of the system classpath when the task runs.
Use this option when you use third-party Java packages, built-in Java packages, or custom Java packages in a Java transformation.
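For example, a Session Log File Name that combines both of the variables described above might look like the following (the underscore separator and .log extension are illustrative choices, not requirements):

```
$CurrentTaskName_$CurrentTime.log
```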

Performance Settings

The following table describes the performance settings:
Performance Settings
Description
DTM Buffer Size
Amount of memory allocated to the task from the DTM process.
By default, a minimum of 12 MB is allocated to the buffer at run time.
Use one of the following options:
  • Auto. Enter Auto to use automatic memory settings. When you use Auto, configure Maximum Memory Allowed for Auto Memory Attributes.
  • A numeric value. Enter the numeric value that you want to use. The default unit of measure is bytes. Append KB, MB, or GB to the value to specify a different unit of measure. For example, 512MB.
You might increase the DTM buffer size in the following circumstances:
  • When a task contains large amounts of character data, increase the DTM buffer size to 24 MB.
  • When a task contains n partitions, increase the DTM buffer size to at least n times the value for the task with one partition.
  • When a source contains a large binary object with a precision larger than the allocated DTM buffer size, increase the DTM buffer size so that the task does not fail.
Incremental Aggregation
Performs incremental aggregation for tasks based on integration templates.
Reinitialize Aggregate Cache
Overwrites existing aggregate files for a task that performs incremental aggregation.
Enable High Precision
Processes the Decimal datatype to a precision of 28.
Session Retry on Deadlock
The task retries a write on the target when a deadlock occurs.
Pushdown Optimization
Type of pushdown optimization. Use one of the following options:
  • None. The task processes all transformation logic for the task.
  • To Source. The task pushes as much of the transformation logic to the source database as possible.
  • To Target. The task pushes as much of the transformation logic to the target database as possible.
  • Full. The task pushes as much of the transformation logic to the source and target databases as possible. The task processes any transformation logic that it cannot push to a database.
  • $$PushdownConfig. The task uses the pushdown optimization type specified in the user-defined parameter file for the task. When you use $$PushdownConfig, ensure that the user-defined parameter is configured in the parameter file.
When you use pushdown optimization, do not use the Error Log Type property.
Allow Temporary View for Pushdown
Allows the task to create temporary view objects in the database when it pushes the task to the database.
Use when the task includes an SQL override in the Source Qualifier transformation or Lookup transformation. You can also use this option for a task based on an integration template that includes a lookup with a lookup source filter.
Allow Temporary Sequence for Pushdown
Allows the task to create temporary sequence objects in the database.
Use when the task is based on an integration template that includes a Sequence Generator transformation.
Allow Pushdown for User Incompatible Connections
Indicates that the database user of the active database has read permission on idle databases.
If you enable this option and the database user does not have read permission on the idle databases, the task fails.
If you do not enable this option, the task does not push transformation logic to the idle databases.
Session Sort Order
Order to use to sort character data for the task.

Advanced Options

The following table describes the advanced options:
Advanced Options
Description
Constraint Based Load Ordering
Loads targets based on primary key-foreign key constraints when possible.
Cache Lookup() Function
Caches lookup functions in integration templates with unconnected lookups. Overrides lookup configuration in the template.
By default, the task performs lookups on a row-by-row basis, unless otherwise specified in the template.
Default Buffer Block Size
Size of buffer blocks used to move data and index caches from sources to targets. By default, the task determines this value at run time.
Use one of the following options:
  • Auto. Enter Auto to use automatic memory settings. When you use Auto, configure Maximum Memory Allowed for Auto Memory Attributes.
  • A numeric value. Enter the numeric value that you want to use. The default unit of measure is bytes. Append KB, MB, or GB to the value to specify a different unit of measure. For example, 512MB.
The task must have enough buffer blocks to initialize. The minimum number of buffer blocks must be greater than the total number of Source Qualifiers, Normalizers for COBOL sources, and targets.
The number of buffer blocks in a task = DTM Buffer Size / Buffer Block Size. Default settings create enough buffer blocks for 83 sources and targets. If the task contains more than 83, you might need to increase DTM Buffer Size or decrease Default Buffer Block Size.
Line Sequential Buffer Length
Number of bytes that the task reads for each line. Increase this setting from the default of 1024 bytes if source flat file records are larger than 1024 bytes.
Maximum Partial Session Log Files
The maximum number of partial log files to save. Configure this option with Session Log File Max Size or Session Log File Max Time Period. Default is one.
Maximum Memory Allowed for Auto Memory Attributes
Maximum memory allocated for automatic cache when you configure the task to determine the cache size at run time.
You enable automatic memory settings by configuring a value for this attribute. Enter a numeric value. The default unit is bytes. Append KB, MB, or GB to the value to specify a different unit of measure. For example, 512MB.
If the value is set to zero, the task uses default values for memory attributes that you set to auto.
Maximum Percentage of Total Memory Allowed for Auto Memory Attributes
Maximum percentage of memory allocated for automatic cache when you configure the task to determine the cache size at run time. If the value is set to zero, the task uses default values for memory attributes that you set to auto.
Additional Concurrent Pipelines for Lookup Cache Creation
Restricts the number of pipelines that the task can create concurrently to pre-build lookup caches. You can configure this property when the Pre-build Lookup Cache property is enabled for a task or transformation.
When the Pre-build Lookup Cache property is enabled, the task creates a lookup cache before the Lookup receives the data. If the task has multiple Lookups, the task creates an additional pipeline for each lookup cache that it builds.
To configure the number of pipelines that the task can create concurrently, select one of the following options:
  • Auto. The task determines the number of pipelines it can create at run time.
  • Numeric value. The task can create the specified number of pipelines to create lookup caches.
Custom Properties
Configure custom properties for the task. You can override the custom properties that the task uses after the job has started. The task also writes the override value of the property to the session log.
Pre-build Lookup Cache
Allows the task to build the lookup cache before the Lookup receives the data. The task can build multiple lookup cache files at the same time to improve performance.
You can configure this option in an integration template or in a task. The task uses the task-level setting if you configure the Lookup option as Auto in the integration template.
Configure one of the following options:
  • Always allowed. The task can build the lookup cache before the Lookup receives the first source row. The task creates an additional pipeline to build the cache.
  • Always disallowed. The task cannot build the lookup cache before the Lookup receives the first row.
When you use this option, configure the Additional Concurrent Pipelines for Lookup Cache Creation property. The task can pre-build the lookup cache if this property is greater than zero.
DateTime Format String
Date time format for the task. You can specify seconds, milliseconds, microseconds, or nanoseconds.
To specify seconds, enter MM/DD/YYYY HH24:MI:SS.
To specify milliseconds, enter MM/DD/YYYY HH24:MI:SS.MS.
To specify microseconds, enter MM/DD/YYYY HH24:MI:SS.US.
To specify nanoseconds, enter MM/DD/YYYY HH24:MI:SS.NS.
By default, the format specifies microseconds, as follows: MM/DD/YYYY HH24:MI:SS.US.
Pre 85 Timestamp Compatibility
Do not use with Informatica Cloud.
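As a rough check of the buffer-block arithmetic described under Default Buffer Block Size above, the following sketch computes the number of buffer blocks from the DTM Buffer Size. The 64 KB block size used here is an assumed value for illustration, because the task normally determines the block size at run time:

```python
def num_buffer_blocks(dtm_buffer_size: int, buffer_block_size: int) -> int:
    # The number of buffer blocks in a task = DTM Buffer Size / Buffer Block Size.
    return dtm_buffer_size // buffer_block_size

dtm_buffer_size = 12 * 1024 * 1024  # 12 MB, the documented default minimum
buffer_block_size = 64 * 1024       # 64 KB: an assumed value for illustration

print(num_buffer_blocks(dtm_buffer_size, buffer_block_size))  # 192
```

If the result is smaller than the total number of Source Qualifiers, Normalizers for COBOL sources, and targets, increase DTM Buffer Size or decrease Default Buffer Block Size as described above.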

Error Handling

The following table describes the error handling options:
Error Handling Options
Description
Stop On Errors
Indicates how many non-fatal errors the task can encounter before it stops the session. Non-fatal errors include reader, writer, and DTM errors.
Enter the number of non-fatal errors you want to allow before stopping the session. The task maintains an independent error count for each source, target, and transformation. If you specify 0, non-fatal errors do not cause the session to stop.
Override Tracing
Overrides tracing levels set on an object level.
On Stored Procedure Error
Determines the behavior when a task based on an integration template encounters pre- or post-session stored procedure errors. Use one of the following options:
  • Stop Session. The task stops when errors occur while executing a pre-session or post-session stored procedure.
  • Continue Session. The task continues regardless of errors.
By default, the task stops.
On Pre-Session Command Task Error
Determines the behavior when a task that includes pre-session shell commands encounters errors. Use one of the following options:
  • Stop Session. The task stops when errors occur while executing pre-session shell commands.
  • Continue Session. The task continues regardless of errors.
By default, the task stops.
On Pre-Post SQL Error
Determines the behavior when a task that includes pre- or post-session SQL encounters errors:
  • Stop Session. The task stops when errors occur while executing pre-session or post-session SQL.
  • Continue. The task continues regardless of errors.
By default, the task stops.
Error Log Type
Specifies the type of error log to create. You can specify flat file or no log. Default is none.
You cannot log row errors from XML file sources. You can view the XML source errors in the session log.
Do not use this property when you use the Pushdown Optimization property.
Error Log File Directory
Specifies the directory where errors are logged. By default, the error log file directory is $PMBadFilesDir\.
Error Log File Name
Specifies the error log file name. By default, the error log file name is PMError.log.
Log Row Data
Specifies whether or not to log transformation row data. When you enable error logging, the task logs transformation row data by default. If you disable this property, n/a or -1 appears in transformation row data fields.
Log Source Row Data
Specifies whether or not to log source row data. By default, the check box is clear and source row data is not logged.
Data Column Delimiter
Delimiter for string type source row data and transformation group row data. By default, the task uses a pipe ( | ) delimiter.
Tip: Verify that you do not use the same delimiter for the row data as the error logging columns. If you use the same delimiter, you may find it difficult to read the error log file.

Pushdown Optimization

You can use pushdown optimization to push transformation logic to source databases or target databases for execution. Use pushdown optimization when using database resources can improve task performance.
When you run a task configured for pushdown optimization, the task converts the transformation logic to an SQL query. The task sends the query to the database, and the database executes the query.
The amount of transformation logic that you can push to the database depends on the database, transformation logic, and task configuration. The task processes all transformation logic that it cannot push to a database.
Use the Pushdown Optimization advanced session property to configure pushdown optimization for a task.
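For example, a mapping that filters source rows might, with source pushdown optimization, be converted to a query similar to the following (the table name, column names, and filter value are hypothetical):

```sql
SELECT CUSTOMER_ID, ANNUAL_REVENUE
FROM CUSTOMERS
WHERE ANNUAL_REVENUE > 100000
```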

Pushdown Optimization Types

You can use the following pushdown optimization types:
Source Pushdown Optimization
The task analyzes the mapping from source to target until it reaches transformation logic that it cannot push to the source database.
The task generates and executes a Select statement based on the transformation logic for each transformation that it can push to the database. Then, the task reads the results of the SQL query and processes the remaining transformations.
Target Pushdown Optimization
The task analyzes the mapping from target to source until it reaches transformation logic that it cannot push to the target database.
The task generates an Insert, Delete, or Update statement based on the transformation logic for each transformation that it can push to the target database. The task processes the transformation logic up to the point where it can push the transformation logic to the database. Then, the task executes the generated SQL on the target database.
Full Pushdown Optimization
The task analyzes the mapping from source to target until it reaches transformation logic that it cannot push to the target database.
The task generates and executes SQL statements against the source or target based on the transformation logic that it can push to the database.
You can use full pushdown optimization when the source and target databases are in the same relational database management system.
When you run a task with large quantities of data and full pushdown optimization, the database server must run a long transaction, which can affect database performance. To minimize database performance issues for long transactions, consider using source or target pushdown optimization.

Pushdown Optimization User-Defined Parameters

You can use a pushdown optimization user-defined parameter to perform pushdown optimization based on the parameter value defined in a parameter file. Use a pushdown optimization user-defined parameter when you want to perform different pushdown optimization options at different times.
For example, you might use source or target pushdown optimization during the peak hours of the day, but use full pushdown optimization from midnight until 2 a.m. when database activity is low.
To use the pushdown optimization user-defined parameter, perform the following steps:
  1. Configure a parameter file to use the $$PushdownConfig user-defined parameter. Save the file to a directory local to the Secure Agent that runs the task.
    Use the following format to define the parameter:
    $$PushdownConfig=<pushdown optimization type>
    For example: $$PushdownConfig=Source.
    Configure a different parameter file of the same name for each pushdown optimization type that you want to use.
  2. In the task, add the Pushdown Optimization property and select the $$PushdownConfig option.
  3. Configure the task to use the parameter file.
  4. Replace the parameter file version as needed.
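For example, to switch from source pushdown optimization during peak hours to full pushdown optimization overnight, you might replace the parameter file with a version that contains:

```
$$PushdownConfig=Full
```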

Importing or Editing an Integration Template

Import an integration template for use in your organization. When you import an integration template, you can define template parameter properties. You can also select a template image to visually represent the template data flow.
When you import an integration template to an Informatica Cloud organization, you can define template parameter properties, such as template parameter descriptions and display properties.
A template parameter description displays as a tooltip for the template parameter in the Mapping Configuration Task wizard. Use template parameter descriptions to provide additional information about a template parameter to the task developer.
Display properties determine where and how a template parameter displays in the Mapping Configuration Task wizard.
After you import an integration template, you can edit the imported template. If you select a different template XML file, mapplets and template parameters that are not used in the new file are deleted, and new mapplets and template parameters display.
If you edit an integration template that is already used by a Mapping Configuration task, Informatica Cloud updates the task based on the changes that you make.
    1. To import a new integration template, click Configure > Integration Templates > New.
    To edit an imported integration template, click Configure > Integration Templates. Click the integration template name, and then click Edit.
    2. Complete the following template details:
    Template Details Property
    Required/Optional
    Description
    Template Name
    Required
    Name of the template.
    Description
    Optional
    Description.
    Template XML File
    Required
    Integration template XML file.
    • To select an integration template, click Select.
    • In the Select XML File dialog box, browse to locate and select the file you want to use, and then click OK.
    Template Image File
    Optional
    Image file associated with the integration template.
    Use a JPG or PNG file that is less than 1 MB in size and 1024 x 768 pixels or smaller.
    • To select an image file, click Select.
    • In the Select Template Image File dialog box, browse to locate and select the file you want to use, and then click OK.
    To remove the selected template image file, click Clear.
    3. If the integration template includes a mapplet template parameter, click Select to select a mapplet or click New to import a mapplet.
    4. Optionally, edit the following template parameter details:
    Template Parameter Detail
    Required/Optional
    Description
    Label
    Required
    Label used for the template parameter in the Mapping Configuration Task wizard.
    Default is the template parameter name.
    Description
    Optional
    Tooltip for the template parameter in the Mapping Configuration Task wizard.
    Use to provide additional information about the template parameter.
    5. To edit the display properties for a template parameter, click Edit and configure the following template parameter display properties:
    Template Parameter Display Property
    Required/Optional
    Description
    Default Value
    Optional
    Default value for the template parameter.
    Visible
    Required
    Determines if the template parameter displays in the Mapping Configuration Task wizard.
    Use to hide a template parameter that does not need to be displayed.
    • Yes. Displays the template parameter.
    • No. Hides the template parameter.
    Editable
    Required
    Determines if the template parameter is editable in the Mapping Configuration Task wizard:
    • Yes. Allows editing.
    • No. Prevents editing.
    Required
    Required
    Determines if a template parameter must be defined in the Mapping Configuration Task wizard:
    • Yes. Requires the template parameter to be defined.
    • No. Allows the template parameter to remain undefined.
    Valid Connection Types
    Required for connection template parameters
    Defines the connection type allowed for a connection template parameter.
    Select a connection type or select All Connection Types.
    Connection template parameters only.
    Logical Connection
    Optional
    Logical connection name. Use when you want the task developer to use the same connection for logical connections with the same name.
    Enter any string value. Use the same string for logical connections that should use the same connection.
    Connection template parameters only.
    Input Control
    Required for string template parameters
    Defines how the task developer can enter information to configure template parameters in the Mapping Configuration Task wizard.
    String template parameters only.
    Field Filtering
    Optional for condition, expression, and field input controls
A regular expression to limit the fields that display in the input control.
    Use a colon with the include and exclude statements.
    You can use a combination of include and exclude statements. Include statements take precedence.
    Use semicolons or line breaks to separate field names.
    Use any valid regular expression syntax.
    For example:
Include: .*ID$; First_Name
    Last_Name
    Annual_Revenue
    Exclude: DriverID$
    Left Title
    Required for field mapping input controls
    Name for the left table of the field mapping display. The left table can display source, mapplet, and lookup fields.
    Left Field Filtering
    Optional for field mapping input controls
    Regular expression to limit the fields that display in the left table of the field mapping display.
    Use a colon with the include and exclude statements.
    You can use a combination of include and exclude statements. Include statements take precedence.
    Use semicolons or line breaks to separate field names.
    Use any valid regular expression syntax.
    For example:
Include: .*ID$; First_Name
    Last_Name
    Annual_Revenue
    Exclude: DriverID$
    Right Title
    Required for field mapping input controls
    Name for the right table of the field mapping display. The right table can display target, mapplet, and lookup fields.
    Right Data Provider
    Required for field mapping input controls
    Set of fields to display in the right table of the field mapping display:
    • All objects. Shows all fields from all possible right table objects.
    • <object name>. Individual object names. You can select a single object for the right table fields to display.
    • Static. A specified set of fields. Allows you to define the fields to display.
    Fields Declaration
    Required for field mapping input controls
    List of fields to display on the right table of the field mapping display.
    List field names and associated datatypes separated by a line break or semicolon (;) as follows:
    <datatype>(<precision>,<scale>)<field name1>; <field name2>;...
    or
    <datatype>(<precision>,<scale>)<field name1>
    <field name2>
    <datatype>(<precision>,<scale>)<field name3>
    If you omit the datatype, Informatica Cloud assumes a datatype of String(255).
    Available when Static is selected for the Right Data Provider.
    Right Field Filtering
    Optional for field mapping input controls
    Regular expression to limit the fields that display in the right table of the field mapping display.
    Use a colon with the include and exclude statements.
    You can use a combination of include and exclude statements. Include statements take precedence.
    Use semicolons or line breaks to separate field names.
    Use any valid regular expression syntax.
    For example:
Include: .*ID$; First_Name
    Last_Name
    Annual_Revenue
    Exclude: DriverID$
    Aggregate Functions
    Required for expression and field mapping input controls
    Enables the display and use of aggregate functions in the Field Expression dialog box in the Mapping Configuration Task wizard.
    • Yes. Displays aggregate functions.
    • No. Does not display aggregate functions.
    Label
    Required for custom dropdown input controls
    Label for an option to display in a custom dropdown.
    Click Add to add additional options. Click Remove to remove any configured options.
    Value
    Required for custom dropdown input controls
    Value for an option to display in a custom dropdown.
    Click Add to add additional options. Click Remove to remove any configured options.
    6. To add another step to the Mapping Configuration Task wizard, click New Step.
    You can name the step and rename the Other Parameters step.
    7. To order template parameters, use the Move Up and Move Down icons.
    You can use the icons to move connection template parameters to the Sources or Targets steps.
    You can use the icons to move template parameters from the Other Parameters step to any new steps.
    8. If you want to add advanced session properties, click Add, select the property that you want to use, and configure the session property value.
    9. Click OK.
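The include/exclude behavior described for the Field Filtering display properties above can be sketched as follows. The exact matching semantics are an assumption based on the descriptions in this section: fields are kept by default, exclude patterns remove matching fields, and include patterns take precedence over exclude patterns.

```python
import re

def filter_fields(fields, include_patterns=(), exclude_patterns=()):
    # Sketch of the documented Field Filtering behavior (semantics assumed):
    # a field that matches an include pattern is always kept (include takes
    # precedence); otherwise a field that matches an exclude pattern is
    # removed; fields that match no pattern are kept.
    kept = []
    for field in fields:
        if any(re.fullmatch(p, field) for p in include_patterns):
            kept.append(field)      # include wins over exclude
        elif any(re.fullmatch(p, field) for p in exclude_patterns):
            continue                # excluded
        else:
            kept.append(field)      # no pattern matched: keep
    return kept

fields = ["CustomerID", "DriverID", "First_Name", "Comments"]
print(filter_fields(fields,
                    include_patterns=[r".*ID$", r"First_Name"],
                    exclude_patterns=[r"DriverID$", r"Comments"]))
# ['CustomerID', 'DriverID', 'First_Name']
```

Note how DriverID is kept even though it matches an exclude pattern, because it also matches an include pattern, while Comments is removed.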