For database ingestion and replication initial load jobs, and for the unload phase of combined load jobs, you can enable auto-tuning of key parameters to optimize job performance on both the source and target sides. The jobs can use a Db2 for i, Db2 for LUW, Db2 for z/OS, MySQL, Oracle, PostgreSQL, SAP HANA, SQL Server, or Teradata source type with any supported target type.
Preview Notice: Effective in the April 2026 release, auto-tuning is available for preview.
Technical preview functionality is supported for evaluation purposes but is provided without warranty, and it is not supported in production environments or in any environment that you plan to promote to production. Informatica intends to include the preview functionality in an upcoming release for production use but might choose not to, in accordance with changing market or technical circumstances. For more information, contact Informatica Global Customer Support.
Auto-tuning is based on performance and system metrics that are automatically collected from your environment, such as network and database latency, row counts, table sizes, CPU cores, and memory usage. It also uses application-specific metrics such as JVM heap allocation and task capacity. These metrics are used collectively to make dynamic tuning adjustments for optimal efficiency and resource use.
To enable auto-tuning for a task, select the Auto-Tune option on the final Let's Go page of the task configuration wizard.
Auto-tuning automatically optimizes settings for the following partitioning, distributor, and thread count properties:
• Partitioning properties for improved data extraction performance at the source:
- Enable Partitioning. Controls whether to use multiple partitions to query for the source data to be unloaded in parallel.
- Unload JDBC Partitioning Technique. The type of partitioning technique to use for unloading source data. Options are uniform and heuristic.
- Unload Source Partition Count. The total number of partitions to use for reading source data in parallel.
- Unload Source Max Parallel Partitions. The maximum number of partition reader threads that can query the source for data in parallel.
• Multiple-distributor properties for enhancing parallelism and throughput on the target side:
- Writer Unload Multiple Distributors. Controls whether multiple distributor threads can run in parallel to perform work such as uploading data files to staging areas and flushing data to the target.
- Writer Distributor Count. The number of distributors that can run on separate threads in parallel for event distribution when writing to the target.
• Thread-count properties for optimizing concurrency and resource utilization:
- Unload Helper Thread Count. The number of unload helper threads that can be used concurrently to convert unloaded raw events to DML events that can be passed to the writer. Increasing the thread count can help unload helpers keep pace with high data read rates.
- Writer Helper Thread Count. The number of writer helper threads that can run in parallel to convert incoming data to the target output format. This setting can improve the write efficiency of payload processing.
• Snowflake property:
- snowflakeCompression. For Snowflake targets, a compression option to pass to Snowflake for creating the internal stage that stores data files. Options are: Auto, None, or a specific compression type. For information about Snowflake compression options, see the CREATE STAGE command > COMPRESSION option in the Snowflake documentation.
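To illustrate how a compression option reaches Snowflake, the following sketch shows a stage definition with an explicit compression setting. The stage name and values are hypothetical; see the CREATE STAGE command in the Snowflake documentation for the full syntax and the complete list of supported compression types:

```sql
-- Hypothetical internal stage; the COMPRESSION option is specified
-- within the FILE_FORMAT clause of CREATE STAGE.
CREATE STAGE my_ingest_stage
  FILE_FORMAT = (TYPE = 'CSV' COMPRESSION = 'GZIP');
```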
Note: All of these properties, except snowflakeCompression, are listed under Custom Properties on the Task Details pages in the task configuration wizard, depending on your source and target types. You can edit the property values there to override the auto-tuned settings. To edit the snowflakeCompression custom property, you must select the Custom option under Custom Properties and then manually enter the snowflakeCompression property name and value.
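For example, after reviewing job performance, you might override the auto-tuned values by setting explicit values under Custom Properties. The following sketch is purely illustrative: the property names come from the list above, and all of the values are hypothetical and depend on your source data volume and environment resources:

```
Unload Source Partition Count          = 8
Unload Source Max Parallel Partitions  = 4
Writer Distributor Count               = 4
Writer Helper Thread Count             = 8
```

In general, raising partition and thread counts increases parallelism at the cost of additional CPU and memory on the Secure Agent host, so overrides are best validated against a representative workload.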