Property | Description |
---|---|
SSIS Scanner Type | Select the type of SSIS resource:
|
Agent URL | The URL to access the SQL Server Agent that automates package runs. Note: Make sure that the SQL Server Agent runs on the same machine if you selected Repository Server as the SSIS Scanner Type. |
SQL Server Version | Select one of the following options from the SQL Server Version drop-down list:
Default is SQL Server 2008. |
Host Name | The host name or IP address of the machine where SSIS is running. |
Password | The password for the package. |
Package/Repository Name | Specify the repository or package from which you want to import metadata. Click Select. The Select Package/Repository Name dialog box appears. Select the required package using one of the following options:
|
Variable values file | Click Choose to select the file that includes values for the SSIS variables. Alternatively, you can drag the file from your file browser and drop it into the Variable values file text box. |
Property | Description |
---|---|
Auto assign connections | Select this option to specify that the connection must be assigned automatically. |
Memory | Specifies the memory required to run the scanner job. Select one of the following values based on the data set size imported:
See the Tuning Live Data Map Performance How-to-Library article for more information about memory values. |
Property | Description |
---|---|
Username | The user name to access the SAP R/3 system. |
Password | The password associated with the user name. |
Application server host | Host name or IP address of the system that hosts SAP R/3. |
System number | System number of the SAP R/3 system to which you want to connect. |
Client | The SAP R/3 client to access data from the SAP R/3 system. |
Language | Specify the language to be used while importing metadata using the resource. |
Encoding | Default is UTF-8 character encoding for metadata imported from the resource. You cannot change the default setting for this property. |
Property | Description |
---|---|
Repository Objects | Imports the repository objects such as resources, information, and activities from the SAP R/3 system. |
Memory | Specifies the memory required to run the scanner job. Select one of the following values based on the data set size imported:
See the Tuning Live Data Map Performance How-to-Library article for more information about memory values. |
Property | Description |
---|---|
Version | Select one of the following options to specify the version of OBIEE:
|
Server URL | The OBIEE Presentation Server URL. If you use SSL, you must make sure that Live Data Map trusts the server certificate of the OBIEE Presentation Server. |
Login User | The username used to log on to the OBIEE Presentation Server. Make sure that the username you use has the necessary permissions to import metadata. |
Login Password | The password associated with the username. |
Optimize for large models | Select this option to optimize the import of metadata for large OBIEE repository models. If you select this option, Live Data Map does not import metadata for the following assets:
In addition, Live Data Map does not store expression tree objects with lineage links. If you do not select this option, Live Data Map imports the entire repository model, resulting in a high consumption of memory. |
Incremental import | Select this option to import only the changes made to the data source since the last metadata import. If you do not use this option, Live Data Map imports the entire metadata from the data source. |
Worker threads | Specify the number of worker threads to process metadata asynchronously. You can leave the value empty if you want Live Data Map to calculate the value. Live Data Map assigns a value between one and six based on the JVM architecture and number of available CPU cores. You can use the following points to decide the value to use:
Note: Specifying a higher value might impact performance of the system. |
File | The Oracle Business Intelligence Repository RPD file where the metadata is stored. Click Choose to select the RPD file. Alternatively, you can drag the RPD file from your file browser and drop it into the File text box. |
Variable values file | (Optional) The file that defines the list of RPD variable values. Click Choose to select the file that includes the variable values. Alternatively, you can drag the file from your file browser and drop it into the Variable values file text box. |
Auto assign connections | Select this option to specify that the connection must be assigned automatically. |
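The Worker threads property above notes that, when the value is left empty, Live Data Map picks a value between one and six based on the JVM architecture and the available CPU cores. A minimal Python sketch of that clamping heuristic; the exact formula Live Data Map uses is not documented, so this is an illustrative guess:

```python
import os

def default_worker_threads():
    # Mirror the documented behavior: when Worker threads is left empty,
    # a value between one and six is derived from the available CPU cores.
    # The exact formula is undocumented; this clamp is an illustrative guess.
    cores = os.cpu_count() or 1
    return max(1, min(6, cores))
```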
Property | Description |
---|---|
Repository Subset | Click Select. The Select Repository subset dialog box appears. Select the folders from where you want to import metadata for reports from the Oracle Business Intelligence Presentation Server. |
Memory | Specifies the memory required to run the scanner job. Select one of the following values based on the data set size imported:
See the Tuning Live Data Map Performance How-to-Library article for more information about memory values. |
Property | Description |
---|---|
User | The user name used to access the MDM Hub Store. |
Password | The password associated with the user name. |
JDBC | The JDBC string to connect to the MDM Hub Store. |
JDBCDriverClassName | The JDBC driver class name specific to the MDM Hub Store. |
Property | Description |
---|---|
Memory | Specifies the memory required to run the scanner job. Select one of the following values based on the data set size imported:
See the Tuning Live Data Map Performance How-to-Library article for more information about memory values. |
Property | Description |
---|---|
Cloud URL | The URL to access the Informatica Cloud Service. |
Username | The user name to connect to the Informatica Cloud Service. |
Password | The password associated with the user name. |
Auto Assign Connections | Select this option to specify that the connection must be assigned automatically. |
Property | Description |
---|---|
Memory | Specifies the memory required to run the scanner job. Select one of the following values based on the data set size imported:
See the Tuning Live Data Map Performance How-to-Library article for more information about memory values. |
Property | Description |
---|---|
File | The CSV file or the .zip file that includes the CSV files with the lineage data. Click Choose to select the required CSV file or .zip file that you want to upload. Ensure that the CSV files in the .zip file are not stored in a directory within the .zip file. If you want to select multiple CSV files, you must include the required CSV files in a .zip file and then select the .zip file for upload. Note: Make sure that the CSV file includes the following parameters in the header:
|
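The File property above requires that the CSV files inside a .zip archive are not stored in a directory within the archive. A small Python sketch, using only the standard library, that packages CSV files flat at the archive root (the function name and paths are illustrative):

```python
import zipfile
from pathlib import Path

def pack_lineage_csvs(csv_paths, zip_path):
    # The scanner expects the CSV files at the root of the archive,
    # not inside a directory, so arcname strips any directory prefix.
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for p in map(Path, csv_paths):
            zf.write(p, arcname=p.name)
    return zip_path
```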
Property | Description |
---|---|
Auto Assign Connections | Specifies to automatically assign the connection. |
Memory | Specifies the memory required to run the scanner job. Select one of the following values based on the data set size imported:
See the Tuning Live Data Map Performance How-to-Library article for more information about memory values. |
Property | Description |
---|---|
Amazon Web Services Bucket URL | Amazon Web Services URL to access a bucket. |
Amazon Web Services Access Key ID | Amazon Web Services access key ID to sign requests that you send to Amazon Web Services. |
Amazon Web Services Secret Access Key | Amazon Web Services secret access key to sign requests that you send to Amazon Web Services. |
Amazon Web Services Bucket Name | Amazon Web Services bucket name that Live Data Map needs to scan. |
Source Directory | The source directory from where metadata must be extracted. |
Property | Description |
---|---|
File Types | Select any or all of the following file types from which you want to extract metadata:
|
First Row as Column Header for CSV | Select this option to specify the first row as the column header for the CSV file. |
First Level Directory | Use this option to specify a directory or a list of directories under the source directory. If you leave this option blank, Live Data Map imports all the files from the specified source directory. To specify a directory or a list of directories, you can perform the following steps:
Note: If you are selecting multiple directories, you must separate the directories using a semicolon (;). |
Include Subdirectory | Select this option to import all the files in the subdirectories under the source directory. |
Memory | Specifies the memory required to run the scanner job. Select one of the following values based on the data set size imported:
See the Tuning Live Data Map Performance How-to-Library article for more information about memory values. |
Property | Description |
---|---|
Name Node URI 1 | URI to the active HDFS NameNode. The active HDFS NameNode manages all the client operations in the cluster. |
Name Node URI 2 | URI to the secondary HDFS NameNode. The secondary HDFS NameNode stores modifications to HDFS as a log file appended to a native file system file. |
HDFS Service Name | HDFS service name. |
User Name/User Principal | User name to connect to HDFS. Specify the Kerberos Principal if the cluster is enabled for Kerberos. |
Source Directory | The source location from where metadata must be extracted. |
Kerberos Cluster | Select Yes if the cluster is enabled for Kerberos. If the cluster is enabled for Kerberos, provide the following details:
|
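The NameNode URI properties above take URIs of the form hdfs://<host>:<port>. A small sketch that validates this shape; the hdfs scheme comes from the property, while the fallback port 8020 is a common Hadoop convention assumed here, not stated in the table:

```python
from urllib.parse import urlparse

def check_namenode_uri(uri):
    # Lightly validate an HDFS NameNode URI of the form hdfs://host[:port].
    # Port 8020 is a common Hadoop default, assumed here for illustration.
    parts = urlparse(uri)
    if parts.scheme != "hdfs" or not parts.hostname:
        raise ValueError(f"expected hdfs://host[:port], got {uri!r}")
    return parts.hostname, parts.port or 8020
```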
Property | Description |
---|---|
File Types | Select any or all of the following file types from which you want to extract metadata:
|
First Level Directory | Specifies that all the directories must be selected. If you want to select specific directories, use the Select Directory option. This option is disabled if you selected the Include Subdirectories option on the General tab. |
Include Subdirectory | Type the required directories in the text box or click Select... to choose the required directories. This option is disabled if you selected the Include Subdirectories option on the General tab or the Select all Directories option listed above. |
Memory | Specifies the memory required to run the scanner job. Select one of the following values based on the data set size imported:
See the Tuning Live Data Map Performance How-to-Library article for more information about memory values. |
Property | Description |
---|---|
User | The user name used to access the database. |
Password | The password associated with the user name. |
Host | Host name or IP address of Amazon Redshift service. |
Port | Amazon Redshift server port number. Default is 5439. |
Database | The name of the database instance. |
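The connection properties above (host, port with default 5439, database, user, password) fit together as a libpq-style DSN. A hedged sketch; the scanner builds its own connection internally, so this only illustrates how the fields combine:

```python
def redshift_dsn(host, database, user, password, port=5439):
    # 5439 is the default Amazon Redshift port noted in the table above.
    # The scanner manages its own connection; this DSN is illustrative only.
    return (f"host={host} port={port} dbname={database} "
            f"user={user} password={password}")
```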
Property | Description |
---|---|
Import System Objects | Select this option to specify that the system objects must be imported. |
Schema | Click Select... to specify the Amazon Redshift schemas that you want to import. You can use one of the following options from the Select Schema dialog box to import the schemas:
|
S3 Bucket Name | Provide a valid Amazon S3 bucket name for the Amazon Redshift data source. You must provide this value if you want to enable profiling for Amazon Redshift. If you do not want to enable profiling, retain the default value. |
Memory | Specifies the memory required to run the scanner job. Select one of the following values based on the data set size imported:
See the Tuning Live Data Map Performance How-to-Library article for more information about memory values. |
Property | Description |
---|---|
Agent URL | URL to the Live Data Map agent that runs on a Microsoft Windows Server. |
Version | Select the version of MicroStrategy from the drop-down list. You can select the Auto detect option if you want Live Data Map to automatically detect the version of the MicroStrategy resource. |
Project Source | Name of the MicroStrategy project source to which you want to connect. |
Login User | The user name used to connect to the project source. |
Login Password | The password associated with the user name. |
Default Language | Specify the language to be used while importing metadata from the resource. |
Import Schema Only | Select this option to import the project schema without the reports and documents. |
Data Model Tables Design Level | Select one of the following options to specify the design for the imported tables:
|
Incremental Import | Select this option to import only the changes from the source. Clear this option to import the complete source every time. |
Project(s) | Select the names of the projects to which you want to connect from the project source. |
Auto Assign Connections | Specifies to automatically assign the connection. |
Property | Description |
---|---|
Memory | Specifies the memory required to run the scanner job. Select one of the following values based on the data set size imported:
See the Tuning Live Data Map Performance How-to-Library article for more information about memory values. |
Property | Description |
---|---|
Username | Name of the user account that connects to the Analyst tool. |
Password | Password for the user account that connects to the Analyst tool. |
Host | Host name of the machine on which the Analyst tool runs. |
Port | Port number on which the Analyst tool runs. |
Namespace | Name of the security domain to which the Analyst tool user belongs. If the domain uses LDAP authentication or Kerberos authentication, enter the security domain name. Otherwise, enter Native. |
Enable Secure Communication | Enable secure communication from the Analyst tool to the Analyst Service. |
Import Published Content Only | Select this option to specify that you want to import only the published content. If you do not select this option, Live Data Map imports all content. |
Property | Description |
---|---|
Glossary | Name of the business glossary resource that you want to import. Each resource can extract metadata from one business glossary. |
Memory | Specify the memory value required to run a scanner job. Specify one of the following memory values:
Note: For details about the memory values, see the Tuning Live Data Map Performance How-To Library article. |
Property | Description |
---|---|
Agent URL | URL to the Live Data Map agent that runs on a Microsoft Windows Server. |
Version | Indicates the Cognos server version. |
Dispatcher URL | URL used by the framework manager to send requests to Cognos. |
Namespace | Defines a collection of user accounts from an authentication provider. |
User | User name used to connect to the Cognos server. |
Password | Password for the user account to connect to the Cognos server. |
Add Dependent Objects | Use this option to import dependent objects along with the selection. Selecting this option requires a complete scan of report dependencies on the Cognos server. You can select any of the following options for this property:
|
Incremental Import | You can specify one of the following values for this property:
|
Folder Representation | Specifies how the folders from Cognos framework manager must be represented. You can select from the following options:
|
Transformer Import Configuration | The XML file that describes mappings between Cognos Content Manager data sources and PowerPlay Transformer models. |
Worker Threads | Number of worker threads required to retrieve metadata asynchronously. |
Auto Assign Connections | Specifies to automatically assign the connection. |
Property | Description |
---|---|
Content Browsing Mode | Specifies the content to be retrieved while searching the Cognos repository. You can select any of the following options:
|
Content | Specifies the hierarchy for the content objects. |
Memory | Specifies the memory required to run the scanner job. Select one of the following values based on the data set size imported:
See the Tuning Live Data Map Performance How-to-Library article for more information about memory values. |
Property | Description |
---|---|
Server | The host name or the IP address where the Tableau server runs. |
Site | Specify the site if the Tableau server has multiple sites installed. The value is case sensitive. |
Username | The user name to connect to the Tableau server. |
Password | The password associated with the user name. |
Incremental Import | You can specify one of the following values for this property:
|
Worker Threads | Number of worker threads required to retrieve metadata asynchronously. |
Cache | Path to the folder with the Tableau repository cache. |
Auto Assign Connections | Specifies to automatically assign the connection. |
Property | Description |
---|---|
Group By | Specify to group workbooks in the following categories:
|
Repository Objects | Imports the repository objects such as workbooks and data sources. For any workbooks, the dependent data sources are also imported. |
Memory | Specifies the memory required to run the scanner job. Select one of the following values based on the data set size imported:
See the Tuning Live Data Map Performance How-to-Library article for more information about memory values. |
Property | Description |
---|---|
Navigator URL | URL of the Cloudera Navigator Server. |
User | Name of the user account that connects to Cloudera Navigator. |
Password | Password for the user account that connects to Cloudera Navigator. |
Property | Description |
---|---|
Hive Database | Name of the Hive database or a schema from where you want to import a table. |
Memory | Specify the memory value required to run a scanner job. Specify one of the following memory values:
Note: For details about the memory values, see the Tuning Live Data Map Performance How-To Library article. |
Property | Description |
---|---|
Hadoop Distribution | Select one of the following Hadoop distribution types for the Hive resource:
|
URL | JDBC connection URL used to access the Hive server. |
User | The Hive user name. |
Password | The password for the Hive user name. |
Keytab file | Path to the keytab file if Hive uses Kerberos for authentication. |
User proxy | The proxy user name to be used if Hive uses Kerberos for authentication. |
Kerberos Configuration File | Specify the path to the Kerberos configuration file if you use Kerberos-based authentication for Hive. |
Enable Debug for Kerberos | Select this option to enable debugging options for Kerberos-based authentication. |
Property | Description |
---|---|
Schema | Click Select... to specify the Hive schemas that you want to import. You can use one of the following options from the Select Schema dialog box to import the schemas:
|
Table | Specify the name of the Hive table that you want to import. If you leave this property blank, Live Data Map imports all the Hive tables. |
SerDe jars list | Specify the path to the Serializer/DeSerializer (SerDe) jar file list. You can specify multiple jar files by separating the jar file paths using a semicolon (;). |
Worker Threads | Specify the number of worker threads to process metadata asynchronously. You can leave the value empty if you want Live Data Map to calculate the value. Live Data Map assigns a value between one and six based on the JVM architecture and number of available CPU cores. You can use the following points to decide the value to use:
Note: Specifying a higher value might impact performance of the system. |
Memory | Specify the memory value required to run a scanner job. Specify one of the following memory values:
Note: For details about the memory values, see the Tuning Live Data Map Performance How-To Library article. |
Property | Description |
---|---|
Target version | The version number of the Informatica platform. You can choose any of the following Informatica versions:
|
Domain Name | Name of the Informatica domain. |
Data Integration Service Name | Name of the Data Integration Service. |
Username | Username for the Data Integration Service connection. |
Password | Password for the Data Integration Service connection. |
Security Domain | Name of the LDAP security domain if the Informatica domain contains an LDAP security domain. |
Host | Host name for the Informatica domain. |
Port | Port number of the Informatica domain. |
Application Name | Name of the Data Integration Service application. Click Select... to select the name of the application from the Select Application Name dialog box. Note: This property is applicable if you select Target version as 10.0, 10.1, or 10.1.1. |
Param Set for Mappings in Application | Parameter set for mappings configured for the Data Integration Service application. Click Select... to select the parameter set from the Select Param Sets for Mappings in Application dialog box. Note: This property is applicable if you select Target version as 10.0, 10.1, or 10.1.1. |
Property | Description |
---|---|
Auto assign Connections | Specifies whether the connection must be automatically assigned. |
Memory | Specify the memory value required to run a scanner job. Specify one of the following memory values:
Note: For details about the memory values, see the Tuning Live Data Map Performance How-To Library article. |
Property | Description |
---|---|
Gateway Host Name or Address | PowerCenter domain gateway host name or address. |
Gateway Port Number | PowerCenter domain gateway port number. |
Informatica Security Domain | LDAP security domain name if one exists. Otherwise, enter Native. |
Repository Name | Name of the PowerCenter repository. |
Repository User Name | Username for the PowerCenter repository. |
Repository User Password | Password for the PowerCenter repository. |
PowerCenter Version | PowerCenter repository version. |
PowerCenter Code Page | Code page for the PowerCenter repository. |
Property | Description |
---|---|
Parameter File | Specify the parameter file that you want to attach from a local system. |
Auto assign Connections | Specifies whether Live Data Map assigns the connection automatically. |
Repository subset | Enter a list of paths, separated by semicolons, for the Informatica PowerCenter repository objects. |
Memory | Specify the memory value required to run a scanner job. Specify one of the following memory values:
Note: For details about the memory values, see the Tuning Live Data Map Performance How-To Library article. |
Property | Description |
---|---|
User | Name of the user account that connects to the IBM DB2 database. |
Password | Password for the user account that connects to the IBM DB2 database. |
Host | Fully qualified host name of the machine where the IBM DB2 database is hosted. |
Port | Port number for the IBM DB2 database. |
Database | The DB2 connection URL used to access metadata from the database. |
Property | Description |
---|---|
Import system objects | Specifies the system objects to import. |
Schema | Specifies a list of database schemas. |
Import stored procedures | Specifies the stored procedures to import. |
Memory | Specify the memory value required to run a scanner job. Specify one of the following memory values:
Note: For details about the memory values, see the Tuning Live Data Map Performance How-To Library article. |
Property | Description |
---|---|
Location | Node name in the dbmover.cfg file on the machine where the Catalog Service runs that points to the PowerExchange Listener on the z/OS system. Note: Live Data Map uses PowerExchange for DB2 for z/OS to access metadata from z/OS subsystems. |
User | Name of the user account that connects to IBM DB2 for z/OS database. |
Password | Password for the user account that connects to IBM DB2 for z/OS database. |
Encoding | Code page for the IBM DB2 for z/OS subsystem. |
Sub System ID | Name of the DB2 subsystem. |
Property | Description |
---|---|
Schema | Specifies a list of database schemas. |
Property | Description |
---|---|
Host | Host name or IP address of the machine where the database management server runs. |
Port | Port number for the Netezza database. |
User | Name of the user account used to connect to the Netezza database. |
Password | Password for the user account used to connect to the Netezza database. |
Database | ODBC data source connect string for a Netezza database. Enter the data source name of the Netezza DSN if you created one. |
Property | Description |
---|---|
Schema | Specifies a list of semicolon-separated database schemas. |
Memory | Specify the memory value required to run a scanner job. Specify one of the following memory values:
Note: For details about the memory values, see the Tuning Live Data Map Performance How-To Library article. |
Property | Description |
---|---|
Driver class | Name of the JDBC driver class. |
URL | Connection string to connect to the database. |
User | Database username. |
Password | Password for the database user name. |
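The Driver class and URL pair above follows the standard JDBC URL shape jdbc:<subprotocol>:<subname>, where the subprotocol must match the configured driver. A small sketch that splits such a URL into its parts (illustrative only; the scanner passes the URL to the driver unchanged):

```python
def split_jdbc_url(url):
    # A JDBC URL has the form jdbc:<subprotocol>:<subname>; the
    # subprotocol identifies the driver that handles the connection.
    prefix, _, rest = url.partition(":")
    if prefix != "jdbc" or not rest:
        raise ValueError(f"not a JDBC URL: {url!r}")
    subprotocol, _, subname = rest.partition(":")
    return subprotocol, subname
```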
Property | Description |
---|---|
Catalog | Catalog name. Note: You cannot use the Catalog option for JDBC or ODBC sources. |
Schema | Specifies a list of schemas to import. |
Case sensitivity | Select one of the following options to specify if the database is configured for case sensitivity:
|
View definition extracting SQL | Specifies the database-specific SQL query that retrieves the view definition text. |
Synonyms lineage SQL | Specifies the database-specific SQL query that retrieves the synonym lineage. The query returns the following two columns:
|
Optional Scope | Specifies the database object types to import, such as Tables and Views, Indexes, and Procedures. Specify zero or more object types, separated by semicolons, for example: Keys and Indexes;Stored Procedures. |
Import stored procedures | Specifies whether to import stored procedures. |
Memory | Specify the memory value required to run a scanner job. Specify one of the following memory values:
Note: For details about the memory values, see the Tuning Live Data Map Performance How-To Library article. |
Property | Description |
---|---|
User | Name of the SQL Server user account that connects to the Microsoft SQL Server database. The Catalog Service uses SQL Server authentication to connect to the Microsoft SQL Server database. |
Password | Password for the user account that connects to the Microsoft SQL Server database. |
Host | Host name of the machine where Microsoft SQL Server runs. |
Port | Port number for the SQL Server database engine service. |
Database | Name of the SQL Server database. |
Instance | SQL Server instance name. |
Property | Description |
---|---|
Import system objects | Specifies whether to import system objects. |
Schema | Specifies a list of semicolon-separated database schemas. |
Import stored procedures | Specifies whether to import stored procedures. |
Memory | Specify the memory value required to run a scanner job. Specify one of the following memory values:
Note: For details about the memory values, see the Tuning Live Data Map Performance How-To Library article. |
Property | Description |
---|---|
User | Name of the user account that connects to the Oracle database. |
Password | Password for the user account that connects to the Oracle database. |
Host | Fully qualified host name of the machine where the Oracle database is hosted. |
Port | Port number for the Oracle database engine service. |
Service | Unique identifier or system identifier for the Oracle database server. |
Property | Description |
---|---|
Import system objects | Specifies whether to import system objects. |
Schema | Specifies a list of semicolon-separated database schemas. |
Import stored procedures | Specifies whether to import stored procedures. |
Memory | Specify the memory value required to run a scanner job. Specify one of the following memory values:
Note: For details about the memory values, see the Tuning Live Data Map Performance How-To Library article. |
Property | Description |
---|---|
Host | Host name of the machine where the Sybase database is hosted. |
Port | Port number for the Sybase database engine service. |
User | Database user name. |
Password | The password for the database user name. |
Database | Name of the database. |
Property | Description |
---|---|
Schema | Specifies a list of databases or schemas to import. |
Import stored procedures | Specifies whether to import stored procedures. |
Memory | Specify the memory value required to run a scanner job. Specify one of the following memory values:
Note: For details about the memory values, see the Tuning Live Data Map Performance How-To Library article. |
Property | Description |
---|---|
User | Name of the user account that connects to the Teradata database. |
Password | Password for the user account that connects to the Teradata database. |
Host | Fully qualified host name of the machine where the Teradata database is hosted. Note: To connect to a Teradata resource using the LDAP server, you must specify /LOGMECH=LDAP to the host name. |
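The Host note above says to append /LOGMECH=LDAP to the host name when connecting through an LDAP server. A one-line helper sketch (the function name is illustrative):

```python
def teradata_host(host, use_ldap=False):
    # Per the Host note: append /LOGMECH=LDAP when authenticating via LDAP.
    return f"{host}/LOGMECH=LDAP" if use_ldap else host
```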
Property | Description |
---|---|
Import system objects | Specifies whether to import system objects. |
Schema | Specifies a list of semicolon-separated database schemas. |
Import stored procedures | Specifies whether to import stored procedures. |
Fetch Views Data Types | Specifies that the data types of views must be imported. |
Memory | Specify the memory value required to run a scanner job. Specify one of the following memory values:
Note: For details about the memory values, see the Tuning Live Data Map Performance How-To Library article. |
Property | Description |
---|---|
Agent URL | Host name and port number of the Live Data Map agent that runs on a Microsoft Windows Server. |
Version | Version of the SAP Business Objects repository. |
System | Name of the BusinessObjects repository. For BusinessObjects 11.x and 12.x, specify the name of the BusinessObjects Central Management Server. Specify the server name in the following format: <server name>:<port number> If the Central Management Server is configured on a cluster, specify the cluster name in the following format: <host name>:<port>@<cluster name> Default port is 6400. Note: If the version of the BusinessObjects repository is 14.0.6, do not specify a port number in the repository name. If you specify the port number, Live Data Map cannot extract the Web Intelligence reports. |
Authentication mode | The authentication mode for the user account that logs in to the BusinessObjects repository. Specify one of the following values:
Default is Enterprise. |
User Name | User name to log in to the BusinessObjects repository. |
Password | Password of the user account for the BusinessObjects repository. |
Incremental import | Loads changes after the previous resource load or loads complete metadata. Specify one of the following values:
|
Add dependent objects | Choose the documents that depend on the universe you selected. Specify one of the following values:
Note: Dependency information is retrieved from the Business Objects repository metadata cache. If the Live Data Map load does not reflect modified or moved reports, refresh the cache by loading these reports and refreshing the queries. |
Add specific objects | Specifies additional objects to import along with the universe. Specify one of the following values:
Default is none. |
Crystal CORBA port | Specifies the client port number on which the Crystal SDK communicates with the Report Application Server (RAS). The RAS server uses the port to send metadata to the local client computer. If you do not specify a port, the server randomly selects a port for each execution. |
Class representation | Controls how the tree structure of classes and subclasses is imported. The Live Data Map agent imports each class containing objects as a dimension or as a tree of packages. Specify one of the following values:
Default is As a flat structure. |
Worker Threads | Number of worker threads that the Live Data Map agent uses to extract metadata asynchronously. Leave blank or enter a positive integer value. If left blank, the Live Data Map agent calculates the number of worker threads. The Live Data Map agent uses the JVM architecture and number of available CPU cores on the Live Data Map agent machine to calculate the number of threads. If you specify a value that is not valid, the Live Data Map agent uses one worker thread. Reduce the number of worker threads if the Live Data Map agent generates out-of-memory errors during metadata extraction. Increase the number of worker threads if the Live Data Map agent machine has a large amount of available memory, for example, 10 GB or more. If you specify too many worker threads, performance can decrease. Default is blank. |
Auto Assign Connections | Choose to automatically assign the database schemas to the resource that you create for SAP BusinessObjects source. |
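The System property above uses the format <server name>:<port number>, or <host name>:<port>@<cluster name> for a clustered Central Management Server, with 6400 as the default port and no port for repository version 14.0.6. A sketch that formats these shapes (the no-port cluster form for 14.0.6 is an assumption, as the table does not state it):

```python
def bo_system_value(host, port=6400, cluster=None, version="12.x"):
    # 6400 is the default Central Management Server port from the table.
    # For repository version 14.0.6 the port must be omitted; the cluster
    # form without a port is an assumption for that case.
    if version == "14.0.6":
        return f"{host}@{cluster}" if cluster else host
    base = f"{host}:{port}"
    return f"{base}@{cluster}" if cluster else base
```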
Property | Description |
---|---|
Repository browsing mode | Specifies the available objects in the SAP BusinessObjects repository. Select one of the following options:
|
Repository subset | Specifies the objects stored in a remote SAP BusinessObjects repository. |
Memory | Specify the memory value required to run a scanner job. Specify one of the following memory values:
Note: For details about the memory values, see the Tuning Live Data Map Performance How-To Library article. |
Property | Description |
---|---|
Username | Salesforce username. |
Password | Password for the Salesforce user name. |
Service_URL | URL of the Salesforce service that you want to access. |
Property | Description |
---|---|
Memory | Specify the memory value required to run a scanner job. Specify one of the following memory values:
Note: For details about the memory values, see the Tuning Live Data Map Performance How-To Library article. |