Supported sources

The sources that Mass Ingestion Applications supports depend on whether the application ingestion task transfers a point-in-time snapshot of data in a batch initial load operation or loads incremental change data from a specific start point.
The following list shows the source types that Mass Ingestion Applications supports, along with the load operations supported for each source type:
- Adobe Analytics: Initial load, incremental load, and combined initial and incremental load
- Google Analytics: Initial load, incremental load, and combined initial and incremental load
- Marketo: Initial load, incremental load, and combined initial and incremental load
- Microsoft Dynamics 365: Initial load, incremental load, and combined initial and incremental load
- NetSuite: Initial load, incremental load, and combined initial and incremental load
- Oracle Fusion Cloud Applications:
  - REST: Initial load, incremental load, and combined initial and incremental load
  - BICC: Initial load, incremental load, and combined initial and incremental load
- Salesforce: Initial load, incremental load, and combined initial and incremental load
- Salesforce Marketing Cloud: Initial load
- SAP ERP Central Component (SAP ECC): Initial load, incremental load, and combined initial and incremental load, depending on the connection type
- SAP ODP:
  - SAP ECC: Initial load, incremental load, and combined initial and incremental load
  - SAP S/4HANA: Initial load, incremental load, and combined initial and incremental load
- SAP Mass Ingestion:
  - SAP ECC: Initial load, incremental load, and combined initial and incremental load for the Oracle database. Certified on the Snowflake target.
  - SAP S/4HANA: Initial load and incremental load for the HANA database. Certified on the Snowflake target.
- ServiceNow: Initial load, incremental load, and combined initial and incremental load
- Workday: Initial load, incremental load, and combined initial and incremental load
- Zendesk: Initial load, incremental load, and combined initial and incremental load
To determine the connectors to use for the source types, see Connectors and Connections > Mass Ingestion Applications connectors.

Guidelines for Google Analytics sources

Consider the following guidelines when you use Google Analytics sources:

Guidelines for Marketo sources

Consider the following guidelines when you use Marketo sources:

Guidelines for Microsoft Dynamics 365 sources

Consider the following guidelines when you use Microsoft Dynamics 365 sources:

Guidelines for NetSuite sources

Consider the following guidelines when you use NetSuite sources:

Guidelines for Oracle Fusion Cloud sources

Consider the following guidelines when you use Oracle Fusion Cloud sources:

Guidelines for Salesforce sources

Consider the following guidelines when you use Salesforce sources:

Guidelines for Salesforce Marketing Cloud sources

Consider the following guidelines when you use Salesforce Marketing Cloud sources:

Guidelines for SAP ECC and SAP S/4HANA sources using the SAP ODP Extractor connector

Consider the following guidelines when you use SAP ECC or SAP S/4HANA sources with the SAP ODP Extractor connector:

Configuring SAP User Authorization

Configure the SAP user account to process SAP table data.
The following list describes the authorizations required for each read object:
- S_BTCH_JOB: DELE, LIST, PLAN, SHOW. Set Job Operation to RELE.
- S_PROGRAM: BTCSUBMIT, SUBMIT
- S_RFC: SYST, SDTX, SDIFRUNTIME, /INFADI/TBLRDR, RFC1
- S_TABU_DIS / S_TABU_NUM: Provide the SAP table name from which you want to read data.

Installing SAP ODP Connector Transport Files

Install the SAP ODP Extractor transport files on the SAP machines that you want to access. Before you install the transports on your production system, install and test the transports in a development system.
Verify that you install the latest transport files to extract data from the SAP ODP object.
Install the following data file and cofile to read data from the SAP ODP object:
- Data file and cofile names: K900426.IN7 (cofile) and R900426.IN7 (data file)
- Transport request: IN7K900426
- Functionality: Install the transports only when you want to read from an SAP ODP object that supports hierarchy.
You can use the SAP ODP Extractor Connector without installing the SAP ODP Extractor transport files for objects that do not support hierarchy.

Installing Transport Files

Install the latest transport files from a Secure Agent directory to read from a Unicode SAP system. The transport files apply to SAP ERP 6.0 EHP7 systems or later.
    1. Find the transport files in the following directory on the Secure Agent machine:
    <Informatica Secure Agent installation directory>\downloads\package-SAPODP.<Latest version>\package\sapodp\sap-transport
    2. Copy the cofile transport file to the Cofile directory in the SAP transport management directory on each SAP machine that you want to access.
    The cofile transport file uses the following naming convention: <number>.<sap-system>.
    3. Copy the data transport file to the Data directory in the SAP transport management directory on each SAP machine that you want to access.
    The data transport file uses the following naming convention: <number>.<sap-system>.
    4. To import the transports to SAP, in the STMS, click Extras > Other Requests > Add and add the transport request to the system queue.
    5. In the Add Transport Request to Import Queue dialog box, enter the request number for the cofile transport.
    The request number inverts the order of the renamed cofile as follows: <sap-system><number>.
    6. In the Request area of the import queue, select the transport request number that you added, and click Import.
    7. If you are upgrading from a previous version of the Informatica transports, select the Overwrite Originals option.

Guidelines for SAP sources using the SAP Mass Ingestion connector

The SAP Mass Ingestion connector supports the following databases:
- Oracle, for SAP ECC sources
- SAP HANA, for SAP S/4HANA sources

Source preparation

To use Oracle sources in application ingestion tasks, first prepare the source database and review the usage considerations.

Usage considerations

Installing transport files

Install the latest transport files from a Secure Agent directory to read from a Unicode SAP system. The transport files apply to SAP ECC 5.0 systems or later.
    1. Find the transport files in the following directory on the Secure Agent machine:
    <Secure_Agent>\downloads\package-SAPAmi.<version>\package\sapami\SAPTableReader\
    2. Copy the cofile transport file to the Cofile directory in the SAP transport management directory on each SAP machine that you want to access.
    The cofile transport file uses the following naming convention: K<number>.EP6.
    3. Copy the data transport file to the Data directory in the SAP transport management directory on each SAP machine that you want to access.
    The data transport file uses the following naming convention: R<number>.EP6.
    4. To import the transports to SAP, in the STMS, click Extras > Other Requests > Add and add the transport request to the system queue.
    5. In the Add Transport Request to Import Queue dialog box, enter the request number for the cofile transport.
    The request number inverts the order of the renamed cofile as follows: EP6K<number>.
    For example, for a cofile transport file renamed as K900215.EP6, enter the request number as EP6K900215.
    6. In the Request area of the import queue, select the transport request number that you added, and click Import.
    7. If you are upgrading from a previous version of the Informatica transports, select the Overwrite Originals option.

Oracle privileges

To deploy and run an application ingestion task that has an Oracle source, the source connection must specify a Mass Ingestion Applications user who has the privileges required for the ingestion load type.
Privileges for incremental load processing with log-based CDC
Note: If the Oracle logs are managed by ASM, the user must have SYSASM or SYSDBA authority.
For an application ingestion task that performs an incremental load or combined initial and incremental load using the log-based CDC method, ensure that the Mass Ingestion Applications user (cmid_user) has been granted the following privileges:
GRANT CREATE SESSION TO <cmid_user>;

GRANT SELECT ON table TO <cmid_user>; -- For each source table created by user
GRANT EXECUTE ON DBMS_FLASHBACK TO <cmid_user>;

-- The following grant is required for combined initial and incremental loads only. Do not
-- use ANY TABLE unless your security policy allows it.
GRANT FLASHBACK ON table|ANY TABLE TO <cmid_user>;

-- Include the following grant only if you want to execute the CDC script that enables
-- supplemental logging from the user interface. If you manually enable supplemental
-- logging, this grant is not needed.
GRANT ALTER table|ANY TABLE TO <cmid_user>;

GRANT SELECT ON DBA_CONSTRAINTS TO <cmid_user>;
GRANT SELECT ON DBA_CONS_COLUMNS TO <cmid_user>;
GRANT SELECT ON DBA_INDEXES TO <cmid_user>;
GRANT SELECT ON DBA_LOG_GROUPS TO <cmid_user>;
GRANT SELECT ON DBA_LOG_GROUP_COLUMNS TO <cmid_user>;
GRANT SELECT ON DBA_OBJECTS TO <cmid_user>;
GRANT SELECT ON DBA_OBJECT_TABLES TO <cmid_user>;
GRANT SELECT ON DBA_TABLES TO <cmid_user>;
GRANT SELECT ON DBA_TABLESPACES TO <cmid_user>;
GRANT SELECT ON DBA_USERS TO <cmid_user>;

GRANT SELECT ON "PUBLIC".V$ARCHIVED_LOG TO <cmid_user>;
GRANT SELECT ON "PUBLIC".V$CONTAINERS TO <cmid_user>; -- For Oracle multitenant environments
GRANT SELECT ON "PUBLIC".V$DATABASE TO <cmid_user>;
GRANT SELECT ON "PUBLIC".V$DATABASE_INCARNATION TO <cmid_user>;
GRANT SELECT ON "PUBLIC".V$ENCRYPTION_WALLET TO <cmid_user>; -- For Oracle TDE access
GRANT SELECT ON "PUBLIC".V$LOG TO <cmid_user>;
GRANT SELECT ON "PUBLIC".V$LOGFILE TO <cmid_user>;
GRANT SELECT ON "PUBLIC".V$PARAMETER TO <cmid_user>;
GRANT SELECT ON "PUBLIC".V$PDBS TO <cmid_user>; -- For Oracle multitenant environments
GRANT SELECT ON "PUBLIC".V$SPPARAMETER TO <cmid_user>;
GRANT SELECT ON "PUBLIC".V$STANDBY_LOG TO <cmid_user>;
GRANT SELECT ON "PUBLIC".V$THREAD TO <cmid_user>;
GRANT SELECT ON "PUBLIC".V$TRANSACTION TO <cmid_user>;
GRANT SELECT ON "PUBLIC".V$TRANSPORTABLE_PLATFORM TO <cmid_user>;
GRANT SELECT ON "PUBLIC".V$VERSION TO <cmid_user>;

GRANT SELECT ON SYS.ATTRCOL$ TO <cmid_user>;
GRANT SELECT ON SYS.CCOL$ TO <cmid_user>;
GRANT SELECT ON SYS.CDEF$ TO <cmid_user>;
GRANT SELECT ON SYS.COL$ TO <cmid_user>;
GRANT SELECT ON SYS.COLTYPE$ TO <cmid_user>;
GRANT SELECT ON SYS.IDNSEQ$ TO <cmid_user>;
GRANT SELECT ON SYS.IND$ TO <cmid_user>;
GRANT SELECT ON SYS.INDPART$ TO <cmid_user>;
GRANT SELECT ON SYS.OBJ$ TO <cmid_user>;
GRANT SELECT ON SYS.PARTOBJ$ TO <cmid_user>;
GRANT SELECT ON SYS.RECYCLEBIN$ TO <cmid_user>;
GRANT SELECT ON SYS.TAB$ TO <cmid_user>;
GRANT SELECT ON SYS.TABCOMPART$ TO <cmid_user>;
GRANT SELECT ON SYS.TABPART$ TO <cmid_user>;
GRANT SELECT ON SYS.TABSUBPART$ TO <cmid_user>;

-- Also ensure that you have access to the following ALL_* views:
ALL_CONSTRAINTS
ALL_CONS_COLUMNS
ALL_ENCRYPTED_COLUMNS
ALL_INDEXES
ALL_IND_COLUMNS
ALL_OBJECTS
ALL_TABLES
ALL_TAB_COLS
ALL_TAB_PARTITIONS
ALL_USERS
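If the user is missing access to any of these views, you can grant it explicitly. The following is a minimal sketch that follows the same grant pattern used elsewhere in this section; adjust the view list to your environment:
GRANT SELECT ON ALL_CONSTRAINTS TO <cmid_user>;
GRANT SELECT ON ALL_CONS_COLUMNS TO <cmid_user>;
GRANT SELECT ON ALL_ENCRYPTED_COLUMNS TO <cmid_user>;
GRANT SELECT ON ALL_INDEXES TO <cmid_user>;
GRANT SELECT ON ALL_IND_COLUMNS TO <cmid_user>;
GRANT SELECT ON ALL_OBJECTS TO <cmid_user>;
GRANT SELECT ON ALL_TABLES TO <cmid_user>;
GRANT SELECT ON ALL_TAB_COLS TO <cmid_user>;
GRANT SELECT ON ALL_TAB_PARTITIONS TO <cmid_user>;
GRANT SELECT ON ALL_USERS TO <cmid_user>;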
Privileges for incremental load processing with query-based CDC
For an application ingestion task that performs an incremental load or combined initial and incremental load using the query-based CDC method, ensure that the user has the following privileges at minimum:
GRANT CREATE SESSION TO <cmid_user>;

GRANT SELECT ON DBA_INDEXES TO <cmid_user>;
GRANT SELECT ON DBA_OBJECT_TABLES TO <cmid_user>;
GRANT SELECT ON DBA_OBJECTS TO <cmid_user>;
GRANT SELECT ON DBA_TABLES TO <cmid_user>;
GRANT SELECT ON DBA_USERS TO <cmid_user>;
GRANT SELECT ON DBA_VIEWS TO <cmid_user>; -- Only if you unload data from views

GRANT SELECT ANY TABLE TO <cmid_user>;
-or-
GRANT SELECT ON table TO <cmid_user>; -- For each source table created by user

GRANT SELECT ON ALL_CONSTRAINTS TO <cmid_user>;
GRANT SELECT ON ALL_CONS_COLUMNS TO <cmid_user>;
GRANT SELECT ON ALL_ENCRYPTED_COLUMNS TO <cmid_user>;
GRANT SELECT ON ALL_IND_COLUMNS TO <cmid_user>;
GRANT SELECT ON ALL_INDEXES TO <cmid_user>;
GRANT SELECT ON ALL_OBJECTS TO <cmid_user>;
GRANT SELECT ON ALL_TAB_COLS TO <cmid_user>;
GRANT SELECT ON ALL_USERS TO <cmid_user>;

GRANT SELECT ON "PUBLIC"."V$DATABASE" TO cmid_user;
GRANT SELECT ON "PUBLIC"."V$CONTAINERS" TO cmid_user;
GRANT SELECT ON SYS.ATTRCOL$ TO <cmid_user>;
GRANT SELECT ON SYS.CCOL$ TO <cmid_user>;
GRANT SELECT ON SYS.CDEF$ TO <cmid_user>;
GRANT SELECT ON SYS.COL$ TO <cmid_user>;
GRANT SELECT ON SYS.COLTYPE$ TO <cmid_user>;
GRANT SELECT ON SYS.IND$ TO <cmid_user>;
GRANT SELECT ON SYS.IDNSEQ$ TO cmid_user;
GRANT SELECT ON SYS.OBJ$ TO <cmid_user>;
GRANT SELECT ON SYS.RECYCLEBIN$ TO <cmid_user>;
GRANT SELECT ON SYS.TAB$ TO <cmid_user>;
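If you use table-level grants instead of SELECT ANY TABLE, you can generate the per-table grants in a loop rather than writing them by hand. A minimal PL/SQL sketch, run as a DBA or as the schema owner, assuming a hypothetical APP_SCHEMA schema that owns the source tables:
begin
  -- APP_SCHEMA is a placeholder; substitute the schema that owns your source tables.
  for t in (select owner, table_name from all_tables where owner = 'APP_SCHEMA') loop
    execute immediate 'GRANT SELECT ON "' || t.owner || '"."' || t.table_name || '" TO cmid_user';
  end loop;
end;
/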
Privileges for initial load processing
For an application ingestion task that performs an initial load, ensure that the user has the following privileges at minimum:
GRANT CREATE SESSION TO <cmid_user>;

GRANT SELECT ON DBA_INDEXES TO <cmid_user>;
GRANT SELECT ON DBA_OBJECT_TABLES TO <cmid_user>;
GRANT SELECT ON DBA_OBJECTS TO <cmid_user>;
GRANT SELECT ON DBA_TABLES TO <cmid_user>;
GRANT SELECT ON DBA_USERS TO <cmid_user>;
GRANT SELECT ON DBA_VIEWS TO <cmid_user>; -- Only if you unload data from views

GRANT SELECT ANY TABLE TO <cmid_user>;
-or-
GRANT SELECT ON table TO <cmid_user>; -- For each source table created by user

GRANT SELECT ON ALL_CONSTRAINTS TO <cmid_user>;
GRANT SELECT ON ALL_CONS_COLUMNS TO <cmid_user>;
GRANT SELECT ON ALL_ENCRYPTED_COLUMNS TO <cmid_user>;
GRANT SELECT ON ALL_IND_COLUMNS TO <cmid_user>;
GRANT SELECT ON ALL_INDEXES TO <cmid_user>;
GRANT SELECT ON ALL_OBJECTS TO <cmid_user>;
GRANT SELECT ON ALL_TAB_COLS TO <cmid_user>;
GRANT SELECT ON ALL_USERS TO <cmid_user>;

GRANT SELECT ON "PUBLIC"."V$DATABASE" TO cmid_user;
GRANT SELECT ON "PUBLIC"."V$CONTAINERS" TO cmid_user;
GRANT SELECT ON SYS.ATTRCOL$ TO <cmid_user>;
GRANT SELECT ON SYS.CCOL$ TO <cmid_user>;
GRANT SELECT ON SYS.CDEF$ TO <cmid_user>;
GRANT SELECT ON SYS.COL$ TO <cmid_user>;
GRANT SELECT ON SYS.COLTYPE$ TO <cmid_user>;
GRANT SELECT ON SYS.IND$ TO <cmid_user>;
GRANT SELECT ON SYS.IDNSEQ$ TO cmid_user;
GRANT SELECT ON SYS.OBJ$ TO <cmid_user>;
GRANT SELECT ON SYS.RECYCLEBIN$ TO <cmid_user>;
GRANT SELECT ON SYS.TAB$ TO <cmid_user>;n
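To check which of these grants are already in place before you deploy the task, you can query the data dictionary as a privileged user. A sketch, assuming cmid_user is the Mass Ingestion Applications user:
select table_name, privilege
from dba_tab_privs
where grantee = 'CMID_USER' -- use the stored case; unquoted user names are stored in uppercase
order by table_name;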

Oracle privileges for Amazon RDS for Oracle sources

If you have an Amazon RDS for Oracle source, you must grant certain privileges to the Mass Ingestion Applications user.
Important: You must log in to Amazon RDS under the master username to run GRANT statements and procedures.
Grant the SELECT privilege, at minimum, on objects and system tables that are required for CDC processing to the Mass Ingestion Applications user (cmid_user). Some additional grants are required in certain situations.
Use the following GRANT statements:
GRANT SELECT ON "PUBLIC"."V$ARCHIVED_LOG" TO "cmid_user";

GRANT SELECT ON "PUBLIC"."V$DATABASE" TO "cmid_user";
GRANT SELECT ON "PUBLIC"."V$LOG" TO "cmid_user";
GRANT SELECT ON "PUBLIC"."V$LOGFILE" TO "cmid_user";
GRANT SELECT ON "PUBLIC"."V$TRANSPORTABLE_PLATFORM" TO "cmid_user";
GRANT SELECT ON "PUBLIC"."V$THREAD" TO "cmid_user";
GRANT SELECT ON "PUBLIC"."V$DATABASE_INCARNATION" TO "cmid_user";
GRANT SELECT ON "PUBLIC"."V$TRANSACTION" TO "cmid_user";

GRANT SELECT ON "SYS"."DBA_CONS_COLUMNS" TO "cmid_user";
GRANT SELECT ON "SYS"."DBA_CONSTRAINTS" TO "cmid_user";
GRANT SELECT ON DBA_INDEXES TO "cmid_user";
GRANT SELECT ON "SYS"."DBA_LOG_GROUP_COLUMNS" TO "cmid_user";
GRANT SELECT ON "SYS"."DBA_TABLESPACES" TO "cmid_user";

GRANT SELECT ON "SYS"."OBJ$" TO "cmid_user";
GRANT SELECT ON "SYS"."TAB$" TO "cmid_user";
GRANT SELECT ON "SYS"."IND$" TO "cmid_user";
GRANT SELECT ON "SYS"."COL$" TO "cmid_user";

GRANT SELECT ON "SYS"."PARTOBJ$" TO "cmid_user";
GRANT SELECT ON "SYS"."TABPART$" TO "cmid_user";
GRANT SELECT ON "SYS"."TABCOMPART$" TO "cmid_user";
GRANT SELECT ON "SYS"."TABSUBPART$" TO "cmid_user";
COMMIT;

/* For combined load jobs:*/
GRANT EXECUTE ON DBMS_FLASHBACK TO "cmid_user";

/*To provide read access to the Amazon RDS online and archived redo logs:*/
GRANT READ ON DIRECTORY ONLINELOG_DIR TO "cmid_user";
GRANT READ ON DIRECTORY ARCHIVELOG_DIR TO "cmid_user";
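To verify that the directory objects resolve correctly, you can run a query such as the following sketch while connected as cmid_user:
select directory_name, directory_path
from all_directories
where directory_name in ('ONLINELOG_DIR', 'ARCHIVELOG_DIR');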
Additionally, log in as the master user and run the following Amazon RDS procedures to grant the SELECT privilege on some more objects:
begin
rdsadmin.rdsadmin_util.grant_sys_object(
p_obj_name => 'DBA_TABLES',
p_grantee => 'cmid_user',
p_privilege => 'SELECT');
end;
/
begin
rdsadmin.rdsadmin_util.grant_sys_object(
p_obj_name => 'DBA_OBJECTS',
p_grantee => 'cmid_user',
p_privilege => 'SELECT');
end;
/
begin
rdsadmin.rdsadmin_util.grant_sys_object(
p_obj_name => 'DBA_OBJECT_TABLES',
p_grantee => 'cmid_user',
p_privilege => 'SELECT');
end;
/
begin
rdsadmin.rdsadmin_util.grant_sys_object(
p_obj_name => 'DBA_VIEWS',
p_grantee => 'cmid_user',
p_privilege => 'SELECT');
end;
/
begin
rdsadmin.rdsadmin_util.grant_sys_object(
p_obj_name => 'DBA_USERS',
p_grantee => 'cmid_user',
p_privilege => 'SELECT',
p_grant_option => false);
end;
/
begin
rdsadmin.rdsadmin_util.grant_sys_object(
p_obj_name => 'V_$CONTAINERS',
p_grantee => 'cmid_user',
p_privilege => 'SELECT');
end;
/
begin
rdsadmin.rdsadmin_util.grant_sys_object(
p_obj_name => 'V_$PARAMETER',
p_grantee => 'cmid_user',
p_privilege => 'SELECT');
end;
/
begin
rdsadmin.rdsadmin_util.grant_sys_object(
p_obj_name => 'V_$SPPARAMETER',
p_grantee => 'cmid_user',
p_privilege => 'SELECT');
end;
/
begin
rdsadmin.rdsadmin_util.grant_sys_object(
p_obj_name => 'V_$STANDBY_LOG',
p_grantee => 'cmid_user',
p_privilege => 'SELECT');
end;
/
begin
rdsadmin.rdsadmin_util.grant_sys_object(
p_obj_name => 'V_$VERSION',
p_grantee => 'cmid_user',
p_privilege => 'SELECT');
end;
/
begin
rdsadmin.rdsadmin_util.grant_sys_object(
p_obj_name => 'ALL_CONS_COLUMNS',
p_grantee => 'cmid_user',
p_privilege => 'SELECT');
end;
/
begin
rdsadmin.rdsadmin_util.grant_sys_object(
p_obj_name => 'ALL_CONSTRAINTS',
p_grantee => 'cmid_user',
p_privilege => 'SELECT');
end;
/
begin
rdsadmin.rdsadmin_util.grant_sys_object(
p_obj_name => 'ALL_OBJECTS',
p_grantee => 'cmid_user',
p_privilege => 'SELECT');
end;
/
begin
rdsadmin.rdsadmin_util.grant_sys_object(
p_obj_name => 'ALL_TABLES',
p_grantee => 'cmid_user',
p_privilege => 'SELECT',
p_grant_option => false);
end;
/
begin
rdsadmin.rdsadmin_util.grant_sys_object(
p_obj_name => 'ALL_TAB_PARTITIONS',
p_grantee => 'cmid_user',
p_privilege => 'SELECT',
p_grant_option => false);
end;
/
begin
rdsadmin.rdsadmin_util.grant_sys_object(
p_obj_name => 'ALL_USERS',
p_grantee => 'cmid_user',
p_privilege => 'SELECT');
end;
/
begin
rdsadmin.rdsadmin_util.grant_sys_object(
p_obj_name => 'ATTRCOL$',
p_grantee => 'cmid_user',
p_privilege => 'SELECT');
end;
/
begin
rdsadmin.rdsadmin_util.grant_sys_object(
p_obj_name => 'CCOL$',
p_grantee => 'cmid_user',
p_privilege => 'SELECT');
end;
/
begin
rdsadmin.rdsadmin_util.grant_sys_object(
p_obj_name => 'COLTYPE$',
p_grantee => 'cmid_user',
p_privilege => 'SELECT');
end;
/
begin
rdsadmin.rdsadmin_util.grant_sys_object(
p_obj_name => 'INDPART$',
p_grantee => 'cmid_user',
p_privilege => 'SELECT');
end;
/
begin
rdsadmin.rdsadmin_util.grant_sys_object(
p_obj_name => 'IDNSEQ$',
p_grantee => 'cmid_user',
p_privilege => 'SELECT');
end;
/
begin
rdsadmin.rdsadmin_util.grant_sys_object(
p_obj_name => 'CDEF$',
p_grantee => 'cmid_user',
p_privilege => 'SELECT');
end;
/
begin
rdsadmin.rdsadmin_util.grant_sys_object(
p_obj_name => 'RECYCLEBIN$',
p_grantee => 'cmid_user',
p_privilege => 'SELECT');
end;
/
/* Required only for Amazon RDS for Oracle 21c, which supports PDBs */
begin
rdsadmin.rdsadmin_util.grant_sys_object(
p_obj_name => 'V_$PDBS',
p_grantee => 'cmid_user',
p_privilege => 'SELECT');
end;
/
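Because every rdsadmin.rdsadmin_util.grant_sys_object call has the same shape, you can optionally consolidate them into one anonymous block. A sketch, shortened to three objects for illustration; extend the list to match the grants above:
begin
  for obj in (select column_value as name
              from table(sys.odcivarchar2list('DBA_TABLES', 'DBA_OBJECTS', 'DBA_VIEWS'))) loop
    -- Grant SELECT on each listed object to the Mass Ingestion Applications user.
    rdsadmin.rdsadmin_util.grant_sys_object(
      p_obj_name => obj.name,
      p_grantee => 'cmid_user',
      p_privilege => 'SELECT');
  end loop;
end;
/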

Configuring BFILE access to Oracle redo logs in the Oracle file system

If you store redo logs in the local Oracle server file system and want to access the logs by using Oracle directory objects with BFILEs, perform the following configuration tasks:
First complete the usual Oracle source preparation tasks, which are not specific to BFILE access. Then, for BFILE access, perform the following steps:
  1. Query the Oracle database for the online and archived redo log locations in the Oracle server file system. You can use the following example queries:
     To get the location of the online redo logs:
     select * from v$logfile;
     To get the archive log destination:
     select dest_id, dest_name, destination, status from V$ARCHIVE_DEST;
  2. Create the ONLINELOG_DIR and ARCHIVELOG_DIR directory objects that point to the locations of the log files from step 1. An Oracle directory object specifies a logical alias name for a physical directory in the Oracle server file system under which the log files to be accessed are located. For example:
     CREATE DIRECTORY ONLINELOG_DIR AS '/u01/oracle/data';
     CREATE DIRECTORY ARCHIVELOG_DIR AS '/u01/oracle/archivedata';
     Note: If you plan to set the reader mode to ARCHIVEONLY in the Oracle Database Ingestion connection to read changes only from the archive logs, you do not need to create the ONLINELOG_DIR directory object.
     The Oracle database does not verify that the directories you specify exist. Make sure that you specify valid directories that exist in the Oracle file system.
  3. To verify that the directory objects were created with the correct file system paths for the redo logs, issue a select statement such as:
     select * from all_directories;
     OWNER    DIRECTORY_NAME    DIRECTORY_PATH
     -------- ----------------- ----------------------------------
     SYS      ARCHIVELOG_DIR    /u01/oracle/data/JO112DTL
     SYS      ONLINELOG_DIR     /u01/oracle/data/JO112DTL
  4. Grant read access on the ONLINELOG_DIR and ARCHIVELOG_DIR directory objects to the Mass Ingestion Applications user who is specified in the Oracle Database Ingestion connection properties. For example:
     grant read on directory "ARCHIVELOG_DIR" to "cmid_user";
     grant read on directory "ONLINELOG_DIR" to "cmid_user";
  5. In the Oracle Database Ingestion connection properties, select the BFILE Access check box.
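After you complete these steps, you can optionally confirm that cmid_user can read files through the directory objects. A minimal sketch; log_file_name.arc is a hypothetical archived log file name:
declare
  f bfile := bfilename('ARCHIVELOG_DIR', 'log_file_name.arc'); -- hypothetical file name
begin
  -- Run with serveroutput enabled to see the message.
  if dbms_lob.fileexists(f) = 1 then
    dbms_output.put_line('BFILE access to ARCHIVELOG_DIR works');
  end if;
end;
/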

Oracle archive log retention considerations

Application ingestion incremental and combined initial and incremental load jobs must be able to access transaction data in Oracle online and archive redo logs. If the logs are not available, application ingestion jobs end with an error.
Typically, the Oracle DBA sets the archive log retention period based on your organization's particular business needs and Oracle environment. Make sure that the source archive logs are retained for the longest period for which you expect change capture to be stopped or latent, plus about 1 hour, so that the logs will be available for restart processing.
To determine if the current log retention policy in your environment is sufficient to accommodate application ingestion change capture processing, consider the following factors:
If the archive logs are not available when you need to restart capture processing in the logs, you can ask your DBA to restore them and to modify the retention period if necessary. Otherwise, perform another initial load to re-materialize the target and then start incremental change data processing again. However, in this case, you might lose some changes.
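To see how far back the currently available archived logs reach, a DBA can run a query such as the following sketch and compare the result with the longest expected capture outage:
select min(first_time) as oldest_available_log
from v$archived_log
where deleted = 'NO' and status = 'A';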

Guidelines for ServiceNow sources

Consider the following guidelines when you use ServiceNow sources:

Guidelines for Workday sources

Consider the following guidelines when you use Workday sources:
Mass Ingestion Applications provides the option to extract Workday data through the following web services:
- Workday Web Services
- Workday RaaS
The option to select the required web service appears on the Source tab of the application ingestion task wizard.

Guidelines for Workday Web Services

Guidelines for Workday RaaS

Guidelines for Zendesk sources

Consider the following guidelines when you use Zendesk sources: