Sample prompts for ingestion and replication tasks
In the first prompt, you can enter a natural language statement for creating an application ingestion and replication task or a database ingestion and replication task. For best results, include key details such as the source and target connection names and schemas. Optionally, include criteria for selecting the source tables or objects.
The following sample prompts demonstrate the various ways you can phrase a request; the sketches after the list illustrate some of the selection and output behaviors the prompts describe:
I want to perform an initial load of data from Salesforce to Snowflake using the xxxx source connection and yyyy target connection. The target schema is def. Select source tables that have names starting with Sales, but do not select the Sales1 and Sales5 tables. Also select tables that have names starting with Cust.
Create a combined initial and incremental load task with an Oracle source and PostgreSQL target. Use the source schema SYS and the source connection Ora1. Use the target schema QATEST and the target connection Postgres1. Include source tables with names matching AB*.
Create a task that performs both an initial load of data and then captures changes in near real time from an Oracle database to a PostgreSQL target. Use the Oracle source schema abc and the source connection xxxx. Also use the target schema def and target connection yyyy. Include source tables that have names starting with SALES but exclude tables that have names starting with SALES_LASTYR.
Create a task that loads data in batch from Workday using the xxxx connection to Amazon S3 using the yyyy connection.
Create a data flow from SAP HANA source tables in the abc schema to Kafka. Use source connection xxxx and target connection yyyy.
From the SAP source system accessed with connection xxxx, transfer data to the Amazon S3 target bucket. Exclude tables with names matching pattern TEMP_* and include file partitioning by date with a maximum of 100 files. The target file format is Parquet.
Set up a data pipeline from an SAP source system to an Amazon S3 bucket using the connection xxxx. Use the Avro format for the output for efficient storage.
Load from Workday to Snowflake using the source connection xxxx and target connection yyyy. This task should be configured to pull records and prioritize the ingestion of tables with the prefix INC_PAYROLL_. If schema changes occur, such as column drops or renames, alert the maintenance team but continue processing.
Perform change data capture from Oracle schema abc with connection xxxx and replicate the data to the Kafka target using the yyyy target connection. Enable comprehensive logging for all changes and use a high-throughput Kafka topic to handle the volume. Filter out any transient records marked for deletion within the source database.
Generate a task that moves data from the PostgreSQL database accessed with the connection xxxx to Amazon S3. Select the source tables in the abc schema.
Set up a task that moves data in bulk from the PostgreSQL database using the source connection xxxx to the SQL Server target using the target connection yyyy, to materialize the target.
Move new DML records from Oracle source tables in the src schema to the Snowflake destination on an ongoing basis. Use the xxxx source connection and yyyy destination connection.
Generate a task to load data in bulk from PostgreSQL to Oracle. Use the source connection xxxx and source schema abc.
Read bulk data from NetSuite using the xxxx connection and load it into Kafka.
Transfer data at a point in time from Oracle to Kafka using the xxxx source connection and yyyy target connection. Source schema is abc.
Perform an initial data load from the Db2 source using the xxxx connection and schema abc to the Oracle target using the connection yyyy.
Move records from SAP using the xxxx source connection to Snowflake using the yyyy connection.
Replicate change records from an SAP source using the xxxx connection to an Oracle target using the yyyy connection. The Oracle target schema is def.
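Several of the prompts above select source tables with prefix and wildcard rules, such as "names starting with Sales, but not Sales1 and Sales5." The following Python sketch illustrates how such include and exclude rules resolve against a list of table names. It is an illustration of the selection semantics only, not the product's implementation; the resolve_selection helper and the table names are hypothetical.

```python
from fnmatch import fnmatch

def resolve_selection(tables, include_patterns, exclude_names):
    """Return tables matching any include pattern, minus explicit exclusions.

    Hypothetical helper: shows the selection semantics described in the
    sample prompts, not how the ingestion service evaluates its rules.
    """
    selected = [t for t in tables
                if any(fnmatch(t, p) for p in include_patterns)]
    return [t for t in selected if t not in exclude_names]

# "Tables starting with Sales, except Sales1 and Sales5, plus tables
# starting with Cust" (from the first Salesforce-to-Snowflake prompt).
source_tables = ["Sales1", "Sales2", "Sales5", "SalesHist",
                 "Cust", "CustAddr", "Orders"]
print(resolve_selection(source_tables,
                        include_patterns=["Sales*", "Cust*"],
                        exclude_names={"Sales1", "Sales5"}))
# ['Sales2', 'SalesHist', 'Cust', 'CustAddr']
```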
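In the Amazon S3 prompts, "file partitioning by date" with Parquet output implies a directory-per-date layout in the target bucket. Here is a minimal sketch using pandas with the pyarrow engine, assuming a hypothetical load_date column and a local directory standing in for the S3 path; the 100-file maximum in the prompt would be enforced by the ingestion service, not by this call.

```python
import pandas as pd

# Hypothetical stand-in for one source table's extracted rows; the
# actual task reads the SAP source tables itself.
rows = pd.DataFrame({
    "order_id": [101, 102, 103],
    "amount": [25.0, 17.5, 99.9],
    "load_date": ["2024-01-01", "2024-01-01", "2024-01-02"],
})

# Writes one subdirectory per date value, for example
# target_output/load_date=2024-01-01/<part>.parquet. A real task
# would write to the S3 bucket instead of a local directory.
rows.to_parquet("target_output", partition_cols=["load_date"])
```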
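The Oracle-to-Kafka change data capture prompt asks the task to filter out transient records marked for deletion before publishing. As a rough sketch of that filtering step, assuming the kafka-python client, a hypothetical transient_delete flag on each change event, and a hypothetical oracle-cdc topic; the ingestion service applies this filter itself.

```python
import json
from kafka import KafkaProducer  # kafka-python client

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

def publish_change(event):
    """Forward a captured change unless it is a transient delete."""
    if event.get("transient_delete"):
        return  # drop records marked for deletion, per the prompt
    producer.send("oracle-cdc", value=event)

publish_change({"op": "UPDATE", "id": 7, "transient_delete": False})
publish_change({"op": "DELETE", "id": 8, "transient_delete": True})
producer.flush()
```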