Connections Overview
Define the connections that you want to use to access data in Kafka brokers, JMS servers, HDFS files, Hive tables, Amazon Kinesis streams, MapR Streams, or HBase resources. You can create the connections by using the Developer tool or the infacmd command line program.
You can create the following types of connections:
- Hadoop
Create a Hadoop connection to run mappings on the Hadoop cluster. Select the Hadoop connection when you select the Hadoop run-time environment. You must also select the Hadoop connection to validate a mapping that runs on the Hadoop cluster.
For more information about the Hadoop connection properties, see the Informatica Big Data Management User Guide.
- HBase
Create an HBase connection to write data to an HBase resource.
- HDFS
Create an HDFS connection to write data to an HDFS binary or sequence file.
- Hive
Create a Hive connection to write data to Hive tables.
For more information, see the Informatica Big Data Management Administrator Guide.
- JDBC
Create a JDBC connection when you perform a lookup on a relational database using Sqoop.
For more information about the JDBC connection properties, see the Informatica Big Data Management User Guide.
- Microsoft Azure Data Lake Store
Create a Microsoft Azure Data Lake Store connection to write to a Microsoft Azure Data Lake Store.
- Messaging
Create a Messaging connection to access data as it becomes available and to run a streaming mapping on the Spark engine. You can create the following types of messaging connections:
  - AmazonKinesis. Create an AmazonKinesis connection to read from Amazon Kinesis Streams or write to Amazon Kinesis Firehose Delivery Streams.
  - AzureEventHub. Create an AzureEventHub connection to read from or write to Microsoft Azure Event Hubs.
  - JMS. Create a JMS connection to read from or write to a JMS server.
  - Kafka. Create a Kafka connection to read from or write to a Kafka broker.
  - MapRStreams. Create a MapRStreams connection to read from or write to MapR Streams.
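As a sketch of creating one of these connections from the command line rather than the Developer tool, the following example uses the infacmd isp CreateConnection command. The domain name, user name, connection name, and the option string are placeholders, not values from this guide; run the command with -h, or see the Informatica Command Reference, for the exact connection-type name and option names that your version supports.

```shell
# Hypothetical example: create a Kafka messaging connection with infacmd.
# All values below (domain, user, connection name, options) are placeholders.
infacmd.sh isp CreateConnection \
  -dn MyDomain \
  -un Administrator \
  -pd MyPassword \
  -cn kafka_conn \
  -cid kafka_conn \
  -ct Kafka \
  -o "option_name='option_value'"
```

The same command form applies to the other connection types listed above; only the connection type argument and the options change.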