Kafka Event Source Properties

When you create a Kafka event source, you create a Kafka consumer that reads Kafka messages. You can then use the event source in a process that consumes those messages.
For each Kafka connection that you create, you can add one or more event sources. An event source serves as a start event that listens or monitors a specified Kafka topic for new messages. After you define an event source for a Kafka connection, you can publish the connection. You can then access the event source in a process and deploy the process to consume the process objects generated by the event source downstream.
When you run a Kafka connection on the Cloud Server, the Kafka consumer can read Kafka messages up to a maximum size of 5 MB.
To create event sources for a Kafka connection, click Add Event Source on the Event Sources tab. Select the event source type as Kafka Consumer.
Note: If the content format specified in the event source of the Kafka connection does not match the format of the messages that the producer sends, the consumer process is not triggered. You can see the error details in the Catalina logs.
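One way to avoid this mismatch is to validate the payload on the producer side before sending it. The following sketch is a hypothetical helper, not part of the connector; it checks that a payload is valid JSON before it is produced to a topic whose event source uses the JSON content format:

```python
import json

def is_valid_json(payload: str) -> bool:
    """Return True if the payload parses as JSON.

    Producing only payloads that pass this check helps ensure that an
    event source configured with the JSON content format is triggered.
    """
    try:
        json.loads(payload)
        return True
    except ValueError:
        return False

print(is_valid_json('{"orderId": 42}'))  # True
print(is_valid_json('not json'))         # False
```

A similar check with an XML parser applies when the event source uses the XML content format.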
The following table describes the basic event source properties that you can configure:
Property
Description
Name
Required. The event source name that appears in the Process Designer. The name must be unique for the connection.
Description
Optional. A description for the Kafka event source that appears in the Process Designer.
Enabled
Select Yes to make the event source available immediately after it is published.
Select No to disable the event source until you are ready to use it.
Note: You must not update this value while the Kafka consumer process is running. Otherwise, you might see duplicate messages.
Default is Yes.
The following table describes the Kafka event source properties that you can configure:
Event Source Property
Description
Enable Load Balancing
Required. Determines whether the connection must be deployed on all the Secure Agents in a group or on the selected Secure Agent for load balancing. You must enable this option only if the Process Server uses a Secure Agent cluster configuration.
When you enable this option, the Process Server distributes the routes across different Secure Agent machines in a Secure Agent cluster to ensure load balancing.
Default is No.
Note: After you publish a connection and run a process, if you toggle the load balancing option, you might see duplicate messages. To avoid this issue, Informatica recommends that you create a new connection for load balancing.
Topic
Required. The name of the Kafka topic from which you want to read messages.
Group Id
Required. The ID of the consumer group that the Kafka consumer belongs to.
Content Format
Required. The format in which you want to read the messages.
Select one of the following options:
  • TEXT
  • XML
  • JSON
Default is TEXT.
Other Attributes
Optional. A list of additional Apache Camel configuration properties for the Kafka consumer.
Specify the properties as key-value pairs. Separate multiple properties with an ampersand character (&) without spaces.
For example, enter the following phrase to specify the maximum amount of data that the server returns for a fetch request and the maximum amount of time that the client waits for the response to a request:
fetchMaxBytes=50000000&consumerRequestTimeoutMs=30000
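The attribute string follows a simple key=value format joined by ampersands, so it can be assembled or split programmatically. The following sketch is illustrative only; the property names are the Camel options from the example above:

```python
from urllib.parse import parse_qsl

# Camel consumer options for the Other Attributes field (example values).
attributes = {
    "fetchMaxBytes": "50000000",
    "consumerRequestTimeoutMs": "30000",
}

# Join key=value pairs with '&' and no spaces, as the field expects.
other_attributes = "&".join(f"{key}={value}" for key, value in attributes.items())
print(other_attributes)  # fetchMaxBytes=50000000&consumerRequestTimeoutMs=30000

# The same string can be split back into pairs for inspection.
parsed = dict(parse_qsl(other_attributes))
assert parsed == attributes
```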
When you configure consumer-specific connection properties on both the Event Sources tab and the Properties tab, the connection properties on the Event Sources tab take precedence.
To increase the throughput and invoke more processes simultaneously, you can configure the following consumer-specific properties:
maxPollRecords. Specifies the maximum number of records that a single call to poll() returns from the broker per consumer. Default is 500.
maxPollIntervalMs. Specifies the maximum delay between invocations of poll() when you use consumer group management. Default is 300000 milliseconds (5 minutes).
partitionAssignor. Specifies the class name of the partition assignment strategy that the client uses to distribute partition ownership among consumer instances when group management is used. Default is org.apache.kafka.clients.consumer.RoundRobinAssignor.
For example, enter the following phrase to increase the throughput:
maxPollRecords=100&maxPollIntervalMs=480000&partitionAssignor=org.apache.kafka.clients.consumer.RoundRobinAssignor
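When you tune these values, ensure that the consumer can process a full batch of records within the poll interval; if it cannot, the broker considers the consumer failed and rebalances the group. The following sketch shows the per-record processing budget implied by the example values above; it is illustrative arithmetic, not part of the connector:

```python
# Example values from the attribute string above.
max_poll_records = 100
max_poll_interval_ms = 480_000  # 8 minutes

# Each batch of up to max_poll_records records must be processed within
# max_poll_interval_ms, or the consumer group rebalances.
budget_per_record_ms = max_poll_interval_ms / max_poll_records
print(budget_per_record_ms)  # 4800.0 ms per record, on average
```

If your process takes longer than this budget per message, reduce maxPollRecords or increase maxPollIntervalMs.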
For more information about the properties that you can specify, see the Apache Camel documentation.
You can view the status of each event source in the published connection. If the status of the event source is stopped, you can republish the connection and restart the event source. When you republish the connection, all the event sources in the connection start by default.
For more information about starting and stopping event sources in a listener-based connection, see Connectors for Cloud Application Integration and Monitor.