After you publish a Kafka connection on the Cloud Server, if the Kafka broker goes down after processing some messages and you restart the broker within 24 hours, you do not see duplicate messages. This guarantees data accuracy and integrity and ensures that no data is lost.
For example, suppose that after you publish a Kafka connection, the Kafka broker goes down after reading 500 messages but processing only 100 of them. If the broker is restarted and the connection is re-established within the next 24 hours, only the remaining 400 messages are processed.
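This resume-without-duplicates behavior follows from Kafka's committed consumer offsets: after a restart, consumption continues from the last committed offset rather than from the beginning. The following is a minimal, self-contained sketch of that mechanism; the `Broker` and `Consumer` classes are simplified stand-ins for illustration, not the product's actual implementation or the Kafka client API.

```python
# Simplified simulation of committed-offset resume behavior.
# These classes are illustrative stand-ins, not the real Kafka client.

class Broker:
    def __init__(self, messages):
        self.messages = messages      # the topic's message log
        self.committed_offset = 0     # last committed consumer offset

class Consumer:
    def __init__(self, broker):
        self.broker = broker
        self.processed = []

    def poll_and_process(self, max_messages=None):
        """Process messages starting at the committed offset,
        committing the offset after each processed message."""
        start = self.broker.committed_offset
        end = len(self.broker.messages)
        if max_messages is not None:
            end = min(end, start + max_messages)
        for offset in range(start, end):
            self.processed.append(self.broker.messages[offset])
            self.broker.committed_offset = offset + 1  # commit after processing

broker = Broker([f"msg-{i}" for i in range(500)])

# First run: the broker goes down after only 100 messages are processed.
consumer = Consumer(broker)
consumer.poll_and_process(max_messages=100)

# Restart within the window: a new consumer resumes from the committed
# offset, so only the remaining 400 messages are processed, with no
# duplicates of the first 100.
consumer2 = Consumer(broker)
consumer2.poll_and_process()

print(len(consumer.processed))    # 100
print(len(consumer2.processed))   # 400
```

Because each offset is committed only after its message is processed, a restart never replays messages that were already committed, which is why the first 100 messages do not appear twice.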
If you want to process all the messages again, unpublish and then republish the connection.