Unrecoverable exception from producer send callback: RecordTooLargeException


We are using the MongoDB Kafka connector on an EC2 instance, running Kafka Connect in standalone mode and connecting to a Confluent instance on AWS.

When we try to create a source connector that sends data from a collection to a Kafka topic in AWS, it throws the following exception: 'Caused by: org.apache.kafka.common.errors.RecordTooLargeException: The message is 1066826 bytes when serialized which is larger than 1048576, which is the value of the max.request.size configuration.'

We have already set max.message.bytes on the Kafka topic to 8388608 bytes but are still facing the exception. Are we missing any config? I have read that we need to set the max.request.size config to an appropriate value; in this scenario, where can we make that change in AWS Kafka?

If you are using Confluent Cloud, have you considered using the Atlas source connector?


With regard to your issue, there are two aspects to consider:

  1. If you want to support records larger than the default settings allow, you have to make config changes not only on the broker/topic side but also for the producer that sends these records.

  2. In your case we are talking about a Kafka source connector scenario, which means there are default Kafka Connect worker settings in place for the underlying producer configuration. You can provide overrides for these settings, e.g. to change max.request.size accordingly.

The official docs state the following about this: “For configuration of the producers used by Kafka source tasks and the consumers used by Kafka sink tasks, the same parameters can be used but need to be prefixed with producer. and consumer. respectively.”
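As a sketch of what that prefixed override could look like in your standalone setup: the worker properties file you pass to connect-standalone (the filename and broker endpoint below are placeholders, and 8388608 is simply chosen to match the topic's max.message.bytes you already configured):

```properties
# connect-standalone.properties (worker config) -- illustrative values
bootstrap.servers=<your-broker-endpoint>

# Override for the producer used by source tasks:
# raises the 1 MB default (1048576) that triggered the exception.
producer.max.request.size=8388608
```

Since this is a worker-level setting, it applies to the producers of all source connectors running on that worker.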

Using this approach you should be able to reconfigure your setup to apply the override and thereby get rid of the RecordTooLargeException.
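If you would rather scope the override to just this one connector instead of the whole worker, newer Connect versions (2.3+) also support per-connector overrides via the producer.override. prefix, provided the worker sets connector.client.config.override.policy=All. A hypothetical connector config (name and connection.uri are placeholders):

```json
{
  "name": "mongo-source",
  "config": {
    "connector.class": "com.mongodb.kafka.connect.MongoSourceConnector",
    "connection.uri": "<your-mongodb-uri>",
    "producer.override.max.request.size": "8388608"
  }
}
```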

Hope this helps!
