Resolving Apache Kafka Producer 'Topic not present in metadata' Error: Dependency Management and Configuration Analysis

Dec 04, 2025 · Programming

Keywords: Apache Kafka | Java Producer | Topic Metadata Error | Jackson Dependency | Timeout Configuration

Abstract: This article provides an in-depth analysis of the common TimeoutException: Topic not present in metadata after 60000 ms error in Apache Kafka Java producers. By examining Q&A data, it focuses on the core issue of missing jackson-databind dependency while integrating other factors like partition configuration, connection timeouts, and security protocols. Complete solutions and code examples are offered to help developers systematically diagnose and fix such Kafka integration issues.

In Apache Kafka Java client development, improper producer configuration often leads to the org.apache.kafka.common.errors.TimeoutException: Topic not present in metadata after 60000 ms error. This exception indicates that the producer cannot retrieve metadata for the target topic from the Kafka cluster within the specified timeout period. Even if the topic is accessible via command-line tools, Java clients may fail due to configuration or dependency issues.

Core Issue: Missing Jackson Dependency

Based on the accepted answer in the source Q&A, the most common cause of this error is the absence of the jackson-databind library. Kafka clients use Jackson internally for serialization, and when this dependency is missing, metadata retrieval fails. In the Maven setup, kafka-clients marks jackson-databind with provided scope, meaning it expects the runtime environment (for example, an application container) to supply the dependency. A standalone Java application must therefore declare it explicitly.

Here is a corrected Maven dependency configuration example:

<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>2.6.0</version>
</dependency>
<dependency>
    <groupId>com.fasterxml.jackson.core</groupId>
    <artifactId>jackson-databind</artifactId>
    <version>2.12.3</version>
</dependency>

No changes to the producer code itself are required; once the dependencies are complete, the original producer code works as written. For reference, here is a basic producer implementation:

import java.util.Properties;
import org.apache.kafka.clients.producer.*;

public class FixedKafkaProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                  "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                  "org.apache.kafka.common.serialization.StringSerializer");

        // try-with-resources closes the producer, flushing any buffered records
        try (Producer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("testtopic2", "key", "message"));
        }
    }
}
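Whether jackson-databind actually made it onto the runtime classpath can be verified with a plain `Class.forName` probe before the producer is even created. This diagnostic sketch uses only the JDK; the Jackson class name probed in `main` is the only Kafka-ecosystem detail, and the helper name is our own:

```java
public class ClasspathProbe {
    /** Returns true if the named class can be loaded from the current classpath. */
    static boolean isOnClasspath(String className) {
        try {
            Class.forName(className);
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // Probe the Jackson entry point expected on the producer's classpath.
        System.out.println("jackson-databind present: "
                + isOnClasspath("com.fasterxml.jackson.databind.ObjectMapper"));
    }
}
```

Running this inside the same application that creates the producer removes any guesswork about which classpath the JVM actually resolved.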

Other Potential Causes and Solutions

Beyond dependency issues, several factors can trigger the same timeout error. The following analysis, based on supplementary answers, offers comprehensive diagnostic approaches.

Partition Configuration Error

If the producer attempts to send messages to a non-existent partition, it can also cause this exception. For example, a topic has only one partition (partition 0), but the code specifies partition 1. The fix is to ensure partition parameters are within valid ranges or use default partitioning strategies.

// Incorrect: partition 1 does not exist on a single-partition topic
ProducerRecord<String, String> badRecord = new ProducerRecord<>("testtopic2", 1, "key", "value");
// Correct: omit the partition and let the default partitioner assign one
ProducerRecord<String, String> record = new ProducerRecord<>("testtopic2", "key", "value");
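When no partition is given, the default partitioner maps a hash of the key onto the partition count, so the computed index is always in range. The sketch below mimics that hash-modulo idea using `String.hashCode` instead of Kafka's actual murmur2 hash (a deliberate simplification) to make the contrast with a hard-coded partition concrete:

```java
public class PartitionSketch {
    /** Hash-modulo partition choice; the bitmask keeps the hash non-negative. */
    static int partitionFor(String key, int numPartitions) {
        return (key.hashCode() & 0x7fffffff) % numPartitions;
    }

    public static void main(String[] args) {
        int numPartitions = 1; // e.g. testtopic2 created with a single partition
        // The computed index can never exceed numPartitions - 1, whereas a
        // hard-coded partition 1 would target a partition that does not exist.
        System.out.println("chosen partition: " + partitionFor("key", numPartitions));
    }
}
```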

Connection and Timeout Settings

The max.block.ms parameter bounds how long the producer's send() call may block while waiting for metadata, with a default of 60000 milliseconds. If the Kafka instance is unreachable or the bootstrap URL is wrong, the exception is thrown once this timeout expires. Temporarily lowering the value makes connection problems surface faster during diagnosis.

props.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, 30000); // Set to 30 seconds

If the error persists with a shortened timeout, check the bootstrap.servers configuration and Kafka service status.
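Broker reachability can also be checked from Java directly, without involving the producer at all. This sketch opens a plain TCP connection to the bootstrap address with a short timeout; the host and port in `main` are placeholders for your own bootstrap.servers value:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class BrokerReachability {
    /** Returns true if a TCP connection to host:port succeeds within timeoutMs. */
    static boolean isReachable(String host, int port, int timeoutMs) {
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), timeoutMs);
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // Same host:port as bootstrap.servers; a false result points to a
        // network or service problem rather than a metadata problem.
        System.out.println(isReachable("localhost", 9092, 2000));
    }
}
```

This distinguishes a dead or unreachable broker from a reachable broker that is rejecting the client for configuration reasons.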

Security Protocol Mismatch

In clusters with SSL or SASL enabled, incorrect client security protocol configuration may lead to metadata retrieval failures. Ensure security.protocol matches the cluster settings, for example:

props.put("security.protocol", "SSL"); // Or "PLAINTEXT" based on environment
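A working SSL client setup usually also needs a truststore so the client can verify the broker's certificate. The fragment below is a hedged sketch: the path and password are placeholders, and the exact keys required should be confirmed against your cluster's security configuration:

```properties
security.protocol=SSL
ssl.truststore.location=/path/to/client.truststore.jks
ssl.truststore.password=changeit
# For SASL_SSL clusters, additionally set sasl.mechanism and sasl.jaas.config.
```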

Environment and Restart Strategies

In local development environments, an abnormal ZooKeeper or Kafka service state can cause this issue. Restarting the services or cleaning their data directories may clear a transient fault, but this should be a last resort after the diagnostic steps above.

Systematic Debugging Recommendations

When facing such errors, a layered debugging approach is recommended:

  1. Verify Dependencies: First, check if critical dependencies like jackson-databind are complete.
  2. Inspect Configuration: Confirm parameters such as bootstrap.servers, partitions, and security protocols are correct.
  3. Test Connectivity: Use telnet or Kafka command-line tools to verify network reachability.
  4. Review Logs: Enable Kafka client debug logging to obtain detailed error information.
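For step 4, the client's DEBUG output can be enabled with a log4j configuration such as the following (assuming the application logs through log4j 1.x; adjust equivalently for logback or log4j2 setups):

```properties
log4j.rootLogger=INFO, stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=[%d] %p %m (%c)%n
# Verbose client-side detail, including metadata fetches and broker connections
log4j.logger.org.apache.kafka.clients=DEBUG
```

The DEBUG output shows each metadata request and the broker responses, which usually pinpoints whether the topic is unknown, the partition is invalid, or the connection never succeeds.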

By understanding Kafka metadata retrieval mechanisms and client dependency management, developers can effectively prevent and resolve Topic not present in metadata errors, ensuring stable operation in production environments.

Copyright Notice: All rights in this article are reserved by the operators of DevGex. Reasonable sharing and citation are welcome; any reproduction, excerpting, or re-publication without prior permission is prohibited.