Producer automatic topic creation is disabled when there are multiple topics #3058
Comments
@sonalys I'm not sure I understand your issue, can you elaborate a little more? The producer (and consumer) will call RefreshMetadata with the specific topic(s) you're producing or consuming from, going down the …
I'm sorry if I interpreted it wrong, but I looked through all the references to the configuration. I don't know your codebase at all, so I presumed this is the place where it's used for creating topics automatically. The behavior I explained remains the same: whenever using async producers, or a sync producer with multiple topics, the topics are not automatically created.
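For reference, the setting being discussed is sarama's `Config.Metadata.AllowAutoTopicCreation`, which is true by default. A minimal sketch, with a placeholder broker address and topic name, of setting it explicitly and refreshing metadata for a specific topic, which is the call mentioned above:

```go
package main

import (
    "log"

    "github.com/IBM/sarama"
)

func main() {
    cfg := sarama.NewConfig()
    // Explicitly enable broker-side auto-creation for topics that sarama
    // requests metadata for (this is already the default value).
    cfg.Metadata.AllowAutoTopicCreation = true

    // Broker address and topic name are placeholders for this sketch.
    client, err := sarama.NewClient([]string{"localhost:9092"}, cfg)
    if err != nil {
        log.Fatal(err)
    }
    defer client.Close()

    // Refreshing metadata for named topics is the call the producer and
    // consumer make for the topics they produce to or consume from.
    if err := client.RefreshMetadata("example-topic"); err != nil {
        log.Fatal(err)
    }
}
```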
I'm not sure what you mean by "multiple topics" when it comes to the producer, as you specify the topic in each individual message that you want to send and they're batched up appropriately. For example, if I do something simple like this with the sync producer and an empty Kafka cluster:

```go
topic := "foobar"
for i := 1; i < 4; i++ {
    producer.Input() <- &sarama.ProducerMessage{
        Topic:     topic + "-" + strconv.Itoa(i),
        Value:     sarama.StringEncoder(time.Now().String()),
        Headers:   []sarama.RecordHeader{},
        Timestamp: time.Now(),
    }
    message := <-producer.Successes()
    log.Printf(
        "Message produced: value = %s, timestamp = %v, topic = %s, partition = %d",
        message.Value, message.Timestamp, message.Topic, message.Partition)
}
```

Consumers do have the concept of being given a list of topics to subscribe to, but I see the same behaviour there. If I bring up an empty Kafka cluster and point a consumer group at it with three topics, I see all three being autocreated successfully:

```go
topic := "barfoo"
group.Consume(ctx, []string{topic + "-1", topic + "-2", topic + "-3"}, &consumer)
```
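The `group.Consume` call above presumes a consumer group and handler already set up. A minimal sketch of that scaffolding, with a placeholder broker address, group ID, and handler type (any `sarama.ConsumerGroupHandler` implementation would do):

```go
package main

import (
    "context"
    "log"

    "github.com/IBM/sarama"
)

// minimalHandler is a placeholder sarama.ConsumerGroupHandler that simply
// drains messages and marks them as consumed.
type minimalHandler struct{}

func (minimalHandler) Setup(sarama.ConsumerGroupSession) error   { return nil }
func (minimalHandler) Cleanup(sarama.ConsumerGroupSession) error { return nil }
func (minimalHandler) ConsumeClaim(sess sarama.ConsumerGroupSession, claim sarama.ConsumerGroupClaim) error {
    for msg := range claim.Messages() {
        sess.MarkMessage(msg, "")
    }
    return nil
}

func main() {
    cfg := sarama.NewConfig()

    // Placeholder broker address and group ID.
    group, err := sarama.NewConsumerGroup([]string{"localhost:9092"}, "barfoo-group", cfg)
    if err != nil {
        log.Fatal(err)
    }
    defer group.Close()

    var consumer minimalHandler
    topic := "barfoo"
    ctx := context.Background()
    // In real code Consume is normally called in a loop so the session is
    // re-established after rebalances.
    if err := group.Consume(ctx, []string{topic + "-1", topic + "-2", topic + "-3"}, &consumer); err != nil {
        log.Fatal(err)
    }
}
```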
Yes, that's what I meant, sorry for my English. Multiple topics -> a batch of messages with different topics. I'm not experiencing the same behavior as you are: in my case I'm getting the `ErrUnknownTopicOrPartition` error. I'm running it with `confluentinc/confluent-local:7.5.0` and default configurations.
Seems to work fine for me:

- Started up "confluent-local" exposing port 9092
- Ran my code sample from above using …
- From the confluent-local logs we can see it forwarding the auto-create request to the active controller: …
# Producer Code

```go
producer, err := sarama.NewAsyncProducer(config.Config.BrokersAddr, saramaConfig)
if err != nil {
    return nil, err
}

go func() {
    for err := range producer.Errors() {
        log.Error(context.Background(), "Failed to produce message", zap.Error(err))
    }
}()

msgs := make([]*sarama.ProducerMessage, 0, len(evts))
input := p.producer.Input()
for _, evt := range evts {
    msg := evt.toMessage()
    otel.GetTextMapPropagator().Inject(ctx, otelsarama.NewProducerMessageCarrier(msg))
    msgs = append(msgs, msg)
    input <- msg
}
```
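As an aside on the async API rather than a confirmed fix: with the code above only the `Errors()` channel reports outcomes, because `Producer.Return.Successes` is off by default. A minimal, self-contained sketch (placeholder broker address and topic names) of sending to several topics and waiting for one result per message, so that topic-level errors such as `ErrUnknownTopicOrPartition` surface right away:

```go
package main

import (
    "log"

    "github.com/IBM/sarama"
)

func main() {
    cfg := sarama.NewConfig()
    // Needed so that every message yields either a Success or an Error
    // that we can wait on below.
    cfg.Producer.Return.Successes = true
    cfg.Producer.Return.Errors = true

    // Broker address and topic names are placeholders for this sketch.
    producer, err := sarama.NewAsyncProducer([]string{"localhost:9092"}, cfg)
    if err != nil {
        log.Fatal(err)
    }
    defer producer.AsyncClose()

    topics := []string{"events-a", "events-b", "events-c"}
    for _, topic := range topics {
        producer.Input() <- &sarama.ProducerMessage{
            Topic: topic,
            Value: sarama.StringEncoder("hello"),
        }
    }

    // Wait for one outcome per message so per-topic failures are visible.
    for range topics {
        select {
        case msg := <-producer.Successes():
            log.Printf("produced to %s, partition %d", msg.Topic, msg.Partition)
        case err := <-producer.Errors():
            log.Printf("produce failed: %v", err)
        }
    }
}
```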
# Consumer Code

```go
consumer, err := sarama.NewConsumerGroup(config.Config.BrokersAddr, config.Group, saramaConfig)
if err != nil {
    return nil, fmt.Errorf("failed to create consumer group: %w", err)
}

done := make(chan struct{})
go consume(ctx, consumer, producer, config, done)

// inside consume(...):
for {
    if err := consumerGroup.Consume(ctx, topics, handler); err != nil {
        if !errors.Is(err, sarama.ErrClosedConsumerGroup) {
            log.Error(ctx, "error from consumer group", zap.Error(err))
        }
        return
    }
    if ctx.Err() != nil {
        return
    }
}
```
# Test Code

```go
t.Run("should produce and consume multiple topics", func(t *testing.T) {
    evt1 := newTestEvent()
    evt2 := newTestEvent()
    group := uuid.NewString()

    wg := sync.WaitGroup{}
    wg.Add(2)

    handlers := map[string]EventHandler{
        evt1.GetTopic(): func(ctx context.Context, gotEvt EventReader) ([]Event, error) {
            defer wg.Done()
            assert.Equal(t, evt1, gotEvt)
            return nil, nil
        },
        evt2.GetTopic(): func(ctx context.Context, gotEvt EventReader) ([]Event, error) {
            defer wg.Done()
            assert.Equal(t, evt2, gotEvt)
            return nil, nil
        },
    }

    cfg := ConsumerConfig{
        Group: group,
        Config: Config{
            BrokersAddr: brokers,
        },
        Handlers: handlers,
    }
    _, err := StartNewConsumerGroup(ctx, cfg)
    require.NoError(t, err)

    producer, err := NewProducer(ProducerConfig{
        Config: Config{
            BrokersAddr: brokers,
        },
    })
    require.NoError(t, err)
    defer producer.Shutdown()

    producer.Produce(ctx, evt1, evt2)
    wg.Wait()
})
```

I get the following from the test:
If I change the producer code to be synchronous:

```go
producer, err := sarama.NewSyncProducer(config.Config.BrokersAddr, saramaConfig)
if err != nil {
    return nil, err
}

func (p *Producer) Produce(ctx context.Context, evts ...Event) error {
    var errs []error
    for _, evt := range evts {
        msg := evt.toMessage()
        otel.GetTextMapPropagator().Inject(ctx, otelsarama.NewProducerMessageCarrier(msg))
        if _, _, err := p.producer.SendMessage(msg); err != nil {
            errs = append(errs, err)
        }
    }
    if len(errs) > 0 {
        return errors.Join(errs...)
    }
    return nil
}
```

It works normally. I'm using this configuration:

```go
config := sarama.NewConfig()
if fromConfig.DialTimeout > 0 {
    config.Net.DialTimeout = fromConfig.DialTimeout
}
if fromConfig.SASL != nil {
    fromConfig.SASL.ApplySASLSarama(config)
}
if fromConfig.TLS != nil {
    fromConfig.TLS.ApplyTLSSarama(config)
}
config.Consumer.Offsets.Initial = sarama.OffsetOldest
```
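For reference against the configuration above (not a suggested fix): it leaves the metadata and producer-return options at sarama's defaults. A sketch restating those defaults, where every assignment matches what `sarama.NewConfig()` already returns:

```go
package example

import (
    "time"

    "github.com/IBM/sarama"
)

// configDefaultsSketch restates the sarama defaults most relevant to this
// thread; it is documentation in code form rather than a change.
func configDefaultsSketch() *sarama.Config {
    cfg := sarama.NewConfig()

    // Metadata refreshes that name specific topics may ask the broker to
    // auto-create them (still subject to the broker's own
    // auto.create.topics.enable setting).
    cfg.Metadata.AllowAutoTopicCreation = true

    // Periodic background refreshes fetch metadata for the whole cluster.
    cfg.Metadata.Full = true
    cfg.Metadata.RefreshFrequency = 10 * time.Minute

    // For the async producer only Errors() reports outcomes by default;
    // the SyncProducer additionally requires Producer.Return.Successes = true.
    cfg.Producer.Return.Errors = true

    return cfg
}
```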
Description
When using the producer with `allowAutoTopicCreation = true`, the func `tryRefreshMetadata` in `client.go` is behaving strangely. This behavior reproduces for both sync and async producers, so if you are asynchronously sending messages to different topics, they won't respect the `allowAutoTopicCreation` configuration. I don't know if this is a technical limitation or not, but this behavior is totally hidden from the developer.
Versions
github.com/IBM/sarama v1.44.0