How to write data across namespaces in Big Bang

I have a developer who needs to write data from a Kafka topic to Loki. To accomplish this, he is installing his own Alloy sidecar onto the Kafka broker, which lives in its own non-Big Bang namespace. He then wants the Alloy sidecar to forward the logs it scrapes from the Kafka topic to the Big Bang Loki deployment.* The question is how to allow the Alloy deployed in the customer namespace to write into the logging namespace. I am including the Alloy config below, along with a sketch of the endpoint I think it needs.

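// Only the exported relabel rules are consumed below; the empty targets
// list is just a required placeholder. The rules map Kafka metadata
// (topic, partition, consumer group) onto Loki labels.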
discovery.relabel "kafka" {
  targets = []
  rule {
    source_labels = ["__meta_kafka_topic"]
    target_label  = "topic"
  }
  rule {
    source_labels = ["__meta_kafka_partition"]
    target_label  = "partition"
  }
  rule {
    source_labels = ["__meta_kafka_group_id"]
    target_label  = "group"
  }
}
// Consume the Kafka topic and forward each record to Loki as a log line.
loki.source.kafka "kafka" {
  brokers  = ["kafka-service.ncct.svc.cluster.local:8092"]
  topics   = ["fade-ncct-data"]
  // group_id, assignor, and version are optional, and empty strings may
  // fail validation, so they are omitted here to fall back to the defaults.
  authentication { } // defaults to no authentication
  labels = {
    host = "alloy",
    job  = "kafka",
  }
  forward_to    = [loki.write.default.receiver]
  relabel_rules = discovery.relabel.kafka.rules
}
// Push the collected logs to Loki.
loki.write "default" {
  endpoint {
    url = "http://..."
  }
  external_labels = {}
}
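
As far as I understand, the write itself can cross namespaces over plain cluster DNS; what tends to block it in Big Bang are the Istio sidecars and the default-deny NetworkPolicies in the logging namespace. Here is a minimal sketch of the endpoint I think we would target, with the caveat that the service name is my guess and should be verified (e.g. with kubectl get svc -n logging), since the actual gateway service depends on how Loki is deployed:

loki.write "default" {
  endpoint {
    // Assumed service name for the Big Bang Loki gateway in the logging
    // namespace; verify against the actual services in your cluster.
    url = "http://loki-gateway.logging.svc.cluster.local/loki/api/v1/push"
  }
  external_labels = {}
}

On top of the URL, I assume the logging namespace would also need a NetworkPolicy (and, if the namespace is in the Istio mesh, an authorization policy) that permits ingress to Loki from the customer namespace.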

*The reasoning for doing it this way is that the customer deployment (where the Kafka broker lives) is separate from the Big Bang deployment. If we configure the Big Bang Alloy for the Kafka topic instead, there is apparently a problem because the topic does not exist until the customer software is deployed later. If there is a simple alternative solution, I am open to that; one idea is sketched below.
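
For example, if the topic could simply be pre-created when the broker is deployed (Kafka topics can exist before any producer writes to them), I assume the Big Bang-managed Alloy could consume across namespaces in the other direction, roughly like this (again, the names are taken from the config above and may need adjusting):

// Sketch: Big Bang-side Alloy consuming the customer topic, assuming the
// topic is pre-created at broker deployment time.
loki.source.kafka "customer" {
  // The customer broker is reachable via its cross-namespace service DNS name.
  brokers    = ["kafka-service.ncct.svc.cluster.local:8092"]
  topics     = ["fade-ncct-data"]
  labels     = {
    host = "alloy",
    job  = "kafka",
  }
  forward_to = [loki.write.default.receiver]
}

That would keep the Loki write inside the logging namespace and move the cross-namespace hop to the Kafka side, where the broker service is already exposed.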