Collector not respecting resolution order defined by SemConv #43919

@thefirstofthe300

Description

Component(s)

No response

What happened?

Description

I am attempting to set the service.namespace resource attribute for an application via pod annotations. The pod spec has the following defined:

apiVersion: v1
kind: Pod
metadata:
  annotations:
    ad.datadoghq.com/tags: '{"app":"control","service.name":"control-plane","service.namespace":"agent"}'
    cluster-autoscaler.kubernetes.io/safe-to-evict: "true"
    instrumentation.opentelemetry.io/inject-java: observability-system/java-disable-tracing
    resource.opentelemetry.io/service.name: control-plane
    resource.opentelemetry.io/service.namespace: agent

Per the OTel semantic conventions, service.namespace should resolve to agent from the resource.opentelemetry.io/service.namespace annotation. However, when service.namespace is listed in the extract.metadata section of the k8sattributes processor config, the value derived from k8s.namespace.name overrides the annotation value. If I remove service.namespace from extract.metadata, the correct value is set.
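For reference, a minimal sketch of the workaround described above: omitting service.namespace from extract.metadata so the annotation-derived value survives. Field names follow the k8sattributes processor config; the resolution behavior is what I observed, not documented precedence.

```yaml
processors:
  k8sattributes:
    extract:
      # Reads resource.opentelemetry.io/* pod annotations (e.g. service.namespace)
      otel_annotations: true
      metadata:
        - k8s.namespace.name
        - service.name
        - service.version
        # service.namespace intentionally omitted: when listed here, the value
        # derived from k8s.namespace.name overrode the annotation in my tests
```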

Collector version

v0.137.0

Environment information

No response

OpenTelemetry Collector configuration

k8sattributes:
  filter:
    node_from_env_var: K8S_NODE_NAME
  pod_association:
    - sources:
        # This rule will use the 'k8s.pod.name' attribute already present on the telemetry data to find the matching pod
        - from: resource_attribute
          name: k8s.pod.name
    - sources:
        # This rule will use the 'k8s.pod.uid' attribute already present on the telemetry data to find the matching pod
        - from: resource_attribute
          name: k8s.pod.uid
    - sources:
        # This rule will use the IP from the incoming connection from which the resource is received, and find the matching pod, based on the 'pod.status.podIP' of the observed pods
        - from: connection
  extract:
    otel_annotations: true
    metadata:
      - k8s.pod.name
      - k8s.deployment.name
      - k8s.node.name
      - k8s.namespace.name
      - k8s.pod.start_time
      - k8s.replicaset.name
      - k8s.daemonset.name
      - k8s.job.name
      - k8s.cronjob.name
      - k8s.statefulset.name
      - k8s.container.name
      - container.image.name
      - container.image.tag
      - service.name
      - service.namespace
      - service.version
    labels:
      - tag_name: kube_app_name
        key: app.kubernetes.io/name
        from: pod
      - tag_name: kube_app_instance
        key: app.kubernetes.io/instance
        from: pod
      - tag_name: kube_app_version
        key: app.kubernetes.io/version
        from: pod
      - tag_name: kube_app_component
        key: app.kubernetes.io/component
        from: pod
      - tag_name: kube_app_part_of
        key: app.kubernetes.io/part-of
        from: pod
      - tag_name: kube_app_managed_by
        key: app.kubernetes.io/managed-by
        from: pod

Log output

No response
Additional context

No response

