@@ -19,7 +19,7 @@ Many LLM platforms support the OpenAI SDK. This means systems such as the follow
* - Name
- gen_ai.system
* - `Azure OpenAI <https://github.com/openai/openai-python?tab=readme-ov-file#microsoft-azure-openai>`_
- ``az.ai.openai``
- ``azure.ai.openai``
* - `Gemini <https://developers.googleblog.com/en/gemini-is-now-accessible-from-the-openai-library/>`_
- ``gemini``
* - `Perplexity <https://docs.perplexity.ai/api-reference/chat-completions>`_
@@ -80,7 +80,26 @@ Enabling message content

Message content such as the contents of the prompt, completion, function arguments and return values
are not captured by default. To capture message content as log events, set the environment variable
`OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT` to `true`.
``OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT`` to one of the following values (a usage sketch follows the list):

- ``true`` - Legacy. Used to enable content capturing on ``gen_ai.{role}.message`` and ``gen_ai.choice`` events when
`latest experimental features <#enabling-the-latest-experimental-features>`_ are *not* enabled.
- ``span`` - Used to enable content capturing on *span* attributes when
`latest experimental features <#enabling-the-latest-experimental-features>`_ are enabled.
- ``event`` - Used to enable content capturing on *event* attributes when
`latest experimental features <#enabling-the-latest-experimental-features>`_ are enabled.
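
For example, a minimal sketch (assuming the ``opentelemetry-instrumentation-openai-v2``
package is installed; this is only illustrative, the variables are usually set in the shell or an ``.env`` file):

.. code-block:: python

    import os

    from opentelemetry.instrumentation.openai_v2 import OpenAIInstrumentor

    # Capture prompt/completion content on span attributes. This requires the
    # latest experimental features (see the next section); keep "true" for the
    # legacy event-based behavior instead.
    os.environ["OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT"] = "span"
    os.environ["OTEL_SEMCONV_STABILITY_OPT_IN"] = "gen_ai_latest_experimental"

    # Instrument after the variables are set so the configured mode is picked up.
    OpenAIInstrumentor().instrument()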

Enabling the latest experimental features
***********************************************

To enable the latest experimental features, set the environment variable
``OTEL_SEMCONV_STABILITY_OPT_IN`` to ``gen_ai_latest_experimental``. Or, if you use
``OTEL_SEMCONV_STABILITY_OPT_IN`` to enable other features, append ``,gen_ai_latest_experimental`` to its value.

Without this setting, OpenAI instrumentation aligns with `Semantic Conventions v1.28.0 <https://github.com/open-telemetry/semantic-conventions/tree/v1.28.0/docs/gen-ai>`_
and does not capture additional details introduced in later versions.
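
These variables are normally set in the shell or an ``.env`` file; as a purely illustrative
sketch, appending the flag from Python without clobbering existing opt-ins could look like this:

.. code-block:: python

    import os

    existing = os.environ.get("OTEL_SEMCONV_STABILITY_OPT_IN", "")
    os.environ["OTEL_SEMCONV_STABILITY_OPT_IN"] = (
        f"{existing},gen_ai_latest_experimental"
        if existing
        else "gen_ai_latest_experimental"
    )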

.. note:: Generative AI semantic conventions are still evolving. The latest experimental features will introduce breaking changes in future releases.

Uninstrument
************
@@ -12,5 +12,19 @@ OPENAI_API_KEY=sk-YOUR_API_KEY

OTEL_SERVICE_NAME=opentelemetry-python-openai

# Change to 'false' to hide prompt and completion content
OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT=true
# Remove or change to 'none' to hide prompt and completion content
# Possible values (case insensitive):
# - `span` - record content on span attributes
# - `event` - record content on event attributes
# - `true` - only used for backward compatibility when
# `gen_ai_latest_experimental` is not set in the
#   `OTEL_SEMCONV_STABILITY_OPT_IN` environment variable.
# - everything else - don't record content on any signal
OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT=span


Optionally - introduce a new OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT_MODE variable, which will include a list of supported modes - i.e.

  • span
  • event
  • span,event - sometimes preferred for development/testing.

Reusing OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT seems fine too.

If we decide to support span,event, it will require either introducing ContentCapturingMode.SPAN_AND_EVENT or supporting a list of capturing modes, perhaps with helper methods like is_capturing_span() and is_capturing_event().
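
A rough sketch of what the list-of-modes idea could look like; the ContentCapturingMode flag and the is_capturing_span()/is_capturing_event() helpers below are hypothetical and only illustrate the suggestion above, not existing code:

```python
import enum
import os


class ContentCapturingMode(enum.Flag):
    # Hypothetical flag enum so that "span,event" maps to SPAN | EVENT.
    NONE = 0
    SPAN = enum.auto()
    EVENT = enum.auto()


def _capture_modes() -> ContentCapturingMode:
    raw = os.environ.get("OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT", "")
    mode = ContentCapturingMode.NONE
    for part in raw.lower().split(","):
        if part.strip() == "span":
            mode |= ContentCapturingMode.SPAN
        elif part.strip() == "event":
            mode |= ContentCapturingMode.EVENT
    return mode


def is_capturing_span() -> bool:
    return ContentCapturingMode.SPAN in _capture_modes()


def is_capturing_event() -> bool:
    return ContentCapturingMode.EVENT in _capture_modes()
```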


For now we have two config items in the Spring AI prototype:

  • OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT - (for compatibility) decide whether we capture the chat messages.
    • false - (by default) don't capture any chat messages
    • true - capture all chat messages, specific behaviors depend on additional configuration
  • OTEL_INSTRUMENTATION_GENAI_MESSAGE_CONTENT_CAPTURE_STRATEGY - decide how we capture the chat messages.
    • span-attributes - (by default) capturing them as span attributes
    • event - capturing them as an event
    • event, span-attributes


@Cirilla-zmh, thanks for sharing!

As a long-time Java developer - can't resist the comment VERY_JAVA_STYLE_VERBOSE_ENVIRONMENT_VARIABLE_NAMING 🤣

But seriously

  1. 👍 span-attributes for clarity.
  2. 👍 a list of modes "event, span-attributes", which is quite popular for testing.
  3. 👎 different word ordering in CAPTURE_MESSAGE_CONTENT and MESSAGE_CONTENT_CAPTURE_STRATEGY in env naming. I would prefer to reuse the OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT env var.

cc: @lmolkova


@zhirafovod

As a long-time Java developer - can't resist the comment VERY_JAVA_STYLE_VERBOSE_ENVIRONMENT_VARIABLE_NAMING 🤣

Ahh, that's it...🤣 But it comes from the requirement of multi-language compatibility and isolation; it's clearer when translated to a property key: otel.instrumentation.genai.capture_messages_content.

For compatibility, I'm inclined to add another config key to control the behavior of message capturing; the naming could be discussed again.


# Enables latest and greatest features available in GenAI semantic conventions.
# Note: since conventions are still in development, using this flag will
# likely result in breaking changes.
#
# Comment out if you want to use semantic conventions of version 1.36.0.
OTEL_SEMCONV_STABILITY_OPT_IN=gen_ai_latest_experimental
@@ -11,7 +11,8 @@ your OpenAI requests.

Note: the `.env <.env>`_ file configures additional environment variables (a short usage sketch follows the list):

- ``OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT=true`` configures OpenAI instrumentation to capture prompt and completion contents on events.
- ``OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT=span`` configures OpenAI instrumentation to capture prompt and completion contents on *span* attributes.
- ``OTEL_SEMCONV_STABILITY_OPT_IN=gen_ai_latest_experimental`` enables latest experimental features.
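
For reference, a minimal sketch of the instrumented call, assuming the ``openai`` and
``opentelemetry-instrumentation-openai-v2`` packages are installed and the variables above
are loaded into the environment (the model name is only an example):

.. code-block:: python

    from openai import OpenAI

    from opentelemetry.instrumentation.openai_v2 import OpenAIInstrumentor

    # Reads OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT and
    # OTEL_SEMCONV_STABILITY_OPT_IN at instrumentation time.
    OpenAIInstrumentor().instrument()

    client = OpenAI()
    client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "Write a short poem on OpenTelemetry."}],
    )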

Setup
-----
@@ -18,5 +18,19 @@ OTEL_PYTHON_LOGGING_AUTO_INSTRUMENTATION_ENABLED=true
# Uncomment if your OTLP endpoint doesn't support logs
# OTEL_LOGS_EXPORTER=console

# Change to 'false' to hide prompt and completion content
OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT=true
# Remove or change to 'none' to hide prompt and completion content
# Possible values (case insensitive):
# - `span` - record content on span attributes
# - `event` - record content on event attributes
# - `true` - only used for backward compatibility when
# `gen_ai_latest_experimental` is not set in the
#   `OTEL_SEMCONV_STABILITY_OPT_IN` environment variable.
# - everything else - don't record content on any signal
OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT=span

# Enables latest and greatest features available in GenAI semantic conventions.
# Note: since conventions are still in development, using this flag will
# likely result in breaking changes.
#
# Comment out if you want to use semantic conventions of version 1.36.0.
OTEL_SEMCONV_STABILITY_OPT_IN=gen_ai_latest_experimental
@@ -13,8 +13,9 @@ your OpenAI requests.
Note: the `.env <.env>`_ file configures additional environment variables:

- ``OTEL_PYTHON_LOGGING_AUTO_INSTRUMENTATION_ENABLED=true`` configures OpenTelemetry SDK to export logs and events.
- ``OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT=true`` configures OpenAI instrumentation to capture prompt and completion contents on events.
- ``OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT=span`` configures OpenAI instrumentation to capture prompt and completion contents on *span* attributes.
- ``OTEL_LOGS_EXPORTER=otlp`` specifies the exporter type.
- ``OTEL_SEMCONV_STABILITY_OPT_IN=gen_ai_latest_experimental`` enables latest experimental features.

Setup
-----
@@ -47,7 +47,10 @@
from opentelemetry._events import get_event_logger
from opentelemetry.instrumentation.instrumentor import BaseInstrumentor
from opentelemetry.instrumentation.openai_v2.package import _instruments
from opentelemetry.instrumentation.openai_v2.utils import is_content_enabled
from opentelemetry.instrumentation.openai_v2.utils import (
    get_content_mode,
    is_latest_experimental_enabled,
)
from opentelemetry.instrumentation.utils import unwrap
from opentelemetry.metrics import get_meter
from opentelemetry.semconv.schemas import Schemas
@@ -71,38 +74,47 @@ def _instrument(self, **kwargs):
            __name__,
            "",
            tracer_provider,
            schema_url=Schemas.V1_28_0.value,
            schema_url="https://opentelemetry.io/schemas/1.37.0", # TODO: Schemas.V1_37_0.value,

we probably need a telemetry test for the attributes in this telemetry.

        )
        event_logger_provider = kwargs.get("event_logger_provider")
        event_logger = get_event_logger(
            __name__,
            "",
            schema_url=Schemas.V1_28_0.value,
            schema_url="https://opentelemetry.io/schemas/1.37.0", # TODO: Schemas.V1_37_0.value,
            event_logger_provider=event_logger_provider,
        )
        meter_provider = kwargs.get("meter_provider")
        self._meter = get_meter(
            __name__,
            "",
            meter_provider,
            schema_url=Schemas.V1_28_0.value,
            schema_url="https://opentelemetry.io/schemas/1.37.0", # TODO: Schemas.V1_37_0.value,
        )

        instruments = Instruments(self._meter)

        latest_experimental_enabled = is_latest_experimental_enabled()
        wrap_function_wrapper(
            module="openai.resources.chat.completions",
            name="Completions.create",
            wrapper=chat_completions_create(
                tracer, event_logger, instruments, is_content_enabled()
                tracer,
                event_logger,
                instruments,
                get_content_mode(latest_experimental_enabled),
                latest_experimental_enabled,
            ),
        )

        wrap_function_wrapper(
            module="openai.resources.chat.completions",
            name="AsyncCompletions.create",
            wrapper=async_chat_completions_create(
                tracer, event_logger, instruments, is_content_enabled()
                tracer,
                event_logger,
                instruments,
                get_content_mode(latest_experimental_enabled),
                latest_experimental_enabled,
            ),
        )
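
For context, a hedged sketch of what the new get_content_mode() and is_latest_experimental_enabled() helpers might look like; the real implementations live in utils.py (not shown in this diff) and may differ, for example by returning an enum rather than a string:

```python
import os


def is_latest_experimental_enabled() -> bool:
    # True when "gen_ai_latest_experimental" is present in the opt-in list.
    opt_in = os.environ.get("OTEL_SEMCONV_STABILITY_OPT_IN", "")
    return "gen_ai_latest_experimental" in (
        value.strip() for value in opt_in.split(",")
    )


def get_content_mode(latest_experimental_enabled: bool) -> str:
    # Resolve the capture mode from the environment: "span" or "event" when
    # the latest experimental features are enabled; the legacy "true" value
    # (mapped here to event-based capture) otherwise; "none" for anything else.
    raw = os.environ.get(
        "OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT", ""
    ).strip().lower()
    if latest_experimental_enabled:
        return raw if raw in ("span", "event") else "none"
    return "event" if raw == "true" else "none"
```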
