Core API

ldai.client

class ldai.client.AIConfig(enabled: bool | None = None, model: ldai.client.ModelConfig | None = None, messages: List[ldai.client.LDMessage] | None = None, provider: ldai.client.ProviderConfig | None = None)[source]
__init__(enabled: bool | None = None, model: ModelConfig | None = None, messages: List[LDMessage] | None = None, provider: ProviderConfig | None = None) → None
enabled: bool | None = None
messages: List[LDMessage] | None = None
model: ModelConfig | None = None
provider: ProviderConfig | None = None
to_dict() → dict[source]

Render the given default values as an AIConfig-compatible dictionary object.
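For illustration, the nested structure an AIConfig default value describes might look like the following plain-dict sketch, built from the fields documented above. The exact key names to_dict() emits may differ by SDK version; only the documented fields are mirrored here, and the model/provider names are placeholder values.

```python
# Plain-dict sketch of an AIConfig default value's nested structure,
# mirroring the documented fields (enabled, model, messages, provider).
default_value = {
    "enabled": True,
    "model": {"name": "gpt-4", "parameters": {"temperature": 0.7}},
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."}
    ],
    "provider": {"name": "openai"},
}

assert default_value["model"]["name"] == "gpt-4"
```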

class ldai.client.LDAIClient(client: LDClient)[source]

The LaunchDarkly AI SDK client object.

__init__(client: LDClient)[source]
config(key: str, context: Context, default_value: AIConfig, variables: Dict[str, Any] | None = None) → Tuple[AIConfig, LDAIConfigTracker][source]

Get the value of a model configuration.

Parameters:
  • key – The key of the model configuration.

  • context – The context to evaluate the model configuration in.

  • default_value – The default value of the model configuration.

  • variables – Additional variables for the model configuration.

Returns:

The value of the model configuration along with a tracker used for gathering metrics.
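
A typical call unpacks the returned tuple into the configuration and its tracker. The snippet below uses hypothetical minimal stand-ins for the client and tracker so it runs on its own; in real code, `ai_client` would be an LDAIClient built from an initialized LDClient, `context` a Context, and `default_value` an AIConfig.

```python
from typing import Any, Dict, Optional, Tuple

# Hypothetical stand-ins; the real types are ldai.client.LDAIClient and
# ldai.tracker.LDAIConfigTracker.
class StubTracker:
    def track_success(self) -> None:
        self.succeeded = True

class StubAIClient:
    def config(self, key: str, context: Dict[str, Any],
               default_value: Dict[str, Any],
               variables: Optional[Dict[str, Any]] = None
               ) -> Tuple[Dict[str, Any], StubTracker]:
        # The real client evaluates the configuration for the context;
        # this stub ignores context and variables and returns the default.
        return default_value, StubTracker()

ai_client = StubAIClient()
config, tracker = ai_client.config(
    "customer-support-bot",                       # hypothetical config key
    context={"kind": "user", "key": "user-123"},
    default_value={"enabled": False},
    variables={"username": "Sandy"},
)

if config["enabled"]:
    # ... invoke the model, then record the outcome via the tracker,
    # e.g. tracker.track_success() or tracker.track_error().
    tracker.track_success()
```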

class ldai.client.LDMessage(role: Literal['system', 'user', 'assistant'], content: str)[source]
__init__(role: Literal['system', 'user', 'assistant'], content: str) → None
content: str
role: Literal['system', 'user', 'assistant']
to_dict() → dict[source]

Render the given message as a dictionary object.
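
Since LDMessage is a simple dataclass, its dictionary form contains just the two fields above. A self-contained stand-in (that the real to_dict() renders the same two-key shape is an assumption based on the field list):

```python
from dataclasses import dataclass, asdict
from typing import Literal

@dataclass
class Message:  # stand-in mirroring ldai.client.LDMessage
    role: Literal["system", "user", "assistant"]
    content: str

    def to_dict(self) -> dict:
        # Render the message's two fields as a plain dictionary.
        return asdict(self)

msg = Message(role="system", content="You are a concise assistant.")
assert msg.to_dict() == {
    "role": "system",
    "content": "You are a concise assistant.",
}
```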

class ldai.client.ModelConfig(name: str, parameters: Dict[str, Any] | None = None, custom: Dict[str, Any] | None = None)[source]

Configuration related to the model.

__init__(name: str, parameters: Dict[str, Any] | None = None, custom: Dict[str, Any] | None = None)[source]
Parameters:
  • name – The name of the model.

  • parameters – Additional model-specific parameters.

  • custom – Additional customer-provided data.

get_custom(key: str) → Any[source]

Retrieve customer-provided data by key.

get_parameter(key: str) → Any[source]

Retrieve a model-specific parameter by key.

Requesting a named, typed attribute (e.g. name) delegates the call to the corresponding property.

property name: str

The name of the model.

to_dict() → dict[source]

Render the given model config as a dictionary object.
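
The lookup behavior can be sketched with a small stand-in. Per the delegation note above, get_parameter("name") resolves via the name property rather than the parameters dictionary; returning None for missing keys is an assumption of this sketch.

```python
from typing import Any, Dict, Optional

class Model:  # stand-in mirroring ldai.client.ModelConfig's lookups
    def __init__(self, name: str,
                 parameters: Optional[Dict[str, Any]] = None,
                 custom: Optional[Dict[str, Any]] = None):
        self._name = name
        self._parameters = parameters or {}
        self._custom = custom or {}

    @property
    def name(self) -> str:
        return self._name

    def get_parameter(self, key: str) -> Any:
        # "name" is delegated to the property, not the parameters dict.
        if key == "name":
            return self._name
        return self._parameters.get(key)

    def get_custom(self, key: str) -> Any:
        return self._custom.get(key)

model = Model("gpt-4",
              parameters={"temperature": 0.2},
              custom={"team": "support"})
assert model.get_parameter("temperature") == 0.2
assert model.get_parameter("name") == "gpt-4"
assert model.get_custom("team") == "support"
```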

class ldai.client.ProviderConfig(name: str)[source]

Configuration related to the provider.

__init__(name: str)[source]
property name: str

The name of the provider.

to_dict() → dict[source]

Render the given provider config as a dictionary object.

ldai.tracker

class ldai.tracker.FeedbackKind(value)[source]

Types of feedback that can be provided for AI operations.

Negative = 'negative'
Positive = 'positive'
class ldai.tracker.LDAIConfigTracker(ld_client: LDClient, variation_key: str, config_key: str, context: Context)[source]

Tracks configuration and usage metrics for LaunchDarkly AI operations.

__init__(ld_client: LDClient, variation_key: str, config_key: str, context: Context)[source]

Initialize an AI configuration tracker.

Parameters:
  • ld_client – LaunchDarkly client instance.

  • variation_key – Variation key for tracking.

  • config_key – Configuration key for tracking.

  • context – Context for evaluation.

get_summary() → LDAIMetricSummary[source]

Get the current summary of AI metrics.

Returns:

Summary of AI metrics.

track_bedrock_converse_metrics(res: dict) → dict[source]

Track metrics for an AWS Bedrock Converse operation.

This function will track the duration of the operation, the token usage, and the success or error status.

Parameters:

res – Response dictionary from Bedrock.

Returns:

The original response dictionary.
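
Because the response is returned unchanged, the call can be dropped inline around an existing bedrock-runtime converse() call, e.g. `res = tracker.track_bedrock_converse_metrics(bedrock.converse(...))`. The hand-built dictionary below shows where the relevant data lives in a Converse-shaped response; the usage and metrics field names follow the Bedrock Converse API, but exactly which keys the tracker reads is an assumption.

```python
# Hand-built dict in the shape of a Bedrock Converse response; in real
# code this would come from boto3's bedrock-runtime client.
res = {
    "output": {"message": {"role": "assistant",
                           "content": [{"text": "Hello!"}]}},
    "stopReason": "end_turn",
    "usage": {"inputTokens": 40, "outputTokens": 8, "totalTokens": 48},
    "metrics": {"latencyMs": 120},
}
```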

track_duration(duration: int) → None[source]

Manually track the duration of an AI operation.

Parameters:

duration – Duration in milliseconds.

track_duration_of(func)[source]

Automatically track the duration of an AI operation.

If the function raises an exception, the duration is still tracked and the exception is re-raised.

Parameters:

func – Function to track.

Returns:

Result of the tracked function.
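
The documented semantics can be sketched with a hypothetical stand-in tracker: time the call, record the duration even when the function raises, and let the exception propagate. This is an illustrative mock, not the SDK implementation.

```python
import time

class StubTracker:  # hypothetical stand-in, not ldai.tracker.LDAIConfigTracker
    def __init__(self) -> None:
        self.duration_ms = None

    def track_duration(self, duration: int) -> None:
        self.duration_ms = duration

    def track_duration_of(self, func):
        # Time the call; the finally block records the duration even if
        # func raises, and the exception propagates to the caller.
        start = time.monotonic()
        try:
            return func()
        finally:
            self.track_duration(int((time.monotonic() - start) * 1000))

tracker = StubTracker()
result = tracker.track_duration_of(lambda: 2 + 2)
assert result == 4
assert tracker.duration_ms is not None
```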

track_error() → None[source]

Track an unsuccessful AI generation attempt.

track_feedback(feedback: Dict[str, FeedbackKind]) → None[source]

Track user feedback for an AI operation.

Parameters:

feedback – Dictionary containing feedback kind.
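
For example, feedback from a thumbs-up widget might be recorded as below. The "kind" dictionary key is an assumption inferred from the parameter description; the FeedbackKind enum and recording tracker here are self-contained stand-ins for the SDK types.

```python
from enum import Enum

class FeedbackKind(Enum):  # stand-in matching ldai.tracker.FeedbackKind
    Positive = "positive"
    Negative = "negative"

class RecordingTracker:  # stub that records what was tracked
    def __init__(self) -> None:
        self.feedback = None

    def track_feedback(self, feedback) -> None:
        self.feedback = feedback

# Real call: tracker.track_feedback({"kind": FeedbackKind.Positive})
tracker = RecordingTracker()
tracker.track_feedback({"kind": FeedbackKind.Positive})
assert tracker.feedback == {"kind": FeedbackKind.Positive}
```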

track_openai_metrics(func)[source]

Track OpenAI-specific operations.

This function will track the duration of the operation, the token usage, and the success or error status.

If the provided function throws, this method records the duration and an error, then re-raises the exception. A failed operation will not have any token usage data.

Parameters:

func – Function to track.

Returns:

Result of the tracked function.

track_success() → None[source]

Track a successful AI generation.

track_tokens(tokens: TokenUsage) → None[source]

Track token usage metrics.

Parameters:

tokens – Token usage data from either custom, OpenAI, or Bedrock sources.

class ldai.tracker.LDAIMetricSummary[source]

Summary of metrics which have been tracked.

__init__()[source]
property duration: int | None
property feedback: Dict[str, FeedbackKind] | None
property success: bool | None
property usage: TokenUsage | None
class ldai.tracker.TokenUsage(total: int, input: int, output: int)[source]

Tracks token usage for AI operations.

Parameters:
  • total – Total number of tokens used.

  • input – Number of tokens in the prompt.

  • output – Number of tokens in the completion.

__init__(total: int, input: int, output: int) → None
input: int
output: int
total: int
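
A common way to populate the three fields is from a provider's usage payload; typically total equals input plus output. The stand-in below mirrors the documented dataclass, and the raw field names follow the OpenAI-style usage object as an example mapping.

```python
from dataclasses import dataclass

@dataclass
class Usage:  # stand-in mirroring ldai.tracker.TokenUsage
    total: int
    input: int
    output: int

# Example: mapping an OpenAI-style usage payload onto the three fields.
raw = {"prompt_tokens": 42, "completion_tokens": 10, "total_tokens": 52}
usage = Usage(total=raw["total_tokens"],
              input=raw["prompt_tokens"],
              output=raw["completion_tokens"])
assert usage.total == usage.input + usage.output
```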