Conversation

@Lzzzzzt Lzzzzzt commented Nov 7, 2025

This PR completes #400.

To do that, this PR completes the following tasks:

  • Change LlmGenerateResponse to an enum that contains either text or JSON (a sketch follows below).
  • Make every LLM client that implements LlmGenerationClient return JSON or text based on the requested output format.

That said, I'm not sure the JSON value I chose is correct; I just followed the original logic when returning it.
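
For clarity, here is a minimal sketch of the enum shape this describes (the variant names are assumptions, not necessarily what the PR uses):

    use serde_json::Value;

    /// Sketch of the enum described above; actual variant names may differ.
    pub enum LlmGenerateResponse {
        /// Plain text returned by the model.
        Text(String),
        /// Structured JSON returned by the model.
        Json(Value),
    }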

@Lzzzzzt Lzzzzzt marked this pull request as ready for review November 7, 2025 07:35

@georgeh0 georgeh0 (Member) left a comment

Please make sure you test it properly with our examples that use LLMs. Thanks!

    let text = if let Some(json) = extracted_json {
        // Try strict JSON serialization first
        serde_json::to_string(&json)?
        return Ok(LlmGenerateResponse::Json(json));

georgeh0 (Member) commented:

To make the two branches more symmetric, we could make both branches return a LlmGenerateResponse.
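
A minimal sketch of that symmetric shape, assuming the enum variants sketched in the PR description (hypothetical helper, not the PR's exact code):

    use serde_json::Value;

    // Both branches construct a LlmGenerateResponse, so neither needs
    // an early `return`.
    fn into_response(extracted_json: Option<Value>, raw_text: String) -> LlmGenerateResponse {
        match extracted_json {
            Some(json) => LlmGenerateResponse::Json(json),
            None => LlmGenerateResponse::Text(raw_text),
        }
    }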

    // Try permissive json5 parsing as fallback
    match json5::from_str::<serde_json::Value>(s) {
        Ok(value) => {
            println!("[Anthropic] Used permissive JSON5 parser for output");

georgeh0 (Member) commented:

In this branch, since we already parsed JSON, we should directly return JSON too.
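
A minimal sketch of that, reusing the json5 fallback shown above (hypothetical code, assuming the Json variant from the PR description):

    use serde_json::Value;

    // The permissive parse already yields a serde_json::Value, so we can
    // return it as the Json variant instead of re-serializing it to text.
    fn parse_permissive(s: &str) -> anyhow::Result<LlmGenerateResponse> {
        let value: Value = json5::from_str(s)?;
        Ok(LlmGenerateResponse::Json(value))
    }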

Comment on lines +378 to +379
    return Ok(super::LlmGenerateResponse::Json(serde_json::json!(
        resp_json

georgeh0 (Member) commented:

This is the JSON of the whole response message, e.g. something like

    {
      "content": {
        "parts": [
          {
            "text": "{...}"
          }
        ],
        "role": "model"
      },
      "index": 0
    }

NOT the JSON of the model's actual response.

Did you get a chance to test this?
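
For the envelope shape shown above, a minimal sketch of extracting the model's own JSON (hypothetical helper; field paths assume the Gemini-style candidate object quoted above):

    use serde_json::Value;

    // Dig the model's text out of content.parts[0].text, then parse that
    // string as the actual JSON response.
    fn extract_model_json(candidate: &Value) -> anyhow::Result<Value> {
        let text = candidate["content"]["parts"][0]["text"]
            .as_str()
            .ok_or_else(|| anyhow::anyhow!("missing text part in response"))?;
        Ok(serde_json::from_str(text)?)
    }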

Lzzzzzt (Author) replied:

So, to be more specific: if the return value is Json, we should return just the JSON value that the LLM responded with, is that right?

        text: json.response,
    })

    Ok(super::LlmGenerateResponse::Json(res.json().await?))

georgeh0 (Member) commented:

Again, this is the larger JSON of the whole response message, NOT the specific JSON the model responded with.

Did you get a chance to test?
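
A minimal sketch of the fix being asked for here, assuming an Ollama-style response struct whose response field holds the model's output string (names are hypothetical):

    use serde_json::Value;

    // Hypothetical mirror of the response struct's `response` text field.
    struct OllamaResponse {
        response: String,
    }

    // Parse the model's output string instead of returning the whole
    // HTTP response envelope as JSON.
    fn model_json(json: &OllamaResponse) -> anyhow::Result<Value> {
        Ok(serde_json::from_str(&json.response)?)
    }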

Comment on lines +153 to +156
    let response_json = serde_json::json!(
        response_iter
            .next()
            .ok_or_else(|| anyhow::anyhow!("No response from OpenAI"))?

georgeh0 (Member) commented:

Same problem as above. This is NOT the specific JSON responded by the model.
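
As above, a minimal sketch of the intended behavior (hypothetical; assumes the first choice's message content string holds the model's JSON):

    use serde_json::Value;

    // Parse the message content string as the model's JSON, rather than
    // wrapping the whole choice object in serde_json::json!.
    fn openai_model_json(content: &str) -> anyhow::Result<Value> {
        Ok(serde_json::from_str(content)?)
    }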

