feat: make LlmGenerationClient::generate return json
#1267
base: main
Conversation
fix: handle json value
georgeh0 left a comment
Please make sure you test it properly with our examples using LLM. Thanks!
    let text = if let Some(json) = extracted_json {
        // Try strict JSON serialization first
        serde_json::to_string(&json)?
        return Ok(LlmGenerateResponse::Json(json));
To make the two branches more symmetric, we may make both branches return a LlmGenerateResponse
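Something like this, as a rough sketch (assuming the surrounding function returns Result&lt;LlmGenerateResponse&gt; and that extracted_json and text are in scope as in the diff):

```rust
// Sketch only: both branches produce a LlmGenerateResponse variant,
// so no branch needs to serialize JSON back into a string.
let response = if let Some(json) = extracted_json {
    LlmGenerateResponse::Json(json)
} else {
    LlmGenerateResponse::Text(text)
};
Ok(response)
```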
    // Try permissive json5 parsing as fallback
    match json5::from_str::<serde_json::Value>(s) {
        Ok(value) => {
            println!("[Anthropic] Used permissive JSON5 parser for output");
In this branch, since we already parsed JSON, we should directly return JSON too.
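For example (a sketch; s is the raw model output as in the diff, and the error handling is illustrative):

```rust
// Sketch: on a successful json5 fallback parse, return the parsed
// value directly instead of serializing it back into text.
match json5::from_str::<serde_json::Value>(s) {
    Ok(value) => return Ok(super::LlmGenerateResponse::Json(value)),
    Err(e) => anyhow::bail!("failed to parse model output as JSON: {e}"),
}
```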
    return Ok(super::LlmGenerateResponse::Json(serde_json::json!(
        resp_json
This is the JSON of the whole response message, e.g. something like

    {
      "content": {
        "parts": [
          {
            "text": "{...}"
          }
        ],
        "role": "model"
      },
      "index": 0
    }

NOT the JSON of the model response.
Did you get a chance to test this?
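Something like this is what's expected, as a rough sketch (assuming a Gemini-style envelope where the model's output is the string in candidates[0].content.parts[0].text; the exact field path depends on the response struct):

```rust
// Sketch: pull the model's own output text out of the response
// envelope, then parse that string as JSON.
let text = resp_json["candidates"][0]["content"]["parts"][0]["text"]
    .as_str()
    .ok_or_else(|| anyhow::anyhow!("no text part in response"))?;
let value: serde_json::Value = serde_json::from_str(text)?;
return Ok(super::LlmGenerateResponse::Json(value));
```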
So, to be more specific: if the return value is Json, we should just return the whole JSON value that the LLM responded with, is that right?
        text: json.response,
    })

    ...

    Ok(super::LlmGenerateResponse::Json(res.json().await?))
Again, this is the larger JSON as the response message, NOT the specific JSON responded by the model.
Did you get a chance to test?
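As a sketch of the difference (assuming an Ollama-style body where the model's output is the string in the top-level response field, matching the json.response access in the diff; the struct here is hypothetical):

```rust
// Sketch: deserialize the envelope, then parse the model's own
// `response` string as JSON instead of returning the whole body.
#[derive(serde::Deserialize)]
struct ResponseBody {
    response: String,
}

let body: ResponseBody = res.json().await?;
let value: serde_json::Value = serde_json::from_str(&body.response)?;
Ok(super::LlmGenerateResponse::Json(value))
```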
    let response_json = serde_json::json!(
        response_iter
            .next()
            .ok_or_else(|| anyhow::anyhow!("No response from OpenAI"))?
Same problem as above. This is NOT the specific JSON responded by the model.
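A sketch for this path too (assuming each choice carries the model's text in message.content, as in OpenAI-style chat responses; the field names are assumptions):

```rust
// Sketch: take the first choice's message content (the model's own
// output) and parse it as JSON, rather than wrapping the choice itself.
let choice = response_iter
    .next()
    .ok_or_else(|| anyhow::anyhow!("No response from OpenAI"))?;
let content = choice
    .message
    .content
    .ok_or_else(|| anyhow::anyhow!("No content in OpenAI response"))?;
let value: serde_json::Value = serde_json::from_str(&content)?;
return Ok(super::LlmGenerateResponse::Json(value));
```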
This PR is to complete #400.
To finish that, this PR completed the following tasks:
- Changed LlmGenerateResponse to an enum that can contain either text or JSON (a sketch of the enum follows below).
- Made LlmGenerationClient return JSON or text based on the output format.

But actually, I'm not sure the JSON value I chose is correct; I just followed the original logic to return the JSON value.
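A minimal sketch of the resulting enum (variant names inferred from the diff excerpts above):

```rust
// Sketch: the response is either plain text or an already-parsed
// JSON value, selected by the requested output format.
pub enum LlmGenerateResponse {
    Text(String),
    Json(serde_json::Value),
}
```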
LlmGenerateResponseto enum to contain the text or jsonLlmGenerationClientreturn json or text based on the output format.But actually, I'm not sure about the json value that I choose is correct, I just follow the original logic to return the json value.