[fix] stagehand.agent isn't using the correct model defaults #1213
Conversation
🦋 Changeset detected. Latest commit: e6f6d86. The changes in this PR will be included in the next version bump. This PR includes changesets to release 2 packages.
Greptile Overview
Greptile Summary
Fixed model configuration defaults for stagehand.agent() to ensure the correct model name is used in both logging and API calls. Previously, the model was incorrectly logged and sent to the API as gpt-4.1-mini instead of openai/gpt-4.1-mini.
Key Changes
- Logging fix (v3.ts:1374): Changed from `this.llmClient.modelName` to `this.modelName` to ensure the correct model name appears in logs
- API call fix (v3.ts:1477-1485, 1585-1593): Explicitly set the model in `AgentConfig` before calling `apiClient.agentExecute()` to ensure the default model is properly passed when `options.model` is not specified
The fix ensures that when users run stagehand.agent() without specifying a model, it correctly defaults to the fully qualified model name (e.g., openai/gpt-4.1-mini) instead of an incomplete identifier.
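The fallback behavior described above can be sketched as follows; the helper names (`resolveModel`, `withResolvedModel`) and the `"openai"` default provider are assumptions for illustration, not the actual Stagehand internals.

```typescript
// Hedged sketch of the default-model fallback: a bare model id gets
// qualified with a provider prefix, and the config handed to the API
// client always carries a fully qualified model name.
interface AgentConfig {
  model?: string;
}

// Prefix a bare model name with a provider so "gpt-4.1-mini" becomes
// "openai/gpt-4.1-mini"; already-qualified names pass through unchanged.
function resolveModel(model: string, defaultProvider = "openai"): string {
  return model.includes("/") ? model : `${defaultProvider}/${model}`;
}

// Ensure the config always carries a resolved model, falling back to the
// instance default when options.model is not specified.
function withResolvedModel(
  options: AgentConfig | undefined,
  instanceDefault: string,
): AgentConfig {
  return { ...options, model: resolveModel(options?.model ?? instanceDefault) };
}

console.log(withResolvedModel(undefined, "gpt-4.1-mini").model);
// → openai/gpt-4.1-mini
```

An explicitly provided, already-qualified model (e.g. `anthropic/claude-sonnet-4`) is left untouched, which is why the fix should apply across providers.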
Confidence Score: 5/5
- This PR is safe to merge with minimal risk
- The changes are straightforward bug fixes that correct model name references in two specific contexts (logging and API calls). The fix uses existing instance properties (`this.modelName`) which are already properly initialized during V3 construction, and ensures the model configuration is properly propagated to the API client. No new logic or edge cases are introduced.
- No files require special attention
Important Files Changed
File Analysis
| Filename | Score | Overview |
|---|---|---|
| packages/core/lib/v3/v3.ts | 5/5 | Fixed model configuration defaults for stagehand.agent() - changed logging to use this.modelName instead of this.llmClient.modelName, and ensured model is properly set when calling API client |
Sequence Diagram
sequenceDiagram
    participant User
    participant V3
    participant Agent as agent()
    participant APIClient
    User->>V3: new V3(opts)
    V3->>V3: resolveModelConfiguration(opts.model)
    V3->>V3: this.modelName = resolvedModelName
    User->>V3: v3.agent(options?)
    Agent->>Agent: Log "Creating v3 agent" with this.modelName
    Note over Agent: Fixed: Use this.modelName instead of this.llmClient.modelName
    alt options.cua === true
        Agent->>Agent: resolveModel(options?.model || this.modelName)
        Agent->>Agent: Create agentConfigWithModel with resolved model
        User->>Agent: execute(instruction)
        Agent->>APIClient: agentExecute(agentConfigWithModel, ...)
        Note over Agent,APIClient: Fixed: Ensure model is set in config
    else default AISDK path
        Agent->>Agent: Use options?.model or this.modelName
        User->>Agent: execute(instruction)
        Agent->>APIClient: agentExecute(agentConfigWithModel, ...)
        Note over Agent,APIClient: Fixed: Ensure model is set in config
    end
    APIClient-->>User: AgentResult
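The logging half of the fix can be illustrated with a minimal sketch; the classes below are simplified stand-ins for the real `V3` and LLM client, and the method name `agentLogLine` is invented for the example.

```typescript
// The underlying LLM client may only know the bare model id, which is why
// logging this.llmClient.modelName produced "gpt-4.1-mini" before the fix.
class LLMClient {
  constructor(public modelName: string) {}
}

class V3 {
  modelName: string; // fully qualified, e.g. "openai/gpt-4.1-mini"
  llmClient: LLMClient;

  constructor(model: string) {
    this.modelName = model.includes("/") ? model : `openai/${model}`;
    this.llmClient = new LLMClient(this.modelName.split("/")[1]);
  }

  // Fixed behavior: log this.modelName, not this.llmClient.modelName,
  // so the fully qualified name appears in the "Creating v3 agent" log.
  agentLogLine(): string {
    return `Creating v3 agent with model ${this.modelName}`;
  }
}

console.log(new V3("gpt-4.1-mini").agentLogLine());
// → Creating v3 agent with model openai/gpt-4.1-mini
```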
1 file reviewed, no comments
IMO it's hardcoded (for some reason) here
So when I ran stagehand.agent out of the box, I kept getting the model set to gpt-4.1-mini by default. This seems to be because the Vercel AI SDK is used as the default for logging, and the model isn't ever getting set to openai/gpt-4.1-mini. This should fix both issues and hopefully fix default model configs for all providers