
Agent re-attempts original tool call after HumanInTheLoopMiddleware edit decision #33787

@lesong36

Description

Checked other resources

  • This is a bug, not a usage question.
  • I added a clear and descriptive title that summarizes this issue.
  • I used the GitHub search to find a similar question and didn't find it.
  • I am sure that this is a bug in LangChain rather than my code.
  • The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
  • This is not related to the langchain-community package.
  • I read what a minimal reproducible example is (https://stackoverflow.com/help/minimal-reproducible-example).
  • I posted a self-contained, minimal, reproducible example. A maintainer can copy it and run it AS IS.

Example Code

Bug Description

When an agent's tool call is interrupted by the HumanInTheLoopMiddleware and the human operator chooses the edit decision, the middleware successfully executes the edited tool call.

However, in the subsequent step, the agent re-evaluates its state. It compares the original HumanMessage (the user's initial request) with the result of the edited tool call. It concludes that the original request is still unfulfilled.

As a result, the agent generates a new AiMessage that attempts to execute the original, un-edited tool call.

From a user's perspective, this is a bug. The edit decision implies "replace"—the user's intent is to cancel the original tool call and replace it with the new, edited one. The current behavior doesn't "clear" the agent's original intent, leading to a frustrating loop where the human has to edit and then immediately reject the same conceptual action.

Code to Reproduce

from langchain.agents import create_agent
from langchain.agents.middleware import HumanInTheLoopMiddleware
from langgraph.checkpoint.memory import InMemorySaver
from langchain.tools import tool
from langchain.messages import HumanMessage
from langgraph.types import Command

@tool
def read_email_tool(email_id: str) -> str:
    """Read an email by its ID.
    
    Args:
        email_id: The ID of the email to read
        
    Returns:
        The content of the email
    """
    # Simulated email content
    emails = {
        "email_001": "Subject: Meeting Tomorrow\n\nHi, let's meet tomorrow at 2pm.",
        "email_002": "Subject: Project Update\n\nThe project is progressing well.",
    }
    return emails.get(email_id, f"Email {email_id} not found")

@tool
def send_email_tool(to: str, subject: str, body: str) -> str:
    """Send an email to a recipient.
    
    Args:
        to: Email address of the recipient
        subject: Subject line of the email
        body: Body content of the email
        
    Returns:
        Confirmation message
    """
    return f"Email sent successfully to {to} with subject: {subject}"

hitl = HumanInTheLoopMiddleware(
    interrupt_on={
        # Require approval, editing, or rejection for sending emails
        "send_email_tool": {
            "allowed_decisions": ["approve", "edit", "reject"],
        },
        # Auto-approve reading emails
        "read_email_tool": False,
    }
)

agent = create_agent(
    model="gpt-5-nano",
    tools=[read_email_tool, send_email_tool],
    middleware=[hitl],
    checkpointer=InMemorySaver(),
)


# Test the HumanInTheLoopMiddleware
test_messages = [
    HumanMessage("Read email email_001"),
    HumanMessage("Send an email to alice@example.com with subject 'Test' and body 'Hello'"),
]
config = {"configurable": {"thread_id": "test-thread-4"}} 

# Invoke the agent with test messages
# Add thread_id to config since checkpointer is present
result = agent.invoke(
    {"messages": test_messages},
    config=config
)



# Human-in-the-loop leverages LangGraph's persistence layer.
# You must provide a thread ID to associate the execution with a conversation thread,
# so the conversation can be paused and resumed (as is needed for human review).
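
# (Optional) A hedged way to inspect the paused run before resuming. This
# sketch assumes the object returned by create_agent is a compiled LangGraph
# graph exposing get_state(), and that pending interrupts are attached to the
# snapshot's tasks.
snapshot = agent.get_state(config)
print(snapshot.next)  # the node(s) the run is paused at (the tool-execution step)
for task in snapshot.tasks:
    for intr in task.interrupts:
        print(intr.value)  # the HumanInTheLoopMiddleware approval request payload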


# Resume with edit decision - must provide edited_action when type is "edit"
result = agent.invoke(
    Command(
        resume={
            "decisions": [
                {
                    "type": "edit",
                    "message": "please drop the original tool calling and go on with the edited action",
                    "edited_action": {
                        "name": "send_email_tool",
                        "args": {"to": "alice@test.com", "subject": "this is a test", "body": "don't reply"}
                    }
                }
            ]
        }
    ),
    config=config  # Same thread ID to resume the paused conversation
)

for msg in result["messages"]:
    msg.pretty_print()

Error Message and Stack Trace (if applicable)

================================ Human Message =================================

Read email email_001
================================ Human Message =================================

Send an email to alice@example.com with subject 'Test' and body 'Hello'
================================== Ai Message ==================================
Tool Calls:
  read_email_tool (call_qGZOuMl09JvYWItys68yIPUn)
 Call ID: call_qGZOuMl09JvYWItys68yIPUn
  Args:
    email_id: email_001
================================= Tool Message =================================
Name: read_email_tool

Subject: Meeting Tomorrow

Hi, let's meet tomorrow at 2pm.
================================== Ai Message ==================================
Tool Calls:
  read_email_tool (call_ePau9qbENNSwUH0dzHfQfLvC)
 Call ID: call_ePau9qbENNSwUH0dzHfQfLvC
  Args:
    email_id: email_001
  send_email_tool (call_vsQaCe4bDz0Z270STcbGB5BT)
 Call ID: call_vsQaCe4bDz0Z270STcbGB5BT
  Args:
    to: alice@test.com
    subject: this is a test
    body: don't reply
================================= Tool Message =================================
Name: read_email_tool

Subject: Meeting Tomorrow

Hi, let's meet tomorrow at 2pm.
================================= Tool Message =================================
Name: send_email_tool

Email sent successfully to alice@test.com with subject: this is a test
================================== Ai Message ==================================
Tool Calls:
  send_email_tool (call_dUSgtnAQwmp9x3jBK1fKae32)
 Call ID: call_dUSgtnAQwmp9x3jBK1fKae32
  Args:
    to: alice@example.com
    subject: Test
    body: Hello

Description

Expected Behavior

  1. The agent runs and is interrupted, requesting to send an email to alice@example.com.
  2. The human resumes the agent with an edit decision, changing the recipient to alice@test.com.
  3. The agent executes the edited tool call (sending to alice@test.com).
  4. The agent receives the ToolMessage (e.g., "Email sent successfully to alice@test.com...").
  5. The agent should now consider the user's original request (which led to the first tool call) as fulfilled or superseded by the human's edit.
  6. The agent should not make any further tool calls. The final message in result["messages"] should be a final AIMessage to the user (e.g., "The email has been sent as edited.") or an empty message indicating the turn is over (see the sketch after this list).
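
A minimal sketch of the check implied by this expected behavior, assuming result is the value returned by the second invoke in the reproduction above:

# Hedged sketch: after the edit decision is executed, the turn should end with
# a plain AIMessage that carries no further tool calls.
final = result["messages"][-1]
assert final.type == "ai", "turn should end with an AIMessage to the user"
assert not getattr(final, "tool_calls", None), "no re-attempt of the original tool call"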

Actual Behavior

  1. Steps 1-4 happen as expected. The agent correctly executes the edited call to alice@test.com.
  2. However, after receiving the ToolMessage for the edited call, the agent re-evaluates its state.
  3. It sees the original HumanMessage("Send an email to alice@example.com...") and determines this task is still pending.
  4. As shown in the output, the agent's final message is another AiMessage containing a new tool call (call_dUSgtnAQwmp9x3jBK1fKae32 in the output above).
  5. This new tool call attempts to send the email to the original recipient, alice@example.com.
  6. This will, of course, trigger another HumanInTheLoopMiddleware interruption, forcing the user to reject the very action they thought their edit had already replaced (a sketch of this forced second resume follows this list).
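
As a hedged illustration of the loop this creates (assuming the re-issued send_email_tool call interrupts again on the same thread, and that a "reject" decision accepts an optional message the same way the "edit" decision in the reproduction does), the human has to resume a second time just to reject the call their edit was meant to replace:

# Hedged sketch of the forced second resume: rejecting the re-issued call.
result = agent.invoke(
    Command(
        resume={
            "decisions": [
                {
                    "type": "reject",
                    "message": "The email was already sent with the edited arguments; do not retry.",
                }
            ]
        }
    ),
    config=config,  # same thread ID as before
)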

System Info

System Information

OS: Darwin
OS Version: Darwin Kernel Version 25.0.0: Wed Sep 17 21:41:26 PDT 2025; root:xnu-12377.1.9~141/RELEASE_ARM64_T6041
Python Version: 3.13.9 | packaged by Anaconda, Inc. | (main, Oct 21 2025, 19:11:29) [Clang 20.1.8 ]

Package Information

langchain_core: 1.0.2
langchain: 1.0.3
langsmith: 0.4.39
langchain_anthropic: 1.0.0
langchain_chroma: 1.0.0
langchain_classic: 1.0.0
langchain_deepseek: 1.0.0
langchain_huggingface: 1.0.0
langchain_ollama: 1.0.0
langchain_openai: 1.0.1
langchain_prompty: 1.0.0
langchain_tavily: 0.2.12
langchain_text_splitters: 1.0.0
langgraph_sdk: 0.2.9

Optional packages not installed

langserve

Other Dependencies

aiohttp: 3.13.2
anthropic: 0.72.0
async-timeout: Installed. No version info available.
chromadb: 1.3.0
claude-agent-sdk: Installed. No version info available.
httpx: 0.28.1
huggingface-hub: 0.36.0
jsonpatch: 1.33
langchain-aws: Installed. No version info available.
langchain-community: Installed. No version info available.
langchain-fireworks: Installed. No version info available.
langchain-google-genai: Installed. No version info available.
langchain-google-vertexai: Installed. No version info available.
langchain-groq: Installed. No version info available.
langchain-mistralai: Installed. No version info available.
langchain-perplexity: Installed. No version info available.
langchain-together: Installed. No version info available.
langchain-xai: Installed. No version info available.
langgraph: 1.0.2
langsmith-pyo3: Installed. No version info available.
numpy: 2.3.4
ollama: 0.6.0
openai: 2.6.1
openai-agents: Installed. No version info available.
opentelemetry-api: 1.38.0
opentelemetry-exporter-otlp-proto-http: Installed. No version info available.
opentelemetry-sdk: 1.38.0
orjson: 3.11.4
packaging: 25.0
pydantic: 2.12.3
pytest: 8.4.2
pyyaml: 6.0.3
requests: 2.32.5
requests-toolbelt: 1.0.0
rich: 14.2.0
sentence-transformers: Installed. No version info available.
sqlalchemy: 2.0.44
tenacity: 9.1.2
tiktoken: 0.12.0
tokenizers: 0.22.1
transformers: Installed. No version info available.
typing-extensions: 4.15.0
vcrpy: Installed. No version info available.
zstandard: 0.25.0
