Taking Control: Building Flexible AI Workflows with Elixir's LangChain Step Mode

Building an AI assistant that knows when to ask for permission versus acting autonomously is trickier than it sounds.


Think about a personal finance bot that can check your balance freely, but should pause before transferring money. Or a customer service assistant that can look up orders without hesitation but needs approval before issuing refunds.

The problem with traditional LangChain execution modes is that they force you to commit to an execution strategy upfront, before you know what the AI will actually try to do.

The Problem: One-Size-Fits-All Execution

Before step mode, LangChain offered two main execution strategies:

  • :until_success - Run until the chain completes successfully
  • :while_needs_response - Keep running while the chain indicates it needs more responses

These work well for straightforward workflows, but they break down when you need conditional control. Here's what the old approach looked like:

# Before: You had to pick your execution strategy upfront
defmodule FinanceAssistant do
  def process_request(user_message) do
    tools = [
      Tool.new!(%{
        name: "check_balance",
        description: "Check account balance",
        function: &BankAPI.check_balance/1
      }),
      Tool.new!(%{
        name: "transfer_money",
        description: "Transfer money between accounts",
        parameters_schema: %{
          type: "object",
          properties: %{
            "amount" => %{type: "number"}
          }
        },
        function: &BankAPI.transfer_money/1
      })
    ]
    
    chain = 
      LLMChain.new!(%{
        llm: ChatOpenAI.new!(%{model: "gpt-5"}),
        tools: tools
      })
      |> LLMChain.add_message(Message.new_user!(user_message))
    
    # Problem: No way to know if the AI will call risky tools
    case LLMChain.run(chain, mode: :while_needs_response) do
      {:ok, final_chain} -> 
        # All tools executed automatically - no chance to intervene
        extract_response(final_chain)
      {:error, reason} -> 
        handle_error(reason)
    end
  end
end

This approach meant that if the AI decided to transfer $10,000 or pay a bill, it would do so automatically, with no user confirmation.

The Solution: Step Mode

The new :step mode (PR #343, made by my friend and former Plataformatec coworker Caique Mitsuoka) provides exactly this kind of granular control as a first-class feature:

defmodule FinanceAssistant.StepMode do
  def process_request(user_message) do
    tools = [
      Tool.new!(%{
        name: "check_balance",
        description: "Check account balance",
        function: &BankAPI.check_balance/1
      }),
      Tool.new!(%{
        name: "transfer_money",
        description: "Transfer money between accounts",
        parameters_schema: %{
          type: "object",
          properties: %{
            "amount" => %{type: "number"}
          }
        },
        function: &BankAPI.transfer_money/1
      })
    ]
    
    chain = 
      LLMChain.new!(%{
        llm: ChatOpenAI.new!(%{model: "gpt-5"}),
        tools: tools
      })
      |> LLMChain.add_message(Message.new_user!(user_message))
    
    run_with_step_mode(chain)
  end
  
  defp run_with_step_mode(chain) do
    case LLMChain.run(chain, mode: :step) do
      {:ok, updated_chain} ->
        cond do
          requires_approval?(updated_chain) ->
            # Pause and return to the user for approval
            {:awaiting_approval, updated_chain, get_approval_context(updated_chain)}

          updated_chain.needs_response ->
            # Safe tool call - continue automatically
            run_with_step_mode(updated_chain)

          true ->
            # Chain finished - stop recursing
            {:completed, extract_response(updated_chain)}
        end

      {:error, reason} ->
        {:error, reason}
    end
  end
  
  defp requires_approval?(chain) do
    case get_last_tool_call(chain) do
      %{name: "transfer_money", args: args} ->
        amount = Map.get(args, "amount", 0)
        amount > 100  # Require approval for transfers over $100
      
      %{name: "pay_bill", args: args} ->
        amount = Map.get(args, "amount", 0)
        amount > 500  # Require approval for large bill payments
      
      _ ->
        false
    end
  end
  
  def continue_after_approval(chain) do
    # User approved, continue from where we left off
    run_with_step_mode(chain)
  end
end
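With this shape, the caller decides how to surface the pause. Here's a minimal sketch of the approval round-trip; everything in it (the module and function names, the :pending and :rejected shapes) is hypothetical glue code of mine, not part of LangChain's API:

```elixir
# Hypothetical glue around FinanceAssistant.StepMode.
# None of these names come from LangChain itself.
defmodule FinanceAssistant.Session do
  # First user turn: may complete outright or pause for approval.
  def handle_user_message(text) do
    case FinanceAssistant.StepMode.process_request(text) do
      {:awaiting_approval, chain, context} ->
        # Persist the paused chain (e.g. in a GenServer or session store),
        # then show `context` to the user as a confirmation prompt.
        {:pending, context, chain}

      other ->
        other
    end
  end

  # Called once the user answers the confirmation prompt.
  def handle_approval(paused_chain, true),
    do: FinanceAssistant.StepMode.continue_after_approval(paused_chain)

  def handle_approval(_paused_chain, false),
    do: {:rejected, "Action cancelled by user."}
end
```

The key design point is that the paused chain is just data: you can stash it anywhere (process state, ETS, the database) and resume it minutes later when the human responds.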

Even Cleaner: Using should_continue?

The latest enhancement (PR #361, also by Caique) adds a should_continue? callback that makes the control logic even more declarative:

defmodule FinanceAssistant.Declarative do
  def process_request(user_message) do
    tools = [
      Tool.new!(%{
        name: "check_balance",
        description: "Check account balance",
        function: &BankAPI.check_balance/1
      }),
      Tool.new!(%{
        name: "transfer_money",
        description: "Transfer money between accounts",
        parameters_schema: %{
          type: "object",
          properties: %{
            "amount" => %{type: "number"}
          }
        },
        function: &BankAPI.transfer_money/1
      })
    ]
    
    chain = 
      LLMChain.new!(%{
        llm: ChatOpenAI.new!(%{model: "gpt-5"}),
        tools: tools
      })
      |> LLMChain.add_message(Message.new_user!(user_message))
    
    should_continue_fn = fn chain ->
      # Continue if chain needs response AND doesn't require approval
      chain.needs_response && !requires_approval?(chain) && under_iteration_limit?(chain)
    end
    
    case LLMChain.run(chain, mode: :step, should_continue?: should_continue_fn) do
      {:ok, final_chain} ->
        # Either completed naturally or stopped due to our criteria
        if requires_approval?(final_chain) do
          {:awaiting_approval, final_chain, get_approval_context(final_chain)}
        else
          {:completed, extract_response(final_chain)}
        end
      
      {:error, reason} ->
        {:error, reason}
    end
  end
  
  defp under_iteration_limit?(chain) do
    # Prevent infinite loops
    length(chain.messages) < 20
  end
  
  defp requires_approval?(chain) do
    # Same logic as before, but now used in the should_continue? function
    case get_last_tool_call(chain) do
      %{name: name, args: args} when name in ["transfer_money", "pay_bill"] ->
        tool_requires_confirmation?(name, args)
      _ ->
        false
    end
  end
end

Real-World Applications

This step mode control enables many sophisticated patterns:

E-commerce Assistant

should_continue_fn = fn chain ->
  case get_last_tool_call(chain) do
    %{name: "place_order"} -> false  # Always pause before placing orders
    %{name: "cancel_order"} -> false  # Always pause before cancellations
    _ -> chain.needs_response && length(chain.messages) < 15
  end
end

Investment Portfolio Manager

should_continue_fn = fn chain ->
  case get_last_tool_call(chain) do
    %{name: "sell_stock"} -> false     # Always require approval for sales
    %{name: "buy_stock"} -> false      # Always require approval for purchases
    %{name: "check_portfolio"} -> chain.needs_response  # Safe to continue
    _ -> chain.needs_response && safe_to_continue?(chain)
  end
end

Why This Matters

Step mode basically turns LangChain from "all or nothing" into "smart decisions." Instead of crossing your fingers and hoping the AI doesn't do anything crazy, you can now pause it when it's about to do something that needs your approval.

This is huge for any real application where you can't just let the AI run wild. Step mode plus should_continue? gives you the control you actually need.

The beauty of this approach is that it codifies what many teams were already building manually, turning a common pattern into a supported feature with clean APIs and robust error handling.

This is even more important now that people are exploring how tools can leverage and display UI elements, like MCP UI.

Step mode will ship in LangChain v0.4.0; you can use it today via the v0.4.0-rc.2 release candidate.
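To try the release candidate, point your mix.exs at it (version string taken from this post; check Hex for the latest rc):

```elixir
# mix.exs - depend on the v0.4.0 release candidate from Hex
defp deps do
  [
    {:langchain, "0.4.0-rc.2"}
  ]
end
```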


Side note: I personally think Elixir's LangChain should consider rebranding to avoid the negative reputation that Python's LangChain has earned in the developer community. The Elixir implementation is genuinely well-designed and deserves to be judged on its own merits.

