LLM-AutoDiff: Auto-Differentiate Any LLM Workflow (Jan 2025)

Title: LLM-AutoDiff: Auto-Differentiate Any LLM Workflow
Link: arxiv.org/abs/2501.16673
Date: 8 Jan 2025

Summary:
This paper introduces LLM-AutoDiff, a novel framework for Automatic Prompt Engineering (APE) that extends textual gradient-based methods to multi-component, potentially cyclic LLM architectures. Implemented within the AdalFlow library, LLM-AutoDiff treats each textual input as a trainable parameter and uses a frozen 'backward engine' LLM to generate feedback, akin to 'textual gradients', that guides iterative prompt updates. The framework accommodates functional nodes, preserves time-sequential behavior in repeated calls, and combats the 'lost-in-the-middle' problem by isolating distinct sub-prompts. Across diverse tasks, including single-step classification, multi-hop retrieval-based QA, and agent-driven pipelines, LLM-AutoDiff consistently outperforms existing textual gradient baselines in both accuracy and training cost.
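
For intuition, here is a minimal Python sketch of one textual-gradient cycle for a single trainable sub-prompt: a forward pass through the task LLM, a "backward" pass in which a frozen backward-engine LLM writes textual feedback, and an update step that rewrites the prompt from that feedback. Everything below (the call_llm placeholder, the prompt wording, the single-parameter setup) is an illustrative assumption and does not reproduce AdalFlow's actual API or the paper's full multi-node handling.

# Illustrative sketch only; call_llm() is a placeholder for any chat/completion API.
from dataclasses import dataclass

@dataclass
class TextParameter:
    """A trainable piece of text (e.g. a system sub-prompt)."""
    value: str
    gradient: str = ""  # textual feedback produced by the backward engine

def call_llm(prompt: str) -> str:
    """Placeholder: plug in your LLM client here."""
    raise NotImplementedError

def forward(instruction: TextParameter, question: str) -> str:
    # Forward pass: the task LLM answers using the current instruction text.
    return call_llm(f"{instruction.value}\n\nQuestion: {question}")

def backward(instruction: TextParameter, question: str, prediction: str, gold: str) -> None:
    # "Backward" pass: a frozen backward-engine LLM critiques the instruction
    # in light of the error, yielding a textual gradient.
    instruction.gradient = call_llm(
        "You are a backward engine. The instruction below produced a wrong answer.\n"
        f"Instruction: {instruction.value}\n"
        f"Question: {question}\nPrediction: {prediction}\nExpected: {gold}\n"
        "Explain concisely how the instruction should change."
    )

def step(instruction: TextParameter) -> None:
    # Optimizer step: rewrite the instruction using the textual gradient,
    # analogous to a parameter update in numeric autodiff.
    instruction.value = call_llm(
        f"Rewrite this instruction:\n{instruction.value}\n"
        f"using this feedback:\n{instruction.gradient}\n"
        "Return only the revised instruction."
    )

# One training iteration over a (question, gold) pair:
#   pred = forward(instr, q)
#   if pred != gold: backward(instr, q, pred, gold); step(instr)

In the full framework described by the paper, many such textual parameters sit at different nodes of a (possibly cyclic) workflow graph, and feedback is propagated backward through functional nodes rather than handled one prompt at a time.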

Key Topics:
Automatic Prompt Engineering (APE)
Textual gradients
Multi-component LLM architectures
Cyclic LLM workflows
AdalFlow library
Backward engine LLM
Functional nodes
Time-sequential behavior
Sub-prompt isolation
Multi-hop retrieval
Agent-driven pipelines
