

Poster
in
Workshop: Workshop on Open-World Agents: Synergizing Reasoning and Decision-Making in Open-World Environments (OWA-2024)

Towards Humanoid: Value-Driven Agent Modeling Based on Large Language Models

Xuzheng Chen · Zhangshiyin · Guojie Song

Keywords: [ value-driven ] [ agent simulation ] [ large language model ] [ believable agent ]


Abstract:

A humanoid agent aims to build a believable proxy of human behavior. However, existing memory-driven methods overlook humans' intrinsic values, allowing agents to maintain believability only in short-term simulations driven by external demands or predefined tasks. To this end, we propose the first value-driven humanoid agent architecture based on Large Language Models (LLMs), which comprises three modules. The internal module stores the agent's values and basic needs, guiding both long-term behaviors and short-term actions. The pursue module instantiates values into specific goals, thereby continuously driving the agent's rational behaviors. The desire module adjusts each individual action of the agent to meet basic needs. Combined with the powerful understanding and generation capabilities of LLMs, values enable the agent to exhibit life-long believability in a dynamic environment. In our experiment, we carefully designed a world with one character as the protagonist and predefined fixed behaviors for the other Non-Player Characters (NPCs), retaining initiative only for the protagonist. The behavioral differences between the value-driven and memory-driven protagonists demonstrate the superiority of our framework.
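To make the described architecture concrete, the following is a minimal sketch of a value-driven agent loop with the three modules (internal, pursue, desire) outlined in the abstract. All names (InternalModule, PursueModule, DesireModule, call_llm), data fields, and prompt wording are illustrative assumptions, not the authors' implementation.

    # Minimal sketch of the three-module, value-driven agent loop described above.
    # All class/function names and prompts are hypothetical illustrations.

    from dataclasses import dataclass, field


    def call_llm(prompt: str) -> str:
        # Placeholder for an actual LLM call (e.g., an API client); returns a canned reply.
        return f"[LLM reply to: {prompt.splitlines()[0]}]"


    @dataclass
    class InternalModule:
        # Stores stable values and basic needs that guide long- and short-term behavior.
        values: list[str] = field(default_factory=lambda: ["honesty", "curiosity"])
        needs: dict[str, float] = field(default_factory=lambda: {"hunger": 0.2, "rest": 0.8})


    class PursueModule:
        # Instantiates abstract values into a concrete goal that drives rational behavior.
        def derive_goal(self, internal: InternalModule, situation: str) -> str:
            prompt = (
                f"Values: {internal.values}\nSituation: {situation}\n"
                "Propose one concrete goal consistent with these values."
            )
            return call_llm(prompt)


    class DesireModule:
        # Adjusts each individual action so that pressing basic needs are met first.
        def adjust_action(self, internal: InternalModule, action: str) -> str:
            urgent = [name for name, level in internal.needs.items() if level > 0.7]
            if urgent:
                return call_llm(
                    f"Action: {action}\nUrgent needs: {urgent}\n"
                    "Rewrite the action so it also addresses these needs."
                )
            return action


    def agent_step(internal: InternalModule, pursue: PursueModule,
                   desire: DesireModule, situation: str) -> str:
        goal = pursue.derive_goal(internal, situation)            # values -> goal
        action = call_llm(f"Goal: {goal}\nSituation: {situation}\nNext action:")
        return desire.adjust_action(internal, action)             # need-aware adjustment

In this sketch, the pursue module turns values into a goal, the goal is turned into a candidate action, and the desire module rewrites the action when a basic need is urgent, mirroring the long-term/short-term split described in the abstract.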
