

Poster in Workshop: Machine Learning for Systems

FALCON: Long Short Term Memory Feedback-Driven Adaptive Code Generation for Enhanced Automated Programming Systems

Zeyuan Li · Yangfan He · Yuchen Li · Tianyu Shi · Bin Lei · Jianhui Wang · Lewei He · Qiuwu Chen


Abstract:

Large language models (LLMs) have recently achieved significant progress in automated code generation. Despite their strong instruction-following capabilities, these models frequently struggle to align with user intent: they are hampered by training datasets that lack diversity and fail to cover specialized tasks or edge cases, and supervised fine-tuning (SFT) and reinforcement learning from human feedback (RLHF) alone often do not yield precise, human-intent-aligned code. To tackle these challenges and improve code generation for automated programming systems, we propose FALCON, a Feedback-driven Adaptive Long/short-term memory-based Coding Optimization framework. At the global level, long-term memory improves code quality by retaining and reusing learned knowledge; at the local level, short-term memory incorporates immediate feedback from compilers and AI systems. We further introduce meta-reinforcement learning with feedback rewards to solve the resulting global-local bi-level optimization problem, enhancing the model's adaptability across diverse code generation tasks. Evaluations on benchmarks such as APPS and CodeUltraFeedback demonstrate that our approach not only increases the functional correctness of the generated code but also improves its overall quality.
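
To make the two memory levels concrete, below is a minimal, illustrative Python sketch of one feedback-driven generation step. It is not the authors' implementation: the `generate_code` and `run_tests` callables, the word-overlap retrieval heuristic, and the reward weights are hypothetical stand-ins for the paper's learned components and meta-RL feedback reward.

```python
# Illustrative sketch of a FALCON-style feedback loop (assumptions noted inline).
from dataclasses import dataclass, field


@dataclass
class MemoryEntry:
    task: str       # natural-language task description
    code: str       # generated candidate program
    reward: float   # scalar feedback reward for this candidate


@dataclass
class LongTermMemory:
    entries: list[MemoryEntry] = field(default_factory=list)

    def add(self, entry: MemoryEntry) -> None:
        self.entries.append(entry)

    def retrieve(self, task: str, k: int = 3) -> list[MemoryEntry]:
        # Toy retrieval: prefer high-reward entries whose task shares words
        # with the current one (a real system would use learned retrieval).
        def overlap(e: MemoryEntry) -> int:
            return len(set(e.task.split()) & set(task.split()))

        ranked = sorted(self.entries, key=lambda e: (overlap(e), e.reward), reverse=True)
        return ranked[:k]


def compiler_feedback(code: str) -> tuple[bool, str]:
    """Short-term feedback: does the candidate at least parse/compile?"""
    try:
        compile(code, "<candidate>", "exec")
        return True, ""
    except SyntaxError as exc:
        return False, str(exc)


def feedback_reward(compiles: bool, tests_passed: int, tests_total: int) -> float:
    # Hypothetical reward shaping: compilation gate plus unit-test pass rate.
    return (0.3 if compiles else 0.0) + 0.7 * (tests_passed / max(tests_total, 1))


def generation_step(task, generate_code, run_tests, long_term: LongTermMemory):
    # Global level: condition generation on retrieved long-term experience.
    examples = long_term.retrieve(task)
    code = generate_code(task, examples)
    # Local level: collect immediate compiler and test feedback.
    compiles, _error = compiler_feedback(code)
    passed, total = run_tests(code) if compiles else (0, 1)
    reward = feedback_reward(compiles, passed, total)
    # Fold the outcome back into long-term memory for future tasks.
    long_term.add(MemoryEntry(task=task, code=code, reward=reward))
    return code, reward
```

In this sketch the reward would drive a policy update in the meta-RL loop; here it is only computed and stored, since the optimization itself is outside the scope of the example.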
