

Poster

Instruction Embedding: Latent Representations of Instructions Towards Task Identification

Yiwei Li · Jiayi Shi · Shaoxiong Feng · Peiwen Yuan · Xinglin Wang · Boyuan Pan · Heda Wang · Yao Hu · Prof. Kan

Fri 13 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

Instruction data is crucial for aligning Large Language Models (LLMs) with human-level performance. Recent research on LIMA demonstrates that alignment is essentially a process in which the model adapts to instructions' interaction style or format in order to solve various tasks, leveraging pre-trained knowledge and skills. Therefore, the most important aspect of instruction data is the task it represents, rather than its specific semantics and knowledge. Latent representations of instructions play a role in instruction-related tasks such as data selection and demonstration retrieval. However, they are usually derived from text embeddings, which encompass overall semantic information that obscures the representation of task categories. In this work, we introduce a new concept, instruction embedding, and construct the Instruction Embedding Benchmark (IEB) for its training and evaluation. We then propose a baseline Prompt-based Instruction Embedding (PIE) method that makes the representations focus more on tasks. Evaluating PIE alongside other embedding methods on IEB with two designed tasks demonstrates its superior performance in accurately identifying task categories. Moreover, applying instruction embeddings to four downstream tasks showcases their effectiveness and suitability for instruction-related tasks.
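The core idea of a prompt-based instruction embedding can be sketched as follows. This is a minimal illustrative toy, not the paper's method: the prompt template is an assumption, and the bag-of-words encoder stands in for a real pre-trained sentence encoder, which the actual PIE method would use.

```python
# Hypothetical sketch of a prompt-based instruction embedding pipeline.
# The template below and the toy bag-of-words encoder are illustrative
# placeholders, not the actual PIE prompt or model.

import math
from collections import Counter

# Assumed prompt template: steer the encoder toward the task an instruction
# represents, rather than its surface semantics.
TEMPLATE = 'The task of the following instruction is: "{instruction}"'

def toy_encode(text: str) -> Counter:
    """Placeholder encoder: a sparse bag-of-words vector. A real pipeline
    would use a pre-trained sentence encoder instead."""
    return Counter(text.lower().split())

def cosine(u: Counter, v: Counter) -> float:
    """Cosine similarity between two sparse vectors."""
    dot = sum(u[w] * v[w] for w in u if w in v)
    nu = math.sqrt(sum(c * c for c in u.values()))
    nv = math.sqrt(sum(c * c for c in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def instruction_embedding(instruction: str) -> Counter:
    # Wrap the instruction in the task-focused prompt before encoding.
    return toy_encode(TEMPLATE.format(instruction=instruction))

a = instruction_embedding("Translate this sentence into French.")
b = instruction_embedding("Translate the paragraph into German.")
c = instruction_embedding("Summarize the following article.")

# Instructions for the same task (translation) should embed closer together
# than instructions for different tasks.
print(cosine(a, b) > cosine(a, c))  # → True
```

The same comparison made with raw text embeddings could be dominated by topical overlap (e.g., two instructions about France landing close together despite representing different tasks), which is the failure mode the prompt wrapper is meant to counteract.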
