Abstract

Knowledge editing in Large Language Models (LLMs) aims to make precise updates to specific pieces of information, correcting inaccuracies or biases without unintentionally altering unrelated knowledge or skills. This field of research addresses three essential challenges: generalization (how well the model applies edited information across various contexts), locality (making accurate changes without impacting unrelated information), and scalability (ensuring performance remains efficient as the number of edits increases). Our research introduces two key contributions: (1) a new knowledge editing benchmark that overcomes limitations in existing benchmarks, providing materials suitable for fine-tuning and thorough evaluations, and (2) a novel approach using external memory to manage knowledge edits. This approach, called Relevance-based Parameter Activation (rel-par-act), utilizes an embedding model and vector store to activate LoRA layers tailored to specific edits. Our method achieves state-of-the-art performance in both generalization and locality on our benchmark and can scale to hundreds of edits with high efficiency.
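
The routing idea described above can be sketched roughly as follows: each edit is embedded and stored in a vector store, and at inference time the LoRA adapter associated with the most relevant edit is activated only when its similarity to the query is high enough; otherwise the unmodified base model is used. This is a minimal illustrative sketch only, and the class name, `embed` function, similarity threshold, and adapter handles are assumptions, not the implementation described in this work.

```python
# Hypothetical sketch of relevance-based adapter activation (rel-par-act).
# embed(), the threshold value, and the adapter objects are illustrative
# placeholders, not the actual implementation from this thesis.
import numpy as np

class RelParActRouter:
    def __init__(self, embed, threshold=0.8):
        self.embed = embed          # text -> unit-norm embedding vector
        self.threshold = threshold  # minimum cosine similarity needed to activate an adapter
        self.keys = []              # one edit embedding per edit (the "vector store")
        self.adapters = []          # one LoRA adapter handle per edit

    def add_edit(self, edit_text, adapter):
        """Index a new edit: store its embedding alongside its LoRA adapter."""
        self.keys.append(self.embed(edit_text))
        self.adapters.append(adapter)

    def route(self, query):
        """Return the adapter for the most relevant edit, or None to use the base model."""
        if not self.keys:
            return None
        q = self.embed(query)
        sims = np.stack(self.keys) @ q  # cosine similarity, assuming unit-norm vectors
        best = int(np.argmax(sims))
        return self.adapters[best] if sims[best] >= self.threshold else None
```

Under these assumptions, queries unrelated to any stored edit fall below the threshold and are answered by the base model unchanged, while adding an edit only appends one embedding and one adapter, so lookup cost grows with the number of edits rather than with model size.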
