Poster in Workshop: Table Representation Learning Workshop
Generating Data Augmentation Queries Using Large Language Models
Christopher Buss · Jasmin Mousavi · Mikhail Tokarev · Arash Termehchy · David Maier · Stefan Lee
Keywords: [ Large language models ] [ Federated DBMS ] [ data integration ] [ Applied ML and AI for data management ] [ Information Integration ] [ Heterogeneous DBMS ]
Users often want to augment entities in their datasets with relevant information. As many external sources are accessible only via keyword-search interfaces, a user usually has to manually formulate a keyword query that extracts relevant information for each entity. This is challenging, as many data sources contain numerous tuples, only a small fraction of which may be relevant. Moreover, different datasets may represent the same information in distinct forms and under different terms. In such cases, it is difficult to formulate a query that precisely retrieves information relevant to a specific entity. Current methods for information enrichment mainly rely on resource-intensive manual effort to formulate queries that discover relevant information. However, it is often important for users to get initial answers quickly and without substantial investment in resources (such as human attention). We propose a progressive approach to discovering entity-relevant information from external sources with minimal expert intervention. It leverages end users' feedback to progressively learn how to retrieve information relevant to each entity in a dataset from external data sources. To bootstrap performance, we use a pre-trained large language model (LLM) to produce rich representations of entities. We evaluate the use of parameter-efficient techniques for aligning the LLM's representations with our downstream task of online query policy learning.
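The sketch below illustrates, under stated assumptions, the kind of pipeline the abstract describes: an entity is encoded with a pre-trained language model adapted via a parameter-efficient technique (LoRA through the `peft` library is used here as one common choice), and a simple learned policy scores candidate keyword queries for that entity and is updated online from binary user relevance feedback. The model name (`bert-base-uncased`), the bilinear scoring head, the greedy selection rule, and the feedback function are illustrative assumptions, not the authors' exact method.

```python
# Minimal sketch: parameter-efficient entity encoding + online query policy.
# Assumptions: bert-base-uncased encoder, LoRA adaptation, bilinear scorer,
# greedy query selection, binary relevance feedback from the user.
import torch
from transformers import AutoModel, AutoTokenizer
from peft import LoraConfig, get_peft_model

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = get_peft_model(
    AutoModel.from_pretrained("bert-base-uncased"),
    LoraConfig(r=8, lora_alpha=16, target_modules=["query", "value"]),
)

def embed(text: str) -> torch.Tensor:
    """Return a single vector representation ([CLS] token) for a text snippet."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=128)
    with torch.no_grad():
        return encoder(**inputs).last_hidden_state[:, 0, :]

class QueryPolicy(torch.nn.Module):
    """Scores candidate keyword queries against an entity representation."""
    def __init__(self, dim: int = 768):
        super().__init__()
        self.bilinear = torch.nn.Bilinear(dim, dim, 1)

    def score(self, entity_vec: torch.Tensor, query_vec: torch.Tensor) -> torch.Tensor:
        return self.bilinear(entity_vec, query_vec).squeeze(-1)

policy = QueryPolicy()
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

def select_and_update(entity_text, candidate_queries, user_feedback_fn):
    """Pick the highest-scoring candidate query, then learn from user feedback."""
    entity_vec = embed(entity_text)
    scores = torch.cat([policy.score(entity_vec, embed(q)) for q in candidate_queries])
    best = int(scores.argmax())
    # user_feedback_fn is a hypothetical callback: 1.0 if the returned results
    # were judged relevant by the end user, 0.0 otherwise.
    reward = user_feedback_fn(candidate_queries[best])
    loss = torch.nn.functional.binary_cross_entropy_with_logits(
        scores[best], torch.tensor(reward)
    )
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return candidate_queries[best]
```

In this toy setup the policy improves progressively as each round of feedback arrives, which mirrors the abstract's goal of returning useful initial answers quickly and refining retrieval with minimal expert intervention; a real system would also need exploration (e.g., epsilon-greedy) and a way to generate the candidate queries themselves.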