Grounding LLMs: Building a Knowledge Layer atop the Intelligence Layer • Talk @ UMBC • Sept 17, 2024

"Grounding LLMs: Building a Knowledge Layer atop the Intelligence Layer" • Invited Talk at the University of Maryland, Baltimore County ‪@umbc‬ • Knowledge-Infused Learning (CMSC691) • September 17, 2024

• Overview:
The talk surveyed methods for building a knowledge layer atop existing Large Language Models (LLMs): In-Context Learning (ICL), fine-tuning, Parameter-Efficient Fine-Tuning (PEFT), Retrieval-Augmented Generation (RAG), and the use of Knowledge Graphs (KGs) for better contextual understanding via structured data.

• Detailed Agenda:
The talk framed LLMs as an "intelligence" layer and the data associated with the task at hand as a "knowledge" layer, a framing that maps naturally onto today's natural-language interactive tasks over private, task-oriented data. To this end, the following topics were covered for developing a knowledge layer atop a base LLM:
- Transformer Encoder/Decoder Architecture: The Transformer encoder/decoder architecture was briefly explained, highlighting the role of encoder models for input understanding and of decoder models (LLMs) for generation tasks (see the encoder/decoder sketch after this list).
- Fine-tuning: Prevalent methods of fine-tuning LLMs were discussed: full fine-tuning, which updates all model parameters; surgical fine-tuning, which selectively updates specific layers; and Parameter-Efficient Fine-Tuning (PEFT), which trains only a small subset of parameters. The choice among them depends on the available data and task variation, with the lighter methods reducing memory and compute demands. PEFT also enables faster adaptation and modular storage: one base model is kept, plus a small adapter per task (see the LoRA sketch after this list).
- In-context learning/Few-shot prompting: Teaches the model to carry out a desired task without changing its weights, by demonstrating the task with examples placed directly in the prompt (see the prompting sketch after this list).
- Retrieval-Augmented Generation (RAG): Combines retrieval with generation to enhance performance. RAG retrieves relevant information from an external knowledge base and uses it to build an expanded prompt for the model; this is an effective way to reduce hallucination and ground the model's outputs for most tasks (see the RAG sketch after this list).
- Knowledge Graphs (KGs): Owing to their structured representation, KGs provide knowledge and contextual enrichment for LLMs, leading to better contextual understanding in applications. A case study on claim-level fact verification using KGs (ClaimVer, https://arxiv.org/abs/2403.09724) was discussed (see the KG sketch after this list).
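
• Illustrative Sketches:
The following minimal Python sketches illustrate the techniques above. None are from the talk itself; model names, helper functions, and hyperparameters are assumptions chosen for brevity.

Encoder vs. decoder (a sketch using Hugging Face transformers pipelines; the checkpoints are common public ones, not necessarily those shown in the talk):

from transformers import pipeline

# encoder model: input understanding, e.g. masked-token prediction
encoder = pipeline("fill-mask", model="bert-base-uncased")
print(encoder("LLMs need a [MASK] layer for grounding.")[0]["token_str"])

# decoder model (an LLM): open-ended text generation
decoder = pipeline("text-generation", model="gpt2")
print(decoder("Grounding LLMs means", max_new_tokens=20)[0]["generated_text"])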
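
PEFT via LoRA (a minimal PyTorch sketch of the low-rank adaptation idea from the LoRA paper linked below; the wrapper class and hyperparameters are illustrative, not the talk's code):

import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the pretrained weights
        # only A and B (r * (d_in + d_out) parameters) are trained
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r

    def forward(self, x):
        # frozen base output plus the scaled low-rank correction
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scale

layer = LoRALinear(nn.Linear(768, 768))  # e.g. wrap an attention projection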
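
Few-shot prompting (a sketch: the task is demonstrated inside the prompt itself, so no weights change; the sentiment task and examples are made up):

examples = [
    ("The food was cold and the staff rude.", "negative"),
    ("Absolutely loved the ambience!", "positive"),
]
query = "Service was slow but the dessert made up for it."

prompt = "Classify the sentiment of each review.\n\n"
for text, label in examples:
    prompt += f"Review: {text}\nSentiment: {label}\n\n"
prompt += f"Review: {query}\nSentiment:"
# `prompt` is then sent unchanged to any chat/completion endpoint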
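
RAG (a minimal sketch: retrieve the most similar passages and prepend them to the prompt; the hashed bag-of-words embedding is a toy stand-in for a real sentence encoder):

import numpy as np

def embed(texts, dim=256):
    # toy hashed bag-of-words vectors; a real system would use an
    # embedding model here
    vecs = np.zeros((len(texts), dim))
    for i, t in enumerate(texts):
        for tok in t.lower().split():
            vecs[i, hash(tok) % dim] += 1.0
    return vecs

def retrieve(query, passages, k=2):
    # rank passages by cosine similarity to the query
    q, P = embed([query])[0], embed(passages)
    sims = P @ q / (np.linalg.norm(P, axis=1) * np.linalg.norm(q) + 1e-9)
    return [passages[i] for i in np.argsort(-sims)[:k]]

def rag_prompt(query, passages):
    context = "\n".join("- " + p for p in retrieve(query, passages))
    return ("Answer using only the context below.\n\n"
            f"Context:\n{context}\n\nQuestion: {query}\nAnswer:")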
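
KG-grounded verification (a toy sketch in the spirit of the ClaimVer case study; the paper's actual pipeline extracts triples from text and attributes evidence, which is far more involved than this lookup):

# a tiny knowledge graph of (subject, relation, object) triples
kg = {
    ("Marie Curie", "born_in", "Warsaw"),
    ("Marie Curie", "won", "Nobel Prize in Physics"),
}

def verify(claim_triple):
    # a claim triple is supported if it matches the KG; the matching
    # triple doubles as the attributed evidence
    if claim_triple in kg:
        return "supported", claim_triple
    return "not supported", None

print(verify(("Marie Curie", "born_in", "Warsaw")))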

• Relevant Links/Papers:
- LoRA: Low-Rank Adaptation of Large Language Models: https://arxiv.org/abs/2106.09685
- Surgical Fine-Tuning Improves Adaptation to Distribution Shifts: https://arxiv.org/abs/2210.11466
- Gaussian Adaptive Attention is All You Need: Robust Contextual Representations Across Multiple Modalities: https://arxiv.org/abs/2401.11143
- ClaimVer: Explainable Claim-Level Verification and Evidence Attribution of Text Through Knowledge Graphs: https://arxiv.org/abs/2403.09724
- CMSC691: https://kil-workshop.github.io/CMSC69...
