About the workshop
RAGE-KG explores and advances the state of the art in integrating Retrieval-Augmented Generation (RAG) with Knowledge Graphs, as well as the synergies between Large Language Models (LLMs) and the Linked Open Data ecosystem. We aim to foster innovative RAG architectures that rely on Semantic Web standards, along with new approaches to making Linked Open Data usable by LLMs, enhancing their ability to generate reliable, verifiable and context-aware responses grounded in structured, decentralized and authoritative data sources.
Where
ISWC 2025 | Nara, Japan
When
November 3, 2025
Keynote Talk
Roberto Navigli
Roberto Navigli is a full professor of Natural Language Processing at Sapienza University of Rome and a Fellow of AAAI, ACL, EurAI and ELLIS. He is the creator of BabelNet, the largest multilingual encyclopedic computational dictionary, and co-founder of Babelscape, a company focused on multilingual Natural Language Understanding. His pioneering work in knowledge graphs and semantic technologies places him at the forefront of GraphRAG research, bridging the gap between structured knowledge and large language models. He leads the Minerva project, the first family of LLMs pretrained in Italian, and has received prestigious ERC grants for his groundbreaking research in AI and NLP.
BabelNet, NounAtlas, Concept-pedia, and Other Marvels: Exploring Semantics in the Age of LLMs
Large Language Models (LLMs) have redefined the distributional paradigm in semantics, demonstrating that large-scale statistical learning can yield emergent representations of meaning. Yet, while these models exhibit impressive linguistic fluency and versatility, their internal representations of meaning remain largely opaque, data-driven, and detached from explicit conceptual structure. This talk revisits the problem of meaning representation from a complementary, knowledge-based perspective, presenting an integrated view of several large-scale semantic resources, including BabelNet, NounAtlas, and Concept-pedia, that aim to provide interpretable, multilingual, multimodal and conceptually grounded frameworks for modeling lexical and conceptual knowledge.
We will also discuss the potential of explicit semantics to interface with LLMs for enhanced interpretability and semantic alignment. In doing so, the talk argues for a renewed synthesis between symbolic and subsymbolic approaches to meaning, illustrating how curated, multilingual knowledge graphs and data-driven models can jointly contribute to a more comprehensive and transparent account of semantics in the era of large-scale neural language modeling.
Workshop Topics
Exploring the frontiers of Retrieval-Augmented Generation
RAG Architectures
- RAG architectures leveraging Knowledge Graphs, Semantic Web standards and Linked Data
- RAG design patterns including GraphRAG and Agentic RAG
- Evaluating RAG architectures with structured data
LLMs and Structured Data
- Training and fine-tuning LLMs with structured data
- Prompting Language Models with structured data
- Language Model-supported and ontology-supported SPARQL query generation
Innovative Approaches
- Neurosymbolic approaches for integrating Language Models with Linked Open Data, Semantic Web and Knowledge Graphs
- Use cases, work-in-progress reports and, especially, bold proposals for RAG systems