Hi Ole, I think this is likely to be great material. I have a question for you and Anis before you start.
A Knowledge Graph is just a way of representing stuff. I often look at tables and relationships and think they are similar to a KG, but less rich and expressive. An LLM is a probabilistic model that manipulates data (usually text) to generate new data (text, images, etc.).
So are you really talking in this series of posts about ways to represent data (a KG being one) and ways to process and generate data (an LLM being the relevant one), and about progressing from one-way interactions to a more conversational style of interaction?
At its simplest, the KG is basically a corpus of text describing concepts as nodes and edges. The LLM reads the text and generates output. Instead of text, we specify a KG as output, … and it goes on.
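To make that loop concrete, here is a minimal sketch of the "text in, KG out" idea; the prompt wording and the call_llm stub are purely illustrative, not anything from the series:

```python
# Minimal sketch: ask an LLM to restate free text as KG triples (nodes and edges).
# `call_llm` is a hypothetical stand-in for any LLM API; it returns a canned
# response here so the example runs on its own.

def call_llm(prompt: str) -> str:
    # Placeholder: a real implementation would call an LLM here.
    return "Marie Curie | won | Nobel Prize\nMarie Curie | born_in | Warsaw"

def text_to_triples(text: str) -> list[tuple[str, str, str]]:
    """Extract 'subject | predicate | object' lines from the LLM's reply."""
    prompt = (
        "Extract facts from the text below as lines of the form "
        "'subject | predicate | object'.\n\n" + text
    )
    triples = []
    for line in call_llm(prompt).splitlines():
        parts = [p.strip() for p in line.split("|")]
        if len(parts) == 3:
            triples.append((parts[0], parts[1], parts[2]))
    return triples

if __name__ == "__main__":
    text = "Marie Curie won the Nobel Prize and was born in Warsaw."
    for s, p, o in text_to_triples(text):
        # Each triple becomes two nodes and one edge in the graph.
        print(f"({s}) -[{p}]-> ({o})")
```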
My question is: by simplifying this down (if my summary is accurate), have I destroyed or enhanced your argument?
Hi Martin,
Valid point. It's not so much about the communicative interaction or, as you suggest, the movement towards a conversational style (the mediation, if you will); that's a fair assumption, but the series is really about a deeper connection between the two types of technologies.
Let's have Anis respond too. What are your comments, Anis?
Hi Ole,
I agree with your answer. Our focus is not primarily on the communicative or conversational aspect, even though in some cases the interaction may appear conversational. The core of the series is really about creating a deeper connection between KGs and LLMs, so they complement and strengthen each other.
Hi Martin,
Thanks for the thoughtful summary. You’ve actually captured the essence quite well. We are indeed looking at KGs primarily as structured knowledge representations and LLMs as probabilistic models that can process and generate data.
In this series, Ole and I will look at approaches where KGs ground and guide LLM outputs with reliable, structured knowledge, approaches where LLMs help enrich and expand KGs, and approaches that unify KGs and LLMs.
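As a loose illustration of the first of those approaches (not code from the series), a KG-grounded prompt can be assembled roughly like this; the tiny in-memory graph and the call_llm stub are hypothetical:

```python
# Sketch of grounding an LLM answer with KG facts: retrieve relevant triples,
# then hand them to the model as trusted context.

KG = [
    ("Oslo", "capital_of", "Norway"),
    ("Norway", "currency", "Norwegian krone"),
]

def call_llm(prompt: str) -> str:
    # Placeholder for a real LLM call; returns a canned reply so the sketch runs.
    return "Oslo is the capital of Norway."

def grounded_answer(question: str) -> str:
    # Keep triples whose subject or object is mentioned in the question.
    facts = [
        (s, p, o) for s, p, o in KG
        if s.lower() in question.lower() or o.lower() in question.lower()
    ]
    context = "\n".join(f"{s} {p.replace('_', ' ')} {o}" for s, p, o in facts)
    prompt = (
        f"Known facts:\n{context}\n\n"
        f"Question: {question}\n"
        "Answer using only the facts above."
    )
    return call_llm(prompt)

print(grounded_answer("What is the capital of Norway?"))
```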
While in some use cases and scenarios this unification of KGs and LLMs can appear conversational, our main focus is not conversation in the human–chat sense, but rather the fusion of KGs and LLMs so they function as a unified system. We will discuss this category of approaches in more detail in the third article, "Synergized KG + LLM".