Why the Fusion of ChatGPT and Knowledge Graphs Will Make All the Difference

According to analysis by UBS, ChatGPT is the fastest-growing app of all time. The analysis estimates that ChatGPT had 100 million active users in January, only two months after its launch; TikTok took nine months to reach 100 million users. 

As you probably know, ChatGPT can do more than answer simple questions. It can compose essays, letters, and emails, hold philosophical conversations, and create simple code. Its generative AI underpinnings allow it to produce good-quality prose, art, and source code at the level of a well-informed person. It can even pass some well-respected university exams, simply by predicting which words should come next.

As you also probably know, even the well-informed individuals whose writing it draws on to build its answers can be mistaken, and ChatGPT makes those errors hard to detect because of the certainty of its tone and the complexity of the underlying model’s reasoning. The bot itself says: “My responses are not intended to be taken as fact, and I always encourage people to verify any information they receive from me or any other source”. OpenAI also notes that ChatGPT sometimes writes “plausible-sounding but incorrect or nonsensical answers”: the now-famous hallucinations.

In systems where compliance or safety is important, we simply cannot take the chatbot at face value; nor can we ignore the potential benefits of generative AI.

Is there a way to move forward with the technology that powers ChatGPT and other LLMs (Large Language Models) while cutting down the errors and overconfidence that erode trust in a chatbot’s answers? Our response needs to blend the best of ChatGPT with other means of ensuring rigour and transparency.

A conversational interface over all that complexity

What I call a ‘small’ Large Language Model radically cuts down the kinds of errors that can occur with ChatGPT. This approach may limit the range of responses the LLM can generate, because it will typically have been trained on far less data than ChatGPT sweeps up from the Internet, but it also means that the responses it does generate will be more reliable.

Consider what Amazon could achieve by taking all the product documentation in its databases, building a small LLM around it, and offering customers a ChatGPT-style conversational interface over all that complexity.

It’s important to note that you can’t achieve these outcomes simply by connecting ChatGPT to a document cache. If CIOs want to start exploiting the untapped potential in their internal data stores by applying LLMs, then building and refining knowledge graphs using proven graph database technology is the way ahead. Here, a real breakthrough has been made by a group of researchers with the creation of BioCypher, a FAIR (findable, accessible, interoperable, reusable) framework that transparently builds biomedical knowledge graphs while preserving all the links back to the source data.
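To make that concrete, here is a minimal sketch, not BioCypher’s own code, of how an extracted fact might be stored in a graph while keeping its link back to the source paper, using the official Neo4j Python driver. The node labels, property names, and connection details are illustrative assumptions:

```python
# Minimal sketch: store one extracted fact in a knowledge graph while
# preserving the link back to the source document. The labels, properties,
# and credentials are illustrative assumptions, not BioCypher's schema.
from neo4j import GraphDatabase

URI = "neo4j://localhost:7687"   # assumed local instance
AUTH = ("neo4j", "password")     # assumed credentials

CYPHER = """
MERGE (d:Disease {name: $disease})
MERGE (g:Gene {symbol: $gene})
MERGE (p:Paper {doi: $doi})
MERGE (g)-[r:ASSOCIATED_WITH]->(d)
SET r.source = $doi              // provenance stays on the relationship
MERGE (p)-[:MENTIONS]->(g)
MERGE (p)-[:MENTIONS]->(d)
"""

def store_fact(disease: str, gene: str, doi: str) -> None:
    """Write one fact and its source paper to the graph."""
    with GraphDatabase.driver(URI, auth=AUTH) as driver:
        driver.execute_query(CYPHER, disease=disease, gene=gene, doi=doi)

if __name__ == "__main__":
    store_fact("Alzheimer's disease", "APOE", "10.1000/example-doi")
```

Because every relationship carries a pointer back to the paper it came from, an answer drawn from the graph can always be traced to its evidence.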

What made the difference was using a knowledge graph, built on a graph database, to organise data from multiple sources, capture information about the entities of interest in a given domain, and create connections between them. Let’s see how.

Going beyond generative AI’s current limitations with Small LLMs

The team behind BioCypher accomplished precisely this: it took a large corpus of medical research papers, built a ‘small’ Large Language Model around it, and derived a knowledge graph from this new model.

This approach allows researchers to interrogate and work with a mass of previously unstructured data far more effectively, because it is now well organised and well structured. And because this information resource is a knowledge graph, it is transparent and the reasons behind its answers are clear. And no hallucinations!

There is nothing to stop you collecting a substantial amount of information in text form and running an LLM to do the natural language ingestion, too. That will give you a knowledge graph to help you make sense of your vital corporate knowledge.
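A minimal sketch of that ingestion step follows. The `complete` function is a hypothetical stand-in for whichever LLM client you actually use (here it returns a canned response so the sketch runs end to end), and the prompt wording is illustrative:

```python
# Sketch: use an LLM to turn unstructured text into graph-ready triples.
import json

def complete(prompt: str) -> str:
    """Hypothetical stand-in for a call to your chosen LLM; this canned
    response lets the sketch run without any external service."""
    return '[["APOE", "ASSOCIATED_WITH", "Alzheimer\'s disease"]]'

PROMPT = (
    "Extract the facts in the text below as a JSON list of "
    "[subject, relation, object] triples.\nText: {text}"
)

def extract_triples(text: str) -> list[list[str]]:
    """Ask the model to propose triples, then parse its JSON reply."""
    return json.loads(complete(PROMPT.format(text=text)))

doc = "APOE is strongly associated with late-onset Alzheimer's disease."
for subject, relation, obj in extract_triples(doc):
    # In practice each triple becomes a MERGE statement like the one in
    # the earlier sketch; here we simply print it.
    print(subject, relation, obj)
```

Each extracted triple then becomes a node-relationship-node pattern in the graph, with the source document attached for provenance.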

The reverse is also true: you can take control of the training of a small language model by feeding it a knowledge graph. This allows you to control the input to the model, resulting in a responsive, easy-to-interrogate natural language interface on top of your graph.
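One way such an interface can work, sketched under the same assumptions (the hypothetical `complete` LLM helper and the illustrative schema from the earlier sketches), is to have the model translate a question into Cypher, run it against the graph, and phrase the answer using only the rows that come back:

```python
# Sketch: a natural language interface over the graph. The model drafts a
# Cypher query, the database executes it, and the final answer is
# constrained to whatever the graph actually contains.
from neo4j import GraphDatabase

URI = "neo4j://localhost:7687"   # assumed local instance
AUTH = ("neo4j", "password")     # assumed credentials
SCHEMA = ("Nodes: Disease(name), Gene(symbol), Paper(doi). "
          "Relationships: (Gene)-[:ASSOCIATED_WITH]->(Disease).")

def complete(prompt: str) -> str:
    """Hypothetical LLM call; replace with your model's client library."""
    raise NotImplementedError

def answer(question: str) -> str:
    # 1. Ask the model to translate the question into Cypher.
    cypher = complete(f"Schema: {SCHEMA}\nWrite one Cypher query answering: {question}")
    # 2. Run the query; only facts actually stored in the graph come back.
    with GraphDatabase.driver(URI, auth=AUTH) as driver:
        records, _, _ = driver.execute_query(cypher)
    rows = [record.data() for record in records]
    # 3. Have the model phrase a reply using only the retrieved rows.
    return complete(f"Answer '{question}' using only these results: {rows}")
```

Because the reply is grounded in query results, a wrong answer is traceable to a wrong query or missing data rather than to an invisible hallucination.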

Analyst James Governor, co-founder of RedMonk, agrees. He recently said: “As it touches all business functions, from legal to accounting, customer service to marketing, operations to software delivery and management, Generative AI is beginning to remake industries. Enterprises are justifiably worried about the dangers of incorrect information or ‘hallucinations’ entering their information supply chains, and are looking for technical solutions to the problem: graph databases are one well-established technology that may have a role to play here, and so-called small language models are an interesting approach to the problem.”


Jim Webber is Chief Scientist at graph database and analytics leader Neo4j, and co-author of Graph Databases (1st and 2nd editions, O’Reilly), Graph Databases for Dummies (Wiley), and Building Knowledge Graphs (O’Reilly).

 
