June 20, 2023 Jeavio RnD Team

Chatbots are not the only Conversational Interface

Building a reporting tool using Large Language Models

Introduction

ChatGPT, Pi, Claude, Bard, BingChat, and others are all chatbots built on top of Large Language Models like GPT-4. 

ChatGPT is very popular because it is easy to use. It leverages the underlying large language model’s ability to parse natural language to present a compelling user experience. You ask it questions, prompt it to do something specific, and it responds in a familiar chat interface.

ChatGPT’s popularity has made its chatbot interface synonymous with “Conversational Interfaces” or more broadly “Natural Language Interfaces”. 

But this ability to use natural language as an interface opens up many other possibilities beyond chatbots and conversational agents.


Leveraging Conversational Interfaces for Reporting and Analytics

Data reporting and analytics is a massive driver of application development and presents a compelling use case for a Conversational Interface.

Querying data to interpret it is routine: countless programmers and analysts use SQL and its variants to build informative dashboards. Such dashboards often answer questions like, “What was my company’s widget production last year?”

However, not everyone can write SQL, and learning to extract useful insights from data can be daunting.

A typical approach to developing reporting facilities for a less technical audience involves creating a reporting dashboard. 

The reporting or analytics team would:

  1. Identify relevant business questions
  2. Craft queries corresponding to these questions
  3. Use these queries to populate dashboards, tables, or widgets

While this approach is well understood, it requires technical expertise.

To overcome this limitation, we explored the potential of harnessing the natural language comprehension and code generation abilities of large language models to develop a tool that empowers non-technical users to make sense of their data.


The Conversational Query Tool

We have built a Proof of Concept “Conversational Query Tool” to investigate building a conversational interface.

The tool wraps the user’s natural-language request in a ChatGPT API prompt that generates a SQL query. The query is then validated and executed, and the results are displayed to the user.
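
At a high level, each request flows through a single backend endpoint. The sketch below shows that flow as a minimal Flask route; the query_tool module and its generate_sql and execute_validated helpers are hypothetical stand-ins for the steps described in the sections that follow, not our exact implementation.

```python
from flask import Flask, jsonify, request

# Hypothetical helpers: generate_sql prompts the model for a query,
# execute_validated checks the SQL and runs it against the database.
from query_tool import generate_sql, execute_validated

app = Flask(__name__)

@app.post("/query")
def ask():
    question = request.json["question"]        # natural-language question
    sql = generate_sql(question)               # GPT-4 turns it into SQL
    rows = execute_validated(sql)              # validate, then run on SQLite
    return jsonify({"sql": sql, "rows": rows})

if __name__ == "__main__":
    app.run(debug=True)
```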

The following video showcases the prototype of this tool in action:

To understand our process better, let’s delve into the architecture, prompt engineering, and security concerns:

Architecture

Outline of the Conversational Query Tool architecture

  • We used SQLite3 as the database for convenience and populated it with simulated data that you might find in a typical HR database.
  • The data resides in three key tables relevant to our case (see the model sketch after this list): 
    • Employee – Demographic information about employees
    • Skill – List of skills pertinent to the company
    • SkillMapping – Mapping between an employee and a skill
  • We developed the backend in Python with the Flask framework and used the SQLAlchemy library for database interactions.
  • Finally, OpenAI’s GPT-4 model generates the SQL query using the Chat Completion API.
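
The snippet below sketches how that schema might look as SQLAlchemy models backed by SQLite3. The table names come from the list above; the individual columns are illustrative assumptions rather than the exact schema we used.

```python
# Illustrative SQLAlchemy models for the simulated HR schema.
# Table names match the post; column names are assumptions for this sketch.
from sqlalchemy import Column, Integer, String, ForeignKey, create_engine
from sqlalchemy.orm import declarative_base, relationship

Base = declarative_base()

class Employee(Base):
    __tablename__ = "Employee"
    id = Column(Integer, primary_key=True)
    name = Column(String, nullable=False)
    department = Column(String)            # demographic / organisational details
    location = Column(String)

class Skill(Base):
    __tablename__ = "Skill"
    id = Column(Integer, primary_key=True)
    name = Column(String, nullable=False)  # e.g. "Python", "SQL"

class SkillMapping(Base):
    __tablename__ = "SkillMapping"
    id = Column(Integer, primary_key=True)
    employee_id = Column(Integer, ForeignKey("Employee.id"))
    skill_id = Column(Integer, ForeignKey("Skill.id"))
    employee = relationship("Employee")
    skill = relationship("Skill")

# SQLite3 keeps the setup self-contained for the proof of concept.
engine = create_engine("sqlite:///hr_demo.db")
Base.metadata.create_all(engine)
```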

Prompt Engineering

  • To generate SQL queries, we designed a prompt that included an outline of the database schema in its context.
  • We then tasked the model with generating a query from that prompt, along with additional column-specific data hints (a sketch of this prompt follows this list).
  • Iterating on the prompt resulted in consistently accurate SQL queries. The GPT-4 model significantly outperformed the GPT-3.5-turbo model in generating complex queries and following instructions.
  • The model successfully crafted queries involving multiple tables, joins, case-insensitive matching, partial-string matches, and aggregations.
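
A minimal sketch of that prompt and the Chat Completion call is shown below, using the current openai Python client (the original tool, built in mid-2023, may have used the pre-1.0 interface). The schema outline, column hints, and prompt wording are illustrative assumptions, not our production prompt.

```python
# Sketch of the prompt sent to the Chat Completion API.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SCHEMA_OUTLINE = """
Employee(id, name, department, location)
Skill(id, name)
SkillMapping(id, employee_id, skill_id)
"""

COLUMN_HINTS = (
    "Skill.name stores skill names such as 'Python'; match them "
    "case-insensitively and allow partial matches with LIKE."
)

def generate_sql(question: str) -> str:
    system_prompt = (
        "You translate questions about the HR database below into a single "
        "SQLite SELECT statement. Use only the tables and columns shown.\n"
        f"Schema:\n{SCHEMA_OUTLINE}\nHints: {COLUMN_HINTS}\n"
        "If the question is unrelated to this database, reply with REFUSE."
    )
    response = client.chat.completions.create(
        model="gpt-4",
        temperature=0,
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content.strip()
```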

Security Considerations

  • Direct user interaction with a large language model poses potential security issues, such as prompt injection.
    • To counteract this, we added guard rails in the prompt to force the model to dismiss irrelevant prompts.
    • We also implemented safeguards against requests that attempt to manipulate the database. Defensive coding validated the generated SQL before executing it against the database (see the validation sketch after this list).
  • We strongly recommend thorough testing and caution before exposing LLM capabilities in a publicly available application.
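
The sketch below illustrates the kind of defensive validation we mean: it accepts only a single read-only SELECT statement and rejects anything that looks like data manipulation before the query ever reaches the database. The specific checks are an illustrative assumption, not our exact implementation.

```python
# Minimal validation sketch: only a single read-only SELECT is executed.
import re
from sqlalchemy import text
from sqlalchemy.orm import Session

FORBIDDEN = re.compile(
    r"\b(insert|update|delete|drop|alter|create|attach|pragma|replace)\b",
    re.IGNORECASE,
)

def validate_sql(sql: str) -> str:
    statement = sql.strip().rstrip(";")
    if ";" in statement:
        raise ValueError("Multiple statements are not allowed")
    if not statement.lower().startswith("select"):
        raise ValueError("Only SELECT statements are allowed")
    if FORBIDDEN.search(statement):
        raise ValueError("Query contains a forbidden keyword")
    return statement

def run_query(engine, sql: str):
    # Validate first, then execute against the SQLite engine.
    with Session(engine) as session:
        result = session.execute(text(validate_sql(sql)))
        return [dict(row._mapping) for row in result]
```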

Conclusion

While we had a lot of fun building the Conversational Query Tool, we also encountered challenges when dealing with complex or ambiguous queries. The tool’s accuracy was also sensitive to the naming conventions used in the database.

However, there are also several advantages to this “Conversational Query” approach.

  • It makes it possible to express reasonably complex queries in simple language.
  • If the underlying database or data model changes, we only need to modify the prompt rather than rewrite multiple SQL queries or stored procedures. This allows for much quicker development and experimentation with the data model.

There are security and engineering challenges with this approach. However, LLMs’ language comprehension and code generation capabilities open up multiple possibilities for building rich reporting applications.
