EPI-USE MENDIX | Articles

RAD AI – Ignite | Episode 1: Unlock your organization's knowledge

Written by Matthew Daniels | May 21, 2024 2:00:00 PM

This is the first installment of RAD AI – Ignite, a three-part series on use cases to begin realizing the value of AI/ML technologies in your organization. These use cases illustrate the speed and power of combining AWS managed AI/ML services and rapid application development with Mendix.

Every organization accumulates a wealth of knowledge and experience. This information takes many forms – articles, knowledge bases, documents, and ticketing systems, to name a few – and is often locked away within departments and software systems, making it difficult to centralize without vendor lock-in. This constraint makes it hard to capitalize on the competitive advantage the information offers. What if you could unlock that potential?

This common problem is the inspiration for the first installment in our Ignite series. We will show how Knowledge Bases for Amazon Bedrock, a managed service provided by Amazon Web Services (AWS), can help you overcome your informational silos and how rapid application development can make this service useful to your users.


AWS Service: Knowledge Bases for Amazon Bedrock

Knowledge Bases for Amazon Bedrock centralizes documents, knowledge articles, service tickets, and other data in a highly searchable database. The data is processed through an embedding model, which vectorizes each "chunk". We won't go into detail in this article, but this process ultimately produces a mathematical representation of each document or item you put into your Knowledge Base, stored in a search-optimized vector database.

When this service is queried, it searches the database for the best matches in the dataset and returns them. These sources can be passed, along with the prompt, to a large language model of your choosing. This provides you with natural language responses to your questions that leverage and reference your specific data to provide answers. This service is largely configuration-driven – as new models are released or preferred within your organization, they can be easily plugged in.
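To build some intuition for how vector search finds the "best matches" (this is a conceptual toy, not the service's actual implementation), the core idea is comparing embedding vectors by similarity. The three-dimensional "embeddings" below are made up; real embedding models produce vectors with hundreds or thousands of dimensions:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors; 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Made-up embeddings for two ticket "chunks" and a query.
chunks = {
    "ticket-1042": [0.8, 0.2, 0.1],
    "ticket-2210": [0.1, 0.9, 0.3],
}
query_embedding = [0.9, 0.1, 0.0]

# The best match is the chunk whose embedding points in the most similar direction.
best_match = max(chunks, key=lambda k: cosine_similarity(query_embedding, chunks[k]))
```

A vector database performs essentially this comparison, but with indexing structures that make it fast across millions of chunks.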

Bringing the App to Life – Rapid Application Development with Mendix

Making information highly searchable is great, but how do we make it accessible to the target audience? This is where rapid application development comes into play. Throughout the Ignite series, we will leverage the Mendix low-code platform and its robust suite of AWS connectors to bring these services to life. These connectors allow you to build custom applications that leverage AWS’ AI/ML tooling in a fraction of the time – a major benefit when exploring new technologies and use cases. Rapid iteration with tools that abstract away the complexity of AI/ML and application development allows you to focus on the business value you’re trying to achieve.

One of the many AWS connectors Mendix provides out-of-the-box is for Amazon Bedrock. This connector is easily added to any project, and we use it in this demo application to integrate the Knowledge Base seamlessly.

Setting the Stage

While there are many use cases for this service – for this demo, we’ll focus on a legacy migration use case. Let's imagine we are an organization with several legacy ticketing systems. These systems include many years of information and accumulated knowledge. We are evaluating new platforms that could help better serve our customer base but are concerned about losing our accumulated knowledge in the legacy systems. How can we leverage Knowledge Bases and RAD to solve this problem?

Given that Knowledge Bases allow us to store and search this information easily, we just need to load the data and provide a user interface. We’d like to create an interface to query our ticket data using natural language and then display this information back to the user.

Part 1: Identify Matches

At the most basic level, we are looking to emulate a search engine for our internal tickets. We want to search through all our existing tickets and see the ones that best match our prompt. Upon clicking one of the matches, we’d like to see the full ticket data to continue along our workflow. This information may answer our question, prompt us to create a new ticket that references this one, or lead us to create a new ticket altogether. Regardless, the idea is that we can easily view our previous tickets as the first step in this new workflow.

We loaded a small sample data set of IBM service tickets into our Knowledge Base and built a simple interface for searching tickets. Upon entering a prompt and searching, the top three results are presented along with a percentage match. This percentage lets us gauge our confidence that a ticket matches the initial query. We can click individual tickets and pull the full ticket data from AWS. With Knowledge Bases and a rapidly developed Mendix application, it was straightforward to take this first step toward making our legacy data more usable.
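Outside of Mendix, the same retrieval call can be sketched with the AWS SDK for Python (boto3). The Knowledge Base ID is a placeholder, and the percentage shown to the user is our own formatting of the relevance score each result carries:

```python
def retrieve_matches(kb_id, query, top_n=3, region="us-east-1"):
    """Query a Bedrock Knowledge Base and return its raw retrieval results."""
    import boto3  # imported here so the formatter below works without AWS credentials
    client = boto3.client("bedrock-agent-runtime", region_name=region)
    response = client.retrieve(
        knowledgeBaseId=kb_id,  # placeholder; use your Knowledge Base's ID
        retrievalQuery={"text": query},
        retrievalConfiguration={"vectorSearchConfiguration": {"numberOfResults": top_n}},
    )
    return response["retrievalResults"]

def as_percentage_matches(retrieval_results):
    """Turn each result's relevance score into a display-friendly percentage."""
    return [
        {"excerpt": r["content"]["text"], "match_pct": round(r["score"] * 100, 1)}
        for r in retrieval_results
    ]
```

The percentage presentation is a design choice on our side; the service itself returns a raw relevance score per result.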

See the application in action here:

Part 2: Problem Solving Assistant

Returning the top matches is great, but this does require the user to open the sources and read through them to gather more information. Sometimes this is desirable – for example, if we’re searching content that we want to download or explore in detail – but sometimes we want a straightforward answer. Luckily, we can take our implementation a step further.

The Knowledge Bases service allows you to pass your query, along with the matches, to a large language model of your choosing. In this example, we’ve leveraged Claude 2.1, but this can be adjusted with a simple configuration change – for example, to harness the power of the recently released Claude 3 model.

The Knowledge Bases service will now return a natural-language response to your query by feeding both the query and the best-matching sources through the LLM you’ve selected. The model synthesizes the information and answers in natural language, even citing the sources from which it pulls the specific information in its response.
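The same retrieve-and-generate flow can be sketched in boto3. The Knowledge Base ID and model ARN below are placeholders – and, as noted above, swapping in a newer model is just a different `modelArn`:

```python
def generate_answer(kb_id, query, model_arn, region="us-east-1"):
    """Ask the Knowledge Base to retrieve matches and synthesize an answer via an LLM."""
    import boto3  # imported lazily so extract_citation_uris below is testable offline
    client = boto3.client("bedrock-agent-runtime", region_name=region)
    return client.retrieve_and_generate(
        input={"text": query},
        retrieveAndGenerateConfiguration={
            "type": "KNOWLEDGE_BASE",
            "knowledgeBaseConfiguration": {
                "knowledgeBaseId": kb_id,      # placeholder
                "modelArn": model_arn,         # swap models by changing this ARN
            },
        },
    )

def extract_citation_uris(response):
    """Collect the S3 URIs of the source documents cited in the generated answer."""
    uris = []
    for citation in response.get("citations", []):
        for ref in citation.get("retrievedReferences", []):
            uri = ref.get("location", {}).get("s3Location", {}).get("uri")
            if uri:
                uris.append(uri)
    return uris
```

The generated text arrives in the response's `output.text` field, with the supporting sources listed under `citations`, which is what makes the verification step described next possible.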

Here, we see the Knowledge Base service respond with step-by-step instructions to solve the error code we’ve submitted. This response includes citations to the source tickets which we can dig into to verify or gather additional details. This is an important control that reduces “hallucination” in the responses, which is a major concern we see in market adoption of AI solutions. With this feature, all source information can be verified to ensure trust.

Part 3: Generalized Knowledge Assistant

We can expand on this use case widely. At the core, we’re just centralizing information and making it usable in a conversational way. This makes for an extremely powerful assistant when loaded with information relevant to your specific organization.

To illustrate this use case, we took publicly available wiki data on a range of topics and loaded this into a second Knowledge Base. One of the topics included in this wiki set is a short article on solar energy. We’ll use the solar energy topic to demonstrate the power of an application built around generalized knowledge.

Here we can see that, when prompted with “challenges of solar energy”, the service scanned through our Knowledge Base and returned a natural-language answer to our question. This summary included citations back to those knowledge articles we loaded into the database, allowing us to dig deeper as we deem necessary.

This second use case also illustrates a powerful element of the custom application we’ve developed: choosing which Knowledge Base to communicate with is a simple configuration. This allows you to separate your information as you see fit – for example, at a department, topic, or client level, depending on your organization and governance models.
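In pseudo-configuration terms, that separation could look like a simple mapping from information domains to Knowledge Base IDs (the names and IDs here are hypothetical placeholders):

```python
# Hypothetical mapping of information domains to Knowledge Base IDs.
KNOWLEDGE_BASES = {
    "support-tickets": "KB_TICKETS_PLACEHOLDER",
    "public-wiki": "KB_WIKI_PLACEHOLDER",
}

def knowledge_base_for(domain):
    """Resolve which Knowledge Base a query should be routed to."""
    try:
        return KNOWLEDGE_BASES[domain]
    except KeyError:
        raise ValueError(f"No Knowledge Base configured for domain '{domain}'")
```

Because the routing lives in configuration rather than code, adding a new department or client becomes a data change, not a rebuild.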

Wrapping Up

Unlocking your organization’s knowledge is a great jumping-off point into the world of AI/ML. As organizations grow and accumulate knowledge, it’s important that this knowledge is made accessible to the relevant stakeholders. Otherwise, your competitive advantages can end up under- or un-realized.

Large software companies will also work to create their own AI/ML tooling and make it available to their customer bases. That said, these companies are likely to optimize their tools for their own ecosystems, which creates a challenge when you look to consolidate information across the organization. Additionally, these point solutions require you to adopt a roadmap that also serves a wider client base, limiting your agility in responding to future market changes.

By bringing knowledge together with Knowledge Bases, you are adding resiliency to your IT landscape. As information systems change or new processes are introduced, you have the power to make changes and meet the market.

Further Extensibility

This just scratches the surface of the solutions that can be developed. We can integrate the Knowledge Bases service in other ways (e.g., a chatbot or an embedded application) and build out custom workflows to suit our needs.

This is a core strength of the Mendix platform – we have the flexibility to rapidly iterate on our solution with the integrated full-stack IDE. This empowers business-oriented developers and encourages cross-functional development teams that incorporate business and technical personas. That combination vastly improves project success when dealing with emerging technologies and novel use cases, which is exactly what we are highlighting in our RAD AI – Ignite series.


Be sure to check out additional installments of our RAD AI – Ignite series here:

RAD AI – Ignite | Episode 2: Identify and Address Anomalies

Proactively catching mistakes and potential fraud is critical within organizations. Complex rules engines are costly, time-consuming, and generally require modification as new threats emerge. Instead, what if we could leverage past and future data to better identify anomalies in a dynamic environment?

Read More

RAD AI – Ignite | Episode 3: Extract and Organize Data

Forms, handwritten notes, and documents are important components in many of today's processes. Digitizing this information is oftentimes inefficient and costly. This leads to informational delays and reduced funding available for investment. How can we leverage AI/ML services to improve form-based processes and the timeliness of our data?

Read More