A core Generative AI use case is the practical application of large language models (LLMs) to generate initial code, which accelerates the overall development lifecycle. We do not view Generative AI as a full replacement for well-structured, carefully written code; however, the productivity gains that come from automatic generation of template code — or from living repositories of previously created code for specific use cases — expedite the work of delivery teams when done correctly.
Another practical application of text-based LLMs is converting legacy code bases into a destination code base, acting as a migration accelerator. A basic example is an analytics migration scenario, such as converting Qlik Sense reporting to Power BI. A core element of this type of engagement entails refactoring proprietary Qlik syntax into DAX code on the front end of the reporting. Ordinarily this requires an individual conversant in both tools; Generative AI allows basic expressions to be converted from Qlik syntax to DAX, helping expedite delivery of the solution itself.
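As a simple illustration, an LLM can be scripted directly into the conversion workflow. The sketch below assumes the 2023-era OpenAI Python SDK (the pre-1.0 ChatCompletion interface); the Qlik expression and prompt wording are illustrative placeholders, not a production converter.

```python
import openai  # pip install "openai<1.0"

openai.api_key = "YOUR_API_KEY"  # illustrative placeholder

# An illustrative Qlik Sense set-analysis expression to translate into DAX.
qlik_expression = "Sum({<Year={2023}>} Sales)"

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    temperature=0,  # keep output deterministic for code conversion
    messages=[
        {"role": "system",
         "content": "You convert Qlik Sense expressions into equivalent "
                    "Power BI DAX measures. Return only the DAX code."},
        {"role": "user", "content": qlik_expression},
    ],
)

print(response.choices[0].message["content"])
```

The output is a first pass: every converted expression should still be validated by someone familiar with the destination tool before it ships.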
If you are toying with the idea of adding a chatbot to your site to expand your customer service options, the advent of broad-based LLMs makes the implementation and roll-out of chatbots far more accessible than in the past. Chatbot integration within front-end analytics can add a compelling contextual dimension to existing reporting, independent of the visualizations themselves.
Cloud platforms are rapidly building “data-in” features to deploy cognitive search services and off-the-shelf LLMs against your own dataset, so the barrier to entry keeps getting lower. These chatbots can then be integrated into workflows via API endpoints or native app deployment, depending on your hyperscaler of choice. Open source frameworks such as LangChain can be deployed in a similar manner, as the sketch below shows.
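Here is a minimal retrieval-grounded chatbot sketch using LangChain's 2023-era module layout with an in-memory FAISS index; the document texts and question are placeholders, and module paths may differ in later LangChain versions.

```python
# pip install langchain openai faiss-cpu  (assumes OPENAI_API_KEY is set)
from langchain.chat_models import ChatOpenAI
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS
from langchain.chains import RetrievalQA

# Illustrative corporate snippets that will ground the chatbot's answers.
texts = [
    "Our support desk is open 9am-5pm EST, Monday through Friday.",
    "Standard orders ship within three business days.",
]

# Embed the documents into an in-memory FAISS vector store.
vectorstore = FAISS.from_texts(texts, OpenAIEmbeddings())

# Wire the retriever and the LLM into a question-answering chain.
qa = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0),
    retriever=vectorstore.as_retriever(),
)

print(qa.run("When does the support desk close?"))
```

The same pattern scales up by swapping the in-memory store for a managed vector database and exposing the chain behind an API endpoint.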
Visualizations and chart elements can be generated from auto-prompt suggestions using a BI tool's native AI capabilities, and those same capabilities can be applied to constructing an entire report layout.
This can be a significant timesaver both in creating new dashboards and in refining existing visualizations that have grown less intuitive over time.
Lightweight application interfaces (Zapier, Power Apps, Power Automate) can leverage API calls to GPT and other LLMs to create simple trigger/action response apps. Think of situations in which you prompt GPT to adopt the persona of a customer service agent, or to trawl a company intranet for key information and return it via a form application.
Simple automations can also be built, such as form email responses to inbound inquiries, where the incoming email acts as the trigger that generates an automatic prompt; a minimal sketch follows.
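The sketch below shows the LLM half of such an automation, again assuming the pre-1.0 OpenAI Python SDK; the function name, wiring, and sample email are hypothetical, and the trigger itself would live in Zapier or Power Automate.

```python
import openai  # pip install "openai<1.0"

openai.api_key = "YOUR_API_KEY"  # illustrative placeholder

def draft_reply(inbound_email: str) -> str:
    """Draft a form response to an inbound inquiry.

    A real workflow would call this from the automation step that
    fires when new mail arrives, then route the draft for approval.
    """
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        temperature=0.3,
        messages=[
            {"role": "system",
             "content": "You are a courteous customer service agent. "
                        "Draft a brief reply to the inquiry below."},
            {"role": "user", "content": inbound_email},
        ],
    )
    return response.choices[0].message["content"]

print(draft_reply("Hi, can I change the shipping address on my order?"))
```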
While there is great potential with Generative AI, there are certain risks that should be accounted for before employing Generative AI in your data strategy:
Most mainstream analytics tools offer Generative AI capabilities in different forms; survey the tools you could harness for Generative AI and their typical applications.
Familiarize yourself with the mechanics of open Generative AI platforms to understand how they function. Get a handle on structuring prompts, and understand the level of detail necessary to generate the best possible results. The barrier to entry for experimenting with something such as ChatGPT is incredibly low.
A tip: if necessary, adjust the “temperature” of your model to alter the variability and tone of its responses. The OpenAI API accepts temperature values from 0 to 2; lower values produce more focused, deterministic output, while higher values produce more varied output.
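The contrast is easy to see by sending the same prompt at two settings, as in this minimal sketch (pre-1.0 OpenAI SDK; the prompt and values are illustrative):

```python
import openai  # pip install "openai<1.0"

openai.api_key = "YOUR_API_KEY"  # illustrative placeholder

# The same prompt at two temperatures: low for consistency, high for variety.
for temperature in (0.2, 1.2):
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        temperature=temperature,
        messages=[{"role": "user",
                   "content": "Write a one-sentence welcome for a BI portal."}],
    )
    print(temperature, "->", response.choices[0].message["content"])
```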
Tone is extremely important in how we use Generative AI. If you're leveraging Generative AI as a mechanism to create standard content across an organization, it is crucial that you set master prompting as a standard. This entails creating two declarative statements: the first on who your organization is, and the second on the tone you want the LLM to adopt in communication. By setting master prompting standards, generated text will adopt a consistent structure and feel, reducing the likelihood of companywide communication adopting differing voices.
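In API terms, master prompting maps naturally onto a fixed system message prepended to every request. The organization description and tone statement below are hypothetical examples of the two declarative statements:

```python
# Two declarative statements, set once and reused across every request.
MASTER_PROMPT = (
    # Statement 1: who the organization is (hypothetical example).
    "You write on behalf of Acme Analytics, a consultancy that helps "
    "mid-market companies modernize their data platforms. "
    # Statement 2: the tone the LLM should adopt.
    "Use a tone that is professional, plain-spoken, and optimistic."
)

def build_messages(user_request: str) -> list[dict]:
    """Prepend the master prompt so every response shares one voice."""
    return [
        {"role": "system", "content": MASTER_PROMPT},
        {"role": "user", "content": user_request},
    ]
```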
There is a stark cost difference between using Generative AI for one-off, individual use cases and scaling it out across an organization. Cost optimization and strategy need to be firm so as not to incur significant overruns when initially rolling a solution out to production. It's recommended to keep a strong handle on cost and usage patterns during development, and to keep development access relatively restricted initially to regulate R&D within your organization.
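A back-of-envelope token estimate makes the scaling effect concrete. All of the figures below, including the per-token rate, are illustrative assumptions; check your provider's current price sheet before budgeting.

```python
# Rough monthly cost estimate for an organization-wide chat assistant.
price_per_1k_tokens = 0.002   # USD, assumed blended rate (illustrative)
tokens_per_exchange = 1_500   # prompt + retrieved context + response
exchanges_per_user_per_day = 20
users = 500
workdays_per_month = 22

monthly_tokens = (tokens_per_exchange * exchanges_per_user_per_day
                  * users * workdays_per_month)
monthly_cost = monthly_tokens / 1_000 * price_per_1k_tokens
print(f"~{monthly_tokens:,} tokens/month -> ${monthly_cost:,.2f}/month")
```

A single power user under these assumptions costs just over a dollar a month; five hundred of them cost hundreds, and moving to a larger model can multiply the rate by an order of magnitude.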
You need to understand where your data lives and how it is maintained and structured before rolling out AI. As part of your data strategy, you also need strong data governance protocols.
Without firm use cases in mind, Generative AI is just another novelty utility. Think through the core processes within your organization and how Generative AI could be leveraged for workflow automation.
Ensure that only the right data, under the right protocols, is making it into your LLMs. There are simple steps to enact, such as turning off chat history and model training in ChatGPT's settings. When expanding development onto the major cloud platforms, it's crucial to know things such as data retention policies and where your data is ultimately stored in the process of turning prompts into responses.
This is a prerequisite for impactful use of LLMs within a closed, corporate setting. Broadly available LLMs such as ChatGPT, Bard, and LLaMA are engaging precisely because their transformer models consume large volumes of information derived from the open internet. The richness of your corporate data, and the volume of that data available for training or grounding, will be a critical component in building something similarly engaging behind your own walls.
Which approach fits your needs best? Domain-specific LLMs are trained on data specific to vertical use cases (e.g., BloombergGPT for finance), whereas general-purpose LLMs may not understand the industry-specific terminology in your prompts.
Cloud platforms (AWS, Microsoft, Google) are in a race to bring accessible LLM development and deployment to the masses. Each platform brings a slightly different flavor to how document storage, vector databases, embedding models, LLMs, and cognitive search come together to generate responses to your prompts. It's crucial to have a firm handle on the services and resources necessary to deploy chat services and Generative AI apps. Also of paramount importance is understanding the usage-based cost structure of your cloud platform when deploying these solutions to a wider audience. The sketch below strips the stack down to its smallest moving parts.
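To make those moving parts concrete, here is a minimal sketch of the retrieval step every such stack performs, using OpenAI embeddings (pre-1.0 SDK) and plain numpy in place of a managed vector database; the documents and query are illustrative.

```python
import numpy as np
import openai  # pip install "openai<1.0" numpy

openai.api_key = "YOUR_API_KEY"  # illustrative placeholder

def embed(text: str) -> np.ndarray:
    """Turn text into a vector; a vector database stores these at scale."""
    result = openai.Embedding.create(model="text-embedding-ada-002", input=text)
    return np.array(result["data"][0]["embedding"])

# Document storage: in production, blob storage plus a vector database.
docs = ["Refunds are processed within five business days.",
        "Enterprise plans include 24/7 phone support."]
doc_vectors = [embed(d) for d in docs]

# Cognitive search: embed the prompt and retrieve the closest document
# by cosine similarity.
query_vector = embed("How long do refunds take?")
scores = [np.dot(query_vector, v)
          / (np.linalg.norm(query_vector) * np.linalg.norm(v))
          for v in doc_vectors]
context = docs[int(np.argmax(scores))]
print("Grounding context:", context)  # this is what gets passed to the LLM
```

Every hyperscaler offering wraps some version of this loop in managed services; the cost lever is how many embedding and completion calls the loop makes at your usage volume.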
LLMs undeniably present a fundamental shift in how we can interpret and access data. There is a future in which traditional dashboarding and reporting become obsolete as structured prompts enable users to surface insights, with a central data platform powering the data collection and aggregation behind them. There is also a future of peaceful co-existence in which traditional BI tools leverage Generative AI to form hybrid analytics. Regardless of how it takes shape, end users and stakeholders should keep in mind how Generative AI can augment, or in certain cases replace, their overall BI strategy and landscape moving forward.
No matter where you are on your Generative AI journey, there is a strategy package that fits your needs.