3 Predictions to Guide You When Investing in a Generative AI-Powered SaaS Tool

These days, if you are running a product team, you're almost certainly grappling with how to integrate Generative AI-powered capabilities into your product. If you've already done so, you're now figuring out how to manage those capabilities long-term. Both are complicated and potentially insidious challenges.

While there is a lot of competitive pressure to remain relevant, generative AI technologies are still very young and constantly evolving. This makes integrating them both an engineering challenge and a broader product design challenge.

It will come as no surprise that at OpenDialog AI we use LLMs extensively. As a conversational AI platform, we of course use them to augment the conversational capabilities of the solutions built with it. We also use them to help automate the design of the applications themselves, as well as to facilitate testing those applications at scale. As such, what happens to LLMs, and what is coming around the corner, is important to us.

Here I'll share more widely three predictions about LLMs that we use to guide our own choices about where and how to dedicate our (always) limited product design and engineering resources. For each prediction I will discuss how it affects our product roadmap.

Costs – Strike a Balance

The first prediction is an easy one. How much will all this cost? The quick answer is 'increasingly less'.

This is a natural result of competition and innovation in a nascent market, driving down prices to attract mindshare early on. We will eventually see prices settle, but for now investing significant amounts of engineering effort to bring costs down by a few percentage points is probably not the best use of limited resources.

The cost of accessing LLMs over an API has been steadily going down. OpenAI last reduced its pricing in late June 2023 (by about 70% in some cases), and another price reduction is imminent. AWS, GCP and Azure have been following suit.

For now, the focus should be on getting the use case right and making sure the feature brings real value to your clients. Of course, this does not mean you should not worry about costs at all. For example, a model such as GPT-4 is about 10 times more expensive than GPT-3.5. Try to strike the right balance for your use case.

However, your resources are more likely better deployed innovating in how you use LLMs than in squeezing the price down, only to discover a few months later that it has dropped on its own once more.
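To make the trade-off concrete, here is a rough back-of-envelope sketch of how you might compare model tiers for a given workload. The per-token prices below are illustrative placeholders chosen to reflect the roughly 10x gap mentioned above, not live rates, and the request volume and token counts are assumptions you would replace with your own usage data.

```python
# Back-of-envelope cost comparison for two hypothetical model tiers.
# Prices are illustrative placeholders (USD per 1K tokens), not live rates.
PRICING = {
    "premium-model": {"input": 0.015, "output": 0.02},     # a GPT-4-class tier
    "standard-model": {"input": 0.0015, "output": 0.002},  # a GPT-3.5-class tier
}

def monthly_cost(model: str, requests: int, input_tokens: int, output_tokens: int) -> float:
    """Estimate monthly spend for a given request volume and token profile."""
    price = PRICING[model]
    per_request = (input_tokens / 1000) * price["input"] + (output_tokens / 1000) * price["output"]
    return per_request * requests

# Example: 100k requests/month, ~800 input and ~300 output tokens each.
for model in PRICING:
    print(f"{model}: ${monthly_cost(model, 100_000, 800, 300):,.2f}/month")
# premium-model:  $1,800.00/month
# standard-model:   $180.00/month
```

A few minutes with numbers like these usually tells you whether the cheaper tier is good enough for your use case, which is a far better use of time than shaving percentage points off a bill that is falling anyway.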

Innovation – Drive a Competitive Edge

Last week I wrote an article saying that AI will keep surprising us because of the exponential nature of the technology. Now I am going to contradict myself and say that, when it comes to LLMs, I do not expect to see major changes in capabilities over the next 18-24 months. I predict that they'll largely remain capable of performing similar types of reasoning tasks, as the next real step change will require new architectures. It's always risky making such assumptions, but I am willing to stick my neck out on this, for now.

This does not mean we will not see innovation. It will, however, come in the form of better hallucination management, the integration of multiple modalities (we are already seeing examples of this), increases in context size, and so on. It will not be about huge strides in the actual quality of reasoning these models can perform. Certainly nothing like the step change we observed from GPT-3 to GPT-3.5 to GPT-4.

I think it is fair to assume that there is space to innovate in how you combine current LLM capabilities and enhance them through RAG, fine-tuning and advanced prompt engineering to deliver better results than your competitors. Understanding prompting techniques, and knowing where and how to fine-tune based on the problem you want to solve, is crucial.
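As a minimal sketch of what one such combination looks like in practice, the snippet below assembles a RAG-style prompt: it grounds the model in retrieved passages and instructs it to answer only from that context, one of the simpler levers for managing hallucinations. The `retrieve` and `call_llm` functions are hypothetical stand-ins for your own vector search and model client, not any particular library's API.

```python
# Minimal RAG-style prompt assembly: a sketch, not production code.
# `retrieve` and `call_llm` are hypothetical stand-ins.

def retrieve(query: str, k: int = 3) -> list[str]:
    """Stand-in for a vector-store lookup returning the top-k passages."""
    return ["<passage 1 from your knowledge base>",
            "<passage 2 from your knowledge base>"][:k]

def call_llm(prompt: str) -> str:
    """Stand-in for your model provider's completion endpoint."""
    return "<model answer>"

def answer(query: str) -> str:
    passages = retrieve(query)
    context = "\n\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    # Constraining the model to retrieved context is one of the simpler
    # levers for reducing hallucinations.
    prompt = (
        "Answer the question using ONLY the context below. "
        "If the answer is not in the context, say you don't know.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    )
    return call_llm(prompt)
```

The differentiation lives in the details this sketch glosses over: how you chunk and rank your knowledge base, how you phrase the constraints, and when a fine-tuned model beats a prompted one for your specific problem.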

Regulation – Weigh up Innovation and Safety

This is less of a prediction and more of a hope. I know this is startup heresy, and that we are supposed to be anti-regulation, but I hope we see strong regulation come in soon – even at the cost of slowing down or halting innovation. A lot of things were broken very fast with the mainstream release of Generative AI technologies. We need to redress the balance, address the concerns of content creators, and confront the impact such technologies will have in creating further economic imbalance.

As you embed these technologies in your product, think carefully about what it would look like if you had to switch them off. What would keep working and what would completely break? Is Generative AI the engine, meaning everything stops if you switch it off, or is it an augmentation layer that makes things better and faster but is not fundamental to the tool? This may feel like doom-mongering, but I think it is a very useful exercise for any product owner to perform.
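One way to make the exercise concrete is to build the kill switch in from the start. The sketch below, with hypothetical names throughout, wraps an LLM-powered feature behind a flag and a deterministic fallback, so you can see exactly what degrades and what breaks when generation is switched off.

```python
# A sketch of a "kill switch" pattern: an LLM-powered feature wrapped
# behind a flag with a deterministic fallback. All names are hypothetical.
GENERATIVE_AI_ENABLED = True  # flip to False to simulate switching LLMs off

def summarise_with_llm(ticket_text: str) -> str:
    """Stand-in for an LLM call that produces a fluent summary."""
    return "<LLM-generated summary>"

def summarise_with_rules(ticket_text: str) -> str:
    """Deterministic fallback: crude, but it keeps the feature alive."""
    return ticket_text[:200] + ("..." if len(ticket_text) > 200 else "")

def summarise(ticket_text: str) -> str:
    if not GENERATIVE_AI_ENABLED:
        return summarise_with_rules(ticket_text)
    try:
        return summarise_with_llm(ticket_text)
    except Exception:
        # If the provider is down or disabled, degrade rather than break:
        # here the LLM is an augmentation layer, not the engine.
        return summarise_with_rules(ticket_text)
```

If you find you cannot write a sensible fallback for a feature, that is a strong signal that Generative AI is the engine rather than an augmentation layer – and that the feature deserves extra scrutiny.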


Original article published on October 23rd 2023 on LinkedIn by Ronald Ashri.
