A recent prediction says that 40% of enterprise apps will integrate AI agents by the end of 2026, compared to just 5% in late 2025. However, analysis from the same company also predicts that 40% of all agentic AI projects will be cancelled by the end of 2027 due to costs, unclear business value, and risks.
This raises a simple question: Why are so many generative AI projects started, only to be shut down again a few years later?
Generative AI is clearly hyped, and many of the AI projects started today will ultimately fail. We need to ask ourselves: why do we need generative AI, and what applications and providers help make these projects successful?
Summary: Generative AI lets us effortlessly perform tasks we could not tackle before. On the flip side, outsourcing tasks to generative AI can make us lazy problem solvers and cause us to miss opportunities to learn new skills.
There are many generative AI providers in the market today. We argue that Raffle’s search and chat solutions are of superior quality and that search and retrieval-augmented chat are examples of robust applications of artificial intelligence. They allow us to solve our current knowledge tasks more efficiently and with higher quality without the typical risks.
Why Generative AI?
A recent study from Anthropic (the makers of Claude) looked at programmers learning a new task with an unfamiliar software library. The results were not favorable for generative AI: programmers using AI assistance 1) did not finish faster than programmers working without it and 2) learned less about the library they used.
So generative AI is not necessarily good for learning. As a user of generative AI, you may very well recognize this experience. This is also why AI assistance buttons in many apps feel "glued on" and out of place. We are using an app for a purpose, and generative AI is not always useful for that purpose.
A better use case is information retrieval and chat. For readers without a technical background, "retrieval" simply means the system's ability to find the exact, relevant pieces of content before the AI generates an answer.
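To make that concrete, here is a deliberately simplified sketch of the retrieval step in Python. The word-overlap scoring below is only a stand-in for the embedding-based semantic similarity a real system would use, and the function names are ours for illustration, not any particular product's API.

```python
def score(question: str, document: str) -> float:
    # Word overlap (Jaccard similarity) as a crude proxy for semantic similarity.
    q, d = set(question.lower().split()), set(document.lower().split())
    return len(q & d) / len(q | d) if q | d else 0.0

def retrieve(question: str, documents: list[str], top_k: int = 3) -> list[str]:
    # Rank all documents by similarity to the question; only the best few are
    # ever shown to the generative model.
    ranked = sorted(documents, key=lambda doc: score(question, doc), reverse=True)
    return ranked[:top_k]

docs = [
    "How to reset your password",
    "Shipping times and tracking numbers",
    "Our refund and return policy",
]
print(retrieve("how do I reset my password", docs, top_k=1))
# -> ['How to reset your password']
```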
When Google introduced AI answers, social media was quickly flooded with examples of spectacular fails. With time, however, the technology has become an integral part of our internet search experience. It is useful because we get a structured overview with links to sources. The user saves time by not having to dig through the linked page(s), and Google wins because the user stays on their page more often.
This is definitely a good reason for companies to implement summary and chat on their own site with retrieval from their own content. This helps users find what they are looking for while the company stays in charge of the user journey. This is where Raffle comes in.
Why Raffle?
When we founded Raffle in 2018 with the mission of giving employees a search experience on company data on par with Google Search for the internet, the concept of semantic search and the BERT language model had not even been invented yet. Back then it was a hard sell to convince companies that our machine learning-driven models added value.
All this changed with the November 2022 "ChatGPT moment" that made generative AI part of the conversation everywhere. Today generative AI is finding its way into many apps, and many companies have search and chat on their sites and ChatGPT-like tools for their employees.
What sets Raffle apart in an age when three guys in a garage can quickly build something that, at first glance, looks and feels very solid?
Behind the scenes, it is our "machine learning first" approach and the hard-won experience of building solutions that just work across many tricky customer cases, on both public and internal data.
"Machine learning first" is nowadays a bit of an old term. It simply means that when confronted with a problem—say, organizing insights from user search logs—instead of considering it an engineering problem, we consider it a data problem and use our machine learning models to organize the data in a semantically meaningful way.
In other words, using our algorithms, questions that have the same meaning can be presented together as one cluster. Raffle’s customers can therefore quickly get an overview of what the users ask and, crucially, see the outliers—questions users ask that their content cannot answer.
For our customers, this provides a vital feedback loop: they get to know their users and can iterate fast if content needs to be updated.
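As a rough illustration of the idea, and not Raffle's actual algorithm, the toy Python below groups logged questions by a simple similarity measure and flags singletons as potential outliers. The word-overlap similarity and the 0.4 threshold are illustrative stand-ins for trained semantic models.

```python
def similarity(a: str, b: str) -> float:
    # Word overlap (Jaccard similarity) as a crude proxy for "same meaning".
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def cluster_questions(questions: list[str], threshold: float = 0.4):
    clusters: list[list[str]] = []
    for q in questions:
        for cluster in clusters:
            if similarity(q, cluster[0]) >= threshold:
                cluster.append(q)      # close enough in meaning: same cluster
                break
        else:
            clusters.append([q])       # nothing similar yet: start a new cluster
    # Singleton clusters are the outliers: questions the content may not answer.
    outliers = [c[0] for c in clusters if len(c) == 1]
    return clusters, outliers

log = [
    "how do I reset my password",
    "reset my password please",
    "where can I find my invoice",
    "do you ship to Greenland",
]
clusters, outliers = cluster_questions(log)
```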
Another important differentiator between Raffle’s tech and a solution built directly with API components from third-party providers is the amount of control and governance a Raffle solution gives the customer. Raffle’s models can be trained directly on the customer’s content and thereby learn to associate a question with the right answer even in very tough cases that a generic, off-the-shelf model would not be able to handle.
Combining search and generative AI, so-called retrieval-augmented generation (RAG), is hard because generative models tend to run with whatever content they are given. Search is therefore the key: it has to put the right answer close to the top, otherwise the model has nothing correct to ground on and hallucinations follow.
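A minimal sketch of such a RAG pipeline, assuming a retrieve() step like the one above and leaving the language-model call as a stub, could look roughly like this. The prompt wording and function names are illustrative, not Raffle's implementation.

```python
def build_grounded_prompt(question: str, passages: list[str]) -> str:
    # Instruct the model to answer only from the retrieved sources.
    context = "\n".join(f"- {p}" for p in passages)
    return (
        "Answer the question using only the sources below. "
        "If the sources do not contain the answer, say that you do not know.\n"
        f"Sources:\n{context}\n\nQuestion: {question}\nAnswer:"
    )

def rag_answer(question, documents, retrieve, generate, top_k: int = 3) -> str:
    passages = retrieve(question, documents, top_k=top_k)
    # If the right passage is not among the retrieved ones, the model has
    # nothing correct to ground on, which is exactly where hallucinations start.
    return generate(build_grounded_prompt(question, passages))

# Example wiring with a stand-in "model" that just echoes the prompt back:
# rag_answer("how do I reset my password", docs, retrieve, generate=lambda p: p)
```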
So the short answer to “why Raffle” is simple:
Do you want your generative AI initiative to become a real part of your product – or one of the 40% that gets cancelled by 2027?




