
How we use open source deep learning models

Discover the deep learning frameworks and open source models Raffle uses to create the unique AI algorithms at the core of our products.

This is the third post in our ‘The science behind the raffle-lution’ series.

We recommend you read part one and part two first.

AI powered by deep learning has seen a lot of progress in recent years. This would not have been possible without a strong open source movement.

The two main deep learning frameworks, PyTorch and TensorFlow, are both open source. Interestingly, PyTorch is strongly backed by Facebook, while TensorFlow is a Google project.

Rather than a long discussion of why Big Tech might back open source and open research, this blog post will look at some of the frameworks and models that are available in natural language processing (NLP) and how they can be used in products.     


Deep learning frameworks

A deep learning framework is a library with the building blocks needed to work with deep learning models: loading and manipulating data, and defining, training, and deploying models.

At Raffle we currently favor PyTorch because its API is “Pythonic”, i.e. it reads like idiomatic Python. TensorFlow has some advantages in terms of putting models into production.
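As a toy illustration of the building blocks a framework automates, here is a deliberately framework-free sketch in plain Python: load data, define a model, train it, and make predictions. A real framework like PyTorch provides automatic gradients, GPU support, and far more expressive models; everything below is illustrative only.

```python
# Toy sketch of the four jobs a deep learning framework automates:
# loading data, defining a model, training it, and serving predictions.

# 1. Load data: noiseless pairs (x, y) following y = 2x + 1.
data = [(float(x), 2.0 * x + 1.0) for x in range(10)]

# 2. Define a model: a single linear unit, y = w*x + b.
w, b = 0.0, 0.0

# 3. Train: plain stochastic gradient descent on squared error.
lr = 0.01
for _ in range(2000):
    for x, y in data:
        pred = w * x + b
        grad = pred - y        # d(loss)/d(pred) for 0.5 * (pred - y)^2
        w -= lr * grad * x     # chain rule through w*x
        b -= lr * grad

# 4. Deploy: use the trained parameters to answer new queries.
def predict(x):
    return w * x + b
```

After training, `predict(5.0)` lands very close to 11, since the data was generated from y = 2x + 1. A framework would let us swap the hand-written gradient lines for `loss.backward()` and an optimizer step.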

Keras is another library that makes it easier to get started with deep learning. But be warned, becoming a deep learning master is difficult.  

Open source NLP models

Raffle built NLP AI to make it easier for employees to find company information. So we need models that can understand natural language questions and connect the question with knowledge bases to deliver an appropriate answer.

Speech-based interfaces (speech2text and text2speech) and support for all languages are also highly desirable. So what is available today?

  1. Language modelling. As discussed in the previous post, we can build AI that understands natural text better when we use language models like BERT as the foundation. Hugging Face is a company that specializes in distributing code and trained models for NLP.

  2. Machine translation will eventually break down language barriers. Recently, Facebook AI open sourced a machine translation system that translates between 100 languages.

  3. Speech recognition and speech synthesis are already built into your smartphone, but they are also available as open source so they can be integrated into products.
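To give a flavor of the language-modelling idea behind models like BERT (predicting a hidden word from its context), here is a deliberately tiny, hypothetical sketch that uses simple word-pair counts instead of a neural network; real language models learn far richer context than a single preceding word:

```python
from collections import Counter

# Toy "language model": count which word most often follows a given word,
# then use those counts to guess a masked word. This is a stand-in for the
# masked-word prediction objective used to pretrain models like BERT.

corpus = [
    "how do i reset my password",
    "how do i change my password",
    "how do i reset my email",
]

# Gather bigram statistics: (previous word, next word) -> count.
follows = Counter()
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        follows[(prev, nxt)] += 1

def fill_mask(prev_word):
    """Guess the masked word given only the preceding word."""
    candidates = [(count, nxt) for (p, nxt), count in follows.items() if p == prev_word]
    return max(candidates)[1]
```

With this corpus, `fill_mask("my")` returns "password", because "password" follows "my" more often than "email" does. A neural language model makes the same kind of prediction, but conditioned on the entire sentence.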

The Babel fish brought to life?

Having access to machine translation allows a user to pose a question in a language other than the one the model was trained on by:

1) detecting the language
2) translating
3) inputting the translated question to our system to get an answer
4) translating the answer back to the user's language.
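The four steps above can be sketched as a small pipeline. The language detector, translator, and question-answering system below are toy stand-ins (hard-coded dictionaries), not real models; they only show how the pieces chain together:

```python
# Hypothetical sketch of the four-step multilingual flow. Every "model"
# here is a tiny lookup table standing in for a real component.

DANISH_WORDS = {"hvordan", "nulstiller", "jeg", "min", "adgangskode"}

DA_TO_EN = {
    "hvordan nulstiller jeg min adgangskode": "how do i reset my password",
}
EN_TO_DA = {
    "click 'forgot password' on the login page":
        "klik paa 'glemt adgangskode' paa loginsiden",
}

def detect_language(text):
    # Step 1: detect the language (toy heuristic: any known Danish word).
    return "da" if set(text.split()) & DANISH_WORDS else "en"

def answer_in_english(question):
    # Step 3: the (hypothetical) English-only question-answering system.
    answers = {
        "how do i reset my password":
            "click 'forgot password' on the login page",
    }
    return answers[question]

def ask(question):
    lang = detect_language(question)                                  # step 1
    english = DA_TO_EN.get(question, question)                        # step 2
    answer = answer_in_english(english)                               # step 3
    return EN_TO_DA.get(answer, answer) if lang == "da" else answer   # step 4
```

Asking in Danish returns the answer in Danish, while an English question skips the translation steps entirely.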

Tools like Google Translate are already sophisticated

Speech recognition opens up the possibility for raffle Customer Service to run as a voice-based service. What currently stops us from implementing these solutions is the size of the models. For example, the 100-to-100 machine translation model has 15 billion parameters! We simply cannot run such a big model.

Luckily, thanks to open source contributors, we know that we will soon get a much smaller model that we can run. So one day in the not so distant future raffle can answer your questions in many languages — looks like the Hitchhiker’s Guide to the Galaxy got it right after all!   

On top of that there are infrastructure tools that help us improve models:

  1. A/B testing - for testing alternative models in production
  2. Bayesian optimization - for automated search over model architectures
  3. Automatic retraining - to continuously train and improve our production models
  4. Active learning schemes - to pinpoint which data we should label next to improve the most
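As a minimal illustration of the first item, A/B testing starts with routing each user consistently to one model variant so that results for the two variants can be compared fairly. A common trick, sketched here with hypothetical names, is deterministic hashing of the user id:

```python
import hashlib

# Toy sketch of A/B bucketing for testing two model variants in production.
# Hashing the user id means the same user always sees the same variant,
# without storing any assignment state.

def assign_variant(user_id, variants=("model_a", "model_b")):
    """Deterministically map a user id to one of the variants."""
    digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]
```

Because the assignment is a pure function of the user id, a user's experience stays stable across sessions, and downstream metrics can be split by variant for comparison.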

Staying state-of-the-art

Open source has helped to enable Raffle's unique AI technology.

We collaborate closely with the Technical University of Denmark and the University of Copenhagen to stay on top of the latest developments within research. This serves as important inspiration for our own work.

It is still early days, and we have a strong belief that the technology we are developing will improve quickly and find uses far beyond the current products. 

In our final post in this series we will look at recent trends in NLP AI research to see what is just around the corner. 

Read about our use cases here.