Algorithmic Transparency: Generally about Generative AI at Wolt


Ever since ChatGPT entered our vocabulary in 2023, generative AI has become a household name for many. It is likely even your grandmother has heard about it. But what really is generative AI? In this post, we expand on our thoughts about it and how it may be used at Wolt. The text comes from Wolt's Algorithmic Transparency Report 2024.


Very simply put, Large Language Models (LLMs) churn through vast amounts of text to learn the patterns of words in those texts. Once the patterns are learned, the models are able to generate new text based on some input text, like a question. Generative models pick words based on probabilities, generating them one at a time. The model's ‘intelligence’ comes from the sheer number of patterns it has learned from all that data.
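
To make "picking words one at a time based on probabilities" concrete, here is a deliberately tiny sketch in Python. It is not a real language model: the probability table below is hand-made, whereas an LLM learns billions of such patterns from text. The generation loop, however, works on the same idea: sample the next word from a probability distribution, append it, and repeat.

```python
import random

# Toy next-word probabilities. A real LLM learns patterns like these
# from vast amounts of text instead of using a hand-made table.
NEXT_WORD_PROBS = {
    "where": [("is", 1.0)],
    "is": [("my", 0.7), ("the", 0.3)],
    "my": [("order", 0.8), ("food", 0.2)],
    "the": [("courier", 1.0)],
    "order": [("?", 1.0)],
    "food": [("?", 1.0)],
    "courier": [("?", 1.0)],
}

def generate(prompt: str, max_words: int = 10) -> str:
    """Generate text one word at a time, sampling by probability."""
    words = prompt.lower().split()
    for _ in range(max_words):
        options = NEXT_WORD_PROBS.get(words[-1])
        if not options:          # no known continuation: stop
            break
        candidates, probs = zip(*options)
        words.append(random.choices(candidates, weights=probs)[0])
        if words[-1] == "?":     # treat '?' as an end-of-text marker
            break
    return " ".join(words)

print(generate("Where"))  # e.g. "where is my order ?"
```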


It's clear that generative AI has caused unforeseen waves in the technology space, and we at Wolt are strong believers in its potential to change the way we use tech in the medium to long term for the better. With such a significant shift, it's understandable that it will take some time for the tech industry to figure out how this technology can be applied in the most useful way. So far, we haven't seen many killer features built on generative AI, especially not in user-facing interfaces, and early adopters are still testing the best application areas for the technology.


Generative AI at Wolt

From the point of view of a company operating a business in the physical world, generative AI can help us improve efficiency in several areas of our operations; the physical world is a messy place and generative AI can help at the interface of the physical and digital world!


For example, we use third-party automation tools like X-Menu and OpenAI’s GPT models to help speed up the online transition for our local merchant partners. X-Menu helps extract restaurant menu information directly into Wolt's merchant tools, avoiding boring and time-consuming manual tasks. The GPT models help integrate large, intricate product lists from retailers and merchants into our tools.
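
As an illustration of the kind of extraction task this involves, below is a hypothetical sketch that turns free-form menu text into structured data with OpenAI's Python SDK. The model name, prompt, and output schema are assumptions made for this example; they do not describe Wolt's or X-Menu's actual integrations.

```python
import json
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

RAW_MENU = """
Margherita pizza - tomato, mozzarella, basil ..... 11,50 EUR
Pasta carbonara (contains egg) 13 EUR
"""

# Ask the model to return the menu as JSON so it can be loaded
# into merchant tools without manual re-typing.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative choice, not Wolt's actual setup
    response_format={"type": "json_object"},
    messages=[
        {
            "role": "system",
            "content": (
                "Extract menu items from the user's text. Respond with JSON: "
                '{"items": [{"name": str, "description": str, "price_eur": number}]}'
            ),
        },
        {"role": "user", "content": RAW_MENU},
    ],
)

for item in json.loads(response.choices[0].message.content)["items"]:
    print(f'{item["name"]}: {item["price_eur"]} EUR')
```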


Another way we use generative AI is through OpenAI’s API to analyze and summarize text received through our Support chat, regardless of the language. This helps our Support team respond more efficiently to partners on all sorts of matters, ranging from feedback to delivery instructions.
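
A minimal sketch of what such a summarization call can look like is below; the model name and prompt are illustrative assumptions rather than our production setup.

```python
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A support message in any language; here, Finnish.
chat_message = "Kuriiri ei löytänyt ovea, ja tilaus on myöhässä 20 minuuttia."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative choice
    messages=[
        {
            "role": "system",
            "content": "Summarize the customer's message in one short English sentence.",
        },
        {"role": "user", "content": chat_message},
    ],
)

print(response.choices[0].message.content)
# e.g. "The courier could not find the door and the order is 20 minutes late."
```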


While the technology is useful for understanding the unstructured free text, images, and other data that we collect, the impact of generating new text and data is still lagging behind. One hindrance we currently see with LLM-powered features is the unsolved quality control and security aspects of the technology, not to mention the lack of warmth and a human tone of voice in the generated text.


The space is obviously very new, and development in these areas, along with many others around generative AI, is extremely fast and well-resourced. Therefore, we are certain that best practices around these aspects will emerge. We are following the space closely and developing our own solutions and governance processes to be able to use the technology to its full potential in a safe way.


Ensuring strong AI governance

At Wolt, we have an AI Usage Standard that defines how machine learning (ML) or AI solutions may be used. Whenever someone wants to use such a system at Wolt, there are three things they must go through:


  • Conduct a risk analysis 
  • Follow purpose-specific guidelines for specific usage scenarios of AI/ML
  • Follow general requirements that apply to AI/ML


Our AI governance is dynamic in nature. We constantly develop it further to make sure it meets our current and future needs.

Risk analysis

For any new or substantially altered application of AI, a comprehensive risk analysis is essential. This includes conducting Data Protection Impact Assessments if personal data is involved, and performing Transfer Impact Assessments if data is to be transferred out of the EU or processed by non-EU entities not covered by the EU Commission’s adequacy decision. 

When a new third-party vendor is involved, a specific onboarding process must be followed, including various checks and considerations. This process also includes Intellectual Property Rights considerations in cases involving generative AI. Additionally, before deploying any AI system, a thorough evaluation of implementation-specific risks, including security threat modeling, is needed. 

Purpose-specific guidelines

Depending on the purpose of the AI system, there are specific guidelines that have to be followed. For example, if a system is intended to be used for AI-based code generation, the generated code has to go through human review, as automatically generated code may contain security vulnerabilities.

General requirements

The AI Usage Standard also requires that any use of an AI system follows certain general requirements to ensure the safety and privacy of our users and partners. These are as follows:


  • No autonomous automatic decisions: AI must not, on its own, make automated decisions that have legal effects or similarly significantly affect an individual. In practice, this means that the AI can only suggest alternatives; the final decision should always be made and approved by a human.
  • Don’t let generative AI speak for Wolt: The output of generative AI should not be used in externally facing communications without sufficient quality assurance, to avoid a situation where a model provides incorrect information with implied authority.
  • Enforce data retention periods: If the AI system will store personal data by itself, e.g., for model learning, validation, or fine-tuning, a data retention period must be defined and personal data storage must be aligned with Wolt’s data subject access rights and data deletion processes.
  • Ensure that AI systems are aligned with the legal basis for processing personal data: If the AI system processes personal data, it has to take into account the legal basis under which we process the data. For example, it must allow for the deletion of data and the withdrawal of consent, and provide options for restricting or objecting to processing.

The information on Wolt’s products and algorithms in this report is based on our operations as of February 2024 in Austria, Azerbaijan, Croatia, Cyprus, Czech Republic, Denmark, Estonia, Finland, Georgia, Greece, Hungary, Iceland, Israel, Japan, Kazakhstan, Latvia, Lithuania, Luxembourg, Malta, Norway, Poland, Serbia, Slovakia, Slovenia, and Sweden. For more information, check out our Transparency web page.

About Wolt

Wolt is a Helsinki-based technology company with a mission to bring joy, simplicity and earnings to the neighborhoods of the world. Wolt develops a local commerce platform that connects people looking to order food, groceries, and other goods with people interested in selling and delivering them.

Wolt was founded in 2014 and joined forces with DoorDash (NASDAQ: DASH) in 2022. Together, we operate in more than 30 countries today, 28 of which are with the Wolt product and brand. You can read more on the Wolt website.
