This blog is the sixth in a series on AI business challenges, written by Xomnia’s analytics translators.
Imagine your business as a high-end restaurant. To create a memorable dining experience, there's more to it than just cooking a dish and serving it to the customer. You need to hire a Michelin star chef who not only prepares innovative, delicious dishes, but also curates the menu, sets the ambiance, collaborates with the team, and ensures the best ingredients are used - every detail matters to ensure the success of the restaurant.
In the realm of business, the role of a Michelin star chef is played by an analytics translator. Much like a chef, an analytics translator does not merely prepare and serve a predictive model. They ensure that the problem at hand is well understood, facilitate the process of choosing the right methods and algorithms, align collaboration between various teams, and make sure the model is usable and performs well in a practical, real-world environment. For the purpose of this blog, let's imagine our 'dish' is a predictive model. Yet remember, this 'dish' could be any data product. Regardless of what it is, it’s not just served; it’s deployed meticulously and with purpose.
But what does 'deployment' mean in this context? Simply put, deployment is the stage where the predictive model, after being thoroughly designed and tested, is put into operation within the business. This stage can often be challenging for various reasons, such as technical issues, miscommunication, or a lack of understanding about how the model should be applied. Besides, it might be intimidating to deploy something that isn't perfect, and we might naturally prefer to finish all backlog features before releasing.
As tempting as it may be to hold out for a ‘perfect product’ before deployment, lingering indefinitely in the proof-of-concept phase or avoiding deployment out of fear of imperfections does not bring any value. This is because data products typically add value incrementally, with each release. In other words, without proper deployment, we cannot fully harness a predictive model's potential.
At Xomnia, we understand the nuances and complexities of deployment, just as a Michelin star chef knows that serving a dish is only the final step in a well-orchestrated process. We have developed a structured approach, a 'go-live' plan, to ensure successful deployment. In the sections below, we will elaborate on our plan and how an analytics translator facilitates this process.
Step 0: Pre-deployment
When the chef first arrives at your restaurant, they will probably need to understand certain aspects before they can start designing mouth-watering dishes. For example, is the restaurant’s offer restricted (e.g., vegan, halal, or locally sourced)? Who is the target customer? How will we determine which new dishes are “good enough”? How long does the chef have to develop the new menu? How big is the kitchen and how many cooks are available?
This initial phase is what we call pre-deployment, where we focus on the project scoping and preparation. During this phase, an analytics translator addresses the most relevant questions to clarify and outline the project:
- Defining the problem with clear goals and objectives
- Identifying stakeholders and their requirements, involving them early, and clarifying expectations, roles, and communication
- Mapping 'the system' to understand which applications/teams are impacted by the deployment
- Aligning goals, intentions, timelines, and technical dependencies with the identified teams
- Allocating resources, both human and technical
- Collecting and preprocessing data to train and test the model
- Determining evaluation metrics and success criteria
- Setting timelines and milestones
- Selecting appropriate algorithms and techniques
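To make the last few bullets concrete, the evaluation metrics and success criteria can be codified as soon as they are agreed, so every later phase judges the model the same way. A minimal Python sketch, where the metric set and the thresholds are purely illustrative assumptions rather than recommended values:

```python
# Codifying evaluation metrics and success criteria agreed during
# pre-deployment. The thresholds below are illustrative assumptions.

def evaluate(y_true, y_pred):
    """Compute simple metrics for a binary classification model."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return {
        "accuracy": sum(1 for t, p in zip(y_true, y_pred) if t == p) / len(y_true),
        "precision": tp / (tp + fp) if (tp + fp) else 0.0,
        "recall": tp / (tp + fn) if (tp + fn) else 0.0,
    }

# Success criteria agreed with stakeholders up front (illustrative values).
SUCCESS_CRITERIA = {"accuracy": 0.80, "precision": 0.75, "recall": 0.70}

def meets_criteria(metrics, criteria=SUCCESS_CRITERIA):
    """True only if every agreed metric clears its agreed threshold."""
    return all(metrics[name] >= threshold for name, threshold in criteria.items())
```

Writing the criteria down as code (or config) keeps the later "is it good enough to go live?" discussion objective instead of a matter of opinion.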
Step 1: Model and infrastructure
Once the chef has a clear outline of what to do, the culinary creativity begins to flow. The chef's job involves a balance of crafting recipes, evaluating dishes against predefined standards, and preparing the resources necessary to train other cooks. Simultaneously, the chef ensures the right infrastructure is in place, such as sourcing specific ingredients or procuring the needed kitchen equipment.
Similarly, the next step in a go-live plan for a predictive model is to build your model and set up the necessary infrastructure for deployment. In this step you will also finalize your model and ensure it’s properly documented.
Model development may include the following steps:
- Developing, (re)training, and validating the model - forming the feedback loop
- Assessing the model's fairness, accountability, and transparency (FAT properties)
- Assessing model drift
- Preparing user guides, API documentation, and other supporting materials
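As one concrete way to assess model drift, a common technique is the Population Stability Index (PSI), which compares a feature's distribution at training time against its live distribution. A minimal sketch in plain Python; the binning scheme is simplistic and the thresholds in the comment are a conventional rule of thumb, not Xomnia's specific method:

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between a training sample (expected)
    and a live sample (actual) of a single numeric feature."""
    lo, hi = min(expected), max(expected)

    def frac(sample):
        counts = [0] * bins
        for x in sample:
            i = int((x - lo) / (hi - lo) * bins) if hi > lo else 0
            counts[min(max(i, 0), bins - 1)] += 1  # clamp out-of-range values
        # A small floor avoids log(0) / division by zero for empty bins.
        return [max(c / len(sample), 1e-4) for c in counts]

    e, a = frac(expected), frac(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

# Rule of thumb: PSI < 0.1 stable, 0.1-0.25 moderate drift, > 0.25 significant.
```

Running such a check on each input feature at a regular cadence turns "assessing model drift" from a vague intention into a scheduled, alertable task.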
Parallel to model development, setting up the correct infrastructure is key to successful deployment.
Examples of infrastructure set-up and data management activities include:
- Setting up infrastructure (development environment: Azure ML, AWS, GCP, etc.) and resources for deployment
- Integrating the model with the target system or application (and checking dependencies)
- Performing end-to-end integration testing
- Preparing for regular data updates
- Defining necessary data transformations and statements
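To illustrate the end-to-end integration testing bullet, such a test can start life as a simple smoke test asserting that the model's output conforms to the schema the consuming teams expect. In the sketch below, `predict` is a stub standing in for a call to the deployed endpoint, and the field names are hypothetical:

```python
# A minimal smoke test for the deployment pipeline. In practice `predict`
# would call the deployed endpoint (e.g., an Azure ML web service); here it
# is stubbed, and the input/output schema is an illustrative assumption.

def predict(customer_record):
    """Stub for the deployed model endpoint."""
    return {
        "customer_id": customer_record["customer_id"],
        "recommended_product": "P-001",
        "score": 0.87,
    }

def test_end_to_end():
    record = {"customer_id": "C-42", "age": 31, "segment": "retail"}
    result = predict(record)
    # The identifier must survive, so downstream systems (e.g., the CRM)
    # can match the prediction back to the customer.
    assert result["customer_id"] == record["customer_id"]
    # The output must match the schema agreed with the consuming team.
    assert set(result) == {"customer_id", "recommended_product", "score"}
    assert 0.0 <= result["score"] <= 1.0
```

Even a test this small catches the most common integration failure: two teams silently disagreeing about field names or value ranges at the system boundary.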
Just as the chef involves the cooks, restaurant manager, and other team members in this phase, an analytics translator also needs a team to perform all the activities listed. The team will most likely include data scientists, data engineers, and machine learning engineers performing different tasks. However, the analytics translator's role extends beyond overseeing tasks; they are central to ensuring everyone is working towards the same goal and providing overall guidance for the project team. An analytics translator also acts as a systems thinker, viewing the model and its deployment not as standalone processes, but as part of a complex, interconnected landscape of systems and stakeholders.
Systems thinking involves understanding that changes made in one part of the system can affect others, often in unpredictable ways. Let’s take an example from one of Xomnia’s clients. They wanted to deploy a product recommendation engine on their data science platform (Azure ML). For it to be operational and effectively used by users, the recommendation engine needed access to their data virtualization platform and had to write its output to the CRM system (Salesforce). This web of interactions, and the fact that multiple teams owned different parts of the system, meant that priorities had to be carefully navigated and negotiated. Figuring out how the integration would work, conducting joint tests, and managing different teams' schedules, all contributed to a challenging deployment process.
Step 2: User Acceptance Testing and Roll-out preparation
At this stage, the chef has created a new, delicious menu; all necessary ingredients and tools are available, and all cooks can be trained on how to make the dishes. But how do we know whether these dishes will be well received? How do we ensure the dishes will impress the customers before committing to a substantial food stock or major investment? And how do we prepare to incorporate the new courses into the restaurant’s menu?
One approach might be to offer a tasting experience for a select group of customers to evaluate their reactions before fully incorporating the new dish(es) into your everyday business. Alongside this, there are other crucial tasks, such as setting a go-live date or organizing a launch event, creating a sense of excitement to boost the popularity of your new offer.
Likewise, once a predictive model is developed and the infrastructure is established, it's essential to conduct user-focused testing and prepare to roll it out. Testing is a crucial step before your model goes live. Involving a diverse group of end-users during testing helps you catch issues that might otherwise be overlooked.
There are two main aspects to this step. The first is user acceptance testing, where a diverse group of real-life end-users will help you evaluate the model’s performance.
Examples of user acceptance testing activities may include:
- Conducting end-to-end testing to ensure proper data flow and integration
- Validating the model’s predictions and output
You will also need to prepare for the roll-out of your model, with activities such as:
- Setting a go-live date
- Identifying or confirming the target users
- Defining optimal practices for utilizing model predictions
- Preparing kick-off, demos and training materials for users
- Developing a communication plan for stakeholder updates
During this critical phase, an analytics translator's role becomes particularly valuable: guiding the process, verifying thorough testing, and ensuring roll-out activities are carried out properly. If testing surfaces issues or new insights, the analytics translator processes them and creates an appropriate action plan to address them.
Step 3: Roll-out
So you have a number of new, exciting, and exquisite dishes ready to be incorporated into your restaurant’s offer. Your staff can be trained to cook the recipes, you have all the resources you need, and the testing phase was a success. Congratulations! You are (almost) ready to go live!
But, before you release this new menu, there are a few last steps to ensure its success. For example, you want to make sure that enough people are aware and excited about it, so you plan a launch party. You ask the chef to train all cooks in how to create the dishes to perfection so no mistakes are made when serving customers. Furthermore, you want to keep monitoring the customers’ reaction to the menu, continuously collecting their feedback, so you can make timely adjustments that secure a steady flow of customers into your restaurant.
Moving on to the realm of data products, this corresponds to what we call the go-live phase, which plays a pivotal role in the success of your new model. It involves several tasks, including holding kick-off meetings with users, preparing training materials, and giving demos, all of which are essential to a successful launch and the desired outcome.
Examples of go-live tasks include:
- Conducting kick-off meetings with users (explaining the model, working methods, potential actions, etc.)
- Providing training materials (e.g., documentation, demos, videos)
- Assigning super-users or champions for coaching and training
- Preparing and sending go-live communication to relevant stakeholders
Once your model is live, it’s important to track its performance and effectiveness. Also, plan regular data updates and data transformations to keep your model relevant and accurate.
Model usage and monitoring activities include:
- Setting up a system for monitoring user interactions with the model
- Planning and tracking experiments to measure the impact of the model on business outcomes
- Setting up experiments or field tests to evaluate model performance and effectiveness
- Establishing methods for tracking the success of specific actions, or 'interventions', that are taken based on the model's predictions
- Tracking interventions and outcomes using relevant tools (e.g. CRM)
- Monitoring and adjusting the model and working methods as needed, based on results and collected user feedback
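As an illustration of tracking interventions and their outcomes, a minimal rolling tracker could look as follows. In practice this data would typically live in a CRM or experimentation platform; the class and window size here are only an illustrative sketch:

```python
from collections import deque

class InterventionTracker:
    """Minimal sketch for tracking model-driven interventions and their
    outcomes over a rolling window (a stand-in for CRM-based tracking)."""

    def __init__(self, window=100):
        # Only the most recent `window` outcomes are kept, so the success
        # rate reflects current performance rather than all-time history.
        self.outcomes = deque(maxlen=window)

    def record(self, intervention_id, success):
        """Log whether a single intervention led to the desired outcome."""
        self.outcomes.append((intervention_id, bool(success)))

    def success_rate(self):
        """Fraction of recent interventions that succeeded (None if no data)."""
        if not self.outcomes:
            return None
        return sum(ok for _, ok in self.outcomes) / len(self.outcomes)
```

A rolling rather than all-time success rate makes a drop in effectiveness visible quickly, which is exactly the feedback the monitoring and adjustment bullet above depends on.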
Implementing a predictive model in a business environment is much like introducing a new menu in a restaurant. It requires careful planning, execution, and the ability to adapt based on feedback. Just as the chef remains involved in this final stage, an analytics translator guides this phase as well.
Through each stage of the process, Xomnia's analytics translators apply their expertise and strategic understanding. They set up roll-out activities, engage relevant stakeholders, and establish methods to track and process user feedback. They are instrumental in adjusting the model based on user feedback.
With a proven way of working and a strong focus on creating impact, Xomnia’s analytics translators know how to bring ideas to production. By deploying solutions as soon as possible (with an MVP) to end users, value can be validated early and often.
Our way of working leads to solutions that maximize value over time based on end-user feedback to ensure an optimal fit between problem and solution. Would you like to learn more or have a conversation about AI business challenges at your organization? Get in touch at firstname.lastname@example.org.