How to Create Successful Chatbot Experiments
Successful chatbots require work! And they won’t be perfect the first time around. The great thing about chatbots is that you can take what you learn from one chatbot and apply it to others! That’s where experiments really accelerate your chatbot game. Not only do experiments help you continually refine your existing chatbots, they also build a knowledge base you can draw on for future applications. Here at ChatFunnels we’ve led our fair share of experiments, and we want to share 5 essential things to keep in mind when designing successful chatbot experiments.
1. Decide on the type of test
There are several different types of experiments you can choose from, depending on the elements you want to test. Three that ChatFunnels uses frequently are split testing, historical comparison, and control/experiment groupings. Split testing is essentially an A/B test. For example, an A/B test bot would randomly fire one of two versions of the same bot to different people, with only one variable differing between the two. This can help you test which of two specific variables, like subject lines or calls to action, performs better.
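The random assignment behind an A/B bot test can be sketched in a few lines. This is a minimal illustration, not a ChatFunnels API: `assign_variant` is a hypothetical helper, and seeding on the visitor ID is one common way to keep a visitor in the same bucket across visits.

```python
import random

def assign_variant(visitor_id, variants=("A", "B")):
    """Randomly assign a visitor to one of two bot variants.

    Seeding on the visitor ID makes the assignment "sticky":
    the same visitor always sees the same variant.
    """
    rng = random.Random(visitor_id)
    return rng.choice(variants)

# Across many visitors, the split comes out roughly 50/50.
assignments = [assign_variant(i) for i in range(1000)]
```

Sticky assignment matters because a visitor who sees variant A on Monday and variant B on Tuesday muddies the comparison between the two.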
Historical comparison tests take past data and compare that to new data collected by the experiment. For example, if you wanted to test whether a bot captured more emails than a traditional form on a website, you could create a bot and compare how many emails it captures over a certain period of time to how many emails the traditional form captured in the same amount of time in the past.
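The email-capture comparison above boils down to normalizing both totals to the same time window and computing the lift. The numbers here are hypothetical, purely for illustration:

```python
# Hypothetical counts: emails captured by the old form last quarter
# vs. by the new bot this quarter, over the same number of days.
form_emails, form_days = 420, 90   # historical baseline
bot_emails, bot_days = 510, 90     # new experiment data

form_rate = form_emails / form_days
bot_rate = bot_emails / bot_days
lift = (bot_rate - form_rate) / form_rate  # relative improvement

print(f"Form: {form_rate:.2f}/day, Bot: {bot_rate:.2f}/day, lift: {lift:.1%}")
```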
A control/experiment group test creates a control group and an experiment group where the experiment group is exposed to the experiment and the control group is not. For example, if you want to test whether an in-product message affects a user’s retention rate, you could send informative messages to one group of people and no message to another group of people to see how the retention rate is affected.
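The retention example can be sketched the same way: compare the share of retained users in each group. Again, the counts are hypothetical, just to show the shape of the comparison:

```python
# Hypothetical retention counts: users still active after 30 days,
# out of those who did / did not receive the in-product message.
experiment_retained, experiment_total = 230, 500  # got the message
control_retained, control_total = 190, 500        # no message

experiment_rate = experiment_retained / experiment_total
control_rate = control_retained / control_total
print(f"Experiment: {experiment_rate:.0%}, Control: {control_rate:.0%}")
```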
There is potential for overlap in these types of tests but try to keep them separate as much as possible in order to avoid confusion when trying to determine what really affected the outcome of the experiment. An experiment with too many variables gets messy really quickly.
2. Establish clear parameters
A parameter is essentially a characteristic of a population, which means that when you set parameters for your experiment, you define who will be part of your experiment and what that experiment will look like. These parameters can vary depending on who you’re targeting and what you’re trying to test. A parameter can be how a certain bot is fired, which URL a bot fires on, how much time passes before a bot is fired, etc. Make sure these parameters are clear and documented! This will help you later when you’re analyzing the success of your experiment. It can also be useful for comparing with future experiments or informing how you design them.
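Documenting parameters can be as simple as keeping them in one structured record per experiment. The field names below are illustrative, not a ChatFunnels schema:

```python
# A sketch of documenting experiment parameters up front.
# Every field name and value here is hypothetical.
experiment_params = {
    "name": "pricing-page-bot-v1",
    "target_url": "/pricing",           # which URL the bot fires on
    "trigger": "scroll_50_percent",     # how the bot is fired
    "delay_seconds": 5,                 # time before the bot fires
    "audience": "first_time_visitors",  # who is included
    "goal_metric": "emails_captured",   # how success will be measured
}
```

Keeping a record like this per experiment makes later analysis and cross-experiment comparison far easier.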
3. Define success
No experiment is complete without a goal. To keep things simple, measure your experiment’s success based on one defined goal. Maybe that’s email captures, closed deals, subscription renewals, or product interaction. You can change the goals from version to version or maintain a goal throughout various iterations of an experiment to see what really makes a difference.
Pro tip: Align your experiment goals with your company goals to give the experiments more purpose. Establish a goal based on what will help your company take its next step in growth and progress.
4. Don’t wait for perfection
Version 1 of any experiment is never going to be perfect, no matter how much of an expert you are. That’s what makes experiments so interesting! There’s always something new to learn and develop. However, knowing that any early version isn’t foolproof can make it hard to pull the trigger and actually start experimenting. But if you wait until your experiment is perfect to start experimenting, you’ll never do anything! Remember that the point of an experiment is to throw well-researched ideas out there, see what works and what doesn’t, and then revamp for the next round. There’s a very real possibility of failure…not every experiment can be a home run! But those failures will ultimately provide the learning that will heavily impact all future experiments you decide to implement.
5. Compile and analyze data
Now your experiment is up and running! Give it time for the data to accumulate. This is especially important for identifying trends and results that have actual statistical significance. For example, if you’re running a test where you send the same email with different subject lines to two different groups of people, but only five people have received it, that’s not really going to tell you much about the relative success of either subject line. A good rule of thumb is to wait until your test size is at least 100 people before you start drawing any conclusions from the data.
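One common way to check whether a difference like the subject-line example is statistically meaningful is a two-proportion z-test; |z| above roughly 1.96 corresponds to significance at the 5% level for a two-sided test. This is a standard-statistics sketch with hypothetical numbers, not a ChatFunnels tool:

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-proportion z-statistic for comparing two conversion rates."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical subject-line results: 60/400 opens vs. 40/400 opens.
z = two_proportion_z(60, 400, 40, 400)
# |z| > 1.96 would suggest the difference is significant at the 5% level.
```

With only five recipients per group, the same calculation would produce a z-statistic far too noisy to act on, which is exactly why sample size matters.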
Once you’re at the point where you can collect data, start compiling it into reports regularly, either weekly or monthly. Take the time to clean and organize the data, then analyze it! Look for key insights that will drive experiment improvements for the next version and subsequent bots.
Now that you have some idea of where to start, get at it and never stop experimenting! There’s so much to learn in the world of chatbots and how customers want to interact with them. Test new variables, test new bots, and get creative! Through experimentation, ChatFunnels has been able to identify interesting insights about the effects of videos, emojis, and capitalization on chatbots and in-product messaging success, plus so much more!