How to Increase Your Live Chat Response Rate by 70.4%

Reading Time: 6 minutes

Imagine a group of basketball players casually shooting hoops. Now imagine those same players in a tournament game. What is the main factor behind the shift in their commitment, intensity, and, ultimately, performance? The difference is whether or not they are keeping score.

Whether by human nature or social conditioning, people tend to respond to competitive environments with increased commitment, so we applied this trait to a chatbot experiment. We ranked the members of a regional sales team against each other based on points gained or lost for each chat, using a scorecard. TV monitors displayed this scorecard in the chat team’s work area to heighten the competitive environment. Our initial hypothesis was that gamifying chat performance would improve the chat team’s metrics.

Performance Metrics

The two most important metrics that we tracked were (1) Response Rate and (2) Response Time.

(1) Response rate measures whether or not the agent conversed with the site visitor after being routed into the conversation. To calculate this metric, we took the total number of chats with agent participation and divided that number by the total number of chats where an agent was routed. In other words, the sum of chats where an agent participated divided by the sum of chats where an agent should have participated.

Response Rate = Number of Chats with Agent Participation / Total Number of Chats with Agent Routing

For example, let’s say Colton was routed into 20 chats during the week. He had 18 conversations with site visitors, but he missed two because he forgot to change his availability before leaving his desk for a meeting. His response rate for the week would be 18 responses divided by 20 opportunities: 90%.
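To make the arithmetic concrete, here is a minimal Python sketch of the calculation (the function name is ours, for illustration only):

```python
def response_rate(participated: int, routed: int) -> float:
    """Share of routed chats the agent actually responded to."""
    return participated / routed

# Colton: 18 conversations out of 20 routed chats
print(f"{response_rate(18, 20):.0%}")  # 90%
```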

(2) Response time is the time it takes an agent to send their first message in the conversation after being routed in. Agents are scored on both the average and the median of their response times for the week.


[Screenshot of an example chat: the agent’s response time was one minute.]

For example, Kristina had only five chats this week. In her first chat, she sent her first message 30 seconds into the conversation. Her response times for the other four conversations were 45, 60, 20, and 50 seconds. Her average response time for the week was 41 seconds, and her median response time was 45 seconds.
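Kristina’s numbers can be checked with Python’s standard statistics module; this is an illustrative sketch, not part of the original experiment:

```python
from statistics import mean, median

# Kristina's five response times for the week, in seconds
times = [30, 45, 60, 20, 50]

print(mean(times))    # 41 -- average response time
print(median(times))  # 45 -- median response time
```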

Pre-gamification Metrics

This test was a historical comparison: we took past data and compared it to data collected after the scorecard was in use. Before the scorecard was implemented, the regional sales team’s average response rate was 49%.

The average response time was two minutes and 40 seconds.

The team’s median response time was 58 seconds.

The Experiment

We ran the experiment over the course of 100 days (just over three months). During this time, we tracked the performance of ten sales representatives. Chats came through a bot that fired mainly on the company’s homepage (along with several other pages throughout the website) and set the site visitor’s expectation that “It’ll take me less than two minutes to find a human.” The ten reps were rotated into incoming chats using round-robin routing rules.
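For readers unfamiliar with round-robin routing, here is a minimal sketch of the idea in Python. The actual routing was configured in the chat platform; this illustration uses placeholder names and ignores agent availability:

```python
from itertools import cycle

# A fixed rotation of reps (names are placeholders)
reps = [f"Rep {i}" for i in range(1, 11)]
rotation = cycle(reps)

def route_next_chat() -> str:
    """Assign the next incoming chat to the next rep in the rotation."""
    return next(rotation)

for _ in range(3):
    print(route_next_chat())  # Rep 1, Rep 2, Rep 3, ...
```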

In the Sales Team Scorecard, we gave agents points for participating in conversations and awarded additional points based on their response time (two points for under one minute, one point for under two minutes). Each time an agent failed to respond to the site visitor, we subtracted points. Every week we also created a similar points-based scorecard covering the last 30 days.
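A per-chat scoring rule might look like the following sketch. The speed bonuses match the rules above; the base participation point and the non-response penalty are assumptions, since the exact values are not specified:

```python
def chat_points(responded: bool, response_seconds: float = 0.0) -> int:
    """Score a single chat under the scorecard rules."""
    if not responded:
        return -1              # assumed penalty for a non-response chat
    points = 1                 # assumed base point for participating
    if response_seconds < 60:
        points += 2            # bonus for responding in under one minute
    elif response_seconds < 120:
        points += 1            # bonus for responding in under two minutes
    return points

print(chat_points(True, 45))   # 3: participation + under-one-minute bonus
print(chat_points(True, 90))   # 2: participation + under-two-minutes bonus
print(chat_points(False))      # -1: missed the chat
```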

We also produced scorecards for individual agent performance metrics for the week and for the last 30 days. These scorecards included number of chats, number of non-response chats, response rate, and average response time.

In conjunction with the new scorecard, we conducted chat trainings covering account setup (including notification settings), navigating the chat window, live chat best practices, Q&A sessions, and common chat mistakes.

The Results

ChatFunnels’ weekly scorecard service had the following impact on the sales team:

The team’s response rate improved from 49% to 83.5%, a 70.4% relative increase. Furthermore, the team’s weekly response rate topped 90% in six of the experiment’s fourteen weeks, peaking at 96%.

Average response time dropped by 94 seconds, from 2:40 to 1:06. The lowest weekly average, just 37 seconds, came in the last week of the experiment.

Median response time fell by 22 seconds, from 58 seconds to a team median of 36 seconds. Throughout the experiment, the highest weekly median was only 48 seconds and the lowest was 22 seconds.
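The headline figures reduce to simple arithmetic, which can be verified directly:

```python
# Response rate: relative increase from 49% to 83.5%
before_rate, after_rate = 0.49, 0.835
print(f"{(after_rate - before_rate) / before_rate:.1%}")  # 70.4%

# Average response time: 2:40 (160 s) down to 1:06 (66 s)
before_secs, after_secs = 160, 66
print(before_secs - after_secs)  # 94-second decrease
```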

Limitations

Before the experiment started, we told the chat agents about the importance of chat and the implementation planned for the team. This affected the historical comparison: part of the performance increase caused by the scorecard occurred before the experiment officially began, as the agents became aware of, and excited about, the new emphasis on live chat. When the experiment did officially start, there was an immediate improvement in performance while enthusiasm was high. After eight weeks, however, response rate and the other performance metrics dropped; as the chat agents got comfortable, they became complacent and their performance started to slip.

To get back on track, we worked closely with the chat and demand generation managers to adjust our tactics for regulating performance. During our daily conversation reviews, we started leaving more direct training notes for non-response chats and long response times. We also included in our weekly reporting a list of agents to receive one-on-one training for specific improvements. Furthermore, we arranged a new program in which performance was directly linked to an agent’s opportunity to take chats: agents who performed well kept the privilege of taking live chats, while agents whose metrics dropped were temporarily taken off chat but given the opportunity to earn their way back. Beyond the initial excitement of implementing the scorecard, there must be ways to keep it continuously relevant for the team so performance does not revert to pre-implementation levels.

Conclusions

Implementing a scorecard turned this good group of chat agents into an all-star team. As our partner says, “The proof is in the data!” We saw outstanding increases in team response rate and decreases in response time. Adding a competitive element to the environment brought chat performance to a new level and helped agents see the priority of these metrics. Ultimately, the end goal is to give site visitors a better experience and capture leads; carefully tracking these numbers and gamifying the process keeps performance metrics at the top of every agent’s mind and is the means to reaching that goal.

For more information on this experiment, read “The Tortoise or the Hare? Which will win you more business?” by Trace Hansen on how response time affects the probability of email capture.

Ready to learn how our expertise can turn your team into all-stars? Set up a free 10-minute consultation now!