Happy new year! Oh wait, it's almost May...
(let me try again.)
Long time no see, dear readers! Ever since we ended the month of January on a high note with a trip and a newsletter summing up our 2020 journey, it has been pretty quiet around here. You may wonder what we have been up to for the last few months, and what we have in store for 2021. It may not look like it, but things have been heating up at Palexy recently, and we have a few exciting reveals lined up. First of all, we are in the process of revamping our website to create a better experience for visitors. Please stay tuned and check back in a few weeks! More importantly, our Customer Success team and engineering team have joined forces to develop a whole new metric. It is called EoS (Effectiveness of Staff), and it may well be a game changer for retailers. So what is EoS, and how does it work?
The Catch-22 of staff performance in retail
Retail work may look straightforward on the surface, but it is anything but. In this previous article, we talked about the need to measure the performance of retail workers. But as we all know, retail staff are people, and people are complex. How would one go about weighing various human attributes to arrive at the composition of the ideal worker? After two years of working with retailers of all sizes in Vietnam, we have narrowed it down to two components:
Staff interaction rate: the quantity of customer engagements. The more the merrier! (Going wide)
Staff conversion rate: the quality of customer engagements. How skillful the staff are at converting shoppers to buyers. (Going deep)
As the laws of physics would suggest, it is nearly impossible for retail personnel to excel at both. They could either try to connect with as many customers as possible, or devote their time and energy to a few. Since they are only human, a top interaction rate and a top conversion rate could not coexist in the same person. However, the Customer Success team at Palexy hypothesized that there existed a point of equilibrium where employee effectiveness was at its highest. Thus EoS was born.
Choosing between opposites
Before we get into the specifics of EoS, let's revisit a fairly technical concept: Recall versus Precision.
Think about population-wide Covid-19 testing, for example. Let's say you had three test kits at your disposal and 1000 subjects, 10 of whom were positive.
Test kit A could detect the largest number of positive patients: it identified all 1000 subjects as positive, so not a single carrier slipped through. Test A was the recall champion.
Test kit B focused on exactitude. Every single subject test B flagged as positive was indeed positive, but it could only detect 1 of them. Test B won on the precision front.
Test kit C identified 20 subjects as positive, 8 of whom were actual Covid-19 carriers.
Test C seemed like the best, didn't it? We would instinctively think so, but let's quantify it.
Test A's F1 Score is 0.0198.
Test B's F1 Score is 0.1818.
Test C's F1 Score is 0.5333.
Let's hope your government chose test C!
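For the numerically inclined, the three F1 Scores above can be reproduced in a few lines of Python, using the subject counts straight from the example:

```python
def f1_score(true_positives, retrieved, relevant):
    """Compute F1 from raw counts: the harmonic mean of precision and recall."""
    precision = true_positives / retrieved   # correct hits / everything flagged
    recall = true_positives / relevant       # correct hits / everything that should be flagged
    return 2 * precision * recall / (precision + recall)

# Test kit A: flagged all 1000 subjects, catching all 10 positives.
f1_a = f1_score(true_positives=10, retrieved=1000, relevant=10)
# Test kit B: flagged exactly 1 subject, who was indeed positive.
f1_b = f1_score(true_positives=1, retrieved=1, relevant=10)
# Test kit C: flagged 20 subjects, 8 of whom were positive.
f1_c = f1_score(true_positives=8, retrieved=20, relevant=10)

print(round(f1_a, 4), round(f1_b, 4), round(f1_c, 4))  # 0.0198 0.1818 0.5333
```

Test C wins by a comfortable margin, despite being neither the recall nor the precision champion.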
That was an extreme example for the sake of demonstration, but dilemmas like it abound in daily life, and finding the definitive optimal point is not always so straightforward. Imagine how helpful it would be if every conundrum in life came with its own score!
The test kits in the above example are simple. Test kit A went wide (Recall). Test kit B went deep (Precision). Test kit C landed at a spot wide enough and deep enough to be chosen (best F1 Score). As you may have noticed, this is quite similar to the problem of assessing staff effectiveness. Following the same logic, the optimal outcome should be the unifier of two opposing aspects: staff interaction rate and staff conversion rate. Fortunately, the machine learning world has come up with its own formula to address this problem.
Recall versus Precision: the old tug-of-war
Many machine learning professionals would testify that of the many confusing concepts in their field, Precision and Recall rank pretty highly on the difficulty scale. People can usually tell the two apart easily; it is pinning down exactly what each one means that gets them into trouble. It does not help that half-baked ideas and definitions of Precision and Recall float around, baffling the baffled even more. So here is a super mini crash course in Precision versus Recall for the uninitiated.
Precision is the number of correct instances retrieved divided by the number of all retrieved instances.
Recall is the number of correct instances retrieved divided by the number of all correct instances.
Both of these revolve around correctness but in different ways.
The problem that Precision versus Recall poses for data scientists is that these two are like jealous sisters: you cannot have too much of one without sacrificing the other. To balance the tradeoff between Precision and Recall and achieve a good fit, data scientists came up with the F Score, which unifies the two into a single metric. Of course, in certain cases it might be helpful to factor in Precision and Recall individually as well, but the F Score is a good, solid basis for evaluating machine learning models. To be precise, the F1 Score is the harmonic mean of Precision and Recall, and it is often regarded as the most useful variant when class distribution is unequal.
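To see why the harmonic mean is the right unifier, note that it punishes imbalance: a model lopsided between Precision and Recall scores far worse than a balanced one, even when the ordinary (arithmetic) average of the two is identical. A quick sketch:

```python
def f1(precision, recall):
    # Harmonic mean: dominated by the smaller of the two values.
    return 2 * precision * recall / (precision + recall)

# Both pairs have the same arithmetic mean of 0.5, but very different F1 Scores:
print(f1(0.5, 0.5))  # 0.5  -- balanced
print(f1(0.9, 0.1))  # 0.18 -- lopsided, heavily penalized
```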
At this point, you may scratch your head and think: this is pretty interesting and all, but how does it help a retailer like me? What does it have to do with assessing my staff?
Do not worry, we are getting close!
Introducing the EoS
To calculate staff effectiveness, we needed two ingredients: the right Staff interaction rate and the right Staff conversion rate. It goes without saying that both needed to be absolutely on the mark.
Thankfully, we already had them, owing to our proprietary software.
Here is where it gets interesting. Using the same formula as the F1 Score, we created the EoS (Effectiveness of Staff), a quantifiable compound metric that harmoniously blends two contradictory traits of retail staff. With this new hybrid metric, retailers could select the best framework for their staffing strategy going forward.
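Since EoS uses the same formula as the F1 Score, a minimal sketch follows, assuming the interaction rate slots in for recall (going wide) and the conversion rate for precision (going deep); the function name `eos` and the sample rates are illustrative, not Palexy's actual implementation:

```python
def eos(interaction_rate, conversion_rate):
    """Effectiveness of Staff, sketched as an F1-style harmonic mean.

    Assumption: interaction rate plays the role of recall and
    conversion rate the role of precision, per the blog's description.
    """
    if interaction_rate + conversion_rate == 0:
        return 0.0  # no engagement at all
    return 2 * interaction_rate * conversion_rate / (interaction_rate + conversion_rate)

# A staff member who engages 60% of shoppers and converts 30% of those engaged:
print(round(eos(0.6, 0.3), 3))  # 0.4
```

As with F1, going all-in on one rate while neglecting the other drags the score down, which is exactly the equilibrium-seeking behavior the metric is meant to reward.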
The EoS in action
The beauty of EoS lies in its simplicity. When we tested the EoS in numerous stores, a pattern emerged: the higher the EoS, the better the stores were doing in terms of overall conversion rate, sales, and customer satisfaction. A low EoS invariably meant something was amiss. Therefore EoS could be used as a new KPI for retail stores, laying the groundwork for more constructive adjustments to come.
The correlation between conversion rate and EoS, demonstrated in store B: as its EoS score rose, its conversion rate also increased.
Store A told a similar story: higher EoS = higher conversion rate
After six months of deploying EoS to various retailers, we have mapped all the possible correlations of a low EoS to see what exactly it means for a store.
Store X had the misfortune of both a low EoS and low traffic. Further examination revealed that the staff also had a low interaction rate. There was simply no excuse for the staff not interacting with customers given how few customers there were. In this case, store X's managers had some hard thinking to do about which course of action to take: replace the staff, retrain them, or change the direction of their hiring.
Store Y seemed to suffer from the same symptoms as store X: a low EoS and low traffic. But after some investigation, it turned out that the staff at store Y had a high interaction rate! The problem here lay either with the store itself or with the marketing department, both of which needed to step up to bring more visitors to the store.
Store Z had regular traffic but still a low EoS. The initial analysis showed that the staff had a low interaction rate with customers, and the follow-up confirmed that they also had a low conversion rate! (Store Z's personnel really seemed to drop the ball on this one.) This combination could indicate a staff shortage that left the store under-covered, poor sales skills, or both.
Store W was similar to store Z, except that the staff of store W had a high conversion rate. They were good at converting customers, but could not provide support to a lot of them. There could be two possibilities: a lack of staff, or the staff were simply not good at serving multiple customers. It was up to the managers of store W to decide whether to increase the number of staff, boost their training, or a combination of both.
Last but not least, store V had a low EoS despite regular traffic. The staff were diligent at serving customers, as shown by their high interaction rate, but their conversion rate was low. The most favorable interpretation was inexperience: the staff were energetic but lacking in skills, which could be rectified with some additional training and motivation.
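The five store scenarios above boil down to a simple decision table. The rule-of-thumb function below is purely illustrative: the thresholds behind "regular traffic" or a "high" rate would come from real benchmarks, and the boolean inputs are an assumption made for the sketch:

```python
def diagnose(traffic_ok, interaction_high, conversion_high):
    """Map a low-EoS store to one of the scenarios described above.

    Hypothetical helper: inputs are simplified booleans; a real system
    would derive them from measured rates and store benchmarks.
    """
    if not traffic_ok and not interaction_high:
        return "Store X pattern: staff not engaging -- replace, retrain, or rehire"
    if not traffic_ok and interaction_high:
        return "Store Y pattern: traffic problem -- store or marketing must step up"
    if traffic_ok and not interaction_high and not conversion_high:
        return "Store Z pattern: understaffed and/or weak sales skills"
    if traffic_ok and not interaction_high and conversion_high:
        return "Store W pattern: capable but stretched thin -- add staff or train for volume"
    if traffic_ok and interaction_high and not conversion_high:
        return "Store V pattern: energetic but inexperienced -- train for conversion"
    return "EoS looks healthy"
```

For example, a store with normal traffic, low interaction, and high conversion lands on the store W diagnosis.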
The correlation between the average conversion rate and EoS over time. The white panel showed the average conversion rate prior to EoS optimization. The light turquoise panel showed the average conversion rate when EoS-raising measures were applied. The dark turquoise panel showed the average conversion rate as EoS-raising measures were fully incorporated into the store processes.
The above scenarios have served as helpful guidelines for our clients, informing them of underlying problems with their stores and staff. After the root causes of low EoS scores were determined and fixed, our clients' revenues increased by at least 20% across the board.
We are confident in the promising potential of the EoS, but it is not the last of our inventions by any means. To us, data are not just numbers on a screen. Bent into the right shapes and viewed through the right lenses, they are revealing, vivid, dynamic, alive. At Palexy, we analyze, shake up, reconfigure, and play with data all day long. Bring your data to us today, and be prepared for the trove of wonders they surrender!