Customer Reviews Outed as Marketing Vehicle
Since at least the aughts, marketers have known that asking customers to review the products and services they’ve just purchased is a good business practice for many reasons — including beneficial SEO. However, those reviews don’t guarantee consumers quality products and services, according to the Harvard Business Review.
On Monday, one of the authors of the July 4 HBR article, titled “High Online User Ratings Don’t Actually Mean You’re Getting a Quality Product,” discussed the piece on public radio.
Philip Fernbach, assistant professor at the Leeds School of Business at the University of Colorado Boulder, and two other authors wrote: “Online ratings may not reflect a product’s quality at all. There are a whole host of issues with user ratings — assuming they are even authentic. These can be divided into three categories: statistical, sampling, and evaluation.”
On Monday, Here & Now's Jeremy Hobson complained about getting repeated emails from businesses asking him to review what he viewed as quotidian products and services. Often, he says, he doesn’t care enough about a purchase to bother reviewing it. He also dislikes retargeting ads that show him products he has already bought.
Fernbach explains that email is cheap for marketers to send and analytics enable the retargeting. Hobson didn’t interview any marketers during the radio segment.
In the HBR piece, here’s what Fernbach and his co-authors said may be wrong with customer reviews:
Only a subset of customers respond. [Author’s note: The article cites this as a research-based “flaw” that many marketers may say they’re unable to control; as in, they can’t control who responds. However, during the radio program, Fernbach says marketers may only be asking happy customers, for instance, for reviews. That does go against best practices, which I’ve cited repeatedly in my articles — ask everyone for reviews and don’t delete the negative ones.]
Fernbach’s article says this is the problem with the responding subset: “The average rating from this sample does not perfectly coincide with the average rating we would have obtained if all product users had left a review. We can be more confident in an average star rating if the sample size is large and if the variability of the distribution of ratings is smaller (i.e. if different reviewers tend to agree). Unfortunately, sample sizes are often not large enough for statistical comfort. Variability also tends to be high for multiple reasons, including random noise. A reviewer may rate the wrong product or leave a low rating due to a complaint about shipping, for instance, which has little to do with the product itself.”
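The statistical point in that quote can be sketched in a few lines: the confidence interval around an average star rating widens as the sample shrinks or the ratings grow more polarized. The numbers below are invented purely for illustration, not drawn from any real product data.

```python
import statistics

def rating_confidence(ratings):
    """Mean star rating plus a rough 95% confidence interval.

    Illustrates the HBR authors' point: confidence in an average
    rating grows with sample size and shrinks with variability.
    """
    n = len(ratings)
    mean = statistics.mean(ratings)
    # Standard error of the mean: sample std dev / sqrt(n)
    se = statistics.stdev(ratings) / n ** 0.5
    # A rough 95% interval (normal approximation)
    return mean, (mean - 1.96 * se, mean + 1.96 * se)

# A small, polarized sample: the interval around the mean is wide
small = [5, 5, 5, 1, 1]
# A larger sample with the same mean: the interval narrows
large = [5, 5, 5, 1, 1] * 20

m_small, ci_small = rating_confidence(small)
m_large, ci_large = rating_confidence(large)
```

Both samples average 3.4 stars, but the five-review sample's interval is several times wider, which is exactly why a handful of reviews offers little "statistical comfort."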
Even allowing for positive and negative reviews to coexist, the HBR piece says the reviews still aren’t random enough.
“Consumers with extreme opinions are more likely to post reviews, which is referred to as a ‘brag-and-moan’ bias,” the HBR article says. “As a consequence, many rating distributions are J-shaped with mostly 5-star ratings, some 1-star ratings, and hardly any ratings in between. Positive ratings also increase the likelihood of later positive ratings.”
[Author’s note: Marketers who accept all customer reviews may have less of a problem with this issue.]
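A toy calculation shows how the brag-and-moan bias skews the visible average. All the counts and posting rates below are assumptions for illustration: most buyers hold middling opinions, but the extremes are far more likely to post.

```python
def weighted_mean(counts):
    """Mean star rating for a {stars: count} histogram."""
    total = sum(counts.values())
    return sum(stars * n for stars, n in counts.items()) / total

# Hypothetical "true" opinions of 100 buyers (counts per star level)
all_buyers = {1: 5, 2: 15, 3: 40, 4: 30, 5: 10}

# Assumed posting rates: extreme opinions post far more often
post_rate = {1: 0.50, 2: 0.10, 3: 0.05, 4: 0.10, 5: 0.50}

# Reviews that actually get posted under those rates
posted = {stars: n * post_rate[stars] for stars, n in all_buyers.items()}

true_avg = weighted_mean(all_buyers)   # what every buyer thinks
observed_avg = weighted_mean(posted)   # what the star rating shows
```

Under these made-up numbers the posted reviews are dominated by 5-star ratings, with a cluster of 1-star ratings and little in between, and the observed average drifts above the average opinion of all buyers.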
The HBR article says customers may not be qualified to review products and services. [Author’s note: Marketers who have customer-centric viewpoints doubtless disagree with this assessment.]
“Accurately evaluating product performance requires a scientific approach,” reads the article. “Alternatives need to be tested side-by-side under the same conditions, and objective performance measured with sophisticated and often expensive instruments. Users who post reviews do not have the knowledge, equipment and time to assess product performance in this way.”
The HBR piece says that customers may be more swayed by a car seat’s brand image, price and physical appearance than its safety and reliability, for instance.
What do you think, marketers?
Please respond in the comments section below.