
Writing Blog: What good are bad reviews?

Tuesday, May 3, 2016 - 08:00

Usually I like to focus this blog on the creative part of the writing process, but I'm in an unusual pause at the moment so I thought I'd talk about the analytic end. I know the common wisdom in mainstream publishing is that an author should pay no mind to reviews and ratings. At most, we should do comic readings of our one-star reviews to show how little we care. (Only cry in private behind locked doors.) So this essay isn't really for anyone whose book came out from a major publisher. But I have this weird bi-cultural existence, suspended between what I consider my "home" writing community (mainstream SFF) and the community in which I was published (small-press/self-published lesbian fiction), so I get a lot of opportunities to compare and contrast. This essay is for people who don't have mainstream publication and for people who may be bewildered by some of those cultural differences.

There is major anxiety within the LesFic community (quite possibly within all marginal publishing communities) around the crowd-sourced rating-and-review sites like Amazon and Goodreads. A big reason for this anxiety is that it's what they (we) have: they don't get a big publicity blitz, they don't get bookstore placement, they don't get advance reviews in all the highly-respected sites. What they do get is an aggregate of individual reader opinions when those readers are motivated enough to post them.

In my experience, this anxiety is expressed in two major ways. The first is a strong community pressure that readers--that is, readers "within the community"--should only ever review and rate books that they absolutely love, and therefore that they will rate highly. This philosophical position is expressed explicitly by many LGBT review sites and in social media forums for LGBT book communities. That anxiety walks hand in hand with the second: the tendency to react to less-than-perfect reviews as a personal attack. Given a supposed "community standard" of only reviewing/rating books you love, rating a book badly (where "badly" is anything less than a four-star review…or sometimes less than a five-star one) gets interpreted as an act of personal malice against the author: either the reviewer is deliberately giving a false opinion (because, of course, the book must be objectively excellent!), or, even if they genuinely didn't care for it, the act of publicly expressing that opinion could only have come from malice.

Viewed from within the community (and it is very much an expression of the assumption that the reader/writer/publisher nexus is a community whose purpose is to support each other against the world), this can look a lot more reasonable than it does from outside the community--where it tends to look fairly toxic.

But beyond the damage to the usefulness of ratings/reviews when only glowing opinions are authorized, there is damage to authors' perceptions of their own work. Express skepticism of the usefulness of all-five-star ratings and some authors will loudly proclaim that their book is so great that of course it earned all those five-star ratings.

No. I believe that almost every book can earn some genuine and sincere five-star ratings. But no book is universally beloved. Let me repeat that with emphasis: NO BOOK IS UNIVERSALLY BELOVED.

Because I wanted to throw some data at this essay, I took a look at Amazon for the top 100 sellers in Historical Fantasy and the top 100 sellers in Lesbian Romance. You know who has spent a very long time in the top 10 books sold in Historical Fantasy? Diana Gabaldon's Outlander. Do you know how many one-star reviews Outlander has on Amazon? 749. Seven hundred and forty-fucking-nine one-star reviews (4% of the total). No book is universally beloved.

Do you know what unfavorable reviews and ratings mean? They mean that your book is engaging readers who are outside your narrow inner-core target audience. Not just that it's reaching them, but it's engaging them sufficiently to express their opinion in public. And up to a certain point (I'll talk about that point later) the more reviews you get, the lower your average rating is. Because the more people you engage, the more likely you are to engage people who may have liked your book but didn't absolutely love it. Sure, that hurts. It would be nice to be universally beloved. NO BOOK IS UNIVERSALLY BELOVED.

Ah, but I'm a data person, so I ask myself, is it possible to quantify to what extent a less-than-perfect average rating reflects getting your book in the hands of people outside your core target audience? Let's see.

I went through the top 100 Amazon sellers in the Historical Fantasy category and recorded the number of reviews and the average star rating. Then I calculated the average number of reviews at each rating. I mention Diana Gabaldon above because I ended up pulling her out as her own little category for the analysis. And then I plotted those pairs of data.
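If you want to reproduce this kind of tally yourself, here's a minimal sketch in Python of the aggregation and plotting step. The recording itself was done by hand from the Amazon category pages; the numbers in the books list below are placeholders for whatever you collect, not my actual data.

```python
# Minimal sketch of the aggregation step: average the review counts
# at each average-rating point and plot the result.
# The (rating, review count) pairs below are placeholders, not real data.
from collections import defaultdict

import matplotlib.pyplot as plt

books = [
    (4.3, 3891),
    (4.6, 1243),
    (4.8, 212),
    (5.0, 4),
    # ...one pair per book in the top 100...
]

# Group review counts by average rating, rounded to one decimal place.
by_rating = defaultdict(list)
for rating, n_reviews in books:
    by_rating[round(rating, 1)].append(n_reviews)

# Average number of reviews at each rating point.
ratings = sorted(by_rating)
avg_reviews = [sum(by_rating[r]) / len(by_rating[r]) for r in ratings]

plt.plot(ratings, avg_reviews, marker="o")
plt.xlabel("Average star rating")
plt.ylabel("Average number of reviews")
plt.title("Historical Fantasy top 100 (placeholder data)")
plt.show()
```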

[Graph: ratings vs. number of reviews, historical fantasy]

I had to put Diana Gabaldon on a separate y-axis that differed by nearly an order of magnitude from the rest of the data. But here's the take-away: from an average rating of around 4.3 on up, a lower rating correlates with greater engagement (expressed as the overall number of reviews). This holds true for the overall average of that top 100 and it holds true in the specific case of Diana Gabaldon. The books that had only five-star reviews? (And keep in mind, these are ones that are currently in the top 100 sellers of the category.) None of them had more than four (4) reviews. Your first, most engaged reviewers are quite naturally going to be people from your core target audience. But an extremely high average is a sign that you haven't expanded far beyond that (yet).

Now let's take a look at the same sort of data for the Amazon category of lesbian romance. (In this case, I looked specifically at Kindle sales because for small press books the dynamics of e-book versus paper are peculiar.) I also cut the data off at a rating of 4.0, not only to compare better with the historical fantasy data (for which that was the lowest average rating) but because the values below that represented only one or two books each and so are less reliable for trending purposes.

[Graph: ratings vs. number of reviews, lesbian fiction]

And what do we find? Pretty much the same thing. In this case, for ratings of 4.4 and up, the average rating correlates very closely with the average number of reviews at that rating: the lower the rating, the more reviews. No book that had an average five-star rating had more than five (5) reviews. Interestingly, if you look at the plot of the maximum number of reviews at each average-rating point, you get the same effect: a very strong correlation between a higher number of reviews and a lower average rating.
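And if you want to put a number on "very strong correlation," a quick Pearson's r on the (rating, review count) pairs does the job. Here's a sketch; again the numbers are placeholders, not the real data set, and I take the log of the review count because the counts span several orders of magnitude.

```python
# Sketch: quantify the correlation between average rating and engagement.
# Placeholder (rating, review count) pairs for ratings of 4.0 and up.
# Requires Python 3.10+ for statistics.correlation.
import math
import statistics

pairs = [
    (4.4, 310), (4.5, 180), (4.6, 95), (4.7, 60),
    (4.8, 22), (4.9, 9), (5.0, 5),
]

ratings = [r for r, _ in pairs]
log_reviews = [math.log10(n) for _, n in pairs]

# Pearson's r; a value near -1 means lower ratings go with more reviews
# across this range.
r = statistics.correlation(ratings, log_reviews)
print(f"correlation between rating and log(review count): {r:.2f}")
```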

Now, of course, at some point this effect breaks down: a lower average rating does start to reflect people's opinion of the specific book, even in the aggregate. That point seems to fall somewhere in the lower 4's, depending on the data set. And these trends describe aggregate behavior; it doesn't mean that there's no difference at all between a 4.4 rating and a 5.0 rating. What it means is that the meaningful difference between a book that has a 5.0 rating with X number of reviews, and a book in the same marketing category with a 4.4 rating and 50X number of reviews, is not necessarily one of quality. What it means is that the second book is reaching outside its core audience. And is engaging them.

As someone who has been working very hard to reach outside what my publisher believes to be my core target audience, this is what I keep reminding myself when I get a "meh" rating or review. It means that I've succeeded.
