by J.D. Rhoades
Love ’em or loathe ’em, they’re part of any working writer’s life. Back when I was in dead-tree publishing, the joy of seeing a new book released was always tempered at least a little by the dread of opening the Publishers Weekly or Kirkus website and praying they hadn’t savaged it too badly. I even left a perfectly good beach house on a lovely sunny day to drive into town and find a café with WiFi (smartphones with ’net access weren’t everywhere in those days) to check out the PW review of Breaking Cover that was coming out that day.
To my relief, it was a good review, and the majority of mine in various publications have been generally positive, although the aforementioned Kirkus did always seem to find a way to kick me in the teeth, even in a “good” review.
Like everything else in this business, the review landscape has changed with bewildering swiftness over the last few years. One newspaper after another dropped its review section. Kirkus folded, then was bought and resurrected with a “pay for reviews” model, all the while swearing that you weren’t necessarily paying for a good review. Professional book reviewers became rarer and rarer, even as website after blog after Tumblr sprang up offering the opinions of everyday readers. And, of course, people turned to the Amazon reviews on a book’s page and to sites like Goodreads.
So is this a good thing? Well, as with so many things, the answer is, “it depends.” I’m a great believer in the idea that the more voices get heard, the better. On the other hand, not all voices are created equal. Most amateur reviewers are thoughtful readers who can clearly and cogently express what they find good or bad about a particular book in such a way that the reader of the review can make up their mind about whether to try it. Some reviewers, particularly anonymous ones, seem to be in a contest to see who can be the meanest or most cutting. And some are just batshit insane. That’s the Internet for you.
In addition, it soon became obvious that the Amazon review system was childishly easy to game. In 2012, a furor erupted when investigative work revealed that thriller writer R.J. Ellory had been using “sock-puppet” accounts—false names and internet personas—not only to give his own work glowing reviews, but to attack the works of others. Fellow Brit Stephen Leather asserted defiantly that not only had he used sock-puppet accounts to promote his own work, but that doing so was “common practice.” A backlash ensued, during which authors (including myself) signed a pledge not to use such tactics, followed by a counter-backlash from writers like Barry Eisler, who, even though he’d signed the pledge himself, wrote that upon reflection it was “disproportionate,” and that the document itself was “devoid of evidence and argument, relying instead only on an unsupported conclusion that purchased reviews and sock puppet reviews are ‘damaging to publishing at large.’” It should be noted that Eisler was not himself promoting sock-puppetry; he just had a problem with how it was being addressed in this instance.

Meanwhile, Amazon went on a deletion frenzy, removing thousands of reviews that seemed to come from friends or family members of the authors, or even from fellow writers. They did not, however, delete reviews from people who had clearly not read the book, stating that “We do not require people to have experienced the product in order to review.” Well, then. Glad to see they care about the integrity of the review process.
Wait, it gets worse. Social science researchers are now confirming that people’s evaluation of a work is inevitably influenced by the evaluations they see beforehand. In one experiment, researchers “allowed people to download various songs and randomly assigned people to see the opinions of others who had downloaded these songs. Sometimes a particular song was shown to be well-liked by the masses, and in other versions of the study, that same song was shown to be disliked. Regardless of quality, people evaluated the songs they believed to be well-liked positively and the songs they believed to be disliked negatively.” In another, researchers went to a Reddit-like website where comments could be “up-voted” or “down-voted” by clicking a button. They up-voted some comments and down-voted others at random, and discovered what they called “significant bias in rating behavior and a tendency toward ratings bubbles.” In plain English, up-votes tended to create more up-votes, and down-votes more down-votes. Interestingly, “people also ‘corrected’ the down-voted comments by up-voting them more than baseline levels, but even this correction never spurred them to the level of positivity that artificially up-voted comments attained.”
So what do we make of all this? In a world where self-publishing is exploding, Sturgeon’s Law (“90% of everything is crud”) applies, and professional reviewers are being supplanted by talented amateurs mixed in with trolls, lunatics, and sock-puppets, who do you trust? In a market flooded with material that desperately needs curation, how do you make decisions when any stranger can be a curator? I have some thoughts of my own, but let’s hear from the Thalians…