Rotten Tomatoes is a Battered Beefsteak!
Anyone who has seen or heard about the film Batman v. Superman has undoubtedly noticed its failure to resonate with critics and its subsequent box office disappointment. On the other side of that coin, Captain America: Civil War was considered a rousing success, thanks to its critical acclaim and box office achievement. As with any film, critics are paid to watch a movie and provide their opinion of it so that moviegoers have some idea of whether it is worth spending their hard-earned money to view. There are websites dedicated to this specific endeavor, and I’d like to provide readers with a review of one of them: Rotten Tomatoes. This website houses the reviews of thousands of critics and attaches percentages and numerical values to them using its proprietary “Tomatometer”. It also gathers the opinions of the general audience, which it quantifies using a rating system. Many people use this site to support their opinion of a movie’s success or failure, and based on the data I’ve uncovered, there’s quite a bit of faulty logic in doing so.
As a high school teacher, I’m tasked with grading a great many writing assignments in a year’s time. I see approximately 160 different students each year, and each is required to submit approximately 10 writing assignments. That means I’m responsible for grading approximately 1,600 essays, projects, and portfolios each year, which inherently makes me a critic of sorts. To facilitate that process I use a grading rubric, as most of us in the profession do. It provides not only a faster way to grade a large number of assignments, but also gives students a clear idea of what they’re being graded on and tells them their level of performance on a given task. The reason I mention this is that when I grade a specific project, I use the same rubric for every student who submitted it. The critics on Rotten Tomatoes are likewise “grading” films they’ve viewed. Unfortunately, they aren’t using any discernible rubric that tells audiences how they’ve scored a movie. Some critics use a 4-point scale. Others use 5- or even 10-point scales. Still others use letter grades, ranging from A+ to F, with no numerical value attached. And there are numerous critics who don’t provide a score at all; they simply indicate whether they feel the movie is Fresh or Rotten. I find this system highly troubling and disconcerting! I can’t imagine the uproar if I graded the same assigned project from two students using different rubrics on different scales: I tell student A that their project is worth 4 points, but student B’s is worth 5. Student C gets 10 points to work with, and student D will earn a letter grade. Even when two different teachers grade the same project, they should use the same rubric so students understand what the evaluator is attempting to judge. The margin for error on a 4-point scale is slim to none, whereas a 10-point scale allows more leeway. This is essentially what Rotten Tomatoes is doing, and in my estimation it is a situation that needs to be rectified.
Unfortunately, there are other issues I have with Rotten Tomatoes and its scoring systems. The “Tomatometer” is the score most people use to determine whether a movie is good or not. It is calculated by dividing the number of reviews deemed “fresh” by the total number of reviews critics have submitted. For example, as of this writing, Batman v. Superman has had 332 reviews submitted, of which 91 are fresh, for a 27% “rotten” rating. By contrast, Captain America: Civil War has had 306 reviews submitted, of which 274 are fresh, for a 90% “fresh” rating. There are several things not factored into these numbers on which people base much of their judgment. For instance, if one were to use the average critic rating for Batman v. Superman, the score would be 4.9/10, or 49%: still not “fresh”, but a sizeable increase of 22 percentage points over the Tomatometer score at which it is currently listed. Captain America: Civil War’s average critic rating is 7.6/10, or 76%, a 14-point drop from its Tomatometer score. I honestly believe the average rating should be the one used to show the overall consensus of the critics’ ratings, because it accounts for every score and ignores the more arbitrary “fresh” or “rotten” moniker. The major discrepancy I found is that of Batman v. Superman’s 241 “rotten” reviews, approximately 84 (35%) didn’t provide a score or rating at all. Another 36 (15%) of those “rotten” reviews actually scored the film at 60% or above, the threshold Rotten Tomatoes requires for a movie to be considered “fresh”. Unfortunately for Batman v. Superman, those 36 reviews that scored the movie at 60%, or in some cases higher, still gave it a “rotten” rating. How can the site possibly allow this to occur? It is nonsensical for Rotten Tomatoes to use the 60% mark as its threshold for a fresh score while the critics whose scores feed the Tomatometer percentage can assign that same score and still call the film rotten. The definition of the Tomatometer, according to the Rotten Tomatoes website, is “the percentage of approved Tomatometer critics who have given the movie a positive review.” If no score is given, do we really know whether the review is positive or negative, outside of a “fresh” or “rotten” indication? As I’ve shown, there are many instances in which scores are “positive” according to the 60%-or-above threshold, yet the reviewer still marks the movie rotten. There needs to be a standard way to score movies on Rotten Tomatoes to give moviegoers a more accurate depiction of the information being presented.
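For readers who want to see the math, here is a minimal sketch in Python of the two calculations, using the figures quoted above. The function names are my own for illustration, not anything Rotten Tomatoes publishes:

```python
# Minimal sketch contrasting the Tomatometer with a simple average rating.
# Figures are those quoted above; function names are my own invention.

def tomatometer(fresh_reviews: int, total_reviews: int) -> float:
    """Percentage of reviews marked 'fresh' -- the Tomatometer approach."""
    return 100 * fresh_reviews / total_reviews

def average_rating_pct(avg_score: float, scale_max: float = 10) -> float:
    """Convert an average critic rating (e.g. 4.9/10) to a percentage."""
    return 100 * avg_score / scale_max

# Batman v. Superman: 91 fresh out of 332 reviews, average rating 4.9/10
print(tomatometer(91, 332))     # ~27.4% -- the "rotten" headline number
print(average_rating_pct(4.9))  # 49.0%  -- the average tells a softer story

# Captain America: Civil War: 274 fresh out of 306, average rating 7.6/10
print(tomatometer(274, 306))    # ~89.5%, rounded to 90%
print(average_rating_pct(7.6))  # 76.0%
```

Same inputs, two very different headline numbers: that gap is the entire basis of my complaint.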
Another somewhat baffling issue is the “Audience Score” on the Rotten Tomatoes site. The audience score, defined by Rotten Tomatoes as “the percentage of users who have rated this movie 3.5 stars or higher”, is listed at 67% for Batman v. Superman, from the 209,618 people who gave it a score. However, the average rating from the audience is 3.6/5, or 72%. Captain America: Civil War’s audience score is 91% from the 138,084 people who scored it, yet its average rating is 4.4/5, or 88%. Again, why wouldn’t the average score be used? Isn’t that a better indicator of the overall opinion of a film? My other question is why a standard 5-star scale is required of the audience but not of the critics. What is good for one should be good for the other.
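The divergence is easy to reproduce. Here is a toy example with made-up star ratings (not real Rotten Tomatoes data) showing how a threshold-based Audience Score and the average rating can tell different stories about the same voters:

```python
# Toy illustration (hypothetical star ratings, not real Rotten Tomatoes data)
# of how the "Audience Score" and the average rating can diverge.

ratings = [5, 5, 4.5, 4, 3.5, 3.5, 3, 2.5, 2, 1]  # made-up 5-star ratings

# Audience Score: share of ratings at 3.5 stars or higher
audience_score = 100 * sum(r >= 3.5 for r in ratings) / len(ratings)

# Average rating, converted to a percentage of the 5-star maximum
average_pct = 100 * (sum(ratings) / len(ratings)) / 5

print(f"Audience Score: {audience_score:.0f}%")  # 60% -- rated 3.5+ stars
print(f"Average rating: {average_pct:.0f}%")     # 68% -- mean of all votes
```

The same ten voters produce a 60% Audience Score but a 68% average, simply because the first number throws away everything except which side of 3.5 stars a rating falls on.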
It is my humble opinion that there should be a streamlined rubric that critics must use in order to have their reviews posted on the Rotten Tomatoes site. I wouldn’t want one person grading my work on a different scale from the next person. And if the site wants to continue using the “fresh” or “rotten” indicator, it should require consistency: if a critic rates a film at 60% or above (3/5 stars, 6/10 points, or the equivalent on whatever scoring system is chosen), the review should have to be fresh; if the score falls below that, it is rotten. Personally, I find the average critic score a better indicator of the overall opinion of a movie; critics could then continue to use different scales, and moviegoers would still get an idea of whether the film is worth seeing. Maybe Rotten Tomatoes simply feels the “Tomatometer” is more exciting and draws more views than average scores would. Perhaps they’ve adopted the Siskel and Ebert thumbs-up-or-thumbs-down approach, but in the process they’ve made a mess of data that is easily compiled and could make for a more cohesive reviewing experience for film viewers.
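To show how simple such a consistency rule could be, here is one possible version sketched in Python. This is my own sketch, not Rotten Tomatoes’ method, and the letter-grade mapping is an assumption made purely for illustration; any agreed-upon table would serve:

```python
# One possible consistency rule (my own sketch, not Rotten Tomatoes' method):
# normalize every critic's score to a percentage, then apply the site's own
# 60% threshold so a score and its fresh/rotten label can never disagree.

# Assumed letter-grade mapping, purely for illustration.
LETTER_GRADES = {"A+": 98, "A": 95, "A-": 92, "B+": 88, "B": 85, "B-": 82,
                 "C+": 78, "C": 75, "C-": 72, "D+": 58, "D": 55, "D-": 52,
                 "F": 40}

def normalize(score, scale_max=None):
    """Convert a raw score (numeric on a given scale, or a letter grade)
    to a 0-100 percentage."""
    if isinstance(score, str):
        return LETTER_GRADES[score]
    return 100 * score / scale_max

def verdict(score, scale_max=None):
    """Apply the 60% fresh threshold consistently across all scales."""
    return "fresh" if normalize(score, scale_max) >= 60 else "rotten"

print(verdict(3, 5))    # 3/5 stars -> 60.0% -> fresh
print(verdict(2.5, 4))  # 2.5/4     -> 62.5% -> fresh
print(verdict(5, 10))   # 5/10      -> 50.0% -> rotten
print(verdict("B-"))    # B-        -> 82%   -> fresh
```

Under a rule like this, the 36 Batman v. Superman reviews scored at 60% or above could never have been counted as “rotten” in the first place.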