vote coasters

the world’s largest annual roller coaster poll

How does it work?

There are two parts to vote coasters: voting and counting. Each year we ask the community to rank the roller coasters they’ve ridden from best to worst and, from that, produce a list of the top rides in the world!

voting

The voting process is simple. Search for the roller coasters you’ve ridden and drag them into your list. Rides are ordered from best at the top to worst at the bottom.

There’s no limit to the number of roller coasters you can rank and your list is automatically saved. You can return to your list at any time by logging back into your account.

counting

Once everyone has ranked the roller coasters they’ve ridden, it’s time for us to crunch the numbers. We use a pairwise ranking method to find the world’s best rides.

We compare every roller coaster with every other roller coaster. For each comparison, we count how many people prefer one ride over the other. The more comparisons a roller coaster wins - that is, the more rides it is preferred over by a majority of people - the higher it will rank.

an example

Sometimes it's easier to see an example; let's imagine we have four lists to analyse:

List 1
El Toro
Skyrush
Twisted Timbers
Fury 325
Maverick

List 2
Skyrush
Twisted Timbers
Maverick
El Toro
Fury 325

List 3
Twisted Timbers
Maverick
El Toro
Skyrush

List 4
Maverick
Twisted Timbers
Fury 325
El Toro

We’ll pick two rides to compare, Maverick and Skyrush. Now, let's see which ride ranks above the other in each list. Maverick appears above Skyrush in list 3, while Skyrush appears above Maverick in lists 1 and 2. List 4 doesn’t feature Skyrush, so a comparison can’t be made.

We can repeat this counting process for Maverick, comparing it against all four other rides:

Maverick (1) - (2) Skyrush

Maverick (3) - (1) El Toro

Maverick (1) - (3) Twisted Timbers

Maverick (2) - (1) Fury 325

Maverick ‘wins’ against two roller coasters and ‘loses’ against the other two. Its win rate, the number of ‘wins’ divided by the total comparisons, is 50%. We can calculate the win rate for every roller coaster and compare them. The ride with the highest win rate takes the number one spot.
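For the curious, the counting can be expressed in a few lines of code. This isn’t the production vote coasters code - just a minimal Python sketch of the pairwise method, using the four example lists above:

```python
from itertools import combinations

# The four example lists, each ordered best to worst.
lists = [
    ["El Toro", "Skyrush", "Twisted Timbers", "Fury 325", "Maverick"],
    ["Skyrush", "Twisted Timbers", "Maverick", "El Toro", "Fury 325"],
    ["Twisted Timbers", "Maverick", "El Toro", "Skyrush"],
    ["Maverick", "Twisted Timbers", "Fury 325", "El Toro"],
]

rides = {ride for ranking in lists for ride in ranking}
wins = {ride: 0 for ride in rides}
comparisons = {ride: 0 for ride in rides}

for a, b in combinations(rides, 2):
    # Count preferences using only the lists that contain both rides.
    a_votes = b_votes = 0
    for ranking in lists:
        if a in ranking and b in ranking:
            if ranking.index(a) < ranking.index(b):
                a_votes += 1
            else:
                b_votes += 1
    if a_votes == b_votes:
        continue  # a tie (or no shared lists): no majority either way
    winner = a if a_votes > b_votes else b
    wins[winner] += 1
    comparisons[a] += 1
    comparisons[b] += 1

# Win rate: comparisons 'won' divided by total comparisons made.
for ride in sorted(rides, key=lambda r: wins[r] / comparisons[r], reverse=True):
    print(f"{ride}: {wins[ride] / comparisons[ride]:.0%}")
```

Running this reproduces Maverick’s 50% win rate, with Skyrush and Twisted Timbers sharing the top spot on 75%.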

the detail

The example above shows that two separate lists are never compared directly. We don’t compare your ranking of Maverick with someone else’s; instead, we find the preference of everyone who has ridden the two rides we’re comparing.

To ensure one person can’t disrupt the results, at least 5 people are required per comparison of two roller coasters, so none of the comparisons in the example above would actually count. On top of this, a roller coaster requires 100 or more comparisons to be included in the results.

Unlike other roller coaster polls, roller coasters that don’t meet these requirements are excluded and don’t influence the outcome. We run vote coasters twice - first to determine which rides meet the requirements and second to find the results of the poll with the remaining roller coasters.
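As a rough sketch (the names and data shapes here are ours, not the production code), the eligibility filtering might look like this:

```python
def eligible_rides(head_to_heads, min_voters=5, min_comparisons=100):
    """Return the rides that meet the poll's requirements.

    head_to_heads maps a (ride_a, ride_b) pair to the number of voters
    whose lists contained both rides.
    """
    valid_counts = {}
    for (a, b), voters in head_to_heads.items():
        if voters < min_voters:
            continue  # too few voters for this comparison to count
        valid_counts[a] = valid_counts.get(a, 0) + 1
        valid_counts[b] = valid_counts.get(b, 0) + 1
    return {ride for ride, n in valid_counts.items() if n >= min_comparisons}

# Pass one finds the rides that meet the requirements; pass two re-runs
# the full count on those rides only, so excluded roller coasters can't
# influence the outcome.
```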

vote coasters is an annual poll. Each year, users are required to verify or update their rankings to be included. As a result, older rankings are removed to keep the data fresh. Despite this, over 1,000 users return each year to update their rankings and take part in the poll.

the challenges

No poll is perfect. Users often prioritise ranking their favourite roller coasters, which tend to be the higher-ranked rides in the results. Smaller and lesser-known roller coasters appear in fewer lists, reducing the accuracy of their rankings. As a result, the accuracy of the vote coasters poll decreases with rank - this is why we limit our results pages to the top 500 ranked rides.

The methodology above describes the results process for 95% of the roller coasters ranked within vote coasters. Well-known but less-ridden rides, such as many in Asia, suffer from a particular bias. The few who have ridden these rides often compare them against only the best roller coasters elsewhere in the world. To reduce this bias, a secondary methodology has been created.

reducing bias

To improve the accuracy of specific lesser-ridden rides, we first gather more data by combining rankings for every year of vote coasters. We use the most up-to-date ranking from each user to do this. As over 1,000 previous users take part each year, the combined data pool remains relatively fresh.
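A minimal sketch of that pooling step, again with invented names and data shapes, might look like this:

```python
# Pool rankings across years, keeping only the most recent list
# submitted by each user.

def latest_rankings(rankings_by_year):
    """rankings_by_year: iterable of (year, user_id, ranking) tuples,
    where ranking is a best-to-worst list of ride names."""
    latest = {}
    for year, user_id, ranking in rankings_by_year:
        # A newer year replaces any older ranking from the same user.
        if user_id not in latest or year > latest[user_id][0]:
            latest[user_id] = (year, ranking)
    return [ranking for _, ranking in latest.values()]
```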

Secondly, we reduce the requirements for ride comparisons. Only three people are required per comparison of two roller coasters; however, each ride must still have 100 or more comparisons to be included. As before, the poll is run twice to remove roller coasters that don’t meet the requirements.
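In terms of the filtering sketch above, that’s just the same function with relaxed thresholds:

```python
eligible = eligible_rides(head_to_heads, min_voters=3, min_comparisons=100)
```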

This larger dataset produces more comparisons. Despite this, many lesser-ridden rides still feature far fewer comparisons than popular rides. For example, Python in Bamboo Forest has just 190 comparisons, compared to 1,052 for Steel Vengeance. To make the lesser-ridden rides comparable, we need to estimate the missing comparisons.

We can do this using a logistic regression model. The model estimates the probability of a ride winning against another. Plotted against opponent rank, this is a curve that sits near 0 against the top-ranked rides and plateaus towards 1 against lower-ranked ones. The earlier the curve climbs, the closer to #1 the ride’s estimated rank will be.

The logistic regression model is trained on roller coasters with plenty of data, like Steel Vengeance. We take a subset of the vote coasters data and use it to estimate the ranking of Steel Vengeance, then compare that estimate with the ride’s real rank. Several model variables are tweaked until the two ranks match as closely as possible.
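As a toy illustration (the data points here are invented, and we fit the curve with simple least squares rather than a full logistic regression), the idea looks something like this:

```python
import numpy as np
from scipy.optimize import curve_fit

def win_probability(opponent_rank, midpoint, steepness):
    # Logistic curve: near 0 against top-ranked opponents, plateauing
    # towards 1 against lower-ranked ones. `midpoint` is the opponent
    # rank where the probability crosses 0.5.
    return 1.0 / (1.0 + np.exp(-steepness * (opponent_rank - midpoint)))

# Hypothetical observations for a lesser-ridden ride: the rank of each
# opponent it was compared against, and the fraction of voters who
# preferred it in that head-to-head.
opponent_ranks = np.array([3.0, 10.0, 25.0, 40.0, 80.0, 150.0, 300.0])
win_fractions = np.array([0.10, 0.20, 0.40, 0.55, 0.80, 0.90, 0.95])

# Fit the two curve parameters to the observed win fractions.
(midpoint, steepness), _ = curve_fit(
    win_probability, opponent_ranks, win_fractions, p0=[50.0, 0.05]
)

# The midpoint is the opponent rank this ride beats half the time -
# a natural estimate of the ride's own rank.
print(f"estimated rank: #{midpoint:.0f}")
```

Tuning then amounts to repeating this for data-rich rides like Steel Vengeance, where the real rank is known, and adjusting the model until the estimates land close to the truth.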

The result is estimated ranks for lesser-ridden rides that sit more fairly alongside the popular roller coasters in the vote coasters results. With these improvements, Dinoconda’s rank rose from #101 to #32, several positions above X2, a more popular near-clone of the ride.