LA 2028 Olympic Survey FAQ

Earlier this year, NOlympics LA conducted an online survey to gauge public opinion of the 2028 Summer Olympics across California and within LA County. This was primarily a response to the overwhelming lack of public input and dialogue around LA’s bid for the 2024 and 2028 Games, as well as the lack of independent polling conducted by local media and research institutions.

To date, only three polls and surveys have been conducted around the 2028 Olympics: one commissioned by the bid committee, one commissioned by the IOC, and one commissioned by us. We do not consider any of them to be independent or an adequate substitute for meaningful dialogue with the communities who will be most affected and at risk.

The results of our survey were markedly different from those of the surveys commissioned by the bid committee and the IOC, showing that almost half of respondents in California and L.A. County oppose hosting the 2028 Summer Olympics in Los Angeles. You can read our analysis of the results here, and see some examples of open-ended responses from people who took the survey here.

In the spirit of continuing to provide as much information as possible to people who ask us questions, we wanted to share responses to some of the questions we got from supporters, journalists, and skeptics who saw the results of our survey but weren’t sure how to interpret them. We hope these answers are also helpful for evaluating the results of any survey or poll you see reported in the media or referenced anywhere else.

Q: What’s the difference between a poll and a survey?

A: Technically, a poll refers to a research instrument with one question, while a survey has multiple questions. In practice, they’re often used interchangeably. We’ll use “survey” moving forward in this FAQ, but most of the answers apply to all forms of polling and surveying.

Q: What makes a survey “valid” or “scientific”?

A: There are three key elements to a survey: the sample, the questionnaire design, and the method of data collection. The sample refers to the people who take the survey, and a “valid” or “scientific” survey is usually defined around having a “representative sample.” Any time you conduct a survey (except for a census), you’re essentially making an educated guess about an entire group of people based on how a few people from within that group respond, so if that sample is too small, you’re not going to get an accurate read on the larger group. Depending on how big the larger group is and what level of accuracy you’re aiming for, a scientifically valid sample size could be less than 100 people or more than 1,000.

We were looking at the population of California (39.5 million people), and our survey had 1,032 complete responses, which gave us a +/-3.1% margin of error at a 95% confidence level – well within conventional standards for scientific validity. For reference, the LMU/LA2028 survey from 2017 had a +/-4% margin of error at a 95% confidence level, which also meets that standard. There are other criteria for assessing how representative a survey sample is beyond size, including demographics, which we’ll get to in a moment.
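For readers who want to check the arithmetic, here is a minimal sketch of the standard margin-of-error calculation, assuming simple random sampling and the most conservative 50/50 response split; the sample size below is ours, but the snippet is an approximation rather than the exact weighted calculation.

```python
import math

def margin_of_error(n, z=1.96, p=0.5):
    """Approximate margin of error for a simple random sample.

    n -- number of complete responses
    z -- z-score for the confidence level (1.96 corresponds to ~95%)
    p -- assumed proportion; 0.5 gives the widest (most conservative) interval
    """
    return z * math.sqrt(p * (1 - p) / n)

# Our statewide sample had 1,032 complete responses.
print(round(margin_of_error(1032) * 100, 1))  # ~3.1 percentage points
```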

Q: How can a survey of 1000 people tell you anything about the entire state of California, especially if they’re all random? California is so diverse!

A: This is the other key component of a “representative sample” – does the small group you’re talking to generally reflect the demographic composition of the larger group? Samples that don’t meet this criterion are known as “skewed” – for example, a sample that’s 80% male. This is a ubiquitous concern and risk across all surveys, all methodologies, and pretty much any other form of research that involves talking to human beings – you can’t force people to take a survey or participate in research, which means you can’t design a perfect, objective, representative sample from scratch. There are a lot of great articles about what this means in practice (including this one), and also a lot of different tools that researchers have developed to make sure that the samples we get up front and the data we analyze on the back end are as representative as possible.

We conducted two smaller test surveys before running our 1,000+ person survey through SurveyMonkey to see how representative its audience panel was off the bat, without setting any parameters for demographics (which is very expensive). Respondents for online surveys are often more male and more white than the general population; to correct for that, you can give extra weight to responses from women (if women were underrepresented in the sample) or to Black/Latinx/Asian respondents (if the sample had a lower percentage of Black/Latinx/Asian people than the population of California or Los Angeles). We found that for both test surveys and our final survey, the demographics within the randomized sample were representative of basic demographics for both California and L.A. in terms of gender and race/ethnicity. Our final survey sample skewed very slightly young, so we reweighted the results for age, and doing so did not change the results.
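As an illustration of how this kind of reweighting works, here is a minimal sketch of simple post-stratification weighting on a single variable. The category labels and population shares below are placeholders, not our actual census benchmarks; real-world weighting (including the age reweighting described above) typically uses published population figures and sometimes several variables at once.

```python
from collections import Counter

def poststratification_weights(respondent_categories, population_shares):
    """Weight each respondent so the weighted sample matches the
    population's distribution for one demographic variable.

    respondent_categories -- one category label per respondent
    population_shares     -- dict mapping category -> share of the population
    """
    n = len(respondent_categories)
    sample_counts = Counter(respondent_categories)
    # weight = (population share) / (sample share) for each category
    return {
        cat: population_shares[cat] / (sample_counts[cat] / n)
        for cat in sample_counts
    }

# Hypothetical example: a sample that under-represents women.
sample = ["man"] * 600 + ["woman"] * 400
weights = poststratification_weights(sample, {"man": 0.49, "woman": 0.51})
print(weights)  # women get a weight above 1, men a weight below 1
```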

One criticism we would like to raise around the definition of “representative” is that it typically covers gender, age, and race/ethnicity but doesn’t include categories for marginalized communities and other minorities. For example, around 0.8% of L.A. City residents are unhoused, but a survey with zero respondents experiencing homelessness (including our survey and the LMU/LA2028 survey) would still be considered “representative.”

Q: Why did you survey people across California instead of just in L.A. County or City?

A: The first reason was cost. When you conduct an online survey and purchase randomized respondents, as we did through SurveyMonkey, the more parameters you set and the more specific those parameters are, the more expensive it gets. The second reason was that while there’s been almost no public dialogue around the Olympics within L.A., there has been absolutely no attempt by LA2028 to speak to or inform residents of California, who are on the hook for cost overruns, among other things. We believe it’s important to hear from everyone who stands to be affected.

We asked early on in the survey whether people lived in L.A. County, which allowed us to identify and separate out responses from L.A. County residents. In earlier test versions of the survey, we asked additional demographic questions, including whether people rent or own their homes and whether they primarily speak English or Spanish, but each question you ask during a survey has a cost in terms of time, attention, or money. We think having this information would be extremely valuable, but since we knew our sample was representative in terms of general demographics and would meet scientific standards, we decided to increase the sample size rather than include additional questions.

Q: What’s the most accurate way to conduct a survey and collect data? Is an online survey better or more trustworthy than a phone survey?

A: There’s no one best way to conduct a survey, and the decision to choose one method of data collection over another usually depends on budget, sample size, survey objectives, and the personal preference of the people designing the survey. The main methods of data collection are telephone (landline or cell phone), online, paper/mail, or a mix. The Pew Research Center has a helpful write-up of all the different ways to conduct a survey on its site here, as well as the pros and cons of each and how those considerations have evolved over time.

For example, landline phone surveys were considered the “gold standard” for a long time, because researchers could get the most randomized sample possible for pretty much any population, since everyone had a landline. But now an increasing number of people don’t have a landline, so conducting this type of survey can limit the type of people you’ll end up talking to, and you can wind up with a skewed sample. Calling people on their cell phones is one way of solving this issue, but it can lead to other complications – for example, people often answer their cell phones outside the privacy of their homes, which can affect how they respond to questions they may not feel comfortable answering honestly in public. In short, there is no “perfect” form of data collection that completely avoids the risk of bias.

Q: How did you find people to take this survey?

A: Most online surveys now use randomized pools of respondents provided by online polling companies. Survey designers pay the companies (Qualtrics, SurveyMonkey, Google) to provide random respondents of a certain quality (no bots, no people who race through surveys without paying attention, etc.). We used SurveyMonkey Audience for our survey, which the members of our group who designed the survey have also used in professional settings. It’s important to note that this is different from designing a survey in SurveyMonkey and then posting the link online or emailing it to a group of people you know and asking them to take it. The people who took our survey were random and non-self-selecting (meaning they didn’t see the survey in advance or know what it was about before they took it). We did post a link to the public version of the survey online after it had been completed by our 1,032 random respondents within California, but we did not collect or analyze any of those responses. Conducting a survey that way would be considered biased and unscientific, since anyone who saw the link and took the survey would necessarily have been following us on Twitter or signed up for our email newsletter – meaning they either already agree with our platform or are Olympic boosters hate-following us.

While online polling companies do not explain exactly how or where these panels are sourced, they do provide incentives (money, or sometimes in-game tokens for online games) to get people to respond. These respondents often have demographic information associated with them (likely drawn from the same sources that compile tracking information for online ads), which can be purchased for more money. This is also how focus group participants are typically recruited.

Q: How did LMU and LA2028 find respondents for their survey? Why didn’t you replicate their methodology?

A: LMU has a survey center that mixes phone and online responses. Phone polls are extremely costly, time-intensive, and labor-intensive. Typical response rates for phone surveys are around 9% and dropping – people have caller ID and don’t pick up calls from strangers, you can’t robo-dial cell phones, and younger people are much less likely to have landlines. To get the quality and size of sample we needed, we estimated that we would have had to make more than 10,000 calls.

In other words, with a phone survey you need a lot of people and a lot of work to make sure you hit the necessary sample size and get respondents who are demographically representative of the general population (more so than with online respondents). Universities can bear these costs because they generally employ students to do the calling – for free or very cheaply. That’s why so many political polls are associated with colleges and universities – cheap survey labor!
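To make the arithmetic behind the 10,000-call estimate above concrete, here is a back-of-the-envelope sketch. The ~9% response rate is an approximation, and the calculation ignores demographic screening, which would push the number of calls even higher.

```python
# Rough estimate of dial attempts needed for a phone survey.
target_completes = 1032  # completed interviews we would need
response_rate = 0.09     # ~9% of dialed numbers yield a completed interview

calls_needed = target_completes / response_rate
print(round(calls_needed))  # ~11,467 calls, before any demographic screening
```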

Q: Shouldn’t the LA Times or KCRW or USC be doing these types of surveys?

A: Well, yes. And we’ve tried to get them to do these surveys. For a variety of reasons – the defunding of newsrooms, conflicts of interest, and a lack of interest – they have decided not to. In contrast, Boston media outlets consistently polled residents for years leading up to and following the defeat of the Boston 2024 bid. Calgary may not have had the extensive polling Boston did, but at least some polling was conducted there. We recognize that one of the reasons local media outlets may have balked at conducting their own surveys is a fear that any results they found would be close to the results of the LMU/LA2028 survey – which would mean they wouldn’t have a story to publish. We hope that our survey results give them enough evidence to the contrary to push forward with their own research.

Q: Why did you include information about the Olympics first rather than just asking people their opinion of the Games straight out?

A: In this survey, we provided objective, factual information about the history of the Olympics, the bid process, the costs involved in previous bids, and the likely costs for Los Angeles, before asking people whether they support or oppose bringing the Games to Los Angeles. The goal with this type of approach isn’t to influence people’s opinions – it’s to gauge if and how people’s opinions shift as they learn more information and gain context, and to see if there are measurable differences between informed and uninformed opinions.

Consider: in one of the first questions on our survey, we did what LMU did in its survey commissioned by LA2028 and asked, without providing any context or information, “What three words do you associate with the Olympics?” This allowed us to ground and isolate people’s emotional feelings about and associations with the Olympics, and to contrast that with how they viewed the Olympics after learning about the impact the Games have had on previous host cities. We don’t see particularly greater value in knowing what people think of the Olympics in a vacuum, because people don’t make decisions or form opinions in a vacuum. Having a “before” and “after” comparison within our survey gives us a more contextual and nuanced understanding of why people answer a certain way. Knowing that someone opposes the Olympics because they just learned a certain fact about them doesn’t make their opposition any less real.

Q: Why didn’t you include any facts or information about how hosting the Olympics benefits cities?

A: It’s true that the facts about the Olympics we included were generally negative or neutral; this is mostly a reflection of the reality that there are almost no positive statements about the Games that are measurable or objective. For example, someone who reviewed our survey and results suggested that to make it more balanced, we could have included a positive statement about how the LA2028 Games might generate a surplus before asking people if they were concerned about cost overruns. We would consider this a hypothetical statement, compared to the factual statement that we did include prior to asking people if they were concerned about cost overruns (“According to a 2016 study conducted by Oxford University, the average Olympic Games go over budget by 156% and Summer Olympics go over budget by 176%.”).
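To be concrete about what “over budget by 176%” means, here is a quick worked example using a hypothetical $1 billion budget for illustration; this is not an actual LA2028 figure, just arithmetic on the Oxford study’s average.

```python
# "Over budget by 176%" means the final cost is the original budget
# plus 176% of that budget -- i.e. roughly 2.76x the original number.
budget = 1.0        # hypothetical budget, in billions of dollars
overrun = 1.76      # 176% average overrun for Summer Games (Oxford, 2016)

final_cost = budget * (1 + overrun)
print(final_cost)   # 2.76 -- about $2.76 billion on a $1 billion budget
```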

For more information on survey questionnaire design (i.e., how to phrase and frame questions), check out this post from the Pew Research Center.

Q: Was the survey conducted by LMU and commissioned by LA2028 completely objective and unbiased?

A: We don’t think the LMU/LA2028 survey question on support for or opposition to the 2028 Olympics violated any major standards or best practices, but we did have a few concerns, which are reflected in our own survey design. For reference, here’s their version of the question, followed by our version, with our changes explained below:

How supportive are you of the City of Los Angeles hosting the Summer Olympic Games in 2028: strongly supportive, somewhat supportive, somewhat opposed, strongly opposed?

How supportive or opposed are you of the City of Los Angeles hosting the Summer Olympic Games in 2028: strongly oppose, somewhat oppose, neutral, somewhat supportive, strongly supportive?

The first change we made was to include “or opposed” in the question in order to avoid response bias, which is a common practice for Likert scale questions like these (questions that measure opinion on a spectrum and gauge the intensity of that opinion, as opposed to a simple “yes/no” binary). In other words, the concern with questions phrased the original way is that people may be unconsciously primed to answer that they support the Olympics, since the question specifically asks how supportive they are (rather than whether they’re supportive at all).

We also added a midpoint option of “neutral” to our scale, which theirs did not include. Scales without a midpoint create a forced choice for respondents, whose actual feelings may not be well represented by either a positive or a negative response. This is one of the reasons we were always skeptical of the topline figures from the LMU poll and survey – how many of their “somewhat supportive” responses represent people who are simply apathetic about the Games versus truly supportive, particularly given the phrasing of the question?
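To illustrate why the midpoint matters when topline numbers get reported, here is a small sketch of how responses on a 5-point scale might be collapsed into topline support/oppose figures. The response counts are entirely hypothetical, included only to show the mechanics; they are not results from our survey or LMU’s.

```python
from collections import Counter

def topline(responses):
    """Collapse Likert responses into topline support / oppose / neutral shares."""
    counts = Counter(responses)
    n = len(responses)
    support = counts["strongly supportive"] + counts["somewhat supportive"]
    oppose = counts["strongly oppose"] + counts["somewhat oppose"]
    return {
        "support": support / n,
        "oppose": oppose / n,
        "neutral": counts["neutral"] / n,
    }

# Hypothetical responses on a 5-point scale that includes a neutral midpoint.
with_midpoint = (
    ["strongly supportive"] * 30
    + ["somewhat supportive"] * 25
    + ["neutral"] * 20          # apathetic respondents can say so
    + ["somewhat oppose"] * 15
    + ["strongly oppose"] * 10
)
print(topline(with_midpoint))  # {'support': 0.55, 'oppose': 0.25, 'neutral': 0.2}

# On a 4-point scale with no midpoint, those 20 "neutral" respondents are
# forced to pick a side -- which is why a topline "support" number from a
# forced-choice question can overstate genuine enthusiasm.
```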