Facts or Opinions?: Surveys as a Method to Democratize Research
- Libby Maman
Surveys are one of the most widely used research strategies today, because they provide deeper insight into “abstract” concepts that are difficult to operationalize. They offer a geographically wide mechanism for widespread participation, capture diverse perspectives and experiences, and can even produce statistically significant results about public opinion and perceptions. These traits are particularly vital in the context of democratization, where understanding the attitudes and values of a population through its active and, above all, voluntary consent is imperative for accurate and critical thinking.
We mention this because an increasingly digitized world does not necessarily translate into a democratization of information sources. First, access to “Big Data” is not universal: according to Robert Opp, only 60% of the world’s population has stable access to the internet, and nearly 3 billion people were still unconnected in 2023, according to the World Bank. Treating the internet as an inexhaustible source of data for survey design therefore means excluding a huge potential sample of people whose valuable participation is limited for technical reasons rather than because of their profile.
Second, even where access exists, information quality is increasingly commodified. As Becerril Gil argues, power asymmetries allow database owners to impose restrictive terms, monetize high-quality data, and indefinitely retain users’ personally identifiable information, often without full transparency or consent. The result is a paradox: individuals go from being “ad hoc” consumers of information to being integrated into the database itself, without full knowledge or consent, even after discontinuing the service. In other words, the user goes from consuming the product to becoming part of the product.

Whether for practical or moral reasons, surveys depend on the intentional participation of individuals as active voices with agency in creating comprehensive knowledge, rather than treating them as passive sources of information. This distinction is foundational to any democracy: if governance is to reflect the will of the people, then research methods must prioritize consent, representation, and dialogue over extraction and inference. Below, we compare the benefits and disadvantages of both research approaches in order to argue why people, not algorithms, must define the narratives of their own lives.
Surveys and interviews
According to De Vaus, surveys are one of the most important tools for descriptive research, which “deals with questions of ‘what’ things are like, not ‘why’ they are that way”. For instance, survey-based research on political polarization in the US tends to focus on indicators like “public opinion”, “voting intentions”, or “political ideology”. Surveys, in other words, capture people’s experiences, opinions, and perceptions of a given phenomenon: variables that are difficult (though not impossible) to operationalize. In the healthcare sector, hospital surveys have driven evidence-based improvements; Yessis et al. developed “a robust instrument to assess participants’ perceptions of their research experiences and the fulfillment of the principles of human research protections”. What is more, surveys do not only gather data: sociological studies built on them can identify social structures and their impact on the daily life of a population. The Indian National Sample Survey, for example, exposed how the caste system leads to discrimination in the labour market, findings that can shift popular opinion and behaviour. Competent description makes it more difficult to deny the existence of problems.
All these cases share a common factor: active participation is the backbone of the research. “Voluntary participation” by “fully informed” participants, who know the procedures involved, any potential risks, and how their data will be used, is the baseline moral imperative and commitment of researchers. This is why surveys must avoid asking overly intrusive questions that invade participants’ privacy, so that the anonymity and confidentiality of their identities can be guaranteed and they are protected from harm, whether physical, psychological, or social.
The limitations of these moral principles are significant. First, voluntary participation conflicts with the need for a representative sample: response rates average 45% for mail surveys and 34% for web surveys, which often forces researchers to refine their questions and offer incentives. Second, consent may only be implied in formats like phone or government surveys, where formal agreements could deter participation. Third, information bias must be managed, as excessive detail can lead to socially desirable responses, especially in authoritarian states. In China, indirect questions about the “governmental system” and “corruption” showed 16%-22% higher response deviations than direct questions, indicating “strong evidence of self-censorship”. Lastly, surveys are costly and time-consuming: according to the examples cited by Olson et al., online surveys range from $1-$50 per person, phone surveys $20-$200, and in-person surveys $100-$1,500, plus incentives ($5-$50) and data processing fees (10%-30%), while the time required can range from a few weeks up to eight months on average.
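To make those trade-offs concrete, here is a minimal back-of-the-envelope sketch in Python that combines the response rates and cost ranges cited above. The specific mid-range figures plugged in (a $25 per-response web cost, a $10 incentive, 20% processing overhead) are illustrative assumptions, not a costing model:

```python
# A rough estimate combining the response rates (Shih & Fan) and cost
# ranges (Olson et al.) cited above. The mid-range figures below are
# illustrative assumptions only.

def required_invitations(target_completes: int, response_rate: float) -> int:
    """How many invitations are needed to reach the target sample size."""
    return round(target_completes / response_rate)

def estimated_budget(completes: int, cost_per_response: float,
                     incentive: float, processing_overhead: float) -> float:
    """Fieldwork cost plus incentives, scaled by the data processing fee."""
    fieldwork = completes * (cost_per_response + incentive)
    return fieldwork * (1 + processing_overhead)

# A web survey aiming for 1,000 completed responses at a 34% response rate:
invites = required_invitations(1_000, 0.34)   # ~2,941 invitations
budget = estimated_budget(
    1_000,
    cost_per_response=25,      # mid-range of the $1-$50 online cost
    incentive=10,              # within the $5-$50 incentive range
    processing_overhead=0.20,  # within the 10%-30% processing fee range
)
print(f"Invitations needed: {invites}; estimated budget: ${budget:,.0f}")
```

Even this crude sketch shows why budgets escalate quickly: a modest 1,000-response web survey already implies nearly 3,000 invitations and a five-figure budget before any staff time is counted.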
AI and algorithmic data
All these difficulties can make the quantitative behavioral approach to research far more tempting. Surveys are an established practice in mid-sized and large companies for understanding customers’ preferences and satisfaction, asking for feedback in order to promote engagement and loyalty; they also remain one of the best methods for measuring employee engagement. Many of these companies operate within a framework of “Corporate Social Responsibility” (CSR), committed to contributing to social improvements, but it is essential to identify when such tools are being used as little more than a public relations strategy. After all, “Data is one thing. Doing something with that data, that’s the key. A lot of companies have to get going with not just surveying customers but doing something about what it is that they’re saying”.
Indeed, while behavioral data and user opinion are almost equally important for most companies seeking to analyze market conditions (Kogan & Meursault), companies have alternative data-gathering tools that collect this information without direct user participation or consent. One of the most common is the consent pop-up, the banner through which websites request permission to set “cookies”. A report by Nouwens et al. found concerning “dark patterns” in the consent designs of the websites they studied: 32.5% treated actions like closing the banner as “consent”, and 50.1% did not even have a “reject all” button. These designs demonstrably modify user behavior: 93.1% of users ignored the advanced options of the pop-ups, and the average “implicit consent” rate is estimated to have risen by 22-23% as a result. Another study, by Papadogiannakis et al., reveals that 14,238 (75%) of the websites studied track the user’s ID before cookies are even accepted and share it with third parties, and that when users reject cookies, sites resort to techniques like browser fingerprinting up to 25% more often. The same can be said of 1,297 free apps on platforms as important as the Google Play Store: 71.3% contacted known tracking domains (identified through blocklists such as App X-Ray and Disconnect.me) on launch, whereas only 9.9% offered explicit consent via a pop-up and just 3.5% offered a genuine way to reject tracking. All of this serves to predict users’ behavior and extend their time on the platform.
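As an illustration of how such “pre-consent” tracking can be audited, here is a minimal Python sketch that loads a page without touching any consent banner and lists the cookies the server sets anyway. The studies above use fully instrumented browsers; this simplified version only catches server-set cookies, not those set by JavaScript, and the URL is a placeholder:

```python
# A minimal sketch of a "pre-consent" cookie audit: fetch a page without
# interacting with any consent banner and list the cookies set anyway.
# This requests-based check misses JavaScript-set cookies; real audits
# (e.g., Nouwens et al., Papadogiannakis et al.) use instrumented browsers.

import requests

def cookies_before_consent(url: str) -> dict:
    """Return server-set cookies from the first page load, before consent."""
    session = requests.Session()
    session.get(url, timeout=10, headers={"User-Agent": "consent-audit/0.1"})
    return session.cookies.get_dict()

if __name__ == "__main__":
    url = "https://example.com"  # placeholder: substitute a site to audit
    cookies = cookies_before_consent(url)
    if cookies:
        print(f"{len(cookies)} cookie(s) set before any consent was given:")
        for name, value in cookies.items():
            print(f"  {name} = {value[:40]}")
    else:
        print("No server-set cookies before consent.")
```

Any cookie this script reports was stored before the user had a chance to accept or reject anything, which is exactly the pattern the studies above measure at scale.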
These practices violate not only the research principles that surveys uphold, such as consent and privacy, but also legal frameworks such as the EU’s General Data Protection Regulation (GDPR). Yet their benefits (representative samples, unbiased behavioral data, lower costs, and monetizable predictive models) often outweigh ethical concerns for some. A notorious example is Cambridge Analytica, which secretly harvested data from 87 million Facebook users via quizzes, without consent. When the story broke, 85% of the affected profiles were exposed, damaging Facebook’s reputation and shuttering Cambridge Analytica. Most critically, the leak revealed outright data manipulation: the firm micro-targeted political ads that exploited psychological vulnerabilities to sway voters, selling insights to entities like SCL Group and influencing both the Brexit referendum and Trump’s 2016 campaign. The scandal had lasting repercussions: despite stricter privacy laws passed by the UK Parliament, and even though many users stayed on Facebook, willingness to share personal data online dropped by 67% after the leak.
Luminata's Approach
At Luminata, transparency is the cornerstone of our research, both with our clients and with the participants who make social impact studies possible. We are concerned that participatory research through surveys is gradually being replaced by the purchase of statistical data from a few large companies with massive databases. We advocate for a balanced approach to research: one that reflects both statistical evidence and the experiences, opinions, and perspectives of stakeholders, democratizing knowledge. Even if it is more challenging, costly, and time-consuming, we refuse to let research values and participant trust be overshadowed by the efficiency and quick availability of the data some companies offer. Through our projects, we have helped communities and stakeholders understand the environments they want to operate in. We carefully designed an online, user-focused survey to gather insights on community demographics, pain points, and desired resources, as well as usage patterns, needs, and challenges. This work has shown us that transparency ultimately yields higher-quality social impact results than the covert data collection that leads to scandals like those described above.
References
Attewell, P. (2007). Caste discrimination in the Indian urban labour market: Evidence from the National Sample Survey. Economic and Political Weekly, 42(41), 4146–4153. https://www.jstor.org/stable/40276549
Baptista, D. (2022, October 26). Data privacy rights stronger after Cambridge Analytica scandal. Context. https://www.context.news/digital-rights/data-privacy-rights-stronger-after-cambridge-analytica-scandal
Becerril Gil, A. (2016). Información de identificación personal, Big Data y almacenamiento de datos personales [Personally identifiable information, Big Data, and personal data storage]. Informática y Derecho: Revista Iberoamericana de Derecho Informático (segunda época), 1(2), 29–44. https://dialnet.unirioja.es/servlet/articulo?codigo=6835584
Brown, A. J. (2020). "Should I stay or should I leave?": Exploring (dis)continued Facebook use after the Cambridge Analytica scandal. Social Media + Society, 6(1). https://doi.org/10.1177/2056305120913884
De Vaus, D. (2013). Surveys in social research. Routledge. https://doi.org/10.4324/9780203519196
Heawood, J. (2018). Pseudo-public political speech: Democratic implications of the Cambridge Analytica scandal. Information Polity, 23(4), 429–434. https://doi.org/10.3233/ip-180009
Judd, S., O’Rourke, E., & Grant, A. (2018, March 14). Employee surveys are still one of the best ways to measure engagement. Harvard Business Review. https://hbr.org/2018/03/employee-surveys-are-still-one-of-the-best-ways-to-measure-engagement
Kogan, S., & Meursault, V. (2021). Corporate disclosure: Facts or opinions? SSRN Electronic Journal. https://doi.org/10.2139/ssrn.3963260
Kollnig, K., Dewitte, P., Van Kleek, M., Wang, G., Omeiza, D., Webb, H., & Shadbolt, N. (2021). A fait accompli? An empirical study into the absence of consent to third-party tracking in Android apps. USENIX Symposium on Usable Privacy and Security (SOUPS). https://www.usenix.org/conference/soups2021/presentation/kollnig
Marino, M., Iacono, R., & Mollerstrom, J. (2024). (Mis-)Perceptions, information, and political polarization: A survey and a systematic literature review. European Journal of Political Economy, 85, 102578. https://doi.org/10.1016/j.ejpoleco.2024.102578
Morgan, B. (2022, October 20). Are surveys really customer-centric? Forbes. https://www.forbes.com/sites/blakemorgan/2022/10/20/are-surveys-really-customer-centric/
Nouwens, M., Liccardi, I., Veale, M., Karger, D., & Kagal, L. (2020). Dark patterns after the GDPR: Scraping consent pop-ups and demonstrating their influence. Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, 1–13. https://doi.org/10.1145/3313831.3376321
Olson, K., Wagner, J., & Anderson, R. (2020). Survey costs: Where are we and what is the way forward? Journal of Survey Statistics and Methodology, 9(5), 921–942. https://doi.org/10.1093/jssam/smaa014
Papadogiannakis, E., Papadopoulos, P., Kourtellis, N., & Markatos, E. P. (2021). User tracking in the post-cookie era: How websites bypass GDPR consent to track users. Proceedings of the Web Conference 2021, 2130–2141. https://doi.org/10.48550/arXiv.2102.08779
Robinson, D., & Tannenberg, M. (2018). Self-censorship in authoritarian states: Response bias in measures of popular support in China [Report]. University of Gothenburg. http://hdl.handle.net/2077/56175
Shih, T., & Fan, X. (2008). Comparing response rates from web and mail surveys: A meta-analysis. Field Methods, 20(3), 249–271. https://doi.org/10.1177/1525822X08317085
UNDP. (2021, June 23). La evolución de la brecha digital. United Nations Development Programme. https://www.undp.org/es/blog/la-evolucion-de-la-brecha-digital
World Bank. (2023). Households with fixed broadband Internet access at home, 2002-2023 (%) [Dataset]. Prosperity Data 360. https://prosperitydata360.worldbank.org/en/indicator/OECD+ICT_HH2+B21A
Yessis, J. L., Kost, R. G., Lee, L. M., Coller, B. S., & Henderson, D. K. (2012). Development of a research participants’ perception survey to improve clinical research. Clinical and Translational Science, 5(6), 452–460. https://doi.org/10.1111/j.1752-8062.2012.00443.x