How do we ensure you can trust what you read?

There are some problems with science on the internet:

- It’s hard to trust what you find on the web about science.

- Scientific literature is written in impenetrable, complex technical language.

- It’s not only difficult to understand; it’s often hidden behind paywalls.

What are we doing about it?

At Sparrow, we care about your trust in science. No wonder: Sparrow was born out of our scientist-founders’ frustration that most people either can’t access high-integrity scientific content or can’t make sense of it. They knew that this situation fuels the spread of misinformation and pseudoscience.

That’s why at Sparrow we’ve hired scientists to break down the most relevant topics in their field of expertise for you!

So here we’re taking the opportunity to explain our process to ensure what you read is based on solid and trustworthy scientific studies.

But, how do our experts figure out what is trustworthy?

Our writer-researchers pick the most relevant scientific papers to explain a given topic to you. Like all scientists, they learn from the internet too, but they primarily search the scientific literature, and they are extensively trained to evaluate the strength of the evidence.

Still skeptical?

We get it! That’s why we prioritize transparency with our writing process. You shouldn’t have to take our word for it when it comes to integrity. Read below to learn how to evaluate our sources for yourself!

“According to a recent study…” - yawn or run?

While this phrase sounds scientific, it’s possible that the author leveraged pseudoscience to back up a non-scientific argument. Yes, there might be a recent study…but did you know? Not all studies and scientific evidence are created equal!

When a scientist sees a news article claiming “a recent study has shown,” they immediately ask: yes, but what kind of study?

That’s why our authors avoid citing weak evidence, or, if they cite early research, they put it into context. To make sure you can trust our content, they:

  • create their digests around high-quality science,
  • provide links to the relevant scientific literature in case you want to deepen your knowledge and see the proof.

Types of Scientific Literature

The Hierarchy of Scientific Evidence

But what metrics do scientists use to deem a study worthy of mention? They try to balance early research with weightier studies that give an overview of the current state of science. To understand this, let’s introduce you to the types of scientific studies.

Experimental vs. Observational Studies

There are two branches of scientific studies, experimental and observational.

Experimental studies manipulate a single variable and then compare outcomes between groups. For example:

  • Variable = Drug X for weight gain
  • In two identical groups of mice, one group is given drug X and the other a placebo, and then weight gain is compared between the groups.
  • Variable = Fertilizer Y for plant growth
  • In a greenhouse, tomato plants are either treated with Fertilizer Y or left untreated as a control, and plant growth is measured and compared at set time points.

Observational studies are used when the system being studied is too large or complex to be manipulated by an experimenter. For example:

  • Impacts of temperature changes of coastal waters in X region
  • Scientists measure the temperature of a body of water over time and also record data about animal life. They use statistical methods to determine whether the observed temperature changes are associated with significant changes in the ecosystem.
  • Rates of lung cancer in coal-mining regions
  • Scientists may source data from hospitals near coal mining sites and compare them to similar regions not near coal mining sites. They will then use statistical methods to determine if proximity to coal mining is associated with an increased risk of lung cancer.

Both experimental and observational studies are important. But let’s dig a bit deeper into discerning when each type of study is appropriate.


Systematic Reviews - The CEO

A systematic review is a study that sifts through all of the scientific evidence on a clearly formulated question. It then uses rigorous and explicit methods to extract and critically evaluate the evidence from each study and states the conclusions.

The Pros:

  • Systematic reviews are often considered the most reliable types of studies and let us gauge if research findings in a given topic are valid and/or consistent.
  • For example: one particular study may claim that new drug X treats a condition better than another drug does. A systematic review takes all the studies that compare the two drugs, sets standards for quality, and reports the findings.

The Cons:

  • If two studies on the same topic report opposite findings, systematic reviews can’t explain why that may be the case.
  • For novel discoveries, it takes many years for enough literature to accumulate on a topic before a systematic review can be performed.

What a Systematic Review Looks Like

As with any type of scientific study, it should clearly state the type of study it is in the title and/or the abstract (AKA the summary of the work).

Note: take a look at the date the systematic review was published. It’s possible that practices and paradigms have changed if it is not recent (the last 5 years; for fast-moving fields, such as cancer immunotherapy, even 2-3 years might mean research needs updating).

Meta-Analyses - the Operations Director

These types of studies unite quantitative data (lots of numbers) from several studies, which increases the sample size. Then they evaluate the combined data with statistical methods to provide a more reliable picture than the individual studies could give on their own. Meta-analyses are often used within systematic reviews to answer quantitative questions.

The Pros:

  • Increased sample size for more statistical power
  • Offer a more reliable overview than individual studies

The Cons:

  • They can’t be used to assess qualitative data.
  • A meta-analysis could address a question like “How much do farming interventions increase crop yields in an indigenous community?”
  • It could not address a question like “How accepted are these farming interventions among indigenous populations?”
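The pooling described above can be sketched in a few lines of code. This is a purely hypothetical illustration (the study names, effects, and standard errors are made up), using the standard fixed-effect inverse-variance method: each study’s effect estimate is weighted by the inverse of its variance, so larger, more precise studies count for more.

```python
# Fixed-effect inverse-variance meta-analysis (illustrative, made-up numbers).
# Each study reports an effect estimate and its standard error (se).
studies = [
    {"name": "Study A", "effect": 0.30, "se": 0.20},  # small study
    {"name": "Study B", "effect": 0.25, "se": 0.10},  # medium study
    {"name": "Study C", "effect": 0.18, "se": 0.05},  # large, precise study
]

# Weight each study by 1 / variance: precise studies count for more.
weights = [1.0 / (s["se"] ** 2) for s in studies]
pooled_effect = sum(w * s["effect"] for w, s in zip(weights, studies)) / sum(weights)

# The pooled standard error is smaller than any single study's,
# which is the "increased statistical power" from a bigger sample.
pooled_se = (1.0 / sum(weights)) ** 0.5

print(f"Pooled effect: {pooled_effect:.3f} ± {pooled_se:.3f}")
# -> Pooled effect: 0.199 ± 0.044
```

Note how the pooled estimate sits closest to the largest study while still reflecting the smaller ones, and its uncertainty (0.044) is tighter than even the most precise individual study (0.05).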

What a Meta-Analysis Looks Like

Randomized controlled trials (RCT) - the HR Director

Randomized controlled trials are important experimental studies for testing out how well new interventions work (frequently new medical treatments). For example, this is how new COVID-19 vaccines were tested.

In RCTs, researchers separate participants into a treatment group, which receives the drug, and a placebo group, which receives an inactive look-alike instead. The placebo group is necessary to control for chance occurrences, the placebo effect, and biases the researchers (or patients) may have in interpreting the results.
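The “randomized” part is what keeps the two groups comparable: chance, not the researchers, decides who gets the drug. Here is a toy sketch (the participant IDs and group sizes are invented) of how random assignment works:

```python
import random

# Toy sketch of random assignment in an RCT (hypothetical participant IDs).
# Randomization balances known AND unknown traits across the groups on
# average, so outcome differences can be attributed to the treatment.
participants = [f"participant_{i:03d}" for i in range(1, 101)]

rng = random.Random(42)  # fixed seed so the example is reproducible
shuffled = participants[:]
rng.shuffle(shuffled)

treatment_group = shuffled[:50]  # receive drug X
placebo_group = shuffled[50:]    # receive an inert look-alike

# Ideally neither participants nor researchers know who is in which
# group until the trial ends ("double blinding").
print(len(treatment_group), len(placebo_group))
```

In a real trial the assignment list is generated and concealed by a third party, but the principle is the same: every participant has an equal chance of landing in either group.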

Before the RCT starts, the researchers must specify the ‘primary outcome’ they will monitor to measure the study’s success.

It’s crucial to evaluate funding sources and the primary outcome before ascertaining an RCT’s importance and integrity.

For example, suppose a company ran an RCT to evaluate whether its new drug can treat lung cancer, then promoted the drug’s success by saying that 87% of patients met the primary outcome. If the primary outcome was “percentage of patients without cancer progression within 6 months of treatment” (AKA no one’s cancer got worse), the result is much less powerful than if the primary outcome was “percentage of patients with complete cancer regression within 6 months of treatment” (AKA their cancer went away).

How to Tell it’s an RCT and Find the Primary Outcome:

Like other study types, an RCT should state what it is in the title and/or abstract. The primary outcome is usually reported in the abstract as well.

The Pros:

  • They are the most rigorous scientific method for testing a hypothesis
  • They are the gold standard for evaluating the effectiveness of interventions or treatments (example: future drugs)

The Cons:

  • Sample size: Some RCTs don’t include sufficiently high numbers of participants (this is often the case with, for example, rare diseases), which makes the results more subject to chance. (This is where systematic reviews come in handy)
  • Variation in study participants matters: one RCT may enroll primarily elderly women while a parallel study enrolls primarily young men. This may lead to differences in study outcomes. But are the differences due to age? Gender? Environment? It’s hard to say…

Cohort Studies and Case-Control Studies - the Mid-Level Managers

Cohort and case-control studies are observational, non-experimental studies (meaning the researchers don’t manipulate the study participants in any way, like they do in RCTs). They are frequently used in epidemiology (AKA the study of health trends across diverse populations). Cohort and case-control studies differ in their approach to an epidemiological question.


Cohort studies start by hypothesizing the cause of a disease.

Cohort study example:

Hypothesis: Fertility treatment X increases risk for polycystic ovarian syndrome (PCOS).

Researchers enroll participants who previously underwent fertility treatment X and follow up with them over 10 years to see whether they develop PCOS. They then compare the results to patients who did not receive the fertility treatment and use statistical methods to determine the significance of the results.
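The comparison at the end of that example boils down to simple arithmetic. Here is a minimal sketch with entirely made-up numbers (a real study would also compute confidence intervals and adjust for confounders):

```python
# Hypothetical cohort numbers, invented for illustration only.
exposed_total, exposed_cases = 500, 40      # received fertility treatment X
unexposed_total, unexposed_cases = 500, 20  # did not receive it

risk_exposed = exposed_cases / exposed_total        # 40/500 = 0.08
risk_unexposed = unexposed_cases / unexposed_total  # 20/500 = 0.04
relative_risk = risk_exposed / risk_unexposed

# A relative risk of 2.0 means exposed participants developed PCOS at
# twice the rate -- an association, not proof of causation.
print(f"Relative risk: {relative_risk:.1f}")
# -> Relative risk: 2.0
```

Even a striking relative risk like this only shows correlation; as the cons below note, some other factor shared by the exposed group could be the real cause.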

Case-control studies start with the disease and then investigate underlying causes.

Case-control study example:

Goal: Identify causes of polycystic ovarian syndrome (PCOS)

Researchers evaluate the medical records of all the patients with PCOS within a given hospital system and perform patient interviews to look for statistically significant commonalities, not apparent in patients without PCOS, that may indicate a potential cause.

The Pros:

  • They help researchers understand disease and health trends among large, diverse populations
  • They can examine exposures that would be unethical or impractical to test in an experiment

The Cons:

  • These studies show correlation but cannot prove causation, i.e. that “being exposed to A definitely causes condition B”.

Pre-clinical Studies (Animal and Cell Studies) - Junior Managers

The key to evaluating whether a pre-clinical study reports reliable outcomes or not is having an understanding of the strengths and weaknesses of the methods used in the study. This frequently requires academic training in the specific topic… not something most people have!

But there is a useful hack!

To determine if the claims the study makes are valid you can use this method:

Within the scientific community there is a ranking metric for scientific journals called the ‘impact factor’. If you are unsure whether a study’s claims are to be trusted, you can use a search engine to look up the impact factor of the journal that published the study.

Generally speaking, the higher the better (the impact factor is a little less relevant for niche fields, where there is less volume of research). Each field of study (e.g. physics, chemistry, medicine, nutrition, etc.) has its own ranking system, but it’s usually pretty straightforward to determine if a journal is trustworthy based on its impact factor (e.g. Search: What impact factors are ‘good’ for nutrition journals?).

Keep in mind: The impact factor system is not flawless. This is because science is competitive and top-tier journals may prioritize publications based on how trendy the topic is versus how well the research is done. That means some high-quality research may end up in lower-tier journals based on the topic. That being said, one can usually trust the claims made in top-tier journals.

The Pros:

  • They are great for defining the role a newly discovered gene plays in DNA replication
  • They are great for identifying the proteins a specific virus uses to infect cells

The cons:

  • They CANNOT determine if a new drug can cure a disease (although they may provide baseline evidence to encourage studies to be conducted in humans)
  • They CANNOT determine if a biological process discovered in mice can be broadly applied to humans
  • It’s important to understand the methodology of the study, which requires an academic background or a bit of extra research

Literature Reviews - the Friendly, Accessible Employee

Reviews are scientific summaries of certain topics. The best reviews unite the most important research in the field (which could be a mix of many types of studies) and discuss overarching themes. They tend to be written in less technical language and make fewer assumptions about the reader’s academic background, so they are very useful for researchers who want to get a handle on a field they know little about.

Unlike systematic reviews, however, reviews aren’t based on a clearly defined question and don’t use statistical methods to evaluate the overarching themes that they discuss.

Example:

  • Systematic review (clearly defined scope) = Which checkpoint blockade treatment regimen is associated with longest-term survival in patients with lung cancer?
  • Review (broad scope) = Checkpoint Blockade and Lung Cancer: Lessons Learned

How to Determine if a Literature Review is Trustworthy:

Take, for example, a review on CRISPR published in a top-tier biophysics journal and written by Jennifer Doudna, a scientist who shared the 2020 Nobel Prize for her pioneering work on CRISPR. From a quick Google search, we can determine that this is likely a trustworthy review.

The Pros:

  • Reviews are written in accessible language that an educated layperson can understand
  • They pull together the most important research in the field to date

The Cons:

  • Reviews are subject to the opinion and integrity of the author.
  • You need to use the ‘impact factor hack’ to help determine if a review is trustworthy.
  • You might need to do a web search of the authors to see how much research they have published in the field (more research means more expertise) to evaluate if their opinions are trustworthy.

The Scientific Integrity Check: How we ensure the quality of the data we share

Picking the right, relevant evidence is just one part of creating trustworthy Digests.

To hold our authors to a high standard, we also run an internal ‘science integrity check’. Essentially, it models the peer-review process, the quality control conducted by scientific journals.

The integrity check ensures that the content is relevant, accurate, representative, hard to misinterpret, easy to understand, and independent (no financial conflicts of interest).

We ask six questions to ensure that you can consume content that you can trust:

1. Is it relevant? - is the content of the article appropriate to the title, topic and questions posed?

2. Is it accurate? - are all claims and facts in the article adequately evidenced by the underlying references and literature?

3. Is it representative? - are the evidence and facts communicated in a way that reflects the wider body of research, discloses inconclusive or conflicting results, and reflects the hierarchy of evidence (e.g. prioritizes evidence from meta-analyses over overstating the impact of a single primary research article)?

4. Is it unambiguous? - e.g. does each statement clearly reflect the facts being presented with limited scope for misinterpretation?

5. Is it accessible? - e.g. was the summary written in plain language in clear sentences, with the minimum of technical terminology and jargon (with lay explanations of any terms used)?

6. Is it independent? - e.g. were editorial decisions, curators and underlying research articles independent of conflict of interest (and were any conflicts disclosed)?

And that’s it! You’ve made it this far, which means you deeply care about trust and integrity. Great! We hope we managed to show you that we’re on the same page. Feeling equipped to judge the data for yourself? Let us know how we hold up!



Try Sparrow today

  • Read the latest science updates in just three minutes
  • Get five Digests emailed to you every week—100% free
  • No angle. No agenda. Just the facts.
  • Premium subscribers get access to the complete Sparrow library