Learning about science comes with some real obstacles:

  • Knowing what to trust on the web is extremely difficult
  • Scientific literature is written in jargon and complex technical language
  • You often have to pay just to read the research

The solution: Sparrow

  • All of our authors are experts in their field
  • You don’t need a background in science to understand our digests
  • We believe you deserve access to science, no matter the size of your wallet

How can you trust our scientists' judgment?

Our writer-researchers pick the most relevant scientific papers to explain a given topic. Like all scientists, they learn from the internet too, but they primarily search the scientific literature itself, and they are extensively trained to evaluate the strength of the evidence.

Still skeptical?

We get it! That’s why we give you full transparency into our writing process. You shouldn’t have to take our word for it when it comes to integrity. Our authors avoid citing weak evidence, or, if they cite early research, they put it into context. To make sure you can trust our content, they:

  • create their digests around high-quality science,
  • provide links to the relevant scientific literature in case you want to expand your knowledge and see the evidence for yourself.

Types of Scientific Literature

Systematic Reviews - the CEO

A systematic review is a study that sifts through all of the scientific evidence on a clearly formulated question. It then uses rigorous, explicit methods to extract and critically evaluate the evidence from each study and draws overall conclusions.

The Pros:

  • Systematic reviews are often considered the most reliable type of study and let us gauge whether research findings on a given topic are valid and consistent.
  • For example, one particular study may claim that new drug X works better to treat a condition than another. A systematic review takes all the studies that compare the two drugs, sets standards for quality, and reports the findings.

The Cons:

  • If two studies on the same topic report opposite findings, a systematic review can’t always explain why that is the case
  • For novel discoveries, it can take many years before enough literature has been published on a topic to perform a systematic review

Meta-Analyses - the Operations Director

These studies combine quantitative data (lots of numbers) from several studies, which increases the sample size. They then apply statistical methods to the combined data, giving a more reliable picture than any of the individual studies could on their own. Meta-analyses are often used within systematic reviews to answer quantitative questions.
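
For a sense of what the pooling looks like in practice, here is a minimal sketch in Python of inverse-variance weighting, one common way meta-analyses combine results. The effect sizes and standard errors below are invented purely for illustration, not taken from any real studies.

    # A minimal sketch of fixed-effect (inverse-variance) pooling.
    # Each tuple is (effect_size, standard_error) for one hypothetical study.
    studies = [(0.30, 0.15), (0.10, 0.20), (0.25, 0.10)]

    # More precise studies (smaller standard error) get more weight.
    weights = [1 / se ** 2 for _, se in studies]
    pooled_effect = sum(w * effect for (effect, _), w in zip(studies, weights)) / sum(weights)
    pooled_se = (1 / sum(weights)) ** 0.5  # the pooled estimate is more precise than any single study

    print(f"Pooled effect: {pooled_effect:.2f} (standard error {pooled_se:.2f})")

The point of the sketch is the weighting: noisy individual studies contribute less, and the combined estimate comes with a smaller standard error than any one study on its own.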

Note: check the date the systematic review was published. If it is not recent (within roughly the last 5 years), practices and paradigms may have changed since then.

The Pros:

  • The increased sample size gives them more statistical power
  • They offer a more reliable overview than any individual study

The Cons:

They can’t be used to assess qualitative data.

  • A meta-analysis could address a question like “How much do farming interventions increase crop yields in an indigenous community?”
  • It could not address a question like “How accepted are these farming interventions among indigenous populations?”

Randomized controlled trials (RCT)

Randomized controlled trials are important experimental studies for testing how well new interventions work (frequently new medical treatments). For example, this is how new COVID-19 vaccines were tested.

In RCTs, researchers separate participants into a treatment group, which receives the drug, and a placebo group, which does not. The placebo group is necessary to control for chance occurrences or biases the researchers (or patients) may have in interpreting the results.

Before the RCT starts, the researchers must report the ‘primary outcome’ they will monitor to measure the study’s success.

It’s crucial to look at both the funding sources and the primary outcome when judging an RCT’s importance and integrity.

For example, suppose a company ran an RCT to evaluate whether its new drug can treat lung cancer and then promoted the drug’s success by saying that 87% of patients met the primary outcome. If the primary outcome was “Percentage of patients without cancer progression within 6 months of treatment” (AKA no one’s cancer got worse), the result is much less powerful than if the primary outcome was “Percentage of patients with complete cancer regression within 6 months of treatment” (AKA their cancer went away).
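
To make the mechanics of randomization and a primary outcome concrete, here is a minimal sketch in Python of a simulated trial. The group sizes and response rates are made up for illustration; no real trial data is involved.

    import random

    random.seed(42)  # fixed seed so the illustration is reproducible

    # Hypothetical participants; in a real RCT these would be enrolled patients.
    participants = [f"patient_{i}" for i in range(200)]
    random.shuffle(participants)  # the randomization step
    treatment, placebo = participants[:100], participants[100:]

    # Invented response rates: assume 60% of treated patients meet the
    # pre-registered primary outcome, versus 40% on placebo.
    def count_meeting_primary_outcome(group, response_rate):
        return sum(random.random() < response_rate for _ in group)

    treated_hits = count_meeting_primary_outcome(treatment, 0.60)
    placebo_hits = count_meeting_primary_outcome(placebo, 0.40)

    print(f"Treatment: {treated_hits}/{len(treatment)} met the primary outcome")
    print(f"Placebo:   {placebo_hits}/{len(placebo)} met the primary outcome")
    # A real analysis would then test whether the difference between the two
    # groups is statistically significant, not just report raw percentages.

The comparison against the placebo group is what makes the headline number meaningful; an 87% response rate says little on its own.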

The Pros:

  • They are the most rigorous scientific testing method for a hypothesis
  • They are the gold standard for evaluating the effectiveness of interventions or treatments (for example future drugs)

The Cons:

  • Sample size: Some RCTs don’t include enough participants (this is often the case with, for example, rare diseases), which makes the results less statistically reliable. (This is where systematic reviews come in handy.)
  • Variation in study participants (AKA study design) matters: One RCT may enroll primarily elderly women while a parallel study enrolls primarily young men. This may lead to differences in study outcomes. But are the differences due to age? Gender? Environment? It’s hard to say…

Cohort Studies and Case-Control Studies - the Mid-Level Managers

Cohort and case-control studies are observational, non-experimental studies (meaning the researchers don’t intervene in the participants’ treatment in any way, as they do in RCTs). They are frequently used for epidemiological studies (AKA studies trying to understand trends among diverse populations). Cohort and case-control studies differ in their approach to an epidemiological question.

Cohort studies start by hypothesizing the cause of a disease.

Cohort study example:

Hypothesis: Fertility treatment X increases the risk for polycystic ovarian syndrome (PCOS).

Researchers enroll participants who previously underwent fertility treatment X and follow up with them over 10 years to see whether they develop PCOS. They compare the results to patients who did not receive the fertility treatment and use statistical methods to determine the significance of any difference.

Case-control studies start with the disease and then investigate its underlying causes.

Case-control study example:

Goal: Identify causes of polycystic ovarian syndrome (PCOS)

Researchers evaluate the medical records of all the patients with PCOS within a given hospital system and perform patient interviews to look for statistically significant commonalities, not apparent in patients without PCOS, that may indicate a potential cause.
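
For a sense of the arithmetic behind the two designs, here is a minimal sketch in Python. A cohort study is typically summarized with a relative risk and a case-control study with an odds ratio; the counts below are invented placeholders, not results from any real study.

    # Cohort study: follow exposed vs. unexposed people forward in time and
    # count how many in each group develop the condition. (Counts are invented.)
    exposed_cases, exposed_total = 30, 200
    unexposed_cases, unexposed_total = 15, 200
    relative_risk = (exposed_cases / exposed_total) / (unexposed_cases / unexposed_total)
    print(f"Cohort study relative risk: {relative_risk:.1f}")  # 2.0 = twice the risk

    # Case-control study: start from people with and without the condition
    # and compare how often each group was exposed. (Counts are invented.)
    cases_exposed, cases_unexposed = 30, 70
    controls_exposed, controls_unexposed = 15, 85
    odds_ratio = (cases_exposed / cases_unexposed) / (controls_exposed / controls_unexposed)
    print(f"Case-control odds ratio: {odds_ratio:.1f}")

Either number describes how strongly exposure and condition travel together; as the Cons below note, neither on its own proves that the exposure caused the condition.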

The Pros:

  • They often help researchers make sense of epidemiological situations
  • They reveal trends among diverse populations

The Cons:

  • These studies show correlation but cannot prove causation, i.e. that “being exposed to A definitely causes condition B”.

Pre-clinical Studies (Animal and Cell Studies) - the Junior Managers

The key to evaluating whether a pre-clinical study reports reliable outcomes is understanding the strengths and weaknesses of the methods it uses. This frequently requires academic training in the specific topic… not something most people have!

But there is a useful hack!

To gauge whether the claims a study makes are valid, you can use this method:

Within the scientific community, journals are ranked by a metric called the ‘impact factor’. If you are unsure whether a study’s claims can be trusted, you can use a search engine to look up the impact factor of the journal that published the study.

Generally speaking, the higher the better (the impact factor is a little less relevant in niche fields, where the volume of research is smaller). Each field of study (e.g. physics, chemistry, medicine, nutrition) has its own ranking system, but it’s usually pretty straightforward to determine whether a journal is trustworthy based on its impact factor (e.g. search: What impact factors are ‘good’ for nutrition journals?).

Keep in mind: the impact factor system is not flawless. Science is competitive, and top-tier journals may prioritize publications based on how trendy the topic is rather than how well the research was done. That means some high-quality research may end up in lower-tier journals because of its topic. That said, one can usually trust the claims made in papers published in top-tier journals.

The Pros:

  • They are great for defining the role a newly discovered gene plays in DNA replication
  • They are great for identifying the proteins a specific virus uses to infect cells

The Cons:

  • They CANNOT determine if a new drug can cure a disease (although they may provide baseline evidence to encourage studies to be conducted in humans)
  • They CANNOT determine if a biological process discovered in mice can be broadly applied to humans
  • Interpreting them requires understanding the study’s methodology, which usually means having an academic background or doing a bit of extra research

Literature Reviews - the Friendly, Accessible Employee

Reviews are scientific summaries of particular topics. The best reviews unite the most important research in the field (which could be a mix of many types of studies) and discuss overarching themes. They tend to be written in less technical language and make fewer assumptions about the reader’s academic background, so they are very useful for researchers who hope to get a handle on a field they know little about.

Unlike systematic reviews, however, reviews aren’t based on a clearly defined question and don’t use statistical methods to evaluate the overarching themes that they discuss.

Example:

  • Systematic review (clearly defined scope) = Which checkpoint blockade treatment regimen is associated with the longest survival in patients with lung cancer?
  • Review (broad scope) = Checkpoint blockade and lung cancer: lessons learned

The Pros:

  • Reviews are written in accessible language that an educated layperson can understand
  • They pull together the most important research in the field to date

The Cons:

  • Reviews are subject to the opinion and integrity of the author.
  • You need to use the ‘impact factor hack’ to help determine if a review is trustworthy.
  • You might need to do a web search on the authors to see how much research they have published in the field (more publications generally means more expertise) before deciding whether to trust their opinions.

Scientific Integrity Check: How we ensure the quality of the data we share

Picking the right, relevant evidence is just one part of creating trustworthy digests.

To hold our authors to a high standard, we also run an internal ‘science integrity check’, modeled on the peer-review process conducted by scientific journals.

The integrity check ensures that the content is relevant, accurate, representative, unambiguous, easy to understand, and independent (no financial conflicts of interest).

We ask six questions to ensure that you consume content that you can trust:

1. Is it relevant? - Is the content of the article appropriate to the title, topic and questions posed?

2. Is it accurate? - Are all claims and facts in the article adequately evidenced by the underlying references and literature?

3. Is it representative? - Are the evidence and facts communicated in a way that reflects the wider body of research, discloses inconclusive or conflicting results, and respects the hierarchy of evidence (e.g. prioritizes evidence from meta-analyses rather than overstating the impact of a single primary research article)?

4. Is it unambiguous? - Does each statement clearly reflect the facts being presented with limited scope for misinterpretation?

5. Is it accessible? - Was the summary written in plain language in clear sentences, with a minimum of technical terminology and jargon (with lay explanations of any terms used)?

6. Is it independent? - Were editorial decisions, curators, and underlying research articles independent of conflict of interest (and were any conflicts disclosed)?