As the different study designs have different strengths and weaknesses, they have been ranked in levels of evidence. Simply put, the strongest designs are at the top, the weakest at the bottom. The pyramid is a convenient symbol for this, as there are usually fewer randomized trials, say, than case reports.
For an interactive version, go to: http://library.muhealth.org/docs/pyramid/EvidencePyramid.cfm
Case Report - a write-up of the case of an individual patient; a clinical presentation. Often the first report of a new disease or disease trend.
Case Series - a write-up of the cases of several patients, all undergoing similar treatment.
Case-Control Study - a comparison of study subjects with a particular disease/risk factor (cases) to those without (controls). These have also been called retrospective studies. A good design for rare diseases, but one in which it is easy to get poor data.
Clinical Trial - an experimental study in which subjects receive an intervention. Preferably subjects are assigned to either treatment or no treatment/placebo (see Controlled Clinical Trial). Some trials compare multiple treatments, e.g. the subjects could be assigned to: Treatment A, Treatment B, No treatment/placebo. The different groups are called arms. This is the best study design for testing the effect of interventions.
Cohort Study - a study of a group of subjects followed through time. Cohort studies can be used to track the effect of an exposure, e.g. all subjects had been exposed to lead in their housing, or to follow a comparison cohort that was not exposed. They have also been called prospective studies. This is a strong design for determining risk and incidence.
Controlled Clinical Trial - a Clinical Trial where there is a control group receiving a comparison treatment or no treatment/placebo.
Cross Sectional Study - a descriptive study that documents the number of people with a particular disease or risk factor at a given point in time.
Randomized Controlled Trial - the same as a Controlled Clinical Trial, with the added benefit that subjects are randomly assigned to the treatment/no treatment arms. This avoids selection bias, as all subjects have an equal chance of being assigned to any one of the arms. Random assignment can be accomplished using machine-generated random number tables. Assigning subjects using methods such as coin tosses or even-odd numbering is considered pseudo-randomization.
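Simple random assignment to arms can be sketched in a few lines of Python. This is an illustrative sketch only (the subject IDs and arm names below are hypothetical); a real trial would use a documented, auditable randomization procedure.

```python
import random

def randomize(subjects, arms, seed=None):
    """Assign each subject to one arm, giving every subject an
    equal chance of any arm (simple randomization).
    Passing a seed makes the allocation reproducible."""
    rng = random.Random(seed)
    return {subject: rng.choice(arms) for subject in subjects}

# Hypothetical example: six subjects, two arms
assignment = randomize(
    ["S1", "S2", "S3", "S4", "S5", "S6"],
    ["treatment", "placebo"],
    seed=42,
)
```

Note that by contrast, assigning even-numbered subjects to one arm and odd-numbered subjects to the other is deterministic, so it is pseudo-randomization: anyone who knows a subject's number can predict the allocation.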
Review articles are not considered evidence. One exception is the Systematic Review - including its subset, the Meta-Analysis.
Why are Systematic Reviews included in the evidence based pantheon? They aim for documented, exhaustive, and comprehensive searching for all research on a specified topic. Most other types of reviews neither document their search and collection of the research nor verify that they have completed an exhaustive and comprehensive search.
In evidence based practice, much is made of Randomized Controlled Trials (i.e. clinical trials in which participants are randomly assigned to an intervention, including a control group). Why? Randomized Controlled Trials - RCTs - can show cause and effect, not just association.
However, they are not the only study design available. This page provides a glossary (left column) of some of them. For more info & a nifty chart, check out CEBM's Study Design site.
Making sense of it all
Study designs can be classified as descriptive or analytical. Analytical studies have more power, or ability to predict, than descriptive studies and therefore rank higher in the evidence based world.
Descriptive studies give us a snapshot of what is happening. Surveys, case reports, and cross sectional studies (using surveys) are descriptive. They cannot show cause and effect, but they can show prevalence or incidence. They can also be useful in spotting trends.
Analytical studies measure the relationship between factors. Cohort studies and randomized controlled trials are analytical. They can show cause and effect.
For more information or in depth definitions, see College of Veterinary Medicine, Washington State University.