Background Over the past decade in the United States, the term “evidence-based” has been used increasingly by researchers, program administrators, and government agencies in advocating drug prevention programs. As a result, a number of lists of model and exemplary prevention programs deemed to represent best practices in this area have been produced. This approach is now being integrated into the UK National Drug Strategy through the development of its Blueprints initiative.
Methods This presentation will (a) provide background on the purpose and development of evidence-based lists in the area of drug abuse prevention in the United States; (b) give a brief summary of ten of the main lists produced over the past decade (e.g., the National Institute on Drug Abuse’s Preventing Drug Use among Children and Adolescents: A Research-Based Guide, and the University of Colorado’s Blueprints model programs); (c) identify the prevention programs that appear most often on these lists; and (d) review the evidence pertaining to the school-based drug prevention programs that appear most often on the lists (e.g., the Life Skills Training program and the Seattle Social Development Project).
Results A number of practices used in the analysis and reporting of data from evaluations of supposedly evidence-based drug prevention programs are inconsistent with the rigorous hypothesis testing required to produce scientific evidence of program effectiveness. These practices include multiple subgroup analyses, changing outcome variables across studies (thereby introducing potential measurement dependence), post hoc sample refinement, moving the baseline used to calculate attrition rates, and selective reporting of findings.
Conclusions The implications of these practices for evaluation research, the identification of evidence-based programs, the development of prevention policy, and the improvement of service delivery will be discussed.