Suggestions for Using Project TND Student Survey Data

Below are comments about specific items included in the post-test version of the survey. (Note that the pretest and posttest surveys are identical except that the pretest does not include the process evaluation items, numbers 40-43, 45, and 46.)

Part I. (Post-test)

Questions 1-7:

Assess demographic characteristics. They are useful to describe your population.

Part II. (Post-test)

Questions 8-9:

Assess lifetime and 30-day use of various drugs. One would not expect to see changes on these variables (relative to the pretest, or baseline) until at least one year after the end of the program. When we analyze them, we create an index of “hard drug use” that averages hallucinogens, stimulants, cocaine, and “other drugs.” We look at cigarettes, alcohol, marijuana, and inhalants separately (i.e., as individual items). To compute use rates, we use a cut-point of 0 times (non-use) vs. 1 or more times (use). The 30-day use rates are considered measures of “current” use and are more commonly used as indicators of program effects (at one year). The lifetime use rates are indicators of onset of use.
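The index and cut-point calculations above can be sketched in a few lines of Python. This is only an illustration: the variable names (hallucinogens, stimulants, cigarettes_30day, etc.) are hypothetical placeholders, so substitute the names used in your own data file.

```python
def hard_drug_index(record):
    """Average the four hard-drug frequency items for one student."""
    items = ["hallucinogens", "stimulants", "cocaine", "other_drugs"]
    return sum(record[i] for i in items) / len(items)

def use_rate(records, item):
    """Percent of students reporting use, using the 0 vs. 1-or-more cut-point."""
    users = sum(1 for r in records if r[item] >= 1)
    return 100.0 * users / len(records)

# Two illustrative student records (frequency counts for each item).
students = [
    {"cigarettes_30day": 0, "hallucinogens": 0, "stimulants": 2,
     "cocaine": 0, "other_drugs": 0},
    {"cigarettes_30day": 5, "hallucinogens": 0, "stimulants": 0,
     "cocaine": 0, "other_drugs": 0},
]

print(use_rate(students, "cigarettes_30day"))  # 50.0 (1 of 2 students)
print(hard_drug_index(students[0]))            # 0.5
```

The same `use_rate` function works for lifetime items and 30-day items alike; only the variable you pass in changes.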

Questions 10-13:

The last two items (12-13) are violence outcome measures. If you follow the students for one year after the program, you might expect to see a decrease in weapon carrying. We average these two items to create a weapon carrying index.

Questions 14-17:

These are questions about the extent to which students have been victims of violence. If you follow the students for one year after the program, you might expect to see a decrease in the prevalence of victimization after receiving the program.

Part III. (Post-test)

Questions 18-39:

These are program-specific knowledge questions. The correct answers for the knowledge items are as follows:

18 d
19 c
20 a
21 a
22 b
23 a
24 c
25 b
26 b
27 d
28 c
29 c
30 b
31 c
32 b
33 a
34 c
35 a
36 a
37 b
38 a
39 a

To calculate the amount of change in knowledge that occurred for program participants, you may do the following.

First, create a knowledge score for each student at pre-test and post-test. If you are using statistical software, recode each item so that the correct answer = 1 and an incorrect answer = 0. Then add up the knowledge items to create a sum and divide it by the total number of items. If you would like, you may multiply the result by 100 to get the percent correct.

Second, calculate the average score at pre-test and post-test for the group of students, and subtract the average pre-test score from the average post-test score. A statistician can help you determine whether this change is large enough to be considered statistically significant. As the program planner, you will know if the difference is large enough to be meaningful.

Another approach to calculating change is to compute the number of students whose scores increased from pre-test to post-test. To do this, simply compute each student’s score on both tests, and subtract each student’s pre-test score from the post-test score. Then count the number of students whose scores increased. Divide this number by the total number of students and you will have the percentage of students whose scores increased.
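Both approaches can be sketched as follows. The example uses only the first three items of the answer key and two made-up students to keep it short; in practice you would extend the key through item 39 and score your full data file.

```python
# Partial answer key (items 18-20 only, for illustration; extend through 39).
ANSWER_KEY = {18: "d", 19: "c", 20: "a"}

def knowledge_score(responses):
    """Percent of knowledge items answered correctly for one student."""
    correct = sum(1 for q, a in ANSWER_KEY.items() if responses.get(q) == a)
    return 100.0 * correct / len(ANSWER_KEY)

# Illustrative pre-test and post-test responses for two students.
pre  = [{18: "d", 19: "a", 20: "a"}, {18: "a", 19: "c", 20: "a"}]
post = [{18: "d", 19: "c", 20: "a"}, {18: "d", 19: "c", 20: "b"}]

pre_scores  = [knowledge_score(r) for r in pre]
post_scores = [knowledge_score(r) for r in post]

# Approach 1: change in the group's average score.
mean_change = (sum(post_scores) / len(post_scores)
               - sum(pre_scores) / len(pre_scores))

# Approach 2: percent of students whose individual scores increased.
improved = sum(1 for a, b in zip(pre_scores, post_scores) if b > a)
pct_improved = 100.0 * improved / len(pre_scores)

print(round(mean_change, 2))  # 16.67 (points of percent-correct)
print(pct_improved)           # 50.0
```

Note that the two approaches answer different questions: the first describes how much the group improved on average, while the second describes how widespread improvement was.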

Part IV. (Post-test)

Questions 40-43:

These are process evaluation questions. They indicate the extent to which students find drug prevention classes to be helpful.

Question 44:

These are drug use intention items. You might expect changes on these from pretest to the immediate (post-program) posttest. To analyze these items, you could subtract the mean at pre-test from the mean at post-test.

Question 45:

These are process evaluation questions. Students are asked to rate how much they liked each of the Project TND sessions. To analyze these items, you could either examine the average rating (1-10) for each session individually, or you could create a composite rating of the program, averaging the ratings across the 12 sessions.

Question 46:

These are process evaluation questions. You may examine the mean on each item individually, or you could average them to create an index of reactions to the program. If you average them, be sure to recode the “boring,” “waste of time,” and “difficult” items so that all of the items are scaled in the same direction (low to high).
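The reverse-coding step above can be sketched like this. The item names and the 1-5 response scale are assumptions for illustration only; adjust SCALE_MIN and SCALE_MAX to match the scale actually used on the survey.

```python
# Assumed 1-5 response scale; change to match the actual survey.
SCALE_MIN, SCALE_MAX = 1, 5

# Items worded negatively, which must be flipped before averaging
# (hypothetical variable names).
REVERSED_ITEMS = {"boring", "waste_of_time", "difficult"}

def reaction_index(record):
    """Average the reaction items, reverse-coding the negative ones."""
    total = 0
    for item, value in record.items():
        if item in REVERSED_ITEMS:
            # Flip the scale: on 1-5, a 1 becomes 5, a 2 becomes 4, etc.
            value = (SCALE_MAX + SCALE_MIN) - value
        total += value
    return total / len(record)

student = {"interesting": 4, "useful": 5, "boring": 2,
           "waste_of_time": 1, "difficult": 2}
print(reaction_index(student))  # (4 + 5 + 4 + 5 + 4) / 5 = 4.4
```

After recoding, a higher index consistently means a more favorable reaction to the program, so the composite can be compared directly across students.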