Pretest

Assessment of learning objects

Lori S. Mestre, in Designing Effective Library Tutorials, 2012

Pre- and post-tests

Pre- and post-tests are similar to checklists in that both can measure student needs before a tutorial and then evaluate how well the tutorial met those needs. However, unless the same student completes both instruments, there is no way to measure change in an individual student's experience. Checklists are useful for gathering information that may inform the design of a tutorial; administered after a participant completes a tutorial, they can capture its perceived relevance and other attitudes and beliefs. Pre- and post-tests, by contrast, are administered to the same student to indicate whether that student became better able to complete a task, process, or function as a result of the tutorial.


URL:

https://www.sciencedirect.com/science/article/pii/B9781843346883500093

Pretest and Sample Selection Issues in Spatial Analysis

Harry Kelejian, Gianfranco Piras, in Spatial Econometrics, 2017

Abstract

Pretest procedures abound in applied economic work, and typically entail statistical problems which are ignored. As one example, consider a researcher who estimates a model but obtains results that are not consistent with prior expectations. Suppose the researcher then reformulates the model by dropping certain variables, adding others, and so on, re-estimates it, and notes whether or not the results are consistent with prior notions. If not, the process is repeated until a model is arrived at that is consistent with those notions. The results are then presented as if the final model were the only model considered. Under this procedure, the researcher's prior notions (hypotheses) will never be rejected. In this chapter we point out the various statistical issues with such pretest procedures. We also give large sample results.


URL:

https://www.sciencedirect.com/science/article/pii/B9780128133873000081

Experiments, Psychology

Peter Y. Chen, Autumn D. Krauss, in Encyclopedia of Social Measurement, 2005

Pretest/Posttest Control Group Design

Without a pretest in the basic design, any results may be tenuous if attrition is a problem. Attrition occurs when participants drop out of an experiment or fail to complete outcome measures. Without the pretest information, experimenters cannot determine whether those who drop out differ from those who complete the experiment. Furthermore, the pretest information can be used to examine whether all groups are similar initially, which should be the case if random assignment is employed. Researchers are also able to examine whether there are differences between scores on the pretest and posttest measures. Note that although the following example includes the pretest (Opre) after random assignment occurs, another potential design structure involves administering the pretest before random assignment.

R   Opre   X   Opost
R   Opre       Opost

In general, the measures at pretest (Opre) and posttest (Opost) should be identical. For practical reasons, the measures are sometimes different. Assuming both measures assess the same underlying construct (e.g., cognitive ability), the underlying construct (or latent trait) of each participant can be estimated from both pretest and posttest measures according to item response theory (IRT). By substituting the pretest and posttest scores with the corresponding latent trait scores, the newly calibrated Opre and Opost become identical.
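As a hedged illustration of this calibration step, the sketch below estimates a latent trait under the one-parameter logistic (Rasch) model by grid-search maximum likelihood. The item difficulties, response patterns, and function names are hypothetical; a real calibration would estimate item difficulties from data rather than assume them.

```python
import math

def rasch_prob(theta, b):
    """P(correct response) under the 1PL (Rasch) model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def estimate_theta(responses, difficulties):
    """Maximum-likelihood ability estimate via grid search over theta in [-4, 4].

    responses: list of 0/1 item scores; difficulties: known item difficulties.
    """
    grid = [i / 100.0 for i in range(-400, 401)]

    def loglik(theta):
        ll = 0.0
        for x, b in zip(responses, difficulties):
            p = rasch_prob(theta, b)
            ll += math.log(p) if x else math.log(1.0 - p)
        return ll

    return max(grid, key=loglik)

# Hypothetical pretest and posttest forms measuring the same construct:
pre_difficulties = [-1.0, -0.5, 0.0, 0.5, 1.0]
post_difficulties = [-0.8, -0.2, 0.3, 0.9, 1.4]  # a harder, non-identical form

theta_pre = estimate_theta([1, 1, 1, 0, 0], pre_difficulties)
theta_post = estimate_theta([1, 1, 1, 1, 0], post_difficulties)
```

Because both forms are scored on the same latent scale, `theta_pre` and `theta_post` can be compared directly even though the test forms differ.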

There are two variations of the pretest/posttest control group design; once again, these designs may be altered by administering the pretest before conducting random assignment:

(Variant 2.1)
R   Opre   X1   Opost
R   Opre   Xn   Opost

or

(Variant 2.2)
R   Opre   X1   Opost
R   Opre   Xn   Opost
R   Opre        Opost


URL:

https://www.sciencedirect.com/science/article/pii/B0123693985003273

LORETA Neurofeedback in College Students with ADHD

Scott L. Decker, ... Jessica J. Green, in Z Score Neurofeedback, 2015

Case Studies

In order to more fully demonstrate the utility of LORETA Z-score neurofeedback with a college population, cases from a study examining the effectiveness of LORETA Z-score neurofeedback training are presented below. The study employed a delayed treatment design such that for the first 10 sessions, participants were randomly assigned to either a sham or treatment condition. After the completion of those 10 sessions, all participants received neurofeedback training. As such, two case studies are provided. The first is an individual who received neurofeedback training for the entire study, and the second, an individual who first received the sham condition, followed by LORETA Z-score training. Both individuals were 20-year-old female university students. Both were diagnosed in late adolescence by a general physician, and were prescribed stimulant medication to alleviate their symptoms. As the study took place during the academic year, students were not asked to stop their current medication regimen. The results described below were found in spite of their medication use.

Pre- and posttest behavioral data were collected at three time points (prior to session 1, after session 10, and after the final session), in addition to EEG data. Given that deficits in WM and self-regulatory processes are common in individuals with ADHD, the behavioral measures selected included three measures of short-term and working memory from the Woodcock Johnson Tests of Cognitive Abilities, Third Edition (WJ III), namely numbers reversed (NR), auditory working memory (AWM), and memory for words (MW), as well as the Conners' Continuous Performance Test, Second Edition (CPT-II), which includes measures of inattention and impulsivity. Both behavioral data and EEG data are presented (see Table 14.1) and discussed below.

Table 14.1. Case Study Scores

            Numbers Reversed      Auditory Working Memory   Memory for Words
            Pre    S10    Post    Pre    S10    Post        Pre    S10    Post
Student A   18     19     22      27     34     34          20     20     19
Student B   14     15     16      29     31     32          17     19     19

            Omissions             Commissions               Reaction Time
            Pre    S10    Post    Pre    S10    Post        Pre    S10    Post
Student A   44.87  47.96  44.87   62.87  62.87  61.32       34.64  32.26  30.20
Student B   47.31  52.21  47.31   65.96  70.61  52.04       35.12  35.98  39.48

Note: NR, AWM, MW = Woodcock Johnson Tests of Cognitive Abilities, 3rd Edition; omissions, commissions, reaction time = Conners' Continuous Performance Test, 2nd Edition. Pre = pretest; S10 = session 10; Post = posttest.

Case 1

Treatment Condition

Student A was a 20-year-old female who was diagnosed with ADHD at age 18. At pretest she reported symptoms of inattention and hyperactivity/impulsivity, though the latter was more prevalent. In terms of her short-term/WM performance, she obtained the following raw scores on the WJ III subtests: NR=18, AWM=27, and MW=20. Raw scores were used because the WJ III measures were constructed using item response models and there were minimal sources of developmental variation (Decker, 2008). Student A was also administered the CPT-II at pretest and obtained the following T-scores: omission errors (i.e., a measure of inattention)=44.87, commission errors (i.e., a measure of impulsivity)=62.87, and hit reaction time=34.64. T-scores were used for this measure, as the raw scores obtained on the CPT are not directly interpretable.

The first posttest was conducted after 10 sessions of NF. Student A's performance on the WJ III measures was as follows, NR=19, AWM=34, and MW=20. This suggests that her performance stayed fairly consistent on two of the three measures, while she made a substantial improvement on the third measure, AWM. Her raw score of 27 at pretest corresponds to a standard score of 97 (M=100, SD=15). The 7-point increase in her raw score (AWM=34) corresponds to a 16-point increase in standard score (SS=113), which is greater than one standard deviation of change. In terms of the CPT-II, her midpoint scores were as follows, omissions=47.96, commissions=62.87, and hit reaction time=32.26, suggesting that her performance remained fairly consistent, though she was a little faster at responding. At the second posttest (i.e., after the final session), her performance remained fairly consistent on both the WJ III (NR=22, AWM=34, MW=19) and CPT-II (omissions=44.87, commissions=61.32, hit RT=30.20), suggesting she reached a plateau, though the improvements she did make from pretest to the first posttest were maintained.

Case 2

Delayed Treatment Condition

Student B was also a 20-year-old female, who was diagnosed with ADHD at age 17. At pretest, she too reported symptoms of inattention and hyperactivity/impulsivity. However, per self-report, her distress was more equally distributed across the two domains. In terms of her short-term/WM performance, Student B obtained the following raw scores on the WJ III subtests: NR=14, AWM=29, and MW=17. She also obtained the following T-scores on the CPT-II pretest: omission errors=47.31, commission errors=65.96, and hit reaction time=35.12.

At midpoint, Student B obtained the following scores on the WJ III measures, NR=15, AWM=31, and MW=19. This suggests that her performance stayed fairly consistent across all three measures, as would be expected given that she was receiving sham treatment. In terms of the CPT-II, her midpoint scores were as follows, omissions=52.21, commissions=70.61, and hit reaction time=35.98, suggesting that her performance remained fairly consistent as well. While it is of note that her commission errors increased from Time 1 to Time 2, her scores fell within the clinically significant range at both time points. As such, it is likely this was due to random variation, rather than suggesting that her performance worsened, especially considering the other measures were commensurate with her earlier performance. Given the delayed treatment design of the study, Student B was not expected to make significant gains from pretest to the first posttest measurement. However, it was hypothesized that she would begin to demonstrate changes by the end of the study.

At posttest 2, Student B's scores on the WJ III were as follows, NR=16, AWM=32, and MW=19, suggesting little change in her WM as a result of the training. However, her performance on the CPT-II suggested the opposite. Her posttest 2 scores on the CPT-II were as follows, omissions=47.31, commissions=52.04, and hit RT=39.48. Commission errors are indicative of impulsivity and her performance on this measure decreased from the clinically significant range into the "average" range after beginning the training.

Figures 14.2A and B illustrate the significant change from pretest to midpoint for Students A and B respectively. As shown below, Student A demonstrated significant change in alpha and posterior beta and high beta after completing 10 sessions of neurofeedback. As a caveat, it is possible that the full scalp alpha change could be due to nonspecific changes in alertness between the two sessions. However, the consistency in this change at both posttests suggests that the changes are likely due to more than chance. For example, as suggested by our theoretical model, self-regulation may produce nonspecific effects that account for changes in brain activity. Student B also demonstrated some significant change (i.e., theta, and some bilateral delta and high beta) during the sham condition, suggesting there was somewhat of a placebo effect as well. However, it is prudent to remember that these are individual cases, and as such, individual variation can have an immense impact on analyses such as these.

Figure 14.2. (A) Changes in absolute power from pretest to session 10 following NF treatment for Student A. (B) Changes in absolute power from pretest to session 10 following NF treatment for Student B. Note: Scales at the bottom designate p-values.

Figures 14.3A and B illustrate the significant change from posttest 1 to posttest 2 (end of the study) for both individuals. As expected, Student A showed significant changes in theta, alpha, beta, and high beta, as well as some highly localized change in prefrontal delta by the end of the study. This suggests that Student A continued to experience change following the first 10 sessions, likely building on the changes that occurred at the start of the study. This is most notable in the theta and beta (including high beta) ranges, which are often associated with ADHD. Student B also demonstrated additional significant change after receiving the real treatment. Specifically, this change was evident in theta, beta, and high beta, as well as some posterior and right frontal alpha. Again, it seems the treatment affected the theta and beta bands the most, which is consistent with previous and current research suggesting that the ratio of theta to beta activity is often atypical in individuals with ADHD.

Figure 14.3. (A) Changes in absolute power from session 10 to posttest following NF treatment for Student A. (B) Changes in absolute power from session 10 to posttest following NF treatment for Student B. Note: Scales at the bottom designate p-values.

The results of these two case studies are promising, as both demonstrated change in the expected directions. More interesting still is that LORETA Z-score training indirectly affected the students' performance on more commonly used behavioral measures as well. The implications of this are far reaching, as many psychological disorders are diagnosed through the use of such tools rather than through neuroimaging techniques.


URL:

https://www.sciencedirect.com/science/article/pii/B9780128012918000145

Internal Validity

M.H. Clark, S.C. Middleton, in International Encyclopedia of Education (Third Edition), 2010

Designs that Improve Internal Validity

A pretest–posttest control group design assesses the likelihood that history and maturation are plausible threats to internal validity. Participants are randomly assigned to treatment conditions and all participants are measured at pretest, in which the outcome (or effect) is observed prior to an intervention (cause), and again at posttest, in which the outcome is measured after the intervention. The design can be diagrammed as:

R   O1   X   O2
R   O1       O2

in which R indicates that participants were randomly assigned to conditions, O1 is the pretest observation, X indicates that an intervention was administered, and O2 is the posttest observation.

A two-factor, mixed-design analysis of variance (ANOVA) can be used to assess a treatment effect. A treatment effect is evident when there is a significant interaction between time of observation and treatment condition. If the intervention is responsible for the outcome, there will be a change in scores from pretest to posttest for the treatment group, but not for the control group. If there are changes in scores from pretest to posttest for both groups, then history and maturation are likely threats to internal validity.
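This interaction logic can be sketched with hypothetical scores: in a 2 (condition) × 2 (time) mixed design, the time-by-condition interaction contrast reduces to the difference in mean gain (posttest minus pretest) between groups. The numbers below are illustrative only, not from any study cited here.

```python
import statistics

def mean_gain(pre, post):
    """Average change from pretest to posttest across participants."""
    return statistics.mean(b - a for a, b in zip(pre, post))

# Hypothetical scores for five participants per group:
treat_pre  = [10, 12, 11, 13, 9]
treat_post = [15, 17, 16, 18, 14]
ctrl_pre   = [10, 11, 12, 13, 10]
ctrl_post  = [11, 11, 13, 13, 11]

# The interaction contrast: treatment gain minus control gain.
interaction = mean_gain(treat_pre, treat_post) - mean_gain(ctrl_pre, ctrl_post)
```

A clearly nonzero interaction contrast (here the treatment group gains more than the control group) is the pattern the mixed-design ANOVA tests for; gains of similar size in both groups would instead point to history or maturation.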

Olusi (2008) used this design to study secondary school students' use of computers to solve mathematic problems. Participants were randomly assigned to either the experimental or the control group. The experimental group was taught mathematics using a computer-aided learning intervention and the control group was taught mathematics using their traditional method of instruction. All participants took a pretest and posttest of mathematic ability, allowing the researcher to assess how both groups changed over time. The treatment group had a greater increase in scores from pretest to posttest than the control group, indicating a treatment effect for the intervention.

A pretest–posttest non-equivalent control group design assesses the likelihood that selection is a plausible threat to validity. In this design, participants are not randomly assigned to treatment conditions nor do researchers know the mechanism for assignment; in many cases, participants self-select into conditions. Like in the previous design, participants are measured at both pretest and posttest. The design can be diagrammed as:

NR   O1   X   O2
NR   O1       O2

in which NR indicates that participants were not randomly assigned to conditions.

The ANOVA used with the previous design is also used with this design. A significant difference between pretest scores indicates that selection is a likely threat. If treatment and control groups are not equivalent before the intervention, any differences at posttest cannot be interpreted as an effect. To adjust for the threat to selection, a simple – but not always effective – solution is to use an analysis of covariance (ANCOVA), in which the dependent variable is the posttest scores and the covariate is the pretest scores. This analysis controls for the bias of the pretest scores; however, it does not account for any unobserved bias, variance that is not measured by the pretest scores, but which may contribute to the posttest scores.
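The ANCOVA adjustment can be sketched as a regression of posttest scores on a group indicator plus the pretest covariate; the group coefficient is then the pretest-adjusted treatment effect. The data below are simulated under assumed values (a true effect of 3 and non-equivalent groups), purely to illustrate the mechanics.

```python
import numpy as np

# Hypothetical non-equivalent groups: the treatment group starts
# slightly higher on the pretest (a selection difference).
pre   = np.array([10., 12, 11, 13, 9, 8, 9, 10, 11, 8])
group = np.array([1., 1, 1, 1, 1, 0, 0, 0, 0, 0])  # 1 = treatment
noise = np.array([.1, -.2, 0, .2, -.1, .1, -.1, 0, .2, -.2])

# Simulated posttest: depends on pretest, plus a true treatment effect of 3.
post = 2.0 + 0.8 * pre + 3.0 * group + noise

# ANCOVA as regression: post ~ intercept + group + pretest covariate.
X = np.column_stack([np.ones_like(pre), group, pre])
coef, *_ = np.linalg.lstsq(X, post, rcond=None)
adjusted_effect = coef[1]  # treatment effect adjusted for pretest differences
```

The adjusted effect lands near the true value of 3 despite the pretest imbalance; as the text notes, this only corrects bias that the pretest actually measures.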

McGarvey et al. (2004) used this design to study obese children of preschool age. The investigators had access to two similar state clinics that provided nutritional information and services to preschoolers. Rather than being randomly assigned, all of the participants in one clinic were assigned to the treatment condition and all of the participants from the other clinic were assigned to the control group. The treatment group received an intervention that focused on increasing healthy activities and the control group received standard clinical services. Both groups were administered batteries of pretests and posttests that measured daily activities, fruit and vegetable consumption, and physical activity. Because the treatment and control groups did not differ on any of the outcome measures (or most of the covariates) at pretest, selection was not likely to have threatened internal validity at posttest.

A Solomon four-group design assesses the likelihood that testing is a threat to validity. In this design, participants are randomly assigned to one of four conditions: (1) treatment with pretest, (2) control with pretest, (3) treatment without pretest, and (4) control without pretest. The design can be diagrammed as:

R   O1   X   O2
R   O1       O2
R        X   O2
R            O2

A two-factor ANOVA is used to assess (1) a treatment effect – whether posttest scores from the treatment group were different from those in the control group; (2) a testing effect, whether the groups who received the pretest had posttest scores that were different from those who did not; and (3) whether taking a pretest interacts with treatment effects. A significant interaction between treatment and pretest suggests that exposure to the pretest may influence the treatment effect. If there is not an interaction, a significant main effect for treatment indicates that the treatment had an effect, and a significant main effect for the pretest indicates that pretest may influence posttest.
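With equal cell sizes, the three effects in this 2 × 2 analysis can be read directly off the four posttest cell means, as in the hedged sketch below. The means are hypothetical and chosen so that there is a treatment effect but no testing effect or interaction:

```python
# Hypothetical posttest means for the four Solomon cells:
means = {
    ("pretested", "treatment"):   78.0,
    ("pretested", "control"):     70.0,
    ("unpretested", "treatment"): 77.0,
    ("unpretested", "control"):   69.0,
}

# (1) Treatment main effect: treatment cells vs. control cells.
treatment_main = ((means[("pretested", "treatment")] + means[("unpretested", "treatment")]) / 2
                  - (means[("pretested", "control")] + means[("unpretested", "control")]) / 2)

# (2) Testing main effect: pretested cells vs. unpretested cells.
testing_main = ((means[("pretested", "treatment")] + means[("pretested", "control")]) / 2
                - (means[("unpretested", "treatment")] + means[("unpretested", "control")]) / 2)

# (3) Interaction: does the treatment effect differ with vs. without a pretest?
interaction = ((means[("pretested", "treatment")] - means[("pretested", "control")])
               - (means[("unpretested", "treatment")] - means[("unpretested", "control")]))
```

In this fabricated case the treatment effect (8 points) is the same whether or not a pretest was given, so taking the pretest did not sensitize participants to the intervention.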

Kvalem et al. (1996) used this design to study condom use among high school students. Researchers randomly assigned 124 classes of students to one of the four groups described above. One of the treatment groups and one of the control groups were administered a pretest consisting of a measure assessing behavioral activities. Two groups received an intervention in which the teens formed solutions to cope with negative behavior, and all of the participants were administered two posttests, 6 and 12 months after the intervention. At the 6-month posttest, researchers found that the group that received both the pretest and the intervention differed from the other three groups. Because there was no difference between the two control groups, the pretest alone did not contribute to the effect. However, because there was no difference between the two posttest-only groups, the treatment alone did not cause an effect.

A regression discontinuity design may be used to avoid selection, although it does not control or test for it. Participants are assigned to conditions by an assignment variable, which does not need to be related to the effect but must be continuous. A cut-off value, usually the mean of the assignment variable, is determined. Those who score above the cut-off are assigned to one treatment condition and those scoring below the cut-off are assigned to the other condition.

An ANCOVA is used to account for the relationship between the assignment variable and the outcome and provides an unbiased estimate of the treatment effect. If the treatment effect is statistically significant after accounting for the assignment variable, we can safely assume that the treatment is making its own contribution to the outcome without being confounded by other variables. Although this design still requires that the researchers control assignment, it is easier to account for variables that may influence participants' selection into conditions.
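A minimal sketch of this estimate, under assumed data: participants below the cut-off of a hypothetical screening score receive the intervention, and the regression recovers the jump in the outcome at the cut-off after accounting for the assignment variable.

```python
import numpy as np

# Hypothetical assignment variable (e.g., a screening score), cut at its mean.
score = np.array([35., 40, 45, 48, 50, 52, 55, 60, 65, 70])
cutoff = score.mean()
treated = (score < cutoff).astype(float)  # those below the cut-off get the intervention

# Simulated outcome: linear in the assignment variable plus a true jump of 5
# at the cut-off (the assumed treatment effect).
outcome = 10.0 + 0.5 * score + 5.0 * treated

# ANCOVA-style regression: outcome ~ intercept + treated + centered assignment variable.
X = np.column_stack([np.ones_like(score), treated, score - cutoff])
coef, *_ = np.linalg.lstsq(X, outcome, rcond=None)
effect = coef[1]  # estimated discontinuity (treatment effect) at the cut-off
```

Because assignment is a known, deterministic function of the measured score, controlling for that score is enough to identify the effect; no hidden selection mechanism remains, which is the design's appeal.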

Moss and Yeaton (2006) used this design to assess the impact of developmental English programs within a college setting. Because their intervention was meant to improve English composition, researchers believed that students with deficiencies in composition would benefit most from this program. Therefore, they used English placement-test scores as the assignment variable. Those who scored below the university's score requirement were placed in the treatment group and those who scored at or above the requirement were placed in the control group. The treatment group took a developmental course in English composition before enrolling in the required English courses, and those in the control group enrolled in the required English courses without additional assistance. The effectiveness of the program was determined by comparing the course grades between the treatment and control group, while accounting for the placement test scores.

A cross-lag panel design assesses the likelihood that ambiguous temporal precedence is a threat to validity. This design is used when participants are not randomly assigned to conditions and the cause is not controlled by the researchers; therefore, it is not clear whether or not the cause preceded the effect. Both the cause and the effect are measured at two different time periods. In some cases, there is not a clear treatment or control group; instead, the cause may be the extent or strength of a characteristic and is measured by a continuous variable rather than a dichotomous one (Figure 1).

Figure 1. A Cross-Lag Panel Design.

The relationships designated by the arrows can be measured by a path analysis or a series of correlations. If the cause did precede the effect, then the path coefficient (or correlation) between the cause at Time 1 and effect at Time 2 should be larger than the path coefficient between the effect at Time 1 and cause at Time 2.
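The correlation version of this comparison can be sketched with simulated data: under the assumed model, the cause at Time 1 drives the effect at Time 2 far more strongly than the reverse path, so the cross-lag correlations differ in the predicted direction. All variable names and path strengths here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Simulated panel: the cause at T1 strongly predicts the effect at T2,
# while the effect at T1 only weakly predicts the cause at T2.
cause_t1 = rng.normal(size=n)
effect_t2 = 0.6 * cause_t1 + rng.normal(size=n)
effect_t1 = rng.normal(size=n)
cause_t2 = 0.1 * effect_t1 + rng.normal(size=n)

# The two cross-lagged correlations to compare:
r_causal = np.corrcoef(cause_t1, effect_t2)[0, 1]   # cause(T1) -> effect(T2)
r_reverse = np.corrcoef(effect_t1, cause_t2)[0, 1]  # effect(T1) -> cause(T2)
```

Finding `r_causal` substantially larger than `r_reverse` is the pattern that supports the cause preceding the effect, as in the Van de gaer et al. (2008) example below.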

Van de gaer et al. (2008) used this design to assess how class participation affected achievement in mathematics. Students were measured on four occasions as they progressed from the seventh grade through the twelfth grade. Path analyses (structural equation modeling) were used to predict achievement from participation and vice versa. As expected, researchers found a stronger relationship between the two variables when participation was modeled as the cause and achievement was the effect than when achievement was modeled as the cause and participation was the effect. Therefore, the researchers could reasonably conclude that participation preceded achievement in mathematics.


URL:

https://www.sciencedirect.com/science/article/pii/B978008044894700292X

TIMING IS EVERYTHING

Anita Y. Wonder, in Bloodstain Pattern Evidence, 2007

FURTHER DISCUSSION OF VELOCITY IMPACT SPATTER TERMINOLOGY: SUBJECTIVITY

After Dr. Kirk's death from cancer in 1970, changes to the original semantics occurred. Instead of the impact and relative velocity being considered at the target surface, the meaning was shifted to the contact point between a weapon and a blood source/injury. This has caused confusion in part because the original concept of contact between a single blood drop and a target also was retained. So impact site became both Dr. Kirk's definition as the site where an individual stain was recorded on a surface, as well as the new definition of the area where a weapon opened a blood source to distribute drops.

Pretests, submitted to classes ranging from students with little background to those who had completed two or more 40-hour Bloodstain Pattern workshops, show that the velocity impact spatter (VIS) terms are often regarded in a subjective context. Although this has been corrected in most training formats, many participants still believe that VIS means a specific bloodstain size identifies a specific event: gunshot identified by spatter of a specific size (less than 1 millimeter in diameter) called high velocity impact spatter (HVIS), and beating bloodspatters identified by bloodstains of a specific size (1–4 mm) called medium velocity impact spatter (MVIS). Impact events, however, involve a variety of drop sizes within each degree of force, and in fact are characterized by the presence of an array of sizes, never limited to a single size or a narrow range. Different pattern dynamics (impact, cast offs, and arterial damage) also distribute drop arrays with considerable overlap in stain size ranges.3 There is no single identifying bloodstain size, nor narrow size range, for an entire dynamic category. Patterns consisting of many spots of blood can result from different acts or events, and not all are criminal.

Cast offs: Blood drops which separate from the surface of a carrier.

The main problem with treating VIS terms as originating at an injury, rather than with Dr. Kirk's approach of a blood drop at contact with a surface, is that we are no longer identifying individual bloodstains at a single location in time and space. Under current methodology, labeling is based on a collection of stains on a surface separated from the defining velocity event. This shifts the term's application from a single item of physical evidence at one point in time and space to the behavior of a group of spots between a highly variable event (not itself an item of evidence) and their recording upon a distant surface, each with its own set of variable conditions, as well as the conditions between the two. To perform this analysis, the analyst must first assume a link between the two locations: contact with a blood source and the recorded spatter pattern. Experience has shown that a recorded spatter grouping is not always from the assumed dynamic event. Shifting from a velocity component at the contact between target and blood drop to the contact of a weapon with a blood source, and then reading the resulting arrangement of spatters on a target, adds many variables that must be considered before conclusions can be stated with regard to the analyzed spatter. These are shown in Table 5-1.

Table 5-1. Variables for Identification of Velocity Impact Spatter (VIS) Events

VIS per INDIVIDUAL Spatter                 VIS per EVENT to a Blood Source
Blood drop size                            Blood drop size
Velocity drop is traveling at contact      Velocity of weapon at contact
Target surface characteristics             Characteristics of the weapon
                                           Characteristics of blood source (hardness, presence of hair, clothing, fat, bone, skin)
                                           Nature of blood vessels injured
                                           Amount of blood distributed
                                           Degree of blood source break up
                                           Distance traveled to target
                                           Conditions between injury and target (wind currents, heat, obstructions)
                                           Velocity of array of drops at contact with target
                                           Target surface characteristics
                                           Overlap of other events

Only three variables are considered when interpreting the appearance of an individual stain; 12 or more variables must be considered before conclusions can be stated regarding a whole group of spatters.

Bloodstain patterns identified on the basis of the velocity of a weapon striking a blood source involve so many uncontrollable variables that representatives of the scientific community now doubt that BPE can be considered a science. Many claim it is just police work.

More important to the future of the evidence is the shift in logic regarding recorded bloodspatter patterns. Dr. Kirk's logic, as understood from his lectures to the California Trial Lawyers, was to look at how the blood spots (spatters) could be interpreted to identify the type of dynamics that distributed them. He felt a way could be found to determine from their arrangement what act was involved, whether impact, cast off, or arterial. Dr. Kirk analyzed the bloodstains first, from which he felt someday one could postulate the type and condition of dynamics that distributed the whole array. The appearance of individual bloodstains could indicate velocity as one of many criteria. The bloodstain patterns of a case were treated as physical evidence, not as a conclusion to other investigative information.

When shifting the source for determining velocity, the logic changed to assumptions which weaken the evidence. Velocity as the key to identification of whole groups of spatters requires that the source be assumed before the pattern can be labeled. The revised approach deleted evidence from the initial consideration, and thus, became a subjective approach. The assumed dynamics on occasion also has ignored other noncriminal events, such as blood dripping into blood and respiratory distribution.

An apparent attempt to correct the lack of physical evidence in the identification is the claim that the size of drops from the break up of a blood source can identify velocity. Unfortunately, size is one of the variables that became considerably more complex when the site of velocity estimation was shifted. Bloodstain size depends on many factors and cannot be estimated from any assumed full-sized drop, so it is scientifically unsound to claim that size alone identifies an entire category of dynamic events. The size of a bloodspatter, moreover, provides information about that spatter, not the whole arrangement. The sizes of the associated stains within a group provide information regarding the whole pattern group if (and only if) it is first established that the group was distributed by a single event, specifically an impact. To do this one should first justify the identification of impact, not use the stain sizes to justify the identity of the event and then use the event to identify the stains, a loop of unsupported reasoning. Crime scenes involve considerable overlap of spatter arrays. It must be shown that a group of spatters does not comprise overlapping separate events, especially at violent crime scenes, which may include cast offs, arterial damage, respiratory projection, and explosive blood into blood as well as impact.

In consequence, identification applications may now be subjective, such as when an "expert" requires complete interview background before they can identify a pattern as MVIS or HVIS. With this type of analysis, a crime involving a gunshot would have any spatter pattern labeled as high velocity impact spatter. If no gunshot is stated, the identification will be medium velocity impact spatter. The result is the loss of investigative leads information derived from a complete bloodstain pattern analysis. The identification is based on hearsay, not upon independent criteria of the stains as evidence themselves. If a gunshot did not occur where speculated, or did occur but was unknown when the scene was processed, the interpretation of the bloodstain patterns may be discredited along with any erroneous earlier assumed scenario. In fact, a misidentification of a single VIS pattern can prejudice all other pattern identifications within a case.

It will always be more professional, and consistent with Paul Kirk's initial approach, first to label an arrangement as bloodspatters (blood spots, bloodstains), then identify major classifications as cast off, impact, arterial, respiration (exhalation, expiration), or blood into blood. Follow this with final classification in terms of velocity, if necessary, rather than leap to an immediate specific VIS term at first sight of a pattern; i.e., "That's medium velocity impact spatter!" It might be, but it also might lead to embarrassment in admitting later that the pattern was really cessation cast offs, blood into blood, respiratory wheeze, etc.

Exhalation: Blood drop distribution from respiratory functions like breathing and wheezing.

Blood into blood: Scattered secondary spatters distributed when blood drips into a volume of blood.

Cessation cast offs: A cast off pattern where blood drops are distributed at the moment the carrier stops.

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780123704825500604

Experimental research

Kerry Tanner , in Research Methods (Second Edition), 2018

Pre-test/post-test control group design (or four-cell experimental design, or before-and-after two-group design)

The pre-test/post-test control group design is another 'classic' experimental design. See Figure 14.4. It involves experimental and control groups, selected through randomisation. Both groups are initially tested, and measured on the variable under consideration. Then the experimental group is subjected to the treatment and subsequently re-tested. The control group is isolated from the experimental treatment and is also re-tested. In analysing results, comparisons are made between pre-test and post-test scores for each group (i.e., within-group scores: O1 and O2; O3 and O4), and also the between-group scores (i.e., O1 and O3; O2 and O4), with any observed differences presumed to be attributable to the experimental treatment.

An example of a pre-test/post-test control group design is an experiment testing the efficacy of a new training program for enhancing information retrieval skills. Here participants would be randomly assigned to either the experimental or control condition and all given the same initial information retrieval test (i.e., O1 and O3). After the experimental group has undertaken the information retrieval skills training program, both experimental and control groups would be re-tested (i.e., O2 and O4). Both within-group scores (i.e., O1 and O2; and O3 and O4) and between-group scores (i.e., O1 and O3; and O2 and O4) would be included in the statistical analysis of results. Besides analysing the effectiveness of the training program, researchers may also be interested in assessing the impact on results of moderating variables such as age and prior experience.
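The comparisons described above can be sketched in a few lines of Python. The scores below are invented for illustration, and the variable names follow the O1–O4 notation of the design; a real analysis would apply appropriate significance tests (e.g., paired and independent-sample t-tests) rather than comparing raw means:

```python
# Minimal sketch (hypothetical data): pre-test/post-test control group design.
# O1/O2 = experimental group pre/post; O3/O4 = control group pre/post.

def mean(xs):
    return sum(xs) / len(xs)

# Invented information-retrieval test scores (0-100) for randomly assigned groups.
o1 = [52, 48, 55, 50]   # experimental group, pre-test
o2 = [71, 66, 74, 69]   # experimental group, post-test (after training)
o3 = [51, 49, 54, 50]   # control group, pre-test
o4 = [53, 50, 56, 51]   # control group, post-test (no training)

within_experimental = mean(o2) - mean(o1)   # within-group change: O1 vs O2
within_control      = mean(o4) - mean(o3)   # within-group change: O3 vs O4
baseline_check      = mean(o1) - mean(o3)   # between-group at pre-test: O1 vs O3
treatment_effect    = within_experimental - within_control  # difference-in-differences
```

The baseline comparison (O1 vs. O3) checks that randomisation produced comparable groups before treatment, while subtracting the control group's change allows for influences on the scores other than the training itself.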

One potential problem with a pre-test preceding a post-test is that subjects may be sensitised to the purpose of the experiment, and hence bias their post-test scores (i.e., a testing effect).

Again, there are variations on this basic design, such as where there are multiple experimental groups and a control, or multiple experimental groups instead of a control.

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780081022207000145

Building a Coherent Conception of HIV Transmission

Terry Kit-fong Au , Laura F. Romo , in Psychology of Learning and Motivation, 1996

3 Pretest

The pretest included seven questions about the transmission, seriousness, treatment, and prevention of AIDS. Children saw these questions in a random order rather than the order used here. Children were asked to respond "Yes," "No," or "I don't know" to each question.

1. Can a person get AIDS by sharing a needle with a drug user who has AIDS?

2. Can a pregnant woman with AIDS give AIDS to her unborn baby?

3. Can doctors cure AIDS?

4. Will people with AIDS die from it?

5. Can doctors cure AIDS if they catch it early?

6. Can doctors now use a new vaccine to protect people from AIDS?

7. If people eat healthy foods, can they avoid AIDS?

After administering this short AIDS knowledge questionnaire, we asked children to tell us what they wanted to know most about AIDS. The sixth- and eighth-grade students wrote down their questions privately; the fourth graders told us their questions aloud in class, and we wrote down the questions verbatim. We collected children's questions because the questions may tell us not only what they wanted to know, but also something about their conceptions, misconceptions, and fears about AIDS.

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/S0079742108605769

Impact/Outcome Evaluation

Frederick M. Hess , Amy L. Klekotka , in Encyclopedia of Social Measurement, 2005

Pretest-Posttest Design

The pretest-posttest design is the most common method of assessment that looks at only one group. A group of participants is given the intervention, which can also be considered the treatment, thus creating a treatment group. Data are collected from the group both before and after the treatment. In this model, the expectation is that without the treatment, no changes would occur. Data are collected before treatment to establish a baseline for the individuals on the behavior or skill targeted by the intervention; data are collected after the treatment to look for differences, whether positive or negative. The differences that are observed are attributed to the intervention. This design, which uses only one group, is relatively easy and inexpensive to administer; a drawback is that it may not be possible to definitively attribute changes in outcomes to the treatment, because extraneous variables are not accounted for.
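As a minimal sketch of this one-group design (with invented scores), the before-and-after comparison reduces to each participant's change between the two measurements:

```python
# Minimal sketch (hypothetical data): one-group pretest-posttest comparison.
# Scores are invented for illustration; a real analysis would use an
# appropriate statistical test (e.g., a paired t-test) on the gains.

pre  = [60, 55, 70, 65, 58]   # baseline measurements before the treatment
post = [68, 59, 75, 70, 66]   # measurements after the treatment

# Per-participant change; positive values indicate improvement.
gains = [b - a for a, b in zip(pre, post)]
mean_gain = sum(gains) / len(gains)
```

Because there is no control group, the mean gain conflates the treatment effect with any extraneous influences over the same period, which is the drawback noted above.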

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B012369398500387X

Couples' voluntary counseling and testing

Kathy Hageman , ... Susan Allen , in HIV Prevention, 2009

Pre-test counseling and risk assessment

The aim of pre-test counseling is to assist couples in making an informed decision about whether to test. The pre-test counseling session also allows the counselor to conduct a couple-specific risk assessment. Most couples have already started considering and identifying their own risks, and are quite prepared to discuss them with the counselors. Regardless, this needs to be done sensitively, and does not necessarily require deep probing for details. Instead, the pre-test risk assessment should address the specific risks that the couple are willing to discuss at that point in time. A counselor may also choose, at his or her own discretion, to separate a particularly problematic couple during pre-test counseling. However, this should only be done as a last resort, as couples' VCT ultimately aims to empower the couple together.

During the pre-test counseling session, the counselor seeks to:

Assess a couple's reasons for seeking testing

Review each partner's basic understanding of HIV and modes of transmission

Identify both partners' history of HIV testing

Discuss and clarify the couple's understanding of different HIV test results

Discuss how they, as a couple, will handle receiving each type of result, and what they can do to move forward as a couple

Explore the couple's feelings about taking an HIV test and receiving results together

Discuss the advantages/disadvantages of knowing their serostatus

Help the couple to identify sources of support

Discuss disclosure issues, including the importance of keeping one's own and one's partner's results confidential, unless the couple mutually agrees to disclose to other parties

Assess the couple's readiness to be tested and to receive results together

Describe the HIV test process.

Pre-test counseling also allows for the assessment of couple dynamics to ensure mutual readiness and intentions. For example, if a counselor suspects conflicts with one or both partners, that one or both partners are trying to use HIV testing as an excuse to leave the relationship, or suspects that there will be potential problems after leaving the counseling session, the counselor may defer testing and suggest instead that the couple think things over and come back at another time.

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780123742353000091