Estimated time to complete experiment: 10 minutes
Much of cognitive psychology involves gathering data from experimental participants. Gathering good data is not always easy, especially when an experiment uses a variety of people as participants. Researchers must carefully design an experiment to be certain that participants are following the instructions and are motivated to try their best. Even with these efforts, experimental results can be contaminated by individual differences if the researcher does not properly analyze the data.
For example, consider two participants in an experiment on the visual detection of a faint target. The researcher wants to explore a property of the visual system, so he/she presents a visual stimulus and asks the participants to report whether they saw the target. After 50 trials, Participant A reports seeing the target 25 times and Participant B 17 times. Did Participant A do better? Not necessarily: perhaps Participant A is simply more prone to report seeing the target, whereas Participant B is more conservative and requires more evidence before reporting a target. That is, the two participants may have equivalent visual systems but different criteria for reporting. Simple detection reports alone do not allow the researcher to compare participants' performance.
A better experiment is a modification of the one above and has two kinds of trials, one with the target present and one with the target absent. Again, the participants report whether they saw the target. There are four statistics to be calculated from this experiment. (1) A hit is when the participant correctly detects the target. (2) A miss is when the target was there but the participant did not detect it. (3) A false alarm is when the participant reports seeing the target when it was not actually there. (4) A correct rejection is when the participant correctly reports that the target was not present.
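These four categories can be tallied directly from the trial records. The sketch below is illustrative (not the experiment's actual scoring code); the trial data shown are hypothetical.

```python
from collections import Counter

def classify(target_present, reported):
    """Label one trial by target status and the participant's report."""
    if target_present and reported:
        return "hit"
    if target_present and not reported:
        return "miss"
    if not target_present and reported:
        return "false alarm"
    return "correct rejection"

# Hypothetical trial data: (target_present, participant_reported)
trials = [(True, True), (True, False), (False, True), (False, False)]
counts = Counter(classify(present, reported) for present, reported in trials)
```

Note that hits and misses must sum to the number of target-present trials, and false alarms and correct rejections to the number of target-absent trials, so only two of the four counts are independent.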
Suppose that after 100 trials (50 with the target present and 50 with it absent) the researcher again finds that, on the trials in which the target was in fact present, Participant A reports seeing it 25 times and Participant B 17 times. Who is doing better? It depends on the frequency of false alarms. If Participant A has 25 false alarms and Participant B has 5, then B is better than A at distinguishing the trials in which the target is present from those in which it is absent. In this case, A tends to guess that the target is there but is wrong (a false alarm) as often as he/she is correct (a hit). B is more selective about reporting the target but rarely says it is there when it is not. Thus, B is doing better.
This type of analysis suggests that you need to consider two numbers, hits and false alarms, to compare performance across participants. Fortunately, you can combine these numbers in a principled way to produce a single value that indexes the participant's sensitivity to the presence of the target. The calculation is structured so that, under certain assumptions, it does not matter whether a participant takes a conservative or liberal approach to claiming to detect the target. The most common measure of sensitivity is called d' (d-prime), and a common measure of bias (whether the person took a conservative or liberal approach) is called C.
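As a sketch of how these measures are commonly computed (assuming the standard equal-variance Gaussian model: d' = z(hit rate) - z(false-alarm rate), C = -(z(hit rate) + z(false-alarm rate)) / 2, where z is the inverse standard normal CDF), applied to the two participants from the example above:

```python
from statistics import NormalDist

z = NormalDist().inv_cdf  # inverse of the standard normal CDF

def d_prime(hits, false_alarms, n_present, n_absent):
    """Sensitivity: d' = z(hit rate) - z(false-alarm rate)."""
    return z(hits / n_present) - z(false_alarms / n_absent)

def criterion(hits, false_alarms, n_present, n_absent):
    """Bias: C = -(z(hit rate) + z(false-alarm rate)) / 2.
    Positive C indicates a conservative criterion."""
    return -(z(hits / n_present) + z(false_alarms / n_absent)) / 2

# From the example: 50 target-present and 50 target-absent trials.
# A: 25 hits, 25 false alarms; B: 17 hits, 5 false alarms.
d_a = d_prime(25, 25, 50, 50)   # 0.0: A cannot tell the trial types apart
d_b = d_prime(17, 5, 50, 50)    # about 0.87: B is more sensitive than A
c_b = criterion(17, 5, 50, 50)  # positive: B responds conservatively
```

In practice, hit or false-alarm rates of exactly 0 or 1 make z undefined, so researchers typically apply a small correction to the counts before computing d'.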
You will see a black rectangle below, where the experimental stimuli will be displayed.
On each trial, a group of randomly placed dots (sort of like a "star field") will appear. The number of random dots varies from trial to trial. Also, on some trials (target present) an additional set of ten dots arranged in a straight line that slants downward from left to right is randomly placed among the dot field. On the other trials (target absent), the line is not included. Your task is to report whether the target — the set of ten dots arranged in a line — is present or absent. There are 60 trials.
After the last trial, new information will appear that summarizes your data.
Trials to go: 60