This protocol is based on Experiment 3 of the classic Werker et al. (1998) study cited below. Infants are habituated to a single word-object pair. Later, they are presented with four test items: familiar object with familiar word, familiar object with novel word, novel object with familiar word, and novel object with novel word. Pre-test and post-test trials, consisting of a novel object and word that do not appear in any other trials, are also included.
Werker, J. F., Cohen, L. B., Lloyd, V. L., Casasola, M., & Stager, C. L. (1998). Acquisition of word-object associations by 14-month-old infants. Developmental Psychology, 34(6), 1289-1309.
This protocol will use one central display and a light positioned directly below the display. These would be our minimal starting definitions:
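Sketched out, those minimal definitions might look something like this (the exact form of these lines, and whether you need an audio definition for a center speaker, depends on your setup - see the starting definitions page):
# Minimal starting definitions (sketch) - one display with a light below it.
# The AUDIO line assumes the words play from a center speaker; adjust to your hardware.
SIDES ARE {CENTER}
DISPLAYS ARE {CENTER}
LIGHTS ARE {CENTER}
AUDIO ARE {CENTER}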
However, if you have more lights connected in your system (e.g. for headturn preference procedure) and will be leaving them all connected during testing, you may need to define more lights so that you can identify which one should be CENTER.
Now we'll define our tags that reference files. We have three audio files and three video files - two of each for use in habituation/test trials, and one of each reserved for pre-test and post-test.
We could define these with LET statements. But because these will always be paired, it is convenient to use TYPEDLET. This will allow us to define all of our possible word-object pairings and randomly select a pair for presentation, rather than independently selecting audio and video and having less control over which appear together.
Our videos, which we'll call round, green, and blue, show toys being rotated by a hand on a black background. Our audio files consist of infant-directed repetitions of a nonword: deeb, geff, or mip.
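As a sketch, the TYPEDLET definitions might look like the following - the file paths are placeholders, and the exact argument order should be checked against the tags pages:
# Sketch of TYPEDLET definitions (paths are placeholders)
TYPEDLET AUDIO deeb = "C:\BITTSy\stimuli\deeb.wav"
TYPEDLET AUDIO geff = "C:\BITTSy\stimuli\geff.wav"
TYPEDLET AUDIO mip = "C:\BITTSy\stimuli\mip.wav"
TYPEDLET VIDEO round = "C:\BITTSy\stimuli\round.wmv"
TYPEDLET VIDEO green = "C:\BITTSy\stimuli\green.wmv"
TYPEDLET VIDEO blue = "C:\BITTSy\stimuli\blue.wmv"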
Having defined these tags with TYPEDLET, we can use LINKED to define tags that pair them up appropriately. The tags mip and round will be reserved for pre-test and post-test, but all other pairings will occur in the test phase, and one of them will be featured in the habituation phase.
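A sketch of the LINKED definitions (the pair tag names other than prepost are our own invention):
# Each LINKED tag pairs one audio tag with one video tag
LINKED prepost = {mip, round}
LINKED deeb_blue = {deeb, blue}
LINKED deeb_green = {deeb, green}
LINKED geff_blue = {geff, blue}
LINKED geff_green = {geff, green}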
We'll define two groups consisting of our LINKED tags. Both will contain all of the possible pairings of deeb, geff, blue, and green. At the start of each test session, we will randomly select one pairing from the first group for the infant to see during habituation. The test phase for all infants will consist of all four pairings, so our test_trials group will also contain all four.
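Using the hypothetical pair tag names from above, the group definitions would look like this sketch:
LET habit_pairs = {deeb_blue, deeb_green, geff_blue, geff_green}
LET test_trials = {deeb_blue, deeb_green, geff_blue, geff_green}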
Now we define our experiment settings. Because this protocol has a habituation phase, we must define all of our habituation criteria here.
We could also define key assignments for live-coding - without them, experimenters will use C for looks CENTER to the light/screen, and W for AWAY.
Now for the STEPs that will start once we run the protocol. First, we'll define our pre-trial phase. Before each trial, we'll have a center light flash until the infant is paying attention. We'll have the experimenter press C to begin a CENTER look.
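A sketch of these opening steps (the phase name and any arguments to the blink command are approximations):
STEP 1
Phase Pretest Start

STEP 2
LIGHT CENTER BLINK
UNTIL KEY C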
Once the infant is looking, we can start a trial. We'll turn off the light and display the pre-test stimuli. Note that because we defined the LINKED tag prepost with both an audio tag component and a video tag component, we can reference the LINKED tag in the action statements for both AUDIO and VIDEO.
Our trials throughout the experiment will last a maximum of 14 seconds. Trials will end when we reach this limit, or when the infant is recorded as looking away for at least 1 second. There is only one pre-test trial, so once it is over, we end the phase and move on to habituation.
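Here is a sketch of the pre-test trial itself - the combined terminating condition, the CENTER audio channel, and the phase-end line are assumptions to check against your own setup:
STEP 3
LIGHT CENTER OFF
Trial Start
AUDIO CENTER prepost LOOP
VIDEO CENTER prepost LOOP
UNTIL TIME 14000 or UNTIL SINGLELOOKAWAY 1000

STEP 4
Trial End
AUDIO CENTER OFF
VIDEO CENTER OFF

STEP 5
Phase Pretest End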
As we start the habituation phase, the first thing we need to do is randomly assign the participant to the word-object pair that they will see during habituation. There are four possibilities in the habit_pairs group.
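A sketch of this step (the parenthesized choose-statement form is an approximation):
STEP 7
Phase Habituation Start
LET pair = (FROM habit_pairs RANDOM)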
Having selected in advance which word-object pair the participant will see, we can define the inter-trial period (with the blinking light) and a habituation trial. Note that the pair tag we selected from habit_pairs is again a LINKED tag that can be referenced in both of the action statements to play the audio and video.
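Sketched out (numbering follows the STEP references below):
STEP 8
LIGHT CENTER BLINK
UNTIL KEY C

STEP 9
LIGHT CENTER OFF

STEP 10
Trial Start
AUDIO CENTER pair LOOP
VIDEO CENTER pair LOOP
UNTIL TIME 14000 or UNTIL SINGLELOOKAWAY 1000

STEP 11
Trial End
AUDIO CENTER OFF
VIDEO CENTER OFF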
We'll play the exact same stimuli for the rest of our habituation trials, so now we'll define a loop. We want to either keep playing trials until the infant meets our habituation criteria or reaches the maximum of 20 habituation trials (19 of them via the loop).
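A sketch of the loop - note that UNTIL HABITUATED is only a placeholder here for however your BITTSy version references the habituation criterion in a loop's terminating condition:
STEP 12
LOOP STEP 8
UNTIL 19 TIMES or UNTIL HABITUATED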
Note that our loop goes back to STEP 8, where we started the light blinking for the inter-trial period, but excludes the assignment of the dynamic tag pair back in STEP 7. This is why we chose which word-object pair would be presented before we needed to use it to display trial stimuli in STEP 10: we didn't want this LET statement to be included in the loop. We want this dynamic tag to be assigned as one pair and stay that way, so that we are repeatedly presenting the same word-object pair. If the LET statement were inside the loop steps, we would repeat the choose statement on every loop iteration, and we would show different word-object pairings on different trials. In general, when you want to select from a group and be able to refer to the result of that selection throughout a phase, it's good practice to make that selection in the same STEP where you define your phase's start. "Getting it out of the way" like this makes it easier to not accidentally loop over and reassign a dynamic tag that you would prefer to stay static.
Once the infant has habituated or met the maximum number of trials, we move on to the test phase. We'll begin again with the inter-trial light flashing before beginning a trial.
Recall that our four test items are all in the test_trials group, and are LINKED tags with all the possible pairings of the audio deeb and geff, and the videos of the objects blue and green. We want to display these four word-object pairs in a random order, without replacement. We'll define one trial, then use a loop to run the remaining trials.
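A sketch of the first test trial (the inter-trial light steps repeat exactly as in habituation, and the step numbering is illustrative):
STEP 15
LET test_pair = (TAKE test_trials RANDOM)
Trial Start
AUDIO CENTER test_pair LOOP
VIDEO CENTER test_pair LOOP
UNTIL TIME 14000 or UNTIL SINGLELOOKAWAY 1000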
The test phase is where we see the advantage of defining our pairs of tags via LINKED tags rather than selecting video and audio tags separately. If we had defined a test audio group and test video group, they would look like this:
LET test_audio = {deeb, deeb, geff, geff}
LET test_video = {blue, green, blue, green}
With random selection from each in turn across trials, there would be nothing to stop us from repeating a pairing, and thus failing to show all the combinations of words and objects. For example, on our first test trial we could randomly select deeb and blue - but there is no way to specify that if we choose deeb again from the audio group, green must be selected rather than blue from the video group. We could define groups that would be chosen from in a fixed order, arranging each of the audio and video tags so that all the pairings are present when they are selected using FIRST (and creating multiple copies of this protocol to counterbalance test trial order). But without LINKED tags, we could not use RANDOM selection in this protocol.
We'll use another loop to play the remaining three test trials, after our first one is done, and this concludes our test phase.
Lastly, we have a post-test trial, which is identical to our pre-test phase.
Now our experiment is done!
See the resources page for a copy of this protocol.
This protocol is based on the Newman et al. (2020) study cited below, testing toddlers' fast-mapping from noise-vocoded speech via a preferential looking paradigm with an initial training period on the word-object mappings. In a training phase, participants are taught the names for two objects, which appear alternately on the screen. Following training, both objects appear on-screen simultaneously with no accompanying audio, to assess baseline preferences and familiarize participants with the idea that the objects will now appear together. Subsequently, the objects appear together in these same positions across several test trials. Sometimes the speaker asks the child to look toward one object ("find the coopa!") and sometimes directs them to look at the other object ("find the needoke!").
Newman, R. S., Morini, G., Shroads, E., & Chatterjee, M. (2020). Toddlers' fast-mapping from noise-vocoded speech. The Journal of the Acoustical Society of America, 147(4), 2432-2441.
This study was not originally run in BITTSy, and it used four fixed-order experiment files to control condition assignment and trial order rather than the randomization set up in this example protocol. However, this kind of multi-phase study, with multiple conditions and restrictions on stimulus presentation order in each phase, is a great example of a more complex preferential looking study (see this example for a simpler case). Below is a walk-through of how you could recreate its structure in BITTSy.
As in any protocol, first come the starting definitions:
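A minimal sketch of what they might look like for this protocol:
SIDES ARE {CENTER}
DISPLAYS ARE {CENTER}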
In this protocol, we are only using a single display, and our audio is playing from stereo speakers. We therefore only name one display in the DISPLAYS definition, and leave out the definitions for LIGHTS and AUDIO. We only need to name one SIDE to match with the display.
Next, we're going to set up all the tags that reference our video stimuli files.
Our visual stimuli are two animated 3D models, one of a spikey ball on a pedestal (which we call "spike") and one that looks like an F in profile (which we call "fred"). In training trials, we'll present one of these two objects and an audio track will identify it as either the needoke or the coopa.
After the training phase, we'll have a baseline trial with both objects on-screen and no accompanying audio. We'll want to randomize which is on the left and which is on the right. In the test phase, to keep the task as easy as possible, we'll keep the object positions the same as in the baseline trial.
This baseline trial has "fred" on the left and "spike" on the right. (The order of objects in our video filenames identifies their on-screen ordering, left to right.)
And this baseline trial can be presented with either of these two sets of test videos with that same object positioning, depending on how the objects were labeled in the training phase.
Comments (lines starting with a #) do not affect the execution of your study, and are helpful for leaving explanation of your tag naming conventions and study structure.
Here are the rest of our tags, for the opposite object positioning.
Note that in our test video tag names, we've labeled which object is the target object and which side is the correct answer. This makes it extremely convenient for us to determine these later when looking at the reports from study sessions - it is right there in the tag name! This is not necessary for this protocol, just nice. It does lead to some redundancy in files - for example, coopa_spike_left and coopa_fred_right are the same video file. Which tag we use to call up that video just depends on which object was named coopa earlier in the study.
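As a sketch of that redundancy (the file name is hypothetical):
# Two tags, one file: spike on the left, fred on the right, audio labels "coopa"
LET coopa_spike_left = "C:\BITTSy\stimuli\spike_fred_coopa.wmv"
LET coopa_fred_right = "C:\BITTSy\stimuli\spike_fred_coopa.wmv"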
Lastly, we'll have an attention-getter video that plays in-between all the trials.
Now, let's walk through the groups we'll define for this study. There are two possible ways to pair our two objects and two audio labels. Either coopa goes with the object "spike" and needoke goes with the object "fred," or vice versa. We will call these two training phase configurations Order 1 and Order 2, respectively. We will want to randomly assign participants to one of these two ways of pairing the labels and objects.
We're also going to randomly determine which sides the two objects will appear on in baseline and test trials. With this next layer of random assignment, we'll end up with four distinct orders. In the "A" orders, "fred" will be on the left, and in the "B" orders, "spike" is on the left.
Now for the test phase groups. In Order 1A, coopa goes with "spike" and needoke goes with "fred," and "fred" is on the left and "spike" on the right. These are the test video tags that fit that.
Similarly, we'll set up the test video groups for the other orders.
Now we're ready to define our groups of groups - each whole order, from training to test.
So that we can randomly pick one of these four orders at the start of the experiment, there will be one more layer to this group structure. We'll have a group that contains all of the orders.
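A sketch of this layered structure for one order (the training and baseline group names are our own; each order group lists its training group, then its baseline group, then its test group):
LET order1A_test = {coopa_spike_right, needoke_fred_left}
LET order1A = {order1A_training, order1A_baseline, order1A_test}
LET orders = {order1A, order1B, order2A, order2B}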
At this point, we would define any optional experiment settings that we needed. But we don't need any for this protocol. For other preferential looking studies, you most notably might want to change the background color that is visible when there are no stimuli on the screen, so that gaps between videos are not disruptive.
Now that the experiment is set up, we'll define what will actually happen when we click to run the protocol.
In the first STEP, we'll start the training phase. We'll want to delineate which trials are in the training phase and which are in the test phase so that we can easily separate them in reports from study sessions. Then, we choose the order the child will participate in (1A, 1B, 2A, or 2B) from our orders group. The resulting group contains three other groups that together make up the videos that can be in that order: its training phase videos, its baseline video, and its test videos. We defined them in this order, so we can use TAKE statements to choose them from first to last, without replacement, and assign them dynamic tag names that we can use to refer to them later. We've named these dynamic tags in a way that makes explicit that these are still groups. We'll need to select particular stimuli from these groups later, to display in their respective phases.
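A sketch of this first STEP (choose-statement syntax approximated):
STEP 1
Phase Training Start
LET order = (FROM orders RANDOM)
LET training_group = (TAKE order FIRST)
LET baseline_group = (TAKE order FIRST)
LET test_group = (TAKE order FIRST)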
Now that we've picked an order, time to start displaying stimuli. First, an attention getter. This is played until the experimenter indicates that the child is looking at the screen and ready for a trial to start, by pressing the X key on the keyboard.
Now that the child is ready to see a trial, let's show one.
The group of training videos contains two tags. We'll choose one of them randomly. We use a randomization clause {with max 0 repeats in succession} to restrict which tags are chosen from this group after we make the first selection. This one means that we can never display the same tag in the group twice in a row. Because there are only two in the group, trials will alternate between them: children will either hear coopa, needoke, coopa, needoke... or needoke, coopa, needoke, coopa...
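The selection line might look like this sketch (the dynamic tag name trainingvid is our own):
LET trainingvid = (FROM training_group RANDOM {with max 0 repeats in succession})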
With this randomization clause, we have specified exactly what we want for the whole phase, right here. We can use a loop to run the rest of our trials.
When we finish the first trial and reach this loop, we'll jump back to STEP 2, where the attention getter video started, and continue, running another whole trial, until we reach STEP 7 again. This will repeat until the loop's terminating condition is reached: when we've looped back over the section seven times. Along with the first execution of this section, before we started looping, we display a total of eight training trials. This is the entirety of the training phase.
Now we'll start the test phase. We'll still have the attention-getter appear in between every trial, and our first video we show here will be our baseline video.
Recall that the baseline_group had the baseline video we wanted in this order as its only tag. We'll still have to make a selection and assign it the dynamic tag baseline_trial in order to play the video. But with only one tag in the group, there are many possible ways to define the choose statement.
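For example, this sketch would work:
LET baseline_trial = (TAKE baseline_group FIRST)
# with one tag in the group, RANDOM or FIRST, TAKE or FROM, all select the same tag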
Next, the rest of our test videos, which we will again run in a loop.
Note that these steps are almost identical to steps 2-7, which made up the training phase. Parallel trial structures like these make it really simple to write protocol files - simply copy and paste the section of STEPs and change step numbers, tags, and other flags as necessary.
In the test phase, we want to display eight total test trials. We achieve this when the loop in STEP 19 has run seven times. There are two tags available in the test_group for each order, and we want each to be displayed exactly four times. After each tag's first selection, it can have 3 repeat selections. We define this in the randomization clause in STEP 17. If we wanted to further restrict the trial order (for example, to prevent a random order that showed the same video four times in a row, then the other one four times in a row), we could do so with a further randomization clause.
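A sketch of that STEP 17 selection (the dynamic tag name is our own):
LET testvid = (FROM test_group RANDOM {with max 3 repeats})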
Once this loop is completed, the test phase is done, and so is our experiment!
See the resources page for a copy of this full protocol.
This section features walkthroughs of examples of some common study paradigms. Check the Resources page for the full protocols and stimuli sets if you would like to try running these example protocols or adapt them for your own studies.
This protocol is based on Newman & Morini (2017), cited below. This study focuses on toddlers' ability to recognize known words when another talker is speaking in the background. The target talker is always female, but the background talker is sometimes male, and sometimes another female talker. When there is a big difference in the fundamental frequency of two voices (as there is for the target female talker and background male talker in this study), adults will readily use this cue to aid in segregating the two speech signals and following the target talker. When the fundamental frequencies of the two talkers are similar (as in the female background talker condition), the task is more difficult. This study asks whether toddlers also take advantage of a fundamental frequency cue when it is present, and demonstrate better word recognition when the background talker is male.
Newman, R.S. & Morini, G. (2017). Effect of the relationship between target and masker sex on infants' recognition of speech. Journal of the Acoustical Society of America, 141(2), EL164-169.
This study presents videos showing four pairs of objects. Each object pair is presented on 5 trials, for a total of 20 trials. One of the five for each pair is a baseline trial, in which the speaker talks generically about an object but does not name either one ("Look at that!"). In the other four trials per pair, the target talker names one of the objects. On half of the trials the target object is on the left side of the screen, and on half on the right. All trials, including baseline trials, are presented with the target speaker audio mixed with either a male or female background talker.
Although only 20 test videos are presented to each participant, far more combinations of object pair, target object, target position, and background talker are possible. In this example, we'll demonstrate how to set up this study in several different ways. First, we'll set up the whole protocol with a pre-selected set of 20 videos that satisfy the balancing requirements above, and present all of them in a random order. Next, we'll talk about what we would change in that protocol to present them in a fixed order, and create multiple fixed-order study versions. Lastly, we'll talk about how to use selection from groups and pseudorandomization in BITTSy to select and present appropriate subsets of the possible stimuli for different participants.
The opening section of a protocol, before any STEPs are defined, typically takes up the bulk of your protocol file. This is especially the case in preferential looking studies, in which there are often large sets of video stimuli to define as tags and arrange in groups. While this part may seem to take a while, by the time you do this, you're almost done!
Tip for setting up tags and groups in your own studies: Copy+Paste and Find+Replace tools in text editors like Notepad and Notepad++ are your friend! When adapting an existing protocol to reference a different set of stimuli, use find+replace all to change the whole file path, up to the file name, on ALL of your stimuli at once to be your new study folder. When adding more tags to reference more stimulus files, copy+paste another tag definition, then go back and fix the tag names and the names of the files to be your new ones.
The first lines in any protocol are your starting definitions. Here, we will only use one central TV display. We'll name it CENTER.
Before creating this protocol, we pre-selected 20 videos that fit balancing requirements for the study, such as having an equal number of trials with the female background talker as with the male background talker. We happen to have more possible videos for this study, but a study with fewer factors or more trials, that displays all stimuli to every participant, could be constructed exactly like this. (Later, we'll cover how to select stimuli within the protocol itself.)
Here are the definitions for our 20 trial video tags. All are named by 1) which object is on the left, 2) which object is on the right, 3) m or f for the background talker, 4) the target word spoken by the target talker ("generic" for baseline videos in which the target speaker said "look at that!")
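Two of those definitions, as a sketch (the paths and object names are placeholders):
# left object _ right object _ background talker _ target word
LET ball_duck_m_ball = "C:\BITTSy\stimuli\ball_duck_m_ball.wmv"
LET ball_duck_f_generic = "C:\BITTSy\stimuli\ball_duck_f_generic.wmv"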
We have one more tag to define, which is the attention-getter video we'll present before starting each trial.
We have just one group to define for this protocol. It will contain our 20 trial videos, so that we can later randomly select these videos from the group to present.
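Sketched with the hypothetical tag names from above:
LET trial_videos = {ball_duck_m_ball, ball_duck_f_generic, ...}
# (all 20 pre-selected trial video tags would be listed here)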
After defining tags and groups, we would define optional experiment settings if we had any to include. Most of these are unimportant for preferential looking studies. One that is important is the background color. BITTSy will cover the screen in either black or white whenever there are no visual stimuli being presented on the display. In preferential looking, we are often presenting video stimuli back-to-back, but there can sometimes be perceptible gaps between one video ending and the next starting. If you are using cues for coding trial timing that depend on the appearance of the screen or the level of light cast from the screen onto the participant's face, you will want to ensure that the background color defined here matches the background of your inter-trial attention-getters, rather than your trial videos, so that the background color being displayed is not mistaken for part of a trial.
First, we will begin a phase. In a study like this that has only one phase, this is completely optional - but it is not bad to be in the habit of including them.
The next thing we want to do is display our attention-getter video, before we show a trial. We'll want to come back to this part of the protocol again later (in a loop), to show more attention-getter videos. We can't put the attention-getter in the same step as the phase start flag without also repeatedly starting the phase - which doesn't make sense.
So we'll start a new STEP and display the attention-getter with a VIDEO action statement. The short attention-getter clip will be played repeatedly until the experimenter decides the participant is ready for a trial - by pressing C.
When you have an UNTIL statement in a STEP, it must always be the last line. So we'll start a new STEP that will happen as soon as the experimenter presses the key. In it, we'll just stop the video.
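A sketch of these first three STEPs (the attention-getter tag name is our own):
STEP 1
Phase Test Start

STEP 2
VIDEO CENTER attentiongetter LOOP
UNTIL KEY C

STEP 3
VIDEO CENTER OFF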
Next, we need to start a trial.
Our lab prefers to consistently put trial start flags in a STEP all by themselves, just so it is easier to visually scan a protocol file and find where a trial is defined. But this is not necessary. This trial start flag could equivalently be the last line of the previous STEP, or in the next STEP, preceding the line that plays the video.
Our trials need 1) a tag of a video to display, 2) a command to play the video, and 3) a step terminating condition that tells us when to move on. We define the dynamic tag vid to hold whatever tag we select, which we'll then play in our action statement. We'll use the UNTIL FINISHED step terminating condition so that we don't move on to the next step until the video has played through.
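Sketched out:
STEP 4
Trial Start

STEP 5
LET vid = (TAKE trial_videos RANDOM)
VIDEO CENTER vid ONCE
UNTIL FINISHED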
Now the trial ends. This has to be a new STEP, because UNTIL statements are always the last line of the STEP that contains them, but this happens immediately once the terminating condition (the video ending) is met.
Even though we know the video has finished playing by the start of this STEP, we have still included a statement to explicitly turn off the video now. When videos finish playing, the processes that BITTSy uses to play them stay active. A "garbage collector" will take care of this so that finished videos stop taking up resources, but this is not immediate. Explicitly turning videos OFF frees up these resources right away. It is not required, but it ensures best performance, particularly if your computer is under-powered or if a lot of programs are running in the background. However, if you do include OFF commands in this type of case, you should always turn off trial stimuli after the Trial End command. This is not crucial in this study, but for studies that are live-coded, in-progress looks are logged when the Trial End line is executed. If the stimulus is explicitly turned OFF before the end of the trial, BITTSy may not be able to associate the in-progress look with the stimulus that was removed.
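As a sketch:
STEP 6
Trial End
VIDEO CENTER OFF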
We've now defined the basic structure of our experiment - attention-getter, then trial. For the rest of the trials, we can use a loop. We'll jump back to STEP 2, where the attention-getter started, go back through a whole trial, and repeat the loop again - until the loop has executed a total of 19 times, giving us a total of 20 trials.
We could have combined STEPs 6 & 7. Like trial starts, our lab likes to put loops in a STEP by themselves to make them more visually obvious, but it doesn't matter whether you combine these or break them into two STEPs. All that BITTSy requires is that a loop and its terminating condition are the final lines in the STEP that contains them.
Lastly, since we defined a phase start, we'll need to end it.
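A sketch of these final steps:
STEP 7
LOOP STEP 2
UNTIL 19 TIMES

STEP 8
Phase Test End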
What if you don't want trial videos to be presented totally randomly? Or what if you don't want to pre-define a set of videos to present, but rather pull different sets of videos for each participant? Below are some examples of how you might set up your protocol differently for different levels of control over stimuli selection and presentation order.
You might wish to define a fixed trial order, where every time the protocol is run, participants see the same stimuli in the same order. This requires minimal changes to the example protocol in the previous section.
Making a fixed-order version of this study would also involve pre-selecting the 20 trial videos that will be shown. You can define just these videos in your tags section, or you can define all of them - it doesn't matter if some are unused in your experiment. Parsing your protocol will take a little longer if you have extra tags defined, but this step is typically done while setting up for a study rather than when the participant is ready, and does not present any issues for running study sessions.
Crucially for a fixed-order protocol, you will define your trial_videos group to list tags in the exact order in which you want them to appear in the study.
Later, when you select a tag from the trial_videos group to present on a particular trial, you'll select by FIRST rather than RANDOM.
Because you're using TAKE to choose without replacement, each time you select a tag from trial_videos, you can't select that same tag again. So each time, the tag that is the FIRST tag in the list that is still available for selection will be the one you want - the next tag in line from the trial order you defined when making the trial_videos group.
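The selection line, sketched:
LET vid = (TAKE trial_videos FIRST)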
All of the rest of the protocol would be the same!
If you wished to define multiple fixed-order versions of the study, you could simply save additional copies of the protocol file in which you change the ordering of tags in the definition of trial_videos, or swap in different tags that you have defined. This would be the only change necessary to make the additional versions.
You might not want to have a pre-defined set of 20 trial videos to use, given that with the different stimulus types and conditions in our study, we actually have 48 possible videos. You might also want to pseudorandomize when trial types are selected - for example, participants may get bored more quickly if they see the same object pair for several trials in a row, so we might want to keep the same pair from being randomly selected too many times back-to-back.
Below, we'll walk through one way that you could define the protocol to select trial videos from the entire set, in a manner that satisfies particular balancing requirements and presents trials in a pseudorandom order. This is a significantly more complicated case than the previous versions of this study we've defined - not for running your study, or for writing your whole BITTSy protocol, but specifically for planning the structure of the groups in your protocol, upon which stimuli selection will critically rely. Before beginning to create such a protocol, it is useful to take time to think about the layers of selection you will do - if it can be drawn as a selection tree structure (like the example here), with selections between branches all at equal probability, it can be implemented in BITTSy.
We'll be defining all the possible video files and tags for this version of the study, because we won't specify in advance which ones the participant will see. First, our baseline trials. We have 4 object pairs, 2 left-right arrangements of each pair, and 2 background talkers, for a total of 16 possible baseline videos.
Now our trial videos. We have even more of these, because now the target word said by the main talker could be either of the two objects - a total of 32 videos & tags.
Repetitive tag definitions like these do not need to all be typed out by hand! We used spreadsheet functions like clicking-and-dragging to copy cells, JOIN, and CONCATENATE to quickly make these lines in Google Sheets and copy-pasted them into our protocol file. See here for an example.
The last tag to define is our attention-getter video.
Defining the groups in this protocol is the really critical step for allowing us to control how stimuli are later balanced - how many of each target pair we show, how many times the named object is on the left or the right, how many trials have a male background talker vs. female.
We will construct a nested structure of groups. It is often helpful to work backwards and plan your group structure from the top down - that is, from the highest-level group that you will select from first in your protocol, to the lowest-level group that directly contains the tags referencing your stimuli files. We'll end up with 3 levels of groups for this protocol.
Your highest-level group's components should be defined around the characteristic that you want the most control over for pseudorandomization. Here, we'll make that our object pairs. Our highest-level group, trial_videos, will contain groups for each of the four object pairs.
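Sketched, with hypothetical names for the pair groups:
LET trial_videos = {pair1, pair2, pair3, pair4}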
If your study has more than one factor that you crucially need to control with pseudorandomization, you may find constructing multiple fixed-order protocols that follow your restrictions to be the most practical solution.
Now let's define one of these groups for an object pair. We're going to have five trials per object pair. One of these has to be a baseline trial. For the remaining four, we'll stipulate that they are:
a video with a female background talker where the object named by the target talker is on the right
a video with a female background talker where the object named by the target talker is on the left
a video with a male background talker where the object named by the target talker is on the right
a video with a male background talker where the object named by the target talker is on the left
This example balances some factors across trials, but not others. For example, we do not balance how many times one object is named versus the other. We could just as easily balance this in place of left-right answers, and define those groups accordingly. These decisions about what to keep balanced and what to leave random are the crux of setting up your group structure!
Now we'll define these component groups, which will contain our video tags. First, the baseline group. There were four baseline videos for each object pair.
For the other four groups, we defined two videos that match each trial type.
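Here is a sketch for one object pair, using the hypothetical ball/duck naming from earlier:
LET pair1_baseline = {ball_duck_m_generic, ball_duck_f_generic, duck_ball_m_generic, duck_ball_f_generic}
LET pair1_f_right = {ball_duck_f_duck, duck_ball_f_ball}
LET pair1_f_left = {ball_duck_f_ball, duck_ball_f_duck}
LET pair1_m_right = {ball_duck_m_duck, duck_ball_m_ball}
LET pair1_m_left = {ball_duck_m_ball, duck_ball_m_duck}
LET pair1 = {pair1_baseline, pair1_f_right, pair1_f_left, pair1_m_right, pair1_m_left}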
We'll define the nested group structure for our other object pairs the same way.
Once you've defined one set, the rest is quick and easy: copy it over to a blank document, use find+replace to swap out the object names, and paste the new groups back into your protocol!
Because BITTSy will validate the protocol line by line, we can't actually define our groups in backwards order - we'll need the definitions of the lower-level groups to come before the higher-level groups that reference them. So we'll just swap them around: each object pair's component groups come first, then the pair groups, and finally the top-level trial_videos group.
Now that we've defined our group structure, implementing the stimulus selection is really easy. Our protocol starts off just like the original version with playing the attention-getter video before every trial.
In our original setup with the pre-determined stimulus set, we then started a trial, picked a video, and displayed it. In this one, we have multiple selection steps, and BITTSy takes a (very, very small) bit of time to execute each one. Given that we have several, we might prefer to place these before our trial starts, just to ensure that there's never a gap between when BITTSy marks the beginning of a trial and when the video actually starts playing. (If you are not relying on BITTSy logs for trial timing and instead use cues in your participant video, such as lighting changes in the room or an image of your display in a mirror, this doesn't matter at all!)
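Here is a sketch of those selection lines (the way multiple randomization clauses combine within one statement is approximated):
LET pair = (FROM trial_videos RANDOM {with max 4 repeats, with max 2 repeats in succession})
LET type = (TAKE pair RANDOM)
LET vid = (FROM type RANDOM)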
Above, you can see how we selected from each layer of our group structure, highest to lowest. First, we picked an object pair. We need to restrict how many times we can select each pair using with max 4 repeats - we require exactly 5 trials of each object pair. This repeat clause, combined with the number of times we'll loop over this section, will ensure we get the intended result. We also choose to add the with max 2 repeats in succession clause, which ensures that we don't ever show videos of the same pair more than 3 trials in a row.
Next, we pick a trial type - baseline, female background right target, female background left target, male background right target, or male background left target. We only want one of each to come up in the experiment for any given object pair, so we'll use TAKE to choose without replacement. The next time the same object pair's group is picked to be pair, subgroups chosen previously to be type will not be available for selection.
Lastly, we need to pick a particular video tag from type. We will do this randomly. It doesn't matter whether we define this selection with FROM or TAKE because, per the previous selection where we chose type, we can't get this same subgroup again later.
Now that we have the trial video we want to display, we can run a trial.
Just like in the original example version of this protocol, we'll loop over the steps that define an attention-getter + a trial to display a total of 20 trials. Then, our experiment is done.
See the resources page for copies of the versions of this protocol.
The following protocol is based on the Newman (2009) paper cited below. A headturn preference procedure is used to assess whether infants can recognize their own name being spoken when multiple other talkers are speaking in the background.
Newman, R. S. (2009). Infants' listening in multitalker environments: Effect of the number of background talkers. Attention, Perception, & Psychophysics, 71, 822-836.
This study was run prior to the development of BITTSy on an older, hard-wired system in our lab. However, the following example BITTSy protocol replicates its structure and settings, and has been used in our lab for studies in this same body of work.
In headturn preference procedure, we'll start off with the participant facing a neutral direction, then have the participant turn their head to listen to an audio stimulus. For as long as they are interested in that stimulus, they tend to keep looking toward where the sound is coming from. When they get bored, they tend to look away. Across trials, we can use how long they listen to stimuli as a measure of listening preference.
Infants as young as four months old will listen longer to their own name (a highly familiar word) than other, unfamiliar names. In this study, we'll present names with noise of other people talking in the background. If infants still listen longer to the audio that contains their own name, we can say that they can still discern their name and recognize it as familiar, despite these harder listening conditions. By testing infants' success in this task at different age ranges, with different types of background noise, and at different noise levels, we can better understand infants' speech perception abilities.
This study begins with a phase that familiarizes infants with the procedure, and continues until they accumulate a certain amount of looking time toward the audio stimuli in this phase. In the test phase, we present three blocks of four trials each, with the trial order randomized within each block.
As always, we open with starting definitions. For this headturn preference study, we will use three lights in our testing booth: one directly in front of the participant, and one each on the left and right sides of the testing booth which the participant must turn their head 90 degrees to view. Our lab's starting definitions look like this:
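As a sketch (the exact form of the AUDIO definition in particular may differ for your setup):
SIDES ARE {LEFT, CENTER, RIGHT}
LIGHTS ARE {LEFT, CENTER, RIGHT}
AUDIO ARE {LEFT, RIGHT}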
In this headturn preference study, we will be turning on light/speaker pairs that are located on the same side of the testing booth. It will be important to define your LEFT and RIGHT lights in a way that matches up with your left and right speaker channels, so that each light/speaker pair can be turned on and off with the same side name. Here, the lights that we define as LEFT and RIGHT match what we see in our booth from the experimenter's perspective, and we have set up our stereo audio system with the left-channel and right-channel speakers directly behind these LEFT and RIGHT lights, respectively.
You can choose to name the sides of your booth according to your experimenter's perspective or your participant's perspective, whenever they don't match - it doesn't matter, as long as the lights and speakers that should be treated as a pair have the same side name, and it's clear to your experimenters how to identify these sides via your key assignments for live-coding.
See the starting definitions page for more on how to order these definitions in a way that fits your own testing setup.
In this protocol, we have six files to define tags for - two clips of classical music to use in the familiarization phase, and four audio files for the test phase, in which a speaker repeatedly calls a name with the noise of several people talking in the background. One file has the name the participant is most commonly called ("Bella" in this example), one has a name that matches the stress pattern of the participant's name ("Mason"), and two foil names have the same number of syllables but a different stress pattern ("Elise" and "Nicole").
Because the stimuli in this experiment are particular to each participant (one of them must be that participant's name), we make a copy of the protocol before each participant's visit that has the appropriate filenames filled in for their session. Tag names let us label these generically by what trial type it is (name, matched foil, or unmatched foil) rather than the particular name that was being called, and once we change the filenames they reference, no further changes to the protocol are necessary to prepare for each participant's session.
These tag names, which are kept consistent, will also appear in the reports of session data rather than the filenames, allowing us to easily combine data across participants even when stimulus files themselves differ. (And keep reports de-identified, in this study in which their first name is a stimulus!)
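A sketch of these definitions (paths are placeholders, and the foil tag names foil1 and foil2 are our own):
LET music1 = "C:\BITTSy\stimuli\classical1.wav"
LET music2 = "C:\BITTSy\stimuli\classical2.wav"
LET ownname = "C:\BITTSy\stimuli\Bella_noise.wav"
LET matchedfoil = "C:\BITTSy\stimuli\Mason_noise.wav"
LET foil1 = "C:\BITTSy\stimuli\Elise_noise.wav"
LET foil2 = "C:\BITTSy\stimuli\Nicole_noise.wav"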
Having defined our tags, now we'll create groups that will help us define our stimulus selection later, in the familiarization and test phases of our experiment.
The training phase is straightforward - we'll just present those two music clips throughout.
But in the test phase, we'll use a block structure. Each of the four trial types will be presented in a random order within each block, and we'll have a total of three blocks. All our test blocks have the same four stimuli. However, we'll define three copies of this group - testblock1 through testblock3 - which will allow us to randomize the order of stimuli within each block completely independently from the other blocks.
You might wonder why we can't define a single testblock group, and just restrict selection from that group to produce the desired block structure. See the last example in the max <number> repeats in <number> trials section for why this doesn't work, and its following section for more on why this nested group structure is a good solution for creating blocks in experiments.
And we need an overarching group for the test phase that contains our blocks:
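Putting all of these group definitions together, as a sketch:
LET trainingmusic = {music1, music2}
LET testblock1 = {ownname, matchedfoil, foil1, foil2}
LET testblock2 = {ownname, matchedfoil, foil1, foil2}
LET testblock3 = {ownname, matchedfoil, foil1, foil2}
LET testaudio = {testblock1, testblock2, testblock3}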
In addition to randomly ordering stimuli, we will want to randomly order stimulus presentation locations. We can set up for this by creating groups of sides. CENTER in this experiment is used only in between trials; LEFT and RIGHT are the sides for selection here. We'll place some restrictions on the randomization order (e.g. to prevent too many trials in a row on the same side), but we'll have these restrictions reset between the familiarization phase and the test phase. Therefore, we'll make two groups of sides, one to use in each phase, so that when we switch to the test phase, we're starting fresh on which side choices are allowable.
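Sketch (the trainingsides name mirrors the testsides name used later):
LET trainingsides = {LEFT, RIGHT}
LET testsides = {LEFT, RIGHT}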
At this point in the protocol, we would define any optional experimental settings we needed. In a HPP study, relevant ones for consideration are COMPLETELOOK and COMPLETELOOKAWAY as well as key assignments for live coding. Here, we'll use all the defaults, so we won't need to include any - but this is how they would appear.
See also our page on live coding for recommendations on key assignments and details on how live coding works in BITTSy.
Now we're ready for the body of the protocol - what will happen when we click the button to run the protocol.
First, we'll start off the training phase, in which we simply familiarize the participant with the basic procedure. A light will start blinking in the center of the booth, getting the participant to attend to this neutral direction before a trial starts, so that they later will demonstrate a clear headturn to attend to stimuli on the sides of the booth. Once the experimenter judges that they are looking at it (by pressing the key assigned to CENTER), it turns off.
Immediately, one of the side lights will turn on. We choose this side randomly to be either LEFT or RIGHT, but restrict the randomization so that the same side is not chosen more than 3 times in a row. Once the participant turns and looks at this side light, the experimenter will press a key to indicate they are now looking in that direction.
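A sketch of these steps (clause placement and the combined key condition are approximations):
STEP 2
LIGHT CENTER BLINK
UNTIL KEY C

STEP 3
LIGHT CENTER OFF
LET side1 = (FROM trainingsides RANDOM {with max 3 repeats in succession})

STEP 4
LIGHT side1 ON
UNTIL KEY L or UNTIL KEY R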
In a protocol that assigned different keys to LEFT and RIGHT, the terminating conditions for this step should be until those keys were pressed, rather than L and R.
Now, immediately once the look toward the light is recorded, a trial starts. Audio will begin to play from the speaker directly behind the light. It will continue to play until either the file ends, or the participant looks away for at least 2 seconds - whichever comes first.
Here, we choose the audio for the training trial from the trainingmusic group such that we cannot pick the same music clip twice in a row. The two clips will play alternately throughout the training phase.
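Sketched:
STEP 5
Trial Start

STEP 6
LET music = (FROM trainingmusic RANDOM {with max 0 repeats in succession})
AUDIO side1 music ONCE
UNTIL FINISHED or UNTIL SINGLELOOKAWAY 2000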
Once one of the step terminating conditions is met, we want the trial to end. The light should turn off, and so should the audio. (If the FINISHED terminating condition was the one that was met, the audio has already stopped playing, and the OFF command does nothing. But if the SINGLELOOKAWAY condition was met instead, the audio would continue to play if we didn't turn it off now.)
We end this with UNTIL TIME 100 just to have a tiny perceptible break between turning this side light off and the start of the next inter-trial period - which we'll get to via a loop.
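As a sketch:
STEP 7
Trial End
AUDIO side1 OFF
LIGHT side1 OFF
UNTIL TIME 100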
Up to now, our steps have defined the basic structure of our training phase: start the CENTER light, wait for a look and turn it off, start a side light, wait for a look and play a trial. We can now define a loop that will repeat this basic structure until the participant is sufficiently familiarized to proceed to the test phase.
We'll loop back to STEP 2 (where we turned on the CENTER light to re-orient the participant between trials), and execute all the steps through STEP 7, running another trial in the process. We keep looping back over these STEPs until the child accumulates 25 seconds of total looking time to the two music clips. Once this happens, we consider the participant to be sufficiently familiarized with the study procedure, and end the training phase.
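A sketch of the loop - the total-looking-time condition here is only a placeholder for however your BITTSy version expresses accumulated looking in a loop's terminating condition:
STEP 8
LOOP STEP 2
UNTIL TOTALLOOK trainingmusic 25000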
Now, time for the test phase. Recall that we have three test blocks. Each is a group of tags within the group testaudio that contains all three blocks. The first thing we'll need to do is pick a block.
All the blocks in this experiment contain the same tags, so it doesn't matter whether we choose by FIRST or RANDOM. But we do want to use TAKE rather than FROM. When we later choose the stimuli from within the block for each trial, we're going to use TAKE to remove them so that they aren't chosen on more than one trial within the block. If, at the end of the block, this empty block were still available for choosing from testaudio (i.e. if we used FROM), we could get in trouble if we picked it again - we'd try to select stimulus files from the block, but there wouldn't be any more in there to pick, and our protocol would have an execution error and stop running.
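Sketch:
STEP 10
LET block = (TAKE testaudio FIRST)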
The test phase of the protocol will start off the same way that the familiarization phase did - by defining the inter-trial period, with the flashing CENTER light, then choosing which side of the booth the test trial will be presented on. We'll select from testsides this time, and give it the dynamic tag side2 so that we can refer to this side for the light, and later the audio. No two tags, including dynamic tags, can have the same name, which is why we define our placeholder tag for the active side as side1 in training and side2 now.
Now for a test trial. From the block that we chose back in STEP 10, we'll select a random tag to play - either ownname, matchedfoil, or one of the two unmatched foils. We want to select these without replacement using TAKE, so that we never repeat a single stimulus within the same block.
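A sketch of the test trial (step numbering illustrative):
STEP 14
Trial Start
LET testitem = (TAKE block RANDOM)
AUDIO side2 testitem ONCE
UNTIL FINISHED or UNTIL SINGLELOOKAWAY 2000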
Now, we'll define a loop that will let us play a trial (with the blinking CENTER light in between) for each trial type in the block. We loop the section 3 times, so that we end with 4 trials total.
Note that we loop back to STEP 11. That was after we selected a block from the testaudio group, so the dynamic tag block refers to the same block throughout this whole loop. This is what we want - to TAKE each of the items from the same block and play them in a trial by the time we're done with this loop.
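Sketch:
STEP 16
LOOP STEP 11
UNTIL 3 TIMES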
Everything from STEP 10, where we select a block, up to this point constitutes one block out of three in our test phase. This is only a third of our trials - but we're almost done with writing our protocol. Because each block has the same structure, we can make another loop to repeat the process of selecting a block and playing four trials within it - we'll add an outer loop, which will contain the loop we've already defined.
When we loop back to STEP 10, we pick the next block out of testaudio, and repeat the whole structure, including the loop at STEP 16 that lets us run through all four trials within the new block we've selected. When this loop in STEP 17 finishes and our second block of trials has played, we loop through again for our third block. Then, when the STEP 17 loop has run 2 times - and all three of our test blocks have been completed - our experiment is done.
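Sketch:
STEP 17
LOOP STEP 10
UNTIL 2 TIMES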
These kinds of loops may not be totally intuitive if you are used to thinking of your experiments as a linear progression of trials. However, loops are well-suited for any kind of experiment that has repeating units with the same internal structure, and they will save you tons of effort in defining your protocols!
See the resources page for a copy of this protocol.
This protocol is based on a commonly-given example of habituation rather than a particular study. The question is whether young infants can recognize that objects of the same kind but with their own unique appearances all belong to the same category. Can infants recognize a pattern in the images that are being presented - that they are all cats, or that they are all dogs? And when an image is presented that does not belong to the category, do they notice that it is different?
As infants recognize the pattern or "rule" in the habituation phase (that all the images are the same kind of animal) we expect them to start to pay less attention to the individual images. This decrement in looking time across trials is the principle of habituation. But what really demonstrates whether they have learned the pattern is whether they show an increase in looking time when presented with something outside of the category, relative to another novel example of the trained category.
Here, we'll habituate infants to either instances of cats or instances of dogs. Later, to both groups, we'll present one new image each of a dog and a cat. We might think that breeds of dogs are more visually dissimilar from each other than breeds of cats, and we might expect that infants who are habituated to cats will have greater success in detecting which test phase image doesn't belong to their learned category. They may be more likely to recognize that the dog presented in the test phase is something different and more interesting than another cat image, while infants who are habituated to dogs may be less likely to recognize the cat in the test phase as particularly novel.
At the very beginning and very end of our experiment, we'll have pre-test and post-test trials. These help us see whether the participant is as actively engaged at the end of the experiment as they were at the beginning. It lets us differentiate children who didn't recognize the category switch but were still paying attention in general (looking longer at the post-test stimulus, even if they didn't look long during the test phase) from children who were simply bored or inattentive (looking very little at the post-test stimulus).
We'll begin with our starting definitions. In this protocol, we'll only use one central display, and code looks toward/away from that monitor.
We have a lot of tags to define for this protocol, because we'll be using a lot of image files. We'll have a maximum of twenty images displayed of either cats or dogs in the habituation phase of the experiment, plus one of each for test, and two unrelated images for a pre-test and post-test (these should be consistently really interesting to infants - but as a placeholder, we'll use turtles). We'll also show an attention-getter video in between trials.
This file definition section is a very large portion of our protocol. We've abbreviated it below.
For tags like our cat and dog stimuli, which are named systematically, never type the whole thing out - there's a much easier way! We use Excel or Google Sheets and click and drag to fill cells with components of these lines that will be the same, auto-fill series to make columns of the numbers in increasing order, and a CONCATENATE function that we then click and drag to apply down the rows to put each of the lines together. Then you can just copy and paste the results into your protocol!
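An abbreviated sketch of these definitions (paths are placeholders):
LET cat1 = "C:\BITTSy\stimuli\cat1.jpg"
LET cat2 = "C:\BITTSy\stimuli\cat2.jpg"
# ... cat3 through cat20 and dog1 through dog20 defined the same way
LET cat_test = "C:\BITTSy\stimuli\cat_test.jpg"
LET dog_test = "C:\BITTSy\stimuli\dog_test.jpg"
LET turtle1 = "C:\BITTSy\stimuli\turtle1.jpg"
LET turtle2 = "C:\BITTSy\stimuli\turtle2.jpg"
LET attentiongetter = "C:\BITTSy\stimuli\attentiongetter.wmv"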
First, let's define groups for habituation and our pre-test and post-test.
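Sketched (lists abbreviated):
LET cats = {cat1, cat2, ..., cat20}
LET dogs = {dog1, dog2, ..., dog20}
LET prepost = {turtle1, turtle2}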
We'll have two conditions that participants are randomly assigned to - they will either be habituated to the group cats, or the group dogs. After they are habituated to that group, they will see the images dog_test and cat_test. In this protocol, infants will always see the image from the category they weren't habituated to as the first image of the test phase, then the novel example of the habituated category on their second test trial. Let's create groups that list what they'll see in each of these possible conditions, in order of presentation.
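A sketch of these condition groups - each lists the habituation group first, then the two test images in presentation order:
LET habitcats = {cats, dog_test, cat_test}
LET habitdogs = {dogs, cat_test, dog_test}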
Note that these two groups, habitcats and habitdogs, contain first a group, then two tags that refer directly to files. We'll need to remember this as we use these groups for stimulus selection and presentation later. In the habituation phase, when we're pulling items from the group cats or dogs, we'll need a choose statement to pick a tag from the group before we present it onscreen. But in the test phase, we can display these tags directly - they refer to particular files, rather than a group of files.
Lastly, so that we can randomly select a condition for the participant at the start of a study session, we'll make a group that contains both.
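Sketch:
LET conditions = {habitcats, habitdogs}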
Some optional experiment settings may be useful here. This is a live-coded experiment, so we might want to decide what minimum length of look "counts" for looking time calculations. We also might want to change key assignments to something other than C for CENTER and W for AWAY, or change the background color.
We also have habituation settings to specify for this study. These should be decided based on previous studies, or some piloting in your lab. Here are the settings we chose for this example.
Now, our experiment begins. First, we'll have a pre-test phase, where we'll show something unrelated to our habituation and test images. Here we're using two similar images for pre- and post-test and will select them randomly from the prepost group, but it is common to use the same image for both.
Like all other trials in our study, we'll play an attention-getter video beforehand, and only start the trial once the experimenter indicates the child is looking at the screen. Then, the experimenter will continue to code the child's looks toward and away from the image being displayed. If the child has a single look away from the screen that lasts at least 2 seconds, or if the image has been displayed for a maximum length of 20 seconds, the trial will end.
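A sketch of the pre-test phase (the IMAGE action statement and the combined UNTIL condition are approximations to check against the relevant manual pages):
STEP 1
Phase Pretest Start

STEP 2
VIDEO CENTER attentiongetter LOOP
UNTIL KEY C

STEP 3
VIDEO CENTER OFF
Trial Start
LET pretest_image = (TAKE prepost RANDOM)
IMAGE CENTER pretest_image
UNTIL TIME 20000 or UNTIL SINGLELOOKAWAY 2000

STEP 4
Trial End
IMAGE CENTER OFF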
Next comes our habituation phase. First, we will need to assign this participant to one of our study conditions - habituating to cats, or habituating to dogs. We'll choose this randomly.
Recall that this condition we pick (either habitcats or habitdogs) contains three tags:
the group of tags to display in habituation
a tag referring to the test item from the novel category
a tag referring to the test item from the familiarized category
We need to refer to the first one in the habituation phase, so in this same step, let's pull it out of our chosen condition with another choose statement.
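Sketched (numbering illustrative):
STEP 5
Phase Habituation Start
LET condition = (FROM conditions RANDOM)
LET habitgroup = (TAKE condition FIRST)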
Now, we'll display another attention-getter and a habituation trial.
We want to keep running habituation trials until the child either meets our habituation criterion or reaches a maximum number of trials. We'll do this by looping back to STEP 8 and repeating this section until one of those conditions is met. When one of them is, our habituation phase is over.
Now for our test phase. We'll start it off, and keep having attention-getter videos in between trials.
What goes in our test trial? Recall that the group called condition that we chose earlier for this participant has two remaining items in it, after we selected the habituation group from it with our earlier TAKE statement. They are our two test items, in the order we want them presented. We can use another TAKE statement to remove them in order and display them.
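The core of the test trial, sketched:
LET testimage = (TAKE condition FIRST)
IMAGE CENTER testimage
UNTIL TIME 20000 or UNTIL SINGLELOOKAWAY 2000
# the first pass selects the novel-category image; the loop's second pass selects the familiar-category image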
There are two test trials - so let's loop to repeat this section to play the second one.
Lastly, the post-test. This is identical to our pre-test phase earlier, so we can specify it in our protocol the same way. We just copy-pasted it and changed the step numbers, phase name, and name of the dynamic tag.
And the experiment is done! See the resources page for a full copy of this protocol.
This protocol demonstrates a common application of a conditioned headturn procedure: signal detection. It isn't strictly based on any particular study, although it is similar conceptually to many in the realm of infant signal detection in noise, e.g. Trehub, Bull, & Schneider 1981; Werner & Bargones 1991.
In this example study, multi-talker background noise plays continuously throughout. During some trials, an additional voice that repeatedly calls the participant's name begins playing too. When the name is being called and the participant turns toward the source of the voice (a "hit"), they are rewarded with a desirable stimulus (e.g. a toy being lit up or beginning to move). When the name is being called but their behavior does not indicate that they detected it (i.e. they don't turn toward it - a "miss"), the reward stimulus is not presented. Nor is it presented when the name is not being played, whether the participant turns (a "false alarm") or not (a "correct rejection").
This is a simple example of a signal detection task because during the test phase, it does not adjust the intensity of the signal (voice calling the name) relative to the noise, as you would when attempting to establish a detection threshold.
This protocol will use two separate channels on a DMX dimmer pack. In the protocol we'll refer to these as LIGHT LEFT and LIGHT RIGHT, but crucially, these won't both correspond to lights.
The LEFT light will be our reward stimulus. Depending on the implementation of conditioned headturn, it might be an actual, simple light that illuminates an otherwise invisible toy. Or the "light" might be a plugged-in device or electronic toy that will begin to move when receiving power - BITTSy doesn't know the difference!
The RIGHT light will refer to an empty channel on the dimmer pack, with either nothing plugged in or something plugged in that will remain powered off regardless of the dimmer pack command (i.e. by having the power switch on the device be in the off position). We need this empty channel so that trials can always be structured the same way, whether they are trials with a potential to show the reward or not.
The above are our starting definitions. However, if you have more lights connected in your system (e.g. for headturn preference procedure) and will be leaving them all connected during testing, you may need to define more lights so that you can identify which channel has the reward stimulus and which is empty.
Now we'll define our tags that reference files. We have only audio files for this type of study. One is the background noise that will play continuously throughout. Then we have our signal of interest - the file that contains the voice calling the participant's name. We'll call this audio file changestim. We'll need another to be the controlstim so that regardless of the trial type, we've got a command to play an audio file. But in this case, the control stimulus is actually a completely silent file, since we're interested in comparing the presentation of the name to a case where the background noise simply continues with nothing else added.
We'll use groups to set up a key aspect of this paradigm: the "change stimulus" or signal of interest (the voice calling the participant's name) gets paired with the reward stimulus (plugged into the LEFT light channel), and the "control stimulus" (the absence of the voice - a silent audio file) does not.
A "change trial" will use both the "change" stimulus and the LEFT light. So we'll define a group that pairs them, called changetrial.
If you wanted change trials to contain one of several possible audio files (e.g. with different tokens/words/voices), your changetrialstim group would contain multiple tags for the different audio files. In later steps of the protocol when the audio files are played, you could randomize which audio file is selected from the group. However, it's important to set up your selection criteria in a way that ensures that there will always be a valid selection that can be made for the audio file, regardless of whether the current trial is a change trial or a control trial.
Similarly, we'll make a controltrial group that pairs the "control" stimulus (silence) with the empty RIGHT channel, so that this trial type can have the same structure within the protocol, but not actually turn on a device.
You may wonder why we define the first two groups in each set rather than directly defining LET changetrial = {changestim, LEFT}. At the time of writing, BITTSy throws a validation error when a group directly contains both tags and sides. However, creating a "dummy" group that contains each, and making changetrial a group of other groups, sidesteps the validation error and allows you to run the protocol.
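Concretely, the workaround looks like this (changetrialstim is named in this walkthrough; changetrialside is our own placeholder for the dummy group holding the side name):

    LET changetrialstim = {changestim}
    LET changetrialside = {LEFT}
    LET changetrial = {changetrialstim, changetrialside}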
Now that we have structures representing change and control trials, we can define the groups that we'll use to present these trial types within phases - training phase, conditioning phase, and test phase.
In this study, we can choose not to include control trials in the training phase - since the control trials are the absence of the voice and the absence of the reward stimulus, the gaps between change trials essentially serve to reinforce that the reward does not occur until the change stimulus is present. A study focusing on discrimination between two signals, where one is rewarded and the other is not, would likely have control trials during training. But it is not necessary here, so the training group can contain only changetrial.
The conditioning phase will also only contain change trials - the goal of this phase, after the initial exposure to the paired signal and reward in the training phase, will be to ensure that the participant predicts the reward after hearing the stimulus, and makes an anticipatory turn toward the location of the reward.
The test phase will contain both change trials and control trials. Here, we'll have them occur with equal probability, but we could weight the relative probabilities by adding more change trials or more control trials to the test group.
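Based on the descriptions above, the three phase-level groups could be defined as follows (a sketch; a real protocol might weight test differently, as just noted):

    LET training = {changetrial}
    LET conditioning = {changetrial}
    LET test = {changetrial, controltrial}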
There are no special experiment settings required for this protocol. However, if we planned on using the reporting module to analyze participant logs, it is helpful to make sure that the key used to mark the participant turning toward the signal matches the key assigned to the LEFT side (where the signal will be presented). That way, the turn is logged as a look toward that stimulus rather than just a timestamped keypress, and we can easily summarize which trials had turns, and how long it took the participant to turn, via a standard report of looking time by trial. This protocol is configured to use L, the default key for LEFT, so it doesn't require a key assignment definition, but if you wished to use a key other than L, you could include one in this section.
First in this study, we'll start playing the background noise file that will play on loop throughout the entire study. Then we'll want to wait until the experimenter judges that the infant is ready for a trial to start. (Typically in conditioned headturn studies, a second experimenter sits in front of the infant to engage their attention with some small toys, and the experimenter running the protocol would wait until the infant was attending to the toys, relaxed and unfussy.) We'll have the main experimenter press C when the infant is ready.
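A sketch of the opening steps, assuming the noise file was tagged noise in the definitions section; putting the wait for the C keypress in STEP 2 matches the loop structure described below, which repeats from STEP 2 onward:

    STEP 1
    AUDIO CENTER noise LOOP

    STEP 2
    UNTIL KEY C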
The starting definitions we used didn't specify a CENTER audio channel (i.e. for use with a multichannel speaker array), so here we have the default behavior of CENTER, where it's exactly the same as STEREO - played simultaneously from the left and right channels. This means that when we later have trial audio play from LEFT, BITTSy will play it simultaneously from one of the speakers that is already playing the noise file (while the speaker on the right channel continues with only the noise). Having the noise play from CENTER allows us to turn off the trial audio on LEFT without interrupting the noise at all. If you wanted the noise to come from only the same speaker as the trial audio, while keeping the ability to turn off trial audio without interrupting the noise, you could set up your noise audio file to have information only on the left channel and silence on the right channel, and still instruct BITTSy to play it CENTER.
Next we'll need to pick the trial type from the training group so that we can subsequently use its components. But there's actually only one trial type in this group, so it doesn't matter how we select it, so long as we allow for repeats.
The dynamic tag trial_tr will now refer to that one trial type group, changetrial, which itself has two tags inside - the first referring to the audio file that will be played, and the second referring to the LIGHT channel that will be used during those trials - for change trials, the location of the reward stimulus. We can use selection restrictions to ensure we pick first the audio file group, then the reward location group, to access these components and later use them to display stimuli.
Recall that these subgroups were themselves groups that contained one tag/side each in order to avoid making a group that directly contained both a regular tag and a side name. So we have one more layer of selection to get to our final tags.
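The full drill-down might therefore look roughly like this - trial_tr is from the walkthrough, but the other dynamic tag names, and the use of FIRST and LAST as the selection restrictions, are our own illustrative choices:

    LET trial_tr = (FROM training RANDOM)
    LET audiogroup_tr = (FROM trial_tr FIRST)
    LET rewardgroup_tr = (FROM trial_tr LAST)
    LET audio_tr = (FROM audiogroup_tr FIRST)
    LET reward_tr = (FROM rewardgroup_tr FIRST)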
Now we can begin to present the change trial audio. We'll add a brief delay after it starts so that the reward doesn't turn on simultaneously with the audio signal, but rather the start of the audio indicates that the reward will appear soon.
Audio set to LOOP does not turn off until we use an AUDIO OFF command. The change trial audio will continue playing after this step, while we present the reward! If you wanted the audio and reward to be presented sequentially and not overlap, you could adjust the UNTIL TIME 1000 statement above to the full length of time you want the audio to play, and move the AUDIO LEFT OFF command before the LIGHT ON statement in STEP 4.
Now it's time to turn on the reward. In the training phase, this isn't yet dependent on the behavior the participant exhibits; we'll simply always turn on the reward. We'll keep it on for a little while, then turn off both the reward and audio together. (Note that since the background noise was playing AUDIO CENTER and the change trial audio plays AUDIO LEFT, we can turn off the change stimulus with AUDIO LEFT OFF without interrupting the background noise.)
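Sketched out, the audio and reward steps might look like this; STEP 4 and the UNTIL TIME 1000 delay come from the walkthrough, while the step number 3, the 3-second reward duration, and the dynamic tag names are illustrative assumptions:

    STEP 3
    AUDIO LEFT audio_tr LOOP
    UNTIL TIME 1000

    STEP 4
    LIGHT reward_tr ON
    UNTIL TIME 3000
    LIGHT reward_tr OFF
    AUDIO LEFT OFF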
From STEP 2 to here constitutes the repeating structure of the training phase - waiting for the experimenter to indicate the participant is ready for a trial, starting to present the change audio, turning on the reward, and turning both off. We can use a loop to repeat this structure until the end of the phase (in a real study, we would likely repeat it more times).
Now that the participant has had some exposure to the audio signal and an opportunity to learn that the reward turns on when it plays, the conditioning phase will verify that learning: we'll require the participant to make anticipatory headturns toward the location of the reward stimulus whenever they hear the audio signal. We'll begin delaying the onset of the reward to give them more opportunity to turn, and we'll require three trials in a row in which the participant turns in anticipation, before the reward stimulus turns on, in order to end the phase.
As in the training trials, we need to wait for the experimenter to press C to indicate that the participant is ready for a trial to start.
Next we'll select the trial audio (always the change stimulus in this phase, since that's the only trial type in the conditioning group) and reward location, and start playing the audio. Note that this selection structure is identical to the training phase.
What's different in this phase is that we want to do different things depending on whether the participant makes an anticipatory headturn or not. If they don't make the headturn within the time limit before the reward stimulus comes on, we want to continue with more trials exactly like this one so they have more opportunity to learn the pattern. But if they've started to learn that the audio predicts the reward and make an anticipatory turn, we're ready to increase the delay in the next trial, and need to keep track of this successful trial as the start of a possible three in a row that would result in ending this phase.
We can use an OR combination of terminating conditions with JUMP to create two separate "branches" for each of these possibilities. If there's a headturn before the time limit, we'll jump to STEP 14. If not, we'll keep going to the next step. (For a more detailed explanation of this type of usage of JUMP, see this page.)
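Assembled, the branching statement might look like this sketch; STEP 14 and the 2-second limit are from the walkthrough, while the step number in the second clause is our assumption about how the miss branch that follows is numbered:

    UNTIL KEY L JUMP STEP 14 OR UNTIL TIME 2000 JUMP STEP 11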
The second UNTIL statement could also be written UNTIL TIME 2000 without the JUMP clause - JUMP isn't required when the result is simply to go to the next step. But when the protocol is shown broken up into parts like this, adding it makes it a little easier to track what's happening!
From here until STEP 14 defines what we'll do when the participant fails to make an anticipatory headturn on the trial. It looks just like the training phase: we'll still turn on the reward, so that the participant continues to have opportunities to learn the relationship between the audio stimulus and the reward.
Afterward, we'll use a LOOP statement to run more trials. We should include some way to end the study in case the participant doesn't begin making anticipatory headturns. This loop terminating condition will have us jump to the very end of the protocol (STEP 37), ending the study, if the participant fails to make an anticipatory headturn 10 times in a row.
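Written out, that loop might look roughly like the line below; STEP 9, the 9-TIMES count, and STEP 37 are all named in this walkthrough, but the exact placement of the JUMP clause is our assumption:

    LOOP STEP 9 UNTIL 9 TIMES JUMP STEP 37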
Using JUMP to move outside of this loop when the participant successfully makes an anticipatory headturn will reset the count on this UNTIL 9 TIMES loop statement, which is why it requires 10 misses in a row rather than 10 misses total. If you wanted a total count of misses instead, you could A) restructure your steps and JUMPs to not require moving to steps that are outside this "core loop", or B) use a loop terminating condition that depends on phase information, like TOTALLOOKAWAY (from the reward stimulus), or on global information, like whether a group is EMPTY, both of which are unaffected by moving outside the loop. If you preferred to continue presenting trials until the experimenter judged the participant was tired, you could use KEY instead.
Recall that STEP 14 was where we jumped if the participant successfully made an anticipatory turn. This will also result in presenting the reward.
But crucially, after this, instead of looping back, we'll keep going to STEP 17, which is only reached when the participant has a current streak of exactly one anticipatory turn. This way, we can track their progress toward the three in a row needed to move to the next phase. (See this page for further explanation of this type of usage of JUMP.)
STEP 17 will start another trial, which is largely identical to before - the only difference is the amount of delay between when the audio starts and when the reward turns on, during which they have an opportunity to make an anticipatory turn.
STEP 18 begins what happens when the participant had a current streak of one anticipatory turn, but missed this one, resetting their streak. So after presenting the reward as usual, it will jump back to STEP 9 to present the next trial - the step that is used whenever the current streak is zero.
STEP 20, however, is where we go when the participant made another anticipatory turn - two in a row now. So after presenting the reward as usual, we'll move on to STEP 23, which presents another trial but is only reached when the current streak is two in a row. The delay during which the participant has an opportunity to make an anticipatory turn increases again.
STEP 24 is where we end up when the participant missed the time window for the anticipatory turn. So after presenting the reward, we'll specify a jump back to STEP 9 to reset their streak.
But if we jumped to STEP 26 instead, we got here because the participant made another anticipatory turn. They already had two in a row to get to STEP 23, so the jump straight to STEP 26 means that they've now achieved three in a row - our criterion for ending the conditioning phase. So after presenting the reward, we'll end the phase and move straight on to the test phase.
In the test phase, trials will sometimes contain the change audio, and sometimes not. The participant should only be rewarded for successfully detecting the change stimulus by making an anticipatory headturn - this outcome is called a "hit." There are three more possible outcomes: the change audio is present but the participant does not make a headturn to show they detected it (a "miss"), the change audio is not present but the participant makes a headturn anyway (a "false alarm"), and the change audio is not present and the participant does not make a headturn (a "correct rejection").
We'll start a trial just like before, waiting for the experimenter to indicate the participant is ready, then selecting from the highest-level group for this phase until we have the final tags we'll use to play audio and turn on a light channel.
Since the test group contains both change trials and control trials, audiostim_test and reward_test end up being either the change audio and LEFT (the reward stimulus channel), respectively, or a silent audio file and RIGHT (the empty channel on the DMX box).
The test group was defined at the start of the protocol as having one copy of changetrial and one copy of controltrial, so (FROM test RANDOM) in the step above will select them with equal probability, but not guarantee an equal number of each across the test phase. If we wanted to ensure a particular number of each, we could define test to have that number of copies - e.g., for four of each trial type, LET test = {changetrial, changetrial, changetrial, changetrial, controltrial, controltrial, controltrial, controltrial} - and then use (TAKE test RANDOM) instead.
Like in the conditioning phase, we're interested in whether the participant makes an anticipatory headturn toward the reward (L), and we need to do different actions depending on their behavior. We'll have another OR combination of UNTIL statements that will allow us to jump to different sections depending on what happens.
STEP 32 is what will execute when the participant makes a headturn within 5 seconds of when the trial audio (either the change audio or the beginning of a silent file) plays. The case where the participant makes a headturn actually represents two different possible trial outcomes, depending on which audio file was playing. If the trial audio was chosen to be the change audio, this represents a hit, and we should present the reward. But if it was the silent file, this is a false alarm, and we shouldn't present the reward.
Because the audio and side name were paired originally, so that the change audio went with the side of the reward (LEFT) and the silent file went with the empty channel (RIGHT), we don't actually have to differentiate between these two outcomes in order to correctly reward or not reward the headturn. The tag reward_test contains the appropriate side name, so that if the change audio was chosen, the LEFT channel reward will now turn on, but if it was a control trial, nothing will visibly happen, and we'll simply wait a few seconds before we can start another trial.
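A sketch of this branch; STEP 32 and the eventual JUMP target STEP 35 are from the walkthrough, while STEP 33 and the durations are illustrative:

    STEP 32
    LIGHT reward_test ON
    UNTIL TIME 3000

    STEP 33
    LIGHT reward_test OFF
    AUDIO LEFT OFF
    UNTIL TIME 2000 JUMP STEP 35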
STEP 34 is only reached when the participant does not make a headturn shortly after the start of a trial. This behavior could be a miss if the change audio signal was actually present, or a correct rejection if it wasn't. Although when we ultimately analyze the data we'll consider the first outcome an incorrect response on the trial and the second a correct one, we don't need to differentiate between these possible situations now, because neither results in the presentation of a reward. In both cases, we simply need to end the trial and turn off whatever audio had been playing.
Regardless of which "branch" was taken to handle the outcome of the trial, we now end up at STEP 35 (there was a JUMP to here from the end of the step that handled hits and false alarms). We could specify a larger delay here if we wanted to make sure there was a certain minimum time in between trials, and in the next step we'll loop back to STEP 30 to run more test trials. This protocol will run a total of 8 test trials.
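That loop might be written like the line below; STEP 30 is from the walkthrough, and UNTIL 7 TIMES follows the same counting logic as the conditioning loop - the steps run once before the loop repeats them 7 more times, for 8 trials in total:

    LOOP STEP 30 UNTIL 7 TIMES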
When all eight test trials have been executed and the loop is done, the study is over! We'll end the test phase and turn off the background noise that's been playing from CENTER since the beginning.
See the resources page to download a copy of this protocol.