Habituation example - word-object pairings

About this protocol

This protocol is based on Experiment 3 of the classic Werker et al. (1998) study cited below. Infants are habituated to a single word-object pair. Later, they are presented with four test items: familiar object with familiar word, familiar object with novel word, novel object with familiar word, and novel object with novel word. Pre-test and post-test trials, consisting of a novel object and word that do not appear in any other trials, are also included.

Werker, J. F., Cohen, L. B., Lloyd, V. L., Casasola, M., & Stager, C. L. (1998). Acquisition of word–object associations by 14-month-old infants. Developmental Psychology, 34(6), 1289.

Starting the protocol

Starting definitions

This protocol will use one central display and a light positioned directly below the display. These would be our minimal starting definitions:

SIDES ARE {CENTER}
DISPLAYS ARE {CENTER}
LIGHTS ARE {CENTER}

However, if you have more lights connected in your system (e.g. for headturn preference procedure) and will be leaving them all connected during testing, you may need to define more lights so that you can identify which one should be CENTER. In that case, your starting definitions might instead look like this:

SIDES ARE {CENTER, RIGHT, LEFT}
DISPLAYS ARE {CENTER}
LIGHTS ARE {LEFT, CENTER, RIGHT}

Tags

Now we'll define our tags that reference files. We have three audio files and three video files - two of each for use in habituation/test trials, and one of each reserved for pre-test and post-test.

We could define these with LET statements. But because these will always be paired, it is convenient to use TYPEDLET. This will allow us to define all of our possible word-object pairings and randomly select a pair for presentation, rather than independently selecting audio and video and having less control over which appear together.

Our videos, which we'll call round, green, and blue, show toys being rotated by a hand on a black background. Our audio files consist of infant-directed repetitions of a nonword: deeb, geff, or mip.

TYPEDLET audio deeb = "C:\Users\ldev\Desktop\BITTSy\HabitExample\stim\deeb.wav"
TYPEDLET audio geff = "C:\Users\ldev\Desktop\BITTSy\HabitExample\stim\geff.wav"
TYPEDLET audio mip = "C:\Users\ldev\Desktop\BITTSy\HabitExample\stim\mip.wav"
TYPEDLET video blue = "C:\Users\ldev\Desktop\BITTSy\HabitExample\stim\blue.mp4"
TYPEDLET video green = "C:\Users\ldev\Desktop\BITTSy\HabitExample\stim\green.mp4"
TYPEDLET video round = "C:\Users\ldev\Desktop\BITTSy\HabitExample\stim\round.mp4"

Having defined these tags with TYPEDLET, we can use LINKED to define tags that pair them up appropriately. The tags mip and round will be reserved for pre-test and post-test, but all other pairings will occur in the test phase, and one of them will be featured in the habituation phase.

LINKED prepost = {mip, round}
LINKED deeb_blue = {deeb, blue}
LINKED deeb_green = {deeb, green}
LINKED geff_blue = {geff, blue}
LINKED geff_green = {geff, green}

Groups

We'll define two groups consisting of our LINKED tags. Both will contain all of the possible pairings of deeb, geff, blue, and green. At the start of each test session, we will randomly select one pairing from the first group for the infant to see during habituation. The test phase for all infants will consist of all four pairings, so our test_trials group will also contain all four.

LET habit_pairs = {deeb_blue, deeb_green, geff_blue, geff_green}
LET test_trials = {deeb_blue, deeb_green, geff_blue, geff_green}

Experiment settings

Now we define our experiment settings. Because this protocol has a habituation phase, we must define all of our habituation criteria here.

DEFINE WINDOWSIZE 3
DEFINE WINDOWOVERLAP NO
DEFINE CRITERIONREDUCTION 0.65
DEFINE BASISCHOSEN FIRST
DEFINE WINDOWTYPE FIXED

We could also define key assignments for live-coding - without them, experimenters will use C for looks CENTER to the light/screen, and W for looks AWAY.

STEPS for execution

Pre-test

Now for the STEPs that will start once we run the protocol. First, we'll define our pre-trial phase. Before each trial, we'll have a center light flash until the infant is paying attention. We'll have the experimenter press C to begin a CENTER look.

STEP 1
Phase Pretrial Start

STEP 2
LIGHT CENTER BLINK 250
UNTIL KEY C

Once the infant is looking, we can start a trial. We'll turn off the light and display the pre-test stimuli. Note that because we defined the LINKED tag prepost with both an audio tag component and a video tag component, we can reference the LINKED tag in the action statements for both AUDIO and VIDEO.

STEP 3
LIGHT CENTER OFF
Trial Start

STEP 4
VIDEO CENTER prepost LOOP
AUDIO CENTER prepost LOOP
UNTIL SINGLELOOKAWAY prepost GREATERTHAN 1000
UNTIL TIME 14000

Our trials throughout the experiment will last a maximum of 14 seconds. Trials will end when we reach this limit, or when the infant is recorded as looking away for at least 1 second. There is only one pre-test trial, so once it is over, we end the phase and move on to habituation.

STEP 5
Trial End
VIDEO CENTER OFF
AUDIO CENTER OFF

STEP 6
Phase End

Habituation

As we start the habituation phase, the first thing we need to do is randomly assign the participant to the word-object pair that they will see during habituation. There are four possibilities in the habit_pairs group.

STEP 7
Phase Habituation Start
LET pair = (FROM habit_pairs RANDOM)

Having selected in advance which word-object pair the participant will see, we can define the inter-trial period (with the blinking light) and a habituation trial. Note that the pair tag we selected from habit_pairs is again a LINKED tag that can be referenced in both of the action statements to play the audio and video.

STEP 8
LIGHT CENTER BLINK 250
UNTIL KEY C

STEP 9
LIGHT CENTER OFF
Trial Start

STEP 10
VIDEO CENTER pair LOOP
AUDIO CENTER pair LOOP
UNTIL SINGLELOOKAWAY pair GREATERTHAN 1000
UNTIL TIME 20000

STEP 11
Trial End
VIDEO CENTER OFF
AUDIO CENTER OFF

We'll play the exact same stimuli for the rest of our habituation trials, so now we'll define a loop. We want to keep playing trials until the infant either meets our habituation criteria or reaches the maximum of 20 habituation trials (19 of them via the loop).

STEP 12
LOOP STEP 8
UNTIL 19 TIMES
UNTIL CRITERIONMET

Note that our loop goes back to STEP 8, where we started the light blinking for the inter-trial period, but excludes the assignment of the pair back in STEP 7. This is why we chose which word-object pair would be presented before we needed to use it to display trial stimuli in STEP 10: we didn't want this LET statement to be included in the loop. We want this dynamic tag to be assigned as one pair and stay that way, so that we are repeatedly presenting the same word-object pair. If the LET statement were inside the loop steps, we would repeat the choose statement on every loop iteration, and we would show different word-object pairings on different trials.

In general, when you want to select from a group and be able to refer to the result of that selection throughout a phase, it's a good practice to make that selection in the same STEP where you define your phase's start. "Getting it out of the way" like this makes it easier to not accidentally loop over and reassign a dynamic tag that you would prefer to stay static.

Test phase

Once the infant has habituated or met the maximum number of trials, we move on to the test phase. We'll begin again with the inter-trial light flashing before beginning a trial.

STEP 13
Phase End

STEP 14
Phase Test Start

STEP 15
LIGHT CENTER BLINK 250
UNTIL KEY C

STEP 16
LIGHT CENTER OFF
Trial Start

Recall that our four test items are all in the test_trials group, and are LINKED tags with all the possible pairings of the audio deeb and geff, and the videos of the objects blue and green. We want to display these four word-object pairs in a random order, without replacement. We'll define one trial, then use a loop to run the remaining trials.

STEP 17
LET test_item = (TAKE test_trials RANDOM)
VIDEO CENTER test_item LOOP
AUDIO CENTER test_item LOOP
UNTIL TIME 14000
UNTIL SINGLELOOKAWAY test_item GREATERTHAN 1000

STEP 18
Trial End
VIDEO CENTER OFF
AUDIO CENTER OFF

The test phase is where we see the advantage of defining our pairs of tags via LINKED rather than selecting video and audio tags separately. If we had defined a test audio group and test video group, they would look like this:

LET test_audio = {deeb, deeb, geff, geff}
LET test_video = {blue, green, blue, green}

With random selection from each in turn across trials, there would be nothing to stop us from repeating a pairing, and thus failing to show all the combinations of words and objects. For example, on our first test trial we could randomly select deeb and blue - but there is no way to specify that if we choose deeb again from the audio group, green must be selected rather than blue from the video group. We could define groups that would be chosen from in a fixed order, arranging each of the audio and video tags so that all the pairings are present when they are selected using FIRST (and creating multiple copies of this protocol to counterbalance test trial order). But without LINKED tags, we could not use RANDOM selection in this protocol.

After our first test trial is done, we'll use another loop to play the remaining three, and this concludes our test phase.

STEP 19
LOOP STEP 15
UNTIL 3 TIMES

STEP 20
Phase End

Post-test

Lastly, we have a post-test trial, which is identical to our pre-test phase.

STEP 21
Phase Posttest Start

STEP 22
LIGHT CENTER BLINK 250
UNTIL KEY C

STEP 23
LIGHT CENTER OFF
LET prepost2 = (TAKE prepost RANDOM)
Trial Start

STEP 24
VIDEO CENTER prepost LOOP
AUDIO CENTER prepost LOOP
UNTIL SINGLELOOKAWAY prepost GREATERTHAN 1000
UNTIL TIME 14000

STEP 25
Trial End
VIDEO CENTER OFF
AUDIO CENTER OFF

STEP 26
Phase End

Now our experiment is done!

See the resources page for a copy of this protocol.


Preferential looking example - word recognition

About this protocol

This protocol is based on Newman & Morini (2017), cited below. This study focuses on toddlers' ability to recognize known words when another talker is speaking in the background. The target talker is always female, but the background talker is sometimes male, and sometimes another female talker. When there is a big difference in the fundamental frequency of two voices (as there is for the target female talker and background male talker in this study), adults will readily use this cue to aid in segregating the two speech signals and following the target talker. When the fundamental frequencies of the two talkers are similar (as in the female background talker condition), the task is more difficult. This study asks whether toddlers also take advantage of a fundamental frequency cue when it is present, and demonstrate better word recognition when the background talker is male.

Newman, R. S., & Morini, G. (2017). Effect of the relationship between target and masker sex on infants' recognition of speech. Journal of the Acoustical Society of America, 141(2), EL164-EL169.

This study presents videos showing four pairs of objects. Each object pair is presented on 5 trials, for a total of 20 trials. One of the five trials for each pair is a baseline trial, in which the speaker talks generically about an object but does not name either one ("Look at that!"). In the other four trials per pair, the target talker names one of the objects. On half of the trials the target object is on the left side of the screen, and on half on the right. All trials, including baseline trials, are presented with the target speaker audio mixed with either a male or female background talker.

Although only 20 test videos are presented to each participant, far more combinations of object pair, target object, target position, and background talker are possible. In this example, we'll demonstrate how to set up this study in several different ways. First, we'll set up the whole protocol with a pre-selected set of 20 videos that satisfy the balancing requirements above, and present all of them in a random order. Next, we'll talk about what we would change in that protocol to present them in a fixed order, and create multiple fixed-order study versions. Lastly, we'll talk about how to use selection from groups and pseudorandomization in BITTSy to select and present appropriate subsets of the possible stimuli for different participants.

Starting the protocol

The opening section of a protocol, before any STEPs are defined, typically takes up the bulk of your protocol file. This is especially the case in preferential looking studies, in which there are often large sets of video stimuli to define as tags and arrange in groups. While this part may seem to take a while, by the time you've done it, you're almost done!


Tip for setting up tags and groups in your own studies: Copy+Paste and Find+Replace tools in text editors like Notepad and Notepad++ are your friend! When adapting an existing protocol to reference a different set of stimuli, use find+replace all to change the whole file path, up to the file name, on ALL of your stimuli at once to be your new study folder. When adding more tags to reference more stimulus files, copy+paste another tag definition, then go back and fix the tag names and the names of the files to be your new ones.

Starting definitions

The first lines in any protocol are your starting definitions. Here, we will only use one central TV display. We'll name it CENTER.

SIDES ARE {CENTER}
DISPLAYS ARE {CENTER}

Tags

Before creating this protocol, we pre-selected 20 videos that fit balancing requirements for the study, such as having an equal number of trials with the female background talker as with the male background talker. We happen to have more possible videos for this study, but a study with fewer factors or more trials, that displays all stimuli to every participant, could be constructed exactly like this. (Later, we'll cover how to select stimuli within the protocol itself.)

Here are the definitions for our 20 trial video tags. All are named by 1) which object is on the left, 2) which object is on the right, 3) m or f for the background talker, and 4) the target word spoken by the target talker ("generic" for baseline videos in which the target speaker said "look at that!").

LET truck_ball_f_generic = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\truck_ball_f_generic.mp4"
LET horse_bird_m_generic = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\horse_bird_m_generic.mp4"
LET cat_dog_m_generic = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\cat_dog_m_generic.mp4"
LET blocks_keys_f_generic = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\blocks_keys_f_generic.mp4"

LET truck_ball_f_BALL = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\truck_ball_f_BALL.mp4"
LET truck_ball_f_TRUCK = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\truck_ball_f_TRUCK.mp4"
LET truck_ball_m_BALL = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\truck_ball_m_BALL.mp4"
LET truck_ball_m_TRUCK = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\truck_ball_m_TRUCK.mp4"
LET blocks_keys_f_BLOCKS = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\blocks_keys_f_BLOCKS.mp4"
LET blocks_keys_f_KEYS = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\blocks_keys_f_KEYS.mp4"
LET blocks_keys_m_BLOCKS = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\blocks_keys_m_BLOCKS.mp4"
LET blocks_keys_m_KEYS = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\blocks_keys_m_KEYS.mp4"
LET cat_dog_f_CAT = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\cat_dog_f_CAT.mp4"
LET cat_dog_f_DOG = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\cat_dog_f_DOG.mp4"
LET cat_dog_m_CAT = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\cat_dog_m_CAT.mp4"
LET cat_dog_m_DOG = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\cat_dog_m_DOG.mp4"
LET horse_bird_f_BIRD = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\horse_bird_f_BIRD.mp4"
LET horse_bird_f_HORSE = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\horse_bird_f_HORSE.mp4"
LET horse_bird_m_BIRD = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\horse_bird_m_BIRD.mp4"
LET horse_bird_m_HORSE = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\horse_bird_m_HORSE.mp4"

We have one more tag to define, which is the attention-getter video we'll present before starting each trial.

LET attentiongetter = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\baby.mp4"

Groups

We have just one group to define for this protocol. It will contain our 20 trial videos, so that we can later randomly select these videos from the group to present.

LET trial_videos = {blocks_keys_m_KEYS, horse_bird_m_HORSE, cat_dog_f_CAT, horse_bird_m_BIRD, horse_bird_f_HORSE, truck_ball_m_TRUCK, blocks_keys_f_generic, cat_dog_f_DOG, blocks_keys_f_KEYS, blocks_keys_m_BLOCKS, truck_ball_m_BALL, truck_ball_f_generic, cat_dog_m_generic, cat_dog_m_CAT, truck_ball_f_TRUCK, horse_bird_f_BIRD, horse_bird_m_generic, truck_ball_f_BALL, blocks_keys_f_BLOCKS, cat_dog_m_DOG}

Optional experiment settings

After defining tags and groups, we would define optional experiment settings if we had any to include. Most of these are unimportant for preferential looking studies. One that is important is the background color. BITTSy will cover the screen in either black or white whenever there are no visual stimuli being presented on the display. In preferential looking, we are often presenting video stimuli back-to-back, but there can sometimes be perceptible gaps between one video ending and the next starting. If you are using cues for coding trial timing that depend on the appearance of the screen or the level of light cast from the screen onto the participant's face, you will want to ensure that the background color defined here matches the background of your inter-trial attention-getters, rather than your trial videos, so that the background color being displayed is not mistaken for part of a trial.

STEPs for execution

First, we will begin a phase. In a study like this that has only one phase, this is completely optional - but it is not a bad habit to include them.

STEP 1
Phase Test Start

The next thing we want to do is display our attention-getter video, before we show a trial. We'll want to come back to this part of the protocol again later (in a loop), to show more attention-getter videos. We can't put the attention-getter in the same step as the phase start flag without also repeatedly starting the phase - which doesn't make sense.

So we'll start a new STEP and display the attention-getter with a VIDEO action statement. The short attention-getter clip will play repeatedly until the experimenter decides the participant is ready for a trial - by pressing C.

STEP 2
VIDEO CENTER attentiongetter LOOP
UNTIL KEY C

When you have an UNTIL statement in a STEP, it must always be the last line. So we'll start a new STEP that will happen as soon as the experimenter presses the key. In it, we'll just stop the video.

STEP 3
VIDEO CENTER OFF

Next, we need to start a trial.

STEP 4
Trial Start


Our lab prefers to consistently put trial start flags in a STEP all by themselves, just so it is easier to visually scan a protocol file and find where a trial is defined. But this is not necessary. This trial start flag could equivalently be the last line of the previous STEP or in the next STEP preceding the line that plays the video.

Our trials need 1) a tag of a video to display, 2) a command to play the video, and 3) a step terminating condition that tells us when to move on. We define the dynamic tag vid to hold whatever tag we select, which we'll then play in our action statement. We'll use the UNTIL FINISHED step terminating condition so that we don't move on to the next step until the video has played through.

STEP 5
LET vid = (TAKE trial_videos RANDOM)
VIDEO CENTER vid ONCE
UNTIL FINISHED

Now the trial ends. This has to be a new STEP, because UNTIL statements are always the last line of the STEP that contains them, but it happens immediately once the terminating condition (the video ending) is met.

STEP 6
Trial End
VIDEO CENTER OFF


Even though we know the video has finished playing by the start of this STEP, we have still included a statement to explicitly turn off the video now. When videos finish playing, the processes that BITTSy uses to play them stay active. A "garbage collector" will take care of this so that videos that are done stop taking up any resources, but this is not immediate. Explicitly turning videos OFF frees up these resources right away. It is not required, but it ensures best performance, particularly if your computer is under-powered or if a lot of programs are running in the background. However, if you do include OFF commands in this type of case, you should always turn off trial stimuli after the Trial End command. This is not crucial in this study, but for studies that are live-coded, in-progress looks are logged when the Trial End line is executed. If the stimulus is explicitly turned OFF before the end of the trial, BITTSy may not be able to associate the in-progress look with the stimulus that was removed.

We've now defined the basic structure of our experiment - attention-getter, then trial. For the rest of the trials, we can use a loop. We'll jump back to STEP 2, where the attention-getter started, go back through a whole trial, and repeat the loop again - until the loop has executed a total of 19 times, giving us a total of 20 trials.

STEP 7
LOOP STEP 2
UNTIL 19 TIMES


We could have combined STEPs 6 & 7. Like trial start flags, our lab likes to put loops in a STEP by themselves to make them more visually obvious, but it doesn't matter whether you combine these or break them into two STEPs. All that BITTSy requires is that a loop and its terminating condition are the final lines in the STEP that contains them.

Lastly, since we defined a phase start, we'll need to end it.

STEP 8
Phase End

Alternate options for stimuli selection and randomization

What if you don't want trial videos to be presented totally randomly? Or what if you don't want to pre-define a set of videos to present, but rather pull different sets of videos for each participant? Below are some examples of how you might set up your protocol differently for different levels of control over stimuli selection and presentation order.

Fixed orders

You might wish to define a fixed trial order, where every time the protocol is run, participants see the same stimuli in the same order. This requires minimal changes to the example protocol in the previous section.

Tags

Making a fixed-order version of this study would also involve pre-selecting the 20 trial videos that will be shown. You can define just these videos in your tags section, or you can define all of them - it doesn't matter if some are unused in your experiment. Parsing your protocol will take a little longer if you have extra tags defined, but this step is typically done while setting up for a study rather than when the participant is ready, and does not present any issues for running study sessions.

Groups

Crucially for a fixed-order protocol, you will define your trial_videos group to list tags in the exact order in which you want them to appear in the study.

STEPs

Later, when you select a tag from the trial_videos group to present on a particular trial, you'll select by FIRST rather than RANDOM.

STEP 5
LET vid = (TAKE trial_videos FIRST)
VIDEO CENTER vid ONCE
UNTIL FINISHED

Because you're using TAKE to choose without replacement, each time you select a tag from trial_videos, you can't select that same tag again. So each time, the tag that is the FIRST tag in the list that is still available for selection will be the one you want - the next tag in line from the trial order you defined when making the trial_videos group.

All of the rest of the protocol would be the same!

If you wished to define multiple fixed-order versions of the study, you could simply save additional copies of the protocol file in which you change the ordering of tags in the definition of trial_videos, or swap in different tags that you have defined. This would be the only change necessary to make the additional versions.

Pseudorandom orders

You might not want to have a pre-defined set of 20 trial videos to use, given that with the different stimulus types and conditions in our study, we actually have 48 possible videos. You might also want to pseudorandomize when trial types are selected - for example, participants may get bored more quickly if they see the same object pair for several trials in a row, so we might want to keep the same pair from being randomly selected too many times back-to-back.

Below, we'll walk through one way that you could define the protocol to select trial videos from the entire set, in a manner that satisfies particular balancing requirements and presents trials in a pseudorandom order. This is a significantly more complicated case than the previous versions of this study we've defined - not for running your study, or for writing your whole BITTSy protocol, but specifically for planning the structure of the groups in your protocol, upon which stimuli selection will critically rely. Before beginning to create such a protocol, it is useful to take time to think about the layers of selection you will do - if it can be drawn as a selection tree structure (like the example here), with selections between branches all at equal probability, it can be implemented in BITTSy.

Tags

We'll be defining all the possible video files and tags for this version of the study, because we won't specify in advance which ones the participant will see. First, our baseline trials. We have 4 object pairs, 2 left-right arrangements of each pair, and 2 background talkers, for a total of 16 possible baseline videos.

LET ball_truck_f_generic = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\ball_truck_f_generic.mp4"
LET ball_truck_m_generic = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\ball_truck_m_generic.mp4"
LET truck_ball_f_generic = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\truck_ball_f_generic.mp4"
LET truck_ball_m_generic = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\truck_ball_m_generic.mp4"
LET horse_bird_f_generic = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\horse_bird_f_generic.mp4"
LET horse_bird_m_generic = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\horse_bird_m_generic.mp4"
LET bird_horse_f_generic = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\bird_horse_f_generic.mp4"
LET bird_horse_m_generic = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\bird_horse_m_generic.mp4"
LET cat_dog_f_generic = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\cat_dog_f_generic.mp4"
LET cat_dog_m_generic = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\cat_dog_m_generic.mp4"
LET dog_cat_f_generic = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\dog_cat_f_generic.mp4"
LET dog_cat_m_generic = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\dog_cat_m_generic.mp4"
LET blocks_keys_f_generic = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\blocks_keys_f_generic.mp4"
LET blocks_keys_m_generic = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\blocks_keys_m_generic.mp4"
LET keys_blocks_f_generic = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\keys_blocks_f_generic.mp4"
LET keys_blocks_m_generic = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\keys_blocks_m_generic.mp4"

Now our trial videos. We have even more of these, because now the target word said by the main talker could be either of the two objects - a total of 32 videos & tags.

LET ball_truck_f_BALL = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\ball_truck_f_BALL.mp4"
LET ball_truck_f_TRUCK = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\ball_truck_f_TRUCK.mp4"
LET ball_truck_m_BALL = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\ball_truck_m_BALL.mp4"
LET ball_truck_m_TRUCK = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\ball_truck_m_TRUCK.mp4"
LET truck_ball_f_BALL = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\truck_ball_f_BALL.mp4"
LET truck_ball_f_TRUCK = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\truck_ball_f_TRUCK.mp4"
LET truck_ball_m_BALL = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\truck_ball_m_BALL.mp4"
LET truck_ball_m_TRUCK = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\truck_ball_m_TRUCK.mp4"
LET blocks_keys_f_BLOCKS = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\blocks_keys_f_BLOCKS.mp4"
LET blocks_keys_f_KEYS = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\blocks_keys_f_KEYS.mp4"
LET blocks_keys_m_BLOCKS = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\blocks_keys_m_BLOCKS.mp4"
LET blocks_keys_m_KEYS = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\blocks_keys_m_KEYS.mp4"
LET keys_blocks_f_BLOCKS = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\keys_blocks_f_BLOCKS.mp4"
LET keys_blocks_f_KEYS = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\keys_blocks_f_KEYS.mp4"
LET keys_blocks_m_BLOCKS = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\keys_blocks_m_BLOCKS.mp4"
LET keys_blocks_m_KEYS = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\keys_blocks_m_KEYS.mp4"
LET cat_dog_f_CAT = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\cat_dog_f_CAT.mp4"
LET cat_dog_f_DOG = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\cat_dog_f_DOG.mp4"
LET cat_dog_m_CAT = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\cat_dog_m_CAT.mp4"
LET cat_dog_m_DOG = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\cat_dog_m_DOG.mp4"
LET dog_cat_f_CAT = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\dog_cat_f_CAT.mp4"
LET dog_cat_f_DOG = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\dog_cat_f_DOG.mp4"
LET dog_cat_m_CAT = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\dog_cat_m_CAT.mp4"
LET dog_cat_m_DOG = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\dog_cat_m_DOG.mp4"
LET bird_horse_f_BIRD = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\bird_horse_f_BIRD.mp4"
LET bird_horse_f_HORSE = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\bird_horse_f_HORSE.mp4"
LET bird_horse_m_BIRD = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\bird_horse_m_BIRD.mp4"
LET bird_horse_m_HORSE = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\bird_horse_m_HORSE.mp4"
LET horse_bird_f_BIRD = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\horse_bird_f_BIRD.mp4"
LET horse_bird_f_HORSE = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\horse_bird_f_HORSE.mp4"
LET horse_bird_m_BIRD = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\horse_bird_m_BIRD.mp4"
LET horse_bird_m_HORSE = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\horse_bird_m_HORSE.mp4"


Repetitive tag definitions like these do not need to all be typed out by hand! We used spreadsheet functions like clicking-and-dragging to copy cells, JOIN, and CONCATENATE to quickly make these lines in Google Sheets, and copy-pasted them into our protocol file. See here for an example.

The last tag to define is our attention-getter video.

LET attentiongetter = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\baby.mp4"

Groups

Defining the groups in this protocol is the really critical step for allowing us to control how stimuli are later balanced - how many of each target pair we show, how many times the named object is on the left or the right, how many trials have a male background talker vs. female.

We will construct a nested structure of groups. It is often helpful to work backwards and plan your group structure from the top down - that is, from the highest-level group that you will select from first in your protocol, to the lowest-level group that directly contains the tags referencing your stimuli files. We'll end up with 3 levels of groups for this protocol.

Your highest-level group's components should be defined around the characteristic that you want the most control over for pseudorandomization. Here, we'll make that our object pairs. Our highest-level group, trial_videos, will contain groups for each of the four object pairs.

LET trial_videos = {ball-truck_ALL, bird-horse_ALL, blocks-keys_ALL, cat-dog_ALL}


If your study has more than one factor that you crucially need to control with pseudorandomization, you may find constructing multiple fixed-order protocols that follow your restrictions to be the most practical solution.

Now let's define one of these groups for an object pair. We're going to have five trials per object pair. One of these has to be a baseline trial. For the remaining four, we'll stipulate that they are:

  1. a video with a female background talker where the object named by the target talker is on the right

  2. a video with a female background talker where the object named by the target talker is on the left

  3. a video with a male background talker where the object named by the target talker is on the right

  4. a video with a male background talker where the object named by the target talker is on the left

For the ball-truck pair, this group looks like this:

LET ball-truck_ALL = {ball-truck_baseline, ball-truck_f_right, ball-truck_f_left, ball-truck_m_right, ball-truck_m_left}


This example balances some factors across trials, but not others. For example, we do not balance how many times one object is named versus the other. We could just as easily balance this in place of left-right answers, and define those groups accordingly. Deciding what to keep balanced and what to leave random is the crux of setting up your group structure!

Now we'll define these component groups, which will contain our video tags. First, the baseline group. There are four baseline videos for each object pair.

LET ball-truck_baseline = {ball_truck_f_generic, ball_truck_m_generic, truck_ball_f_generic, truck_ball_m_generic}

For the other four groups, we define the two videos that match each trial type.

LET ball-truck_f_right = {ball_truck_f_TRUCK, truck_ball_f_BALL}
LET ball-truck_f_left = {ball_truck_f_BALL, truck_ball_f_TRUCK}
LET ball-truck_m_right = {ball_truck_m_TRUCK, truck_ball_m_BALL}
LET ball-truck_m_left = {ball_truck_m_BALL, truck_ball_m_TRUCK}

We'll define the nested group structure for our other object pairs the same way.


Once you've defined one set, the rest is quick and easy: copy it over to a blank document, use find+replace to swap out the object names, and paste the new groups back into your protocol!

Because BITTSy will validate the protocol line by line, we can't actually define our groups in backwards order - we'll need the definitions of the lower-level groups to come before the higher-level groups that reference them. So we'll just swap them around. We end up with this:

LET ball-truck_baseline = {ball_truck_f_generic, ball_truck_m_generic, truck_ball_f_generic, truck_ball_m_generic}
LET ball-truck_f_right = {ball_truck_f_TRUCK, truck_ball_f_BALL}
LET ball-truck_f_left = {ball_truck_f_BALL, truck_ball_f_TRUCK}
LET ball-truck_m_right = {ball_truck_m_TRUCK, truck_ball_m_BALL}
LET ball-truck_m_left = {ball_truck_m_BALL, truck_ball_m_TRUCK}

LET bird-horse_baseline = {bird_horse_f_generic, bird_horse_m_generic, horse_bird_f_generic, horse_bird_m_generic}
LET bird-horse_f_right = {bird_horse_f_HORSE, horse_bird_f_BIRD}
LET bird-horse_f_left = {bird_horse_f_BIRD, horse_bird_f_HORSE}
LET bird-horse_m_right = {bird_horse_m_HORSE, horse_bird_m_BIRD}
LET bird-horse_m_left = {bird_horse_m_BIRD, horse_bird_m_HORSE}

LET blocks-keys_baseline = {blocks_keys_f_generic, blocks_keys_m_generic, keys_blocks_f_generic, keys_blocks_m_generic}
LET blocks-keys_f_right = {blocks_keys_f_KEYS, keys_blocks_f_BLOCKS}
LET blocks-keys_f_left = {blocks_keys_f_BLOCKS, keys_blocks_f_KEYS}
LET blocks-keys_m_right = {blocks_keys_m_KEYS, keys_blocks_m_BLOCKS}
LET blocks-keys_m_left = {blocks_keys_m_BLOCKS, keys_blocks_m_KEYS}

LET cat-dog_baseline = {cat_dog_f_generic, cat_dog_m_generic, dog_cat_f_generic, dog_cat_m_generic}
LET cat-dog_f_right = {cat_dog_f_DOG, dog_cat_f_CAT}
LET cat-dog_f_left = {cat_dog_f_CAT, dog_cat_f_DOG}
LET cat-dog_m_right = {cat_dog_m_DOG, dog_cat_m_CAT}
LET cat-dog_m_left = {cat_dog_m_CAT, dog_cat_m_DOG}

LET ball-truck_ALL = {ball-truck_baseline, ball-truck_f_right, ball-truck_f_left, ball-truck_m_right, ball-truck_m_left}
LET bird-horse_ALL = {bird-horse_baseline, bird-horse_f_right, bird-horse_f_left, bird-horse_m_right, bird-horse_m_left}
LET blocks-keys_ALL = {blocks-keys_baseline, blocks-keys_f_right, blocks-keys_f_left, blocks-keys_m_right, blocks-keys_m_left}
LET cat-dog_ALL = {cat-dog_baseline, cat-dog_f_right, cat-dog_f_left, cat-dog_m_right, cat-dog_m_left}

LET trial_videos = {ball-truck_ALL, bird-horse_ALL, blocks-keys_ALL, cat-dog_ALL}

STEPs

Now that we've defined our group structure, implementing the stimulus selection is really easy. Our protocol starts off just like the original version, playing the attention-getter video before every trial.

STEP 1
Phase Test Start

STEP 2
VIDEO CENTER attentiongetter LOOP
UNTIL KEY C

STEP 3
VIDEO CENTER OFF

In our original setup with the pre-determined stimulus set, we then started a trial, picked a video, and displayed it. In this one, we have multiple selection steps, and BITTSy takes a (very, very small) bit of time to execute each one. Given that we have several, we might prefer to place these before our trial starts, just to ensure that there's never a gap between when BITTSy marks the beginning of a trial and when the video actually starts playing. (If you are not relying on BITTSy logs for trial timing and instead use cues in your participant video, such as lighting changes in the room or an image of your display in a mirror, this doesn't matter at all!)

STEP 4
LET pair = (FROM trial_videos RANDOM {with max 4 repeats, with max 2 repeats in succession})
LET type = (TAKE pair RANDOM)
LET vid = (FROM type RANDOM)

Above, you can see how we selected from each layer of our group structure, highest to lowest. First, we picked an object pair. We need to restrict how many times we can select each pair using with max 4 repeats - we require exactly 5 trials of each object pair. This repeat clause, combined with the number of times we'll loop over this section, will ensure we get the intended result. We also choose to add the with max 2 repeats in succession clause, which ensures that we never show videos of the same pair more than 3 trials in a row.

Next, we pick a trial type - baseline, female background right target, female background left target, male background right target, or male background left target. We only want one of each to come up in the experiment for any given object pair, so we'll use TAKE to choose without replacement. The next time the same object pair's group is picked to be pair, subgroups chosen previously to be type will not be available for selection.

Lastly, we need to pick a particular video tag from type. We will do this randomly. It doesn't matter whether we define this selection with FROM or TAKE because, per the previous selection where we chose type, we can't get this same subgroup again later.

Now that we have the trial video we want to display, we can run a trial.

STEP 5
Trial Start
VIDEO CENTER vid ONCE
UNTIL FINISHED

STEP 6
Trial End
VIDEO CENTER OFF

Just like in the original example version of this protocol, we'll loop over the steps that define an attention-getter + a trial to display a total of 20 trials. Then, our experiment is done.

STEP 7
LOOP STEP 2
UNTIL 19 TIMES

STEP 8
Phase End

See the resources page for copies of the versions of this protocol.


Preferential looking example - fast-mapping

About this protocol

This protocol is based on the Newman et al. (2020) study cited below, testing toddlers' fast-mapping from noise-vocoded speech via a preferential looking paradigm with an initial training period on the word-object mappings. In a training phase, participants are taught the names for two objects, which appear alternately on the screen. Following training, both objects appear on-screen simultaneously with no accompanying audio, to assess baseline preferences and familiarize participants with the idea that the objects will now appear together. Subsequently, the objects appear together in these same positions across several test trials. Sometimes the speaker asks the child to look toward one object ("find the coopa!") and sometimes directs them to look at the other object ("find the needoke!")

Newman, R. S., Morini, G., Shroads, E., & Chatterjee, M. (2020). Toddlers' fast-mapping from noise-vocoded speech. The Journal of the Acoustical Society of America, 147(4), 2432-2441.

This study was not originally run in BITTSy, and it used four fixed-order experiment files to control condition assignment and trial order rather than the randomization set up in this example protocol. However, this kind of multi-phase study, with multiple conditions and restrictions on stimulus presentation order in each phase, is a great example of a more complex preferential looking study (see the previous example for a simpler case). Below is a walk-through of how you could recreate its structure in BITTSy.

Starting the protocol

Starting definitions

As in any protocol, first come the starting definitions:

In this protocol, we are only using a single display, and our audio is playing from stereo speakers. We therefore only name one display in the DISPLAYS definition, and leave out the definitions for LIGHTS and AUDIO. We only need to name one SIDE to match with the display.
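Based on that description, these would presumably be the same minimal definitions used in the previous preferential looking example:

SIDES ARE {CENTER}
DISPLAYS ARE {CENTER}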

Tags

Next, we're going to set up all the tags.

Our visual stimuli are two animated 3D models, one of a spiky ball on a pedestal (which we call "spike") and one that looks like an F in profile (which we call "fred"). In training trials, we'll present one of these two objects, and an audio track will identify it as either the needoke or the coopa.

After the training phase, we'll have a baseline trial with both objects on-screen and no accompanying audio. We'll want to randomize which is on the left and which is on the right. In the test phase, to keep the task as easy as possible, we'll keep the object positions the same as in the baseline trial.

This baseline trial has "fred" on the left and "spike" on the right. (The order of objects in our video filenames identifies their ordering on screen left-to-right.)

And this baseline trial can be presented with either of these two sets of test videos with that same object positioning, depending on how the objects were labeled in the training phase.

Comments (lines starting with a #) do not affect the execution of your study, and are helpful for leaving explanations of your tag naming conventions and study structure.

Here are the rest of our tags, for the opposite object positioning.

Note that in our test video tag names, we've labelled which object is the target object and which side is the correct answer. This makes it extremely convenient to determine these later when looking at the reports from study sessions - it is directly in the tag name! This is not necessary for this protocol, just nice. It does lead to some redundancy in files - for example, coopa_spike_left and coopa_fred_right are the same video file. Which tag we use to call up that video just depends on which object was named coopa earlier in the study.
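As a sketch of what a pair of these definitions might look like - the folder path and video filename here are hypothetical, but coopa_spike_left and coopa_fred_right are the tag names discussed above:

# Hypothetical paths - two tags that reference the same video file
LET coopa_spike_left = "C:\Users\ldev\Desktop\BITTSy\FastMapping\fred_spike_COOPA.mp4"
LET coopa_fred_right = "C:\Users\ldev\Desktop\BITTSy\FastMapping\fred_spike_COOPA.mp4"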

Lastly, we'll have an attention-getter video that plays in-between all the trials.

Groups

Now, let's walk through the groups we'll define for this study. There are two possible ways to pair our two objects and two audio labels. Either coopa goes with the object "spike" and needoke goes with the object "fred," or vice versa. We will call these two training phase configurations Order 1 and Order 2, respectively. We will want to randomly assign participants to one of these two ways of pairing the labels and objects.

We're also going to randomly determine which sides the two objects will appear on in baseline and test trials. With this next layer of random assignment, we'll end up with four distinct orders. In the "A" orders, "fred" will be on the left, and in the "B" orders, "spike" is on the left.

Now for the test phase groups. In Order 1A, coopa goes with "spike" and needoke goes with "fred," and "fred" is on the left and "spike" on the right. These are the test video tags that fit that.

Similarly, we'll set up the test video groups for the other orders.

Now we're ready to define our groups of groups - each whole order, from training to test.

So that we can randomly pick one of these four orders at the start of the experiment, there will be one more layer to this group structure. We'll have a group that contains all of the orders.
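A sketch of that top layer, with hypothetical group names (the three subgroups per order follow the training/baseline/test breakdown described below):

# Hypothetical names - each order contains its training, baseline, and test groups
LET order1A = {order1A_training, order1A_baseline, order1A_test}
LET order1B = {order1B_training, order1B_baseline, order1B_test}
LET order2A = {order2A_training, order2A_baseline, order2A_test}
LET order2B = {order2B_training, order2B_baseline, order2B_test}

LET orders = {order1A, order1B, order2A, order2B}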

Optional experiment settings

At this point, we would define any optional experiment settings that we needed. But we don't need any for this protocol. For other preferential looking studies, you most notably might want to change the background color that is visible when there are no stimuli on the screen, so that gaps between videos are not disruptive.

STEPS for execution

Now that the experiment is set up, we'll define what will actually happen when we click to run the protocol.

Training phase

In the first STEP, we'll start the training phase. We'll want to delineate which trials are in the training phase and which are in the test phase so that we can easily separate them in reports from study sessions. Then, we choose the order the child will participate in (1A, 1B, 2A, or 2B) from our orders group. The resulting group contains three other groups that together make up the videos that can be in that order: its training phase videos, its baseline video, and its test videos. We defined them in this order, so we can use FIRST to choose them from first to last, without replacement, and assign them names that we can use to refer to them later. We've named these dynamic tags in a way that makes explicit that these are still groups. We'll need to select particular stimuli from these groups later, to display in their respective phases.
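A sketch of what this first STEP could look like, assuming the group names from the sketches above (baseline_group and test_group are the dynamic tag names used later in this walk-through; training_group is hypothetical):

STEP 1
Phase Training Start
LET order = (TAKE orders RANDOM)
# TAKE ... FIRST pulls the order's subgroups from first to last, without replacement
LET training_group = (TAKE order FIRST)
LET baseline_group = (TAKE order FIRST)
LET test_group = (TAKE order FIRST)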

Now that we've picked an order, it's time to start displaying stimuli. First, an attention getter. This is played until the experimenter indicates that the child is looking at the screen and ready for a trial to start, by pressing the X key on the keyboard.
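Assuming an attention-getter tag named attentiongetter, these STEPs might mirror the parallel ones in the previous example, with X as the key:

STEP 2
VIDEO CENTER attentiongetter LOOP
UNTIL KEY X

STEP 3
VIDEO CENTER OFF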

Now that the child is ready to see a trial, let's show one.
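A sketch of the trial itself, assuming trial videos play once through as in the previous example (training_trial is a hypothetical dynamic tag name):

STEP 4
Trial Start

STEP 5
# Never select the same training video twice in a row
LET training_trial = (FROM training_group RANDOM {with max 0 repeats in succession})
VIDEO CENTER training_trial ONCE
UNTIL FINISHED

STEP 6
Trial End
VIDEO CENTER OFF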

The group of training videos contains two tags. We'll choose one of them randomly. We use a randomization clause {with max 0 repeats in succession} to restrict which tags are chosen from this group after we make the first selection. This one means that we can never display the same tag in the group twice in a row. Because there are only two in the group, trials will alternate between them: children will either hear coopa, needoke, coopa, needoke... or needoke, coopa, needoke, coopa...

With this randomization clause, we have specified exactly what we want for the whole phase, right here. We can use a loop to run the rest of our trials.

When we finish the first trial and reach this loop, we'll jump back to STEP 2, where the attention getter video started, and we'll continue, running another whole trial, until we reach STEP 7 again. This will repeat until the loop terminating condition is reached: when we've looped back over the section seven times. Along with the first execution of this section, before we started looping, we display a total of eight training trials. This is the entirety of the training phase.
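A sketch of that loop STEP, per the description above:

STEP 7
LOOP STEP 2
UNTIL 7 TIMES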

Test phase

Now we'll start the test phase. We'll still have the attention-getter appear in between every trial, and the first video we show here will be our baseline video.

Recall that the baseline_group had the baseline video we wanted in this order as its only tag. We'll still have to make a selection and assign it to baseline_trial in order to play the video. But with only one tag in the group, there are many possible ways to define the choose statement.
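One possible version of that choose statement, as a sketch:

LET baseline_trial = (TAKE baseline_group FIRST)
# With a single tag in the group, (FROM baseline_group RANDOM) would behave identically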

Next, the rest of our test videos, which we will again run in a loop.

Note that these steps are almost identical to steps 2-7, which made up the training phase. Parallel trial structures like these make it really simple to write protocol files - simply copy and paste the section of STEPs and change step numbers, tags, and other flags as necessary.

In the test phase, we want to display eight total test trials. We achieve this when the loop in STEP 19 has run seven times. There are two tags available in the test_group for each order, and we want each to be displayed exactly four times. After each tag's first selection, it can have 3 repeat selections. We define this in the choose statement in STEP 17. If we wanted to further restrict the trial order (for example, if we wanted to prevent a random order that showed the same video four times in a row, then the other one four times in a row) we could do so with additional randomization clauses.
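A sketch of what that choose statement in STEP 17 might look like (test_trial is a hypothetical dynamic tag name):

STEP 17
# Each of the two test videos can be selected at most 4 times total
LET test_trial = (FROM test_group RANDOM {with max 3 repeats})
VIDEO CENTER test_trial ONCE
UNTIL FINISHED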

Once this loop is completed, the test phase is done, and so is our experiment!

See the resources page for a copy of this full protocol.

Headturn preference paradigm example

About this protocol

The following protocol is based on the Newman (2009) paper cited below. A headturn preference procedure is used to assess whether infants can recognize their own name being spoken when multiple other talkers are speaking in the background.

Newman, R. S. (2009). Infants' listening in multitalker environments: Effect of the number of background talkers. Attention, Perception & Psychophysics, 71, 822-836.

This study was run prior to the development of BITTSy on an older, hard-wired system in our lab. However, the following example BITTSy protocol replicates its structure and settings, and has been used in our lab for studies in this same body of work.

In headturn preference procedure, we'll start off with the participant facing a neutral direction, then have the participant turn their head to listen to an audio stimulus. For as long as they are interested in that stimulus, they tend to keep looking toward where the sound is coming from. When they get bored, they tend to look away. Across trials, we can use how long they listen to stimuli as a measure of listening preference.

Infants as young as four months old will listen longer to their own name (a highly familiar word) than to other, unfamiliar names. In this study, we'll present names with noise of other people talking in the background. If infants still listen longer to the audio that contains their own name, we can say that they can still discern their name and recognize it as familiar, despite these harder listening conditions. By testing infants' success in this task at different age ranges, with different types of background noise, and at different noise levels, we can better understand infants' speech perception abilities.

This study begins with a phase that familiarizes infants with the procedure, and continues until they accumulate a certain amount of looking time toward the audio stimuli in this phase. In the test phase, we present three blocks of four trials each, with the trial order randomized within each block.

Starting the protocol

Starting definitions

As always, we open with starting definitions. For this headturn preference study, we will use three lights in our testing booth: one directly in front of the participant, and one each on the left and right sides of the testing booth which the participant must turn their head 90 degrees to view. Our lab's starting definitions look like this:
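A sketch of those definitions, patterned after the HPP variant shown in the habituation example (whether you also define a display or audio channels depends on your setup):

SIDES ARE {CENTER, RIGHT, LEFT}
LIGHTS ARE {LEFT, CENTER, RIGHT}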

A note about HPP and defining LEFT and RIGHT

In this headturn preference study, we will be turning on light/speaker pairs that are located on the same side of the testing booth. It will be important to define your LEFT and RIGHT lights in a way that matches up with your left and right speaker channels, so that each light/speaker pair can be turned on and off with the same side name. Here, the lights that we define as LEFT and RIGHT match what we see in our booth from the experimenter's perspective, and we have set up our stereo audio system with the left-channel and right-channel speakers directly behind these LEFT and RIGHT lights, respectively.

You can choose to name the sides of your booth according to your experimenter's perspective or your participant's perspective, whenever they don't match - it doesn't matter, as long as the lights and speakers that should be treated as a pair have the same side name, and it's clear to your experimenters how to identify these sides via your key assignments for live-coding.

See the starting definitions page for more on how to order these definitions in a way that fits your own testing setup.

Tags

In this protocol, we have six files to define and assign tag names - two clips of classical music to use in the familiarization phase, and four audio files for the test phase, in which a speaker repeatedly calls a name with the noise of several people talking in the background. One file has the name the participant is most commonly called ("Bella" in this example), one has a name that matches the stress pattern of the participant's name ("Mason"), and two foil names have the same number of syllables but a different stress pattern ("Elise" and "Nicole").

Because the stimuli in this experiment are particular to each participant (one of them must be that participant's name), we make a copy of the protocol before each participant's visit that has the appropriate filenames filled in for their session. Tag names let us label these generically by trial type (name, matched foil, or unmatched foil) rather than by the particular name being called, and once we change the filenames they reference, no further changes to the protocol are necessary to prepare for each participant's session.

These tag names, which are kept consistent, will also appear in the reports of session data rather than the filenames, allowing us to easily combine data across participants even when stimulus files themselves differ. (And keep reports de-identified, in this study in which their first name is a stimulus!)
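A sketch of these six tag definitions - the folder path, filenames, and tag names here are hypothetical, with the filenames swapped per participant while the generic tag names stay fixed:

# Hypothetical paths and names - Bella is this participant's own name
LET music1 = "C:\Users\ldev\Desktop\BITTSy\HPPExample\music1.wav"
LET music2 = "C:\Users\ldev\Desktop\BITTSy\HPPExample\music2.wav"
LET name = "C:\Users\ldev\Desktop\BITTSy\HPPExample\Bella.wav"
LET matchedfoil = "C:\Users\ldev\Desktop\BITTSy\HPPExample\Mason.wav"
LET foil1 = "C:\Users\ldev\Desktop\BITTSy\HPPExample\Elise.wav"
LET foil2 = "C:\Users\ldev\Desktop\BITTSy\HPPExample\Nicole.wav"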

Groups

Having defined our tags, now we'll create groups that will help us define our stimulus selection later, in the familiarization and test phases of our experiment.

The training phase is straightforward - we'll just present those two music clips throughout.
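The trainingmusic group referenced later in this walk-through, using the hypothetical tag names above:

LET trainingmusic = {music1, music2}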

But in the test phase, we'll use a block structure. Each of the four trial types will be presented in a random order in each block, and we'll have a total of three blocks. All our test blocks have the same four stimuli. However, we'll define three copies of this group - testblock1 to testblock3 - which will allow us to randomize the order of stimuli within each block completely independently from the other blocks.
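A sketch of those three block groups (tag names from the sketch above):

LET testblock1 = {name, matchedfoil, foil1, foil2}
LET testblock2 = {name, matchedfoil, foil1, foil2}
LET testblock3 = {name, matchedfoil, foil1, foil2}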

You might wonder why we can't define a single testblock group, and just restrict selection from that group to produce the desired block structure. See the last example in the documentation on groups for why this doesn't work, and its discussion of why this nested group structure is a good solution for creating blocks in experiments.

And we need an overarching group for the test phase that contains our blocks:
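testaudio is the group name used in the test phase below:

LET testaudio = {testblock1, testblock2, testblock3}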

In addition to randomly ordering stimuli, we will want to randomly order stimulus presentation locations. We can set up for this by creating groups of sides. CENTER in this experiment is used only for in between trials; LEFT and RIGHT are the sides for selection here. We'll place some restrictions on the randomization order (e.g. to prevent too many trials in a row on the same side), but we'll have these restrictions reset between the familiarization phase and test phase. Therefore, we'll make two groups of sides, one to use in each phase, so that when we switch to the test phase, we're starting fresh on which side choices are allowable.
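A sketch of those two side groups (the group names are hypothetical):

LET famsides = {LEFT, RIGHT}
LET testsides = {LEFT, RIGHT}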

    hashtag
    Optional experiment settings

    At this point in the protocol, we would define any optional experiment settings we needed. In an HPP study, relevant ones for consideration are COMPLETELOOK and COMPLETELOOKAWAY, as well as key assignments for live coding. Here, we'll use all the defaults, so we won't need to include any - but this is how they would appear.
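
    These lines, reproduced from the full protocol listing at the end of this page, spell out those default values explicitly:

    DEFINE COMPLETELOOK 100
    DEFINE COMPLETELOOKAWAY 100
    DEFINE ASSIGN LEFT KEY L
    DEFINE ASSIGN RIGHT KEY R
    DEFINE ASSIGN CENTER KEY C
    DEFINE ASSIGN AWAY KEY W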

    circle-info

    See also our page on live coding for recommendations on key assignments and details on how live coding works in BITTSy.

    hashtag
    STEPS for execution

    Now we're ready for the body of the protocol - what will happen when we click the button to run the protocol.

    hashtag
    Familiarization

    First, we'll start off the training phase, in which we simply familiarize the participant with the basic procedure. A light will start blinking in the center of the booth, getting the participant to attend to this neutral direction before a trial starts, so that they will later demonstrate a clear headturn to attend to stimuli on the sides of the booth. Once the experimenter judges that they are looking at it (by pressing the key assigned to CENTER), it turns off.

    Immediately, one of the side lights will turn on. We choose this side randomly to be either LEFT or RIGHT, but restrict the randomization to not choose the same side more than 3 times in a row. Once the participant turns and looks at this side light, the experimenter will press a key to indicate they are now looking in that direction.
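
    Here is this step as it appears in the full protocol listing - the {with max 2 repeats in succession} clause is what caps a side at three consecutive selections:

    STEP 4
    LET side1 = (FROM trainingsides RANDOM {with max 2 repeats in succession})
    LIGHT side1 BLINK 250
    UNTIL KEY L
    UNTIL KEY R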

    circle-info

    In a protocol that assigned different keys to LEFT and RIGHT, the terminating conditions for this step should be UNTIL statements for those keys, rather than L and R.

    Now, immediately once the look toward the light is recorded, a trial starts. Audio will begin to play from the speaker directly behind the light. It will continue to play until either the file ends, or the participant looks away for at least 2 seconds - whichever comes first.

    Here, we choose the audio for the training trial from the trainingmusic group such that we cannot pick the same music clip twice in a row. The two clips will play alternately throughout the training phase.
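
    The trial step, from the full protocol listing at the end of this page:

    STEP 5
    Trial Start
    LET training_audio = (FROM trainingmusic RANDOM {with max 0 repeats in succession})
    AUDIO side1 training_audio ONCE
    UNTIL FINISHED
    UNTIL SINGLELOOKAWAY training_audio GREATERTHAN 2000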

    Once one of the step terminating conditions is met, we want the trial to end. The light should turn off, and so should the audio. (If the FINISHED terminating condition was the one that was met, the audio has already stopped playing, and the OFF command does nothing. But if the SINGLELOOKAWAY one was met instead, it would continue to play if we didn't turn it off now.)

    We end this step with UNTIL TIME 100 just to have a tiny perceptible break between turning this side light off and the start of the next inter-trial period - which we'll get to via a loop.

    Up to now, our steps have defined the basic structure of our training phase: start the CENTER light, wait for a look and turn off, start a side light, wait for a look and play a trial. We can now define a loop that will repeat this basic structure until the participant is sufficiently familiarized to proceed to the test phase.

    We'll loop back to STEP 2 (where we turned on the CENTER light to re-orient the participant between trials), and execute all the steps back through STEP 7, running another trial in the process. We keep looping back over these STEPs until the child accumulates 25 seconds of total looking time to each of the music clips. Once this happens, we consider the participant to be sufficiently familiarized with the study procedure, and end the training phase.
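
    The loop step, as written in the full protocol listing:

    STEP 7
    LOOP STEP 2
    UNTIL TOTALLOOK trainingmusic1 GREATERTHAN 25000 THIS PHASE and TOTALLOOK trainingmusic2 GREATERTHAN 25000 THIS PHASE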

    hashtag
    Test

    Now, time for the test phase. Recall that we have three test blocks. Each is a group of tags, within the group testaudio that contains all three blocks. The first thing we'll need to do is pick a block.

    All the blocks in this experiment contain the same tags, so it doesn't matter whether we choose by FIRST or RANDOM. But we do want to use TAKE rather than FROM. When we later choose the stimuli from within the block for each trial, we're going to use TAKE to remove them so that they aren't chosen on more than one trial within the block. If at the end of the block this now-empty block were still available for choosing from testaudio (i.e. if we used FROM), we could get in trouble if we picked it again - we'd try to select stimulus files from the block, but there wouldn't be any more in there to pick, and our protocol would have an execution error and stop running.
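
    From the full protocol listing, the block selection is a single TAKE statement:

    STEP 10
    LET block = (TAKE testaudio FIRST)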

    The test phase of the protocol will start off the same way that the familiarization phase did - by defining the inter-trial period, with the flashing CENTER light, then choosing which side of the booth the test trial will be presented on. We'll select from testsides this time, and give it the dynamic tag name side2 so that we can refer to this side for the light, and later the audio. No two tags, including dynamic tags, can have the same name, which is why we define our placeholder tag for the active side as side1 in training and side2 now.

    Now for a test trial. From the block that we chose back in STEP 10, we'll select a random tag to play - either ownname, matchedfoil, or one of the two unmatched foils. We want to select these without replacement using TAKE, so that we never repeat a single stimulus within the same block.
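
    The test trial step, from the full protocol listing:

    STEP 14
    Trial Start
    LET trial_audio = (TAKE block RANDOM)
    AUDIO side2 trial_audio ONCE
    UNTIL FINISHED
    UNTIL SINGLELOOKAWAY trial_audio GREATERTHAN 2000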

    Now, we'll define a loop that will let us play a trial (with the blinking CENTER light in between) for each trial type in the block. We loop the section 3 times, so that we end with 4 trials total.

    Note that we loop back to STEP 11. That was after we selected a block from the testaudio group, so the dynamic tag block refers to the same block throughout this whole loop. This is what we want - to TAKE each of the items from the same block and play them in a trial by the time we're done with this loop.
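
    The inner loop, from the full protocol listing:

    STEP 16
    LOOP STEP 11
    UNTIL 3 TIMES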

    From STEP 10, where we select a block, up to this point constitutes one block out of three in our test phase. This is only a third of our trials - but we're almost done with writing our protocol. Because each block has the same structure, we can make another loop to repeat the process of selecting a block and playing four trials within it - we'll add an outer loop, which will contain the loop we've already defined.

    When we loop back to STEP 10, we pick the next block out of testaudio and repeat the whole structure, including the loop at STEP 16 that lets us run through all four trials within the new block we've selected. When this loop in STEP 17 finishes and our second block of trials has played, we loop through again for our third block. Then, when the STEP 17 loop has run 2 times - meaning all three of our test blocks have been completed - our experiment is done.
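
    And the outer loop, from the full protocol listing:

    STEP 17
    LOOP STEP 10
    UNTIL 2 TIMES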

    These kinds of loops may not be totally intuitive if you are used to thinking of your experiments as a linear progression of trials. However, loops are well-suited for any kind of experiment that has repeating units with the same internal structure, and they will save you tons of effort in defining your protocols!

    See the resources page for a copy of this protocol.

    DISPLAYS ARE {CENTER}
    SIDES ARE {CENTER}
    LET coopa_fred = "C:\Users\ldev\Desktop\BITTSy\CIWordLearning\coopa_fred.mp4"
    LET coopa_spike = "C:\Users\ldev\Desktop\BITTSy\CIWordLearning\coopa_spike.mp4"
    LET needoke_fred = "C:\Users\ldev\Desktop\BITTSy\CIWordLearning\needoke_fred.mp4"
    LET needoke_spike = "C:\Users\ldev\Desktop\BITTSy\CIWordLearning\needoke_spike.mp4"
    LET silent_fred_spike = "C:\Users\ldev\Desktop\BITTSy\CIWordLearning\silent_fred_spike.mp4"
    # Fred = Coopa, Spike = Needoke
    LET coopa_fred_left = "C:\Users\ldev\Desktop\BITTSy\CIWordLearning\coopa_fred_spike.mp4"
    LET needoke_spike_right = "C:\Users\ldev\Desktop\BITTSy\CIWordLearning\needoke_fred_spike.mp4"
    # Fred = Needoke, Spike = Coopa
    LET needoke_fred_left = "C:\Users\ldev\Desktop\BITTSy\CIWordLearning\needoke_fred_spike.mp4"
    LET coopa_spike_right = "C:\Users\ldev\Desktop\BITTSy\CIWordLearning\coopa_fred_spike.mp4"
    LET silent_spike_fred = "C:\Users\ldev\Desktop\BITTSy\CIWordLearning\silent_spike_fred.mp4"
    # Spike = Coopa, Fred = Needoke
    LET coopa_spike_left = "C:\Users\ldev\Desktop\BITTSy\CIWordLearning\coopa_spike_fred.mp4"
    LET needoke_fred_right = "C:\Users\ldev\Desktop\BITTSy\CIWordLearning\needoke_spike_fred.mp4"
    # Spike = Needoke, Fred = Coopa
    LET needoke_spike_left = "C:\Users\ldev\Desktop\BITTSy\CIWordLearning\needoke_spike_fred.mp4"
    LET coopa_fred_right = "C:\Users\ldev\Desktop\BITTSy\CIWordLearning\coopa_spike_fred.mp4"
    LET attentiongetter = "C:\Users\ldev\Desktop\BITTSy\CIWordLearning\baby.mp4"
    LET order1_train = {coopa_spike, needoke_fred}
    LET order2_train = {coopa_fred, needoke_spike}
    LET order1A_baseline = {silent_fred_spike}
    LET order1B_baseline = {silent_spike_fred}
    LET order2A_baseline = {silent_fred_spike}
    LET order2B_baseline = {silent_spike_fred}
    LET order1A_test = {coopa_spike_right, needoke_fred_left}
    LET order1B_test = {coopa_spike_left, needoke_fred_right}
    LET order2A_test = {coopa_fred_left, needoke_spike_right}
    LET order2B_test = {coopa_fred_right, needoke_spike_left}
    LET order1A = {order1_train, order1A_baseline, order1A_test}
    LET order1B = {order1_train, order1B_baseline, order1B_test}
    LET order2A = {order2_train, order2A_baseline, order2A_test}
    LET order2B = {order2_train, order2B_baseline, order2B_test}
    LET orders = {order1A, order1B, order2A, order2B}
    STEP 1
    Phase Train Start
    LET order = (FROM orders RANDOM)
    LET training_group = (TAKE order FIRST)
    LET baseline_group = (TAKE order FIRST)
    LET test_group = (TAKE order FIRST)
    STEP 2
    VIDEO CENTER attentiongetter LOOP
    UNTIL KEY X
    
    STEP 3
    VIDEO CENTER OFF
    STEP 4
    Trial Start
    
    STEP 5
    LET training_trial = (FROM training_group RANDOM {with max 0 repeats in succession})
    VIDEO CENTER training_trial ONCE
    UNTIL FINISHED
    
    STEP 6
    Trial End
    VIDEO CENTER OFF
    STEP 7
    LOOP STEP 2
    UNTIL 7 TIMES
    STEP 8
    Phase End
    STEP 9
    Phase Test Start
    
    STEP 10
    VIDEO CENTER attentiongetter LOOP
    UNTIL KEY X
    
    STEP 11
    VIDEO CENTER OFF
    
    STEP 12
    Trial Start
    
    STEP 13
    LET baseline_trial = (FROM baseline_group FIRST)
    VIDEO CENTER baseline_trial ONCE
    UNTIL FINISHED
    
    STEP 14
    Trial End
    VIDEO CENTER OFF
    STEP 15
    VIDEO CENTER attentiongetter LOOP
    UNTIL KEY X
    
    STEP 16
    Trial Start
    
    STEP 17
    LET test_trial = (FROM test_group RANDOM {with max 3 repeats})
    VIDEO CENTER test_trial ONCE
    UNTIL FINISHED
    
    STEP 18
    Trial End
    VIDEO CENTER OFF
    
    STEP 19
    LOOP STEP 15
    UNTIL 7 TIMES
    STEP 20
    Phase End
    SIDES ARE {CENTER, RIGHT, LEFT}
    LIGHTS ARE {LEFT, CENTER, RIGHT}
    LET trainingmusic1 = "C:\Users\ldev\Desktop\BITTSy\HPPExample\trainingmusic1.wav"
    LET trainingmusic2 = "C:\Users\ldev\Desktop\BITTSy\HPPExample\trainingmusic2.wav"
    LET ownname = "C:\Users\ldev\Desktop\BITTSy\HPPExample\bella_sm_10dB.wav"
    LET matchedfoil = "C:\Users\ldev\Desktop\BITTSy\HPPExample\mason_sm_10dB.wav"
    LET unmatchedfoil1 = "C:\Users\ldev\Desktop\BITTSy\HPPExample\elise_sm_10dB.wav"
    LET unmatchedfoil2 = "C:\Users\ldev\Desktop\BITTSy\HPPExample\nicole_sm_10dB.wav"
    LET trainingmusic = {trainingmusic1, trainingmusic2}
    LET testblock1 = {ownname, matchedfoil, unmatchedfoil1, unmatchedfoil2}
    LET testblock2 = {ownname, matchedfoil, unmatchedfoil1, unmatchedfoil2}
    LET testblock3 = {ownname, matchedfoil, unmatchedfoil1, unmatchedfoil2}
    LET testaudio = {testblock1, testblock2, testblock3}
    LET trainingsides = {LEFT, RIGHT}
    LET testsides = {LEFT, RIGHT}
    DEFINE COMPLETELOOK 100
    DEFINE COMPLETELOOKAWAY 100
    DEFINE ASSIGN LEFT KEY L
    DEFINE ASSIGN RIGHT KEY R
    DEFINE ASSIGN CENTER KEY C
    DEFINE ASSIGN AWAY KEY W
    STEP 1
    Phase Train Start
    
    STEP 2
    LIGHT CENTER BLINK 250
    UNTIL KEY C
    
    STEP 3
    LIGHT CENTER OFF
    STEP 4
    LET side1 = (FROM trainingsides RANDOM {with max 2 repeats in succession})
    LIGHT side1 BLINK 250
    UNTIL KEY L
    UNTIL KEY R
    STEP 5
    Trial Start
    LET training_audio = (FROM trainingmusic RANDOM {with max 0 repeats in succession})
    AUDIO side1 training_audio ONCE
    UNTIL FINISHED
    UNTIL SINGLELOOKAWAY training_audio GREATERTHAN 2000
    STEP 6
    Trial End
    LIGHT side1 OFF
    AUDIO side1 OFF
    UNTIL TIME 100
    STEP 7
    LOOP STEP 2
    UNTIL TOTALLOOK trainingmusic1 GREATERTHAN 25000 THIS PHASE and TOTALLOOK trainingmusic2 GREATERTHAN 25000 THIS PHASE
    
    STEP 8
    Phase End
    STEP 9
    Phase Test Start
    
    STEP 10
    LET block = (TAKE testaudio FIRST)
    STEP 11
    LIGHT CENTER BLINK 250
    UNTIL KEY C
    
    STEP 12
    LIGHT CENTER OFF
    
    STEP 13
    LET side2 = (FROM testsides RANDOM {with max 2 repeats in succession})
    LIGHT side2 BLINK 250
    UNTIL KEY L
    UNTIL KEY R
    STEP 14
    Trial Start
    LET trial_audio = (TAKE block RANDOM)
    AUDIO side2 trial_audio ONCE
    UNTIL FINISHED
    UNTIL SINGLELOOKAWAY trial_audio GREATERTHAN 2000
    
    STEP 15
    Trial End
    AUDIO side2 OFF
    LIGHT side2 OFF
    UNTIL TIME 100
    STEP 16
    LOOP STEP 11
    UNTIL 3 TIMES
    STEP 17
    LOOP STEP 10
    UNTIL 2 TIMES
    STEP 18
    Phase End

    Habituation example - familiarization to a category

    hashtag
    About this protocol

    This protocol is based on a commonly-given example of habituation rather than a particular study. The question is whether young infants can recognize that objects of the same kind but with their own unique appearances all belong to the same category. Can infants recognize a pattern in the images that are being presented - that they are all cats, or that they are all dogs? And when an image is presented that does not belong to the category, do they notice that it is different?

    As infants recognize the pattern or "rule" in the habituation phase (that all the images are the same kind of animal), we expect them to start paying less attention to the individual images. This decrement in looking time across trials is the principle of habituation. But what really demonstrates whether they have learned the pattern is whether they show an increase in looking time when presented with something outside of the category, relative to another novel example of the trained category.

    Here, we'll habituate infants to either instances of cats or instances of dogs. Later, to both groups, we'll present one new image each of a dog and a cat. We might think that breeds of dogs are more visually dissimilar from each other than breeds of cats, and we might expect that infants who are habituated to cats will have greater success in detecting which test phase image doesn't belong to their learned category. They may be more likely to recognize that the dog presented in the test phase is something different and more interesting than another cat image, while infants who are habituated to dogs may be less likely to recognize the cat in the test phase as particularly novel.

    At the very beginning and very end of our experiment, we'll have pre-test and post-test trials. These help us see whether the participant is as actively engaged at the end of the experiment as they were at the beginning. They let us differentiate children who didn't recognize the category switch but were still paying attention in general (looking longer at the post-test stimulus, even if they didn't look long during the test phase) from children who were simply bored or inattentive (looking very little at the post-test stimulus).

    hashtag
    Starting the protocol

    hashtag
    Starting definitions

    We'll begin with our starting definitions. In this protocol, we'll only use one central display, and code looks toward and away from that monitor.

    hashtag
    Tags

    We have a lot of tags to define for this protocol, because we'll be using a lot of image files. We'll have a maximum of twenty images of either cats or dogs displayed in the habituation phase of the experiment, plus one of each for test, and two unrelated images for a pre-test and post-test (these should be consistently really interesting to infants - but as a placeholder, we'll use turtles). We'll also show an attention-getter video in between trials.

    This file definition section is a very large portion of our protocol. We've abbreviated it below.

    circle-check

    For tags like our cat and dog stimuli, which are named systematically, never type the whole thing out - there's a much easier way! We use Excel or Google Sheets: click and drag to fill cells with the components of these lines that will be the same, auto-fill a series to make columns of the numbers in increasing order, and use a CONCATENATE function that we then click and drag to apply down the rows to put each of the lines together. Then you can just copy and paste the results into your protocol!

    hashtag
    Groups

    First, let's define groups for habituation and our pre-test and post-test.

    We'll have two conditions that participants are randomly assigned to - they will either be habituated to the group cats, or the group dogs. After they are habituated to that group, they will see the images dog_test and cat_test. In this protocol, infants will always see the image from the category they were not habituated to as the first image of the test phase, then the novel example of the habituated category on their second test trial. Let's create groups that list what they'll see in each of these possible conditions, in order of presentation.
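
    From the full protocol listing at the end of this page, those condition groups are:

    LET habitcats = {cats, dog_test, cat_test}
    LET habitdogs = {dogs, cat_test, dog_test}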

    Note that these two groups, habitcats and habitdogs, contain first a group, then two tags that refer directly to files. We'll need to remember this as we use these groups for stimulus selection and presentation later. In the habituation phase, when we're pulling items from the group cats or dogs, we'll need a choose statement to pick a tag from the group before we present it onscreen. But in the test phase, we can display these tags directly - they refer to particular files, rather than a group of files.

    Lastly, so that we can randomly select a condition for the participant at the start of a study session, we'll make a group that contains both.
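
    From the full protocol listing:

    LET conditions = {habitdogs, habitcats}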

    hashtag
    Experiment settings

    Some optional experiment settings may be useful here. This is a live-coded experiment, so we might want to set the minimum length of look that "counts" for looking time calculations. We also might want to change the key assignments to something other than C for CENTER and W for AWAY, or change the background color.

    We also have to specify habituation settings for this study. These should be decided based on previous studies, or some piloting in your lab. Here are the settings we chose for this example.
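
    From the full protocol listing at the end of this page:

    DEFINE COMPLETELOOKAWAY 100
    DEFINE COMPLETELOOK 100
    DEFINE WINDOWSIZE 2
    DEFINE WINDOWOVERLAP NO
    DEFINE CRITERIONREDUCTION 0.65
    DEFINE BASISCHOSEN LONGEST
    DEFINE WINDOWTYPE SLIDING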

    hashtag
    STEPS for execution

    hashtag
    Pre-test

    Now, our experiment begins. First, we'll have a pre-test phase, where we'll show something unrelated to our habituation and test images. Here we're using two similar images for pre- and post-test and will select them randomly from the prepost group, but it is common to use the same image for both.

    Like all other trials in our study, we'll play an attention-getter video beforehand, and only start the trial once the experimenter indicates the child is looking at the screen. Then, the experimenter will continue to code the child's looks toward and away from the image being displayed. If the child has a single look away from the screen that lasts at least 2 seconds, or if the image has been displayed for a maximum length of 20 seconds, the trial will end.
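
    The pre-test trial step, from the full protocol listing - the two UNTIL statements implement the 2-second lookaway and 20-second maximum:

    STEP 4
    IMAGE CENTER prepost1
    UNTIL SINGLELOOKAWAY prepost1 GREATERTHAN 2000
    UNTIL TIME 20000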

    hashtag
    Habituation

    Next comes our habituation phase. First, we will need to assign this participant to one of our study conditions - habituating to cats, or habituating to dogs. We'll choose this randomly.

    Recall that this condition we pick (either habitcats or habitdogs) contains three tags:

    1. the group of tags to display in habituation

    2. a tag referring to the test item from the novel category

    3. a tag referring to the test item from the familiarized category

    We need to refer to the first one in the habituation phase, so in this same step, let's pull it out of our chosen condition with another choose statement.
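
    From STEP 7 of the full protocol listing:

    LET condition = (FROM conditions RANDOM)
    LET habit_category = (TAKE condition FIRST)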

    Now, we'll display another attention-getter and a habituation trial.

    We want to keep running habituation trials until the child either meets our habituation criterion, or reaches a maximum number of trials. We'll do this by looping back to STEP 8 and repeating this section until one of those conditions is met. When one of them is, our habituation phase is over.
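
    The loop step, from the full protocol listing:

    STEP 12
    LOOP STEP 8
    UNTIL 19 TIMES
    UNTIL CRITERIONMET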

    hashtag
    Test phase

    Now for our test phase. We'll start it off, and keep having attention-getter videos in between trials.

    What goes in our test trial? Recall that the group called condition that we chose earlier for this participant has two remaining items in it, after we selected the habituation group from it with our earlier TAKE statement. They are our two test items, in the order we want them presented. We can use another TAKE statement to remove them in order and display them.
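
    The test trial step, from the full protocol listing:

    STEP 17
    LET test_item = (TAKE condition FIRST)
    IMAGE CENTER test_item
    UNTIL TIME 20000
    UNTIL SINGLELOOKAWAY test_item GREATERTHAN 2000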

    There are two test trials - so let's loop to repeat this section to play the second one.

    hashtag
    Post-test

    Lastly, the post-test. This is identical to our pre-test phase earlier, so we can specify it in our protocol the same way. We just copy-pasted it and changed the step numbers, phase name, and the name of the dynamic tag.

    And the experiment is done! See the resources page for a full copy of this protocol.

    SIDES ARE {CENTER}
    DISPLAYS ARE {CENTER}
    LET attentiongetter = "C:\Users\ldev\Desktop\BITTSy\HabitExample\stim\baby.mp4"
    LET cat1 = "C:\Users\ldev\Desktop\BITTSy\HabitExample\stim\cats\cat1.png"
    LET cat2 = "C:\Users\ldev\Desktop\BITTSy\HabitExample\stim\cats\cat2.png"
    ...
    LET cat20 = "C:\Users\ldev\Desktop\BITTSy\HabitExample\stim\cats\cat20.png"
    LET cat_test = "C:\Users\ldev\Desktop\BITTSy\HabitExample\stim\cats\cat21.png"
    LET dog1 = "C:\Users\ldev\Desktop\BITTSy\HabitExample\stim\dogs\dog1.png"
    LET dog2 = "C:\Users\ldev\Desktop\BITTSy\HabitExample\stim\dogs\dog2.png"
    ...
    LET dog20 = "C:\Users\ldev\Desktop\BITTSy\HabitExample\stim\dogs\dog20.png"
    LET dog_test = "C:\Users\ldev\Desktop\BITTSy\HabitExample\stim\dogs\dog21.png"
    LET turtle1 = "C:\Users\ldev\Desktop\BITTSy\HabitExample\stim\turtles\turtle1.png"
    LET turtle2 = "C:\Users\ldev\Desktop\BITTSy\HabitExample\stim\turtles\turtle2.png"
    LET cats = {cat1, cat2, cat3, cat4, cat5, cat6, cat7, cat8, cat9, cat10, cat11, cat12, cat13, cat14, cat15, cat16, cat17, cat18, cat19, cat20}
    LET dogs = {dog1, dog2, dog3, dog4, dog5, dog6, dog7, dog8, dog9, dog10, dog11, dog12, dog13, dog14, dog15, dog16, dog17, dog18, dog19, dog20}
    LET prepost = {turtle1, turtle2}
    LET habitcats = {cats, dog_test, cat_test}
    LET habitdogs = {dogs, cat_test, dog_test}
    LET conditions = {habitdogs, habitcats}
    BACKGROUND WHITE
    DEFINE COMPLETELOOKAWAY 100
    DEFINE COMPLETELOOK 100
    DEFINE WINDOWSIZE 2
    DEFINE WINDOWOVERLAP NO
    DEFINE CRITERIONREDUCTION 0.65
    DEFINE BASISCHOSEN LONGEST
    DEFINE WINDOWTYPE SLIDING
    STEP 1
    Phase Pretrial Start
    
    STEP 2
    VIDEO CENTER attentiongetter LOOP
    UNTIL KEY C
    
    STEP 3
    VIDEO CENTER OFF
    LET prepost1 = (TAKE prepost RANDOM)
    Trial Start
    
    STEP 4
    IMAGE CENTER prepost1
    UNTIL SINGLELOOKAWAY prepost1 GREATERTHAN 2000
    UNTIL TIME 20000
    
    STEP 5
    Trial End
    IMAGE CENTER OFF
    
    STEP 6
    Phase End
    STEP 7
    Phase Habituation Start
    LET condition = (FROM conditions RANDOM)
    LET habit_category = (TAKE condition FIRST)
    STEP 8
    VIDEO CENTER attentiongetter LOOP
    UNTIL KEY C
    
    STEP 9
    VIDEO CENTER OFF
    Trial Start
    
    STEP 10
    LET habit_item = (TAKE habit_category RANDOM)
    IMAGE CENTER habit_item
    UNTIL SINGLELOOKAWAY habit_item GREATERTHAN 2000
    UNTIL TIME 20000
    
    STEP 11
    Trial End
    IMAGE CENTER OFF
    STEP 12
    LOOP STEP 8
    UNTIL 19 TIMES
    UNTIL CRITERIONMET
    
    STEP 13
    Phase End
    STEP 14
    Phase Test Start
    
    STEP 15
    VIDEO CENTER attentiongetter LOOP
    UNTIL KEY C
    
    STEP 16
    VIDEO CENTER OFF
    Trial Start
    STEP 17
    LET test_item = (TAKE condition FIRST)
    IMAGE CENTER test_item
    UNTIL TIME 20000
    UNTIL SINGLELOOKAWAY test_item GREATERTHAN 2000
    
    STEP 18
    Trial End
    IMAGE CENTER OFF
    STEP 19
    LOOP STEP 15
    UNTIL 1 TIMES
    
    STEP 20
    Phase End
    STEP 21
    Phase Posttest Start
    
    STEP 22
    VIDEO CENTER attentiongetter LOOP
    UNTIL KEY C
    
    STEP 23
    VIDEO CENTER OFF
    LET prepost2 = (TAKE prepost RANDOM)
    Trial Start
    
    STEP 24
    IMAGE CENTER prepost2
    UNTIL SINGLELOOKAWAY prepost2 GREATERTHAN 2000
    UNTIL TIME 20000
    
    STEP 25
    Trial End
    IMAGE CENTER OFF
    
    STEP 26
    Phase End

    Putting it all together: Example protocols

    This section features walkthroughs of examples of some common study paradigms. Check the Resources page for the full protocols and stimuli sets if you would like to try running these example protocols or adapt them for your own studies.

    hashtag
    Preferential looking

    Preferential looking example - word recognition
    Preferential looking example - fast-mapping

    hashtag
    Headturn Preference Paradigm

    Headturn preference paradigm example

    hashtag
    Habituation

    Habituation example - familiarization to a category
    Habituation example - word-object pairings

    hashtag
    Conditioned Headturn

    Conditioned Headturn - signal detection

    Conditioned Headturn - signal detection

    hashtag
    About this protocol

    This protocol demonstrates a common application of a conditioned headturn procedure: signal detection. It isn't strictly based on any particular study, although it is conceptually similar to many in the realm of infant signal detection in noise (e.g. Trehub, Bull, & Schneider, 1981; Werner & Bargones, 1991).

    In this example study, multi-talker background noise plays continuously throughout. During some trials, an additional voice that repeatedly calls the participant's name begins playing too. When the name is being called and the participant turns toward the source of the voice (a "hit"), they are rewarded with a desirable stimulus (e.g. a toy being lit up or beginning to move). When the name is being called but their behavior does not indicate that they detected it (i.e. they don't turn toward it - a "miss"), the reward stimulus is not presented, nor is it presented when the name is not being played, whether they turn (a "false alarm") or not (a "correct rejection").

    This is a simple example of a signal detection task because during the test phase, it does not adjust the intensity of the signal (voice calling the name) relative to the noise, as you would when attempting to establish a detection threshold.

    hashtag
    Starting the protocol

    hashtag
    Starting definitions

    This protocol will use two separate channels on a DMX dimmer pack. In the protocol we'll refer to these as LIGHT LEFT and LIGHT RIGHT, but crucially, these won't both correspond to lights.

    The LEFT light will be our reward stimulus. Depending on the implementation of conditioned headturn, it might be an actual, simple light that illuminates an otherwise invisible toy. Or the "light" might be a plugged-in device or electronic toy that will begin to move when receiving power - BITTSy doesn't know the difference!

    The RIGHT light will refer to an empty channel on the dimmer pack, with either nothing plugged in or something plugged in that will remain powered off regardless of the dimmer pack command (i.e. by having the power switch on the device be in the off position). We need this empty channel so that trials can always be structured the same way, whether they are trials with a potential to show the reward or not.
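
    Reproduced from the full protocol listing at the end of this page, these definitions are:

    SIDES ARE {CENTER, LEFT, RIGHT}
    LIGHTS ARE {LEFT, RIGHT}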

    The above are our starting definitions. However, if you have more lights connected in your system (e.g. for headturn preference procedure) and will be leaving them all connected during testing, you may need to define more lights so that you can identify which channel has the reward stimulus and which is empty.

    hashtag
    Tags

    Now we'll define our tags that reference files. We have only audio files for this type of study. One is the background noise that will play continuously throughout. Then we have our signal of interest - the file that contains the voice calling the participant's name. We'll call this audio file changestim. We'll need another tag to be the controlstim so that regardless of the trial type, we've got a command to play an audio file. But in this case, the control stimulus is actually a completely silent file, since we're interested in comparing the presentation of the name to a case where the background noise simply continues with nothing else added.
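
    From the full protocol listing:

    LET noise = "C:\Users\ldev\Desktop\BITTSy\ConditionedHeadturn_signaldetection\9-voices.wav"
    LET changestim = "C:\Users\ldev\Desktop\BITTSy\ConditionedHeadturn_signaldetection\Alexis.wav"
    LET controlstim = "C:\Users\ldev\Desktop\BITTSy\ConditionedHeadturn_signaldetection\silence.wav"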

    hashtag
    Groups

    We'll use groups to set up a key aspect of this paradigm: the "change stimulus" or signal of interest (the voice calling the participant's name) gets paired with the reward stimulus (plugged into the LEFT light channel) and the "control stimulus" (the absence of the voice - a silent audio file) does not.

    A "change trial" will use both the "change" stimulus and the LEFT light. So we'll define a group that pairs them, called changetrial.

    circle-info

    If you wanted change trials to contain one of several possible audio files (e.g. with different tokens/words/voices), your changetrialstim group would contain multiple tags for the different audio files. In later steps of the protocol when the audio files are played, you could randomize which audio file is selected from the group. However, it's important to set up your selection criteria in a way that ensures that there will always be a valid selection that can be made for the audio file, regardless of whether the current trial is a change trial or control trial.

    Similarly, we'll make a controltrial group that pairs the "control" stimulus (silence) with the empty RIGHT channel, so that this trial type can have the same structure within the protocol, but not actually turn on a device.
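
    Again from the full protocol listing:

    LET controltrialstim = {controlstim}
    LET controltrialside = {RIGHT}
    LET controltrial = {controltrialstim, controltrialside}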

    circle-info

    You may wonder why we define the first two groups in each set rather than directly defining LET changetrial = {changestim, LEFT}. At time of writing, BITTSy throws a validation error when a group directly contains both tags and sides. However, creating a "dummy" group that contains each and making changetrial be a group of other groups sidesteps the validation error and allows you to run the protocol.

    Now that we have structures representing change and control trials, we can define the groups that we'll use to present these trial types within phases - training phase, conditioning phase, and test phase.

    In this study, we can choose not to include control trials in the training phase - since the control trials are the absence of the voice and absence of the reward stimulus, the gaps between change trials essentially serve to reinforce that the reward does not occur unless the change stimulus is present. A study focusing on discrimination between two signals, where one is rewarded and the other is not, would likely have control trials during training. But it is not necessary here, so the training group can contain only changetrial.

    The conditioning phase will also only contain change trials - the goal of this phase, after the initial exposure to the paired signal and reward in the training phase, will be to ensure that the participant predicts the reward after hearing the stimulus, and makes an anticipatory turn toward the location of the reward.

    The test phase will contain both change trials and control trials. Here, we'll have them occur with equal probability, but we could weight the relative probabilities by adding more change trials or more control trials to the test group.
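
    From the full protocol listing:

    LET training = {changetrial}
    LET conditioning = {changetrial}
    LET test = {changetrial, controltrial}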

    hashtag
    Experiment settings

    There are no special experiment settings required for this protocol. However, if we planned on using the reporting module to analyze participant logs, it is helpful to make sure that the key that is used to mark the participant turning toward the signal matches the key assigned to the LEFT side (where the signal will be presented). That way, the turn is logged as a look toward that stimulus rather than just a timestamped keypress, and we can easily summarize which trials had turns and the time it took before the participant turned via a standard report of looking time by trial. This protocol is configured to use L, the default key for LEFT, so it doesn't require a key assignment definition, but if you wished to use a key other than L, you could include one in this section.

    hashtag
    STEPS for execution

    hashtag
    Training

    First in this study, we'll start playing the background noise file that will play on loop throughout the entire study. Then we'll want to wait until the experimenter judges that the infant is ready for a trial to start. (Typically in conditioned headturn studies, a second experimenter sits in front of the infant to engage their attention with some small toys, and the experimenter running the protocol would wait until the infant was attending to the toys, relaxed and unfussy.) We'll have the main experimenter press C when the infant is ready.

    circle-info

    The starting definitions we used didn't specify a CENTER audio channel (i.e. for use with a speaker array), so here we have the default behavior of CENTER, where it's exactly the same as STEREO - played simultaneously from the left and right channels. This means that when we later have trial audio play from LEFT, BITTSy will play it simultaneously from one of the speakers that is already playing the noise file (while the speaker on the right channel continues with only the noise). Having the noise play from CENTER allows us to turn off the trial audio on LEFT without interrupting the noise at all. If you wanted the noise to come from only the same speaker as the trial audio, while keeping the ability to turn off trial audio without interrupting the noise, you could set up your noise audio file to have information only on the left channel and silence on the right channel, and still instruct BITTSy to play it from CENTER instead.

    Next we'll need to pick the trial type from the training group so that we can subsequently use its components. But there's only actually one trial type in this group, so it doesn't matter how we select it, so long as we allow for repeats.

    The dynamic tag trial_tr will now refer to that one trial type group, changetrial, which itself has two tags inside - the first referring to the audio file that will be played, and the second referring to the LIGHT channel that will be used during those trials - for change trials, the location of the reward stimulus. We can use selection restrictions to ensure we pick first the audio file group, then the reward location group, to access these components and later use them to display stimuli.

    Recall that these subgroups were themselves groups that contained one tag/side each in order to avoid making a group that directly contained both a regular tag and a side name. So we have one more layer of selection to get to our final tags.
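
    Here is the whole selection sequence from STEP 3 of the full protocol listing - per the description above, the {with max 0 repeats in succession} restriction is what moves the second FIRST selection on to the reward location group:

    STEP 3
    LET trial_tr = (FROM training RANDOM)
    LET audiostim = (FROM trial_tr FIRST {with max 0 repeats in succession})
    LET reward = (FROM trial_tr FIRST {with max 0 repeats in succession})
    LET audiostim_tr = (FROM audiostim FIRST)
    LET reward_tr = (FROM reward FIRST)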

    Now we can begin to present the change trial audio. We'll add a brief delay after it starts so that the reward doesn't turn on simultaneously with the audio signal, but rather the start of the audio indicates that the reward will appear soon.

    circle-info

    Audio set to LOOP does not turn off until we use an AUDIO OFF command. The change trial audio will continue playing after this step, while we present the reward! If you wanted the audio and reward to be presented sequentially and not overlap, you could adjust the UNTIL TIME 1000 statement above to be the full length of time you want the audio to play, and move the AUDIO LEFT OFF command to before the LIGHT ON statement in STEP 4.

    Now it's time to turn on the reward. In the training phase, this isn't dependent on the behavior the participant exhibits yet - we'll simply always turn on the reward. We'll keep it on for a little while, then turn off both the reward and audio together. (Note that since the background noise was playing from AUDIO CENTER and the change trial audio plays from AUDIO LEFT, we can turn off the change stimulus with AUDIO LEFT OFF without interrupting the background noise.)

    From STEP 2 to here constitutes the repeating structure of the training phase - waiting for the experimenter to indicate the participant is ready for a trial, starting to present the change audio, turning on the reward, and turning both off. We can use a loop to repeat this structure until the end of the phase (in a real study, we would likely repeat it more times).

    hashtag
    Conditioning

    Now that the participant has some exposure to the audio signal and has had an opportunity to learn that the reward turns on when it plays, in the conditioning phase, we'll ensure that the participant has learned that the audio stimulus predicts the onset of the reward by requiring that they make anticipatory headturns toward the location of the reward stimulus whenever they hear the audio signal. We'll begin delaying the onset of the reward to give them more opportunity to do so, and require three trials in a row where the participant turns in anticipation before the reward stimulus turns on.

    Like the training trials, we need to wait for the experimenter to press C to indicate when we're ready for a trial to start.

    Next we'll select the trial audio (always the change stimulus in this phase, since that's the only trial type in the conditioning group) and reward location, and start playing the audio. Note that this selection structure is identical to the training phase.

    What's different in this phase is that we want to do different things depending on whether the participant makes an anticipatory headturn or not. If they don't make the headturn within the time limit before the reward stimulus comes on, we want to continue with more trials exactly like this one so they have more opportunity to learn the pattern. But if they've started to learn that the audio predicts the reward and make an anticipatory turn, we're ready to increase the delay in the next trial, and need to keep track of this successful trial as the start of a possible three in a row that would result in ending this phase.

    We can use an OR combination of terminating conditions with JUMP to create two separate "branches" for each of these possibilities. If there's a headturn before the time limit, we'll jump to STEP 14. If not, we'll keep going to the next step. (For a more detailed explanation of this type of usage of JUMP, see the page on JUMP.)
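
    From STEP 10 of the full protocol listing, the two branches look like this:

    UNTIL KEY L JUMP STEP 14
    UNTIL TIME 2000 JUMP STEP 11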

    circle-info

    The second UNTIL statement could also be written UNTIL TIME 2000 without the JUMP clause - JUMP isn't required when the result is simply to go to the next step. But when the protocol is shown broken up into parts like this, adding it makes it a little easier to track what's happening!

    From here until STEP 14, we define what we'll do when the participant fails to make an anticipatory headturn on the trial. It looks just like a trial in the training phase. We'll still turn on the reward, so that the participant continues to have opportunities to learn the relationship between the audio stimulus and reward.

    Afterward, we'll use a LOOP statement to run more trials. We should include some way to end the study in case the participant doesn't begin making anticipatory headturns. This will have us jump to the very end of the protocol (STEP 37), ending the study, if the participant fails to make an anticipatory headturn 10 times in a row.
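
    The loop step, from the full protocol listing:

    STEP 13
    LOOP STEP 9
    UNTIL 9 TIMES JUMP STEP 37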

    circle-info

    Using JUMP to move outside of this loop when the participant successfully makes an anticipatory headturn will reset the count on this UNTIL 9 TIMES loop statement, which is why it requires 10 misses in a row rather than 10 misses total. If you didn't want to reset the count and use a total number of misses instead, you could A) restructure your steps and JUMPs to not require moving to steps that are outside this "core loop", or B) use a loop terminating condition that depends on phase information, like TOTALLOOKAWAY (from the reward stimulus), or global information like whether a group is EMPTY, which are unaffected by moving outside the loop. If you preferred to continue presenting trials until the experimenter judged the participant was tired, you could use KEY instead.

    Recall that STEP 14 was where we jumped if the participant successfully made an anticipatory turn. This will also result in presenting the reward.

    But crucially, after this, instead of looping back, we'll keep going to STEP 17, which is only reached when the participant has a current streak of exactly one anticipatory turn in a row. This way, we can track their progress toward the three in a row needed to move to the next phase. (See the page on JUMP for further explanation of this type of usage.)

    STEP 17 will start another trial, which is largely identical to before - the only difference is the amount of delay between when the audio starts and when the reward turns on, during which they have an opportunity to make an anticipatory turn.

    STEP 18 begins what will happen when the participant had a current streak of one anticipatory turn, but missed this one, resetting their streak. So after presenting the reward as usual, it will jump back to STEP 9 to present the next trial, which is used whenever the current streak is zero.

    STEP 20, however, is where we go when the participant made another anticipatory turn - two in a row now. So after presenting the reward as usual, we'll move on to STEP 23, which presents another trial but is only reached when the current streak is two in a row. The delay where the participant has an opportunity to make an anticipatory turn increases again.

    STEP 24 is where we end up when the participant missed the time window for the anticipatory turn. So after presenting the reward, we'll specify a jump back to STEP 9 to reset their streak.

    But if we jumped to STEP 26 instead, we got here because the participant made another anticipatory turn. They already had two in a row to get to STEP 23, so the jump straight to STEP 26 means that they've now achieved three in a row - our criterion for ending the conditioning phase. So after presenting the reward, we'll end the phase and move straight on to the test phase.

    hashtag
    Test

    In the test phase, trials will sometimes contain the change audio, and sometimes not. The participant should only be rewarded for successfully detecting (making an anticipatory headturn) the change stimulus - this possible outcome is called a "hit." There are three more possible outcomes - the change audio is present and the participant does not make a headturn to show they detected it (a "miss"), the change audio is not present but the participant makes a headturn anyway (a "false alarm"), and the change audio is not present and the participant does not make a headturn (a "correct rejection").

    We'll start a trial just like before, waiting for the experimenter to indicate the participant is ready, then selecting from the highest-level group for this phase until we have the final tags we'll use to play audio and turn on a light channel.

    Since the test group contains both change trials and control trials, audiostim_test and reward_test end up being either the change audio and LEFT (the reward stimulus channel) respectively, or a silent audio file and RIGHT (the empty channel on the DMX box).
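
    The selection sequence, from STEP 31 of the full protocol listing:

    STEP 31
    Trial Start
    LET trial_test = (FROM test RANDOM)
    LET audiostim_5 = (FROM trial_test FIRST {with max 0 repeats in succession})
    LET reward_5 = (FROM trial_test FIRST {with max 0 repeats in succession})
    LET audiostim_test = (FROM audiostim_5 FIRST)
    LET reward_test = (FROM reward_5 FIRST)
    AUDIO LEFT audiostim_test LOOP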

    circle-info

    The test group was defined at the start of the protocol as having one copy of changetrial and one copy of controltrial, so (FROM test RANDOM) in the step above will select them with equal probability but not guarantee an equal number of each across the test phase. If we wanted to ensure a particular number of each, we could define test to have that number of copies - e.g., for four of each trial type, LET test = {changetrial, changetrial, changetrial, changetrial, controltrial, controltrial, controltrial, controltrial} - and then use (TAKE test RANDOM) instead.

    Like in the conditioning phase, we're interested in whether the participant makes an anticipatory headturn toward the reward (L), and we need to do different actions depending on their behavior. We'll have another OR combination of UNTIL statements that will allow us to jump to different sections depending on what happens.

    STEP 32 is what will execute when the participant makes a headturn within 5 seconds of when the trial audio (either the change audio or the beginning of a silent file) plays. The case where the participant makes a headturn actually represents two different possible trial outcomes, depending on which audio file was playing. If the trial audio was chosen to be the change audio, this represents a hit, and we should present the reward. But if it was the silent file, this is a false alarm, and we shouldn't present the reward.

    Because the audio and side name were paired originally, so that the change audio went with the side of the reward (LEFT) and the silent file went with the empty channel (RIGHT), we don't actually have to differentiate between these two outcomes in order to correctly reward or not reward the headturn. The tag reward_test contains the appropriate side name so that if the change audio was chosen, the LEFT channel reward will now turn on, but if it was a control trial, nothing will visibly happen, and we'll simply wait a few seconds before we can start another trial.

    STEP 34 is only reached when the participant does not make a headturn shortly after the start of a trial. This behavior could be a miss if the change audio signal was actually present, or it could be a correct rejection if it wasn't. Although when we ultimately analyze data we'll consider the first outcome as an incorrect response on the trial and the second as a correct one, we don't need to differentiate between these possible situations now, because neither results in the presentation of a reward. In both cases, we simply need to end the trial and turn off whatever audio had been playing.

    Regardless of which "branch" was taken to handle the outcome of the trial, we now end up at STEP 35 (there was a JUMP to here from the end of the step that handled hits and false alarms). We could have a larger delay specified here if we wanted to make sure there was a certain minimum time in between trials, and in the next step we'll loop back to STEP 30 to run more test trials. This one will run a total of 8 test trials.

    When all eight test trials have been executed and the loop is done, the study is over! We'll end the test phase and turn off the background noise that's been playing from CENTER since the beginning.

    See the resources page to download a copy of this protocol.

    SIDES ARE {CENTER, LEFT, RIGHT}
    LIGHTS ARE {LEFT, RIGHT}
    LET noise = "C:\Users\ldev\Desktop\BITTSy\ConditionedHeadturn_signaldetection\9-voices.wav"
    LET changestim = "C:\Users\ldev\Desktop\BITTSy\ConditionedHeadturn_signaldetection\Alexis.wav"
    LET controlstim = "C:\Users\ldev\Desktop\BITTSy\ConditionedHeadturn_signaldetection\silence.wav"
    LET changetrialstim = {changestim}
    LET changetrialside = {LEFT}
    LET changetrial = {changetrialstim, changetrialside}
    LET controltrialstim = {controlstim}
    LET controltrialside = {RIGHT}
    LET controltrial = {controltrialstim, controltrialside}
    LET training = {changetrial}
    LET conditioning = {changetrial}
    LET test = {changetrial, controltrial}
    STEP 1
    Phase Training Start
    AUDIO CENTER noise LOOP
    
    STEP 2
    UNTIL KEY C
    STEP 3
    LET trial_tr = (FROM training RANDOM)
    LET audiostim = (FROM trial_tr FIRST {with max 0 repeats in succession})
    LET reward = (FROM trial_tr FIRST {with max 0 repeats in succession})
    LET audiostim_tr = (FROM audiostim FIRST)
    LET reward_tr = (FROM reward FIRST)
    AUDIO LEFT audiostim_tr LOOP
    UNTIL TIME 1000
    STEP 4
    LIGHT reward_tr ON
    UNTIL TIME 5000
    
    STEP 5
    LIGHT reward_tr OFF
    AUDIO LEFT OFF
    STEP 6
    LOOP STEP 2
    UNTIL 3 TIMES
    
    STEP 7
    Phase End
    STEP 8
    Phase Conditioning Start
    
    STEP 9
    UNTIL KEY C
    STEP 10
    Trial Start
    LET trial_c2 = (FROM conditioning RANDOM)
    LET audiostim2 = (FROM trial_c2 FIRST {with max 0 repeats in succession})
    LET reward2 = (FROM trial_c2 FIRST {with max 0 repeats in succession})
    LET audiostim_c2 = (FROM audiostim2 FIRST)
    LET reward_c2 = (FROM reward2 FIRST)
    AUDIO LEFT audiostim_c2 LOOP
    UNTIL KEY L JUMP STEP 14
    UNTIL TIME 2000 JUMP STEP 11
    STEP 11
    LIGHT reward_c2 ON
    UNTIL TIME 5000
    
    STEP 12
    Trial End
    LIGHT reward_c2 OFF
    AUDIO LEFT OFF
    STEP 13
    LOOP STEP 9
    UNTIL 9 TIMES JUMP STEP 37
    STEP 14
    LIGHT reward_c2 ON
    UNTIL TIME 5000
    
    STEP 15
    Trial End
    LIGHT reward_c2 OFF
    AUDIO LEFT OFF
    
    STEP 16
    UNTIL KEY C
    STEP 17
    Trial Start
    LET trial_c3 = (FROM conditioning RANDOM)
    LET audiostim3 = (FROM trial_c3 FIRST {with max 0 repeats in succession})
    LET reward3 = (FROM trial_c3 FIRST {with max 0 repeats in succession})
    LET audiostim_c3 = (FROM audiostim3 FIRST)
    LET reward_c3 = (FROM reward3 FIRST)
    AUDIO LEFT audiostim_c3 LOOP
    UNTIL KEY L JUMP STEP 20
    UNTIL TIME 3000 JUMP STEP 18
    STEP 18
    LIGHT reward_c3 ON
    UNTIL TIME 5000
    
    STEP 19
    Trial End
    LIGHT reward_c3 OFF
    AUDIO LEFT OFF
    UNTIL TIME 100 JUMP STEP 9
    STEP 20
    LIGHT reward_c3 ON
    UNTIL TIME 5000
    
    STEP 21
    Trial End
    LIGHT reward_c3 OFF
    AUDIO LEFT OFF
    
    STEP 22
    UNTIL KEY C
    
    STEP 23
    Trial Start
    LET trial_c4 = (FROM conditioning RANDOM)
    LET audiostim4 = (FROM trial_c4 FIRST {with max 0 repeats in succession})
    LET reward4 = (FROM trial_c4 FIRST {with max 0 repeats in succession})
    LET audiostim_c4 = (FROM audiostim4 FIRST)
    LET reward_c4 = (FROM reward4 FIRST)
    AUDIO LEFT audiostim_c4 LOOP
    UNTIL KEY L JUMP STEP 26
    UNTIL TIME 5000 JUMP STEP 24
    STEP 24
    LIGHT reward_c4 ON
    UNTIL TIME 5000
    
    STEP 25
    Trial End
    LIGHT reward_c4 OFF
    AUDIO LEFT OFF
    UNTIL TIME 100 JUMP STEP 9
    STEP 26
    LIGHT reward_c4 ON
    UNTIL TIME 5000
    
    STEP 27
    Trial End
    LIGHT reward_c4 OFF
    AUDIO LEFT OFF
    
    STEP 28
    Phase End
    STEP 29
    Phase Test Start
    
    STEP 30
    UNTIL KEY C
    
    STEP 31
    Trial Start
    LET trial_test = (FROM test RANDOM)
    LET audiostim_5 = (FROM trial_test FIRST {with max 0 repeats in succession})
    LET reward_5 = (FROM trial_test FIRST {with max 0 repeats in succession})
    LET audiostim_test = (FROM audiostim_5 FIRST)
    LET reward_test = (FROM reward_5 FIRST)
    AUDIO LEFT audiostim_test LOOP
    UNTIL TIME 5000 JUMP STEP 34
    UNTIL KEY L JUMP STEP 32
    # hit or false alarm
    STEP 32
    LIGHT reward_test ON
    UNTIL TIME 5000
    
    STEP 33
    Trial End
    LIGHT reward_test OFF
    AUDIO LEFT OFF
    UNTIL TIME 100 JUMP STEP 35
    # miss or correct rejection
    STEP 34
    Trial End
    AUDIO LEFT OFF
    UNTIL TIME 100
    STEP 35
    UNTIL TIME 100
    
    STEP 36
    LOOP STEP 30
    UNTIL 7 TIMES
    STEP 37
    Phase End
    AUDIO CENTER OFF