Preferential looking example - word recognition

About this protocol

This protocol is based on Newman & Morini (2017), cited below. This study focuses on toddlers' ability to recognize known words when another talker is speaking in the background. The target talker is always female, but the background talker is sometimes male and sometimes another female talker. When there is a large difference in the fundamental frequency of two voices (as there is for the target female talker and background male talker in this study), adults readily use this cue to help segregate the two speech signals and follow the target talker. When the fundamental frequencies of the two talkers are similar (as in the female background talker condition), the task is more difficult. This study asks whether toddlers likewise take advantage of a fundamental frequency cue when it is present, and demonstrate better word recognition when the background talker is male.

Newman, R. S., & Morini, G. (2017). Effect of the relationship between target and masker sex on infants' recognition of speech. Journal of the Acoustical Society of America, 141(2), EL164-EL169.

This study presents videos showing four pairs of objects. Each object pair is presented on 5 trials, for a total of 20 trials. One of the five trials for each pair is a baseline trial, in which the speaker talks generically about an object but does not name either one ("Look at that!"). In the other four trials per pair, the target talker names one of the objects. On half of the trials the target object is on the left side of the screen, and on half it is on the right. All trials, including baseline trials, are presented with the target speaker's audio mixed with either a male or female background talker.

Although only 20 test videos are presented to each participant, far more combinations of object pair, target object, target position, and background talker are possible. In this example, we'll demonstrate how to set up this study in several different ways. First, we'll set up the whole protocol with a pre-selected set of 20 videos that satisfy the balancing requirements above, and present all of them in a random order. Next, we'll talk about what we would change in that protocol to present them in a fixed order, and create multiple fixed-order study versions. Lastly, we'll talk about how to use selection from groups and pseudorandomization in BITTSy to select and present appropriate subsets of the possible stimuli for different participants.

Starting the protocol

The opening section of a protocol, before any STEPs are defined, typically takes up the bulk of your protocol file. This is especially true of preferential looking studies, which often have large sets of video stimuli to define as tags and arrange in groups. While this part may take a while, once it's done, you're almost finished!

Tip for setting up tags and groups in your own studies: copy+paste and find+replace tools in text editors like Notepad and Notepad++ are your friends! When adapting an existing protocol to reference a different set of stimuli, use find+replace-all to change the file path (everything up to the file name) in ALL of your stimulus tags at once to point to your new study folder. When adding tags for additional stimulus files, copy+paste an existing tag definition, then fix the tag name and file name to match the new file.
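For example, to re-point every tag in this protocol at a new stimulus folder, a single find+replace-all would do it (the new folder name below is just an illustration):

Find: C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\
Replace with: C:\Users\ldev\Desktop\BITTSy\MyNewStudy\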

Starting definitions

The first lines in any protocol are your starting definitions. Here, we will only use one central TV display. We'll name it CENTER.

SIDES ARE {CENTER}
DISPLAYS ARE {CENTER}

Tags

Before creating this protocol, we pre-selected 20 videos that fit the balancing requirements of the study, such as having an equal number of trials with the female background talker and with the male background talker. We happen to have more possible videos for this study, but a study with fewer factors or more trials - one that shows all of its stimuli to every participant - could be constructed exactly like this. (Later, we'll cover how to select stimuli within the protocol itself.)

Here are the definitions for our 20 trial video tags. Each is named by 1) which object is on the left, 2) which object is on the right, 3) m or f for the background talker, and 4) the target word spoken by the target talker ("generic" for baseline videos, in which the target speaker says "Look at that!").

LET truck_ball_f_generic = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\truck_ball_f_generic.mp4"
LET horse_bird_m_generic = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\horse_bird_m_generic.mp4"
LET cat_dog_m_generic = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\cat_dog_m_generic.mp4"
LET blocks_keys_f_generic = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\blocks_keys_f_generic.mp4"

LET truck_ball_f_BALL = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\truck_ball_f_BALL.mp4"
LET truck_ball_f_TRUCK = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\truck_ball_f_TRUCK.mp4"
LET truck_ball_m_BALL = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\truck_ball_m_BALL.mp4"
LET truck_ball_m_TRUCK = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\truck_ball_m_TRUCK.mp4"
LET blocks_keys_f_BLOCKS = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\blocks_keys_f_BLOCKS.mp4"
LET blocks_keys_f_KEYS = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\blocks_keys_f_KEYS.mp4"
LET blocks_keys_m_BLOCKS = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\blocks_keys_m_BLOCKS.mp4"
LET blocks_keys_m_KEYS = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\blocks_keys_m_KEYS.mp4"
LET cat_dog_f_CAT = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\cat_dog_f_CAT.mp4"
LET cat_dog_f_DOG = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\cat_dog_f_DOG.mp4"
LET cat_dog_m_CAT = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\cat_dog_m_CAT.mp4"
LET cat_dog_m_DOG = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\cat_dog_m_DOG.mp4"
LET horse_bird_f_BIRD = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\horse_bird_f_BIRD.mp4"
LET horse_bird_f_HORSE = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\horse_bird_f_HORSE.mp4"
LET horse_bird_m_BIRD = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\horse_bird_m_BIRD.mp4"
LET horse_bird_m_HORSE = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\horse_bird_m_HORSE.mp4"

We have one more tag to define, which is the attention-getter video we'll present before starting each trial.

LET attentiongetter = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\baby.mp4"

Groups

We have just one group to define for this protocol. It will contain our 20 trial videos, so that we can later randomly select these videos from the group to present.

LET trial_videos = {blocks_keys_m_KEYS, horse_bird_m_HORSE, cat_dog_f_CAT, horse_bird_m_BIRD, horse_bird_f_HORSE, truck_ball_m_TRUCK, blocks_keys_f_generic, cat_dog_f_DOG, blocks_keys_f_KEYS, blocks_keys_m_BLOCKS, truck_ball_m_BALL, truck_ball_f_generic, cat_dog_m_generic, cat_dog_m_CAT, truck_ball_f_TRUCK, horse_bird_f_BIRD, horse_bird_m_generic, truck_ball_f_BALL, blocks_keys_f_BLOCKS, cat_dog_m_DOG}

Optional experiment settings

After defining tags and groups, we would define optional experiment settings if we had any to include. Most of these are unimportant for preferential looking studies. One that is important is the background color. BITTSy covers the screen in either black or white whenever no visual stimuli are being presented on the display. In preferential looking, we often present video stimuli back-to-back, but there can sometimes be perceptible gaps between one video ending and the next starting. If your coding of trial timing depends on cues from the appearance of the screen, or from the level of light cast from the screen onto the participant's face, you will want to ensure that the background color defined here matches the background of your inter-trial attention-getters, rather than your trial videos, so that the background color being displayed is not mistaken for part of a trial.

STEPs for execution

First, we will begin a phase. In a study like this that has only one phase, this is completely optional - but it's a good habit to include them.

STEP 1
Phase Test Start

The next thing we want to do is display our attention-getter video before we show a trial. We'll want to come back to this part of the protocol again later (in a loop) to show more attention-getter videos. We can't put the attention-getter in the same step as the phase start flag, because looping back to that step would also repeatedly restart the phase - which doesn't make sense.

So we'll start a new STEP and display the attention-getter with a VIDEO action statement. The short attention-getter clip will be played repeatedly until the experimenter decides the participant is ready for a trial - by pressing C.

STEP 2
VIDEO CENTER attentiongetter LOOP
UNTIL KEY C

When you have an UNTIL statement in a STEP, it must always be the last line. So we'll start a new STEP that will happen as soon as the experimenter presses the key. In it, we'll just stop the video.

STEP 3
VIDEO CENTER OFF

Next, we need to start a trial.

STEP 4
Trial Start

Our lab prefers to consistently put trial start flags in a STEP all by themselves, just so it is easier to visually scan a protocol file and find where a trial is defined. But this is not necessary. This trial start flag could equivalently be the last line of the previous STEP or in the next STEP preceding the line that plays the video.
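For illustration, the trial start flag could equivalently be folded into the end of STEP 3, like this (the STEPs that follow would then be renumbered):

STEP 3
VIDEO CENTER OFF
Trial Start

Either way, the trial begins at the same moment.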

Our trials need 1) a tag of a video to display, 2) a command to play the video, and 3) a step terminating condition that tells us when to move on. We define the dynamic tag vid that will hold whatever tag we select, which we'll then play in our action statement. We'll use the UNTIL FINISHED step terminating condition to not move on to the next step until the video has played through.

STEP 5
LET vid = (TAKE trial_videos RANDOM)
VIDEO CENTER vid ONCE
UNTIL FINISHED

Now the trial ends. This has to be a new STEP, because UNTIL statements are always the last line of the STEP that contains them - but this STEP will execute immediately once the terminating condition (the video ending) is met.

STEP 6
Trial End
VIDEO CENTER OFF

Even though we know the video has finished playing by the start of this STEP, we have still included a statement to explicitly turn off the video now. When videos finish playing, the processes that BITTSy uses to play them stay active. A "garbage collector" will take care of this so that finished videos stop taking up resources, but this is not immediate. Explicitly turning videos OFF frees up these resources right away. It is not required, but it ensures the best performance, particularly if your computer is under-powered or a lot of programs are running in the background.

However, if you do include OFF commands in this type of case, you should always turn off trial stimuli after the Trial End command. This is not crucial in this study, but for studies that are live-coded, in-progress looks are logged when the Trial End line is executed. If the stimulus is explicitly turned OFF before the end of the trial, BITTSy may not be able to associate the in-progress look with the stimulus that was removed.

We've now defined the basic structure of our experiment - attention-getter, then trial. For the rest of the trials, we can use a loop. We'll jump back to STEP 2, where the attention-getter started, go back through a whole trial, and repeat - until the loop has executed a total of 19 times, giving us 20 trials overall.

STEP 7
LOOP STEP 2
UNTIL 19 TIMES

We could have combined STEPs 6 & 7. As with trial starts, our lab likes to put loops in a STEP by themselves to make them more visually obvious, but it doesn't matter whether you combine these or break them into two STEPs. All that BITTSy requires is that a loop and its terminating condition are the final lines of the STEP that contains them.
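As a sketch, the combined version would look like this:

STEP 6
Trial End
VIDEO CENTER OFF
LOOP STEP 2
UNTIL 19 TIMES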

Lastly, since we defined a phase start, we'll need to end it.

STEP 8
Phase End

Alternate options for stimulus selection and randomization

What if you don't want trial videos to be presented totally randomly? Or what if you don't want to pre-define a set of videos to present, but rather pull a different set of videos for each participant? Below are some examples of how you might set up your protocol differently for different levels of control over stimulus selection and presentation order.

Fixed orders

You might wish to define a fixed trial order, where every time the protocol is run, participants see the same stimuli in the same order. This requires minimal changes to the example protocol in the previous section.

Tags

Making a fixed-order version of this study would also involve pre-selecting the 20 trial videos to be shown. You can define just these videos in your tags section, or you can define all of them - it doesn't matter if some go unused in your experiment. Parsing your protocol will take a little longer if extra tags are defined, but parsing is typically done while setting up for a study rather than when the participant is ready, so this does not present any issues for running study sessions.

Groups

Crucially for a fixed-order protocol, you will define your trial_videos group to list tags in the exact order in which you want them to appear in the study.
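For example, reusing the 20 tags defined above, a fixed-order group that rotates through the object pairs might look like this (the particular order below is just an illustration):

LET trial_videos = {truck_ball_f_generic, cat_dog_m_CAT, blocks_keys_f_KEYS, horse_bird_m_HORSE, truck_ball_m_BALL, cat_dog_m_generic, blocks_keys_m_BLOCKS, horse_bird_f_BIRD, truck_ball_f_TRUCK, cat_dog_f_DOG, blocks_keys_f_generic, horse_bird_m_BIRD, truck_ball_m_TRUCK, cat_dog_f_CAT, blocks_keys_m_KEYS, horse_bird_m_generic, truck_ball_f_BALL, cat_dog_m_DOG, blocks_keys_f_BLOCKS, horse_bird_f_HORSE}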

STEPs

Later, when you select a tag from the trial_videos group to present on a particular trial, you'll select by FIRST rather than RANDOM.

STEP 5
LET vid = (TAKE trial_videos FIRST)
VIDEO CENTER vid ONCE
UNTIL FINISHED

Because you're using TAKE to choose without replacement, each time you select a tag from trial_videos, that same tag can't be selected again. So each time, the tag that is the FIRST tag in the list that is still available for selection will be the one you want - the next tag in line from the trial order you defined when making the trial_videos group.

All of the rest of the protocol would be the same!

If you wished to define multiple fixed-order versions of the study, you could simply save additional copies of the protocol file in which you change the ordering of tags in the definition of trial_videos, or swap in different tags that you have defined. This would be the only change necessary to make the additional versions.

Pseudorandom orders

You might not want a pre-defined set of 20 trial videos at all, given that with the different stimulus types and conditions in our study, we actually have 48 possible videos. You might also want to pseudorandomize when trial types are selected - for example, participants may get bored more quickly if they see the same object pair for several trials in a row, so we might want to keep the same pair from being randomly selected too many times back-to-back.

Below, we'll walk through one way that you could define the protocol to select trial videos from the entire set, in a manner that satisfies particular balancing requirements and presents trials in a pseudorandom order. This is a significantly more complicated case than the previous versions of this study - not in terms of running your study, or writing the bulk of your BITTSy protocol, but specifically in planning the structure of the groups in your protocol, upon which stimulus selection will critically rely. Before beginning to create such a protocol, it is useful to take time to think about the layers of selection you will do - if it can be drawn as a selection tree structure (like the example here), with selections between branches all at equal probability, it can be implemented in BITTSy.
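For this study, the selection tree we're about to build looks like this (one object pair's branch expanded; the other three have the same shape):

trial_videos
  ball-truck_ALL
    ball-truck_baseline (4 baseline video tags)
    ball-truck_f_right (2 video tags)
    ball-truck_f_left (2 video tags)
    ball-truck_m_right (2 video tags)
    ball-truck_m_left (2 video tags)
  bird-horse_ALL
  blocks-keys_ALL
  cat-dog_ALL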

Tags

We'll be defining all the possible video files and tags for this version of the study, because we won't specify in advance which ones the participant will see. First, our baseline trials. We have 4 object pairs, 2 left-right arrangements of each pair, and 2 background talkers, for a total of 16 possible baseline videos.

LET ball_truck_f_generic = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\ball_truck_f_generic.mp4"
LET ball_truck_m_generic = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\ball_truck_m_generic.mp4"
LET truck_ball_f_generic = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\truck_ball_f_generic.mp4"
LET truck_ball_m_generic = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\truck_ball_m_generic.mp4"
LET horse_bird_f_generic = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\horse_bird_f_generic.mp4"
LET horse_bird_m_generic = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\horse_bird_m_generic.mp4"
LET bird_horse_f_generic = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\bird_horse_f_generic.mp4"
LET bird_horse_m_generic = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\bird_horse_m_generic.mp4"
LET cat_dog_f_generic = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\cat_dog_f_generic.mp4"
LET cat_dog_m_generic = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\cat_dog_m_generic.mp4"
LET dog_cat_f_generic = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\dog_cat_f_generic.mp4"
LET dog_cat_m_generic = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\dog_cat_m_generic.mp4"
LET blocks_keys_f_generic = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\blocks_keys_f_generic.mp4"
LET blocks_keys_m_generic = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\blocks_keys_m_generic.mp4"
LET keys_blocks_f_generic = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\keys_blocks_f_generic.mp4"
LET keys_blocks_m_generic = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\keys_blocks_m_generic.mp4"

Now our trial videos. We have even more of these, because now the target word said by the main talker could be either of the two objects - a total of 32 videos & tags.

LET ball_truck_f_BALL = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\ball_truck_f_BALL.mp4"
LET ball_truck_f_TRUCK = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\ball_truck_f_TRUCK.mp4"
LET ball_truck_m_BALL = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\ball_truck_m_BALL.mp4"
LET ball_truck_m_TRUCK = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\ball_truck_m_TRUCK.mp4"
LET truck_ball_f_BALL = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\truck_ball_f_BALL.mp4"
LET truck_ball_f_TRUCK = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\truck_ball_f_TRUCK.mp4"
LET truck_ball_m_BALL = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\truck_ball_m_BALL.mp4"
LET truck_ball_m_TRUCK = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\truck_ball_m_TRUCK.mp4"
LET blocks_keys_f_BLOCKS = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\blocks_keys_f_BLOCKS.mp4"
LET blocks_keys_f_KEYS = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\blocks_keys_f_KEYS.mp4"
LET blocks_keys_m_BLOCKS = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\blocks_keys_m_BLOCKS.mp4"
LET blocks_keys_m_KEYS = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\blocks_keys_m_KEYS.mp4"
LET keys_blocks_f_BLOCKS = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\keys_blocks_f_BLOCKS.mp4"
LET keys_blocks_f_KEYS = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\keys_blocks_f_KEYS.mp4"
LET keys_blocks_m_BLOCKS = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\keys_blocks_m_BLOCKS.mp4"
LET keys_blocks_m_KEYS = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\keys_blocks_m_KEYS.mp4"
LET cat_dog_f_CAT = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\cat_dog_f_CAT.mp4"
LET cat_dog_f_DOG = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\cat_dog_f_DOG.mp4"
LET cat_dog_m_CAT = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\cat_dog_m_CAT.mp4"
LET cat_dog_m_DOG = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\cat_dog_m_DOG.mp4"
LET dog_cat_f_CAT = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\dog_cat_f_CAT.mp4"
LET dog_cat_f_DOG = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\dog_cat_f_DOG.mp4"
LET dog_cat_m_CAT = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\dog_cat_m_CAT.mp4"
LET dog_cat_m_DOG = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\dog_cat_m_DOG.mp4"
LET bird_horse_f_BIRD = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\bird_horse_f_BIRD.mp4"
LET bird_horse_f_HORSE = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\bird_horse_f_HORSE.mp4"
LET bird_horse_m_BIRD = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\bird_horse_m_BIRD.mp4"
LET bird_horse_m_HORSE = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\bird_horse_m_HORSE.mp4"
LET horse_bird_f_BIRD = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\horse_bird_f_BIRD.mp4"
LET horse_bird_f_HORSE = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\horse_bird_f_HORSE.mp4"
LET horse_bird_m_BIRD = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\horse_bird_m_BIRD.mp4"
LET horse_bird_m_HORSE = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\horse_bird_m_HORSE.mp4"

Repetitive tag definitions like these do not need to be typed out by hand! We used spreadsheet functions like clicking-and-dragging to copy cells, JOIN, and CONCATENATE to quickly build these lines in Google Sheets, then copy-pasted them into our protocol file. See here for an example.
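As a minimal sketch of the idea (this particular formula is our illustration, not the linked example): if column A of a sheet holds tag names like truck_ball_f_BALL and your file names match your tag names, a Google Sheets formula along these lines builds the whole LET line, using CHAR(34) to produce the double quotes:

="LET " & A1 & " = " & CHAR(34) & "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\" & A1 & ".mp4" & CHAR(34)

Drag the formula down the column and every tag definition is generated at once.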

The last tag to define is our attention-getter video.

LET attentiongetter = "C:\Users\ldev\Desktop\BITTSy\ToddlerBGGender\baby.mp4"

Groups

Defining the groups in this protocol is the really critical step for allowing us to control how stimuli are later balanced - how many trials of each object pair we show, how many times the named object is on the left or the right, and how many trials have a male background talker vs. a female one.

We will construct a nested structure of groups. It is often helpful to work backwards and plan your group structure from the top down - that is, from the highest-level group that you will select from first in your protocol, to the lowest-level group that directly contains the tags referencing your stimuli files. We'll end up with 3 levels of groups for this protocol.

Your highest-level group's components should be defined around the characteristic that you want the most control over for pseudorandomization. Here, we'll make that be our object pairs. Our highest-level group, trial_videos, will contain groups for each of the four object pairs.

LET trial_videos = {ball-truck_ALL, bird-horse_ALL, blocks-keys_ALL, cat-dog_ALL}

If your study has more than one factor that you crucially need to control with pseudorandomization, you may find that constructing multiple fixed-order protocols that follow your restrictions is the most practical solution.

Now let's define one of these groups for an object pair. We're going to have five trials per object pair. One of these has to be a baseline trial. For the remaining four, we'll stipulate that they are:

  1. a video with a female background talker where the object named by the target talker is on the right

  2. a video with a female background talker where the object named by the target talker is on the left

  3. a video with a male background talker where the object named by the target talker is on the right

  4. a video with a male background talker where the object named by the target talker is on the left

LET ball-truck_ALL = {ball-truck_baseline, ball-truck_f_right, ball-truck_f_left, ball-truck_m_right, ball-truck_m_left}

This example balances some factors across trials, but not others. For example, we do not balance how many times one object is named versus the other. We could just as easily balance this in place of left-right answers, and define those groups accordingly - a sketch follows the group definitions below. These decisions about what to keep balanced and what to leave random are the crux of setting up your group structure!

Now we'll define these component groups, which will contain our video tags. First, the baseline group. There were four baseline videos for each object pair.

LET ball-truck_baseline = {ball_truck_f_generic, ball_truck_m_generic, truck_ball_f_generic, truck_ball_m_generic}

For the other four groups, we define the two videos that match each trial type.

LET ball-truck_f_right = {ball_truck_f_TRUCK, truck_ball_f_BALL}
LET ball-truck_f_left = {ball_truck_f_BALL, truck_ball_f_TRUCK}
LET ball-truck_m_right = {ball_truck_m_TRUCK, truck_ball_m_BALL}
LET ball-truck_m_left = {ball_truck_m_BALL, truck_ball_m_TRUCK}
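
For comparison, here is a sketch of how these four groups might look if we balanced which object is named instead of the target's side (the group names here are our own invention, not part of this protocol):

LET ball-truck_f_BALLnamed = {ball_truck_f_BALL, truck_ball_f_BALL}
LET ball-truck_f_TRUCKnamed = {ball_truck_f_TRUCK, truck_ball_f_TRUCK}
LET ball-truck_m_BALLnamed = {ball_truck_m_BALL, truck_ball_m_BALL}
LET ball-truck_m_TRUCKnamed = {ball_truck_m_TRUCK, truck_ball_m_TRUCK}

With groups like these, the named object is balanced and the target's side is left to chance. For the rest of this example, we'll keep the side-balanced versions.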

We'll define the nested group structure for our other object pairs the same way.

Once you've defined one set, the rest is quick and easy: copy it over to a blank document, use find+replace to swap out the object names, and paste the new groups back into your protocol!

Because BITTSy will validate the protocol line by line, we can't actually define our groups in backwards order - we'll need the definitions of the lower-level groups to come before the higher-level groups that reference them. So we'll just swap them around. We end up with this.

LET ball-truck_baseline = {ball_truck_f_generic, ball_truck_m_generic, truck_ball_f_generic, truck_ball_m_generic}
LET ball-truck_f_right = {ball_truck_f_TRUCK, truck_ball_f_BALL}
LET ball-truck_f_left = {ball_truck_f_BALL, truck_ball_f_TRUCK}
LET ball-truck_m_right = {ball_truck_m_TRUCK, truck_ball_m_BALL}
LET ball-truck_m_left = {ball_truck_m_BALL, truck_ball_m_TRUCK}

LET bird-horse_baseline = {bird_horse_f_generic, bird_horse_m_generic, horse_bird_f_generic, horse_bird_m_generic}
LET bird-horse_f_right = {bird_horse_f_HORSE, horse_bird_f_BIRD}
LET bird-horse_f_left = {bird_horse_f_BIRD, horse_bird_f_HORSE}
LET bird-horse_m_right = {bird_horse_m_HORSE, horse_bird_m_BIRD}
LET bird-horse_m_left = {bird_horse_m_BIRD, horse_bird_m_HORSE}

LET blocks-keys_baseline = {blocks_keys_f_generic, blocks_keys_m_generic, keys_blocks_f_generic, keys_blocks_m_generic}
LET blocks-keys_f_right = {blocks_keys_f_KEYS, keys_blocks_f_BLOCKS}
LET blocks-keys_f_left = {blocks_keys_f_BLOCKS, keys_blocks_f_KEYS}
LET blocks-keys_m_right = {blocks_keys_m_KEYS, keys_blocks_m_BLOCKS}
LET blocks-keys_m_left = {blocks_keys_m_BLOCKS, keys_blocks_m_KEYS}

LET cat-dog_baseline = {cat_dog_f_generic, cat_dog_m_generic, dog_cat_f_generic, dog_cat_m_generic}
LET cat-dog_f_right = {cat_dog_f_DOG, dog_cat_f_CAT}
LET cat-dog_f_left = {cat_dog_f_CAT, dog_cat_f_DOG}
LET cat-dog_m_right = {cat_dog_m_DOG, dog_cat_m_CAT}
LET cat-dog_m_left = {cat_dog_m_CAT, dog_cat_m_DOG}

LET ball-truck_ALL = {ball-truck_baseline, ball-truck_f_right, ball-truck_f_left, ball-truck_m_right, ball-truck_m_left}
LET bird-horse_ALL = {bird-horse_baseline, bird-horse_f_right, bird-horse_f_left, bird-horse_m_right, bird-horse_m_left}
LET blocks-keys_ALL = {blocks-keys_baseline, blocks-keys_f_right, blocks-keys_f_left, blocks-keys_m_right, blocks-keys_m_left}
LET cat-dog_ALL = {cat-dog_baseline, cat-dog_f_right, cat-dog_f_left, cat-dog_m_right, cat-dog_m_left}

LET trial_videos = {ball-truck_ALL, bird-horse_ALL, blocks-keys_ALL, cat-dog_ALL}

STEPs

Now that we've defined our group structure, implementing the stimulus selection is straightforward. Our protocol starts off just like the original version, playing the attention-getter video before every trial.

STEP 1
Phase Test Start

STEP 2
VIDEO CENTER attentiongetter LOOP
UNTIL KEY C

STEP 3
VIDEO CENTER OFF

In our original setup with the pre-determined stimulus set, we then started a trial, picked a video, and displayed it. In this version, we have multiple selection steps, and BITTSy takes a (very, very small) bit of time to execute each one. Given that we have several, we might prefer to place them before our trial starts, just to ensure that there's never a gap between when BITTSy marks the beginning of a trial and when the video actually starts playing. (If you are not relying on BITTSy logs for trial timing, and instead use cues in your participant video, such as lighting changes in the room or an image of your display in a mirror, this doesn't matter at all!)

STEP 4
LET pair = (FROM trial_videos RANDOM {with max 4 repeats, with max 2 repeats in succession})
LET type = (TAKE pair RANDOM)
LET vid = (FROM type RANDOM)

Above, you can see how we select from each layer of our group structure, highest to lowest. First, we pick an object pair. We restrict how many times each pair can be selected using with max 4 repeats - a pair can then be drawn at most 5 times (the first selection plus 4 repeats), and since there are 20 trials across 4 pairs, this clause, combined with the number of times we'll loop over this section, ensures that we get exactly 5 trials of each object pair. We also choose to add the with max 2 repeats in succession clause, which ensures that we never show videos of the same pair more than 3 trials in a row.

Next, we pick a trial type - baseline, female background right target, female background left target, male background right target, or male background left target. We only want one of each to come up in the experiment for any given object pair, so we'll use TAKE to choose without replacement. The next time the same object pair's group is picked to be pair, subgroups chosen previously to be type will not be available for selection.

Lastly, we need to pick a particular video tag from type. We will do this randomly. It doesn't matter whether we define this selection with FROM or TAKE, because, per the previous selection where we chose type, we can't get this same subgroup again later. For example, if pair were cat-dog_ALL and type were cat-dog_m_right, vid would be either cat_dog_m_DOG or dog_cat_m_CAT.

Now that we have the trial video we want to display, we can run a trial.

STEP 5
Trial Start
VIDEO CENTER vid ONCE
UNTIL FINISHED

STEP 6
Trial End
VIDEO CENTER OFF

Just like in the original version of this protocol, we'll loop over the steps that define an attention-getter plus a trial, for a total of 20 trials. Then our experiment is done.

STEP 7
LOOP STEP 2
UNTIL 19 TIMES

STEP 8
Phase End

See the resources page for copies of the versions of this protocol.
