Preferential looking example - fast-mapping

About this protocol

This protocol is based on the Newman et al. (2020) study cited below, which tested toddlers' fast-mapping from noise-vocoded speech via a preferential looking paradigm with an initial training period on the word-object mappings. In a training phase, participants are taught the names for two objects, which appear alternately on the screen. Following training, both objects appear on-screen simultaneously with no accompanying audio, to assess baseline preferences and familiarize participants with the idea that the objects will now appear together. Subsequently, the objects appear together in these same positions across several test trials. Sometimes the speaker asks the child to look toward one object ("find the coopa!") and sometimes toward the other ("find the needoke!").

Newman, R. S., Morini, G., Shroads, E., & Chatterjee, M. (2020). Toddlers' fast-mapping from noise-vocoded speech. The Journal of the Acoustical Society of America, 147(4), 2432-2441.

This study was not originally run in BITTSy, and it used four fixed-order experiment files to control condition assignment and trial order rather than the randomization set up in this example protocol. However, this kind of multi-phase study, with multiple conditions and restrictions on stimulus presentation order in each phase, is a great example of a more complex preferential looking study (see this example for a simpler case). Below is a walk-through of how you could recreate its structure in BITTSy.

Starting the protocol

Starting definitions

As in any protocol, first come the starting definitions:
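The definition lines themselves aren't reproduced on this page. Since the protocol later presents videos with VIDEO CENTER, the one side must be named CENTER. Below is only a sketch, assuming BITTSy's usual brace-list definition syntax; the display name display1 is a placeholder, so check the starting definitions documentation for the exact form your setup needs.

```
SIDES ARE {CENTER}
DISPLAYS ARE {display1}
```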


In this protocol, we are only using a single display, and our audio is playing from stereo speakers. We therefore only name one display in the DISPLAYS definition, and leave out the definitions for LIGHTS and AUDIO. We only need to name one SIDE to match with the display.


Next, we're going to set up all the tags that reference our video stimuli files.

Our visual stimuli are two animated 3D models: one of a spiky ball on a pedestal (which we call "spike") and one that looks like an F in profile (which we call "fred"). In training trials, we'll present one of these two objects while an audio track identifies it as either the needoke or the coopa.

LET coopa_fred = "C:\Users\ldev\Desktop\BITTSy\CIWordLearning\coopa_fred.mp4"
LET coopa_spike = "C:\Users\ldev\Desktop\BITTSy\CIWordLearning\coopa_spike.mp4"
LET needoke_fred = "C:\Users\ldev\Desktop\BITTSy\CIWordLearning\needoke_fred.mp4"
LET needoke_spike = "C:\Users\ldev\Desktop\BITTSy\CIWordLearning\needoke_spike.mp4"

After the training phase, we'll have a baseline trial with both objects on-screen and no accompanying audio. We'll want to randomize which is on the left and which is on the right. In the test phase, to keep the task as easy as possible, we'll keep the object positions the same as in the baseline trial.

This baseline trial has "fred" on the left and "spike" on the right. (The order of the objects in our video filenames identifies their on-screen positions, left to right.)

LET silent_fred_spike = "C:\Users\ldev\Desktop\BITTSy\CIWordLearning\silent_fred_spike.mp4"

And this baseline trial can be presented with either of these two sets of test videos with that same object positioning, depending on how the objects were labeled in the training phase.

# Fred = Coopa, Spike = Needoke
LET coopa_fred_left = "C:\Users\ldev\Desktop\BITTSy\CIWordLearning\coopa_fred_spike.mp4"
LET needoke_spike_right = "C:\Users\ldev\Desktop\BITTSy\CIWordLearning\needoke_fred_spike.mp4"
# Fred = Needoke, Spike = Coopa
LET needoke_fred_left = "C:\Users\ldev\Desktop\BITTSy\CIWordLearning\needoke_fred_spike.mp4"
LET coopa_spike_right = "C:\Users\ldev\Desktop\BITTSy\CIWordLearning\coopa_fred_spike.mp4"

Comments (lines starting with a #) do not affect the execution of your study, and are helpful for leaving explanations of your tag naming conventions and study structure.

Here are the rest of our tags, for the opposite object positioning.

LET silent_spike_fred = "C:\Users\ldev\Desktop\BITTSy\CIWordLearning\silent_spike_fred.mp4"
# Spike = Coopa, Fred = Needoke
LET coopa_spike_left = "C:\Users\ldev\Desktop\BITTSy\CIWordLearning\coopa_spike_fred.mp4"
LET needoke_fred_right = "C:\Users\ldev\Desktop\BITTSy\CIWordLearning\needoke_spike_fred.mp4"
# Spike = Needoke, Fred = Coopa
LET needoke_spike_left = "C:\Users\ldev\Desktop\BITTSy\CIWordLearning\needoke_spike_fred.mp4"
LET coopa_fred_right = "C:\Users\ldev\Desktop\BITTSy\CIWordLearning\coopa_spike_fred.mp4"

Note that in our test video tag names, we've labeled which object is the target and which side is the correct answer. This makes it extremely convenient to determine these later, when looking at the reports from study sessions: the answer is right in the tag name! This is not necessary for this protocol, just nice. It does lead to some redundancy in files; for example, coopa_spike_left and coopa_fred_right are the same video file. Which tag we use to call up that video simply depends on which object was named coopa earlier in the study.

Lastly, we'll have an attention-getter video that plays in-between all the trials.

LET attentiongetter = "C:\Users\ldev\Desktop\BITTSy\CIWordLearning\baby.mp4"


Now, let's walk through the groups we'll define for this study. There are two possible ways to pair our two objects and two audio labels. Either coopa goes with the object "spike" and needoke goes with the object "fred," or vice versa. We will call these two training phase configurations Order 1 and Order 2, respectively. We will want to randomly assign participants to one of these two ways of pairing the labels and objects.

LET order1_train = {coopa_spike, needoke_fred}
LET order2_train = {coopa_fred, needoke_spike}

We're also going to randomly determine which sides the two objects will appear on in baseline and test trials. With this next layer of random assignment, we'll end up with four distinct orders. In the "A" orders, "fred" will be on the left, and in the "B" orders, "spike" is on the left.

LET order1A_baseline = {silent_fred_spike}
LET order1B_baseline = {silent_spike_fred}
LET order2A_baseline = {silent_fred_spike}
LET order2B_baseline = {silent_spike_fred}

Now for the test phase groups. In Order 1A, coopa goes with "spike" and needoke goes with "fred," and "fred" is on the left and "spike" on the right. These are the test video tags that fit that.

LET order1A_test = {coopa_spike_right, needoke_fred_left}

Similarly, we'll set up the test video groups for the other orders.

LET order1B_test = {coopa_spike_left, needoke_fred_right}
LET order2A_test = {coopa_fred_left, needoke_spike_right}
LET order2B_test = {coopa_fred_right, needoke_spike_left}

Now we're ready to define our groups of groups - each whole order, from training to test.

LET order1A = {order1_train, order1A_baseline, order1A_test}
LET order1B = {order1_train, order1B_baseline, order1B_test}
LET order2A = {order2_train, order2A_baseline, order2A_test}
LET order2B = {order2_train, order2B_baseline, order2B_test}

So that we can randomly pick one of these four orders at the start of the experiment, there will be one more layer to this group structure. We'll have a group that contains all of the orders.

LET orders = {order1A, order1B, order2A, order2B}

Optional experiment settings

At this point, we would define any optional experiment settings that we needed, but this protocol doesn't require any. For other preferential looking studies, the most notable setting is the background color that is visible when there are no stimuli on the screen, which you may want to change so that gaps between videos are not disruptive.

STEPS for execution

Now that the experiment is set up, we'll define what will actually happen when we click to run the protocol.

Training phase

Phase Train Start
LET order = (FROM orders RANDOM)
LET training_group = (TAKE order FIRST)
LET baseline_group = (TAKE order FIRST)
LET test_group = (TAKE order FIRST)

In the first STEP, we'll start the training phase. We'll want to delineate which trials are in the training phase and which are in the test phase so that we can easily separate them in reports from study sessions. Then, we choose the order the child will participate in (1A, 1B, 2A, or 2B) from our orders group. This resulting group contains three other groups that together make up the videos that can be in that order: its training phase videos, its baseline video, and its test videos. We defined them in this order, so we can use TAKE statements to choose them from first to last, without replacement, and assign them dynamic tag names that we can use to refer to them later. We've named these dynamic tags in a way that makes explicit that these are still groups. We'll need to select particular stimuli from these groups later, to display in their respective phases.

Now that we've picked an order, time to start displaying stimuli. First, an attention getter. This is played until the experimenter indicates that the child is looking at the screen and ready for a trial to start, by pressing the X key on the keyboard.

VIDEO CENTER attentiongetter LOOP
UNTIL KEY X


Now that the child is ready to see a trial, let's show one.

Trial Start

LET training_trial = (FROM training_group RANDOM {with max 0 repeats in succession})
VIDEO CENTER training_trial ONCE

Trial End

The group of training videos contains two tags. We'll choose one of them randomly. We use a randomization clause {with max 0 repeats in succession} to restrict which tags are chosen from this group after we make the first selection. This one means that we can never display the same tag in the group twice in a row. Because there are only two in the group, trials will alternate between them: children will either hear coopa, needoke, coopa, needoke... or needoke, coopa, needoke, coopa...

With this randomization clause, we have specified exactly what we want for the whole phase, right here. We can use a loop to run the rest of our trials.
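The loop statement itself isn't reproduced above. Based on the walkthrough below (jump back to STEP 2, with seven loops over the section), and assuming BITTSy's LOOP statement takes the form LOOP STEP x UNTIL y TIMES, it would be:

```
LOOP STEP 2 UNTIL 7 TIMES
```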


When we finish the first trial and reach this loop, we'll jump back to STEP 2, where the attention getter video started, and we'll continue, running another whole trial, until we reach STEP 7 again. This will repeat until the loop terminating condition is reached: when we've looped back over the section seven times. Along with the first execution of this section, before we started looping, we display a total of eight training trials. This is the entirety of the training phase.

Phase End

Test phase

Now we'll start the test phase. We'll still have the attention-getter appear in between every trial, and the first video we show will be our baseline video.

Phase Test Start

VIDEO CENTER attentiongetter LOOP
UNTIL KEY X


Trial Start

LET baseline_trial = (FROM baseline_group FIRST)
VIDEO CENTER baseline_trial ONCE

Trial End

Recall that the baseline_group for each order contains the baseline video we want as its only tag. We still have to make a selection from it and assign the dynamic tag baseline_trial in order to play the video. But with only one tag in the group, any selection statement will choose it, so there are many equally valid ways to write this line.
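For instance, because both a FROM and a TAKE statement will select that single tag, either of these lines (the first is the one used above) would retrieve the baseline video:

```
LET baseline_trial = (FROM baseline_group FIRST)
LET baseline_trial = (TAKE baseline_group FIRST)
```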

Next, the rest of our test videos, which we will again run in a loop.

VIDEO CENTER attentiongetter LOOP
UNTIL KEY X

Trial Start

LET test_trial = (FROM test_group RANDOM {with max 3 repeats})
VIDEO CENTER test_trial ONCE

Trial End
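As in the training phase, this section closes with a loop statement that isn't reproduced above. Assuming BITTSy's LOOP statement takes the form LOOP STEP x UNTIL y TIMES, it would look something like the line below; the step number here is a placeholder for whichever STEP starts the test-phase attention-getter in the full protocol.

```
LOOP STEP 14 UNTIL 7 TIMES
```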


Note that these steps are almost identical to steps 2-7, which made up the training phase. Parallel trial structures like these make protocol files easy to write: simply copy and paste the section of STEPs, then change step numbers, tags, and other flags as necessary.

In the test phase, we want to display eight total test trials. We achieve this when the loop in STEP 19 has run seven times. There are two tags available in the test_group for each order, and we want each to be displayed exactly four times. After each tag's first selection, it can have 3 repeat selections. We define this in the randomization clause in STEP 17. If we wanted to further restrict the trial order (for example, if we wanted to prevent a random order that showed the same video four times in a row, then the other one four times in a row) we could do so with a further randomization clause.
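For instance, reusing the succession clause from the training phase with a different limit would cap consecutive showings of the same video at three. This is only a sketch; check the randomization documentation for exactly how multiple clauses are combined in a single selection statement.

```
LET test_trial = (FROM test_group RANDOM {with max 3 repeats} {with max 2 repeats in succession})
```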

Once this loop is completed, the test phase is done, and so is our experiment!

Phase End

See the resources page for a copy of this full protocol.
