The Behavioral Infant and Toddler Testing System
The Behavioral Infant and Toddler Testing System (BITTSy) was developed at the University of Maryland College Park. It is designed to run classic infant and toddler testing paradigms such as headturn preference procedure, preferential looking procedure, visual fixation/habituation procedures, and conditioned headturn procedure, all within a single freely-available program, interfacing with standard off-the-shelf hardware to allow for standardization and consistency across testing sites and research groups.
In the following sections, you will find more information and guidance on using BITTSy for your own research.
The development of BITTSy was funded by NSF BCS1152109, "New Tools for New Questions: A Multi-Site Approach to Studying the Development of Selective Attention".
BITTSy (Behavioral Infant & Toddler Testing System) was designed to be a consistent, multi-paradigm infant experimental testing platform that can be easily set up across multiple sites in a uniform manner, establishing a standardized research tool that will enhance cross-site research collaborations. The testing platform can run several key infant testing protocols (Headturn Preference Procedure, Preferential Looking, and Visual Fixation/Habituation) using the same standard design.
Our intent was to create a single program that could run a wide range of infant paradigms in a comparable fashion, that used off-the-shelf equipment, which would make it cost-effective. By creating such a system, we hoped to be able to facilitate growth in our knowledge of early language development across diverse circumstances.
To date, most individual projects on language development have acquired data in a single laboratory. The ability to have a comparable experimental setup across labs will enable researchers across institutions to start to develop collaborative research approaches; this is particularly important for assessing low-incidence linguistic populations (where no single location would be able to recruit sufficient numbers of participants). BITTSy provides improved reliability and consistency across research sites by promoting multi-investigator and multi-site collaborations. Shared resources allow for collaborative testing of low-incidence populations or populations located in different geographical regions, such as comparisons between infants learning different languages.
The lack of an easily-accessible and cost-efficient system also deters new investigators from initiating research programs that rely on some of these testing methods, particularly HPP for which there are no off-the-shelf systems available. While some testing paradigms do have standardized versions available (e.g., Habit runs visual fixation procedures, although it does not run natively on modern operating systems), HPP does not, and HPP is particularly well-suited to testing young infants. Moreover, there is no single program available that runs a wide range of infant paradigms in a comparable fashion; BITTSy will provide the option of using multiple methods within a single test session.
Some infant testing procedures have a set time per trial (e.g., play this stimulus for 4 seconds, or play this sound file until it ends); for others, trial lengths depend on the child's looking behavior, requiring that the experimenter code behavior while the experiment is running. BITTSy allows individual stimuli to either play for a set amount of time, or to play until an event occurs (such as the child looking away), or for a combination of these (e.g., until a child looks away for X seconds, or until the file ends, whichever comes first). It also allows different trials to continue to occur for a set amount of time, or until a looking criterion is reached (e.g., in habituation-based studies, or in training studies). Thus, an experiment might be set up to continue presenting the stimulus within a given trial until the child looks away for 2 seconds, but then to continue playing trials until some cumulative amount of looking has occurred. All of these timing constraints can be set up as part of the protocol file.
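As a forward-looking sketch (the angle-bracketed phrases and the UNTIL keyword are placeholders for the actual action statements and terminating conditions introduced later in this manual), the ending rule for such a trial might read:

<play the selected audio file>
UNTIL <a look away reaches 2000 ms> OR <the audio file ends>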
BITTSy runs a number of standard infant/toddler testing paradigms, such as the Headturn Preference Procedure, Preferential Looking Paradigm, and Visual Fixation Procedure. It allows researchers to use either lights or videos as attention-getters and to present stimuli on any given trial that are either audio-only, visual only, or audio-visual. Experimenters can fully determine which stimuli to present on which trials, whether trials are repeated, and whether stimuli are presented in a fixed order, randomized within blocks, or fully randomized. They can set the relative timing of all presentation events, and set up multiple phases within an experiment. Experimenters can specify whether individual stimuli, trials, and experimental phases continue for fixed amounts of time or continue until the child reaches a particular looking/listening criterion. Investigators can code infant looking during testing, or off-line from recordings.
BITTSy creates a raw ("low-level") log file of all events that occur within a session. However, it also includes a separate analysis module to summarize the data in a more usable fashion based on standard analysis requirements. This dual file-function allows an experimenter to go back and reanalyze different aspects of the data at a later time, an option that is not available in most current infant systems.
In the following discussion, we refer to several standard infant paradigms by acronyms: the Headturn Preference Procedure (HPP), the Preferential Looking Paradigm (PLP), and the Visual Fixation Procedure (VFP).
All infant testing procedures have some means of attracting the infants' attention, usually via a visual attention getter.
One difference among methods is whether the visual stimuli are (1) flashing lights (as in classic HPP), (2) videos, or (3) static images presented on monitors, as used in PLP or VFP. BITTSy allows the use of either computer monitors or lights to present visual stimuli.
A second difference among methods is whether all the visual stimuli are in front of the infant, common in PLP, or whether there are also stimuli presented on the sides, as in HPP. BITTSy can control the use of 3 different presentation monitors: one in front, and two on the sides of a test room (plus the monitor for the Experimenter). The monitors can be different sizes. For greatest flexibility, we would recommend a large central monitor (which can be used for PLP and VFP, as well as for the central attention getter for HPP with monitors), and two smaller monitors on the left and right sides of the room (for HPP with monitors). You can also have lights in all three locations.
Thus, BITTSy software allows the researcher to select:
flashing lights or monitors for each stimulus location
1 to 3 stimulus presentation locations (for front only, either or both side locations only, or front and side presentation)
See sections below for more information on these two types of visual stimulus presentation in BITTSy.
BITTSy allows images or videos to be displayed to 1-3 screens. In addition, one screen is reserved as the experimenter's control monitor.
Ensure that your computer system and graphics card can handle the desired number of displays for your experiments - see system requirements and recommendations page
We do not recommend using on-board sound from televisions with BITTSy (see here for more info). Therefore, when selecting displays, it is not necessary to consider sound quality.
We recommend having a larger central monitor for use in preferential looking paradigm studies/visual fixation studies. Any television screen is fine. Just pick one of the size and quality you want.
We used two TOGUARD WR943 10.1” screens for the side locations, but any smaller television or computer monitor would work as well.
Take note of the display output port types from your computer, and the input ports for your displays, and purchase connector cables or converters as necessary. Side monitors will require much longer input cables/extensions.
BITTSy was designed to allow testing using the Headturn Preference Procedure, a paradigm for which there are no other freely-available standardized test systems. Below we discuss the way HPP works, and some of its advantages.
Since the time it was first developed in the early 1980s (Colombo & Bundy, 1981; Fernald, 1985), the HPP has been a cornerstone method for studying early language development, and has been used in several of the most influential studies in infant speech perception (e.g., Fernald, 1985; Jusczyk & Aslin, 1995; Saffran, Aslin, & Newport, 1996, collectively cited more than 1000 times). The last substantial modifications to the method were made in the mid 1990s (Jusczyk & Aslin, 1995; Kemler Nelson et al., 1995) and current set-ups have lagged behind advances in software programming and hardware. In the standard variant, infants sit on their caregivers’ laps in a 3-walled test booth (see Fig. 3). Based on an observer's real-time coding of infant orienting behavior, a computer controls flashing lights, one on each wall of the booth, and 2 audio speakers, one under each of the side lights. Typically, trials begin with the light in front of the infant flashing, attracting the infant’s attention front and center. After the infant orients to the center, the light stops flashing and one of the two side lights begins to flash. When the infant turns towards that light, sounds begin to play from the speaker on that side; sounds continue until the sound file ends or until the infant looks away for a pre-determined amount of time (usually 2 seconds), ending the trial. By comparing infant orientation and attention to trials of different types, researchers can examine the types of sounds and sound patterns infants recognize, differentiate, or prefer. Although some studies have used HPP to test infants’ ability to discriminate between different stimuli (e.g., Nazzi et al., 2000), its most common use is to test either infants’ recognition of stimuli or their preference for specific components of their native language (see Johnson & Zamuner, 2010 for a recent review). When combined with a familiarization or training phase, this paradigm not only examines what children have already learned about their language(s), but also provides information as to what they are able to learn in the laboratory (Marcus et al., 1999; Saffran et al., 1996).
HPP uses infants’ natural tendency to maintain visual orientation towards an attended sound source.
The task puts minimal demands on the infants as compared to tasks that require children to understand verbal instructions and produce an overt reaching or verbal response.
This makes HPP ideal for separating differences in perceptual skills from differences in the comprehension or processing of verbal instructions.
HPP is easy to use across a wide age-range that includes key milestones of language development.
HPP has been used with typically-developing children ranging from 4 months (see, for example, Mandel et al., 1995; Seidl, Cristià, Bernard, & Onishi, 2009) through 2 years (Santelmann & Jusczyk, 1998; Tincoff, 2006), a perfect range for examining early language development.
A unique quality of HPP is that it provides opportunities to study developmental change through cross-sectional and longitudinal studies; use of the same task across ages helps to ensure that changes are the result of cognitive/perceptual development rather than differences in task difficulty.
HPP is extremely flexible.
HPP can be used with stimuli ranging in complexity from single syllables (e.g. McMurray & Aslin, 2005) to entire passages (Jusczyk & Aslin, 1995; Soderstrom & Morgan, 2007), and with both speech and nonspeech stimuli.
HPP has been used to examine long term memory for sound patterns presented in the lab (Houston & Jusczyk, 2003) and from naturalistic settings (Jusczyk & Hohne, 1997; Saffran, Loman, & Robertson, 2000), and to explore musical knowledge and learning (Krumhansl & Jusczyk, 1990; Saffran & Griepentrog, 2001; Saffran et al., 2000), rule learning (Marcus et al., 1999), auditory stream segregation (Newman, 2005), and knowledge of phonetic categories (McMurray & Aslin, 2005), phonotactics (Chambers, Onishi, & Fisher, 2003, in press; Jusczyk et al., 1993), and prosody (Seidl & Cristià, 2008).
HPP maximizes research resources, compared to other infant testing procedures.
Sessions last approximately 10-15 minutes, which is well-matched to the attention spans of infants in the target age range. Shorter sessions maximize the number of infants who are able to complete the experimental session and provide usable data. Low attrition rate is particularly important when testing populations for whom recruitment can be difficult and costly, such as bilingual or atypical infants.
Observer training can be effectively accomplished in a relatively short period of time, thus maximizing research assistants’ time for collecting and analyzing data.
The original use of the HPP paradigm focused on typically-developing monolingual infants in the 4-12 month age range, and there are still relatively few speech perception studies with bilingual infants using HPP (e.g., Bosch & Sebastián-Gallés, 2001; Bosch & Sebastián-Gallés, 2003; Vihman et al., 2007). Although most research using HPP focuses on infants under one year of age, recent work has demonstrated the use of HPP in typically-developing infants in the 20-24-month age range and in clinical populations with children as old as 48 months (cf. Soderstrom & Morgan, 2007; Van Heugten & Johnson, 2010). Research has also demonstrated links between early listening preferences shown through HPP and subsequent linguistic and cognitive development (Newman, Bernstein Ratner, Jusczyk, Jusczyk, & Dow, 2006). Researchers are also starting to use behavioral paradigms in concert with physiological measures, such as heart-rate and ERPs (Kooijman, Johnson, & Cutler, 2008; Panneton & Richards, under review), and have begun developing visual variants of HPP for testing American Sign Language (Baird, Seal, & DePaolis, 2010). All of these point to new ways to use this paradigm with other populations.
One limitation of previous research with HPP is that the majority of this research has used flashing lights as the primary attention-getting device, and flashing lights may not hold the attention of more cognitively-advanced participants, such as older children. Recent studies have begun showing images on flat screen monitors (Bosch & Sebastián-Gallés, 2001; Volkova, Trehub, & Schellenberg, 2006), an extension that lends itself well to testing older children in that it creates a more visually stimulating environment. BITTSy is designed to allow either the use of traditional lights, or the use of video monitors; if your test room is set up with both, you can switch between them just by what you list in your protocol file (that is, you do not need to set up your system to do one or the other; it can be set up to allow both.)
Until BITTSy was developed, there was no easily available, off-the-shelf testing system for HPP. HPP was dependent upon custom-designed hardware – the computer needed to be physically connected to the test booth as well as to an input panel which researchers use to indicate the direction in which infants orient. The input panel required specialized wiring, and response boxes needed to be created specifically for each research lab. Most researchers using HPP had to employ their own programmer and electrical engineering consultants to build the system. This limited the use of the approach to only individuals with substantial funding and technical resources and more importantly, it reduced the likelihood of new investigators implementing this methodology.
Since most researchers developed their own systems, there was no standardization, making cross-site collaborative research very difficult. This in turn limited comparative work across populations located in distinct geographical areas (e.g., work comparing infants with different language exposure). This was particularly problematic for research examining the role of different language combinations in infants raised bilingually, in that few locales have easy access to sufficiently large populations of infants raised in particular types of households.
Finally, at the time that BITTSy was created, many of the systems in use were becoming obsolete, as the timing boards and software they relied on were tied to specific operating systems.
In addition to HPP, BITTSy also allows testing using the Preferential Looking Paradigm (Golinkoff, Ma, Song, & Hirsh-Pasek, 2013). During this procedure, children are presented with pairs of images (or animated objects) on a screen. This can include familiar (e.g., a hand) or unfamiliar items (e.g., an object that the child would not know the label for). Sometimes the objects are presented one at a time (e.g., during familiarization and training), and sometimes they appear in pairs or in groups of 4 images. Images are accompanied by speech stimuli, which may include sentences either teaching children a new word, or instructing them to look at one of the objects on the screen. A digital camera positioned near the screen (typically above or below) records children’s eye movements. Videos of the test-sessions are then coded off-line on a frame-by-frame basis by trained research assistants, to measure participants' fixations and to calculate accuracy and reaction times across trials.
BITTSy works on the basis of protocol files. A protocol file is basically the instruction set for that particular experiment or experimental session. The protocol file instructs the program on what stimuli to play, when to play them, how many trials should occur, how they should be randomized, etc.
A protocol file can be made up of multiple phases, or sub-sections; each phase can be made up of multiple individual trials. Generally, you would want to create different phases if you were changing stimulus sets (e.g., a training phase vs. a testing phase), or changing paradigm (e.g., a habituation phase followed by a preferential looking phase). Each phase can essentially work akin to an entirely separate experiment.
The trials within a phase can be entirely distinct, if each trial is set up as a single, specified set of events, but typically trials share a similar structure with one another (e.g., 12 trials that each present one of a set of stimuli in random order, but have the same timing structure). BITTSy is entirely flexible in this regard, however. A protocol file could list every event individually, and be specific to a single participant - but it is far more efficient to create protocol files that select from a set of items and repeat types of trials where the same trial structure is reused.
The flexibility that BITTSy provides does come with a bit of a cost - you need to specify a lot more detail than would be needed in a computer program that restricted you to fewer options. As a result, protocol files can get rather long, and are essentially similar in many ways to programming. However, once a template has been created for an experiment/paradigm it is very easy to modify it by swapping out the file names, reordering them in groups, changing the restrictions on action clauses, etc. We thus provide some sample protocol files, and would recommend creating new ones by making adjustments to existing protocol files rather than by starting from scratch each time. Moreover, we have worked to make the protocol files, and the commands embedded within them, highly intuitive - even a novice student researcher should be able to read a protocol file and understand what is happening at each point in the experiment.
To create and edit protocol files, we recommend using Notepad++. This application includes a side bar that numbers each line of the protocol, making it easier to go back and make changes based on any error messages that BITTSy may present.
Visual fixation is a type of habituation paradigm frequently used to study either infant auditory or visual discrimination. The Switch paradigm (Stager & Werker, 1997) is one type of visual fixation procedure.
In visual fixation, the infant is facing a central video screen which shows an image on each trial; at the same time, a sound is played. The infant's interest in the auditory stimulus is measured by how long they choose to look at the co-occurring visual image.
In most cases, the image is not intended to represent the sound; for example, the child might see a bullseye image, or a checkerboard, paired with a repeating sound (such as a syllable). At first, the image and sound are novel, so the infant attends for a fair amount of time. But as the trial repeats over and over, the combination of sound plus image eventually becomes less interesting, and the child's attention level drops. This is referred to as habituation; it is usually defined experimentally in terms of a set decrease in attention (e.g., looking 50% as long as at the start of the study).
In BITTSy, the habituation criteria can be based on a percentage decrease from initial looking (e.g., the first 3 trials), or from peak looking (the 3 trials with longest average looking). You can also set the amount of decrease in looking (e.g., 50%, 60%) and the number of trials over which looking is averaged.
The Switch paradigm is a form of VFP in which infants are first "taught" one or two new words and then tested on their learning of that pairing. During the habituation phase, they are presented with a particular combination of auditory word and visual image. After habituation, the child is then presented with either the same pairing, or a "switch". For example, if the child was habituated with object A + sound 1 and object B + sound 2, they would be tested with either a correct pairing (object A + sound 1) or a switch (object B + sound 1). If children have learned the specific pairing, they should dishabituate (or increase their looking) to the novel pairing.
See the habituation section for more information about setting up visual fixation/habituation studies in BITTSy.
BITTSy can run conditioned headturn studies, a paradigm in which the child is trained (or conditioned) to turn a particular direction when they hear a particular event or a change in an event. Conditioned headturn can be used to test children's hearing (for audiometry - turn when a sound is heard), or can be used to test discrimination (turn when a change in a repeated sequence of sounds occurs). For the original use of this paradigm, see Eilers, Wilson & Moore, 1977; or see Kuhl, 1985 or Polka, Jusczyk & Rvachew, 1995, for methodological reviews.
In conditioned headturn studies, the child is seated on their parent's lap facing an assistant; the assistant maintains the infant's attention forward by playing silently with toys. Test trials consist of an auditory stimulus - either the presence of a sound, or a change in a background sound.
During training, the test trials are especially salient or obvious (a large change, or a loud sound). Immediately afterwards, a reinforcer then turns on to the side of the infant - originally, this was a lit-up animatronic toy, but it can be anything that would reward (and encourage) a child to look to that side. This reward plays briefly, and then turns off, so that the infant's gaze returns to the assistant, and the process repeats.
After a series of trials, the infant learns to expect that the reward will occur whenever the sound change happens. At this point, the reward itself is gradually delayed relative to the event, such that the infant begins to turn BEFORE the reward happens, or to anticipate it. This is the response that the experimenter is looking for - a turn towards the reinforcer when the particular sound occurs (before the reinforcer), rather than a turn in response to the reinforcer. There is a preset criterion for the test phase to end (e.g., three correct anticipatory head turns).
At this point, training ends, and the test phase begins; the experimenter indicates when a child is ready, and the computer randomly selects either a change trial (where the sound changes) or a control (no-change) trial, where the sound does not change. The experimenter indicates when the child turns his or her head; this leads to 4 potential trial results: a hit (the sound did change, and the infant turned appropriately), a miss (the sound changed but the infant did not turn), a false positive (the child turned even though there was no sound change), or a correct rejection (there was no change, and the child did not turn). These are then used to determine a child's discrimination ability. Only hits are reinforced during this test phase.
An alternative form of conditioned headturn uses two distinct reinforcers, and the infant is trained to turn either to the left or to the right. Other studies add a generalization phase after the standard test phase.
Creating conditioned headturn studies in BITTSy was made possible with the addition of the JUMP command in version 1.5. Due to its later introduction, implementing conditioned headturn paradigm is currently a bit "clunkier" than the other main paradigms BITTSy was designed to run, and the process of developing these protocols is typically more complex. While BITTSy can execute the typical trial and phase structures of conditioned headturn, there are some types of phase progression logic used in prior studies that are difficult to replicate exactly in BITTSy. However, similar to headturn preference procedure, the use of conditioned headturn has been limited by the unavailability of a standard system capable of running these procedures. The ability of BITTSy to run these paradigms provides an easy way for researchers to begin using this paradigm. We hope that BITTSy will allow more labs to adopt and adapt this paradigm for their research.
Some studies may have fixed numbers of trials and lengths of trials; these can run somewhat automatically. But for studies in which trial lengths, or the number of trials, depends on the child's looking behavior, there has to be a method for coding that behavior during the course of the study itself.
BITTSy is set up with default keys for coding (see later section on coding for more details), but these can be altered to fit the preferences of the lab. (However, we generally recommend that the same experiment use the same coding keys for different participants.)
One limitation of BITTSy is that coding is based on a system of Left / Right / Center / Away coding. If you wish to use a simpler Towards / Away coding, you can essentially use the Center coding as "towards" - but this will still require two keys for the coding system, rather than a single key with a press down /lift up option.
If you do not explicitly assign keys for coding, BITTSy will use the default keys:
C = center
L = left
R = right
W = away
Note, though, that for a standard QWERTY keyboard, this puts the key referring to the left on the right side of the keyboard, and the key referring to the right on the left side of the keyboard, which many coders find very unintuitive. You may find it much easier to select a key under the left hand for left looks, and one under the right hand for right looks. Also, note that this is from the viewpoint of the coder - a look to the left from the coder's perspective (facing the child) is a look to the right side of the test space.
If you would instead like to assign your own keys, use this syntax within your optional experimental settings section (see linked page for more info).
BITTSy creates a detailed log file of all events that occur within a session. This means that everything is logged, allowing full access to data. However, these detailed files are not particularly user-friendly. A separate analysis module takes information from the log file and summarizes it in a manner more useful to the experimenter. We have tried to anticipate common summary information that might be desired, but additional summary analyses can be created from the detailed log file at any time.
For more information on the data obtainable from sessions run with BITTSy, see other pages in the Data Output section of the navigation pane.
Events can be specified to occur in a set order. However, there are also a number of aspects that can be selected randomly. These include the side of occurrence for an event (left vs. right) and the particular stimulus from a set. Random selection can occur WITH or WITHOUT replacement - that is, BITTSy will keep track of which items it has already selected, and can avoid reusing items until all have already been selected (without replacement), or can just pick entirely randomly each time regardless of what has already been selected (with replacement). You can also place limits on the number of times the same thing can be re-selected (e.g., pick a side randomly, but don't present to the same side more than X times in a row).
Stimuli can also be arranged in groups, with randomization occurring within sets. As an example, you could have pictures of animals, which include cats and dogs, with multiple examples of each, and could tell BITTSy to select one of the two animal types randomly, then randomly order the presentations of all the different examples within that animal type before moving on to the other animal type.
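A hypothetical sketch of such a grouping (the tag and group names are invented for illustration, and the exact keyword forms for group definitions are covered in the randomization and selection sections):

LET cats = {cat1, cat2, cat3}
LET dogs = {dog1, dog2, dog3}
LET animals = {cats, dogs}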
Infant testing procedures typically either have sound from front and center (as in PLP), or from the two sides of the room (as in HPP). Sound in HPP is usually played either in stereo from both speakers simultaneously (as in Saffran et al., 1996), or is played only from the same side of the room as the attention-getter. Dichotic presentation is relatively rare, but there are situations in which it would be advantageous to be able to play different sounds from multiple locations at once, such as in studies exploring infant perception of speech in the presence of background noise or studies of speech perception in deaf children with CIs, where little is known about their use of binaural cues.
BITTSy software allows the researcher to output sound individually over any of the speakers available to their system, as well as output sound in stereo (simultaneous presentation of a single file through the left and right channels).
In a standard stereo audio (2.1) system, there is a left channel and a right channel, allowing for up to two speakers. With a surround sound capable system (5.1 or 7.1), you can control more speakers individually (5 or 7 locations, respectively).
Any speakers that are compatible with your computer will work for BITTSy.
We recommend using wired connections to speakers, rather than connecting through Bluetooth.
This section is just for those curious about why we recommend a separate speaker system.
Some labs (including ours) may select audio-capable TVs as their displays for use with BITTSy. There are a number of reasons to prefer separate speakers over internal speakers on TVs when running BITTSy studies. Here are the main reasons why we recommend this:
BITTSy has not been tested with presenting audio over an HDMI audio signal (the main reason - which stems from several of these others!)
Computers with HDMI outputs have separate sound drivers listed for HDMI and for devices plugged into audio-only ports, which may have different default settings that could complicate setting up or standardizing how audio is presented to different locations in BITTSy.
Graphics cards with multiple HDMI output ports are more expensive than cards with more non-audio-carrying port types such as VGA and DisplayPort - and computers that can be equipped with these cards are correspondingly more expensive.
Modern TVs often accept only HDMI input, do not have an option of supplying audio from an audio jack on the computer instead, and output two-channel audio from the left and right side of the device. For left-side and right-side screens that you wish to associate with only left- or right-channel audio (i.e. for HPP studies), this combination makes it impossible to separate channels and present them from the appropriate side. (E.g., a command from BITTSy to play audio on the left would be carried over HDMI to both TV speakers, and both would play audio on the left-hand side of the device - rather than the intended effect of audio being played from only the device that is on the left side of the room.)
Internal speakers on TVs are generally of poorer quality, and present audio signals with less fidelity, than what can be obtained with external speakers.
BITTSy is designed to run on Windows 10. Original development started in Windows 8, but later switched to Windows 10; testing in Windows 8 has ceased, and Windows 10 is our only recommended version. No versions of Windows prior to Windows 8 have been tested. Regular Windows updates should not affect the functionality of the BITTSy application.
BITTSy itself does not require specifications beyond a basic desktop computer. However, depending on needs and preferences for particular studies and research programs, you may decide to use BITTSy on a more powerful system, or upgrade particular components to meet certain specifications. If you are in a position to purchase a PC for running BITTSy studies, here are some questions you may wish to consider.
These specs will mainly come into play for loading and rendering video stimuli from your file system and ensuring BITTSy runs smoothly while your operating system runs other processes in the background. Nothing particularly high-end is needed here, but you may want to opt for something mid-range as insurance that there will be no noticeable blips in execution.
Tip: Systems that are marketed as budget gaming computers prioritize the same important functions needed for many experiments (good time resolution for inputs from the keyboard for live coding, fast graphics rendering), are highly customizable, and often offer a great balance of specs for their price.
The accuracy of keypress timing for live coding depends partially on the quality of your keyboard. Using a bottom-tier keyboard (such as the basic keyboards that are included with a computer purchase) can result in a small additional delay in BITTSy receiving and logging that input has occurred. Many keyboards designed for gaming will list their key response times, but there are diminishing returns from more expensive models; testing various models at UMD has suggested that mid-tier keyboards work fine for live coding purposes.
BITTSy is capable of displaying stimuli across multiple screens (computer monitors, TV displays, or a combination of both). If you are interested in using BITTSy for studies that utilize visual stimuli, ensure that your system can support the number of monitors you would like to use (plus one, for the experimenter's control monitor). Many basic computers will support a dual-monitor setup (suitable for a standard visual fixation or preferential looking procedure), but adding more screens may require a more advanced graphics card and graphics processing power. (More specifically, if you wish to run HPP using side monitors as attention getters rather than using lights, you would need 4 monitors: 3 in the booth and one in front of the experimenter. So your graphics card would need to have places for 4 monitors to plug in.)
Example graphics card from UMD lab: AMD Radeon™ RX 5700 with 8GB GDDR6 (3x DisplayPort 1.2, 1x HDMI 2.0)
In addition to standard stereo audio (2.1), BITTSy supports multichannel audio, such as 5.1 and 7.1 surround sound. If you wish to make use of audio outputs beyond a two-channel system, check that your computer has (or can be upgraded to) a surround-sound capable card.
We wanted BITTSy to do HPP with off-the-shelf materials. This required finding a solution for blinking lights that did NOT require hard-wiring.
Our solution was a DMX512 lighting system. DMX stands for Digital Multiplex - DMX512 (Digital Multiplex with 512 pieces of information) is basically a standard for digital communication networks that are used to control things like stage lighting (where it initially began), light dimmers, Christmas lights, special effects devices, electronic billboards - anything you would want a computer system to control.
To use a DMX system with BITTSy, you need to purchase two pieces of hardware, plus connector cables.
The first piece you must buy is a USB-DMX interface box. Basically, this is what allows you to make the connection between the PC computer and the actual lighting system itself. It takes information from the computer, and then outputs DMX codes.
We purchased the ENTTEC DMXUSB PRO (part number 70304).
One end of this has a USB 2.0 port, which connects to your computer (this cable is included when you purchase the device). The other end has XLR ports to connect with the lightbox. There are ports for both DMX IN and DMX OUT, of which you will only need DMX OUT.
There is a cheaper option from Enttec that is a more pared-down DMX controller, the ENTTEC Open DMX USB (part number 70303). It has a USB connection and DMX OUT (the one you need!). It was not available when we purchased the Pro unit, but it has all the capabilities that are needed for controlling lights in BITTSy and should function identically.
The second piece of hardware you need to buy takes the DMX commands from the USB interface box and uses that to turn things on and off.
We used the Elation DP-415.
This unit has been discontinued, but is still available at many vendors (check out B&H Photo, or search on Amazon, Google or Ebay - there are a LOT of them available), along with a very similar model, the Elation DP-415R.
The same company also made a new item, the CYBER PAK, which is the replacement unit - we have not tried it, but this or any other DMX dimmer/switch pack should work similarly, since DMX is a standard.
Dimmer packs take a DMX signal and use it to control power given to outlets along a certain number of channels. The number of channels corresponds to the number of devices you can independently control. There are devices for big light shows and concerts that have many channels and outlets, but basic units like the DP-415 generally have four channels. This is enough to allow for left, right, and center lights in HPP that can all turn on and off individually (plus an extra, unused channel).
The DP-415 has 8 outlets, in pairs divided between 4 channels. The outlets are 3-prong Edison sockets - the basic outlets you have in your house that take 3-prong connectors -- assuming you are in North America. (Outlets in Europe and Asia are different; you can still use this device, but would need to purchase lights with American-style plugs, or use converters.)
With this system, the computer can turn on and off anything plugged into the outlets on your dimmer pack. So you can select any type of light you wish, plug them into those outlets, and BITTSy can turn them on/off and make them flash.
This DMX system means you can select any type of light you wish, as long as they plug in with a basic power plug. Nightlights, bright strobes, lava lamps... anything electronic will be treated exactly the same way. You could even turn on fans or bubble machines rather than lights - BITTSy doesn't know the difference.
Things to consider when selecting lights:
Cord Length: The DMX system is going to be located near the computer running BITTSy - so you need to make sure you have long enough cords on your lights to reach from there to where you want the lights to be. (You can use extension cords, but if you are worried about timing, at least make sure they are good ones, and are similar in length on the two sides of your testing space.)
Brightness: This is relative to the room space itself, and how dim it is - you want something that is bright enough to be easily seen, but not overpowering.
If the cables for your lights have a switch on them, ensure this switch is always on so that they are entirely controlled by the dimmer pack supplying or not supplying power to the outlet.
We then found a cover to go over the lights.
Suitable covers will depend on your lights, room, and intended uses. Some considerations are:
Size of the light bulb
Brightness of the room during testing, and brightness of the light as it shines through the cover. You want the brightness of the light when it is flashing to noticeably illuminate the participant's face so that your experimenters can see which side is currently active, even if the light itself is beyond the frame of your camera view - but you also don't want the lights to be so bright that they are uncomfortable to look at directly.
Color. For headturn preference, where the center light is consistently used between trials and the two side lights during trials, it is nice to have the center light cover be a different color than the side lights' covers. This helps experimenters identify which light is active even more easily during testing. Our center light has a red cover, and our side lights' covers are yellow-orange.
Our covers are turn signal/brake light covers designed for a small vehicle, but you can select anything that works well in your room! You could take apart nightlights for their covers, use caps from large Christmas lights, or get more creative with any kind of semi-transparent plastic material - perhaps some hard plastic cups or shot glasses.
You may not need a cover for your lights, as long as they are shining through small openings, with nothing visible behind, and the brightness is okay!
There is also no need for the lights to project past the surface of the booth walls. You could mount the light slightly behind the wall, and line the cut-out in the wall with a piece of colored plastic vellum for the light to shine through. Just be sure that this is still bright enough!
You will need an XLR cable to connect your USB interface box to your dimmer pack. XLR cables can be purchased online or from any A/V store, as XLR is a standard cable format. XLRs used for this purpose are sometimes labelled DMX Turnaround Cables. However, there are many XLR cables with different pin numbers and configurations available - you will need to verify what kind you need.
If you have selected the models described above for your USB interface box and dimmer pack, you need a 5-pin XLR (male) to 3-pin XLR (female) cable (pictured below).
If you have purchased a different USB interface box or dimmer pack than the models described above, connecting your selections may require a different XLR cable type. The cable will need to connect your DMX OUT port on the USB interface box and your DMX IN port on the dimmer pack - check what type of connection these require.
Protocol files in BITTSy allow you to define what, when, and how stimuli should be presented, setting up the entire structure of an experiment.
The following sections will detail how to set up the basic structure of a protocol file.
Sometimes, you may wish to build a protocol file from scratch. But other times, particularly if the study you are building is a fairly prototypical example of a common behavioral paradigm, you may be able to adapt an existing protocol file to meet your needs - and this is certainly easier! See the sample protocol pages for examples of headturn preference, preferential looking, and habituation studies.
Pay attention to capitalization of keywords when learning BITTSy's protocol syntax. Capitalization matters, except for tag names and phase names.
Follow installation instructions for your selected speakers and displays.
For BITTSy to display image and video stimuli, your system's display settings should be set to Extend mode.
There are several audio issues that can be identified during setup that can cause problems for experiments in BITTSy (or any other experimental software) later. See the audio setup pages for more info.
Connect the USB-DMX interface box (such as the ENTTEC DMX USB PRO) to the computer tower. Both ends should be USB ports (albeit different types).
Connect your dimmer pack (such as the DP-415 unit) to a power source.
Connect the USB-DMX box to the dimmer pack. The connection between them should be DMX OUT on the USB-DMX box to DMX IN on the dimmer pack. The outputs of the dimmer pack are only for the lights.
Connect the dimmer pack to the lights. If you wish to control each light individually (as in a headturn preference setup with a left, center, and right light), ensure that they are plugged under separate numbered channels. The numbering of the channels on your dimmer pack will generally match with the device IDs used by BITTSy, but this will be confirmed later when you run the setup protocol!
If you have the DP-415 unit, be sure that notches 1 & 10 are flicked to the “ON” position (toward the top). Notch 1 sets the starting DMX address to the lowest (the options 2-9 are unneeded), and notch 10 makes it function as a switch pack.
Check for necessary firmware/drivers for your USB-DMX interface box. If you purchased from ENTTEC, see the Downloads section at the bottom of the product page (for the ENTTEC DMX USB Pro); download and install both the Win D2XX driver and the recommended firmware.
After everything has been successfully installed, this is what the booth looks like:
Stimuli can be flashing lights with appropriate hardware; if so, any lights that plug into standard power outlets are usable. (See section on lights for hardware requirements.)
For stimuli presented over monitors, BITTSy relies on Microsoft Visual Studio; as such, BITTSy should (in theory) be able to handle any kind of media file that can be opened in default Windows programs such as Windows Media Player, since BITTSy is working with exactly the same system interfacing. It may be possible to download codecs to allow for additional formats, but we would recommend using the following:
Video formats: .wmv & .mp4
Image formats: .jpg, .png & .gif
When creating your stimulus files, keep in mind the resolution of the display(s) on which they will be shown. BITTSy will display stimuli at full size, and could cut off an image or video that is sized too large. Visual stimuli that are smaller than the screen resolution will be displayed centered, with the background color for that protocol shown behind it.
Note: If you wish to present multiple images on the single center monitor, you will need to build the image file to have both a left and right picture embedded within it as a single image. BITTSy presents images to the monitor as a full-screen image, rather than allowing multiple "windows" to appear on the same monitor. This decision was made to avoid having image windows that could appear off center in a monitor.
Sound output can either be in the form of audio files, or the sound portion of a video file in cases where synchronicity between video and audio is required. For audio files, we recommend the following:
Audio formats: .wav & .mp3
Audio can be encoded at the sample rate and bit depth of your choosing, as long as they are supported by your computer's sound driver. BITTSy does not require any particular settings, and does not downsample files.
You may have audio stimuli in .aiff format from previous experiments. AIFF is a Mac-specific format that will play on Windows, but may not play as intended. We recommend converting these.
Starting definitions are the very first lines of any protocol file. These definitions handle how you, the user, will identify sides later in the protocol, and how BITTSy will map those side names to devices that are plugged into your computer.
Starting definitions can look like this:
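For a setup with three lights and three displays, the starting definitions might look like the sketch below (the exact keyword forms are an assumption here; the side names are yours to choose, as described next):

SIDES ARE {CENTER, LEFT, RIGHT}
DISPLAYS ARE {CENTER, LEFT, RIGHT}
LIGHTS ARE {CENTER, LEFT, RIGHT}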
Or, if your protocol does not utilize all the component types that BITTSy can use, you may only need a subset of these definitions:
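For instance, a study using only a single central display and no lights might need just the following (again, an illustrative sketch in the same assumed form):

SIDES ARE {CENTER}
DISPLAYS ARE {CENTER}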
The SIDES definition lists the names that will be used to identify sides in the protocol. In the example above, they are CENTER, LEFT, and RIGHT - but they could be named however you like. They can be listed in any order, but the number of them should correspond to how many distinct sides you will need to refer to later in the protocol.
Once you have decided on your side names and listed them in SIDES, you will use the same ones in the other starting definitions and in action statements throughout your protocol.
The SIDES opening definition is required for any protocol using any type of stimulus presentation: all stimulus types (lights, audio, image/video) require specifying which sides they should be presented on.
Once available side names are given in SIDES, the next thing to set up is how BITTSy will map these names to particular components controlled by your computer, in order to control stimulus presentation across those devices.
Only include definitions for the components (displays/lights) that will be used in that particular protocol. BITTSy will require that all of the defined components are connected and turned on before running the protocol - so if your protocol doesn't use lights, or only uses one central display and not your two side displays, leave out everything that's not used, so that experimenters running the study don't have to power on all the unnecessary components.
By default, BITTSy assumes the system is set up for stereo audio (2.1), and allows for control of two channels (LEFT and RIGHT) that can be played from individually or simultaneously (with the command CENTER or STEREO). If your computer is equipped with one or two speakers, this is the correct setting for you, and you will not need to include an AUDIO definition.
If your computer has more than two audio channels, you can include an AUDIO definition to say how you will identify each speaker. For example:
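(The channel names below are illustrative for a 5.1 system, following the same assumed keyword form as the other starting definitions; choose whatever side names suit your setup.)

AUDIO ARE {FRONTLEFT, FRONTRIGHT, CENTER, SUBWOOFER, REARLEFT, REARRIGHT}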
Like lights and displays, the order in which the sides are listed in the AUDIO definition should match the order in which the peripheral components (speakers, in this case) are numbered by the computer. That is, the speaker with the lowest ID number is given the first listed side name, the second ID gets the second name, etc. Unlike for the lights and displays, the ordering of audio channel IDs is standardized and predictable. Surround sound-capable sound cards have a standard color-coding system to denote which jacks are for which channel(s), and will be listed in order in your manual. Generally, for a 5.1 system, it will match the example above, and for a 7.1 system, the additional middle channels will have the highest IDs and be defined at the end.
Channel names in the example above match where speakers are typically intended to be placed in a 5.1 surround sound system. But since BITTSy is not applying any audio effects, all the speaker channels are functionally the same. You can physically place them anywhere you like, and choose side names in your starting definitions that describe where the speakers are actually located, rather than how the audio jack is labelled.
Below are descriptions of experimental settings that can be changed to suit a particular study or paradigm. Within a protocol, these lines can be added anytime after the starting definitions and before the first STEP.
Transitions between STEPs, trials, and phases can be controlled by participant looking time, recorded by an experimenter via live coding. To live-code an experiment or have looking-controlled timing, the keys you will use for coding need to be assigned to looking directions so that they can be matched to stimuli being presented on a given side.
If you do not explicitly assign keys for coding, BITTSy will use the following default keys:
C = CENTER
L = LEFT
R = RIGHT
W = AWAY
Key assignments can be changed with the following command:
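(A sketch of the general form; the keyword shown here is an assumption, so check the key assignment reference for the exact syntax.)

KEY <side> <key>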
where <side> is any side named in the SIDES definition, or AWAY, and <key> is any alphanumeric key or arrow key. (Arrow keys are denoted by the codes UP, DOWN, LEFT, and RIGHT.)
Below are several examples.
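(These examples assume the KEY <side> <key> form sketched above.)

KEY LEFT A
KEY RIGHT L
KEY CENTER S
KEY AWAY W

Here a key under the coder's left hand (A) codes looks to the left and a key under the right hand (L) codes looks to the right, avoiding the crossed mapping of the defaults described earlier.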
Keys on the number pad have different encoding than their equivalents on the standard keyboard; at this time, number pad keys cannot be assigned to sides through key assignment definitions and cannot be used for live coding.
If you name sides in the opening SIDES definitions with keywords other than CENTER, LEFT, and RIGHT, and wish to use live coding, be sure to include key assignment definitions: BITTSy will not be able to pair default keys with sides that have different names.
COMPLETELOOK and COMPLETELOOKAWAY define the minimum length that a look should be in order to be counted in live coding - that is, the threshold below which any look is considered to be a mistake (as in an accidental double-press of a key, or pressing a wrong key and immediately correcting). Looks below this threshold will have their time folded into the length of a look (in a valid direction) that follows them.
For example:
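(Sketched with an assumed keyword-value form; check the experimental settings reference for the exact syntax.)

COMPLETELOOK 250
COMPLETELOOKAWAY 250

Here, any coded look or look-away shorter than a quarter second would be treated as accidental.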
By default, these times are each 100 milliseconds. The COMPLETELOOK and COMPLETELOOKAWAY settings allow you to change these thresholds up or down, depending on what is appropriate for your study.
These values must be nonzero, but can be set effectively low enough that they will not ever discount any looks (there is a limit on how quickly two keys on the keyboard can be pressed!). However, note that the COMPLETELOOKAWAY value also determines how often time spent looking away is written to the detailed log file during a run. If you are setting this value very low, test your protocol to ensure that the frequent logging is not resulting in any undue load on your computer.
For any protocol using external displays, at the start of the run, BITTSy will cover them with a solid color background to show whenever they are not actively displaying an image or video. The background color can also show around the edges of images/videos if they do not take up the whole screen, and prevents the desktop from showing if there are any brief gaps between stimuli being displayed.
By default, the background color is black. It can be changed to white by adding the following line to your protocol, anywhere before the beginning of the first STEP:
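(The keyword form below is an assumption; check the experimental settings reference for the exact line.)

BACKGROUND WHITE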
Visit the BITTSy downloads page to download the latest version of BITTSy (note the last modified date and version number in the file name).
When you are first starting with BITTSy, you will also need to download the Setup zip file, located on the same download page (see the setup protocol section below for how to use it).
BITTSy does not require any installation. Simply unzip your downloaded file and move the folder out of your Downloads folder to somewhere it won't be accidentally erased. Navigate through the subfolders to the .exe file (with the BITTSy spider icon) and right-click to create a shortcut. You can then move that shortcut to the desktop.
Windows Defender or other anti-virus programs on your computer may prevent you from simply double-clicking to open the BITTSy executable from your shortcut. You can right-click and select Open.
If you see a screen like the one below, click "More Info" where it appears below Unknown Publisher, then the "Run anyway" option will appear at the bottom. Once you click this to open BITTSy, Windows Defender will allow you to open BITTSy normally in the future by double-clicking the shortcut.
When you open BITTSy, it will look like this.
The first protocol file you will load and run will be the setup protocol. See below!
This protocol should be run whenever you are setting up a new system with BITTSy, as well as any time that you unplug/replug displays or lights from your system, to ensure that the device ID numbers assigned by your operating system have not swapped around.
If you do not have any external displays connected for displaying visual stimuli, and would like to configure only the LIGHTS definition, you will need to remove all lines beginning with the IMAGE keyword, and delete the opening DISPLAYS definition.
The setup protocol is copied below.
We used: Darice 6402 Accessory Cord with 1 Light and added an extension cord to each one.
The order of the side designations in DISPLAYS and LIGHTS definitions corresponds to their ID numbers (0, 1, 2, etc.) assigned by your computer (which is according to where they are plugged in). Refer to the setup protocol section below for details on how to run the setup protocol, which will guide you through the process of identifying your components' ID numbers and setting up your DISPLAYS and LIGHTS definitions to appropriately label what side they are on.
Habituation can require many additional experimental settings. Within the protocol, these are defined, like the settings above, anywhere between the starting definitions and the first STEP. See the habituation pages for more details on these experimental settings.
Upon launch, BITTSy checks for the drivers it needs and whether a DMX box is currently connected. A couple of pop-ups (pictured below) will warn you if these are not available. If you are not using lights, you can ignore these warnings and click OK!
The setup protocol is designed to help you configure BITTSy's starting definitions: SIDES, DISPLAYS, and LIGHTS.
The setup protocol is available on the downloads page. It is designed to identify starting definitions for a system with three displays (in addition to the experimenter's screen) and three lights. If you have fewer displays/lights, the protocol contains comments describing what changes to make to configure it properly for your system.
To run protocols, use the buttons on the top-left of the BITTSy window in order, starting with Load Protocol (see above for guidance). Follow the comments written within the protocol file for how to progress from step to step and how to use the setup protocol to create the starting definitions for DISPLAYS and LIGHTS.
Confused by the display setup in this protocol? See the section on displays for more info about how BITTSy identifies displays.
The tag types discussed thus far must be set up in the opening definitions section of your protocol, before the execution of the first STEP
, and once they are defined, what they refer to cannot be changed. Dynamic tags are tags that can be defined later in your protocol and used to refer to the result of a selection statement when a tag is chosen from a group. They don't have a fixed identity as "meaning" any other particular tag; rather, they work as a placeholder tag that can still be reassigned to a new item later on in the execution of the protocol.
Dynamic tags are explained further in the section on randomization and selection from groups. But they are another type of tag that, once defined, work in the same way as the tags discussed thus far. They can reference a particular stimulus file or they can reference a group, and they can be used in selection statements or action statements in the same way as static tags.
BITTSy protocols run as a series of STEPs. The steps run in order, and everything that occurs in the experiment itself is part of a step. Steps also allow you to LOOP back, which is how you repeat sections of execution (such as trials) - steps are thus fundamental to how the program is structured.
The first line that defines a STEP marks an important division in the protocol file. Everything in a protocol file discussed thus far, with the exception of dynamic tags - starting definitions, optional settings, file path definitions for tags, group definitions - must all precede the first STEP. All of these are fully validated and processed by BITTSy prior to the start of an experiment. Once the experimenter clicks to run a protocol, the portion of the protocol file being executed is what follows STEP 1.
Note that everything in the example above happens within a STEP
. These STEPs provide important reference points in your protocol, and all statements should be placed inside the structure of STEPs.
STEPs should be numbered in ascending order with no duplicates or missing values. Misnumbered steps will not necessarily cause problems executing your protocol on their own, but when loops are involved, there may be serious issues jumping back to an indicated STEP and executing the intervening steps if the LOOP references steps that have been misnumbered.
Protocols can be split into as many STEPs as needed or desired.
STEPs can optionally end in step terminating conditions, which specify conditions that must be met before execution can proceed from one STEP
to the next STEP
. These are the UNTIL
statements in the example above. When thinking about how to divide the progression of your experiment into STEPs, the most important thing to consider will be the necessity for these terminating conditions, and which actions in your experiment need to occur before you wait for one to be met. (How to structure STEPs and protocols will become clearer as you learn more in the upcoming sections, and look through example protocols.)
While STEPs provide internal structure to your protocol and are essential to BITTSy's execution of it, phases and trials are optional divisions within your protocol, whose chief importance is for marking sections that you wish to analyze.
Trials are the basic unit of your experiment that you wish to analyze, and can span across multiple STEPs.
For most experimenters, "trials" are the natural subdivision in an experiment - that is, we generally tend to think about experiments as consisting of trials. As such, a trial is probably a more intuitive concept than a STEP
. But since multiple things could happen in a trial, trials will often contain multiple STEPs. Crucially, trials typically proceed until some sort of condition is met, such as a stimulus finishing playing, a certain amount of time elapsing, or the participant reaching a particular looking time requirement. These conditions will be step terminating conditions that then allow the trial to end in another, immediately following STEP
.
Trials are not explicitly numbered or named within the protocol - you merely specify where one should start and where it should end. All trials should have a start and end, and trials should not overlap with other trials.
The below example is what a trial could look like in a standard PLP study.
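This is a sketch only - the tag names are hypothetical, and the exact trial flag keywords should be checked against the example protocols:

    STEP 3
    VIDEO CENTER attentiongetter LOOP
    UNTIL KEY C
    STEP 4
    VIDEO CENTER OFF
    TRIAL START
    VIDEO CENTER trialvideo ONCE
    UNTIL FINISHED
    STEP 5
    TRIAL END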
Note that the video being presented as an attention-getter is not included within a trial, because we don't have any intention of analyzing this section. Rather, the trial starts immediately before the first stimulus video, and ends immediately afterward.
When reporting looking time across trials, BITTSy will number trials within each phase, and restart numbering when a new phase is defined.
Phases can contain multiple trials, and mark divisions between groups of trials or STEPs that should be analyzed completely separately from each other, because they reflect some difference in the task the participant is doing (as in training and test phases; or pre-test, habituation, test and post-test).
Phases have a name that you can customize and specify in their start flags. These names are displayed to the experimenter when running the experiment, and listed in reports of study data. Phase names cannot contain any spaces. Phase end flags do not specify the name of the phase, but simply end the phase that was most recently started (i.e. the current phase).
The below example is what phases and trials could look like in a simple visual fixation study, with a fixed-length familiarization passage followed by two test trials in a fixed order.
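A sketch (hypothetical tag names; check the example protocols for the exact phase and trial flag keywords):

    STEP 1
    PHASE familiarization START
    TRIAL START
    AUDIO CENTER passage ONCE
    UNTIL FINISHED
    STEP 2
    TRIAL END
    PHASE END
    PHASE test START
    TRIAL START
    VIDEO CENTER testvideo1 ONCE
    UNTIL FINISHED
    STEP 3
    TRIAL END
    TRIAL START
    VIDEO CENTER testvideo2 ONCE
    UNTIL FINISHED
    STEP 4
    TRIAL END
    PHASE END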
Marking divisions between phases is helpful for analyses of looking time through BITTSy's reporting module: phases are listed separately in these reports. Trials can always be looked at individually, but trials within the same phase can also be summed/averaged across automatically in reports. Thus, phase definitions should be placed to demarcate which trials can be analyzed together.
In some cases, the placement of phase definitions can impact the execution of protocols. For example, cumulative looking time can be calculated within the current phase, and exclude other trials previous to that phase. In habituation, phase definitions allow BITTSy to "know" to reject windows containing trials from two different phases (i.e. not including a pre-test trial as part of a basis window, as the pre-test should be considered completely separately from habituation phase trials).
Without placing any phase start and stop markers in your protocol files, your experiment will be considered to be a single phase, with all its trials analogous to each other.
The first and most common type of tag is a tag that directly references a file (image, audio, or video) on your computer. These tags are defined before the beginning of the first phase or step, and once defined, cannot be changed later in the protocol: they are linked to only that particular file.
To easily find the complete file path for your stimuli:
Open the folder where the files are located
Right-click on a file and select Properties
The file path is shown under "Location" and can be copied directly
File paths for stimulus files used in BITTSy cannot contain any whitespace characters (these will cause a validation error in BITTSy). If any of the folder names within your file path contain spaces, you will need to rename them within Windows File Explorer. If your files are stored under a user account whose name contains a space, you will also need to change the username so that it does not include a space.
There is no requirement for stimuli files to be located in the same folder as protocol files, so if you prefer to have spaces in names for other folders (or don't want to disrupt file references for other experimental software on your computer), you can store stimuli files for BITTSy in a separate, dedicated set of folders, such as directly on the C: drive.
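For example, a tag for an image file might be defined like this (the folder and file names here are hypothetical):

    LET dog = C:\BITTSyStimuli\images\dog.jpg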
When you set up these types of tags with LET
statements, BITTSy doesn't immediately store information about what kind of file it is (image, audio, or video) other than by storing the file path and extension. There is a special way to declare a tag that allows you to tie the tag name to a particular stimulus type.
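That declaration is TYPEDLET. As a sketch of its general form (whether the type goes in exactly this position should be checked against the example protocols):

    TYPEDLET <tag name> = <type> <file path>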
Here, <type> can be image, audio, or video.
Tags defined with TYPEDLET
(as opposed to LET
) can be used in any type of group, but only tags defined with TYPEDLET
can be made into LINKED
tags.
LINKED
tags make it easier to reference pairs of stimuli that should be presented together. They are useful in cases where:
you will want to present particular, limited combinations of stimuli (e.g. each image stimulus goes with one or two possible audio files out of a much larger set in the study, and would never be presented with any of the others)
stimuli in the pairings are consistently of different types (image/audio/video); no two stimuli of the same type can be LINKED together
you want to randomize presentation order across stimulus pairs rather than randomizing across stimulus types (such as audio and image) independently of each other
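For example, an image and an audio file might be linked like this (a sketch with hypothetical names; check the example protocols for the exact TYPEDLET and LINKED formats):

    TYPEDLET dogpicture = image C:\BITTSyStimuli\dog.jpg
    TYPEDLET dogsound = audio C:\BITTSyStimuli\dog.wav
    LINKED dog = {dogpicture, dogsound}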
LINKED
tags allow you to refer to their paired components by the same tag name (dog
in the example above). For more, see the section on using LINKED tags in action statements.
Preferential looking (PLP) studies generally require paired audio and visual stimuli. However, when creating video files is an option, video stimuli with an embedded audio track are a better choice for PLP studies in BITTSy than using separate or LINKED
audio + visual media files. When presenting a video file, the visual display and audio track begin wholly simultaneously, but presenting separate visual and audio files can theoretically result in a slight delay between the start of each file. This delay is not generally perceptible, but it has the potential to be a problem when timepoints (such as the onset of a target word) must be completely predictable and consistent.
Tags are keywords defined within your protocol that allow you to refer to:
particular files on your computer
groups of stimulus files
the result of a random selection from a group
Once defined, all three of these types of tags can be used similarly in protocol files. Tags provide a layer of abstraction that allows protocol files to group, select, and reference stimuli (audio, image, and video files on your computer) in dynamic and flexible ways.
Valid tag names...
must be unique (the same tag name cannot be defined as two separate files/groups)
must be single "words" that consist of letters, numbers, and underscores (no spaces, and no other symbols)
The first two types of tags - tags denoting particular stimulus files, and tags that are given to name groups of other tags - are defined near the start of your protocol, anywhere after the starting definitions and before your first STEP statement. See the following pages for more information on these tags.
Groups are tags that refer to a set of other tags. The tags within the group could refer directly to particular stimulus files, or they could be groups themselves.
Tags within a group should be separated by a comma and a space, and there should be no spaces between the opening curly brace and first tag name or the last tag name and the closing curly brace.
Groups refer to tags that have been previously defined, and create a hierarchical structure.
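For example, assuming dog, cat, snake, and turtle have already been defined as tags for files:

    LET mammals = {dog, cat}
    LET reptiles = {snake, turtle}
    LET animals = {mammals, reptiles}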
Below is a visualization of the relationships of tags in the above example.
In this example, dog
, cat
, snake
, and turtle
refer directly to files (the tag type covered in the previous section). The tags mammals
, reptiles
, and animals
are all groups - but while the members of mammals
and reptiles
are tags referring to files, animals
is a group of other groups.
Groups allow for random selection from within a set of tags, which can be used iteratively to select subsets of stimuli before selecting a particular stimulus from the final subset. Ultimately, the purpose will be to present a selected stimulus using an action statement. In order to select through a consistent number of steps, no matter which tags from within groups are chosen, the tags that are contained in a single group should be at the same level in the tag hierarchy. For example, if we wanted to make a group that was the set of the above animals that have legs, we could define:
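(A sketch, reusing the hypothetical animal tags from above:)

    LET haslegs = {dog, cat, turtle}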
But we would NOT use:
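(That is, a definition that mixes a subgroup with a file tag:)

    LET haslegs = {mammals, turtle}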
because if mammals is selected first from the group, another layer of selection is required before finally presenting a stimulus, whereas selecting turtle initially requires none. BITTSy cannot proceed through different numbers of selection steps based on what is chosen.
The above point is especially important in protocols using loops. But if your selections from your group are not inside a loop, and are occurring in a fixed order (such that you know when you're selecting another group and when you're selecting a tag) you can construct a group with tags at different levels in the tag hierarchy - there's nothing stopping you. But the fix below always works.
In cases where you really want something more like {mammals, turtle}
, you can use "dummy groups," which contain only one item, to get around this issue.
Note: You can go back and forth between defining tags that reference files or tags that are groups as you write protocols. However, whenever a group is defined, all of its component tags must have already been defined earlier in the protocol. BITTSy validates protocols by reading line by line, and if a tag name is referenced inside a group before it is defined through a LET
statement, this will cause a validation error.
There is one special type of group that doesn't contain tags as its elements - it contains side names. This allows for random selection not just of stimulus files (like other groups), but also of presentation location.
The side names that can be used in groups should be previously defined in the starting SIDES definition.
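For example, a group of sides might look like this:

    LET sides = {LEFT, RIGHT}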
Many experiments require selecting a particular stimulus from a group of stimuli and presenting it. Once a group is defined, tags that denote particular stimuli can be chosen in BITTSy, with or without replacement, either in the order in which they are listed in the group or at random. This is done with choose statements.
Choose statements can occur on their own, but they are often employed within loops - sections of steps that repeat until a specific condition is met. For example, a particular STEP could select and display an image from a group, and a loop could repeat that STEP until all of the images have been chosen and displayed. After covering the basics of choose statements and loops individually, more about how they work together can be seen in the example protocols.
Remember dynamic tags? Unlike other tags, they are not defined at the start of a protocol. Rather, they get defined through choose statements, within the steps of your protocol, and refer to the result of that choose statement on its most recent execution.
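A sketch of the general form (the choose-statement options are covered below):

    LET dog = (FROM dogs RANDOM)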
The syntax above shows how to create a dynamic tag. The name you give it within the LET
statement is its new name: this will be what you use later to refer to whatever tag or stimulus was selected. The part in parentheses on the right side of the equals sign is the choose statement (we'll talk more about the syntax and options for choose statements later.)
The restrictions on valid dynamic tag names are the same as for other tags. Unlike static tags that directly denote files or groups, there is no restriction on defining the same dynamic tag name twice - indeed, this is extremely common, as you are often creating and overwriting dynamic tags by executing a choose statement within a loop. However, dynamic tags must not have the same name as any static tag.
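As a concrete sketch, with hypothetical tag names and an assumed IMAGE action-statement form (action statements are covered in a later section) - the group is defined in the opening section of the protocol, and the choose statement appears within a step:

    LET dogs = {dalmatian, deerhound, boxer, beagle}

    STEP 1
    LET dog = (FROM dogs RANDOM)
    IMAGE CENTER dog ON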
In the example above, we have a group dogs
that contains several tags referencing image files. We want to pick a random one and display it. We don't know which tag out of the group dogs
will be chosen on any given run of this particular STEP
. Therefore, we can't refer to the result by directly using any of the tags referencing files (dalmatian
, deerhound
, boxer
, etc.) Instead, we create a dynamic tag, to which we can assign whatever tag we randomly select.
You can think of dynamic tags like arrows. If the choose statement selects beagle
from the group, dog
points to beagle
, and in the subsequent action statement to display the image, BITTSy follows the arrow back from dog
to beagle
to the actual file on your computer it will access and display.
Dynamic tags keep referring back to the same tag until a later choose statement changes their reference - that is, the arrow stays pointing in the same direction until another choose statement moves it. If beagle
was selected above, you could refer to dog
many steps later and it would still mean beagle
, as long as there were no intervening choose statements that also assigned their result to dog
. If there were more choose statements that assigned to the same dynamic tag, dog
would always refer to the result of the most recently-executed choose statement.
Often, dynamic tags are used within a loop. Loops repeatedly execute a set of STEPs within a protocol. When a dynamic tag and choose statement are defined within a loop, the choose statement is executed each time the loop runs, and the assignment of the dynamic tag changes each time.
Whenever you create a choose statement, you must specify whether to choose the items from the group in order (by always selecting the FIRST
tag that is eligible for selection), or to select in a random order (using the flag RANDOM
).
More on these and how they work in different types of choose statements below!
Whether or not a particular tag is eligible for selection is tracked per group. If a particular static tag belongs to two groups, and is chosen via TAKE
from one of those groups, it could still later be selected when using a choose statement that is set to pull from the second group. (The same applies to choosing with FROM
statements in the next section!)
Choosing items with RANDOM
, as in the example above, does just that - picks any random remaining tag in that group, with equal probability.
Choosing items in order, via FIRST
, is really designed for choosing without replacement with TAKE
. Doing this allows you to set up a particular, fixed order of tags while defining the group, and then present those tags in that order. For example, the items from this group...
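(A sketch; imagine the TAKE statement below executed repeatedly, e.g. within a loop:)

    LET dogs = {dalmatian, deerhound, boxer}
    LET dog = (TAKE dogs FIRST)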
...would display dalmatian
, then deerhound
, then boxer
- the order in which they appear in the definition of dogs. This set-up is completely deterministic, which means you could equivalently set up your protocol to directly reference the static tags:
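(A sketch, with an assumed IMAGE statement form, showing statements that would appear in successive steps:)

    IMAGE CENTER dalmatian ON
    IMAGE CENTER deerhound ON
    IMAGE CENTER boxer ON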
However, when designing experiments that present files in a fixed order, there are often multiple fixed-orders to create and assign across study participants. Using choose statements with FIRST
allows you to refer back to these tags generically within the steps of the protocol. Therefore, when you are creating multiple fixed-order protocols for one experiment, all you need to do is change one line: the group definition.
Choosing from groups with FROM
marks the resulting tag as having been chosen, but does not bar it from being selected again from that group later.
For example, selecting with FROM dogs RANDOM
two times on the following group...
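(Using this hypothetical two-member group:)

    LET dogs = {dalmatian, deerhound}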
...could result in:
dalmatian, deerhound
dalmatian, dalmatian
deerhound, dalmatian
deerhound, deerhound
Selecting with FROM dogs FIRST, on the other hand, would always yield dalmatian.
There are additional clauses you can add to a FROM
statement that place restrictions on the repeated selection of tags.
These repeat clauses - a maximum number of total repeats, a maximum number of repeats in succession, and a maximum number of repeats within a window of selections - are each described below.
A common mistake with all of these is confusing the number of repeats with the total number of presentations. If you don't want it to repeat at all, it has to be listed as zero repeats, not one repeat. One repeat means it could present the item twice: the initial time, plus one repetition of it.
This type of clause allows you to restrict how many times a particular item is selected from a group, overall.
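For example (a sketch, using the {with max n repeats} clause form quoted below):

    LET side = (FROM sides RANDOM {with max 4 repeats})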
This means that each side, LEFT
or RIGHT
, can be chosen for stimulus presentation up to 5 times. Once one side has been picked 5 times, any remaining repetitions will be forced to pick the other side. There are valid sides remaining for exactly 10 selections; if this step is executed 11 times, BITTSy will crash on the last one because it won't be allowed to pick either one!
More generally - ensure that when you are placing selection criteria, they are appropriate for the number of times a step will execute in your protocol, and there will always be a valid selection that BITTSy can make.
Selecting {with max 0 repeats}
in a FROM
statement is completely equivalent to using a TAKE
statement - it specifies to not allow repeats, which is what selecting without replacement is! TAKE
is just shorter to write, and easier for someone reading your protocol to understand.
This type of clause allows you to restrict how many times a single item can be chosen from a group in a row.
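For example (a sketch):

    LET side = (FROM sides RANDOM {with 2 repeats in succession})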
In the above example, each side can be chosen up to 3 times in a row. Whenever one is chosen 3 times in a row, the next selected side must be the other one.
Using {with 0 repeats in succession}
specifies to never make the same selection twice in a row. For a group that contains just two items, like sides
in our example, this forces selections to alternate.
This type of clause can be used when you wish to restrict how many times an item can be repeated within a particular window of trials.
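For example (the exact phrasing of this windowed clause is an assumption here; check the full syntax reference):

    LET dog = (FROM dogs RANDOM {with max 1 repeats within 4})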
This means that in any set of four consecutive selections, each item can appear a maximum of two times. For example, this would be a valid possible selection order:
boxer, bulldog, bulldog, beagle, dalmatian, bulldog, deerhound
Because every set of four contains only up to two of the same item.
{boxer, bulldog, bulldog, beagle}, dalmatian, bulldog, deerhound
boxer, {bulldog, bulldog, beagle, dalmatian}, bulldog, deerhound
boxer, bulldog, {bulldog, beagle, dalmatian, bulldog}, deerhound
boxer, bulldog, bulldog, {beagle, dalmatian, bulldog, deerhound}
This type of clause is good for when you want to restrict how frequently an item can occur within a certain span of selections, beyond whether the repetitions are strictly all in a row (which is what repeats in succession restricts).
Here's another example.
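(Again, treat the exact clause phrasing as an assumption:)

    LET dog = (FROM dogs RANDOM {with 0 repeats within 6})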
This means that as each item is selected, it cannot be selected again in any of the next 5 selections that are made from that group. For example, if boxer
is chosen first, it cannot appear again until trial 7 at the earliest; beagle
, if chosen second, cannot appear again until trial 8, and so on.
On the surface, the above example might seem like a good way to create a block structure in BITTSy (displaying all of the stimuli from a set in a random order without repeats, then showing the set again in a different random order). However, you would typically want stimuli within a block to be randomized independently, and this selection criteria always works along a moving window, meaning that there is never a point where the current selection is independent of the previous ones. For example, if repeated executions of the above line made the following selections:
whippet, dalmatian, deerhound, beagle, boxer, bulldog, ______
Because the last selection is considering the five previous, and won't allow any repeats within that frame...
whippet, {dalmatian, deerhound, beagle, boxer, bulldog, ______}
...that selection must be whippet - it is the only item that wouldn't be a repetition within that set of most recent 6 picks - and next would have to be dalmatian, and then deerhound... Essentially, what you get is a random permutation of the six items when moving through the group for the first time, but thereafter it repeats that same ordering of the six tags.
How, then, do you make a block structure with BITTSy? See below!
Let's go back to the basic formulation of a choose statement.
Let's say we're writing a protocol in which we want to display one of these images. We start with animals
, a group whose members are other groups. We need two choose statements before we can display an image tag - first to select either mammals
or reptiles
out of animals
, and second to select a particular mammal or reptile from that resulting group (which we've called class
).
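Putting this together (a sketch, with an assumed IMAGE statement form):

    STEP 1
    LET class = (FROM animals RANDOM)
    LET animal = (FROM class RANDOM)
    IMAGE CENTER animal ON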
Now let's say we want to make an experiment that displays these images in a block structure. One block of trials will be mammals, one will be reptiles. We'll randomly select which block comes first, and within each block, we'll randomly order the two animals that belong to that phylogenetic class. Below is an example of how this can be constructed.
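This sketch assumes the loop syntax covered in the loops section, with an UNTIL n TIMES loop condition (the exact loop keywords are assumptions); each image is displayed for 5 seconds:

    STEP 1
    LET class = (TAKE animals RANDOM)
    STEP 2
    LET animal = (TAKE class RANDOM)
    IMAGE CENTER animal ON
    UNTIL TIME 5000
    STEP 3
    LOOP STEP 2
    UNTIL 1 TIMES
    STEP 4
    LOOP STEP 1
    UNTIL 1 TIMES

Because each class contains two animals and animals contains two classes, looping STEP 2 one extra time gives two trials per block, and looping STEP 1 one extra time gives two blocks.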
Defining animals
as a group of groups allows you to set up choose statements that create this type of simple block structure, and gives the following possibilities.
Once a dynamic tag is reassigned via another choose statement to mean a different static tag, there is no way to use it to reference what it used to mean. Therefore, any experiment that needs to call back up "the stimulus that was chosen two trials ago," after having chosen more things, will need to assign the results of choose statements to new dynamic tags so that the older results are not overwritten. This can require not using a loop structure, as a loop repeatedly executes a single section of the protocol and cannot allow modifications to the included lines.
TAKE
clauses specify selection that is strictly without replacement. Whatever tag is chosen is no longer available to be chosen from that group in any subsequent choose statements, whether they occur in later steps or in the same step via a loop. (However, tags that are chosen still belong to the group for the purposes of tracking accumulated looking time or logging experiment data.)
Loops make TAKE <group> FIRST an even more convenient option for fixed-order protocols than using static tags. For an example protocol that puts this all together, see the example protocols page.
When a group tag is referenced in a choose statement, BITTSy doesn't care what type of tags it contains, as long as it is defined as a group and has at least one tag that is eligible for selection according to your selection criteria. It simply picks one and makes the dynamic tag point at it. That group can contain tags that denote files, it can contain SIDES, or it can contain other groups - as in the animals example from the groups section, where each selection from animals yields another group.
Order   Block 1                     Block 2
1       mammals: cat, dog           reptiles: snake, turtle
2       mammals: dog, cat           reptiles: snake, turtle
3       mammals: cat, dog           reptiles: turtle, snake
4       mammals: dog, cat           reptiles: turtle, snake
5       reptiles: snake, turtle     mammals: cat, dog
6       reptiles: turtle, snake     mammals: cat, dog
7       reptiles: snake, turtle     mammals: dog, cat
8       reptiles: turtle, snake     mammals: dog, cat
Habituation studies are ones in which an experiment or phase continues until the child no longer attends (or attends less) to a particular stimulus.
In BITTSy, habituation phases terminate based on your protocol's set termination criteria - a decrease in looking relative to baseline. This, in turn, is based on the following factors:
How many trials should be included in the baseline measure and in judging whether habituation has occurred (e.g., 3 trials, 4 trials, etc.). Note that the window SIZE (in terms of number of trials) is the same for both.
Which trials are included in the baseline (e.g., the FIRST three trials, or the 3 trials with the LONGEST looking overall...)
Percentage drop (e.g., the phase should end when looking has reduced by a certain percentage of baseline, such as 50% of baseline looking)
See the other pages in this section for specifics on how to set habituation criteria, reach (or not reach) habituation and proceed to another phase of the experiment, or exclude particular trials from habituation calculations that do not meet desired criteria.
Action statements start or stop the presentation of stimuli. They are used to control the displays, lights, and speakers.
Media files that are presented in action statements in one STEP
persist through that step and into subsequent ones until they are instructed to turn off, either by reaching the end of the file (for audio and video files that are set to play only once) or by including a corresponding action statement later in the protocol that turns the stimulus off.
Sides that are used in IMAGE
action statements should come from the SIDES
starting definition and should be matched to a display in the DISPLAYS
starting definition.
Sides that are used in VIDEO
action statements should come from the SIDES
starting definition and should be matched to a display in the DISPLAYS
starting definition. The audio component of the video will play through a correspondingly-named audio channel, if available (see explanation for both stereo and multichannel audio systems below).
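For example, an action statement presenting a video might look like this (a sketch; the side/tag ordering is an assumption based on the VIDEO <side> OFF form quoted on this page):

    VIDEO LEFT trialvideo ONCE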
ONCE
will play the file to the end, then remove the video from the screen (it is not necessary to use a VIDEO <side> OFF
statement to remove it, but doing so will not cause any problems). LOOP
will continue looping back to the start whenever the file ends, and continue replaying the file until the next VIDEO <side> OFF
statement.
As with video, ONCE
will play the file to the end, then stop the audio from presenting to the speaker(s) (it is not necessary to use an AUDIO <side> OFF
statement to remove it, but doing so will not cause any problems). LOOP
will continue looping back to the start whenever the file ends, and continue replaying the file until the next AUDIO <side> OFF
statement.
In a stereo audio system, channels are LEFT
and RIGHT
. Protocols run with stereo audio systems can also use the keywords CENTER
and STEREO
. Both of these commands will play audio simultaneously from the left and right speakers.
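For example (a sketch):

    AUDIO STEREO passage ONCE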
To make use of more than two speakers on a multichannel audio system, your protocols will require an AUDIO starting definition in order to provide names for the available audio channels. Any of these names can then be used in the AUDIO
action statement format.
It is important to note that while in a stereo 2.1 system, STEREO
and CENTER
mean the same thing, they are not assumed to be the same thing whenever a multichannel AUDIO
starting definition is provided. In this case, STEREO
is the only command that means to play from two speakers simultaneously (these will be the first two speakers defined in the AUDIO
starting definition, which are designated the front left and front right speakers on sound cards). CENTER
can be used as a name in the AUDIO
starting definition to mean a particular speaker, and if it is not present in the starting definition, it is not considered a valid channel name.
Using the above command, in a single action statement, ensures temporal synchrony in presenting the audio file across the left and right speakers. Two separate action statements playing tags to separate channels do not ensure complete synchrony. When playing a single tag to left and right, STEREO
(or CENTER
, for a 2.1 system) is therefore always preferable.
Whenever it is crucial that two different audio streams begin to play simultaneously, it is better to use audio files in which these streams have already been mixed together/placed into separate channels (as desired), and use a single action statement to play this file.
LINKED tags are defined earlier in the protocol, and consist of 2 or 3 member tags of different types (image, audio, or video).
Creating these tags via TYPEDLET
and LINKED
allows you to use action statements that refer to these tags in the same way across action statements of different types.
In protocols where they are suitable, LINKED
tags help cut down on the number of tag names to which you must directly refer.
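For example, with the LINKED tag dog sketched earlier, the same tag name could be used across action statements of different types (a sketch):

    IMAGE LEFT dog ON
    AUDIO CENTER dog ONCE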
BITTSy allows you to specify under what conditions to end one STEP
and move on to the next one. These step terminating conditions are used to control the experiment flow from step to step within your protocol.
Note that the UNTIL
statements that function as step terminating conditions are formatted identically to loop terminating conditions (covered in the Loops section), and many of the options overlap - but some conditions are exclusive to terminating steps or terminating loops, respectively!
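For example (a sketch with hypothetical tags):

    STEP 1
    IMAGE CENTER fixation ON
    STEP 2
    AUDIO CENTER passage ONCE
    UNTIL FINISHED

STEP 1 has no terminating condition, so the image appears and execution immediately continues to STEP 2.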
Sometimes you don't need to wait for any particular condition to be met before moving on to the next step. When a step has no terminating condition, execution proceeds to the next step immediately (STEP 1
to STEP 2
in the example above).
When you want your experiment to wait for something, that's when you'll use a step terminating condition. These are UNTIL
statements, and should be understood as "don't move on to the next step until the following conditions are true."
When a step has terminating conditions, these must be the last lines in that step. Once the specified terminating condition(s) are fulfilled, execution proceeds immediately to the next step. Therefore, the next thing you want to happen should be placed at the start of the step that follows.
With a KEY
terminating condition, execution waits at the current step until the experimenter presses the specified key. For a protocol using live-coding, these could be used to time the start of steps according to where the experimenter judges that the participant is looking - in this case, you should specify the same keys that you have assigned to sides for live coding. You may also wish to use keypresses to progress the experiment without tying it to participants' looking behavior. In this case, you can use any valid unassigned key.
Valid keys are alphanumeric, plus arrow keys (denoted by the names UP
, DOWN
, LEFT
, and RIGHT
). Number and arrow keys correspond to keys on the main keyboard, not the number pad. Letter keys should be capitalized.
This terminating condition allows the step to end after a specified amount of time. The time is counted from the point when the step starts. Remember, time is in milliseconds! (So 20 seconds would be listed as 20000).
This terminating condition checks whether all the media files (audio/video) that began to play in the current step have finished playing, and only allows execution to move on to the next step once they have ended.
UNTIL FINISHED
should not be used in steps where audio or video action statements make the file play on LOOP
rather than ONCE
- this will go very badly! See below for an alternative.
Below are two examples of how this could be used in a study with an audio familiarization passage that is presented with an unrelated video to help maintain the participant's attention.
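First (a sketch with hypothetical tags; both files start in the same step and both play ONCE):

    STEP 1
    VIDEO CENTER silentmovie ONCE
    AUDIO CENTER passage ONCE
    UNTIL FINISHED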
In the above example, the step would only end once both the video and familiarization passage had ended.
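Second (a sketch; because the video starts in an earlier step, UNTIL FINISHED in STEP 2 tracks only the audio):

    STEP 1
    VIDEO CENTER silentmovie LOOP
    STEP 2
    AUDIO CENTER passage ONCE
    UNTIL FINISHED
    STEP 3
    VIDEO CENTER OFF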
And in this example, STEP 2
would end when the audio file was finished playing, but would not wait for the video file to be finished (and thus instructs for this video to be turned off in the next step).
The first example would be suitable for a study in which the passage and the video were made to be the same length. The second example is better for use where the video file is a different length - either shorter and needing to be played on LOOP
, or longer and can be played ONCE
, but it is not necessary to wait for it to play all the way to the end before going to the next step.
Often you may want BITTSy to wait until the child does something - either the child has looked at something for a sufficient length of time, or has looked but then looked away for a certain length of time, etc. There are four basic types of looking behavior that can serve as a trigger: a single look of a particular length, a total amount of (cumulative) looking of a particular amount, a single look away of a particular length, or a total amount of time spent looking away. These are done with the keywords SINGLELOOK
, SINGLELOOKAWAY
, TOTALLOOK
, and TOTALLOOKAWAY
.
For all of these, you must also specify a tag - it must be clear what the child should be looking toward or away in order to be tracked for these terminating conditions.
This terminating condition is met whenever the experimenter records via keypress a single look in the direction where the given tag is being presented that exceeds the given time in milliseconds. The look must end (the experimenter presses a key that indicates a different direction, or the tag is no longer actively being presented) before the step will end and the next begin - the step is not cut off as soon as the threshold is met.
The tag given in this terminating condition can be of any type (denoting a particular file, a group, or a dynamic tag).
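For example (a sketch; the argument order of the look conditions is an assumption):

    LET trialaudio = (TAKE sounds RANDOM)
    AUDIO LEFT trialaudio LOOP
    UNTIL SINGLELOOK trialaudio 5000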
In this case, the terminating condition would be waiting for a look to be recorded by the experimenter that was to the left (as defined by the key assignments in the protocol), where the dynamic tag trialaudio
is being presented, that lasts for at least 5 seconds before the experimenter indicates the participant has looked in another direction (by pressing a key that has not been assigned to LEFT
).
This terminating condition is met whenever the experimenter records via keypress a single look in a direction where the given tag is NOT being presented that exceeds the given time in milliseconds. The look away does not have to switch to a new direction before the terminating condition is met; it will be cut off at the specified time.
Importantly, BITTSy's timer on looks starts when the tag begins to be active/displayed, so if the participant starts in a state of looking away from the tag, they could meet this terminating condition without ever having a look that was in the direction of the tag. If you would like to require this (a look but then a look away), you should set up the preceding step such that they must start in the correct direction (as is typically the case in HPP studies).
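For example (a sketch; the LIGHT statement form and the look-condition argument order are assumptions):

    STEP 1
    LIGHT LEFT BLINK
    UNTIL KEY L
    STEP 2
    AUDIO LEFT trialaudio LOOP
    UNTIL SINGLELOOKAWAY trialaudio 2000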
In the case above, where the key L is assigned to LEFT
, the left light would blink until it got the participant's attention and the experimenter indicated they had turned to the left. Then, the audio would begin playing. Because the step could only start when the participant is looking left, there must be a change in their looking (that lasts for at least 2 seconds without looking back to the left) before the step can end.
"Away" in this case is defined as any direction that is not the direction where the tag is being presented, not solely AWAY
. That is, if the tag is being presented LEFT
, SINGLELOOKAWAY
could be satisfied by a look started by pressing any key other than the one assigned to LEFT, whether it denotes another side, AWAY,
or is completely unassigned. BITTSy assumes that anything that isn't LEFT
is away from LEFT
!
This terminating condition is met whenever the participant accumulates the specified amount of time looking in the direction of the given tag during the current step.
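For example (a sketch):

    AUDIO LEFT trialaudio LOOP
    UNTIL TOTALLOOK trialaudio 20000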
In the above example, the step continues (and the audio continues to loop) until the participant looks towards it for a total of at least 20 seconds. This could occur with a single look that lasts longer than 20 seconds, or could accumulate across any number of shorter looks until the total reaches this threshold. Each look in the given direction must end (the experimenter presses a key that is not assigned to that direction) before being added to the running total and checked for whether it exceeds the threshold; the step will not end while the participant is still looking toward the given tag.
This terminating condition is met whenever the participant accumulates the specified amount of time looking away from the direction of the given tag during the current step.
As covered above for SINGLELOOKAWAY
, any keypress that is not associated with the side the tag is being presented on is treated as "away" from that tag.
For steps that can end one of two ways, the step terminating conditions should be listed on two separate lines, one after the other.
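For example (a sketch):

    AUDIO LEFT trialaudio ONCE
    UNTIL FINISHED
    UNTIL SINGLELOOKAWAY trialaudio 2000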
In the example above, the step ends when either the audio file runs out OR the participant looks away from that direction for at least two seconds - whichever happens first.
Terminating conditions are checked sequentially. If there is one that is "held up" and cannot currently be evaluated, this prevents any of the others from being checked in the interim. In particular, SINGLELOOK and TOTALLOOK cannot be evaluated while a look to that stimulus is still in progress. Another terminating condition can be met while the look is still in progress, but it cannot be checked until later, which means that the in-progress look cannot be cut off by ending a step via an alternate terminating condition.
For steps that must meet multiple conditions before ending, the terminating conditions are put on one line and joined by "and".
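For example (a sketch; the lowercase "and" follows the description above, but check the example protocols for exact casing):

    UNTIL TIME 25000 and KEY X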
In the above example, the step would have to last for at least 25 seconds and the experimenter would have had to press a key, either anytime before the 25-second mark (in which case the step would end at 25 seconds), or after the 25 seconds have elapsed (in which case it would continue for however long until the key was pressed). A condition like this would ensure that a minimum time is always met, but allow an experimenter to draw out the step for longer until a particular cue comes (e.g. when waiting for a certain behavior).
You can use both of the above combination types on the same step, for even more nuanced end conditions for steps.
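For example (a sketch):

    UNTIL TIME 5000 and KEY X
    UNTIL TIME 25000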
In the above example, the step must last at least 5 seconds and a maximum of 25 seconds. If the experimenter presses the indicated key anytime before the maximum time, the step can end immediately when the first condition is satisfied; otherwise, it ends when the second condition, reaching 25 seconds, is met. A combination of conditions like this ensures minimum and maximum times for the step, but allows the experimenter to indicate whether the step should end earlier than the maximum time.
Experiments done with infants and toddlers usually consist of a series of trials, in which the stimuli used or side of presentation may vary from trial to trial, but the core structure of every trial is the same. Rather than writing a long protocol file that separately specifies every trial with a unique set of steps, you can specify the trial structure once, then use a loop to repeat it.
A LOOP statement will cause the steps in between the specified step number and the step in which the LOOP
statement is defined to be executed. This occurs repeatedly, until the loop statement's terminating conditions have been satisfied.
The following example is the entire test phase of a preferential looking study with trials shown in a fixed order. The entire study runs via looping over a set of 3 steps that comprise an attention getter and a trial.
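A sketch of such a phase (hypothetical tag names and key assignment; the LOOP and TIMES keyword placement is an assumption):

    STEP 1
    PHASE test START
    STEP 2
    VIDEO CENTER attentiongetter LOOP
    UNTIL KEY C
    STEP 3
    VIDEO CENTER OFF
    LET trialvideo = (TAKE testvideos FIRST)
    TRIAL START
    VIDEO CENTER trialvideo ONCE
    UNTIL FINISHED
    STEP 4
    TRIAL END
    STEP 5
    LOOP STEP 2
    UNTIL 23 TIMES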
When running the protocol, we begin, as always, at STEP 1
. The test phase starts, and the first attention getter video plays until the experimenter indicates the participant is ready for a trial. The first trial stimulus is chosen from the list of stimuli, displayed, and played to the end. The loop step (the step that contains the LOOP
statement) is STEP 5
. When it is reached, execution jumps back to STEP 2
, and proceeds again through steps 3 and 4, and onto 5. Upon reaching the LOOP
statement again, it again jumps back to 2, and proceeds through the steps. This repeats a total of 23 times, displaying a total of 24 attention-getters and trials (the one original execution of steps 2-4, plus the 23 times it looped over that sequence).
Loops can be nested within each other, allowing for concise specification of experiments with a block structure. Below is the test phase of a headturn preference study that consists of 3 blocks. Each block is comprised of 4 trials, for a total of 12 trials. Before each trial, the participant is oriented to a neutral position (center), and then the trial begins once they reorient in the direction where the light has begun flashing and from which the audio will play.
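A sketch of the skeleton (the centering and trial statements within steps 11 through 15 are abridged here as "..."; keyword placements are assumptions):

    STEP 9
    PHASE test START
    STEP 10
    LET block = (TAKE blocks RANDOM)
    STEP 11
    ...
    STEP 15
    ...
    STEP 16
    LOOP STEP 11
    UNTIL 3 TIMES
    STEP 17
    LOOP STEP 10
    UNTIL 2 TIMES
    STEP 18
    PHASE END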
Steps 11-15 specify a sequence of centering the participant and playing a trial, which is a unit that repeats throughout the experiment. These steps are looped over in STEP 16
, which specifies to execute this sequence three additional times so that the first block contains four trials. Once the first block is finished, the LOOP
statement in STEP 17
is reached, which has it jump back to STEP 10
to select the next block of stimuli to present. The inner loop in STEP 16
is executed just as before, playing the four trials in this block also. STEP 17
is reached again, and execution jumps back to STEP 10
to select the third block, and the nested loops work in the same way to play four trials here. Now that STEP 17
has had its loop repeated the required two times, we move to STEP 18
and the phase ends.
Be careful with trial/phase start and stop flags, in relation to where loops are placed. In the above example, STEP 9
contains only the flag to start the test phase, and we loop to STEP 10
to repeat a block. If we had started the test phase in the step that was looped to, we would repeat it at every block - and we want the phase to only start once. Therefore, it should be outside the loop.
Just like step terminating conditions, loop terminating conditions can be combined together for more complex end conditions. They can be joined by an AND, such that both conditions must be met:
Or by an OR, such that only one condition need be met before ending the loop.
This loops back to the specified step, executes all the intervening steps in order, and loops again until it has completed the entire loop the specified number of times. For example,
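(a sketch with hypothetical tags; the IMAGE statement form is an assumption):

    STEP 1
    LET dog = (FROM dogs RANDOM)
    IMAGE CENTER dog ON
    UNTIL TIME 5000
    STEP 2
    LOOP STEP 1
    UNTIL 5 TIMES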
The above loop would execute STEP 1
five times, in addition to the first execution before the loop step was reached. Therefore, we would select and display an image from the group dogs
a total of 6 times.
Rather than specifying a set number of times to loop, you can also loop until a particular group has no remaining options for selection.
For example,
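(a sketch, assuming dogs contains six tags; the exact keyword for an empty-group condition is an assumption here):

    STEP 1
    LET dog = (TAKE dogs RANDOM)
    IMAGE CENTER dog ON
    UNTIL TIME 5000
    STEP 2
    LOOP STEP 1
    UNTIL dogs EMPTY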
The above loop would run until all the tags in dogs
have been marked as selected, and none are eligible to be chosen by the TAKE
statement - in this case, the loop would repeat 5 times, in addition to the first selection, for a total of 6 items displayed.
It is also possible to execute a loop until at least a certain amount of time has elapsed since the looping began.
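For example (a sketch):

    STEP 1
    LET dog = (FROM dogs RANDOM)
    IMAGE CENTER dog ON
    UNTIL TIME 5000
    STEP 2
    LOOP STEP 1
    UNTIL TIME 55000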
In the above example, images are selected randomly (with replacement) and displayed for 5 seconds each, with the section looped for a total of 55 seconds. Importantly, the timer for the loop begins when the UNTIL
statement is reached for the first time - which is after the first tag from dogs
is selected and displayed for 5 seconds. Therefore, images of dogs are displayed for a total of 60 seconds.
Like all loop termination conditions, UNTIL TIME
conditions are only evaluated when execution reaches the loop step (STEP 2
, in the above example) again. If the specified time is met in the middle of a loop sequence, the current run of the loop will be allowed to finish before the terminating condition is met and execution continues to later steps in the protocol file.
As an illustration of the above point: If each image in the example above were shown for 6 seconds rather than 5, the 55-second timer would be met 1 second into displaying the 10th image (9th from within the loop). This last image would continue to be displayed for its full 6 seconds, then the timer would be checked upon reaching STEP 2
. Therefore we would be displaying images for a total of 66 seconds (6 from the initial execution of STEP 1
+ 55 seconds of the loop + 5 seconds to finish the in-progress loop sequence).
As of BITTSy version 1.33, you can also loop until a particular key is currently active (i.e., it is the most recent key the experimenter has pressed).
It is important again to note that loop terminating conditions are evaluated only when execution reaches the loop step. Therefore, if the loop was set to run UNTIL KEY X
, and the experimenter pressed X followed by another key while the loop was still executing, the loop would not end: when the terminating condition was checked, some other key was the most recent key.
This terminating condition is useful in cases in which you wish to use a loop to run through a set of trials until an experimenter makes some judgment that the child is "done." For example, this could be used for an experimenter to end a phase if some more desirable terminating condition (e.g. accumulated looking time) cannot be met, while still allowing the child to continue to participate in the subsequent parts of a study. It can also be used for studies in which the variable of interest is how many trials are shown before the child makes some overt behavioral response, such as imitating a person on a screen, pointing, or saying an answer.
Often we want to repeat the presentation of a stimulus or set of stimuli until the child has reached a particular level of attention toward the stimuli - either they've looked a set amount, or they've reached a particular point of boredom or inattention. This can be achieved by setting up these trials within a loop, and looping until some looking-controlled end condition is met.
This loop terminating condition is met when the time a participant spends looking toward a tag, totaled across all its presentations in a given phase of the experiment, meets the given threshold.
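For example (a sketch with hypothetical tags, joining two loop conditions with "and" as described above):

    LET trainingsounds = {music1, music2}

    STEP 1
    LET music = (FROM trainingsounds RANDOM {with 0 repeats in succession})
    AUDIO CENTER music ONCE
    UNTIL FINISHED
    UNTIL SINGLELOOKAWAY music 2000
    STEP 2
    LOOP STEP 1
    UNTIL TOTALLOOK music1 20000 and TOTALLOOK music2 20000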
The example above is a common one used in HPP studies, where two training stimuli alternate until both reach a fixed criterion. This setup has at times been criticized: if one of the two music files reaches criterion and the other does not, the loop will continue - and it may keep playing the stimulus that had already reached criterion rather than the one that still needs to reach it. This could result in over-familiarizing (or habituating) the child to one of the two stimuli before the other one reaches criterion. The problem can be solved with JUMP statements - see the example protocols for a solution to this exact issue.
This works identically to TOTALLOOK
, but rather than totaling up time when an experimenter indicated the participant was looking at the given tag as it was presented, uses the time spent not looking toward the tag. Time when the tag was not active (not being displayed/played) does not count as time looking away from that tag; only time when the child could have been looking at it but was recorded as looking some other way (with any key that was not assigned to the tag's current side) is counted.
See the section on habituation for more about the CRITERIONMET
condition.
All habituation settings are defined in the same section of the protocol as other optional settings, such as key assignments. These settings are designed to allow experimenters to define and control habituation experiments in BITTSy similarly to established habituation procedures.
This defines the number of trials which constitute a window. The same size is used when evaluating potential basis windows and criterion windows.
There is no default value for window size. This setting must be specified in the protocol of any habituation procedure that uses basis and criterion windows.
This setting defines whether windows for evaluation as possible basis/criterion windows should overlap with each other. SLIDING allows overlapping windows, while FIXED does not. For example, with a WINDOWSIZE of 3 and the window type FIXED, trial sets 1-3, 4-6, 7-9, etc. will be evaluated; if SLIDING, trial sets 1-3, 2-4, 3-5, 4-6, 5-7, etc. will all be evaluated.
SLIDING is the default setting if the WINDOWTYPE is not defined within a protocol.
This setting defines whether the basis window and a criterion window are allowed to overlap. This setting is important whenever WINDOWTYPE is SLIDING; it is irrelevant for FIXED windows.
If this setting is YES to allow overlap, and the WINDOWSIZE was 4, you could have a basis window of trials 1-4 and trials 2-5, 3-6, 4-7, etc. would all be evaluated as possible criterion windows. But if WINDOWOVERLAP was set as NO, the first window to be evaluated as a potential criterion window would be trials 5-8. All prior windows would be disallowed because they would include trial 4, which was part of the basis window.
YES is the default if WINDOWOVERLAP is not defined in the protocol.
This setting defines which window is chosen as the basis window: either the first window (FIRST) or the window with the longest total looking time (LONGEST).
LONGEST is the default if BASISCHOSEN is not specified.
This setting allows you to set a minimum total looking time for a basis window. This can help prevent a child who is fussy at the start of an experiment from reaching a habituation criterion without having met a minimal requirement for looking time.
Zero milliseconds is the default, i.e. no minimum.
This value is a multiplier between 0 and 1, exclusive. It can be expressed as a decimal with or without a leading zero.
An error is thrown if CRITERIONMET is used as a terminating condition without a definition of CRITERIONREDUCTION being present in the protocol.
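Putting these settings together, the opening of a habituation protocol might include lines like these (a sketch; the placement of each value is an assumption):

    WINDOWSIZE 3
    WINDOWTYPE SLIDING
    WINDOWOVERLAP YES
    BASISCHOSEN LONGEST
    CRITERIONREDUCTION 0.5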
You might decide that you want to exclude particular trials from calculation of habituation. For example, you might wish to exclude any trial that has no looking at all. Or you might want to allow the experimenter to exclude individual trials at the time (say, if they felt the child was distracted by something in the room, such that the trial was not an accurate measure).
This can be done, and it is part of the step terminating conditions used within a trial. You have already learned how to denote end conditions that are "successful" and should let the trial be counted for habituation calculations - these are simply UNTIL statements. "Unsuccessful" end conditions are the same kinds of conditions - just marked with an UNSUCCESSFUL flag.
For example, a trial could be set up to play a file audiofile until one of the following occurs:
You reach the end of the audio file
A look away is logged that exceeds 2 seconds (when the trial is started only when the experimenter judges the child is already oriented in the active direction, this requires that a look toward the stimulus has already been logged)
10 seconds have passed since the trial started and the child has not yet looked at all
The experimenter presses a designated key (X) to stop the trial
As with other terminating conditions, these are treated as "whichever comes first." So, if the third or fourth condition were met first, the trial would be unsuccessful. The trial would end at that point, and would not be counted as a trial for the purposes of habituation calculations, which depend on successful trials. (More precisely, this means that any window containing this trial would not be evaluated as either a potential basis window or a criterion window.) If condition 1 or 2 were met instead, the trial would end and would be counted.
Because terminating conditions are checked sequentially, if there is one that is "held up" and cannot currently be evaluated, none of the others are evaluated yet either. In particular, SINGLELOOK and TOTALLOOK conditions cannot be evaluated while a look to that stimulus is still in progress. They will be evaluated as soon as the experimenter records that the look has ended, or the stimulus stops being presented, and then any alternate terminating conditions listed after them can be checked.
Marking a trial as unsuccessful only excludes the trial from habituation calculations. It does not exclude it from being counted as one of a set number of trials, as in an alternate loop terminating condition. For example, if your phase is set up as
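(a sketch, with the two loop conditions listed as either/or alternatives on separate lines:)

    LOOP STEP 5
    UNTIL CRITERIONMET
    UNTIL 20 TIMES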
it repeats until either the criterion is met or the loop occurs 20 times - marking a trial as unsuccessful doesn't change the looping.
If you want to replace a no-look trial altogether, you could nest the trial within another loop that repeats it until a successful end condition is met. The inner loop would then run the trial twice if its condition wasn't met the first time, while the outer loop would still run the same number of times - getting you an extra trial for each trial that was not successful.
It is also worth noting that in the syntax above, not only does the trial not count if the child hadn't looked within 10 seconds, but it also ends at that point - that is, it does not continue to play the rest of the sound file. You could write the conditions so that the trial finishes playing anyway, but that would mean that any looks occurring after the 10-second mark would still be ignored in the habituation calculations, and yet would presumably influence the child nonetheless. We instead recommend that if you are ignoring a trial for non-looking, the trial should end at whatever point you make that decision.
This section features walkthroughs of some common study paradigms. Check the resources page for the full protocols and stimuli sets if you would like to try running these example protocols or adapt them for your own studies.
This setting specifies the looking time reduction, compared to the basis window, that is required in order to say that the participant has habituated.
In any live-coded experiment, including habituation, it is sometimes desirable to specify a minimum length for a look toward a stimulus, or away from it, before it "counts." This can be controlled with the COMPLETELOOK and COMPLETELOOKAWAY settings. See the optional experiment settings page for more information.
What if you want the whole file to play, and the trial to be marked unsuccessful only if the child never looks at all? For that, recall that the COMPLETELOOK setting defines the minimum look length that counts for looking time calculations. This is the smallest look that can be logged, so "never looking at all" means that no look of at least the COMPLETELOOK value was recorded.
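A sketch of this arrangement is below. Because terminating conditions are checked sequentially, the successful end-of-file condition is listed before the unsuccessful one; the way the "file finished with at least one look" condition is combined here is an assumption - consult the terminating conditions documentation for the exact syntax.

Trial Start
AUDIO CENTER audiofile ONCE
# successful: the child looked and then looked away for 2 seconds
UNTIL SINGLELOOKAWAY CENTER 2000
# successful: file ended after at least one complete look was logged (assuming COMPLETELOOK is 100 ms)
UNTIL FINISHED AND TOTALLOOK CENTER 100
# unsuccessful: experimenter stopped the trial, or the file ended with no looks logged
UNSUCCESSFUL UNTIL KEY X
UNSUCCESSFUL UNTIL FINISHED

Here, the trial ends and is marked as successful if the sound file ends and there were any looks during that time, or if the child looked and then looks away for 2 seconds. It is unsuccessful if the experimenter pressed the X key, or if there weren't any looks at all by the time the audio file ends.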
Habituation phases are generally set up as a trial (or set of trials) occurring within a loop. CRITERIONMET is a special loop terminating condition for habituation that is based on looking-time reduction from a basis window. How the basis window is chosen, and what reduction in looking time it takes to consider the child to be habituated, are defined at the beginning of a protocol as habituation criteria. The CRITERIONMET terminating condition, whenever it is evaluated, checks whether these habituation conditions have been satisfied.
Below is an example of the basic structure of a habituation protocol.
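A minimal sketch of that structure (the habituation criteria values, step numbers, and the habit_sound tag are illustrative):

# habituation criteria, defined before any STEPs
CRITERIONREDUCTION .5
BASISCHOSEN LONGEST

STEP 1
Phase Habituation Start

STEP 2
Trial Start
AUDIO CENTER habit_sound ONCE
UNTIL FINISHED
UNTIL SINGLELOOKAWAY CENTER 2000

STEP 3
Trial End
LOOP STEP 2
UNTIL CRITERIONMET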
Using the habituation settings above, here is an example of how the basis window and criterion would be calculated by BITTSy across trials.
In this example, note that the target criterion window time (i.e. the maximum time a participant can look during a window and be considered to have met habituation) can change when a new basis window is identified, when BASISCHOSEN is set to LONGEST. The first window in which the child's total looking time is less than the current target criterion time is the window of trials 5+6. After trial 6, when this window is evaluated, CRITERIONMET is true, the loop terminating condition is met, and the loop will no longer repeat. The habituation phase will end, and execution will move on to the next phase of the protocol.
It is important to note that, like all loop terminating conditions, CRITERIONMET is only checked when the loop step is reached - that is, after each time the contents of the loop have been fully executed (see loop terminating conditions for an expanded explanation.) This means that if the loop contained two trial starts and trial ends, CRITERIONMET would only be evaluated after even-numbered trials. This does not prevent BITTSy from evaluating and identifying criterion windows based on the habituation settings in the protocol. But it does mean, in this case, if a criterion window was identified that ended with an odd-numbered trial, one additional habituation trial would be run before the loop would end. For this reason, it is generally recommended to only define one trial within a habituation loop, and if stimuli vary across trials, to define their cycling via selection from groups. (See the example protocols page for examples of setting up simple and more complex habituation phases.)
Many times, you want not only a terminating condition for when the participant habituates, but also a point at which the phase would end even if habituation is not reached. For example, you might want to end a phase either when the child habituates, or after 20 trials, whichever comes first.
In this case, you essentially set up two potential endings for the phase:
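In protocol form (again assuming the habituation trial begins at STEP 2):

LOOP STEP 2
UNTIL CRITERIONMET
UNTIL 19 TIMES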
In a loop clause, if you have two different UNTIL statements, with a carriage return between them, they are treated as if they are linked by an OR condition (see this section for all available options). This would mean it would end either if the criterion was met or if it had already looped through 19 times (after the first one, so UNTIL 19 TIMES means there would be 20 times total).
Whichever terminating condition is met first, execution will progress to the next step: it is not possible to skip to a different step or phase based on which condition is met. Generally, this means that the post-habituation test phase of a typical habituation experiment would be shown to all participants, whether or not they habituated. When generating a habituation report from the event log of a test session, it is marked whether the participant habituated (i.e. met the CRITERIONMET condition rather than the alternate one) so that participants who did not habituate can be recorded as such, and excluded from analyses of looking time in the test phase.
This protocol is based on Newman & Morini (2017), cited below. This study focuses on toddlers' ability to recognize known words when another talker is speaking in the background. The target talker is always female, but the background talker is sometimes male, and sometimes another female talker. When there is a big difference in the fundamental frequency of two voices (as there is between the target female talker and the background male talker in this study), adults will readily use this cue to aid in segregating the two speech signals and following the target talker. When the fundamental frequencies of the two talkers are similar (as in the female background talker condition), the task is more difficult. This study asks whether toddlers also take advantage of a fundamental frequency cue when it is present, and demonstrate better word recognition when the background talker is male.
Newman, R.S. & Morini, G. (2017). Effect of the relationship between target and masker sex on infants' recognition of speech. Journal of the Acoustical Society of America, 141(2), EL164-EL169.
This study presents videos showing four pairs of objects. Each object pair is presented on 5 trials, for a total of 20 trials. One of the five trials for each pair is a baseline trial, in which the speaker talks generically about an object but does not name either one ("Look at that!"). In the other four trials per pair, the target talker names one of the objects. On half of the trials the target object is on the left side of the screen, and on half it is on the right. All trials, including baseline trials, are presented with the target speaker's audio mixed with either a male or female background talker.
Although only 20 test videos are presented to each participant, far more combinations of object pair, target object, target position, and background talker are possible. In this example, we'll demonstrate how to set up this study in several different ways. First, we'll set up the whole protocol with a pre-selected set of 20 videos that satisfy the balancing requirements above, and present all of them in a random order. Next, we'll talk about what we would change in that protocol to present them in a fixed order, and create multiple fixed-order study versions. Lastly, we'll talk about how to use selection from groups and pseudorandomization in BITTSy to select and present appropriate subsets of the possible stimuli for different participants.
The opening section of a protocol, before any STEPs are defined, typically takes up the bulk of your protocol file. This is especially the case in preferential looking studies, in which there are often large sets of video stimuli to define as tags and arrange in groups. While this part may seem to take a while, by the time you've done this, you're almost done!
Tip for setting up tags and groups in your own studies: Copy+Paste and Find+Replace tools in text editors like Notepad and Notepad++ are your friends! When adapting an existing protocol to reference a different set of stimuli, use find+replace-all to change the file path (up to the file name) on ALL of your stimuli at once to point to your new study folder. When adding more tags to reference more stimulus files, copy+paste another tag definition, then go back and fix the tag names and file names to be your new ones.
The first lines in any protocol are your starting definitions. Here, we will only use one central TV display. We'll name it CENTER.
Before creating this protocol, we pre-selected 20 videos that fit the balancing requirements for the study, such as having an equal number of trials with the female background talker as with the male background talker. We happen to have more possible videos for this study, but a study with fewer factors or more trials, that displays all stimuli to every participant, could be constructed exactly like this.
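For example, a few of these tag definitions might look like this (file names and paths are hypothetical, following the naming convention described later - left object, right object, background talker, target word):

LET ball_duck_f_ball = "C:\BITTSy\stimuli\ball_duck_f_ball.mp4"
LET ball_duck_f_duck = "C:\BITTSy\stimuli\ball_duck_f_duck.mp4"
LET ball_duck_m_generic = "C:\BITTSy\stimuli\ball_duck_m_generic.mp4"
# ...and so on for the rest of the 20 pre-selected videos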
We have one more tag to define, which is the attention-getter video we'll present before starting each trial.
Our lab prefers to consistently put trial start flags in a STEP all by themselves, just so it is easier to visually scan a protocol file and find where a trial is defined. But this is not necessary. This trial start flag could equivalently be the last line of the previous STEP, or in the next STEP, preceding the line that plays the video.
Now the trial ends. This has to be a new STEP, because UNTIL statements are always the last line of the STEP that contains them, but this happens immediately once the terminating condition (the video ending) is met.
Even though we know the video has finished playing by the start of this STEP, we still have included a statement to explicitly turn off the video now. When videos finish playing, the processes that BITTSy uses to play them stay active. A "garbage collector" will take care of this so that videos that are done stop taking up any resources, but this is not immediate. Explicitly turning videos OFF frees up these resources right away. It is not required, but ensures best performance, particularly if your computer is under-powered or if a lot of programs are running in the background. However, if you do include OFF commands in this type of case, you should always turn off trial stimuli after the Trial End command. This is not crucial in this study, but for studies that are live-coded, in-progress looks are logged when the Trial End line is executed. If the stimulus is explicitly turned OFF before the end of the trial, BITTSy may not be able to associate the in-progress look with the stimulus that was removed.
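In protocol form, that ordering looks like:

STEP 6
Trial End
VIDEO CENTER OFF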
Lastly, since we defined a phase start, we'll need to end it.
What if you don't want trial videos to be presented totally randomly? Or what if you don't want to pre-define a set of videos to present, but rather pull different sets of videos for each participant? Below are some examples of how you might set up your protocol differently for different levels of control over stimuli selection and presentation order.
You might wish to define a fixed trial order, where every time the protocol is run, participants see the same stimuli in the same order. This requires minimal changes to the example protocol in the previous section.
Making a fixed-order version of this study would also involve pre-selecting the 20 trial videos that will be shown. You can define just these videos in your tags section, or you can define all of them - it doesn't matter if some are unused in your experiment. Parsing your protocol will take a little longer if you have extra tags defined, but this step is typically done while setting up for a study rather than when the participant is ready, and does not present any issues for running study sessions.
Crucially for a fixed-order protocol, you will define your trial_videos group to list tags in the exact order in which you want them to appear in the study.
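For example (hypothetical tag names, listed in the exact presentation order you want):

LET trial_videos = {vid01, vid02, vid03, vid04, vid05, vid06, vid07, vid08, vid09, vid10, vid11, vid12, vid13, vid14, vid15, vid16, vid17, vid18, vid19, vid20}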
All of the rest of the protocol would be the same!
If you wished to define multiple fixed-order versions of the study, you could simply save additional copies of the protocol file in which you change the ordering of tags in the definition of trial_videos, or swap in different tags that you have defined. This would be the only change necessary to make the additional versions.
You might not want a pre-defined set of 20 trial videos at all, given that with the different stimulus types and conditions in our study, we actually have 48 possible videos. You might also want to pseudorandomize when trial types are selected - for example, participants may get bored more quickly if they see the same object pair for several trials in a row, so we might want to keep the same pair from being randomly selected too many times back-to-back.
We'll be defining all the possible video files and tags for this version of the study, because we won't specify in advance which ones the participant will see. First, our baseline trials. We have 4 object pairs, 2 left-right arrangements of each pair, and 2 background talkers, for a total of 16 possible baseline videos.
Now our trial videos. We have even more of these, because now the target word said by the main talker could be either of the two objects - a total of 32 videos & tags.
The last tag to define is our attention-getter video.
Defining the groups in this protocol is the really critical step for allowing us to control how stimuli are later balanced - how many of each target pair we show, how many times the named object is on the left or the right, how many trials have a male background talker vs. female.
We will construct a nested structure of groups. It is often helpful to work backwards and plan your group structure from the top down - that is, from the highest-level group that you will select from first in your protocol, to the lowest-level group that directly contains the tags referencing your stimuli files. We'll end up with 3 levels of groups for this protocol.
Your highest-level group's components should be defined around the characteristic that you want the most control over for pseudorandomization. Here, we'll make that our object pairs. Our highest-level group, trial_videos, will contain groups for each of the four object pairs.
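For example (pair names hypothetical):

LET trial_videos = {ball_duck, cat_dog, shoe_cup, key_sock}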
If your study has more than one factor that you crucially need to control with pseudorandomization, you may find constructing multiple fixed-order protocols that follow your restrictions to be the most practical solution.
Now let's define one of these groups for an object pair. We're going to have five trials per object pair. One of these has to be a baseline trial. For the remaining four, we'll stipulate that they are:
a video with a female background talker where the object named by the target talker is on the right
a video with a female background talker where the object named by the target talker is on the left
a video with a male background talker where the object named by the target talker is on the right
a video with a male background talker where the object named by the target talker is on the left
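Concretely, with the hypothetical naming above, the mid-level group for one object pair might be:

LET ball_duck = {ball_duck_baseline, ball_duck_f_right, ball_duck_f_left, ball_duck_m_right, ball_duck_m_left}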
This example balances some factors across trials, but not others. For example, we do not balance how many times one object is named versus the other. We could just as easily balance this in place of left-right answers, and define those groups accordingly. These decisions about what to keep balanced and what to leave random are the crux of setting up your group structure!
Now we'll define these component groups, which will contain our video tags. First, the baseline group. There were four baseline videos for each object pair.
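For instance (hypothetical names; "generic" marks the baseline audio, and the first two name components give the left and right objects):

LET ball_duck_baseline = {ball_duck_f_generic, ball_duck_m_generic, duck_ball_f_generic, duck_ball_m_generic}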
For the other four groups, we defined two videos that match each trial type.
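For example, the group for "female background talker, target on the right" contains the two videos in which the named object appears on the right:

LET ball_duck_f_right = {ball_duck_f_duck, duck_ball_f_ball}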
We'll define the nested group structure for our other object pairs the same way.
Once you've defined one set, the rest is quick and easy: copy it over to a blank document, use find+replace to swap out the object names, and paste the new groups back into your protocol!
Now that we've defined our group structure, implementing the stimulus selection is really easy. Our protocol starts off just like the original version with playing the attention-getter video before every trial.
In our original setup with the pre-determined stimulus set, we then started a trial, picked a video, and displayed it. In this one, we have multiple selection steps, and BITTSy takes a (very, very small) bit of time to execute each one. Given that we have several, we might prefer to place these before our trial starts, just to ensure that there's never a gap between when BITTSy marks the beginning of a trial and when the video actually starts playing. (If you are not relying on BITTSy logs for trial timing and instead use cues in your participant video, such as lighting changes in the room or an image of your display in a mirror, this doesn't matter at all!)
Lastly, we need to pick a particular video tag from type. We will do this randomly. It doesn't matter whether we define this selection with FROM or TAKE because, per the previous selection where we chose type, we can't get this same subgroup again later.
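Putting the three selection layers together, the selections for one trial might look like this sketch (the exact way of combining two randomization clauses is an assumption - see the documentation on randomization clauses):

LET pair = (FROM trial_videos RANDOM {with max 4 repeats and max 2 repeats in succession})
LET type = (TAKE pair RANDOM)
LET vid = (FROM type RANDOM)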
Now that we have the trial video we want to display, we can run a trial.
Just like in the original example version of this protocol, we'll loop over the steps that define an attention-getter + a trial to display a total of 20 trials. Then, our experiment is done.
This protocol is based on a commonly-given example of habituation rather than a particular study. The question is whether young infants can recognize that objects of the same kind, each with its own unique appearance, all belong to the same category. Can infants recognize a pattern in the images that are being presented - that they are all cats, or that they are all dogs? And when an image is presented that does not belong to the category, do they notice that it is different?
As infants recognize the pattern or "rule" in the habituation phase (that all the images are the same kind of animal) we expect them to start to pay less attention to the individual images. This decrement in looking time across trials is the principle of habituation. But what really demonstrates whether they have learned the pattern is whether they show an increase in looking time when presented with something outside of the category, relative to another novel example of the trained category.
Here, we'll habituate infants to either instances of cats or instances of dogs. Later, to both groups, we'll present one new image each of a dog and a cat. We might think that breeds of dogs are more visually dissimilar from each other than breeds of cats, and we might expect that infants who are habituated to cats will have greater success in detecting which test phase image doesn't belong to their learned category. They may be more likely to recognize that the dog presented in the test phase is something different and more interesting than another cat image, while infants who are habituated to dogs may be less likely to recognize the cat in the test phase as particularly novel.
At the very beginning and very end of our experiment, we'll have pre-test and post-test trials. These help us see whether the participant is as actively engaged at the end of the experiment as they were at the beginning. It lets us differentiate children who didn't recognize the category switch but were still paying attention in general (looking longer at the post-test stimulus, even if they didn't look long during the test phase) from children who were simply bored or inattentive (looking very little at the post-test stimulus).
We'll begin with our starting definitions. In this protocol, we'll only use one central display, and code looks towards/away that monitor.
We have a lot of tags to define for this protocol, because we'll be using a lot of image files. We'll have a maximum of twenty images displayed of either cats or dogs in the habituation phase of the experiment, plus one of each for test, and two unrelated images for a pre-test and post-test (these should be consistently really interesting to infants - but as a placeholder, we'll use turtles). We'll also show an attention-getter video in between trials.
This file definition section is a very large portion of our protocol. We've abbreviated it below.
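The definitions follow this pattern (file names and paths hypothetical):

LET cat1 = "C:\BITTSy\stimuli\cat1.jpg"
LET cat2 = "C:\BITTSy\stimuli\cat2.jpg"
# ...cat3 through cat20, and dog1 through dog20, defined the same way
LET cat_test = "C:\BITTSy\stimuli\cat_test.jpg"
LET dog_test = "C:\BITTSy\stimuli\dog_test.jpg"
LET turtle1 = "C:\BITTSy\stimuli\turtle1.jpg"
LET turtle2 = "C:\BITTSy\stimuli\turtle2.jpg"
LET attentiongetter = "C:\BITTSy\stimuli\attentiongetter.mp4"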
We'll have two conditions that participants are randomly assigned to - they will either be habituated to the group cats, or the group dogs. After they are habituated to that group, they will see the images dog_test and cat_test. In this protocol, we'll have infants always see the image that wasn't from the category they were habituated to as the first image of the test phase, then the other example of the habituated category on their second test trial. Let's create groups that list what they'll see in each of these possible conditions, in order of presentation.
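Following that ordering (novel-category test item first), the condition groups would be:

LET habitcats = {cats, dog_test, cat_test}
LET habitdogs = {dogs, cat_test, dog_test}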
Lastly, so that we can randomly select a condition for the participant at the start of a study session, we'll make a group that contains both.
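That group (name hypothetical) is simply:

LET conditions = {habitcats, habitdogs}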
Now, our experiment begins. First, we'll have a pre-test phase, where we'll show something unrelated to our habituation and test images. Here we're using two similar images for pre- and post-test and will select them randomly from the prepost group, but it is common to use the same image for both.
Like all other trials in our study, we'll play an attention-getter video beforehand, and only start the trial once the experimenter indicates the child is looking at the screen. Then, the experimenter will continue to code the child's looks toward and away from the image being displayed. If the child has a single look away from the screen that lasts at least 2 seconds, or if the image has been displayed for a maximum length of 20 seconds, the trial will end.
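One way to write this pre-test sequence is sketched below. The step numbering and the IMAGE action statement form are assumptions; the 2-second look-away and 20-second maximum come from the description above, and selecting with TAKE ensures the post-test later receives the remaining image.

STEP 1
Phase Pretest Start
LET pretest_image = (TAKE prepost RANDOM)

STEP 2
VIDEO CENTER attentiongetter LOOP
UNTIL KEY C

STEP 3
VIDEO CENTER OFF
Trial Start
IMAGE CENTER pretest_image
UNTIL SINGLELOOKAWAY CENTER 2000
UNTIL TIME 20000

STEP 4
Trial End
IMAGE CENTER OFF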
Next comes our habituation phase. First, we will need to assign this participant to one of our study conditions - habituating to cats, or habituating to dogs. We'll choose this randomly.
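For instance, in the same STEP that starts the habituation phase (a sketch):

STEP 7
Phase Habituation Start
LET condition = (FROM conditions RANDOM)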
Recall that this condition we pick (either habitcats or habitdogs) contains three tags:
the group of tags to display in habituation
a tag referring to the test item from the novel category
a tag referring to the test item from the familiarized category
Now, we'll display another attention-getter and a habituation trial.
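A sketch of one habituation trial, mirroring the pre-test structure (it assumes the habituation image group has already been pulled out of condition as habitgroup, as described below; STEP 8 matches the loop-back point discussed later):

STEP 8
VIDEO CENTER attentiongetter LOOP
UNTIL KEY C

STEP 9
VIDEO CENTER OFF
Trial Start
LET habit_image = (TAKE habitgroup RANDOM)
IMAGE CENTER habit_image
UNTIL SINGLELOOKAWAY CENTER 2000
UNTIL TIME 20000

STEP 10
Trial End
IMAGE CENTER OFF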
Now for our test phase. We'll start it off, and keep having attention-getter videos in between trials.
There are two test trials - so let's loop to repeat this section to play the second one.
This protocol is based on the Newman et al. (2020) study cited below, testing toddlers' fast-mapping from noise-vocoded speech via a preferential looking paradigm with an initial training period on the word-object mappings. In a training phase, participants are taught the names for two objects, which appear alternately on the screen. Following training, both objects appear on-screen simultaneously with no accompanying audio, to assess baseline preferences and familiarize participants with the idea that the objects will now appear together. Subsequently, the objects appear together in these same positions across several test trials. Sometimes the speaker asks the child to look toward one object ("find the coopa!") and sometimes directs them to look at the other object ("find the needoke!").
Newman, R. S., Morini, G., Shroads, E., & Chatterjee, M. (2020). Toddlers' fast-mapping from noise-vocoded speech. The Journal of the Acoustical Society of America, 147(4), 2432-2441.
This study was not originally run in BITTSy, and it used four fixed-order experiment files to control condition assignment and trial order rather than the randomization set up in this example protocol. However, this kind of multi-phase study, with multiple conditions and restrictions on stimulus presentation order in each phase, is a great example of a more complex preferential looking study (see the previous example for a simpler case). Below is a walk-through of how you could recreate its structure in BITTSy.
As in any protocol, first come the starting definitions:
In this protocol, we are only using a single display, and our audio is playing from stereo speakers. We therefore only name one display in the DISPLAYS definition, and leave out the definitions for LIGHTS and AUDIO. We only need to name one SIDE to match with the display.
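A minimal sketch of those definitions (check the starting definitions page for the exact form):

SIDES ARE {CENTER}
DISPLAYS ARE {CENTER}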
Next, we're going to set up all the tags.
Our visual stimuli are two animated 3D models, one of a spiky ball on a pedestal (which we call "spike") and one that looks like an F in profile (which we call "fred"). In training trials, we'll present one of these two objects, and an audio track will identify it as either the needoke or the coopa.
After the training phase, we'll have a baseline trial with both objects on-screen and no accompanying audio. We'll want to randomize which is on the left and which is on the right. In the test phase, to keep the task as easy as possible, we'll keep the object positions the same as in the baseline trial.
This baseline trial has "fred" on the left and "spike" on the right. (The order of the objects in our video filenames identifies their ordering on screen, left to right.)
And this baseline trial can be presented with either of these two sets of test videos with that same object positioning, depending on how the objects were labeled in the training phase.
Comments (lines starting with a #) do not affect the execution of your study, and are helpful for leaving explanation of your tag naming conventions and study structure.
Here are the rest of our tags, for the opposite object positioning.
Note that in our test video tag names, we've labelled which object is the target object and which side is the correct answer. This makes it extremely convenient to determine these later when looking at the reports from study sessions - it is right there in the tag name! This is not necessary for this protocol, just nice. It does lead to some redundancy in files - for example, coopa_spike_left and coopa_fred_right are the same video file. Which tag we use to call up that video just depends on which object was named coopa earlier in the study.
Lastly, we'll have an attention-getter video that plays in-between all the trials.
We're also going to randomly determine which sides the two objects will appear on in baseline and test trials. With this next layer of random assignment, we'll end up with four distinct orders. In the "A" orders, "fred" will be on the left, and in the "B" orders, "spike" is on the left.
Now for the test phase groups. In Order 1A, coopa goes with "spike" and needoke goes with "fred," and "fred" is on the left and "spike" on the right. These are the test video tags that fit that.
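Using the tag naming convention above, that group could be written as (group name hypothetical):

LET test1A = {coopa_spike_right, needoke_fred_left}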
Similarly, we'll set up the test video groups for the other orders.
Now we're ready to define our groups of groups - each whole order, from training to test.
So that we can randomly pick one of these four orders at the start of the experiment, there will be one more layer to this group structure. We'll have a group that contains all of the orders.
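For example (order group names hypothetical):

LET orders = {order1A, order1B, order2A, order2B}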
Now that the experiment is set up, we'll define what will actually happen when we click to run the protocol.
Now that we've picked an order, time to start displaying stimuli. First, an attention getter. This is played until the experimenter indicates that the child is looking at the screen and ready for a trial to start, by pressing the X key on the keyboard.
Now that the child is ready to see a trial, let's show one.
The group of training videos contains two tags. We'll choose one of them randomly. We use a randomization clause {with max 0 repeats in succession} to restrict which tags can be chosen from this group after we make the first selection. This one means that we can never display the same tag in the group twice in a row. Because there are only two in the group, trials will alternate between them: children will either hear coopa, needoke, coopa, needoke... or needoke, coopa, needoke, coopa...
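For instance (group and dynamic tag names follow the conventions used later in this walkthrough):

LET trainingvideo = (FROM training_group RANDOM {with max 0 repeats in succession})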
When we finish the first trial and reach this loop, we'll jump back to STEP 2, where the attention-getter video started, and we'll continue, running another whole trial, until we reach STEP 7 again. This will repeat until the loop terminating condition is reached: when we've looped back over the section seven times. Along with the first execution of this section, before we started looping, we display a total of eight training trials. This is the entirety of the training phase.
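That loop, as a sketch:

STEP 7
LOOP STEP 2
UNTIL 7 TIMES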
Now we'll start the test phase. We'll still have the attention-getter appear in between every trial, and our first video we show here will be our baseline video.
Next, the rest of our test videos, which we will again run in a loop.
Note that these steps are almost identical to steps 2-7, which made up the training phase. Parallel trial structures like these make it really simple to write protocol files - simply copy and paste the section of STEPs and change step numbers, tags, and other flags as necessary.
Once this loop is completed, the test phase is done, and so is our experiment!
Here are the definitions for our 20 trial video tags. All are named by 1) which object is on the left, 2) which object is on the right, 3) m or f for the background talker, and 4) the target word spoken by the target talker ("generic" for baseline videos, in which the target speaker said "look at that!").
We have just one group to define for this protocol. It will contain our 20 trial videos, so that we can later randomly select these videos from the group to present.
After defining tags and groups, we would define optional experiment settings if we had any to include. Most of these are unimportant for preferential looking studies. One that is important is the background color. BITTSy will cover the screen in either black or white whenever there are no visual stimuli being presented on the display. In preferential looking, we are often presenting video stimuli back-to-back, but there can sometimes be perceptible gaps between one video ending and the next starting. If you are coding trial timing from cues that depend on the appearance of the screen, or the level of light cast from the screen onto the participant's face, you will want to ensure that the background color defined here matches the background of your inter-trial attention-getters, rather than your trial videos, so that the background color being displayed is not mistaken for part of a trial.
First, we will begin a phase. In a study like this that has only one phase, this is completely optional - but it is not a bad idea to be in the habit of including them.
The next thing we want to do is display our attention-getter video, before we show a trial. We'll want to come back to this part of the protocol again later (in a loop), to show more attention-getter videos. We can't put the attention-getter in the same step as the phase start flag without also repeatedly starting the phase - which doesn't make sense.
So we'll start a new STEP and display the attention-getter with a VIDEO action statement. The short attention-getter clip will be played repeatedly until the experimenter decides the participant is ready for a trial - by pressing C.
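A sketch of this STEP (the attentiongetter tag name is a placeholder):

STEP 2
VIDEO CENTER attentiongetter LOOP
UNTIL KEY C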
When you have an UNTIL statement in a STEP, it must always be the last line. So we'll start a new STEP that will happen as soon as the experimenter presses the key. In it, we'll just stop the video.
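Like so:

STEP 3
VIDEO CENTER OFF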
Next, we need to start a trial.
Our trials need 1) a tag for the video to display, 2) a command to play the video, and 3) a terminating condition that tells us when to move on. We define the dynamic tag vid to hold whatever tag we select, which we'll then play in our action statement. We'll use the UNTIL FINISHED terminating condition to not move on to the next step until the video has played through.
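These pieces, sketched in protocol form (TAKE selects without replacement, so each video is shown only once):

STEP 4
Trial Start

STEP 5
LET vid = (TAKE trial_videos RANDOM)
VIDEO CENTER vid ONCE
UNTIL FINISHED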
We've now defined the basic structure of our experiment - attention-getter, then trial. For the rest of the trials, we can use a loop. We'll jump back to STEP 2, where the attention-getter started, go back through a whole trial, and repeat the loop again - until the loop has executed a total of 19 times, giving us a total of 20 trials.
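For example:

STEP 6
Trial End
VIDEO CENTER OFF

STEP 7
LOOP STEP 2
UNTIL 19 TIMES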
We could have combined STEPs 6 & 7. Like trial starts, our lab likes to put loops in a step by themselves to make them more visually obvious, but it doesn't matter whether you combine these or break them into two STEPs. All that BITTSy requires is that a loop and its terminating conditions are the final lines in the STEP that contains them.
Later, when you select a tag from the trial_videos group to present on a particular trial, you'll select by FIRST rather than RANDOM.
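That selection looks like:

LET vid = (TAKE trial_videos FIRST)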
Because you're using TAKE to choose tags, each time you select a tag from trial_videos, that tag is removed and can't be selected again. So each time, the tag that is FIRST among those still available for selection will be the one you want - the next tag in line from the trial order you defined when making the trial_videos group.
Below, we'll walk through one way that you could define the protocol to select trial videos from the entire set, in a manner that satisfies particular balancing requirements and presents trials in a pseudorandom order. This is a significantly more complicated case than the previous versions of this study we've defined - not for running your study, or for writing your whole BITTSy protocol, but specifically for planning the structure of the groups in your protocol, upon which stimulus selection will critically rely. Before beginning to create such a protocol, it is useful to take time to think about the layers of selection you will do - if it can be drawn as a selection tree structure, with selections between branches all at equal probability, it can be implemented in BITTSy.
Repetitive tag definitions like these do not need to all be typed out by hand! We used spreadsheet functions like clicking-and-dragging to copy cells, JOIN, and CONCATENATE to quickly make these lines in Google Sheets, and copy-pasted them into our protocol file.
Because BITTSy reads a protocol from top to bottom, we can't actually define our groups in backwards order - we'll need the definitions of the lower-level groups to come before the higher-level groups that reference them. So we'll just swap them around. We end up with this.
Above, you can see how we selected from each layer of our group structure, highest to lowest. First, we picked an object pair. We restrict how many times we can select each pair using with max 4 repeats - we require exactly 5 trials of each object pair. This restriction, combined with the number of times we'll loop over this section, will ensure we get the intended result. We also choose to add the with max 2 repeats in succession clause, which ensures that we don't ever show videos of the same pair more than 3 trials in a row.
Next, we pick a trial type - baseline, female background with right target, female background with left target, male background with right target, or male background with left target. We only want one of each to come up in the experiment for any given object pair, so we'll use TAKE to select without replacement. The next time the same object pair's group is picked to be pair, subgroups chosen previously to be type will not be available for selection.
See the resources page for copies of the different versions of this protocol.
For tags like our cat and dog stimuli, which are named systematically, never type the whole thing out - there's a much easier way! We use Excel or Google Sheets: click and drag to fill cells with the components of these lines that stay the same, auto-fill a series to make columns of the numbers in increasing order, and write a formula that we then click and drag to apply down the rows to put each of the lines together. Then you can just copy and paste the results into your protocol!
First, let's define the groups for habituation and for our pre-test and post-test.
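For example, using the tag names defined above:

LET cats = {cat1, cat2, cat3, cat4, cat5, cat6, cat7, cat8, cat9, cat10, cat11, cat12, cat13, cat14, cat15, cat16, cat17, cat18, cat19, cat20}
LET dogs = {dog1, dog2, dog3, dog4, dog5, dog6, dog7, dog8, dog9, dog10, dog11, dog12, dog13, dog14, dog15, dog16, dog17, dog18, dog19, dog20}
LET prepost = {turtle1, turtle2}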
Note that these two groups, habitcats and habitdogs, contain first a group, then two tags that refer directly to files. We'll need to remember this as we use these groups for stimulus selection and presentation later. In the habituation phase, when we're pulling items from the group cats or dogs, we'll need a choose statement to pick a tag from the group before we present it onscreen. But in the test phase, we can display those tags directly - they refer to particular files, rather than a group of files.
Some optional experiment settings may be useful here. This is a live-coded experiment, so we might want to decide what "counts" for looking time calculations. We also might want to change the key assignments for live coding to something other than C for CENTER and W for AWAY, or change the background color.
We also have to specify habituation criteria for this study. These should be decided based on previous studies, or some piloting in your lab. Here are the settings we chose for this example.
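A sketch with illustrative values (your own criteria may differ, and additional window-related settings are described on the habituation criteria page):

CRITERIONREDUCTION .5
BASISCHOSEN LONGEST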
We need to refer to the first one in the habituation phase, so in this same step, let's pull it out of our chosen condition with another TAKE statement.
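For instance (dynamic tag name hypothetical):

LET habitgroup = (TAKE condition FIRST)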
We want to keep running habituation trials until the child either meets the habituation criterion, or reaches a maximum number of trials. We'll do this by looping back to STEP 8 and repeating this section until one of those conditions is met. When one of them is, our habituation phase is over.
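A sketch of that loop (19 repeats plus the initial pass gives a 20-trial maximum):

LOOP STEP 8
UNTIL CRITERIONMET
UNTIL 19 TIMES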
What goes in our test trial? Recall that the group called condition that we chose earlier for this participant has two remaining items in it, after we selected the habituation group from it with our earlier TAKE statement. They are our two test items, in the order we want them presented. We can use another TAKE statement to remove them in order and display them.
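For example (the IMAGE action statement form is an assumption, as above):

LET test_image = (TAKE condition FIRST)
IMAGE CENTER test_image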
Lastly, the post-test. This is identical to our pre-test phase earlier, so we can specify it in our protocol the same way. We just copy-pasted it and changed the step numbers, the phase name, and the name of the dynamic tag.
And the experiment is done! See the resources page for a full copy of this protocol.
Now, let's walk through the groups we'll define for this study. There are two possible ways to pair our two objects and two audio labels. Either coopa goes with the object "spike" and needoke goes with the object "fred," or vice versa. We will call these two training phase configurations Order 1 and Order 2, respectively. We will want to randomly assign participants to one of these two ways of pairing the labels and objects.
At this point, we would define any optional experiment settings that we needed. But we don't need any for this protocol. For other preferential looking studies, you most notably might want to change the background color that is visible when there are no stimuli on the screen, so that gaps between videos are not disruptive.
In the first STEP, we'll start the training phase. We'll want to delineate which trials are in the training phase and which are in the test phase so that we can easily separate them in reports from study sessions. Then, we choose the order the child will participate in (1A, 1B, 2A, or 2B) from our orders group. The resulting group contains three other groups that together make up the videos for that order: its training phase videos, its baseline video, and its test videos. We defined them in this order, so we can use TAKE statements to choose them from first to last, without replacement, and assign them names that we can use to refer to them later. We've named these dynamic tags in a way that makes explicit that these are still groups. We'll need to select particular stimuli from these groups later, to display in their respective phases.
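A sketch of that first STEP:

STEP 1
Phase Training Start
LET order = (FROM orders RANDOM)
LET training_group = (TAKE order FIRST)
LET baseline_group = (TAKE order FIRST)
LET test_group = (TAKE order FIRST)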
With this randomization clause, we have specified exactly what we want for the whole phase, right here. We can use a loop to run the rest of our trials.
Recall that the baseline_group has the baseline video we want for this order as its only tag. We'll still have to make a selection and assign it to the dynamic tag baseline_trial in order to play the video. But with only one tag in the group, there are many equally good ways to define the choose statement.
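For instance:

LET baseline_trial = (FROM baseline_group FIRST)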
In the test phase, we want to display eight total test trials. We achieve this when the loop in STEP 19 has run seven times. There are two tags available in the test_group for each order, and we want each to be displayed exactly four times. After each tag's first selection, it can have 3 repeat selections. We define this in the randomization clause in STEP 17. If we wanted to further restrict the trial order (for example, if we wanted to prevent a random order that showed the same video four times in a row, then the other one four times in a row), we could do so with additional randomization clauses.
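The selection in STEP 17 might therefore look like:

LET testvideo = (FROM test_group RANDOM {with max 3 repeats})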
See the resources page for a copy of this full protocol.
The following protocol is based on the Newman (2009) paper cited below. A headturn preference procedure is used to assess whether infants can recognize their own name being spoken when multiple other talkers are speaking in the background.
Newman, R. S. (2009). Infants' listening in multitalker environments: Effect of the number of background talkers. Attention, Perception & Psychophysics, 71, 822-836.
This study was run prior to the development of BITTSy on an older, hard-wired system in our lab. However, the following example BITTSy protocol replicates its structure and settings, and has been used in our lab for studies in this same body of work.
In headturn preference procedure, we'll start off with the participant facing a neutral direction, then have the participant turn their head to listen to an audio stimulus. For as long as they are interested in that stimulus, they tend to keep looking toward where the sound is coming from. When they get bored, they tend to look away. Across trials, we can use how long they listen to stimuli as a measure of listening preference.
Infants as young as four months old will listen longer to their own name (a highly familiar word) than other, unfamiliar names. In this study, we'll present names with noise of other people talking in the background. If infants still listen longer to the audio that contains their own name, we can say that they can still discern their name and recognize it as familiar, despite these harder listening conditions. By testing infants' success in this task at different age ranges, with different types of background noise, and at different noise levels, we can better understand infants' speech perception abilities.
This study begins with a phase that familiarizes infants with the procedure, and continues until they accumulate a certain amount of looking time toward the audio stimuli in this phase. In the test phase, we present three blocks of four trials each, with the trial order randomized within each block.
As always, we open with starting definitions. For this headturn preference study, we will use three lights in our testing booth: one directly in front of the participant, and one each on the left and right sides of the testing booth which the participant must turn their head 90 degrees to view. Our lab's starting definitions look like this:
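(A sketch - check the starting definitions page for the exact form, particularly for the audio definition matching your stereo setup.)

SIDES ARE {LEFT, CENTER, RIGHT}
LIGHTS ARE {LEFT, CENTER, RIGHT}
# plus an audio definition for the stereo speakers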
In this headturn preference study, we will be turning on light/speaker pairs that are located on the same side of the testing booth. It will be important to define your LEFT and RIGHT lights in a way that matches up with your left and right speaker channels, so that each light/speaker pair can be turned on and off with the same side name. Here, the lights that we define as LEFT and RIGHT match what we see in our booth from the experimenter's perspective, and we have set up our stereo audio system with the left-channel and right-channel speakers directly behind these LEFT and RIGHT lights, respectively.
You can choose to name the sides of your booth according to your experimenter's perspective or your participant's perspective, whenever they don't match - it doesn't matter, as long as the lights and speakers that should be treated as a pair have the same side name, and it's clear to your experimenters how to identify these sides via your key assignments for live-coding.
See the starting definitions page for more on how to order these definitions in a way that fits your own testing setup.
In this protocol, we have six files to define as tags - two clips of classical music for the familiarization phase, and four audio files for the test phase, in which a speaker repeatedly calls a name with the noise of several people talking in the background. One file has the name the participant is most commonly called ("Bella" in this example), one has a name that matches the stress pattern of the participant's name ("Mason"), and two foil names have the same number of syllables but a different stress pattern ("Elise" and "Nicole").
Because the stimuli in this experiment are particular to each participant (one of them must be that participant's name), we make a copy of the protocol before each participant's visit that has the appropriate filenames filled in for their session. Tag names let us label these generically by what trial type it is (name, matched foil, or unmatched foil) rather than the particular name that was being called, and once we change the filenames they reference, no further changes to the protocol are necessary to prepare for each participant's session.
These tag names, which are kept consistent, will also appear in the reports of session data rather than the filenames, allowing us to easily combine data across participants even when stimulus files themselves differ. (And keep reports de-identified, in this study in which their first name is a stimulus!)
Having defined our tags, now we'll create groups that will help us define our stimulus selection later, in the familiarization and test phases of our experiment.
The training phase is straightforward - we'll just present those two music clips throughout.
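For example (tag names hypothetical):

LET trainingmusic = {music1, music2}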
But in the test phase, we'll use a block structure. Each of the four trial types will be presented in a random order in each block, and we'll have a total of three blocks. All our test blocks have the same four stimuli. However, we'll define three copies of this group - testblock1 to testblock3 - which will allow us to randomize the order of stimuli within each block completely independently from the other blocks.
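With tag names reflecting the trial types:

LET testblock1 = {ownname, matchedfoil, unmatchedfoil1, unmatchedfoil2}
LET testblock2 = {ownname, matchedfoil, unmatchedfoil1, unmatchedfoil2}
LET testblock3 = {ownname, matchedfoil, unmatchedfoil1, unmatchedfoil2}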
You might wonder why we can't define a single testblock group, and just restrict selection from that group to produce the desired block structure. See the last example in the max <number> repeats in <number> trials section for why this doesn't work, and its following section for more on why this nested group structure is a good solution for creating blocks in experiments.
And we need an overarching group for the test phase that contains our blocks:
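Namely:

LET testaudio = {testblock1, testblock2, testblock3}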
In addition to randomly ordering stimuli, we will want to randomly order stimulus presentation locations. We can set up for this by creating groups of sides. CENTER in this experiment is used only in between trials; LEFT and RIGHT are the sides for selection here. We'll place some restrictions on the randomization order (e.g. to prevent too many trials in a row on the same side), but we'll have these restrictions reset between the familiarization phase and test phase. Therefore, we'll make two groups of sides, one to use in each phase, so that when we switch to the test phase, we're starting fresh on which side choices are allowable.
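For example:

LET trainingsides = {LEFT, RIGHT}
LET testsides = {LEFT, RIGHT}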
At this point in the protocol, we would define any optional experiment settings we needed. In an HPP study, the relevant ones to consider are COMPLETELOOK and COMPLETELOOKAWAY, as well as key assignments for live coding. Here, we'll use all the defaults, so we won't need to include any.
See also our page on live coding for recommendations on key assignments and details on how live coding works in BITTSy.
Now we're ready for the body of the protocol - what will happen when we click the button to run the protocol.
First, we'll start off the training phase, in which we simply familiarize the participant to the basic procedure. First, a light will start blinking in the center of the booth, getting the participant to attend to this neutral direction before a trial starts, so that they later will demonstrate a clear headturn to attend to stimuli on the sides of the booth. Once the experimenter judges they are looking at it (by pressing the key assigned to CENTER), it turns off.
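A sketch of these opening steps (the form of the light action statements is an assumption - see the action statements page):

STEP 1
Phase Training Start

STEP 2
LIGHT CENTER BLINK
UNTIL KEY C

STEP 3
LIGHT CENTER OFF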
Immediately, one of the side lights will turn on. We choose this side randomly to be either LEFT or RIGHT, but restrict the randomization to not choose the same side more than 3 times in a row. Once the participant turns and looks at this side light, the experimenter will press a key to indicate they are now looking in that direction.
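As a sketch (the randomization clause allows at most 3 consecutive trials on the same side; the two UNTIL statements are linked by OR):

STEP 4
LET side1 = (FROM trainingsides RANDOM {with max 2 repeats in succession})
LIGHT side1 ON
UNTIL KEY L
UNTIL KEY R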
In a protocol that assigned different keys to LEFT and RIGHT, the terminating conditions for this step should be until those keys were pressed, rather than L and R.
Now, immediately once the look toward the light is recorded, a trial starts. Audio will begin to play from the speaker directly behind the light. It will continue to play until either the file ends, or the participant looks away for at least 2 seconds - whichever comes first.
Here, we choose the audio for the training trial from the trainingmusic group such that we cannot pick the same music clip twice in a row. The two clips will play alternately throughout the training phase.
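As a sketch (the side argument of the audio statement is an assumption; the audio should come from the speaker paired with side1):

STEP 5
Trial Start
LET music = (FROM trainingmusic RANDOM {with max 0 repeats in succession})
AUDIO side1 music ONCE
UNTIL FINISHED
UNTIL SINGLELOOKAWAY side1 2000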
Once one of the step terminating conditions is met, we want the trial to end. The light should turn off, and so should the audio. (If the FINISHED terminating condition was the one that was met, it has already stopped playing, and the OFF command does nothing. But if the SINGLELOOKAWAY one was met instead, it would continue to play if we didn't turn it off now.)
We end this STEP with UNTIL TIME 100, just to have a tiny perceptible break between turning this side light off and the start of the next inter-trial period - which we'll get to via a loop.
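In protocol form (statement forms assumed, as above):

STEP 6
Trial End
AUDIO side1 OFF
LIGHT side1 OFF
UNTIL TIME 100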
Up to now, our steps have defined the basic structure of our training phase: start the CENTER light, wait for a look and turn it off, start a side light, wait for a look and play a trial. We can now define a loop that will repeat this basic structure until the participant is sufficiently familiarized to proceed to the test phase.
We'll loop back to STEP 2 (where we turned on the CENTER light to re-orient the participant between trials) and execute all the steps until we reach STEP 7 again, running another trial in the process. We keep looping back over these STEPs until the child accumulates 25 seconds of total looking time to the two music clips. Once this happens, we consider the participant to be sufficiently familiarized with the study procedure, and end the training phase.
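A sketch of this loop; the exact form of the accumulated-looking-time condition is an assumption (check the loop terminating conditions page):

STEP 7
LOOP STEP 2
UNTIL TOTALLOOK trainingmusic 25000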
Now, time for the test phase. Recall that we have three test blocks. Each is a group of tags, within the group testaudio that contains all three blocks. The first thing we'll need to do is pick a block.
All the blocks in this experiment contain the same tags, so it doesn't matter whether we choose by FIRST or RANDOM. But we do want to use TAKE rather than FROM. When we later choose the stimuli from within the block for each trial, we're going to use TAKE to remove them so that they aren't chosen on more than one trial within the block. If, at the end of the block, this now-empty block were still available for choosing from testaudio (i.e. if we used FROM), we could get in trouble if we picked it again - we'd try to select stimulus files from the block, but there wouldn't be any more in there to pick, and our protocol would have an execution error and stop running.
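For example (choosing by FIRST here; RANDOM would work equally well, as noted above):

STEP 10
LET block = (TAKE testaudio FIRST)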
The test phase of the protocol will start off the same way that the familiarization phase did - by defining the inter-trial period, with the flashing CENTER light, then choosing which side of the booth the test trial will be presented on. We'll select from testsides this time, and give it the dynamic tag side2 so that we can refer to this side for the light, and later the audio. No two tags, including dynamic tags, can have the same name, which is why we define our placeholder tag for the active side as side1 in training and side2 now.
Now for a test trial. From the block that we chose back in STEP 10, we'll select a random tag to play - either ownname, matchedfoil, or one of the two unmatched foils. We want to select these without replacement using TAKE, so that we never repeat a single stimulus within the same block.
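The selection within the trial:

LET testsound = (TAKE block RANDOM)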
Now, we'll define a loop that will let us play a trial (with the blinking CENTER light in between) for each trial type in the block. We loop the section 3 times, so that we end with 4 trials total.
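That loop:

STEP 16
LOOP STEP 11
UNTIL 3 TIMES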
Note that we loop back to STEP 11. That was after we selected a block from the testaudio group, so the dynamic tag block refers to the same block throughout this whole loop. This is what we want - to TAKE each of the items from the same block and play them in a trial by the time we're done with this loop.
From STEP 10, where we select a block, up to this point constitutes one block out of three in our test phase. This is only a third of our trials - but we're almost done with writing our protocol. Because each block has the same structure, we can make another loop to repeat the process of selecting a block and playing four trials within it - we'll add an outer loop, which will contain the loop we've already defined.
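The outer loop:

STEP 17
LOOP STEP 10
UNTIL 2 TIMES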
When we loop back to STEP 10, we pick the next block out of testaudio and repeat the whole structure, including the loop at STEP 16 that lets us run through all four trials within the new block we've selected. When the loop in STEP 17 finishes and our second block of trials has played, we loop through again for our third block. Then, when the STEP 17 loop has run 2 times - meaning all three of our test blocks have been completed - our experiment is done.
These kinds of loops may not be totally intuitive if you are used to thinking of your experiments as a linear progression of trials. However, loops are well-suited for any kind of experiment that has repeating units with the same internal structure, and they will save you tons of effort in defining your protocols!
See the resources page for a copy of this protocol.
This protocol is based on Experiment 3 of the classic Werker et al. (1998) study cited below. Infants are habituated to a single word-object pair. Later, they are presented with four test items: familiar object with familiar word, familiar object with novel word, novel object with familiar word, and novel object with novel word. Pre-test and post-test trials, consisting of a novel object and word that do not appear in any other trials, are also included.
Werker, J. F., Cohen, L. B., Lloyd, V. L., Casasola, M., & Stager, C. L. (1998). Acquisition of word–object associations by 14-month-old infants. Developmental Psychology, 34(6), 1289.
This protocol will use one central display and a light positioned directly below the display. These would be our minimal starting definitions:
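A sketch of those minimal definitions (check the starting definitions page for the exact form):

SIDES ARE {CENTER}
DISPLAYS ARE {CENTER}
LIGHTS ARE {CENTER}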
However, if you have more lights connected in your system (e.g. for headturn preference procedure) and will be leaving them all connected during testing, you may need to define more lights so that you can identify which one should be CENTER.
Now we'll define our tags that reference files. We have three audio files and three video files - two of each for use in habituation/test trials, and one of each reserved for pre-test and post-test.
We could define these with LET statements. But because these will always be paired, it is convenient to use TYPEDLET. This will allow us to define all of our possible word-object pairings and randomly select a pair for presentation, rather than independently selecting audio and video and having less control over which appear together.
Our videos, which we'll call round, green, and blue, show toys being rotated by a hand on a black background. Our audio files consist of infant-directed repetitions of a nonword: deeb, geff, or mip.
Having defined these tags with TYPEDLET, we can use LINKED to define tags that pair them up appropriately. The tags mip and round will be reserved for pre-test and post-test, but all other pairings will occur in the test phase, and one of them will be featured in the habituation phase.
We'll define two groups consisting of our LINKED tags. Both will contain all of the possible pairings of deeb, geff, blue, and green. At the start of each test session, we will randomly select one pairing from the first group for the infant to see during habituation. The test phase for all infants will consist of all four pairings, so our test_trials group will also contain all four.
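For example (the names of the LINKED pair tags are hypothetical):

LET habit_pairs = {deeb_blue, deeb_green, geff_blue, geff_green}
LET test_trials = {deeb_blue, deeb_green, geff_blue, geff_green}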
Now we define our experiment settings. Because this protocol has a habituation phase, we must define all of our habituation criteria here.
We could also define key assignments for live-coding - without them, experimenters will use C for looks CENTER to the light/screen, and W for AWAY.
Now for the STEPs that will start once we run the protocol. First, we'll define our pre-trial phase. Before each trial, we'll have a center light flash until the infant is paying attention. We'll have the experimenter press C to begin a CENTER look.
Once the infant is looking, we can start a trial. We'll turn off the light and display the pre-test stimuli. Note that because we defined the LINKED tag prepost with both an audio tag component and a video tag component, we can reference the LINKED tag in the action statements for both AUDIO and VIDEO.
Our trials throughout the experiment will last a maximum of 14 seconds. Trials will end when we reach this limit, or when the infant is recorded as looking away for at least 1 second. There is only one pre-test trial, so once it is over, we end the phase and move on to habituation.
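As a sketch, the trial step's terminating conditions would be an OR combination along these lines. UNTIL TIME appears elsewhere in this manual, but the name of the lookaway condition here is an assumption to check against the terminating conditions chapter:

UNTIL TIME 14000
UNTIL SINGLELOOKAWAY 1000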
As we start the habituation phase, the first thing we need to do is assign the participant randomly to a word-object pair that they will see during habituation. There were four possibilities in the habit_pairs group.
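With the group defined earlier, this assignment is a single selection statement made at the start of the phase. The dynamic tag name pair and the STEP number come from the discussion below:

STEP 7
LET pair = (FROM habit_pairs RANDOM)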
Having selected in advance which word-object pair the participant will see, we can define the inter-trial period (with the blinking light) and a habituation trial. Note that the pair tag we selected from habit_pairs is again a LINKED tag that can be referenced in both of the action statements to play the audio and video.
We'll play the exact same stimuli for the rest of our habituation trials, so now we'll define a loop. We want to keep playing trials until the infant either meets our habituation criteria or reaches the maximum of 20 habituation trials (19 of them via the loop).
Note that our loop goes back to STEP 8, where we started the light blinking for the inter-trial period, but excludes the assignment of the dynamic tag pair back in STEP 7. This is why we chose which word-object pair would be presented before we needed to use it to display trial stimuli in STEP 10: we didn't want this LET statement to be included in the loop. We want this dynamic tag to be assigned as one pair and stay that way, so that we are repeatedly presenting the same word-object pair. If the LET statement were inside the loop steps, we would repeat the choose statement on every loop iteration, and we would show different word-object pairings on different trials. In general, when you want to select from a group and be able to refer to the result of that selection throughout a phase, it's good practice to make that selection in the same STEP where you define the phase's start. "Getting it out of the way" like this makes it harder to accidentally loop over and reassign a dynamic tag that you would prefer to stay static.
Once the infant has habituated or met the maximum number of trials, we move on to the test phase. We'll begin again with the inter-trial light flashing before beginning a trial.
Recall that our four test items are all in the test_trials group, and are LINKED tags with all the possible pairings of the audio deeb and geff and the videos of the objects blue and green. We want to display these four word-object pairs in a random order, without replacement. We'll define one trial, then use a loop to run the remaining trials.
The test phase is where we see the advantage of defining our pairs of tags via LINKED tags rather than selecting video and audio tags separately. If we had defined a test audio group and test video group, they would look like this:
LET test_audio = {deeb, deeb, geff, geff}
LET test_video = {blue, green, blue, green}
With random selection from each in turn across trials, there would be nothing to stop us from repeating a pairing, and thus failing to show all the combinations of words and objects. For example, on our first test trial we could randomly select deeb and blue - but there is no way to specify that if we choose deeb again from the audio group, green must be selected rather than blue from the video group. We could instead define groups that would be chosen from in a fixed order, arranging the audio and video tags so that all the pairings occur when they are selected using FIRST (and creating multiple copies of this protocol to counterbalance test trial order). But without LINKED tags, we could not use RANDOM selection in this protocol.
We'll use another loop to play the remaining three test trials, after our first one is done, and this concludes our test phase.
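A sketch of that loop step (the step numbers are illustrative; UNTIL 3 TIMES runs the looped trial three more times, for four test trials total):

LOOP STEP 13
UNTIL 3 TIMES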
Lastly, we have a post-test trial, which is identical to our pre-test phase.
Now our experiment is done!
See the resources page for a copy of this protocol.
This protocol demonstrates a common application of a conditioned headturn procedure: signal detection. It isn't strictly based on any particular study, although it is similar conceptually to many in the realm of infant signal detection in noise, e.g. Trehub, Bull, & Schneider 1981; Werner & Bargones 1991.
In this example study, multi-talker background noise plays continuously throughout. During some trials, an additional voice that repeatedly calls the participant's name begins playing too. When the name is being called and the participant turns toward the source of the voice (a "hit"), they are rewarded with a desirable stimulus (e.g. a toy being lit up or beginning to move). When the name is being called but their behavior does not indicate that they detected it (i.e. they don't turn toward it - a "miss"), the reward stimulus is not presented, nor is it presented when the name is not being played, whether they turn (a "false alarm") or not (a "correct rejection").
This is a simple example of a signal detection task because during the test phase, it does not adjust the intensity of the signal (voice calling the name) relative to the noise, as you would when attempting to establish a detection threshold.
This protocol will use two separate channels on a DMX dimmer pack. In the protocol we'll refer to these as LIGHT LEFT and LIGHT RIGHT, but crucially, these won't both correspond to lights.
The LEFT light will be our reward stimulus. Depending on the implementation of conditioned headturn, it might be an actual, simple light that illuminates an otherwise invisible toy. Or the "light" might be a plugged-in device or electronic toy that will begin to move when receiving power - BITTSy doesn't know the difference!
The RIGHT light will refer to an empty channel on the dimmer pack, with either nothing plugged in or something plugged in that will remain powered off regardless of the dimmer pack command (i.e. by having the power switch on the device be in the off position). We need this empty channel so that trials can always be structured the same way, whether they are trials with a potential to show the reward or not.
The above are our starting definitions. However, if you have more lights connected in your system (e.g. for headturn preference procedure) and will be leaving them all connected during testing, you may need to define more lights so that you can identify which channel has the reward stimulus and which is empty.
Now we'll define our tags that reference files. We have only audio files for this type of study. One is the background noise that will play continuously throughout. Then we have our signal of interest - the file that contains the voice calling the participant's name. We'll call this audio file changestim. We'll need another to be the controlstim, so that regardless of the trial type, we've got a command to play an audio file. But in this case, the control stimulus is actually a completely silent file, since we're interested in comparing the presentation of the name to a case where the background noise simply continues with nothing else added.
We'll use groups to set up a key aspect of this paradigm: the "change stimulus" or signal of interest (the voice calling the participant's name) gets paired with the reward stimulus (plugged into the LEFT light channel), and the "control stimulus" (the absence of the voice - a silent audio file) does not.
A "change trial" will use both the "change" stimulus and the LEFT
light. So we'll define a group that pairs them, called changetrial
.
If you wanted change trials to contain one of several possible audio files (e.g. with different tokens/words/voices), your changetrialstim group would contain multiple tags for the different audio files. In later steps of the protocol, when the audio files are played, you could randomize which audio file is selected from the group. However, it's important to set up your selection criteria in a way that ensures there will always be a valid selection available for the audio file, regardless of whether the current trial is a change trial or a control trial.
Similarly, we'll make a controltrial group that pairs the "control" stimulus (silence) with the empty RIGHT channel, so that this trial type can have the same structure within the protocol, but not actually turn on a device.
You may wonder why we define the first two groups in each set rather than directly defining LET changetrial = {changestim, LEFT}. At the time of writing, BITTSy throws a validation error when a group directly contains both tags and sides. Creating a "dummy" group for each, and making changetrial a group of other groups, sidesteps the validation error and allows you to run the protocol.
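A sketch of the workaround (the single-item group names are ours; controltrial would be built the same way from controlstim and RIGHT):

LET changetrialstim = {changestim}
LET changetrialside = {LEFT}
LET changetrial = {changetrialstim, changetrialside}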
Now that we have structures representing change and control trials, we can define the groups that we'll use to present these trial types within phases - training phase, conditioning phase, and test phase.
In this study, we can choose not to include control trials in the training phase - since the control trials are the absence of the voice and absence of the reward stimulus, the gaps between change trials essentially reinforce that the reward does not occur until the change stimulus is present. A study focusing on discrimination between two signals, where one is rewarded and the other is not, would likely have control trials during training. But it is not necessary here, so the training group can contain only changetrial.
The conditioning phase will also only contain change trials - the goal of this phase, after the initial exposure to the paired signal and reward in the training phase, will be to ensure that the participant predicts the reward after hearing the stimulus, and makes an anticipatory turn toward the location of the reward.
The test phase will contain both change trials and control trials. Here, we'll have them occur with equal probability, but we could weight the relative probabilities by adding more change trials or more control trials to the test group.
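Putting the pieces together, the three phase groups described above might look like this:

LET training = {changetrial}
LET conditioning = {changetrial}
LET test = {changetrial, controltrial}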
There are no special experiment settings required for this protocol. However, if we planned on using the reporting module to analyze participant logs, it is helpful to make sure that the key used to mark the participant turning toward the signal matches the key assigned to the LEFT side (where the signal will be presented). That way, the turn is logged as a look toward that stimulus rather than just a timestamped keypress, and we can easily summarize which trials had turns, and the time it took before the participant turned, via a standard report of looking time by trial. This protocol is configured to use L, the default key for LEFT, so it doesn't require a key assignment definition, but if you wished to use a key other than L, you could include one in this section.
First in this study, we'll start playing the background noise file that will play on loop throughout the entire study. Then we'll want to wait until the experimenter judges that the infant is ready for a trial to start. (Typically in conditioned headturn studies, a second experimenter sits in front of the infant to engage their attention with some small toys, and the experimenter running the protocol would wait until the infant was attending to the toys, relaxed and unfussy.) We'll have the main experimenter press C when the infant is ready.
The starting definitions we used didn't specify a CENTER audio channel (i.e. for use with a multichannel speaker array), so here we have the default behavior of CENTER, where it's exactly the same as STEREO - played simultaneously from the left and right channels. This means that when we later have trial audio play from LEFT, BITTSy will play it simultaneously from one of the speakers that is already playing the noise file (while the speaker on the right channel continues with only the noise). Having the noise play from CENTER allows us to turn off the trial audio on LEFT without interrupting the noise at all. If you wanted the noise to come from only the same speaker as the trial audio, while keeping the ability to turn off trial audio without interrupting the noise, you could set up your noise audio file to have information only on the left channel and silence on the right channel, and still instruct BITTSy to play it CENTER.
Next we'll need to pick the trial type from the training group so that we can subsequently use its components. There's actually only one trial type in this group, so it doesn't matter how we select it, so long as we allow for repeats.
The dynamic tag trial_tr will now refer to that one trial type group, changetrial, which itself has two tags inside - the first referring to the audio file that will be played, and the second referring to the LIGHT channel that will be used during those trials (for change trials, the location of the reward stimulus). We can use selection restrictions to ensure we pick first the audio file group, then the reward location group, to access these components and later use them to display stimuli.
Recall that these subgroups were themselves groups that contained one tag/side each in order to avoid making a group that directly contained both a regular tag and a side name. So we have one more layer of selection to get to our final tags.
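A sketch of the full chain of selections - the selection restrictions FIRST/LAST are stand-ins here, so check the group selection chapter for the exact restrictions your version supports:

LET trial_tr = (FROM training RANDOM)
LET audiogroup_tr = (FROM trial_tr FIRST)
LET rewardgroup_tr = (FROM trial_tr LAST)
LET audiostim_tr = (FROM audiogroup_tr FIRST)
LET reward_tr = (FROM rewardgroup_tr FIRST)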
Now we can begin to present the change trial audio. We'll add a brief delay after it starts so that the reward doesn't turn on simultaneously with the audio signal, but rather the start of the audio indicates that the reward will appear soon.
Audio set to LOOP does not turn off until we use an AUDIO OFF command. The change trial audio will continue playing after this step, while we present the reward! If you wanted the audio and reward to be presented sequentially and not overlap, you could adjust the UNTIL TIME 1000 statement above to be the full length of time you want the audio to play, and move the AUDIO LEFT OFF command before the LIGHT ON statement in STEP 4.
Now it's time to turn on the reward. In the training phase, this isn't yet dependent on the behavior the participant exhibits - we'll simply always turn on the reward. We'll keep it on for a little while, then turn off both the reward and audio together. (Note that since the background noise was playing AUDIO CENTER and the change trial audio plays AUDIO LEFT, we can turn off the change stimulus with AUDIO LEFT OFF without interrupting the background noise.)
From STEP 2 to here constitutes the repeating structure of the training phase - waiting for the experimenter to indicate the participant is ready for a trial, starting to present the change audio, turning on the reward, and turning both off. We can use a loop to repeat this structure until the end of the phase (in a real study, we would likely repeat it more times).
Now that the participant has had some exposure to the audio signal and an opportunity to learn that the reward turns on when it plays, the conditioning phase will verify that learning: we'll require the participant to make anticipatory headturns toward the location of the reward stimulus whenever they hear the audio signal. We'll begin delaying the onset of the reward to give them more opportunity to do so, and require three trials in a row where the participant turns in anticipation before the reward stimulus turns on.
As in the training trials, we need to wait for the experimenter to press C to indicate when the participant is ready for a trial to start.
Next we'll select the trial audio (always the change stimulus in this phase, since that's the only trial type in the conditioning group) and reward location, and start playing the audio. Note that this selection structure is identical to the training phase.
What's different in this phase is that we want to do different things depending on whether the participant makes an anticipatory headturn or not. If they don't make the headturn within the time limit before the reward stimulus comes on, we want to continue with more trials exactly like this one so they have more opportunity to learn the pattern. But if they've started to learn that the audio predicts the reward and make an anticipatory turn, we're ready to increase the delay in the next trial, and need to keep track of this successful trial as the start of a possible three in a row that would result in ending this phase.
We can use an OR combination of terminating conditions with JUMP to create two separate "branches" for each of these possibilities. If there's a headturn before the time limit, we'll jump to STEP 14. If not, we'll keep going to the next step. (For a more detailed explanation of this type of usage of JUMP, see this page.)
The second UNTIL statement could also be written UNTIL TIME 2000 without the JUMP clause - JUMP isn't required when the result is simply to go to the next step. But when the protocol is shown broken up into parts like this, adding it makes it a little easier to track what's happening!
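A sketch of this OR combination - the headturn key L matches this protocol's key assignment, while the number of the fall-through step is illustrative:

UNTIL KEY L JUMP STEP 14
UNTIL TIME 2000 JUMP STEP 11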
From here until STEP 14 defines what we'll do when the participant fails to make an anticipatory headturn on the trial. It looks just like the training phase. We'll still turn on the reward, so that the participant continues to have opportunities to learn the relationship between the audio stimulus and reward.
Afterwards, we'll use a LOOP statement to run more trials. We should include some way to end the study in case the participant doesn't begin making anticipatory headturns. This loop terminating condition will have us jump to the very end of the protocol (STEP 37), ending the study, if the participant fails to make an anticipatory headturn 10 times in a row.
Using JUMP to move outside of this loop when the participant successfully makes an anticipatory headturn will reset the count on this UNTIL 9 TIMES loop statement, which is why it requires 10 misses in a row rather than 10 misses total. If you didn't want the count to reset, and wanted to use a total number of misses instead, you could (A) restructure your steps and JUMPs so that execution never moves to steps outside this "core loop", or (B) use a loop terminating condition that depends on phase information, like TOTALLOOKAWAY (from the reward stimulus), or on global information, like whether a group is EMPTY - these are unaffected by moving outside the loop. If you preferred to continue presenting trials until the experimenter judged the participant was tired, you could use KEY instead.
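A sketch of the loop step itself (STEP 9 and STEP 37 come from this walkthrough's description; exact numbering depends on the full protocol):

LOOP STEP 9
UNTIL 9 TIMES JUMP STEP 37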
Recall that STEP 14 was where we jumped if the participant successfully made an anticipatory turn. This also results in presenting the reward.
But crucially, after this, instead of looping back, we'll keep going to STEP 17, which is only reached when the participant has a current streak of exactly one anticipatory turn in a row. This way, we can track what they need in order to reach three in a row and move to the next phase. (See this page for further explanation of this type of usage of JUMP.)
STEP 17 will start another trial, which is largely identical to before - the only difference is the amount of delay between when the audio starts and when the reward turns on, during which the participant has an opportunity to make an anticipatory turn.
STEP 18 begins what will happen when the participant had a current streak of one anticipatory turn, but missed this one, resetting their streak. So after presenting the reward as usual, it will jump back to STEP 9 to present the next trial - the step that is used whenever the current streak is zero.
STEP 20, however, is where we go when the participant made another anticipatory turn - two in a row now. So after presenting the reward as usual, we'll move on to STEP 23, which presents another trial but is only reached when the current streak is two in a row. The delay during which the participant has an opportunity to make an anticipatory turn increases again.
STEP 24 is where we end up when the participant missed the time window for the anticipatory turn. So after presenting the reward, we'll specify a jump back to STEP 9 to reset their streak.
But if we jumped to STEP 26 instead, we got there because the participant made another anticipatory turn. They already had two in a row to get to STEP 23, so the jump straight to STEP 26 means that they've now achieved three in a row - our criterion for ending the conditioning phase. So after presenting the reward, we'll end the phase and move straight on to the test phase.
In the test phase, trials will sometimes contain the change audio, and sometimes not. The participant should only be rewarded for successfully detecting (making an anticipatory headturn) the change stimulus - this possible outcome is called a "hit." There are three more possible outcomes - the change audio is present and the participant does not make a headturn to show they detected it (a "miss"), the change audio is not present but the participant makes a headturn anyway (a "false alarm"), and the change audio is not present and the participant does not make a headturn (a "correct rejection").
We'll start a trial just like before, waiting for the experimenter to indicate the participant is ready, then selecting from the highest-level group for this phase until we have the final tags we'll use to play audio and turn on a light channel.
Since the test group contains both change trials and control trials, audiostim_test and reward_test end up being either the change audio and LEFT (the reward stimulus channel), respectively, or a silent audio file and RIGHT (the empty channel on the DMX box).
The test group was defined at the start of the protocol as having one copy of changetrial and one copy of controltrial, so (FROM test RANDOM) in the step above will select them with equal probability, but will not guarantee an equal number of each across the test phase. If we wanted to ensure a particular number of each, we could define test to have that number of copies - e.g., for four of each trial type, LET test = {changetrial, changetrial, changetrial, changetrial, controltrial, controltrial, controltrial, controltrial} - and then use (TAKE test RANDOM) instead.
Like in the conditioning phase, we're interested in whether the participant makes an anticipatory headturn toward the reward (L), and we need to do different actions depending on their behavior. We'll have another OR combination of UNTIL statements that will allow us to jump to different sections depending on what happens.
STEP 32 is what will execute when the participant makes a headturn within 5 seconds of when the trial audio (either the change audio or the beginning of a silent file) plays. The case where the participant makes a headturn actually represents two different possible trial outcomes, depending on which audio file was playing. If the trial audio was chosen to be the change audio, this represents a hit, and we should present the reward. But if it was the silent file, this is a false alarm, and we shouldn't present the reward.
Because the audio and side name were paired originally, so that the change audio went with the side of the reward (LEFT) and the silent file went with the empty channel (RIGHT), we don't actually have to differentiate between these two outcomes in order to correctly reward or not reward the headturn. The tag reward_test contains the appropriate side name, so that if the change audio was chosen, the LEFT channel reward will now turn on, but if it was a control trial, nothing will visibly happen, and we'll simply wait a few seconds before we can start another trial.
STEP 34 is only reached when the participant does not make a headturn shortly after the start of a trial. This behavior could be a miss, if the change audio signal was actually present, or a correct rejection, if it wasn't. Although when we ultimately analyze the data we'll consider the first outcome an incorrect response on the trial and the second a correct one, we don't need to differentiate between these situations now, because neither results in the presentation of a reward. In both cases, we simply need to end the trial and turn off whatever audio had been playing.
Regardless of which "branch" was taken to handle the outcome of the trial, we now end up at STEP 35
(there was a JUMP
to here from the end of the step that handled hits and false alarms). We could have a larger delay specified here if we wanted to make sure there was a certain minimum time in between trials, and in the next step we'll loop back to STEP 30
to run more test trials. This one will run a total of 8 test trials.
When all eight test trials have been executed and the loop is done, the study is over! We'll end the test phase and turn off the background noise that's been playing from CENTER since the beginning.
See the resources page to download a copy of this protocol.
JUMP clauses are a very powerful option introduced in BITTSy version 1.5. Ordinarily, BITTSy protocols progress linearly through the STEPs that you write - the entire protocol is executed (with the exception of LOOPs) line-by-line, in the order the steps appear in the protocol file. JUMP clauses allow there to be new and different paths through an experiment: STEPs can be skipped over, repeated, or executed out of order. Different participants can experience different parts of the protocol in different orders, depending on conditions that you specify. This opens up a wide new range of possible experiments in BITTSy.
JUMP clauses are an optional part of UNTIL statements, which are used by both step terminating conditions and loop terminating conditions. These statements must always be the last lines of the STEP that contains them, and cause execution to either remain within that step, or loop through a series of steps, until the specified conditions are met.
By default, once the UNTIL conditions of a STEP are met, the experiment progresses to the next step of the protocol, in order. Adding an optional JUMP clause to the UNTIL statement allows you to change this, and specify a different step in the experiment that should be executed next, after that condition is satisfied.
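For example, a step ending with the line below would send execution to STEP 6 once the experimenter presses C (this same statement reappears in an example later on this page):

UNTIL KEY C JUMP STEP 6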
JUMP clauses can be utilized in any UNTIL statement, whether it is a step terminating condition or a loop terminating condition. The jump to the specified step does not occur until the UNTIL statement that contains it is satisfied. (Note that loop terminating conditions are checked only when the loop step is reached - see explanation here.)
JUMP clauses must always be embedded in UNTIL statements. They cannot occur on their own to mean that execution should always jump to the specified step - they must depend on a condition being met. But you can achieve the same goal (a JUMP that always executes, immediately) by having that condition be something that you know will already be true at that point in your experiment, or by specifying a wait time of a single millisecond.
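For instance, an effectively unconditional jump could be written with a one-millisecond wait (the step number here is illustrative):

UNTIL TIME 1 JUMP STEP 2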
The specified step to JUMP to can be either before or after the current STEP number. That is, a JUMP that is specified in an UNTIL statement at the end of STEP 5 could cause execution to skip backwards to an earlier step, or to anywhere later in the protocol.
It is possible to specify a jump to the same step that the JUMP clause itself is in - in this case, the same step is restarted from its beginning. However, this is rarely a desired behavior, and may have unintended consequences during steps in which stimuli are active and participants' looking time is being tracked. More commonly, to repeat the most recent part of an experiment, you would jump back to at least one STEP prior.
In combinations of terminating conditions (AND and OR types), one JUMP clause can be used per UNTIL line. For an AND type UNTIL statement such as the one sketched below,
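(The original example statement is reconstructed here as a hedged sketch; TOTALLOOK LESSTHAN is an assumed condition name, not confirmed BITTSy syntax, so check the terminating conditions chapter for the real form.)

UNTIL KEY X AND TOTALLOOK LESSTHAN 5000 JUMP STEP 11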
both terminating conditions would need to be met in order for a jump to the specified step to occur. (That is, the experimenter would have to press the X key during a trial in which the participant had looked at the stimulus for less than 5 seconds. Only then would STEP 11 be the next step to be executed.)
OR combinations are composed of two terminating conditions listed on consecutive lines. Whichever condition is satisfied first determines where execution goes next.
JUMP clauses are optional - omitting one always means simply progressing to the next STEP in order. You can leave them out whenever that is the intended behavior, including within a combination of terminating conditions.
Sometimes, in combinations of terminating conditions, a human reader might find it clearer if you still include a JUMP clause that just goes to the next step (i.e., UNTIL KEY C JUMP STEP 6 in the example above), even though you could equivalently leave it out. You may see some examples like this in this manual!
JUMP is a very powerful command that can achieve very useful new effects, but it also introduces the possibility of creating new types of errors if you are not careful! When validating protocols, BITTSy is not able to check for errors that could occur because steps are executed out of order. These errors could cause unintended behavior, or could make your experiment crash.
You will need to think through all of the ways you've set your experiment to progress, and make sure that all of them are well-formed. Below are some of the types of errors that you should look for when using JUMP:
Placement of start and stop flags for trials and phases. These primarily function to mark sections for analysis, so if one path through the experiment does not have starts and stops happening at all the right times, or has some skipped or missing, it is not always obvious when testing the study's execution. Whenever a JUMP is specified in a step where a phase and/or trial is currently in progress, check that all possible next steps lead to a Stop statement for that trial or phase. You can also label and save logs from test runs through the different possible paths in the study, and generate reports from them to check whether phase and trial information was recorded as you intended. If you're missing start or stop flags, you may have missing trials, trials that last far too long or contain stimuli that they shouldn't, or a trial embedded within another trial, or a phase within another phase (these last two possibilities can make the reporting program crash).
Dynamic tag selections and use in action statements. A tag cannot be used in an action statement before it has been defined in a previously-executed STEP. So if a JUMP clause can allow your steps to happen out of order, ensure that steps where dynamic tags are created occur beforehand in all possible orderings. Otherwise, some runs of the experiment will end abruptly with an error message about the tag not existing. Whenever possible, it is easiest to prevent this type of error by establishing persistent group selections and tag assignments (such as the condition assignment for the participant) at the very beginning of an experiment, and making trial stimuli selections immediately before they are presented.
Creating an experiment that doesn't end. If your JUMP clauses allow you to jump backwards or repeat sections of the study, check that you don't accidentally create an infinite repetition, or any series of jumps that would result in never reaching the end of the protocol!
Ordering of OR combinations. For OR combinations of terminating conditions, where different conditions result in jumping to different parts of the experiment, be careful that they are checked in the correct order. UNTIL conditions are checked in the order that they are listed. See the section below about skipping a phase based on habituation status for an example of where this can be extremely important!
This section walks through a variety of cases in which JUMP clauses are particularly useful, or create experiment structures that cannot be achieved with a BITTSy protocol that progresses through steps in a strictly linear fashion. These examples are meant to illustrate some possibilities opened up by using JUMP, but are by no means exhaustive!
Sometimes you want an option to abort a phase or loop, but not by pressing the Escape key, which ends the whole experiment - you just want to stop what's currently happening and move on to a different point in the protocol. In a simple loop, you may be able to do this with an alternate loop terminating condition. But in other cases, this is best achieved using JUMP.
The example below runs three trials, each selecting a video to play out of a different group. The pattern is looped an additional 8 times, for a total of 27 trials.
If a participant is fussy or inattentive, we might want to let an experimenter skip to a different phase and see how that one goes instead, just by pressing X on the keyboard. If that decision should only be made at the end of one of these sequences of three trials, we could specify that as an alternative loop terminating condition:
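That loop step might look like the sketch below (step numbers are illustrative; UNTIL 8 TIMES repeats the three-trial pattern 8 more times unless X has been pressed by the time the loop step is reached):

STEP 7
LOOP STEP 1
UNTIL 8 TIMES
UNTIL KEY X JUMP STEP 8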
But since loop terminating conditions are only checked when the loop step is reached, the experimenter could press the X key during trial #4 but not have the loop end until after trial #6 was complete. That might not be acceptable, particularly if the trial videos are quite long. Instead, adding a keypress with a JUMP statement as a new step terminating condition within the loop lets the experimenter end the loop immediately by jumping outside it, to STEP 8. This change to STEP 1 is shown below; the change would also be made to steps 3 and 5.
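A sketch of the modified step (the video action statement, group name, and trial length are illustrative):

STEP 1
VIDEO CENTER (FROM group1 RANDOM)
UNTIL TIME 20000
UNTIL KEY X JUMP STEP 8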
Sometimes you may want the ability to repeat the same trial again - for example, in a familiarization or training phase of an experiment, if the participant isn't paying attention or if a trial is started early by experimenter mistake. A JUMP clause can be used to repeat trials according to an experimenter's decision during the study session (i.e. a particular key is pressed to repeat the trial), or according to external or looking-time criteria.
In the example below, the experimenter must decide after a trial whether to continue the experiment normally (by pressing C) or to repeat the trial (by pressing X). Because the jump goes to STEP 2 and does not repeat STEP 1, where the stimulus video is randomly selected, the same stimulus will be shown again on the repeated trial.
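A hedged reconstruction of this example (group and tag names, the trial length, and the video action syntax are illustrative):

STEP 1
LET trialvideo = (FROM videos RANDOM)

STEP 2
Trial Start
VIDEO CENTER trialvideo
UNTIL TIME 15000

STEP 3
Trial End
VIDEO CENTER OFF
UNTIL KEY C JUMP STEP 4
UNTIL KEY X JUMP STEP 2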
Rather than deciding after a trial is over, you could specify criteria that can be met during a trial to end the trial early and then repeat it. Here, it is important to ensure that Trial Start and Trial End flags are placed properly with respect to JUMP statements. We'll need to make sure that the initial presentation still properly ends with a Trial End flag, even when it is cut off early, so that we don't end up with a trial starting inside of another trial (which is problematic for making reports!). To do this, we can adapt the example from before to make two separate steps that end the trial: one to go to when the trial ends normally (STEP 5), and one that is only executed when the experimenter decides to end the trial early, and is otherwise skipped over (STEP 4). A second JUMP clause in STEP 4 happens effectively immediately, restarting the trial by going back to the start of STEP 2.
It is important to note that repeated trials are not distinguished in any way in standard BITTSy reports of session information or looking time. BITTSy simply numbers each trial by how many Trial Start flags it has encountered so far within the session - so if trials were repeated like in these examples, they would show up in reports as trials with consecutive numbers and the same stimulus displayed, and both would be included in reported summary measures. If your protocols allow for cut-short or repeated trials that you would want to exclude, you must recalculate summary measures manually - or you can write a custom reporting function that handles trials differently whenever the log file records a JUMP step being executed, in whatever way is appropriate for your study.
Similar to repeating a trial, there may be situations where you'd like to repeat a whole phase of the experiment - for example, to re-run a familiarization phase if the experimenter feels the participant is not yet comfortable with how the task works.
In the example below, four familiarization trials are played. At the end of the phase, the screen stays blank until the experimenter selects whether to continue on to the test phase as normal (pressing C) or to repeat the familiarization phase (pressing X).
In the above example, it's important for the experimenter to know that they are supposed to make a choice when the phase ends, and which keys to press for which phase! Reminders can be put in comment lines (any line that begins with a #, which is ignored by BITTSy when validating and executing protocols) at the beginning of a protocol, so that they are visible after the protocol is loaded. The name of the current phase is also displayed in the experimenter's window, so signaling them with a new phase name may make the timing of this choice more obvious.
If you wanted instead to let the experimenter decide mid-trial to stop and restart the familiarization phase, you could adapt the example like below. Like the second example in the previous section on repeating a trial, we define two separate steps that end the trial, with STEP 3 - the option that also ends the phase and restarts it - only happening if the experimenter presses X during a trial.
Sometimes, when a particular condition is met, a phase or section of the experiment can be skipped entirely. For example, in habituation studies, participants who do not meet criteria for habituation are typically excluded from all analyses of looking time in the test phase. Sometimes it is desirable to show the test trials anyway, if there are only a couple of them, so that the parent observing the study can see it end as originally described. But if it would be preferable in your study to not make the participant go through any extra trials when you know the data won't be used, you can use a JUMP statement to skip to the end of the study depending on the participant's habituation status.
The example below is the habituation phase of the word-object mapping habituation example walk-through. The only change is the JUMP clause added to the loop terminating condition in STEP 12 that specifies when the habituation phase should stop if the criterion has not been reached. If twenty habituation trials have been displayed and the participant has not reached the specified looking-time reduction criterion, we can skip to the very end of the protocol without playing any test or post-test trials.
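A sketch of the modified loop step (the step number for the end of the protocol is illustrative; the rest comes from the walkthrough):

STEP 12
LOOP STEP 8
UNTIL CRITERIONMET
UNTIL 19 TIMES JUMP STEP 30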
Remember that terminating conditions in an OR combination (like those in STEP 12 above) are always checked in the order they are written, and the first one that is satisfied is the one that will progress the experiment. This can be extremely important when one of them has a JUMP statement! If the two UNTIL statements in STEP 12 were in the opposite order, then whenever a participant met the habituation criteria on the last possible trial, we would check UNTIL 19 TIMES first, and end up skipping over the test phase without ever evaluating that the participant had actually habituated and should have gone into the test phase instead. Listing UNTIL CRITERIONMET first ensures that this end condition is always checked first, and habituation can still be achieved on the last possible trial.
You can use JUMP statements to take different actions depending on a participant's behavior during a particular step. This capability is the crux of conditioned headturn experiments, but it can be used flexibly outside of this paradigm to reinforce a particular behavior, or to respond to a selection that the participant makes.
The example below begins by having the experimenter press L or R to indicate when the participant turns to look to the left or right side of the testing booth. Whichever key is pressed first, a video will be displayed to a screen on the same side where the participant is now looking, and the video stops when they look away briefly. This lets us display stimuli in a way that is responsive to a cue from the participant.
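A hedged sketch of this branching structure - the group name, the lookaway condition name, and the times are assumptions, not confirmed syntax:

STEP 1
UNTIL KEY L JUMP STEP 2
UNTIL KEY R JUMP STEP 4

STEP 2
VIDEO LEFT (FROM videos RANDOM)
UNTIL SINGLELOOKAWAY 2000

STEP 3
VIDEO LEFT OFF
UNTIL TIME 1 JUMP STEP 6

STEP 4
VIDEO RIGHT (FROM videos RANDOM)
UNTIL SINGLELOOKAWAY 2000

STEP 5
VIDEO RIGHT OFF

STEP 6
# both branches continue here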
This is a simple example, with only a couple of steps in each of the two "branches" that can execute based on different criteria - but it is possible to construct much more complex sequences of steps that are skipped in one case and executed in another, with more than two branches, branches within other branches, etc. BITTSy's STEP numbers, which only label steps and do not reflect their order of execution when JUMP clauses are being used, can start to look confusing in these cases - you'll want to keep track of which step numbers represent important events in your experiment, as well as the context in which they happen. Constructing a flowchart is very useful for this! The one below represents the example above.
This kind of setup could be used similarly to "reveal what's behind a door" by having the experimenter record a child's selection (either from gaze direction or pointing behavior) with a keypress, then displaying an animation on the screen that the child picks.
Some experiments may require a particular sequence of conditions to be met, and other conditions to not be met in-between, before moving on to another phase of the experiment. For example, in a study with a "correct" or desired response, you might require three correct responses in a row before moving on from the training phase.
Often such studies will be constructed so that experimenters can remain blinded to which participant behaviors are considered correct. But for a simple example of this kind of protocol structure, imagine a study where during each training trial, the experimenter waits for one of two behaviors, and records either a C keypress for "correct" or M for "mistake." The participant must get three correct trials in a row to keep going in the study, and training trials continue until they achieve this, or get 10 wrong in a row.
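A hedged sketch of the streak-tracking skeleton described below (stimulus action statements and the 10-mistakes-in-a-row escape are omitted for brevity; # comment lines are ignored by BITTSy):

STEP 1
# trial with a current streak of zero
UNTIL KEY C JUMP STEP 3
UNTIL KEY M JUMP STEP 1

STEP 3
# trial with a current streak of one
UNTIL KEY C JUMP STEP 5
UNTIL KEY M JUMP STEP 1

STEP 5
# trial with a current streak of two
UNTIL KEY C JUMP STEP 7
UNTIL KEY M JUMP STEP 1

STEP 7
# three in a row achieved - next phase begins here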
STEP 3 is a trial like any other, yet it is only executed when the participant has a current streak of one correct response. Another correct response leads to a jump to STEP 5, which is only executed when the participant has a current streak of two correct responses in a row. And a correct response recorded in STEP 5, bringing us to three correct responses in a row, is the only way to get to STEP 7, which begins the next phase of the experiment. But an incorrect response in either STEP 3 or STEP 5 breaks the streak and lands the execution back in STEP 1, which represents a current streak of zero.
This structure can be visualized in a flowchart like below:
Studies with familiarization or training phases often require the participant to accumulate a certain threshold of looking time toward a set of stimuli before moving to the next phase. Sometimes you only care about total time attending to the set as a whole, and it doesn't matter whether the participant looks much longer at some than others. But sometimes you want the looking time to end up relatively balanced. In these cases, once one stimulus reaches the threshold, you can use a JUMP statement to repeat another one that's below the threshold, until the participant meets the required looking time for that one too.
This is commonly needed in headturn preference procedure experiments with an initial exposure phase that involves a couple of passages. Typically the passages alternate trials or are ordered randomly - but if the participant reaches the required exposure for one passage very early on, we don't want them to keep hearing that passage more and more while trying to reach the threshold listening time on the other passage. The example below is based on the familiarization phase of this headturn preference example, but rather than alternating trials and looping in STEP 6 until both audio files reach 25 seconds of listening time, we only alternate trials until one of them does, then execute either STEPs 7-10 or STEPs 11-14, depending on which audio file still needs more listening time.
BITTSy's main window has a button for Advanced Settings, located below the boxes for the basic run information. As of version 1.32, there are two settings located here.
These settings are located here because they affect only a subset of users; if your lab needs to change them, be sure to document that. Settings that you change here will persist when you close and reopen BITTSy, but whenever you download a new version, you will need to set them in the advanced settings again.
There are two options for rendering video stimulus files - hardware rendering, and software rendering. One of these options may give you smoother playback of video files than the other.
By default, BITTSy forces your computer to use software rendering, which is smoother and faster for most users. If you have trouble with video files skipping frames, stalling, or appearing to freeze briefly at the beginning or end, you may want to uncheck this option to try hardware rendering, and see if there is an improvement.
The other likely culprits for poor video playback are using non-Windows-native video formats or not having necessary codecs installed on your computer. Check if your video plays smoothly in Windows Media Player, and consider converting your video to one of our suggested file formats.
Waves MaxxAudio is a sound driver service that is known to have built-in audio enhancement effects that critically disrupt experiments in BITTSy. If this is present on your computer (as is indicated in the above screenshot), you should pause or disable the service, or follow steps to remove it from your computer. See this page for more information.
When opening BITTSy, you will see a screen like this.
Version number - located in the top left.
Main user interface buttons - Load Protocol File, Validate Protocol File, Event-Log Save Location, and Run Experiment; located in order on the left-hand side.
Session information - Participant ID, Participant DOB, Experimenter Name (small boxes in the middle of the window), and Experimenter Comments (at the bottom left). These are filled in by the experimenter with any necessary information, and make up the opening section of a run's log file.
Protocol File - located on the right side. Once a protocol has been loaded, a copy is displayed here for reference.
Errors in Protocol File - located in the middle left. Once a protocol is validated, any errors that BITTSy encounters will be displayed here.
Run progress information - Current Phase, Current Trial, and Most Recent Key (located in the middle). This information is updated throughout the execution of the experiment and displayed for the experimenter's reference.
Click Load Protocol File and browse to the correct folder. Select your protocol file and open it. Once a protocol is open, it will be displayed in the Protocol File box. This allows the experimenter to verify that they have selected the correct protocol, or review if there are special key assignments for live coding. (You can include comments at the beginning of your protocols with instructions or reminders for the experimenters in your lab; they will see them at this point!)
Validating a protocol checks that it is properly formed. This includes: that it has the necessary sections, that optional settings have been defined in a valid way, the devices named in your starting definitions are available (connected to your computer, powered on, and being recognized), your files are located in the indicated folders, steps of your experiment are properly formatted, and that tags that are referenced in action statements were previously defined. In other words, validation checks for things that would make your experiment crash in the middle if they weren't fixed before starting to run it!
Whenever BITTSy encounters a validation error, the error message(s) will be displayed in the Errors in Protocol File box to the left. For example, in the screenshot below, the protocol's DISPLAYS ARE starting definition defines one display for stimulus presentation and names it CENTER, but when the protocol was validated, the display was not powered on, so Windows reported to BITTSy that there was no additional display available to be assigned the CENTER label.
Whenever BITTSy encounters validation errors, it will prevent you from running the protocol. You will need to close out of BITTSy, fix the error(s), and reopen BITTSy to reload the protocol file and try validating it again.
When there are no errors found, a pop-up will tell you validation was successful, and you can move on with prepping to run the experiment.
When errors are found in your experiment, BITTSy will report the line number of the protocol file on which the error occurs. Notepad++ is a free text editor for Windows that we recommend using for BITTSy - by default, line numbers are always displayed in the left margin, so you can go straight to the line number where BITTSy encountered the issue!
Validation primarily checks for formatting, and that the individual lines of your protocol have been specified with all of their fields being valid. Because protocol files can be written flexibly, and BITTSy doesn't recognize any particular protocol as a template of an established experimental paradigm, it will not display any warning or error about whether your protocol properly implements a specific paradigm.
Trial and phase start and end flags are considered optional, so BITTSy will not warn users when protocols lack them. However, misplaced or missing start/end flags for trials and phases can affect the execution of your experiment, particularly for habituation studies, and trial markers define sections in which key presses for live-coding should be expected and logged. Reviewing your protocol to ensure these are correctly placed is very important for any such experiment.
It is also important to verify that selection from groups, particularly via loops, produces valid available selections for the entirety of the experiment. For example, if a loop includes a TAKE statement from a group with 5 items, but the looped section can execute a total of more than 5 times, your experiment will encounter an error and halt when there are no valid selections remaining. Loops that are nondeterministic in the number of times the looped steps repeat (e.g. loops with an UNTIL condition that depends on the participant's looking time) should have an additional UNTIL condition that limits the maximum number of iterations to an appropriate amount, preventing the error that would occur when no new stimulus can be selected.
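As a sketch of this safeguard, using the habituation criterion as the nondeterministic condition and a TIMES cap as the limit (step numbers illustrative): if the looped trial step uses a TAKE statement on a 5-item group, capping the loop at 4 extra iterations keeps selection from ever failing, even if the looking-time condition is never met.

LOOP STEP 2
UNTIL CRITERIONMET
UNTIL 4 TIMES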
It is also possible to specify a loop or step whose terminating condition cannot be satisfied, or could run for an impractical amount of time when certain conditions are met. BITTSy does not check for these during validation. For example, a step could require the participant to accumulate a set amount of time to a tag, but that tag is not active during that step - so they cannot accumulate any looking time at all. This step will run forever, and you will have to end execution by pressing escape. You could also test a particularly fussy child, and at a certain point wish to just throw out that trial and move on to the next part of the experiment - unless you specify an alternative end condition, such as a maximum amount of elapsed time or the experimenter pressing a key, the step will continue until the participant can accumulate the required looking time.
Once your experiment is all set up and ready to be run, you might wonder why you still need to validate the protocol file every time you want to run it. Since the file is no longer changing, shouldn't it be okay to run without checking the protocol for errors, and shouldn't you be able to bypass validation?
We feel that it's a lot safer to require validation every time. Your protocol file could have been edited unintentionally, or a required file for your experiment could have been moved - or you could have simply forgotten to turn on the displays or lights used in your experiment. Without a validation step, BITTSy would not find these errors until it came time to execute that line that had a typo in it, or display a file that it couldn't find, or send a signal to a monitor or light that can't receive it - and each of these things would be a serious error in an experiment or could cause BITTSy to crash and the session to end early. Validation does a good job of checking the things that are most likely to change while a study is already finalized and running - namely, having the appropriate devices on and recognized by your computer at the time of running a session, and not having any files or folders accidentally moved or renamed - and gives you a chance to fix them while you're still getting ready for your participant, rather than having to drop the session for a technical error.
The following information is saved automatically by BITTSy into the header of the log file from running your experiment, and does not need to be specified by the experimenter before starting the run:
filename and path of the protocol file (typically specifying the study and study version)
date and time that the experiment was started
BITTSy version that was used to run the protocol
The following fields can be filled in by the experimenter with whatever information is important for your lab/study:
participant ID
participant date of birth
experimenter's name/initials
additional comments (e.g. study version, task order, participant group)
Each of these boxes on the main BITTSy screen can be filled in or changed prior to running the experiment, but once the experiment starts, they will be locked from editing.
After validating, the next button to use is to specify where the event log will be saved, and what to call it. This log file will contain all of the information about the run, including the session information above, timing and stimulus presentation order, and looking times (if your experiment is live-coded). Click this and navigate to the folder you would like to save the log into, and specify a name for it. Typically, logs would be saved as something that uniquely identifies that session, such as a participant number. This run of the setup protocol is just being named "test". There is no need to specify a file extension - ".txt" will automatically be added.
Every log file will have a time stamp added to the end of its name that is the date and time the experiment was started. For example, the first text file in the file view above was a log file that was generated on 9/14/19 at 11:15 AM. This makes it impossible for any two log files to have the same name, and prevents new logs from overwriting or being confused with older logs in the same folder.
The log file is created as soon as Run Experiment is clicked, and BITTSy saves the basic session information into it. As the experiment progresses, BITTSy saves information to the log file automatically. When the experiment ends, no additional action is required to save the data.
Information is saved to the log file in chunks, so if BITTSy encounters an error during execution, it is possible for the in-progress step to not be logged. When the experimenter ends the study early by pressing escape, all lines that are waiting to be logged are pushed out at that time, so that no information from the end is missing. The log file will also record the time that the experiment was ended, and whether it was halted prematurely by the experimenter.
Once these earlier steps have been completed, you are ready to start by clicking the Run Experiment button. This will start the execution of your experiment, beginning at STEP 1.
Throughout the run, BITTSy will display the most recent key that the experimenter has pressed. Whenever a trial or phase is ongoing, that information will also be displayed so that the experimenter can see how the study is progressing.
When you reach the end of the experiment, and do not encounter any errors along the way, you will see this pop-up.
If you need to stop the experiment early, you can do so by pressing the escape key.
Click OK, and then exit out of BITTSy.
Do not exit out of BITTSy while running an experiment without first pressing the escape key and waiting for the pop-up. This is important for two reasons. First, this ensures that any information from the run that has not yet been saved is written to the log file, and that the log file is properly saved and closed without risking file corruption. When the pop-up appears, you know that the process of writing information has been completed, and you can safely close the program. Second, closing the window without stopping the experiment can cause a problem with controlling the lights on subsequent runs of BITTSy experiments, because BITTSy does not fully stop running when its window is closed mid-experiment. (See here for info on how to fix this, if you accidentally do this!)
Throughout runs of protocols, BITTSy is capable of receiving keypresses from an experimenter to indicate when and where the participant is paying attention. During the run, BITTSy will use this experimenter input to track the amount of time spent looking toward/away from active stimuli, and use this information to evaluate when looking-time controlled steps or loops should end. In the log file produced by the run, the precise timing of each keypress is logged, as well as a running record of durations spent attending in the direction of the currently-active stimuli within each trial.
Experiments that use looking-time-based terminating conditions and looking-time-controlled loops, as is typical in headturn preference and habituation studies, must necessarily have the experimenter observing the session and live-coding the participant's attention via keypresses. In these paradigms, live-coded looking time is not only the primary data collected, but also controls the progression of the experiment during its execution.
Experiments that use fixed-length trials and loops do not require live coding, but BITTSy will log experimenter keypresses in the same way regardless of whether the execution of the experiment depends upon looking time. Paradigms such as preferential looking typically rely on finer-grained analysis of looking time than can be achieved with live coding, and are coded offline from a video of the session. However, for any study where live coding is desired or appropriate, BITTSy will record keypresses, and the experimenter's recorded looking times can be obtained after the fact from the log file.
Keys that the experimenter presses during live coding are typically meant to denote which direction the participant is looking. In BITTSy, individual keys can be assigned to the sides defined in your experiment, so that when the experimenter presses a key that denotes a particular direction, the duration of that look will be logged as a look toward the stimuli being presented on that side. See the section on starting definitions for how to define these in your protocol file.
BITTSy will still log and respond to keys that are not assigned to a SIDE in your protocol. This is useful when you wish to have experimenter input progress the experiment without associating this action with any active stimuli. For example, you can play an attention-getting video on loop as an inter-trial period, and have the experimenter press a particular key to terminate that step and begin a trial. In the logs, you could recover the time a participant spent in this period before the experimenter started the trial, if you wished. As another example, you could specify a key as an alternative terminating condition that allows the experimenter to skip ahead to another independent section of the experiment if they judge the participant is too fussy to complete the current section.
However, this can cause problems in live coding if the experimenter accidentally presses the wrong key. Keys that are not assigned to a side are assumed when pressed to mean not any side (i.e. AWAY), and are treated as time spent looking away from all active stimuli. Coding mistakes from accidental keypresses can be corrected by inspecting the log file afterwards (as long as what the experimenter intended is known), but when they occur in a step or loop that is looking-time controlled, they will affect the way the experiment progresses. When deciding which keys will be used for live coding a protocol, it is important to consider what set of keys will be easiest and least confusing for experimenters to use, and to ensure that study runners have adequate practice live coding with that set of keys before running any test sessions.
The time of a keypress is recorded as the time it is pressed down (as opposed to the time it is released, or "key up"). Pressing and holding down a key is therefore no different from pressing it and immediately releasing it - BITTSy only registers the initial press. Once a key is pressed, BITTSy assumes that looking continues in that direction until the experimenter presses a different key. We call this an "explicit away" coding scheme because the experimenter must explicitly press a new key to mean the participant has looked away.
Some other experimental software packages will allow you to code with a single key, where anytime that key is pressed denotes looking in that direction and anytime it is released denotes looking away from that direction. BITTSy does not currently support this. Instead, the experimenter would need to press a different key that was set to mean a different direction, or a more general AWAY, to end the look.
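As an illustration of this scheme, below is a minimal sketch in Python of how look durations fall out of a keypress sequence. The event list, key assignments, and final "END" marker are all hypothetical placeholders - real timing comes from your log files - but the accumulation logic mirrors the explicit-away scheme just described: each keypress ends the previous look and starts a new one.

# Minimal sketch of "explicit away" look accumulation (hypothetical data).
keypresses = [(0, "C"), (2500, "L"), (6100, "X"), (7400, "L"), (10000, "END")]
key_to_side = {"L": "LEFT", "R": "RIGHT", "C": "CENTER"}  # unassigned keys = AWAY

looks = []
for (start, key), (end, _next) in zip(keypresses, keypresses[1:]):
    side = key_to_side.get(key, "AWAY")  # X is not assigned, so it codes AWAY
    looks.append((side, end - start))    # a look lasts until the next keypress

print(looks)  # [('CENTER', 2500), ('LEFT', 3600), ('AWAY', 1300), ('LEFT', 2600)]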
Setting up live coding and training experimenters to live code depends entirely on your requirements and preferences for your experiment. We do have a few general recommendations:
We recommend training coders to live-code in the same way (e.g. using the same number of fingers, having the same starting/resting locations). Work with your team of experimenters to determine what they find easiest and most comfortable, and what best limits coding mistakes.
Be consistent across live-coded studies in your lab as to whether the experimenter is coding looking directions from their own perspective or from the participant's perspective, whenever these differ.
Make sure that your key assignments relate to your BITTSy side names and actual physical locations in your testing room in a way that is clear and consistent for coders, as well as for you as you process data.
If you are setting up a cross-site study, coordinate with partner labs to use the same key assignments across testing sites to help experimenters be as consistent as possible.
If you are new to running visual fixation/HPP studies in your lab, your study runners will need to practice and typically meet a set reliability standard before testing subjects. Check whether collaborators can share session videos and their coder data for training purposes. Ideally, these are videos of sessions run in BITTSy, in which case they can also share the protocol for your coders to use for practice.
There are a couple things to know about report files that can make them easier to read and analyze.
Some text fields will be too wide to view in Excel - expand them to read them. Numbers/dates that are too wide to display are visually replaced by ##### in Excel. Double-click on the cell and then click outside it, and the column width will adjust to show the entire number/date.
Within each report type, information is arranged systematically. Sorting order is defined by the column types from left to right, with columns further right used to "break ties" in earlier columns:
Phases are sorted chronologically
Trials are sorted chronologically
Groups are sorted alphabetically
Tags are sorted alphabetically
For example, in the looking time by groups and tags report, the columns from left to right are: Phase, Group, Tag, LookingTimePerTrial. This means that we'll sort first by phase: looking times for all the groups and tags that were presented in a single phase will be arranged together. Averages always go at the beginning of each phase, and are ordered alphabetically by group name. Then within each phase, we'll sort by the group that the tag belonged to. Within each group, we'll report the tags in alphabetical order.
Whenever you need to pull measures from individual reports for further analysis, this consistent ordering helps you easily select the right values. However, any log that is from a session that was ended early or was accidentally included from a different experiment will have a different number of lines or contain information that should not be included. If you are using formulas to pull values from every X rows, you should always verify that these line up correctly, as in the sketch below. If you have multiple protocols for different versions of a single study, keeping your group and tag names consistent across protocols and merely changing their members or file paths helps ensure that you can easily analyze these versions together, since the row numbers in which certain pieces of information appear will be consistent across participants in those different study versions.
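A minimal Python sketch of such a consistency check might look like the following. The file name and the expected row count are hypothetical placeholders - set them from a known-good report for your own study:

import csv
from collections import Counter

EXPECTED_ROWS_PER_SUBJECT = 14  # hypothetical; count the rows in a complete, known-good report

with open("study_report.csv", newline="") as f:  # hypothetical file name
    rows = list(csv.reader(f))

# Assumes the subject identifier is in the leftmost column, as in multi-log reports.
counts = Counter(row[0] for row in rows if row and row[0])
for subject, n in counts.items():
    if n != EXPECTED_ROWS_PER_SUBJECT:
        print(f"Check {subject}: {n} rows (expected {EXPECTED_ROWS_PER_SUBJECT})")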
In addition to the standard report types, the reporting module has an option for running a custom report. You will be able to write an additional report function that is used whenever you select this option on the user interface.
After the reporting module reaches its official release, we intend to make it open-source. Instructions will be added to this page on how to add your own custom function or modify existing functions for your lab's general needs!
Just like the reporting module's existing reports, you can pull the necessary information to calculate and report measures of interest directly from log files, or via the full log report type. You can use any programming language in which you have expertise (e.g. Python, MATLAB, Visual Basic in Excel) to parse, filter, and locate/calculate values from log files. Feel free to make use of the community for any questions about working with log files or for inquiries about similar custom scripts made by other researchers!
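As a starting point, a minimal Python sketch for reading a log into (timestamp, event) pairs might look like this. The file name is hypothetical, and the assumption that a single space separates the timestamp from the event text should be checked against one of your own logs before relying on it:

# Split each log line into a timestamp and the event text (format assumptions noted above).
events = []
with open("subject01_log.txt") as f:  # hypothetical file name
    for line in f:
        line = line.strip()
        if not line:
            continue
        timestamp, _, event = line.partition(" ")
        events.append((timestamp, event))

# Example: pull out experimenter keypress lines (wording like "C pressed" appears in logs).
keypresses = [(t, e) for (t, e) in events if "pressed" in e]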
Do you have a custom report function you have created to use with BITTSy logs? If you are willing to share it with other BITTSy users, we would love to make it available as a resource. You never know who will find it useful! Get in touch, and we will link your info here!
Log files are created as you begin to run a protocol, and record the timing of every event in the study session. One of the steps before running a protocol is to specify the save location and name of the log file for that session.
The first portion of any log file contains information about the session, most of it drawn from the blanks for subject number, date of birth, experimenter name/initials, and comments. It also indicates which version of BITTSy was used to run the protocol, the name and location of the protocol file, and the test date and run time. This information is saved to the log file immediately upon clicking to run the protocol.
An example of the header section of a log file is below (personal information has been replaced with ▢ symbols).
The main contents of the log file follow these two opening sections, and detail what happened in the study session and when.
As soon as the protocol file begins execution, we define the start of a phase (called "Train"), and start the center light blinking. This continues until the experimenter presses the key C. (In the log below, you can see that it takes the infant a few seconds to orient - note the gap in timestamps between the first "stimuli light blink" line and "C pressed".) Then the center light is turned off, and we randomly select a side (LEFT or RIGHT) from a group called trainingsides, and start to blink the light on that side (here, LEFT was chosen).
While you'll most often be working with summary reports rather than looking at log files directly, there are some important things to understand about how log files are generated and what they record.
Log files remain open throughout a session, and events are written to the log in real time (or with a slight delay). Writing to log files as events happen, rather than generating a full log file after the run, helps ensure that as much information on a study session as possible is available in case something unexpected occurs, such as encountering an error. Sometimes the most recent few log lines are held in a "buffer" and written to the log file in batches. The log file is not finished and closed until execution ends. This means it is important to always end execution properly (by pressing Escape) before closing out of BITTSy - it ensures that the final batch of lines can be written to the log.
Header information is written immediately, and cannot subsequently be changed within the user interface's text boxes. The text fields on the user interface for the subject number, date of birth, experimenter, and comments should all be filled in with any necessary information prior to running the study so that they are included in the log file.
When BITTSy runs an experiment, it generates a detailed log of all events and their timing. Afterwards, these log files can be used by a separate reporting program to generate summaries of what happened in a single study session, or in a set of study sessions (i.e. logs from all participants in a study).
Reports are output into CSV files, which can be imported into other programs for statistical analysis.
The BITTSy reporting module is open-source. You can work from the source code directly, or follow the instructions below to download it as an executable.
Download the reporting module from the link provided. Like the main BITTSy program, it does not require any installation. Simply unzip the folder, navigate into the subfolders, and locate the .exe file. (As with the main program, you may wish to create a shortcut to move to the desktop, and may need to take some additional steps to allow Windows to run the program the first time you try to launch it.)
The user interface of the reporting program looks like this.
The reporting module can be used to generate a summary of events or looking behavior on a single study session by examining one log file document, or to average across study session logs by examining an entire folder. With the folder option, individual summaries are also available below the averages section. Note that on some report types, averaging across participants does not make sense; on these, analyzing a whole folder is simply a faster way of obtaining all of the individual summaries.
To analyze a log file individually, select Document and click the Load Document button. Navigate through the file browser pop-up to pick the desired log file.
To analyze a whole folder of log files, select Folder and click the button underneath (which will switch to saying Load Folder) to select the desired folder. This is especially useful for analyzing a whole study, and will generate averages across participants on measures of interest in addition to summaries of individual sessions. Note that prior to analyzing the folder, you should ensure that it contains:
all the log files you wish to analyze,
which were all from a single study,
and no other text files, including the protocol file or logs that were generated during test runs of the protocol.
All reports that are generated by the reporting module begin with specifying which files were included in the report. When looking at data files that result from running a folder, it is always easy to see whether the folder contained all the intended participants, and whose data is included in the reported averages.
You can check off multiple report types at once to get different types of information about your log file/folder of log files.
Sometimes certain stimulus types are presented in an experiment, but aren't important to look at in a report. These are often visual stimuli that serve as a fixation point while separate audio files are played - flashing lights during an HPP study, or a checkerboard pattern on the screen during an auditory-focused central fixation or habituation study. In these cases, we will end up analyzing attention toward the audio files, and attention toward the visual fixation stimulus is completely redundant. Therefore, reports that analyze looking time toward particular stimuli allow you to exclude certain stimulus types from consideration. This helps remove redundant information and sometimes makes it easier to work with the resulting CSV files.
When you check off a report type that allows you to restrict stimulus types, you will see checkboxes appear in the stimulus types section. Any stimulus type that is not present at all in your protocol will obviously not be present in the reports - so there is no need to uncheck anything that doesn't occur. But if there is a stimulus type present that isn't important to include in reports, you can uncheck that box so that it will not appear.
Whenever restriction of reported stimulus types is available for some selected reports and not others, a note will appear underneath explaining which reports your stimulus type selections will affect.
Analyzing a single log (with one or multiple report types), or a folder of logs with a single report type, produces a single CSV file. When you click the Save Location button, you can specify where this file should be saved and what to name it.
When analyzing a folder with multiple report types, the different reports will be saved as separate CSV files, as this is generally more readable. In this case, the following window will appear when you click the Save Location button. Instead of naming a single CSV file, you will choose a folder to save into, and make a "filename stub". The CSV files for each report will be named with the stub and the name of the report.
Once your source document/folder, report types, and save location are specified, you can click Generate Reports, and the CSV file(s) for your data reports will be created.
Additional reports can be generated immediately afterwards. A save location/file name must be specified every time, but other selections persist until they are changed.
Below is a description of all the report types included in the reporting module. We have tried to define reports that summarize the most commonly-needed information across different types of infant and toddler testing procedures. Sometimes, running multiple reports may best suit your needs. You may also find that none of these capture a measure that you need. If so, we have resources to help you better understand the formatting of log files so that you can create a custom report - either to run within the reporting module, or as a separate script.
All report types open with basic information about their generation via the reporting module and the log file(s) included. In this opening section, one row is output per log file included in the report. Each line includes:
Subject ID
Date and time of experiment session
Date and time when report was generated
BITTSy version used to run the experiment session
Reporting module version used to generate the report
File path of the log file included in the report
In all report types, all time measurements are reported in milliseconds.
When a report is generated from multiple logs, an additional column is added on the far left for subject number. The report begins with averages across subjects (when applicable for that report type) and then the individual summaries by subject.
See the notes on reading report files for more information about how reports are formatted.
This report type contains information from the header section of the log file(s) being analyzed. It includes:
Subject ID
Subject date of birth
Experimenter name
Date and time of experiment session
Name and filepath of the protocol file run during the session
Any comments entered by the experimenter prior to the study session
This report type can be used to simply list which stimuli were presented on which trials, and to what location. It includes:
Phase in which the stimulus occurred
Trial in which the stimulus occurred
Stimulus type (audio/video/image)
Side that the stimulus was presented to
This report type generates some general attention measures across phases of the experiment. It includes:
Phase being reported
Average looking time per trial within the phase
Total looking time to all stimuli within the phase (this does not double-count attention toward two stimulus types that are active in the same location at the same time)
Average time from a stimulus becoming active to the participant orienting to that stimulus. Note that this is across all stimulus types (audio/video/image/lights), and includes all stimuli that were active during trials.
This report breaks down some of the same attention measures from the overall looking information report above by trial. Even reported per trial, it is "overall looking" because all looks to a single stimulus are totaled together. It contains:
Phase being reported
Trial number within the phase
Tag name of active stimulus (light_blink or light_on for lights)
Group name that the tag was selected from, if applicable
Side that the active stimulus was presented on
Time from when the stimulus started to when the experimenter first indicated the participant was looking in that direction. This time is reported as zero for cases in which 1) the participant was already looking in a direction when a stimulus started on that side, or 2) when the stimulus presentation begins simultaneously with the participant's look being recorded in that direction (as is the case for trial audio in HPP).
Total time the stimulus was active (this includes time outside the trial, if the stimulus started earlier or continued after - as seen in the HPP example below, where the side lights serve as attention-getters before a trial begins)
Total time the participant spent looking toward the active stimulus within that trial.
This report gives a more detailed view of the participant's looking behavior within trials, allowing you to see the lengths of individual looks. It contains:
Phase the look occurred in
Stimulus that was active in the direction the look was recorded in (looks in directions where no stimuli were active are not reported)
Group that the stimulus was selected from, if applicable
Trial number the look occurred in (looks that occur outside a trial are not reported)
Length of the look in milliseconds
This report breaks down the time that the participant spent looking toward media (audio/video/images) in different locations, ignoring what the particular stimuli were. Having a strong bias to look to one side over another, in a study in which stimulus type and presentation location are randomized, is sometimes a reason to exclude a participant: it is unclear whether their preference for the stimuli on that side was due to the features of those particular stimuli, or simply because it was more comfortable to maintain attention to that side based on how they were positioned. This report contains:
The phase being reported
How many tags were presented to a given location within that phase
Total time participant spent looking toward tags in that location
Average time spent looking toward a tag in that location as a single look
Average time spent looking toward a tag in that location per trial that had active stimuli in that location
Percent of time participant spent looking toward that location out of total looking to all active stimulus locations within that phase
In the example below, media was presented LEFT and RIGHT, but not to any other sides. The information above is listed for each side, and the order in which sides are reported is alphabetical by side name. Percentage looking to the different locations (#6 in the list above) is always placed after all individual sides are listed; columns here are also alphabetical by side name.
In calculating the percent time looking to different locations, BITTSy will include all locations that had active media during trials in that phase. It does not account for how many trials occurred on each side, or what stimulus types were present. You may wish to recalculate percentages using the other information in this report, to correct for unbalanced numbers of trials or to exclude a presentation location that had fundamentally different stimuli in your study (e.g. center training/reinforcement trials interspersed in a study where left/right trials tested learning of the training stimuli.)
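As a brief illustration with made-up numbers, recomputing a left/right preference while excluding a center location could look like this in Python:

# Per-side totals as they might appear in this report (values are made up).
total_looking = {"CENTER": 21000, "LEFT": 54000, "RIGHT": 38000}

# Exclude CENTER (e.g., training/reinforcement trials) from the comparison.
sides = {side: ms for side, ms in total_looking.items() if side != "CENTER"}
denominator = sum(sides.values())
percents = {side: 100 * ms / denominator for side, ms in sides.items()}
print(percents)  # {'LEFT': 58.7, 'RIGHT': 41.3}, approximately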
This report gives looking time information for particular tags (across all times the tag was presented) and groups of tags. It includes:
Phase the group/tag was presented in. Groups/tags that are presented in multiple phases have their looking time averaged only within each phase rather than across phases.
Group name (or names, comma-separated, if a tag was selected from multiple groups in the course of a single phase)
Tag name (or (average) to report average looking time across all tag members of that group that were presented in the phase)
Looking time per trial of the phase in which the group/tag was presented
This report contains:
Current phase
Current time, as expressed from the start of the experiment (when the experimenter clicked Run Protocol)
Current time, as expressed from the start of the current trial (times that coincide with trial start/stop events are reported as Trial Start and Trial Stop)
Most recent key pressed by experimenter at the current timepoint
Stimuli that were active at the current timepoint, comma-separated
Sides that the active stimuli were present on, comma-separated and corresponding in order to their respective stimulus/tag in the ActiveStimuli column
Window size
Window overlap
Criterion reduction
Basis chosen (LONGEST or FIRST)
Window type (SLIDING or FIXED)
Basis minimum time
Information on the habituation phase (identified as the one containing the CRITERIONMET condition - it can be named as you wish)
Whether or not habituation criteria were met
How many trials before the participant reached habituation (n/a if they did not habituate)
Trial and looking time information across phases
Current phase
Trial number within the current phase
Active trial stimulus tag
Group the tag was selected from, if applicable
Side the tag was presented on
Time from when the stimulus started to when the experimenter first indicated the participant was looking in that direction. This time is reported as zero for cases in which 1) the participant was already looking in a direction when a stimulus started on that side, or 2) when the stimulus presentation begins simultaneously with the participant's look being recorded in that direction
Total time the stimulus was active (this includes time outside the trial, if the stimulus started earlier or continued after)
Total time the participant spent looking toward the active stimulus within that trial.
The next section of a log file records the settings that are important for analyzing looking time and habituation. These can be a useful reference, especially when default values are used for some settings, as they may or may not be explicitly stated in the protocol file itself.
In the example below, the protocol did not specify any changes from the default settings. None of the habituation-related settings were utilized in this study, but their values are consistently recorded in log files.
The example log below is from the start of a headturn preference study similar to the HPP sample protocols provided with this manual.
Every line in a log file begins with a timestamp, accurate to the nearest millisecond. These are used by the reporting module to generate summaries of the events in the study session and calculate looking times. Some lines (particularly those for beginning the presentation of stimuli) also end in a number denoting the elapsed milliseconds since the start of the experiment, with an even higher degree of accuracy.
There are a lot of events to log in a typical study - in the example above, we logged 20 events before even starting the first trial! BITTSy does not make any assumptions about what information will be important or unimportant for later analyses, so we opt for logging all possible information. As a consequence, log files are often quite long, and not terribly human-readable. This is why we supply a set of standard report types that read log files and output commonly-used measures suitable for analyzing HPP, preferential looking, central fixation, and habituation studies.
The last lines of a log file denote how execution stopped (ended, by reaching the end of the protocol; prematurely, by the experimenter pressing the Escape key; or unexpectedly, by BITTSy encountering an error). This can be reviewed to verify that an experiment ran fully to completion, especially in the case of missing lab-internal notes from an experimenter about how a session went. Note that the line specifying an unexpected end can only be written by BITTSy if the error is not so critical that it causes the program to close or lose access to the log file. Execution errors will typically end in BITTSy displaying a pop-up with an explanation of the error as the session is ended. (For example, this will occur if your protocol attempts to make a selection from a group in which no items are eligible for selection.) However, issues such as closing out of BITTSy without first ending the experiment, having your computer suddenly shut down due to power loss, or encountering an error that causes BITTSy to crash can result in this final line not being written to the log (see #1 above). Any log file that is missing a line denoting how the experiment ended should be assumed to have ended unexpectedly, and may additionally be missing experiment events that happened immediately beforehand.
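If you batch-process many logs, this check is easy to script. Here is a sketch in Python, where the end-of-run phrases are placeholders - copy the exact wording from a known-good log produced by your BITTSy version:

END_MARKERS = ("experiment ended", "experiment was halted")  # placeholder wording

def ended_cleanly(path):
    # Returns True if the last non-empty line records how the experiment ended.
    with open(path) as f:
        lines = [line.strip() for line in f if line.strip()]
    return bool(lines) and any(marker in lines[-1].lower() for marker in END_MARKERS)

print(ended_cleanly("subject01_log.txt"))  # hypothetical file name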
In the right-hand column, there is a list of available report types. See the descriptions of each report type to help you determine which reports are appropriate and useful for your study.
The reporting module was designed for use with logs from BITTSy version 1.12 and later, and will not parse earlier logs correctly. If you were an early beta user and have older logs to analyze, contact us for a simple script to pull out trial and looking information. Please include the version number of BITTSy used to run your set of logs.
Tag name of the stimulus. This is always a tag, even if the action statement that played it used a group.
This report includes the same information as above, with an additional column containing the number of looks toward that stimulus within the trial. Looks whose total length is shorter than the threshold are not counted as a separate look, but are combined with another look in that direction. Looks that are in progress when a trial stimulus becomes active (the participant is already looking in the direction where the stimulus appears) are counted within the number of looks for that trial.
The example above comes from a protocol similar in structure to the HPP name-recognition study, in which participants hear 6 audio files repeated across 3 blocks of trials. In between trials, audio attention-getter clips (in the group ags) are played to help orient the child toward the blinking side lights - these end as trials start, and thus do not appear in any trials or accumulate any looking time. In this report, we can see from the testblock1, testblock2, and testblock3 averages (our three blocks of trials, in chronological order) that looking time was greater toward the start of the experiment. Our main items of interest in this HPP study are the average looking times for the name and foil audio files presented in the test phase.
This report generates a row for every 50 milliseconds during each of your experiment's trials, and reports what was happening at that time by filling in time between events in the log. This can be used to generate a time-course of looking behavior in any experiment. Keep in mind that it is a rough time-course! This granularity of 50 ms was chosen as a conservative estimate of the temporal precision of live coding. However, inaccuracies due to experimenter reaction times still exist, and we generally recommend offline coding of looking behavior for studies, such as the preferential looking paradigm, that typically require more fine-grained analyses of looking time.
Trial in which the time was reached (timepoints outside of trials are not reported)
Key-to-side assignments are not included in log files, so to know which stimulus a participant was looking at based on the most recent experimenter keypress, you would need to match keys with the sides to which they were assigned in your protocol file. Then, some simple conditional formulas in Excel can help you generate a column of tag names that were attended to across timepoints in the study.
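If you prefer scripting over spreadsheet formulas, the same matching can be sketched in a few lines of Python. The file name, column headers, and key assignments below are hypothetical - check them against your own protocol and an actual time-course report before use:

import csv

key_to_side = {"L": "LEFT", "R": "RIGHT"}  # hypothetical; must match your protocol's key assignments

with open("timecourse_report.csv", newline="") as f:  # hypothetical file name
    for row in csv.DictReader(f):
        # Column names here are assumptions - check your report's actual header row.
        side = key_to_side.get(row["MostRecentKey"], "AWAY")
        active = dict(zip(row["Sides"].split(","), row["ActiveStimuli"].split(",")))
        print(row["CurrentTime"], side, active.get(side))  # attended tag, or None if away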
This report is designed especially for studies that have a habituation phase using the CRITERIONMET terminating condition. It has three sections, in addition to the standard header.
Habituation settings, consisting of:
The example report above comes from a run of this protocol.
This report simply outputs the log file lines that were loaded into the reporting module into a CSV, with the date, time, and event separated into columns. This can then be analyzed with a custom script or macro, if desired.
Release notes
Added the JUMP command
Any in-progress audio is now turned off when the end of the protocol is reached and the experiment ends
Release notes
Bug fixes from version 1.32
If any of the monitors defined in DISPLAYS ARE {…} have DPI not equal to 100%, a parsing error explains the issue and gives a link to further instructions on how to change display scaling, if needed.
KEY is now a valid LOOP terminating condition.
Any keywords can be used in the SIDES ARE {…} definition, not just LEFT/CENTER/RIGHT. If custom sides are used, they each need a key assignment, and all other starting definitions need to use these sides.
Halting an experiment while a look is in-progress will now result in that look being logged in the event log, with duration recorded as "in_progress".
If an experiment is halted while a light is blinking, BITTSy now properly turns off the light. Previously, BITTSy handled this correctly for light ON actions, but BLINK would stay on and leave a background process open even after closing out of BITTSy.
Documented issues
Audio that was still playing when the end of the protocol was reached would continue playing after the experiment had ended, which could cause BITTSy to crash and log information to fail to save. Only the combination of audio stimuli (not lights, video, or images) and this experiment end condition (not ending by pressing the escape key, or ending in an error message) produces this behavior. Adding an AUDIO OFF command before the end of the protocol prevents this issue. (Resolved in version 1.5)
Release notes
Bug fixes from version 1.31
Log files now have the BITTSy version number in which the session was run recorded in the header information
Documented issues
Looks logged in a direction with no active stimuli could be "left over" and logged later to that side once a stimulus became active. Because trial starts clear in-progress looks, this issue only occurred when the stimulus appeared outside of a trial. (Presumed present in older versions; resolved in version 1.33)
Release notes
Notes entered in the comments box on the user interface are saved to the header information in the detailed log (comments box is locked once the experiment starts).
New default values for COMPLETELOOK and COMPLETELOOKAWAY of 100 ms (previously 1000 ms).
Terminating an experiment early by pressing the escape key during the execution of a loop step could sometimes result in BITTSy crashing and failing to log data. This has been resolved.
Documented issues
Looks toward active stimuli that are in progress at the end of a trial will have 250 ms added to their logged time. Trials that contain this error can be identified in the log by searching for the phrase "Debug: look towards L being handled automatically" (replacing L with whatever keys can correspond to an active direction), and subtracting 250 ms from the last look length logged on that trial. (Presumed present in older versions; resolved in version 1.32)
In some cases, audio stimuli could still be considered active on a side after they had finished and the trial they were in had ended. Audio files were cleared from the active stimulus list by AUDIO <side> OFF commands, but not by trial ends. This could result in a look that took place after the end of the trial being logged as if it were a look to the now-completed audio file. Looks that occur in between trials are properly excluded in standard generated reports, but any custom report function should check for them. (Presumed present in older versions; resolved in version 1.32)
Check that you're opening the correct application file. There are a lot of supporting files that have similar names or look like something you should open, but aren't the main BITTSy program, so this can be confusing! Check what you're trying to open against the screenshots shown here. If you made a shortcut earlier, check that it's correctly referencing this main application file.
Give Windows Defender permission to allow BITTSy to run. If you aren't able to double-click on the BITTSy program to run it, try right-clicking and selecting Open. If you get a blue pop-up trying to warn you that it was downloaded from the internet, click More Info then Run Anyway.
These are warnings about your computer missing a driver for lights or there not being a DMX controller for the lights recognized by your computer. If you aren't using lights, you can ignore these completely, and just click through them to launch BITTSy! If you are using lights, these are critical errors that need to be addressed before trying to validate your protocol - which is why they're checked upon launch. Check that you downloaded the appropriate drivers and firmware (step #6) for your USB-DMX interface box, and that it is connected to your computer.
This is intentional, as a foolproof way of preventing uncaught validation errors or execution errors from one protocol from ever affecting subsequent runs. Close out of BITTSy and open it again so that you can select a new protocol, or load and run the same one again.
BITTSy will display this error whenever your system doesn't have enough displays active, so make sure that your system has the external monitors configured and recognized properly. Check that your display settings in Windows are set to Extend Mode, and all your additional displays are powered on before opening BITTSy. (See this section on display ID numbers for more background on how BITTSy uses displays.)
Protocols that do not require the use of displays should not contain the DISPLAYS ARE starting definition at all. This prevents you from having to turn on unnecessary equipment for every run just so that the protocol avoids this validation error.
There are several possibilities.
Make sure that you have the correct and full file path specified in your protocol for each of the files, and that all of your files are present in the desired folder.
Verify that there are no typos, either in your protocol or in the names of the files themselves.
Make sure you're using backslashes (\), not forward slashes (/) in your file paths.
If you typed up your protocol file on a Mac, there could be an encoding issue that causes important characters for parsing protocols, particularly quotation marks, to be read differently on a PC. There are two types of quotation marks, straight and curved. When you type a quotation mark in TextEdit on a Mac, you get curved ones, but BITTSy expects the default for a PC, which is straight quotation marks. Try deleting your quotation marks and retyping them on your PC in Notepad or another plain text editor. If they look different, this was likely an issue for you. (Use find+replace to fix all of them!)
All the text boxes on the user interface have their contents saved to the log file immediately when you start running a protocol file. This ensures that this important information for identifying the log file is always present, in case an experiment unexpectedly crashes or access to the log file is lost. Once you click the Run button, they are locked from editing, to help make it clear that no further information from there can be saved. After the session, experimenter notes or comments on how the session went can be added manually to the log file by opening it in a text editor, or you can use other paper or electronic documents such as a run book.
If you choose to manually edit log files, be sure that they are always opened only after the experiment has stopped completely, as indicated by the pop-up in BITTSy confirming successful execution or early termination of the experiment after pressing the Escape key. This ensures that BITTSy has completed writing information to the log file, and that there are no conflicts between your accessing the file to make edits and BITTSy finishing logging events from the experiment.
When you open an instance of BITTSy, it checks for the device that controls the lights, and if it's available, takes control of it. Once a program controls this device, that program must signal to give it up before another program can gain control. Since BITTSy is likely to be the only program on your computer you're using with your DMX controller for the lights, the issue is when there are two instances of BITTSy open at once - only one of them can be controlling the lights at any given time.
In BITTSy, the signal to give up control of the lights comes when:
You close out of the BITTSy window without having started to run a protocol
The experiment executes successfully (end of the protocol file is reached)
BITTSy encounters an execution error and interrupts the run with an error message
You press the Escape key to end the study early
If you are already running a protocol, this signal does not come when you click the red X to close the main BITTSy window, or when you right-click and close BITTSy from the task bar. These only close the user interface, and don't fully stop background processes such as the control of the lights. So if you close BITTSy improperly in the middle of an experiment, then reopen BITTSy to run a new protocol, you may get strange behavior from the lights: they may respond to keypresses as if you were still running the previous experiment, or they might not respond at all.
You can fix this by fully stopping the previously-running instance of BITTSy that is still controlling the lights. Press Ctrl+Alt+Delete and go into Task Manager, select any instance of BITTSy that appears, and End Task.
Ending an experiment before closing the main BITTSy window is important both for avoiding this problem and for ensuring that log files are complete and not at risk of being corrupted. If you need to close out of BITTSy before the end of an experiment, always press the escape key first, and wait for the pop-up that execution has been stopped!
Simply open BITTSy again, and the lights should all turn off. (If they don't, see question above - your previous run may not have ended fully.)
Submit a bug report/request for help, and a member of our team will contact you!
Got a question or issue? Check our F.A.Q.!
We are also adding longer descriptions and solutions for issues that can come up with setting up your system to run experiments in BITTSy, whether or not they are issues with BITTSy itself. Check the topics below if you are having trouble with these aspects of setting up your system, or contact us if there is something our team can help you look into.
Setting up your displays and using the setup protocol to label them appropriately for BITTSy is covered earlier in this manual, but this section explains the assigning of display ID numbers and side labels in more depth, and what kinds of changes to your system can affect your DISPLAYS starting definition.
When you have multiple displays connected (and configured to extend rather than duplicate each other), your operating system will assign them each an ID number. In Windows 10, you can see what the system is assigning to which screen by going into the Display Settings (right click on the desktop and select this option), then clicking Identify, as shown on the image below.
This will cause number labels to appear temporarily in the corner of each of your connected displays. If you would like to change which number is associated with which monitor, you can click and drag the numbered boxes that appear above the Identify button. However, be careful to verify that your changes "stick" whenever your computer is restarted, or your displays are turned off and back on.
Except for when you, the user, specify an order (by clicking and dragging numbers in the Display Settings as mentioned above), Windows will assign display IDs solely by the port where the monitor is plugged into your computer. Windows doesn't "know" that you've arranged your monitors on one side or another of your desk or room - it only knows that there's a cable coming out of that particular display output jack, and a display plugged in on the other end.
This has several important implications for using displays for experiments in BITTSy.
Because the operating system doesn't tell BITTSy anything about where monitors are located, we have to do that within the protocol file (that is, we need a DISPLAYS ARE starting definition!)
BITTSy relies on how the operating system numbers monitors, and if the operating system switches the display IDs for any reason, BITTSy will need to be given an updated DISPLAYS definition to preserve the matching of physical devices to their locations.
Whenever you validate a protocol, BITTSy will fetch the display ID numbers from your operating system, and will match them, in order, to the side names that appear in that protocol's DISPLAYS starting definition.
For example, let's say the experimenter's monitor (where the BITTSy window will be, and no stimuli) had ID 1, a right-side monitor has ID 2, and a left-side monitor has ID 3. BITTSy will take the lowest ID that is not the experimenter's monitor, and assign it the first side name in the DISPLAYS ARE definition. This means that our right-side monitor (#2) will be chosen first, then the left-side one (#3), so the DISPLAYS ARE statement should label those sides in that order.
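Concretely, that scenario calls for a starting definition like the following (an illustrative example; the side names are your choice):

DISPLAYS ARE {RIGHT, LEFT}

RIGHT is listed first because its monitor has the lower display ID (2), and LEFT second because its monitor has ID 3.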
The goal of the setup protocol is to help you identify the display IDs of your monitors, and label them with the appropriate side names so that BITTSy will display images/videos to the intended location whenever you use an action statement. But you can also use Windows to show the display IDs, as described on this page; your DISPLAYS ARE statement should simply list the side names, in order, that correspond to those ID numbers.
The operating system's display ID assignment depends on where the monitor is plugged into your computer. Therefore, anytime that you unplug and replug a display cable, you should run the setup protocol to verify that your system is still assigning the same IDs to the same devices, and that the DISPLAYS starting definition is still in the correct order to match where your displays are positioned in your room.
If you experience monitors swapping IDs for any reason other than unplugging and replugging into your computer, let us know! Some users have experienced this issue in Windows 10, and others not at all. There may be a numbering that is more stable on your system than the default, which you can check by reordering your monitors in Display Settings (or swapping where monitors are plugged in) and checking that the display IDs assigned stay consistent across restarting your computer. Some TVs connected via HDMI appear to take priority over other connected monitors; you can try manually setting them to be a lower ID number. If your computer has an upgraded graphics card, it may matter if you have some displays connected directly to the motherboard/on-board graphics card and others connected to ports that are located on your installed graphics card. It also may matter whether your displays are all powered on before turning on your computer, or whether you're turning them on after the computer has started up.
BITTSy can send an audio file to play over just one speaker out of the available array for your system. It does this by sending the file's audio to the desired speaker at full volume, and silence of the same length to the other speakers. (The silence does not block those speakers from playing another audio file from a subsequent protocol line; the two will be played simultaneously, and since one has no audio information and is completely silent, it does not affect the second file's playback.)
It is possible to experience an effect where audio that is sent to only one speaker is also heard at a lower volume on another speaker - an audio channel crossover effect. This is not caused by BITTSy, but could be a result of 1) a cable/audio jack issue, 2) a sound card driver "audio enhancement" effect, or 3) an issue with your sound card.
If you get a warning about Waves MaxxAudio whenever you open BITTSy, #2 should be fixed first. Skip to the sound card driver section below.
Before you start, make sure the audio file you're using is in one of our suggested file formats. When playing file formats such as .aiff (native to Mac computers rather than Windows), some users find that they can play the file, but it does not play correctly with respect to channel separation, and converting to a .wav format resolves the issue. See this page for more info.
The first possibility is the easiest to check. Unplug the cable that runs from the 3.5mm audio jack on your computer to your speakers and plug in a pair of headphones (which you know work!) instead. Wired earbuds work especially well because you can put in just one at a time to check that the channel that is not supposed to be playing audio is in fact silent.
It's best to try headphones that match the jack type on the computer, either stereo audio (3 metal sections separated by 2 rings) or stereo audio plus microphone input (four sections separated by 3 rings). If your computer has a separate dedicated microphone jack nearby, the audio output jack will be just stereo audio.
Test your audio with these headphones. You can do this easily by playing a test audio file in Windows Media Player - one that you know has sound in only one channel with silence in the other, or completely different content in the two channels. (BITTSy relies on the same codecs and calls as Windows Media Player, so playback should be the same between the two.) You could also run a simple BITTSy protocol that plays a file to just one side, waits for a keypress, then plays to the other.
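If you don't have such a test file on hand, one can be generated with a short script. Here is a sketch using only Python's standard library; it writes a hypothetical file left_only_test.wav containing a 440 Hz tone on the left channel and pure silence on the right:

import math, struct, wave

rate, seconds, freq = 44100, 3, 440
frames = bytearray()
for i in range(rate * seconds):
    left = int(20000 * math.sin(2 * math.pi * freq * i / rate))  # 16-bit tone sample
    frames += struct.pack("<hh", left, 0)  # right channel is always zero (silent)

with wave.open("left_only_test.wav", "wb") as w:
    w.setnchannels(2)   # stereo
    w.setsampwidth(2)   # 16-bit samples
    w.setframerate(rate)
    w.writeframes(bytes(frames))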
If the headphones play audio as intended: You may need to replace the audio cable that runs to your speakers. It may have been damaged, or the sections on its cable may be misaligned with respect to your computer's audio jack. You can test the latter possibility in the same manner described below for headphones. These cables are cheap, so it is worth buying a couple more from different manufacturers and returning whatever doesn't work. If nothing seems to align with your computer, there may have been a manufacturing issue; check if it is under warranty and proceed from there.
If the headphones have the same crossover issue: It's less likely that the issue is with the speaker audio cable itself, but there could be a problem with the audio jack on your computer. Different manufacturers can place the sections and buffer rings (as you see on the diagram above) in slightly different places, and a misalignment between cable and jack can mean that information the computer is sending to one channel can be also received on the very edge of another. This will cause a very faint and often static-sounding version of the audio to be played on the other channel, while a fuller-sounding version plays to the correct one. Try unplugging your headphones slowly and very slightly as you listen, and see if the crossover issue resolves at any point. If this is your issue, there will be a place where you can leave it and both left-only and right-only audio will play cleanly, without crossover and at full volume.
If it doesn't seem to be a jack alignment issue: Check your sound card driver next (below).
Sound card drivers take audio information (such as audio files that BITTSy opens) and format that information into the final signal that is sent via your audio jack and cable to play through your speakers. Any functional sound driver will correctly take in all the specifics of the audio file, and take information from the software that requested the file about how to play it - but importantly, sound drivers can use their own settings within the formatting step to change the final output from being played exactly how the file and requesting program specified.
These are audio "enhancements" and may include rebalancing across frequency ranges (such as a bass boost), balancing perceived loudness across files (rather than playing them with their encoded average intensities), and introducing slight audio channel crossover for a more "natural" sound. These can be nice for listening to music, but for experiments, it is extremely important that stimuli are played with fidelity, exactly as they were designed. And since these audio enhancements are applied at the end, right before the signal is sent to the speakers, BITTSy can't do anything about it - its request to play the file to only one channel, with complete silence on the other, is overridden and remixed to have crossover.
First, ensure that audio enhancements are turned off within Windows 10. See instructions here.
Now that those are disabled, you may still have more places where these kinds of enhancements are turned on, or you may have a sound driver that doesn't have options to turn them off. Check what sound driver(s) is/are active on your computer by opening Device Manager (type it in your search bar) and expanding the "Sound, video and game controllers" menu. There may be several listed, from companies such as Intel and Realtek.
Search for each of these on your computer and see if you have a program installed along with that driver that serves as a control panel for it. In this program, look for where to turn off audio enhancements (search online for support for your specific driver if you need to.)
Waves MaxxAudio is a type of Realtek driver that ships with some Dell computers, and applies a lot of audio "enhancements" but (at present) has no options to turn them off. If you have Waves MaxxAudio on your computer, your only recourse is to uninstall it and revert to a generic Windows sound driver. Because Waves MaxxAudio causes so many problems, BITTSy versions 1.32 and later will specifically check to see if Waves MaxxAudio is installed and display a warning if it detects it on your system, every time you open BITTSy.
The warning dialog contains a link with steps to fix the issue and install the generic sound driver. Once you restart your computer, check that Waves MaxxAudio is really gone (open BITTSy and it will check for you!) Your computer will scan for new drivers on restart, and may find an older version of Waves MaxxAudio that wasn't removed during the uninstall, or could download a new copy of Waves MaxxAudio via Windows Update whenever an updated driver is made available. If this happens, you'll need to repeat the process again until it is completely gone.
Under the Advanced Settings menu in BITTSy, there are also options to pause or disable Waves MaxxAudio. These can be used if you are unable to uninstall it. However, you will need to re-select to disable Waves whenever you download and start using a new version of BITTSy.
It is possible that there are other sound drivers out there that are just as bad as Waves MaxxAudio... this is just the one that several of our beta testing labs encountered. If you have a different driver but are having similar audio problems, you can still use our instructions to disable it and replace it with a generic driver. (If that fixes the issue, let us know what your driver was so we can add it to the bad list!)
A final possibility is that there could be a defect in your sound card that is causing it to have electrical crossover between audio channels. This is rare, because the manufacturing standards for channel crossover at normal playing volumes are set well below human hearing thresholds - there would need to be a major issue to make them even barely audible. But if you've eliminated other possibilities, it may be best to try to replace your sound card. Check warranties, whether your sound card is the standard one for your computer model, and what other sound cards are compatible.
Please use the following citation for BITTSy (APA format). Replace the version number with the version used for your experiment.
Newman, R.S., Shroads, E., Morini, G., Johnson, E.K., Onishi, K.H., & Tincoff, R. (2019). BITTSy: Behavioral Infant & Toddler Testing System (Version 1.xx) [Software]. Available from
Some users have experienced issues playing video or audio files in BITTSy - either failing to play, or with problems like stuttering, stalling, freezing, or missing audio channels. Here are steps to take to resolve these issues.
Check our recommendations for visual and audio stimuli
Ensure your files are compatible with Windows 10 and playable with existing codecs. Particularly if your stimuli were made on a Mac or originally used for studies on a Mac, they may be working poorly with BITTSy simply because they work poorly with Windows. Test this by opening your files in a Windows-native default application such as Windows Media Player and checking whether they play correctly. If they don't, you'll either need to convert to a new format or install a new codec to interpret the file.
Try re-converting your files. Creating video/audio files in a compatible format for your computer requires using compatible options for:
- video container format (e.g. MP4, WMV)
- audio container format (e.g. WAV, AAC)
- video codec (e.g. wmv1/wmv2, msmpeg4)
- audio codec (e.g. adpcm_ima_wav)
Container formats are what you are used to seeing - they're file extensions - and they're easy to specify directly in conversion programs, and to see in your files by viewing their file properties. We recommend WMV and MP4 as video container formats, and MP3 and WAV as audio formats, although others may also work well. Container formats that are Mac-specific, such as AIFF for audio and MOV for video (Apple recently ended support of QuickTime for Windows, which used to help these play), should always be avoided. Sometimes even with the recommended container formats, you can have a video that isn't compatible with your system and may experience playback issues; this is generally due to incompatible codecs. However, unlike with container formats, it is often difficult to verify which video and audio codecs are being used (within conversion software) or were used (for an existing file). If you can create/convert your video files on the same computer that you use to run BITTSy, this is generally a good way to ensure you're always using codecs that are compatible and available on your system. A free video conversion program that gives you fine control over the audio/video settings and encoding is recommended.
Try switching your video rendering settings. Under the advanced settings menu within the main BITTSy user interface, you can select whether to use hardware rendering or software rendering for playing videos. One of these may result in better performance on your system. See for more information.
Try installing additional video codecs on your computer. Finally, it's possible that your videos could play better with different codecs, or that there is a problem with the codecs that came preloaded on your computer. If you try downloading and installing a codec pack, be sure to ask the tech support staff at your institution for guidance and recommendations - many download links out there are actually malware! The K-Lite standard codec pack (downloaded from ) was helpful for some users.
The development of BITTSy was funded by NSF BCS1152109, "New Tools for New Questions: A Multi-Site Approach to Studying the Development of Selective Attention".
, University of Maryland College Park
, University of Toronto at Mississauga
, The College of Idaho
, McGill University
, University of Delaware
Primary investigator -
Lab manager - Emily Fritzson
, University of Arizona
Primary investigator -
Postdoctoral researcher -
& , University of Maryland College Park
Primary investigator -
Lab manager - Emily Shroads
, programmer, 2017-2018
, programmer, 2018-2019
, programmer, 2019-
, technical supervisor
Emily Shroads, project manager
Rochelle Newman
Giovanna Morini
Emily Shroads
(see for more information on how to use this protocol)
Before running any protocol you download or receive from a colleague:
1. Change the to match the locations/IDs of your displays/lights/speakers (see the page on to determine these)
2. Change the file paths of all audio/image/video files to the correct location on your computer (see the page on for more info, and the sketch below for one way to batch-update them)
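Because protocols are plain text files, step 2 can be scripted. Below is a minimal sketch - it is not part of BITTSy, and every path and filename in it is hypothetical; substitute your own, and verify the result before running.

# update_paths.py - point a downloaded protocol at your own stimulus folder by
# replacing the original lab's directory prefix with yours. All paths and
# filenames here are hypothetical placeholders.
from pathlib import Path

OLD_PREFIX = r"C:\Users\OtherLab\Stimuli"   # prefix used in the downloaded protocol
NEW_PREFIX = r"C:\Users\MyLab\Stimuli"      # where the files live on your machine

protocol = Path("wordseg_protocol.txt")     # hypothetical protocol filename
text = protocol.read_text()
updated = protocol.with_name(protocol.stem + "_local.txt")
updated.write_text(text.replace(OLD_PREFIX, NEW_PREFIX))
print(f"Wrote {updated}; check that every file path in it exists on this computer.")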
Headturn preference procedure - word segmentation (based on Jusczyk & Aslin, 1995) Download: | No protocol walkthrough available, but study structure is identical to !
Training phase that requires accumulated looking time to each audio file
Block design
Randomized ordering of sides for lights/audio presentation
Randomized selection of audio stimuli within blocks
Headturn preference procedure - name recognition in noise (based on Newman, 2009) Download: |
Training phase that requires accumulated looking time to each audio file
Block design
Randomized ordering of sides for lights/audio presentation
Randomized selection of audio stimuli within blocks
Preferential looking - word recognition Download: |
Several versions of this protocol are available to demonstrate varying levels of randomization of trial order: 1) full randomization of a stimulus set, 2) presentation in a fixed order, 3) pseudo-randomization of trial order. See the for more on how these differ, and the sketch after this list for the logic behind each ordering!

Preferential looking - fast-mapping Download: |
Training phase of alternating word-object pairs
Test phase with random assignment of object arrangement and random ordering of target words

Habituation - familiarization to a category Download: |
Pre-test, habituation, test, and post-test phases
Habituated to a set of images; individual images chosen randomly without repetition
Tested on a novel image belonging to the same category and a novel image from a novel category

Habituation - word-object pairings (based on Werker et al., 1998) Download: |
Pre-test, habituation, test, and post-test phases
Random assignment to a single word-object pair for habituation
Test phase consisting of all possible word-object pairs in random ordering

Habituation - visual oddity paradigm Download: |
Pre-test, habituation, dishabituation, and post-test phases
Habituation is to a single visual stimulus
Dishabituation phase consists of repeated exposures to the familiarized video and a novel video, with the novel video occurring more rarely than the familiarized video

Conditioned Headturn - speech sound discrimination (based on Werker et al., 1981) Download: |
Training phase in which each trial plays the "change" stimulus
Test phase in which each trial plays the "change" or "control" stimulus
Use of JUMP to present a reward stimulus when the participant correctly detects the change stimulus

Conditioned Headturn - signal detection in noise Download: |
Continuous background noise
Training phase in which each trial contains the relevant signal (audio of a name being called)
Conditioning phase in which the participant must show an anticipatory headturn three trials in a row to proceed to the test phase
Test phase in which each trial either does or does not present the relevant signal
Use of JUMP to present a reward stimulus when the participant correctly detects the signal
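For the word-recognition protocols above, the three trial-ordering schemes differ as follows. This sketch illustrates the logic in Python rather than BITTSy's protocol language, and the eight-trial left/right design and run-length constraint are hypothetical examples.

# trial_orders.py - illustration (not BITTSy syntax) of three ordering schemes.
import random

TARGET_SIDES = ["L", "R", "L", "R", "L", "R", "L", "R"]  # hypothetical 8-trial design

# 1) Full randomization: any permutation of the trial list is allowed.
full_random = random.sample(TARGET_SIDES, len(TARGET_SIDES))

# 2) Fixed order: trials run exactly as listed in the protocol.
fixed_order = list(TARGET_SIDES)

# 3) Pseudo-randomization: shuffle, but reject any order that violates a
#    constraint - here, that the target never appears on the same side more
#    than twice in a row.
def pseudo_randomize(sides, max_run=2):
    while True:
        order = random.sample(sides, len(sides))
        if all(len(set(order[i:i + max_run + 1])) > 1
               for i in range(len(order) - max_run)):
            return order

print(pseudo_randomize(TARGET_SIDES))

Pseudo-randomization keeps the unpredictability of a shuffle while ruling out long same-side runs that could let a side bias masquerade as word recognition.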
Ask to be added to our Slack workspace to chat with other BITTSy users about setup questions, best practices, and how they've used BITTSy in their labs. If you know a BITTSy user who is a member, you can ask them to add you directly; otherwise, just contact us to say you'd like to be added to Slack!