Here, we present a protocol demonstrating a behavioral assay that quantifies how alternative visual features, such as motion cues, influence directional decisions in fish. Representative data are presented on the speed and accuracy with which Golden Shiners (Notemigonus crysoleucas) follow virtual fish movements.
Collective animal behavior arises from individual motivations and social interactions that are critical for individual fitness. Fish have long inspired investigations into collective motion, specifically, their ability to integrate environmental and social information across ecological contexts. This demonstration illustrates techniques used for quantifying behavioral responses of fish, in this case, Golden Shiner (Notemigonus crysoleucas), to visual stimuli using computer visualization and digital image analysis. Recent advancements in computer visualization allow for empirical testing in the lab, where visual features can be controlled and finely manipulated to isolate the mechanisms of social interactions. The purpose of this method is to isolate visual features that can influence the directional decisions of the individual, whether solitary or in groups. This protocol provides specifics on the physical Y-maze domain, recording equipment, settings and calibrations of the projector and animation, experimental steps, and data analyses. These techniques demonstrate that computer animation can elicit biologically meaningful responses. Moreover, the techniques are easily adaptable to test alternative hypotheses, domains, and species for a broad range of experimental applications. The use of virtual stimuli allows for the reduction and replacement of the number of live animals required, and consequently reduces laboratory overhead.
This demonstration tests the hypothesis that small relative differences in the movement speeds (2 body lengths per second) of virtual conspecifics will improve the speed and accuracy with which shiners follow the directional cues provided by the virtual silhouettes. Results show that shiners' directional decisions are significantly affected by increases in the speed of the visual cues, even in the presence of background noise (67% image coherency). In the absence of any motion cues, subjects chose their directions at random. The relationship between decision speed and cue speed was variable, and increases in cue speed had a modestly disproportionate influence on directional accuracy.
Animals sense and interpret their habitat continuously to make informed decisions when interacting with others and navigating noisy surroundings. Individuals can enhance their situational awareness and decision making by integrating social information into their actions. Social information, however, largely stems from inference through unintended cues (i.e., sudden maneuvers to avoid a predator), which can be unreliable, rather than through direct signals that have evolved to communicate specific messages (e.g., the waggle dance in honey bees)1. Identifying how individuals rapidly assess the value of social cues, or any sensory information, can be a challenging task for investigators, particularly when individuals are traveling in groups. Vision plays an important role in governing social interactions2,3,4 and studies have inferred the interaction networks that may arise in fish schools based on each individual’s field of view5,6. Fish schools are dynamic systems, however, making it difficult to isolate individual responses to particular features, or neighbor behaviors, due to the inherent collinearities and confounding factors that arise from the interactions among group members. The purpose of this protocol is to complement current work by isolating how alternative visual features can influence the directional decisions of individuals traveling alone or within groups.
The benefit of the current protocol lies in combining a manipulative experiment with computer visualization techniques to isolate the elementary visual features an individual may experience in nature. Specifically, the Y-maze (Figure 1) is used to collapse directional choice to a binary response and introduce computer-animated images designed to mimic the swimming behaviors of virtual neighbors. These images are projected up from below the maze to mimic the silhouettes of conspecifics swimming beneath one or more subjects. The visual characteristics of these silhouettes, such as their morphology, speed, coherency, and swimming behavior, are easily tailored to test alternative hypotheses7.
This paper demonstrates the utility of this approach by isolating how individuals of a model social fish species, the Golden Shiner (Notemigonus crysoleucas), respond to the relative speed of virtual neighbors. The focus here is on whether the directional influence of virtual neighbors changes with their speed and, if so, on quantifying the form of the observed relationship. In particular, the directional cue is generated by having a fixed proportion of the silhouettes act as leaders and move ballistically toward one arm or the other. The remaining silhouettes act as distractors by moving about at random, providing background noise; the ratio of leaders to distractors captures the coherency of the directional cues and can be tuned accordingly. Distractor silhouettes remain confined to the decision area (“DA”, Figure 1A) by reflecting off of its boundary. Leader silhouettes, however, are allowed to leave the DA and enter their designated arm, slowly fading away once they have traversed 1/3 the length of the arm. As leaders leave the DA, new leader silhouettes take their place and retrace their exact path, ensuring that the leader/distractor ratio remains constant in the DA throughout the experiment.
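The leader/distractor update rules described above can be sketched in code. This is a hypothetical illustration only: the actual stimuli were animated in Processing (see Figure 3 and ref. 7), and the geometry, units, and parameter values below are assumptions for the sake of the example.

```python
import math
import random

# Illustrative sketch of the silhouette update rules; geometry and parameter
# values are assumptions, not the study's actual Processing implementation.

DA_RADIUS = 1.0               # decision area modeled as a disk of unit radius
ARM_LENGTH = 3.0              # length of a decision arm (same arbitrary units)
FADE_AT = DA_RADIUS + ARM_LENGTH / 3.0   # leaders fade 1/3 of the way up the arm

class Silhouette:
    def __init__(self, is_leader, heading, speed):
        self.x, self.y = 0.0, 0.0
        self.is_leader = is_leader
        self.heading = heading    # radians; a leader aims at the destination arm
        self.speed = speed        # body lengths per second
        self.path = [(0.0, 0.0)]  # recorded so a replacement can retrace it

def step(s, dt):
    """Advance one silhouette by dt seconds. Returns True when a leader has
    traversed 1/3 of the arm and should fade out, to be replaced by a fresh
    leader retracing the same path (keeping the leader/distractor ratio fixed)."""
    if not s.is_leader:
        s.heading += random.uniform(-0.5, 0.5)   # random-walk distractor
    s.x += s.speed * dt * math.cos(s.heading)
    s.y += s.speed * dt * math.sin(s.heading)
    if not s.is_leader and math.hypot(s.x, s.y) > DA_RADIUS:
        s.x, s.y = s.path[-1]     # "reflect": bounce back off the DA boundary
        s.heading += math.pi
    elif s.is_leader and math.hypot(s.x, s.y) > FADE_AT:
        return True
    s.path.append((s.x, s.y))
    return False
```

Calling `step` once per frame for every silhouette confines distractors to the decision area while leaders exit ballistically into their designated arm, mirroring the behavior described above.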
The use of virtual fish allows for the control of the visual sensory information, while monitoring the directional response of the subject, which may reveal novel features of social navigation, movement, or decision making in groups. The approach used here can be applied to a broad range of questions, such as effects of sublethal stress or predation on social interactions, by manipulating the computer animation to produce behavioral patterns of varying complexity.
All experimental protocols were approved by the Institutional Animal Care and Use Committee of the Environmental Laboratory, US Army Engineer Research and Development Center, Vicksburg, MS, USA (IACUC# 2013-3284-01).
1. Sensory maze design
2. Recording equipment
3. Calibrate lighting, projector, and camera settings
4. Calibrate visual projection program: background
5. Calibrate visual projection program: visual stimuli
NOTE: Rendering and animating the visual stimuli can also be done in Processing, using the steps below as guides along with the platform’s tutorials. A schematic of the current program’s logic is provided in Figure 3, and additional details can be found in Lemasson et al. (2018)7. The following steps provide examples of the calibration steps taken in the current experiment.
6. Animal preparation
7. Experimental procedure
8. Data analysis
Hypothesis and design
To demonstrate the utility of this experimental system, we tested the hypothesis that the accuracy with which Golden Shiners follow a visual cue will improve with the speed of that cue. Wild-type Golden Shiners were used (N = 16; body lengths, BL, and wet weights, WW, were 63.4 ± 3.5 mm and 1.8 ± 0.3 g, respectively). The coherency of the visual stimuli (leader/distractor ratio) was fixed at 0.67, while we manipulated the speed at which our motion cues (i.e., the leaders) moved with respect to their distractors. Speed levels of the leader silhouettes providing the directional cues ranged from 0-10 BL/s (in increments of 2), which spans the range of speeds typically considered to reflect sustained, prolonged, or burst swimming modes of activity in fish12. At the control level, 0, the leader silhouettes were oriented towards a destination arm among the randomly oriented distractors, but none of the silhouettes moved. The destination arm was chosen at random for each trial by the program. Distance units are in body lengths, defined by the mean standard length of our subjects, and time is in seconds. The current representative analysis focuses on measuring the primary response variables (decision speed and accuracy), yet the design of the experiment also enables investigators to extract added information by tracking subject movements and analyzing their kinematics.
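Because stimulus speeds are specified in body lengths per second (BL/s) but must be rendered in pixels per frame, a unit conversion is needed. The helper below is a minimal sketch: the mm-per-pixel scale would come from the projector calibration steps in the protocol, and the 0.5 mm/px and 60 fps values are illustrative assumptions, not the study's calibration.

```python
# Hypothetical unit-conversion helper; scale and frame-rate values are
# illustrative assumptions, not the study's actual calibration.

MEAN_STANDARD_LENGTH_MM = 63.4   # mean subject standard length reported above

def bl_s_to_px_frame(speed_bl_s, mm_per_px, fps):
    """Convert a speed in body lengths/s into the per-frame pixel step."""
    mm_per_s = speed_bl_s * MEAN_STANDARD_LENGTH_MM
    return mm_per_s / mm_per_px / fps

step_px = bl_s_to_px_frame(2, mm_per_px=0.5, fps=60)  # a 2 BL/s leader
```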
Our fish subjects were housed following section 6 of the protocol. Each subject was exposed to one level of the treatment per day. We randomized both the within-subject treatment level (cue speed) across days and the order in which subjects were tested on each day. Linear and generalized linear mixed effects models (LMM and GLMM, respectively) were used to test the effects of leader silhouette speed on the speed and accuracy with which subjects followed the visual stimuli. Subject ID was included as the random effect in both models.
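As an illustration of this model structure, the LMM for decision speed can be sketched on simulated data. The study's models were fit in R (see Table of Materials); the column names, effect sizes, and random-intercept-per-subject structure below are assumptions for the example, not the study's data or code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative LMM sketch on simulated data; the actual analysis was run in R.
rng = np.random.default_rng(0)
n_subj, levels = 12, [2, 4, 6, 8, 10]
subject = np.repeat(np.arange(n_subj), len(levels))
cue_speed = np.tile(levels, n_subj)
subj_intercept = rng.normal(0.0, 5.0, n_subj)[subject]   # random effect per subject
decision_time = 20 + 3 * cue_speed + subj_intercept + rng.normal(0, 8, subject.size)
df = pd.DataFrame({"subject": subject, "cue_speed": cue_speed,
                   "decision_time": decision_time})

# Fixed effect of cue speed, random intercept for each subject
result = smf.mixedlm("decision_time ~ cue_speed", df, groups=df["subject"]).fit()
```

The fitted fixed-effect slope for `cue_speed` recovers the simulated effect, and the same formula structure carries over to the accuracy GLMM with a binomial response.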
Data and findings
In the absence of any motion cues, Golden Shiners acted as expected and chose their direction at random (stimulus speed = 0; binomial test, nLeft = 33, nRight = 40, proportion = 0.45, P = 0.483). While most subjects showed no signs of stress within the domain and made a decision within the allotted time (5 min), 22% of the subjects showed a reluctance to leave the holding area or enter the decision area. Data from these indecisive fish were not included in the analysis. The remaining 78% of our subjects showed a significant improvement in the accuracy with which they followed the directional stimuli as the speed of those stimuli increased (GLMM, z = 1.937, P = 0.053). Figure 5A shows the nature of this relationship, where we find a 1.2-fold increase in directional accuracy for each increase in stimulus speed level. This relationship is only modestly disproportionate and is not, by itself, suggestive of a threshold response to changes in cue speed. Increases in stimulus speed also led to a significant increase in decision speed (LMM, F1,56 = 4.774, P = 0.033). However, as is evident in Figure 5B, the trend in decision speed was inconsistent and highly variable across stimulus speed levels. What is apparent in these decision speed data is that it took subjects, on average, anywhere from 5-20 times longer to make their decision when the stimuli were moving than when they were not (decision times of 4.6 ± 2.3 s and 81.4 ± 74.7 s for stimulus speeds of 0 and 8, respectively; ± standard deviation, SD). Indeed, without the control level we found no significant change in decision speed as a function of stimulus speed.
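The random-choice control above can be re-computed with a few lines of standard-library Python; for a symmetric null of p = 0.5, the exact two-sided binomial test reduces to twice the smaller tail probability.

```python
from math import comb

# Re-computation of the control result: 33 left vs. 40 right arm choices
# tested against a null of random (p = 0.5) arm choice.

def binom_two_sided_p(k, n):
    """Exact two-sided binomial test against p = 0.5 (symmetric null)."""
    tail = min(k, n - k)
    return min(1.0, 2 * sum(comb(n, i) for i in range(tail + 1)) / 2 ** n)

n_left, n_right = 33, 40
prop_left = n_left / (n_left + n_right)                # ~ 0.45
p_value = binom_two_sided_p(n_left, n_left + n_right)  # ~ 0.48: no side bias
```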
Figure 1. Y-maze domain. A. Image of the Y-maze apparatus for the decision-making test. Annotations represent the following: Holding Area (HA, green), Decision Area (DA, blue), Left Decision Arm (LDA), and Right Decision Arm (RDA). B. Image of the Y-maze and room, with overhead adjustable track lighting and GigE camera placement (only one of the four overhead light strips is visible). C. Image of the Y-maze (side view), including the projector placement, which is locked by the sliding carriage to eliminate movements during, or between, trials.
Figure 2. Background and stimulus calibration. A. Image of the illuminated Y-maze with a uniform background color and a pixel intensity transect (green line) between the holding area and the Decision Area, DA (mean pixel intensity 112 ± 12.78). The light gradient generated by the projector’s bulb (hotspot) is clearly visible. B. Image showing the alignment of the projections with the DA. C. Image of the maze with the filtered background and a solitary silhouette projected in the center of the DA for calibration (size, speed). The addition of the counter-gradient background in (C) results in a darker background (mean pixel intensity 143.1 ± 5.5) and far less spatial variability (the coefficient of variation drops from 11.4 in (A) to 0.03 in (C)).
Figure 3. Schematic of the general flow of operations in the visualization program used in the experiments. For additional procedural details see7.
Figure 4. Experimental trial with both real and virtual fish silhouettes. A. Image of a (live) Golden Shiner leaving the holding area (green circle). B. Image of a (live) Golden Shiner in the decision area (green circle) among the virtual fish silhouettes.
Figure 5. Accuracy and speed of directional responses to changes in the relative speed of motion cues. A. Graph of the decision accuracy with which Golden Shiners followed the ‘leader’ silhouettes, plotted against the stimulus speed (BL/s). B. Graph of the fish decision speed plotted against the stimulus speed (BL/s). Data are means ± standard errors, SE. Groups of 15 virtual silhouettes were randomly distributed throughout the decision area with a 67% coherency level (10 of the 15 silhouettes acted as leaders, the remaining 5 acted as distractors), and we varied the speed of the leaders from 0-10 BL/s. Distractor speeds remained fixed at 1 BL/s at all speed levels, except the control, in which none of the silhouettes moved.
Visual cues are known to trigger an optomotor response in fish exposed to black and white gratings13 and there is increasing theoretical and empirical evidence that neighbor speed plays an influential role in governing the dynamical interactions observed in fish schools7,14,15,16,17. Contrasting hypotheses exist to explain how individuals in groups integrate neighbor movements, such as reacting proportionally to all discernible cues14, adopting a motion-threshold response17, or monitoring collision times18. A first step in testing these alternative hypotheses is validating their underlying assumptions. Here we demonstrated the utility of our protocol in identifying the role that a particular sensory feature can have on guiding directional decisions.
We isolated how individuals of a social fish species, the Golden Shiner, responded to changes in the relative speed of visual stimuli designed to mimic conspecifics in a school. Golden Shiner directional accuracy did improve with increases in the relative speed of the visual stimuli, but the functional relationship between these variables was only marginally disproportionate. The relationship between decision speed and stimulus speed, while significant, was highly variable and inconsistent. The results do demonstrate, however, that a speed difference among images scattered across the field of view of these fish plays an important role in triggering a response and guiding their overt attention. How individuals select among the actions of specific neighbors could be probed with the current design by introducing conflicting directions in the stimuli.
In a recent experiment with Zebrafish, Danio rerio, we found no evidence of indecisiveness in solitary trials7, yet Golden Shiners in this demonstration displayed a greater reluctance to leave the holding area. The differences between these two species may be explained by their life history strategies and the relative strength of their social tendencies (or reliance). Zebrafish appear to display more variable social coherency than Golden Shiners (e.g., facultative vs. obligate schoolers3). The stronger social coherency of Golden Shiners may have contributed to subjects showing higher levels of shyness, or hesitancy, within the domain than their zebrafish counterparts.
The order of the steps in the protocol is subtle yet critical. Balancing the lights, the projector, and the program filter can take more time than anticipated for new domains. In this protocol, lessons learned have been included to reduce setup and light-balancing time, such as the use of track lights that reflect off the wall (not onto the domain), adjustable light controllers, and program-generated filters for the projector. Consider also that what appears visually acceptable to the human eye will not be viewed the same way by the camera and software, so your lighting conditions may require additional adjustments. Even slight changes in monitor angles will change the background gradient; detailed note taking and saving file settings will therefore greatly reduce the likelihood of changes occurring during the experiment. Working through the process from the physical setup to the filtering, as presented here, is the fastest route to success.
The use of a short-throw (ST) projector enables greater spatial flexibility than a monitor, but this approach creates an unwanted visual anomaly called a “hotspot”: a bright spot on the projection surface created by the proximity of the projector’s bulb. In the protocol, Section 4 was dedicated to creating background filters and checking for homogeneous lighting across the domain. The steps provided there help users avoid, or minimize, the unwanted effects of the hotspot by modeling the unwanted gradient and using the model to project an inverse gradient that counters it. Lastly, ST projector models vary; however, image adjustments (rotate, flip, front or rear screen projection) and keystone correction (± 3-5 degrees) are useful features to ensure the desired image fits the domain and can be corrected for distortion.
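The inverse-gradient idea can be illustrated with a toy sketch: model the measured brightness as a radial gradient, then render the difference between a uniform target and that model so that the two sum to a flat field. The grid size, the Gaussian hotspot model, and all parameter values below are illustrative assumptions, not the protocol's actual calibration values.

```python
import math

# Toy hotspot-correction sketch; the Gaussian model and all values are
# illustrative assumptions.

W = H = 41
cx, cy = W // 2, H // 2

def measured_brightness(x, y, base=100.0, peak=60.0, sigma=12.0):
    """Simulated camera reading: a flat background plus the bulb's hotspot."""
    r2 = (x - cx) ** 2 + (y - cy) ** 2
    return base + peak * math.exp(-r2 / (2 * sigma ** 2))

measured = [[measured_brightness(x, y) for x in range(W)] for y in range(H)]
target = max(max(row) for row in measured)        # brightest pixel sets the target
counter = [[target - v for v in row] for row in measured]   # inverse gradient
corrected = [[m + c for m, c in zip(mr, cr)]      # model + counter: uniform field
             for mr, cr in zip(measured, counter)]
```

In practice, the measured field would come from a camera image of the blank domain, and the counter image is what the visualization program renders as its background filter (Section 4).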
Over time, the experimental rooms were updated with new hardware (i.e., cameras, cabling, video cards, monitors) for convenience. It is noteworthy that hardware changes will likely add start-up time to rebalance the lighting and work through any potential program issues. Therefore, it is recommended that hardware be dedicated to a system until the desired experiments are complete. Most challenges have been tied to performance differences between monitors, video cards, and cameras, sometimes requiring alterations to the program code. Since the time of this work, new domains have been developed in which the inner test domain can be removed and exchanged for other test domains. We recommend that this flexibility be considered when designing experimental domains and support structures.
The current protocol allows investigators to isolate and manipulate visual features in a manner that reflects the visual environment expected within a school, while also controlling for confounding factors that accompany exposure to real conspecifics (e.g., hunger, familiarity, aggression)7. In general, computer animation (CA) of virtual fish (i.e., silhouettes) is a practice that is becoming more commonplace due to its distinct advantages in stimulating behavioral responses19,20,21. CA allows one to customize visual cues (direction, speed, coherency, or morphology), while introducing a level of standardization and repeatability in the desired stimulus that exceeds what can be achieved with live animals as the stimulus. The use of virtual reality in behavioral studies, on both animals22 and humans23, is also steadily increasing and promises to become a powerful empirical tool as the technology becomes more available and tractable. Taken together, these virtual approaches also replace and reduce the live animal requirements of animal ethics in science (e.g., IACUC, AAALAC, and ACURO)24, while concomitantly lowering laboratory costs and burdens.
The authors have nothing to disclose.
We thank Bryton Hixson for setup assistance. This program was supported by the Basic Research Program, Environmental Quality and Installations (EQI; Dr. Elizabeth Ferguson, Technical Director), US Army Engineer Research and Development Center.
Black and white IP camera | Noldus, Leesburg, VA, USA | https://www.noldus.com/ | |
Extruded aluminum | 80/20 Inc., Columbia City, IN, USA | 3030-S | https://www.8020.net 3.00" X 3.00" Smooth T-Slotted Profile, Eight Open T-Slots |
Finfish Starter with Vpak, 1.5 mm extruded pellets | Zeigler Bros. Inc., Gardners, PA, USA | http://www.zeiglerfeed.com/ | |
Golden shiners | Saul Minnow Farm, AR, USA | http://saulminnow.com/ | |
ImageJ (v 1.52h) freeware | National Institute for Health (NIH), USA | https://imagej.nih.gov/ij/ | |
LED track lighting | Lithonia Lighting, Conyers, GA, USA | BR20MW-M4 | https://lithonia.acuitybrands.com/residential-track
Oracle 651 white cut vinyl | 651Vinyl, Louisville, KY, USA | 651-010M-12:5ft | http://www.651vinyl.com. Can order various sizes. |
PowerLite 570 overhead projector | Epson, Long Beach CA, USA | V11H605020 | https://epson.com/For-Work/Projectors/Classroom/PowerLite-570-XGA-3LCD-Projector/p/V11H605020 |
Processing (v 3) freeware | Processing Foundation | https://processing.org/ | |
R (3.5.1) freeware | The R Project for Statistical Computing | https://www.r-project.org/ | |
Ultra-white 360 theater screen | Alternative Screen Solutions, Clinton, MI, USA | 1950 | https://www.gooscreen.com. Must call for special cut size |
Z-Hab system | Pentair Aquatic Ecosystems, Apopka, FL, USA | https://pentairaes.com/. Call for details and sizing. |