Here, we present a simplified open-source hardware and software setup for investigating mouse spatial learning using virtual reality (VR). The system displays a virtual linear track to a head-restrained mouse running on a wheel, using a network of microcontrollers and a single-board computer that runs an easy-to-use Python graphical software package.
Head-restrained behavioral experiments in mice allow neuroscientists to observe neural circuit activity with high-resolution electrophysiological and optical imaging tools while delivering precise sensory stimuli to a behaving animal. Recently, human and rodent studies using virtual reality (VR) environments have demonstrated VR to be an important tool for uncovering the neural mechanisms underlying spatial learning in the hippocampus and cortex, because it affords extremely precise control over parameters such as spatial and contextual cues. Setting up virtual environments for rodent spatial behaviors can, however, be costly and require an extensive background in engineering and computer programming. Here, we present a simple yet powerful system based on inexpensive, modular, open-source hardware and software that enables researchers to study spatial learning in head-restrained mice using a VR environment. The system uses coupled microcontrollers to measure locomotion and deliver behavioral stimuli while head-restrained mice run on a wheel, in concert with a virtual linear track environment rendered by a graphical software package running on a single-board computer. The emphasis on distributed processing allows researchers to design flexible, modular systems that elicit and measure complex spatial behaviors in mice, in order to determine the connection between neural circuit activity and spatial learning in the mammalian brain.
Spatial navigation is an ethologically important behavior by which animals encode the features of new locations into a cognitive map, which is used for finding areas of possible reward and avoiding areas of potential danger. Inextricably linked with memory, the cognitive processes underlying spatial navigation share a neural substrate in the hippocampus1 and cortex, where neural circuits in these areas integrate incoming information and form cognitive maps of environments and events for later recall2. While the discovery of place cells in the hippocampus3,4 and grid cells in the entorhinal cortex5 has shed light on how the cognitive map within the hippocampus is formed, many questions remain about how specific neural subtypes, microcircuits, and individual subregions of the hippocampus (the dentate gyrus and cornu ammonis areas CA3-CA1) interact and participate in spatial memory formation and recall.
In vivo two-photon imaging has been a useful tool for uncovering cellular and population dynamics in sensory neurophysiology6,7; however, the typical necessity for head restraint limits the utility of this method for examining mammalian spatial behavior. The advent of virtual reality (VR)8 has addressed this shortcoming by presenting immersive and realistic visuospatial environments to head-restrained mice running on a ball or treadmill, enabling the study of spatial and contextual encoding in the hippocampus8,9,10 and cortex11. Furthermore, the use of VR environments with behaving mice has allowed neuroscience researchers to dissect the components of spatial behavior by precisely controlling the elements of the VR environment12 (e.g., visual flow, contextual modulation) in ways not possible in real-world studies of spatial learning, such as the Morris water maze, Barnes maze, or hole board tasks.
Visual VR environments are typically rendered on the graphics processing unit (GPU) of a computer, which handles the load of rapidly computing the thousands of polygons necessary to model a moving 3D environment on a screen in real time. These processing demands generally necessitate a separate PC with a GPU that renders the visual environment to a monitor, multiple screens13, or a projector14 while movement is recorded from a treadmill, wheel, or foam ball under the animal. The resulting apparatus for controlling, rendering, and projecting the VR environment is, therefore, relatively expensive, bulky, and cumbersome. Furthermore, many such environments in the literature have been implemented using proprietary software that is costly and can only be run on a dedicated PC.
For these reasons, we have designed an open-source VR system to study spatial learning behaviors in head-restrained mice using a Raspberry Pi single-board computer. This Linux computer is small and inexpensive, yet contains a GPU chip for 3D rendering, allowing VR environments to be integrated with the display or behavioral apparatus in varied individual setups. Furthermore, we have developed a graphical software package written in Python, "HallPassVR", which uses the single-board computer to render a simple visuospatial environment, a virtual linear track or hallway, by recombining custom visual features selected through a graphical user interface (GUI). This is combined with microcontroller subsystems (e.g., ESP32 or Arduino) that measure locomotion and coordinate behavior, such as by delivering other modalities of sensory stimuli or rewards to facilitate reinforcement learning. This system provides an inexpensive, flexible, and easy-to-use alternative method for delivering visuospatial VR environments to head-restrained mice during two-photon imaging (or other techniques requiring head fixation) to study the neural circuits underlying spatial learning behavior.
All procedures in this protocol were approved by the Institutional Animal Care and Use Committee of the New York State Psychiatric Institute.
NOTE: A single-board computer is used to display a VR visual environment coordinated with the running of a head-restrained mouse on a wheel. Movement information is received as serial input from an ESP32 microcontroller reading a rotary encoder coupled to the wheel axle. The VR environment is rendered using OpenGL hardware acceleration on the Raspberry Pi GPU, which utilizes the pi3d Python 3D package for the Raspberry Pi. The rendered environment is then output via a projector onto a compact wraparound parabolic screen centered on the head-restrained mouse's visual field15,16, while the behavior (e.g., licking in response to spatial rewards) is measured by a second, behavior ESP32 microcontroller. The graphical software package enables the creation, via a graphical user interface (GUI), of virtual linear track environments consisting of repeated patterns of visual stimuli along a virtual corridor or hallway. This design is easily parameterized, thus allowing the creation of complex experiments aimed at understanding how the brain encodes places and visual cues during spatial learning (see section 4). Designs for the custom hardware components necessary for this system (i.e., the running wheel, projection screen, and head-restraint apparatus) are deposited in a public GitHub repository (https://github.com/GergelyTuri/HallPassVR). It is recommended to read the documentation of that repository along with this protocol, as the site will be updated with future enhancements of the system.
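As an illustration of this serial data flow, the position updates from the rotary encoder ESP32 can be read on the single-board computer with a few lines of Python. The following minimal sketch uses the pyserial package and assumes the microcontroller streams one newline-terminated cumulative encoder count per update at 115,200 baud; the port name and message format are illustrative assumptions rather than the exact HallPassVR serial protocol (see the GitHub repository for the actual code).

```python
# Minimal sketch: reading wheel movement on the single-board computer.
# Assumes the rotary encoder ESP32 sends one newline-terminated integer
# (cumulative quadrature count) per update at 115,200 baud; the port
# name and message format are illustrative assumptions.
import serial

PORT = "/dev/ttyUSB0"  # adjust to the port enumerated for the ESP32
BAUD = 115200

with serial.Serial(PORT, BAUD, timeout=1) as ser:
    while True:
        raw = ser.readline().decode(errors="ignore").strip()
        if not raw:
            continue  # read timed out with no data
        try:
            count = int(raw)  # cumulative encoder count
        except ValueError:
            continue  # skip malformed packets
        print(f"encoder count: {count}")
```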
1. Hardware setup: Construction of the running wheel, projection screen, and head-fixation apparatus
NOTE: The custom components for these setups can be easily manufactured if the user has access to 3D-printing and laser-cutting equipment, or they may be outsourced to professional manufacturing or 3D prototyping services (e.g., eMachineShop). All the design files are provided as .STL 3D files or .DXF AutoCAD files.
2. Setup of the electronics hardware/software (single board computer, ESP32 microcontrollers, Figure 2)
3. Running and testing the graphical software package
NOTE: Run the graphical software package GUI to initiate a VR linear track environment, calibrate the distances in the VR software and the behavior ESP32 code, and test the acquisition and online plotting of the mouse's running and licking behavior with the included Processing language sketch. A worked example of the distance calibration is given below.
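The conversion from encoder counts to virtual track position follows from the encoder resolution and wheel geometry given in the materials table (a 256-step quadrature encoder and a 6" diameter wheel). The sketch below assumes 4x quadrature decoding; the constants and function names are illustrative rather than those used in the actual VR software or ESP32 code.

```python
import math

# Calibration sketch: encoder counts -> position on the virtual track.
# Values are taken from the materials table; names are illustrative.
COUNTS_PER_REV = 256 * 4               # 256-step quadrature encoder, 4x decoding
WHEEL_DIAM_M = 6 * 0.0254              # 6" diameter running wheel
WHEEL_CIRC_M = math.pi * WHEEL_DIAM_M  # ~0.479 m traveled per revolution
TRACK_LEN_M = 2.0                      # virtual linear track length

def counts_to_position(counts):
    """Convert cumulative encoder counts to position (m) on the looping track."""
    distance = (counts / COUNTS_PER_REV) * WHEEL_CIRC_M
    return distance % TRACK_LEN_M  # wrap around at the end of each lap

# With these values, one 2 m lap corresponds to ~4277 encoder counts:
print(counts_to_position(2139))  # ~1.0 m, i.e., mid-track
```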
4. Mouse training and spatial learning behavior
NOTE: The mice are implanted for head fixation, habituated to head restraint, and then progressively trained to run on the wheel and lick consistently for liquid rewards ("random foraging"). Mice that achieve consistent running and licking are then trained on a spatial hidden reward task using the VR environment, in which a single reward zone is presented following a visual cue on the virtual linear track. Spatial learning is then measured as increased licking selectivity for positions immediately prior to the reward zone (an example metric is sketched below).
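Licking selectivity can be quantified in several ways; the following is one simple, illustrative metric (not necessarily the one implemented in the analysis code in the GitHub repository): the fraction of all licks falling within an anticipatory window immediately before the reward zone, compared against the chance level given by that window's share of the track. The 30 cm window length is an assumption for illustration.

```python
import numpy as np

# Illustrative lick selectivity metric; the window size (30 cm) and the
# metric itself are assumptions for demonstration, not the published analysis.
TRACK_LEN_M = 2.0
REWARD_START_M = 1.5                                 # start of the hidden reward zone
PRE_ZONE_M = (REWARD_START_M - 0.3, REWARD_START_M)  # anticipatory window

def lick_selectivity(lick_positions_m):
    """Fraction of licks inside the pre-reward window (chance = 0.3/2.0 = 0.15)."""
    licks = np.asarray(lick_positions_m) % TRACK_LEN_M
    in_zone = (licks >= PRE_ZONE_M[0]) & (licks < PRE_ZONE_M[1])
    return in_zone.mean() if licks.size else np.nan

# Example: strongly anticipatory licking yields values far above chance.
print(lick_selectivity([1.25, 1.3, 1.35, 1.4, 1.45, 0.6]))  # 5/6 ~ 0.83
```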
This open-source virtual reality behavioral setup allowed us to quantify licking behavior as a read-out of spatial learning as head-restrained mice navigated a virtual linear track environment. Seven C57BL/6 mice of both sexes at 4 months of age were placed on a restricted water schedule and first trained to lick continuously at low levels while running on the wheel for random spatial rewards ("random foraging") without VR. Although their performance was initially affected when moved to the VR projection screen setup with a 2 m random hallway pattern, it returned to previous levels within several VR sessions (Figure 4A). The mice that developed the random foraging strategy with VR (six of the seven mice, 86%; one mouse failed to run consistently and was excluded) were then required to lick at an uncued operant reward zone located 0.5 m beyond a single visual location cue in the middle of an otherwise featureless 2 m virtual track (i.e., at the 1.5 m position) in order to receive water rewards ("hidden reward task"). According to the current pilot data with this system, four of the seven mice (57%) were able to learn the hidden reward task with a single visual cue in two to four sessions, as shown by licking near the reward zone with increasing selectivity (Table 1, Figure 4B,C), which is similar to our previous results with a non-VR treadmill17. This rapid acquisition is important for the study of spatial learning, as it allows for the monitoring and/or manipulation of neural activity during critical periods of learning without extensive training. Furthermore, the mice exhibited both substantial within-session and between-session learning (Figure 4C), providing an opportunity to observe both the short-term and long-term neural circuit adaptations that accompany spatial learning. We did not test the learning rate of an equivalent non-VR task, but many classical real-world hippocampus-dependent spatial tasks, such as the Morris water maze, require more extensive training yet present dramatically fewer behavioral trials and, thus, are less suitable for monitoring learning behavior along with neural activity changes.
While a majority of mice in this pilot group (57%) learned the hidden reward task in a small number of sessions, additional mice may exhibit spatial learning over longer timescales, and individualized training should increase this fraction. Indeed, variations in learning rates may be useful for dissociating the specific relationships between neural activity in brain areas such as the hippocampus and behavioral learning. However, we observed that a small proportion of mice (one of the seven, 14%) did not learn to run on the wheel or lick for either non-operant or operant rewards and, thus, could not be used for subsequent experiments. Additional handling and habituation, and a reduction in the general state of stress of the animal through further reinforcement (such as desirable food treats), may help these animals adopt active running and licking during head-restrained behavior on the wheel.
By manipulating the presence and position of the cue and reward zones on intermittent laps on the virtual track, an experimenter may further discern the dependence of spatially selective licking on specific channels of information in VR to determine, for example, how mice rely on local or distant cues or self-motion information to establish their location in an environment. The licking selectivity of mice that have learned the hidden reward location should be affected by the shift or omission of the visual cue along the track if they actively utilize this spatial cue as a landmark, as we have shown in recent work using spatial olfactory cues20. However, even in the simple example presented here, the highly selective licking achieved by the mice (Figure 4C, right) indicates that they encode the VR visual environment to inform their decisions about where they are and, therefore, when to lick, as the reward zone is only evident in relation to the visual cues in the VR environment. This VR system also allows the presentation of other modalities of spatial and contextual cues in addition to the visual VR environment, such as olfactory, tactile, and auditory cues, which can be used to test the selectivity of neural activity and behavior for complex combinations of distinct sensory cues. Additionally, although we did not test for the dependence of task performance on hippocampal activity, a recent study using a similar task, but with tactile cues, showed a perturbation of spatial learning with hippocampal inactivation23; this dependence should be confirmed for the VR hidden reward task performed in this study.
Figure 1: Head-restrained VR hardware setup: Projection screen, running wheel, and head-fixation apparatus. (A) A 3D design schematic of the running wheel and projection screen. (B) Completed VR behavioral setup. The VR environment is rendered on (1) a single-board computer and projected onto a parabolic (2) rear-projection screen (based on the design of Chris Harvey's lab15,16). (3) Wheel assembly. (4) Head post holder. (5) Water reservoir for reward delivery. (C) Top view of the projection screen and behavioral setup. (1) LED projector. (2) Mirror for rear-projecting the VR corridor onto the curved screen. (3) Running wheel. (D) Rear view of the wheel assembly. Wheel rotations are translated by the (1) rotary encoder and transmitted to the single-board computer via an (2) ESP32 microcontroller. (3) A dual-axis goniometer is used to fine-tune the head position for optical imaging. (E) Setup at the level of mouse insertion, showing the (1) head-fixation apparatus and (2) lick port placement over the (3) running wheel surface. (F) Photograph of the (1) lick port attached to the (2) flex arm for precise placement of the reward spout near the mouth of the mouse. Rewards are given via a (3) solenoid valve controlled by the behavior ESP32 (via the OpenMaze OMwSmall PCB). Also visible are the rotary encoder coupled to the (4) wheel axle and (5) the goniometer for head angle adjustment. Please click here to view a larger version of this figure.
Figure 2: VR electronics setup schematic. This schematic depicts the most relevant connections between the electronic components in the open-source virtual reality system for mice. (A) Mice are head-restrained on a custom 3D-printed head-fixation apparatus above an acrylic running wheel. (B) The rotation of the wheel axle when the mouse is running is detected by a high-resolution rotary encoder connected to a microcontroller (Rotary decoder ESP32). (C) Movement information is conveyed via a serial connection to a single-board computer running the HallPassVR GUI software and 3D environment, which updates the position in the VR virtual linear track environment based on the mouse's locomotion. (D) The rendered VR environment is sent to the projector/screen via the HDMI #2 video output of the single-board computer (VR video HDMI). (E) Movement information from the rotary encoder ESP32 is also sent to another microcontroller (Behavior ESP32 with the OpenMaze OMwSmall PCB), which uses the mouse's position to control spatial, non-VR behavioral events (such as reward zones or spatial olfactory, tactile, or auditory stimuli) in concert with the VR environment and measures the mouse's licking of the reward spout via capacitive touch sensing. Please click here to view a larger version of this figure.
Figure 3: Graphical software GUI and behavior. (A) HallPassVR GUI: Four images are selected to tile each of the three spatial patterns along the path (each pattern covering one-third of the track length), or a previously saved combination pattern is loaded. Ceiling and floor images are selected, and Start is pressed to initialize the VR environment on the single-board computer's HDMI output (projection screen). (B) Example virtual corridor created with the GUI parameters shown in A and used for a hidden reward experiment to test spatial learning. (C) Photograph of a head-restrained mouse running on the wheel in the virtual environment shown in B. (D) The top panel shows the online plot of animal behavior in a VR environment from the included Processing sketch used to record and plot the behavioral data. Licks, laps, and rewards are plotted per 30 s time bin for the 30 min session during hidden reward spatial learning. The bottom panel shows the current mouse position (black) and the location of any reward zones (gray) during behavior. Please click here to view a larger version of this figure.
Figure 4: Spatial learning using the graphical software environment. Representative spatial licking data from one animal (A) during random foraging with random cues along the virtual linear track and (B,C) over 2 days of training with a static hidden reward zone at 1.5 m and a single visual cue in the middle of the track. (A) Day 0 random foraging for four reward zones per lap, selected randomly from eight positions spaced evenly along the 2 m virtual linear track. (Left) The average number of licks per spatial bin (5 cm) over the 30 min session (top: VR hallway with random visual stimulus panels). (Right) Number of licks in each 5 cm spatial bin per lap during this session, represented as a heatmap. (B) Day 1, the first day of training with a single reward zone at 1.5 m (red box on the track diagram, top) using a virtual track containing a single high-contrast stimulus at position 0.8-1.2 m. (Left) Average spatial lick counts over the session, showing increasing licks as the animal approaches the reward zone. (Right) Spatial licks per lap, showing increased selectivity of licking in the pre-reward region. (C) Day 2 of the same hidden reward task and virtual hallway, from the same mouse. (Left) Total licks per spatial bin, showing a decrease in licks outside of the pre-reward zone. (Right) Spatial licks per lap on Day 2, showing increased licking prior to the reward zone and decreased licking elsewhere, indicating the development of spatially specific anticipatory licking. This shows that the animal learned the (uncued) hidden reward location and developed a strategy to minimize effort (licking) in regions where it did not expect a reward to be present. Please click here to view a larger version of this figure.
Behavioral outcome | Number of mice | Percentage of mice
Mouse did not run/lick | 1 | 14%
Random foraging only | 2 | 29%
Learned hidden reward | 4 | 57%
Total (N) | 7 | 100%
Table 1: VR spatial learning behavioral pilot results. Seven C57BL/6 mice of both sexes at 4 months of age were progressively trained to perform a spatial hidden reward task in VR. One mouse did not run/lick after initial training (one of the seven mice, 14%), while the remaining six mice learned to run on the wheel and lick for random spatial rewards in the random foraging step of training (six of the seven mice, 86%). Four of the six mice that performed the random foraging behavior subsequently learned to lick selectively in anticipation of the non-cued reward in the hidden reward task (four of the seven mice, 57% of mice in total; four of the six mice, 67% of random foraging mice), while two did not (two of the seven mice, 29%).
This open-source VR system for mice will only function if the serial connections are made properly between the rotary encoder and behavior ESP32 microcontrollers and the single-board computer (step 2), which can be confirmed using the IDE serial monitor (step 2.4.5). For successful behavioral results from this protocol (step 4), the mice must be habituated to the apparatus and comfortable running on the wheel for liquid rewards (steps 4.3-4.5). This requires sufficient (but not excessive) water restriction: mice given ad libitum water in the homecage will not run and lick for rewards (i.e., to indicate their perceived location), while dehydrated mice may be lethargic and not run on the wheel. It is also worth noting that there are alternative methods for motivating mouse behavior without water restriction24; however, we did not test these methods here. For the training procedure, animals that do not run initially may be given ad hoc (i.e., non-spatial) water rewards by the experimenter via an optional attached button, or the wheel may be moved gently to encourage locomotion. To develop random foraging behavior, mice that run but do not lick should first be run with non-operant rewards (behavior ESP32 code: isOperant = 0;, step 4.5.1) until they run and lick for rewards. They can then be run with alternating laps of non-operant and operant reward zones (altOpt = 1;, step 4.5.2) until they begin to lick on operant laps, before moving to fully operant random reward zones (step 4.5.3). The interaction of these training flags is sketched below.
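The following Python sketch reconstructs the reward-zone logic implied by the isOperant and altOpt variables described above. The actual behavior ESP32 firmware is Arduino code, and details such as which laps are non-operant in alternating mode are assumptions here.

```python
# Illustrative reconstruction of the training-flag logic; the names
# isOperant/altOpt come from the behavior ESP32 code, but the surrounding
# structure (and the lap parity used for alternation) is assumed.
def should_deliver_reward(in_reward_zone, mouse_licked, lap_number,
                          is_operant, alt_opt):
    """Decide whether to trigger the reward solenoid on this update."""
    if not in_reward_zone:
        return False
    if not is_operant:                   # isOperant = 0: reward on zone entry
        return True
    if alt_opt and lap_number % 2 == 0:  # altOpt = 1: alternate lap types
        return True                      # assumed: even laps are non-operant
    return mouse_licked                  # fully operant: reward only if licking
```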
While we have provided complete instructions and example results for a basic set of experiments aimed at eliciting one form of spatial learning (conditioned licking at a hidden reward location in the virtual linear track environment), the same basic hardware and software setup can also be modified to deliver more complex visuospatial environments using the pi3d Python package for the Raspberry Pi (see the sketch below). For example, this system can incorporate more complex mazes, such as corridors with variable lengths, multiple patterns and 3D objects, and naturalistic 3D VR landscapes. Furthermore, the behavioral software for the delivery of water rewards and other non-visual stimuli can be modified for other training paradigms by altering key variables (presented at the beginning of the behavior ESP32 code) or by inserting new types of spatial events into the same code. We are happy to advise users on implementing other types of behavioral experiments with this VR setup or to assist with troubleshooting.
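To illustrate the rendering approach, the sketch below draws a minimal textured corridor with pi3d and advances the camera along it each frame. HallPassVR itself builds the hallway from GUI-selected image panels, so the geometry, scale, fixed per-frame step, and placeholder texture file ("wall.png") are assumptions for demonstration only; in the real system, the camera advance is driven by the serial encoder input.

```python
import pi3d

# Minimal pi3d corridor sketch (illustrative, not the HallPassVR renderer).
# "wall.png" is a placeholder texture; geometry and scale are assumptions.
DISPLAY = pi3d.Display.create(w=1280, h=720, frames_per_second=60)
CAMERA = pi3d.Camera()
shader = pi3d.Shader("uv_flat")
tex = pi3d.Texture("wall.png")

# Two textured planes serve as the left/right corridor walls along z.
left_wall = pi3d.Plane(w=20.0, h=2.0, x=-1.0, z=0.0, ry=90)
right_wall = pi3d.Plane(w=20.0, h=2.0, x=1.0, z=0.0, ry=-90)
for wall in (left_wall, right_wall):
    wall.set_draw_details(shader, [tex])

pos = 0.0  # virtual position along the corridor
while DISPLAY.loop_running():
    pos = (pos + 0.02) % 8.0          # fixed step here; in the real system this
    CAMERA.reset()                    # would come from the encoder serial input
    CAMERA.position((0.0, 0.0, pos))  # move the viewpoint down the corridor
    left_wall.draw()
    right_wall.draw()
```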
Immersive VR environments have proven to be a versatile tool for studying the neural mechanisms underlying spatial navigation6,7,8, reward-learning behaviors9, and visual perception25 in both clinical and animal studies. The main advantage of this approach is that the experimenter has tight control over contextual elements such as visual cues and specific spatial stimuli (e.g., rewards and olfactory, auditory, or tactile stimuli), which is not practical in real-world environments with freely moving animals. It should be noted, however, that differences may exist in the manner in which VR environments are encoded by brain areas such as the hippocampus when compared with real-world environments26,27. With this caveat, the use of VR environments allows experimenters to perform a large number of behavioral trials with carefully controlled stimuli, allowing the dissociation of the contributions of distinct sensory elements to spatial navigation.
The complexity of building custom VR setups often requires an extensive background in engineering and computer programming, which may increase the setup time and limit the number of apparatuses that can be constructed to train mice for experimentation. VR setups are also available from commercial vendors; however, these solutions can be expensive and limiting if the user wants to implement new features or expand the training/recording capacity to more than one setup. The estimated cost of the open-source VR setup presented here is less than $1,000 (USD); furthermore, a simplified version for training (e.g., lacking goniometers for head angle adjustment) can be produced for less than $500 (USD), thus allowing the construction of multiple setups for training mice on a larger scale. The modular arrangement of components also allows the integration of VR with other systems for behavioral control, such as the treadmill system with spatial olfactory and tactile stimuli we have used previously20; thus, VR and other stimulus modalities are not mutually exclusive.
This open-source VR system, with its associated hardware (running wheel, projection screen, and head-fixation apparatus), electronics (single-board computer and ESP32 microcontrollers), and software (VR GUI and behavior code), provides an inexpensive, compact, and easy-to-use setup for delivering parameterized immersive VR environments to mice during head-restrained spatial navigation. This behavior may then be synchronized with neural imaging or electrophysiological recording to examine neural activity during spatial learning (step 2.3.7). The spectrum of experimental techniques compatible with VR is wide, ranging from spatial learning behavior alone to combinations with fiber photometry, miniscope imaging, single-photon and multi-photon imaging, and electrophysiological techniques (e.g., Neuropixels or intracellular recording). While head restraint is necessary for some of these recording techniques, the extremely precise stimulus presentation and the stereotyped nature of the behavior may also be useful for techniques not requiring head fixation, such as miniscope imaging and fiber photometry. It should be noted, however, that our capacitive sensor-based solution for detecting licks may introduce significant noise on electrophysiological traces. To avoid such artifacts, optical or other (e.g., mechanical) sensor-based solutions should be implemented for lick detection.
Future improvements to the VR system will be uploaded to the project GitHub page (https://github.com/GergelyTuri/HallPassVR), so users should check this page regularly for updates. For example, we are in the process of replacing the hardwired serial connections between the microcontrollers and the single-board computer with Bluetooth functionality, which is native to the ESP32 microcontrollers already used in this design. In addition, we are planning to upgrade the HallPassVR GUI to allow the specification of multiple paths within each behavioral session, so that key landmark visual stimuli can appear at different positions on different laps. This will allow greater flexibility for dissociating the impact of specific visual and contextual features on the neural encoding of space during spatial learning.
The authors have nothing to disclose.
We would like to thank Noah Pettit from the Harvey lab for the discussion and suggestions while developing the protocol in this manuscript. This work was supported by a BBRF Young Investigator Award and NIMH 1R21MH122965 (G.F.T.), in addition to NINDS R56NS128177 (R.H., C.L.) and NIMH R01MH068542 (R.H.).
1/4 " diam aluminum rod | McMaster-Carr | 9062K26 | 3" in length for wheel axle |
1/4"-20 cap screws, 3/4" long (x2) | Amazon.com | B09ZNMR41V | for affixing head post holders to optical posts |
2"x7" T-slotted aluminum bar (x2) | 8020.net | 1020 | wheel/animal mounting frame |
6" diam, 3" wide acrylic cylinder (1/8" thick) | Canal Plastics | 33210090702 | Running wheel (custom width cut at canalplastics.com) |
8-32 x 1/2" socket head screws | McMaster-Carr | 92196A194 | fastening head post holder to optical post |
Adjustable arm (14") | Amazon.com | B087BZGKSL | to hold/adjust lick spout |
Analysis code (MATLAB) | custom written | file at github.com/GergelyTuri/HallPassVR/software/Analysis code | |
Axle mounting flange, 1/4" ID | Pololu | 1993 | for mounting wheel to axle |
Ball bearing (5/8" OD, 1/4" ID, x2) | McMaster-Carr | 57155K324 | for mounting wheel axle to frame |
Behavior ESP32 code | custom written | file at github.com/GergelyTuri/HallPassVR/software/Arduino code/Behavior board | |
Black opaque matte acrylic sheets (1/4" thick) | Canal Plastics | 32918353422 | laser cut file at github.com/GergelyTuri/HallPassVR/hardware/VR screen assembly |
Clear acrylic sheet (1/4" thick) | Canal Plastics | 32920770574 | laser cut file at github.com/GergelyTuri/HallPassVR/hardware/VR wheel assembly |
ESP32 devKitC v4 (x2) | Amazon.com | B086YS4Z3F | microcontroller for behavior and rotary encoder |
ESP32 shield | OpenMaze.org | OMwSmall | description at www.openmaze.org (https://claylacefield.wixsite.com/openmazehome/copy-of-om2shield). ZIP gerber files at: https://github.com/claylacefield/OpenMaze/tree/master/OM_PCBs |
Fasteners and brackets | 8020.net | 4138, 3382,3280 | for wheel frame mounts |
Goniometers | Edmund Optics | 66-526, 66-527 | optional for behavior-only setups; for fine-tuning head angle for imaging |
HallPassVR python code | custom written | file at github.com/GergelyTuri/HallPassVR/software/HallPassVR | |
Head post holder | custom design | 3D design file at github.com/GergelyTuri/HallPassVR/hardware/VR head mount/Headpost Clamp | |
LED projector | Texas Instruments | DLPDLCR230NPEVM | or other small LED projector |
Lick spout | VWR | 20068-638 | (or ~16 G metal hypodermic tubing) |
M 2.5 x 6 set screws | McMaster-Carr | 92015A097 | securing head post |
Matte white diffusion paper | Amazon.com | | screen material |
Metal headposts | custom design | 3D design file at github.com/GergelyTuri/HallPassVR/hardware/VR head mount/head post designs | |
Miscellaneous tubing and tubing adapters (1/16" ID) | | | for constructing the water line |
Optical breadboard | Thorlabs | | as per user's requirements |
Optical posts, 1/2" diam (2x) | Thorlabs | TR4 | for head fixation setup |
Processing code | custom written | file at github.com/GergelyTuri/HallPassVR/software/Processing code | |
Raspberry Pi 4B | raspberry.com, adafruit.com | | single-board computer for rendering the HallPassVR environment |
Right angle clamp | Thorlabs | RA90 | for head fixation setup |
Rotary encoder (quadrature, 256 step) | DigiKey | ENS1J-B28-L00256L | to measure wheel rotation |
Rotary encoder ESP32 code | custom written | file at github.com/GergelyTuri/HallPassVR/software/Arduino code/Rotary encoder | |
SCIGRIP 10315 acrylic cement | Amazon.com | ||
Shaft coupler | McMaster-Carr | 9861T426 | to couple rotary encoder shaft with axle |
Silver mirror acrylic sheets | Canal Plastics | 32913817934 | laser cut file at github.com/GergelyTuri/HallPassVR/hardware/VR screen assembly |
Solenoid valve | Parker | 003-0137-900 | to administer water rewards |