Integration of Animal Behavioral Assessment and Convolutional Neural Network to Study Wasabi-Alcohol Taste-Smell Interaction

Published: August 16, 2024
doi:

Summary

This article describes a set of methods for measuring the ability of sniffed alcoholic beverages to suppress the wasabi-elicited stinging sensation.

Abstract

The commercial wasabi pastes commonly used in food preparation contain a homologous series of chemosensory isothiocyanates (ITCs) that elicit an irritating sensation upon consumption. The impact of sniffing dietary alcoholic beverages on the perception of wasabi spiciness has never been studied. Most sensory evaluation studies examine individual foods and beverages separately, so there is a lack of research on the olfactory effect of sniffing liquor while consuming wasabi. Here, a methodology is developed that combines an animal behavioral study with a convolutional neural network to analyze the facial expressions of mice as they simultaneously sniff liquor and consume wasabi. The results demonstrate that the trained and validated deep learning model recognizes 29% of the images depicting co-treatment with wasabi and alcohol as belonging to the wasabi-negative, liquor-positive class, without any prior filtering of the training materials. Statistical analysis of mouse grimace scale scores obtained from the selected video frame images reveals a significant difference (P < 0.01) between the presence and absence of liquor. This finding suggests that dietary alcoholic beverages might diminish the wasabi-elicited reactions in mice. This combinatory methodology holds potential for screening individual ITC compounds and for sensory analyses of spirit components in the future. However, further study is required to investigate the underlying mechanism of the alcohol-induced suppression of wasabi pungency.

Introduction

Wasabia japonica, commonly known as wasabi, has gained recognition in food preparation1,2. The intense sensory experience it elicits upon consumption, characterized by tearing up, sneezing, or coughing, is well known. This distinctive pungency of wasabi can be attributed to a homologous series of chemosensory isothiocyanates (ITCs). These are volatile organosulfur phytochemicals that can be categorized into ω-alkenyl and ω-methylthioalkyl isothiocyanates3. Among these compounds, allyl isothiocyanate (AITC) is the most predominant natural ITC found in plants belonging to the Cruciferae family, such as horseradish and mustard4. Commercial wasabi pastes are commonly prepared from horseradish, making AITC a chemical marker for the quality control of these commercial products5.

Pairing dietary alcoholic beverages with wasabi-infused dishes can be considered an example of cultural disposition6. Subjectively, this combination may complement the spiciness and heat of the wasabi and the spirit, enhancing the overall culinary experience. Animal qualitative behavioral assessment (QBA) is a comprehensive whole-animal methodological approach that examines behavioral changes in subjects in response to short-term or long-term external stimuli in numerical terms7. It encompasses pain tests, motor tests, learning and memory tests, as well as emotion tests specifically designed for rodent models8. However, studies investigating the synergistic sensory evaluation of gustation together with olfaction remain scarce9,10. Most studies on chemesthetic sensation are confined to examining individual food and beverage consumption separately11. Consequently, there is a dearth of research on the taste-smell interaction involved in sniffing liquor while consuming wasabi.

As the wasabi-induced stinging sensation is believed to be a form of nociception12, animal behavioral assessments are well suited for evaluating nociceptive sensory responses in rodents8,13,14. A method for assessing nociception in mice, known as mouse grimace scale (MGS) scoring, was developed by Langford et al.15,16. This behavioral study method is a pain-related assessment approach that relies on the analysis of facial expressions exhibited by the experimental mice. The experimental setup is straightforward, involving a transparent cage and 2 cameras for video recording. By incorporating advanced technologies17,18,19 for automatic data capture, quantitative and qualitative behavioral measures can be obtained, enhancing animal welfare during behavioral monitoring20. Consequently, the MGS has the potential to be applied in studying the effects of various external stimuli on animals in an uninterrupted and ad libitum manner. However, the scoring process involves selecting only a few (fewer than 10) video frame images for evaluation by the panelists, and prior training is necessary. Scoring a large number of sample images can be labor-intensive. To overcome this time-consuming challenge, several studies have employed machine learning techniques for predicting the MGS score21,22. It is important to note, however, that the MGS is a continuous measure. A multi-class classification model is therefore better suited to a logical, categorical question, such as whether images of mice simultaneously ingesting wasabi and sniffing liquor resemble those of normal mice.

In this study, a methodology for investigating the taste-smell interaction in mice was proposed. This methodology combines animal behavioral studies with a convolutional neural network (CNN) to analyze the facial expressions of the mouse subjects. Two mice were observed in triplicate under normal behavioral conditions, during wasabi-induced nociception, and while sniffing liquor in a specially designed cage. The facial expressions of the mice were video-recorded, and the generated frame images were used to optimize the architecture of a deep learning (DL) model. The model was then validated on an independent image dataset and deployed to classify the images acquired from the experimental group. To determine the extent to which wasabi pungency was suppressed when the mice sniffed liquor during wasabi consumption, the insights provided by artificial intelligence were further corroborated by cross-validation with another data analysis method, MGS scoring16.

Protocol

In this study, two 7-week-old ICR male mice weighing between 17-25 g were utilized for the animal behavioral assessment. All housing and experimental procedures were approved by the Hong Kong Baptist University Committee on the Use of Human and Animal Subjects in Teaching and Research. The animal room was maintained at a temperature of 25 °C and a room humidity of 40%-70% on a 12-h light-dark cycle.

1. Cage design

  1. Prepare acrylonitrile butadiene styrene bricks in 3 different dimensions for cage construction: 8 mm x 8 mm x 2 mm, 16 mm x 16 mm x 6 mm, and 32 mm x 16 mm x 6 mm.
  2. Prepare an acrylonitrile butadiene styrene plate (312 mm x 147 mm x 2 mm) as the cage base.
  3. Prepare a 239 mm x 107 mm non-transparent acrylic plate with a thickness of 2 mm to be used as the bottom plate.
  4. Prepare a 239 mm x 107 mm transparent acrylic plate with a thickness of 5 mm to be used as the top plate.
  5. Prepare a 107 mm x 50 mm transparent acrylic plate with a thickness of 7 mm to be used as the terminal plate.
  6. Construct 2 opaque side walls by stacking bricks to a height of 54 mm.
  7. Embed the acrylic plates into the acrylonitrile butadiene styrene-based cage, as illustrated in Figure 1A.
  8. Prepare a chows' chamber constructed from five 90 mm x 50 mm transparent acrylic plates with a thickness of 2 mm, as illustrated in Figure 1B. Among the 5 plates, use 2 for the sides, 1 for the top, 1 for the bottom, and 1 for the terminal.
  9. Prepare a 60 mm x 50 mm transparent acrylic plate with a thickness of 2 mm as a chow introduction plate, and place it in the chows' chamber.

2. Animal behavioral assessment

  1. House 2 littermate 7-week-old ICR male mice together in a regular cage.
  2. Provide the 2 littermates with free access to food pellets and tap water for a 1-week adaptation period.
  3. After 1 week, introduce a bottle of ethanol (~40% v/v) to the 2 littermates.
    NOTE: Make sure that they are only allowed to sniff or inhale the provided aqueous ethanol on an ad libitum basis while drinking is restricted.
  4. Conduct behavioral experiments using the 9-10-week-old mouse model and the transparent cubicle cage that is depicted in Figure 1A.
  5. Disassemble all the acrylic plates and acrylonitrile butadiene styrene plates and clean them thoroughly. Start by rinsing them with ultrapure water at least 3 times and then dry them using paper towels. Next, spray them with 75% ethanol, followed by cleaning them with lens paper. Finally, allow them to air dry for at least 15 min.
  6. Weigh the mice and record their body weights before each replication of the behavioral experiment.
  7. Freshly prepare a mixture of wasabi and peanut butter by weighing 1 g of commercial wasabi and 4.5 g of peanut butter. Mix them in a zip plastic bag.
    NOTE: Due to the volatility of the isothiocyanates in wasabi, it is important to store the commercial wasabi in a -20 °C freezer.
  8. Weigh and provide either two 0.5 g pastes of peanut butter or a mixture of wasabi and peanut butter on the chow introduction plate, as illustrated in Figure 1B, C.
  9. Place the prepared chow introduction plate in the chows' chamber, as illustrated in Figure 1B, C, to allow the 2 mice to have ad libitum access to food during each video recording session.
  10. Fill the groove underneath with 30 mL of liquid, either pure water or liquor (~42% v/v ethanol), to facilitate concurrent inhalation, as indicated in Figure 1B, C.
  11. Begin recording using the cameras on 2 smartphones placed on the phone stands at each terminal.
    NOTE: The specifications of the videos are as follows: frame width, 1920; frame height, 1080; data rate, 20745 kbps; frame rate, 30.00 frames per second (fps).
  12. Carefully place the 2 trained littermates of mice into the designed animal behavioral study platform from the top and promptly secure the cage with the top plate.
    NOTE: Make sure this step is completed within 15 s.
  13. Record each video for 2-3 min.
    NOTE: Make sure the entire duration of the experiment, from the preparation of the peanut butter-wasabi mixture to the completion of the video recording, is limited to 5 min.
  14. Repeat the whole experiment 3 times.
    NOTE: Make sure each replicate of the experiment is separated by at least 6 h.
  15. Mimic different scenarios.
    NOTE: For example, in this work, a pair of mice was used in 4 groups with 4 different scenarios that were mimicked by using the experimental setting described above. These scenarios include Scenario A for the background study, Scenario B for the positive control study, Scenario C for the wasabi-alcohol taste-smell interaction study, and Scenario D for the negative control study. Table 1 provides a summary of these scenarios.
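The four scenarios summarized in Table 1 can be encoded as a small lookup table for bookkeeping during analysis. The sketch below is illustrative only (the key names and helper function are not part of the published scripts); the chow/liquid assignments follow the scenario descriptions in the Representative Results.

```python
# Hypothetical encoding of the four behavioral scenarios (Table 1).
# Scenario contents are taken from the Representative Results text.
SCENARIOS = {
    "A": {"role": "background",              "chow": "peanut butter",          "liquid": "pure water"},
    "B": {"role": "positive control",        "chow": "wasabi + peanut butter", "liquid": "pure water"},
    "C": {"role": "taste-smell interaction", "chow": "wasabi + peanut butter", "liquid": "Chinese spirit (~42% v/v)"},
    "D": {"role": "negative control",        "chow": "peanut butter",          "liquid": "Chinese spirit (~42% v/v)"},
}

def describe(code: str) -> str:
    """Return a human-readable summary of one scenario."""
    s = SCENARIOS[code]
    return f"Scenario {code} ({s['role']}): {s['chow']} with {s['liquid']} in the groove"
```

Such a table keeps the scenario labels consistent across the frame-export, modeling, and scoring steps.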

3. Image recognition

Similar to many studies on image processing23,24,25, a classification model was attained by training a CNN. The script for DL operations was written in Python v.3.10 on Jupyter Notebook (anaconda3). It is available in the following GitHub repository: git@github.com:TommyNHL/imageRecognitionJove.git. To construct and train the CNN, open-source libraries were used, including numpy v.1.21.5, seaborn v.0.11.2, matplotlib v.3.5.2, cv2 v.4.6.0, sklearn v.1.0.2, tensorflow v.2.11.0, and keras v.2.11.0. These libraries provided the necessary tools and functionality to develop and train the CNN for image recognition.

  1. Export a series of video frame images from the collected video clips to generate a data set for model training by using the provided Jupyter Notebook, named Step1_ExtractingAndSavingVideoFrameImages.ipynb.
  2. Only select the images with at least 1 mouse consuming the provided paste. Examples of the selected images are provided in Supplementary Figure 1, Supplementary Figure 2, Supplementary Figure 3, Supplementary Figure 4, Supplementary Figure 5, Supplementary Figure 6, and Supplementary Figure 7.
  3. Perform data augmentation by horizontally flipping the generated images by implementing the script provided in the Jupyter Notebook, named Step2_DataAugmentation.ipynb.
  4. Reserve the image data from the second replicate for external, independent CNN model validation. Use the images from the first and third replicates for internal model training and testing.
  5. Preprocess the image data used in CNN modeling by running the script in the Jupyter Notebook named Step3_CNNmodeling_TrainTest.ipynb, including image resizing, grayscale conversion, and image signal normalization.
  6. Randomly split the training materials into internal training and testing datasets in an 8:2 ratio.
  7. Initialize the CNN architecture. Set the number of CNN outputs based on the number of scenarios to be examined.
    NOTE: For example, in this study, the neural network was designed to classify 3 classes. Make sure the script for handling data imbalance via class weights is compiled.
  8. Find the hyperparameter combination that yields minimal loss on the internal test samples for CNN construction.
  9. Adopt the optimal hyperparameter combination for constructing the CNN architecture.
  10. Open the provided Jupyter Notebooks Step4_CNNmodel_ExternalValOriginal.ipynb and Step5_CNNmodel_ExternalValFlipped.ipynb. Validate the attained model using the independent (original and flipped) images from the second replicate of the animal behavioral experiment.
  11. Deploy the attained and validated model for classifying the video frame images generated from the experimental group using Jupyter Notebook Step6_CNNmodel_Application.ipynb.
    NOTE: For example, it is scenario C in this work.

4. Manual mouse grimace scale scoring

NOTE: To validate the insights provided by the CNN model prediction, another method previously developed and validated by Langford et al. was applied16. This method involves scoring the MGS based on the 5 specific mouse facial action units (AUs): orbital tightening, nose bulge, cheek bulge, ears tightening outward, and whisker change. Each AU is assigned a score of 0, 1, or 2, indicating the absence, moderate presence, or obvious presence of the AU, respectively. This scoring system allows for the quantification and scaling of each AU to assess the level of nociception or discomfort experienced by the mice.

  1. Capture 3 video frame images of the littermates ingesting the paste for each video clip. Ensure that each frame is separated by at least 3 seconds.
  2. Blind-code and randomly reorder the images from the different scenario classes using the provided template file named "shuffleSlides.pptm" (Supplementary File 1) and running the embedded Macro code.
  3. Invite at least 10 panelists to score the sample images.
  4. Train the panelists to score image samples using the MGS. Provide the panelists with training materials that include the original article regarding MGS and its manual15,16.
  5. Calculate the MGS score of each animal subject in a captured frame by averaging all the score points of the corresponding 5 facial AUs. Present the MGS score as the mean ± standard error of measurement (SEM).
  6. Determine whether statistically significant differences exist among different classes of scenarios by one-way analysis of variance (ANOVA) with Bonferroni's multiple comparison post-hoc test.
    NOTE: A value of P < 0.05 is considered statistically significant.
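Steps 4.5-4.6 can be reproduced computationally. The sketch below uses numpy and scipy (scipy is an assumption; the protocol does not name a statistics package): it averages the 5 AU scores into an MGS score, reports mean ± SEM, and approximates Bonferroni's post-hoc test as pairwise t-tests with multiplied P values.

```python
import numpy as np
from scipy import stats

def mgs_score(au_scores):
    """Step 4.5: MGS score = mean of the 5 facial action unit scores,
    each scored 0, 1, or 2 (orbital, nose, cheek, ears, whiskers)."""
    assert len(au_scores) == 5
    return float(np.mean(au_scores))

def mean_sem(scores):
    """Report an MGS score as mean +/- standard error of the mean (SEM)."""
    a = np.asarray(scores, dtype=float)
    return a.mean(), a.std(ddof=1) / np.sqrt(a.size)

def anova_bonferroni(groups):
    """Step 4.6 (sketch): one-way ANOVA across scenario groups, then
    pairwise t-tests with Bonferroni correction on the P values."""
    f, p = stats.f_oneway(*groups)
    k = len(groups)
    n_pairs = k * (k - 1) // 2
    pairs = {}
    for i in range(k):
        for j in range(i + 1, k):
            _, p_ij = stats.ttest_ind(groups[i], groups[j])
            pairs[(i, j)] = min(p_ij * n_pairs, 1.0)  # Bonferroni-adjusted
    return f, p, pairs
```

A value of P < 0.05 on the omnibus ANOVA, followed by adjusted pairwise P values below 0.05, would mirror the significance criteria stated in the note above.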

Representative Results

The main objective of this study is to establish a robust framework for investigating the taste-smell interaction in mice. This framework incorporates artificial intelligence and QBA to develop a predictive classification model. Additionally, the insights obtained from DL are cross-validated with a quantitative MGS assessment as an independent analysis. The primary application of this methodology is to examine the extent to which wasabi-invoked nociception is suppressed when mice sniff dietary alcoholic beverages.

Recognition of 29% of wasabi-alcohol co-treatment images as belonging to the wasabi-negative, liquor-positive class by the deep learning model
Facial overreaction might occur in humans upon consuming commercial wasabi paste. A similar response can also be observed in mice (Supplementary Video 1). Because nociceptive sensation correlates well with facial expression in rodents, the behaviors of the 2 littermates were monitored to determine their responses when provided with chows containing or lacking commercial wasabi. In this study, a CNN was trained using DL techniques. Because 3 replicates were conducted for each scenario, the image materials from the 1st and 3rd replicates were used for CNN hyperparameter tuning and internal model training and testing. The optimization of the CNN architecture in this study is detailed in Table 2. The optimized architecture consisted of an input layer with a resolution of 268 x 268 pixels, followed by 4 two-dimensional convolution layers, each with 8 filters. Additionally, 2 pooling layers, 2 dropout layers, a flattening layer, and 2 dense layers were incorporated for the 3-class classification task. Refer to Supplementary File 2 for the CNN architecture.
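A Keras sketch of the architecture described above (268 x 268 input, four Conv2D layers with 8 filters each, two pooling layers, two dropout layers, a flatten layer, and two dense layers for the 3-class output) is given below. The layer ordering and kernel sizes are assumptions; the dropout placements (0.1 after the 3rd convolution, 0.5 after the 64-unit dense layer) follow Table 2. Refer to Supplementary File 2 for the actual model.

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_cnn(input_shape=(268, 268, 1), n_classes=3):
    """Illustrative reconstruction of the described CNN, not the exact model."""
    model = keras.Sequential([
        keras.Input(shape=input_shape),
        layers.Conv2D(8, 3, activation="relu"),   # 1st conv, 8 filters
        layers.Conv2D(8, 3, activation="relu"),   # 2nd conv, 8 filters
        layers.MaxPooling2D(),                    # 1st pooling layer
        layers.Conv2D(8, 3, activation="relu"),   # 3rd conv, 8 filters
        layers.Dropout(0.1),                      # 0.1 dropout (Table 2)
        layers.Conv2D(8, 3, activation="relu"),   # 4th conv, 8 filters
        layers.MaxPooling2D(),                    # 2nd pooling layer
        layers.Flatten(),
        layers.Dense(64, activation="relu"),      # dense layer of 64 (Table 2)
        layers.Dropout(0.5),                      # 0.5 dropout (Table 2)
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

Class weights for handling data imbalance (protocol step 3.7) would be passed to `model.fit(..., class_weight=...)` during training.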

The model in this study was trained on a dataset of 35,486 images and tested on 8,872 images over 7 epochs. The performance of the model during training, including its loss, precision-recall curves, precision, and sensitivity (recall) on the internal training and testing datasets, can be seen in Supplementary Figure 8A-D. The model achieved accuracy, precision, and sensitivity values of 1 for both the internal training and testing sets, indicating that its predictions aligned with the actual classes of the images in both datasets. After model training, the images from the 2nd replicate were used to independently verify that overfitting was not an issue for the attained CNN model. The model demonstrated robust prediction, with accuracy, precision, and sensitivity values of 1 for 4,820 images in the background study (scenario A), 3,900 images in the positive control study (scenario B), and 2,731 images in the negative control study (scenario D). The attained predictive model was then deployed to classify the video frame images of mice sniffing spirit while consuming wasabi-containing peanut butter (scenario C) into the study scenarios A, B, and D. The results show that 0% and 29% of the images were categorized into the classes of the background study (scenario A, pure water and wasabi-free peanut butter present) and the negative control study (scenario D, Chinese spirit and wasabi-free peanut butter present), respectively, while 71% of the images were classified into the class of the positive control group (scenario B, pure water and wasabi-peanut butter mixture present). These findings imply that the presence of dietary liquor might reduce wasabi-evoked nociception in mice. To evaluate what the machine learned from the provided training materials, feature maps were generated for each convolution layer.
As depicted in Supplementary Figure 9A-D, the machine's attention was focused on a variety of facial features, such as the orbital area of the eyes, the nose, the edges of the cheeks and body, and the ears, as well as on the contact surface of the chows. Building upon these observations, another data analysis method, MGS scoring, was conducted. The MGS focuses on the facial AUs of mice, allowing for a statistical assessment and further validation of the ability of presented alcoholic drinks to suppress wasabi-induced nociception.
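Feature maps like those in Supplementary Figure 9 can be read out of a trained Keras classifier by building a probe model over its convolution layers. The sketch below is a minimal illustration, assuming `model` is the trained CNN built with an explicit Input layer; the function name is illustrative.

```python
import numpy as np
from tensorflow import keras

def conv_feature_maps(model, image):
    """Return the activations (feature maps) of every Conv2D layer in
    `model` for a single input image. One array is returned per layer."""
    conv_outputs = [l.output for l in model.layers
                    if isinstance(l, keras.layers.Conv2D)]
    probe = keras.Model(inputs=model.inputs, outputs=conv_outputs)
    batch = np.expand_dims(image, axis=0)     # add the batch dimension
    return probe.predict(batch, verbose=0)    # list: one array per conv layer
```

Plotting each channel of each returned array (e.g., with matplotlib) reproduces the kind of per-layer attention views shown in Supplementary Figure 9.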

Manual mouse grimace scale scoring statistically validates machine learning predictions
The intensity of wasabi-evoked nociception was evaluated using the MGS score, which ranges from 0 to 2, with 0 representing the least severe and 2 the most severe response. In this study, the MGS scores of 24 mice (i.e., 2 mice presented in each of the 12 selected images, as shown in Supplementary Figure 10, Supplementary Figure 11, Supplementary Figure 12, and Supplementary Figure 13) were assessed by 22 trained panelists. Analysis of the 2 littermates across triplicate experiments showed intra-batch and inter-batch precisions with relative standard deviations of 0%-25% and 9%-33%, respectively, indicating good precision and consistency in the MGS scoring process. A significant difference was observed between the MGS scores of the mice given a paste of peanut butter (Figure 3A, 0.73 ± 0.04) and those provided with a mixture of wasabi and peanut butter (Figure 3B, 1.59 ± 0.11). This validates the experimental setup for the animal behavioral study and confirms the efficacy of wasabi in inducing nociceptive responses. Even though the alcohol vapor released from the alcoholic beverage provided underneath (~42% v/v alcohol) might be an irritant, the MGS score (Figure 3D, 0.86 ± 0.16) did not differ significantly from that of the control group (Figure 3A, 0.73 ± 0.04). However, a significant difference (P < 0.05) was observed between the MGS scores of mice provided with the wasabi-peanut butter mixture (Figure 3B, 1.59 ± 0.11) and those exposed to alcohol vapor during wasabi consumption (Figure 3C, 0.95 ± 0.07). These findings suggest that the co-application of wasabi and alcoholic drinks might diminish wasabi-induced irritation, aligning with the perspective of artificial intelligence.
Further investigation is warranted to identify the active ingredient of the provided Chinese spirit that contributes to the suppression of wasabi pungency, which is likely attributable to the presence of ethanol.

Code availability. All code for performing image recognition associated with the current submission is available in a public GitHub repository: git@github.com:TommyNHL/imageRecognitionJove.git. Any updates will also be published in the GitHub repository with a citation of the final DOI of this work.

Figure 1
Figure 1: Experimental setup for the pain-related animal behavioral study. (A) The whole set-up for video recording the facial expressions of 2 mice by using the cameras on 2 smartphones placed on the phone stands at each terminal. (B) The chows' chamber used to provide two 0.5 g pastes of peanut butter or a mixture of wasabi and peanut butter to 2 mice on an ad libitum basis, and 30 mL of liquid (water or ~42% v/v liquor) was filled in the groove underneath the pastes for concurrent inhalation. (C) A closer look of the groove that was filled by 30 mL of liquid and contained 2 pastes of wasabi-peanut butter mixture. The mixture was prepared by mixing 1 g of commercial wasabi in 4.5 g of peanut butter. Please click here to view a larger version of this figure.

Figure 2
Figure 2: The architecture of the convolutional neural network model and its input and output. Images from 3 different scenarios were used for training, including scenario A for the background study, scenario B for the positive control study, and scenario D for the negative control study. The images of scenario C for the wasabi-alcohol taste-smell interaction study were classified into scenarios A, B, or D after model training. Please click here to view a larger version of this figure.

Figure 3
Figure 3: The mouse grimace scale (MGS) scores of the mice that were provided different chows. The MGS score ranges from 0 to 2, with 0 representing the least severe and 2 the most severe response. The reference MGS scale was adapted from Langford et al. (2010)16. The grimace of mice that were provided (A) peanut butter for consumption scored 0.73 ± 0.04; (B) a mixture of wasabi and peanut butter (1 g of commercial wasabi in 4.5 g of peanut butter) for ingestion scored 1.59 ± 0.11; (C) a mixture of wasabi and peanut butter for ingestion, while the inhalation of alcohol was available from the spirit provided underneath (~42% v/v alcohol), scored 0.95 ± 0.07; (D) peanut butter for consumption, while the inhalation of alcohol was available from the spirit provided underneath, scored 0.86 ± 0.16. *, P < 0.05; **, P < 0.01; ns, no significant difference via one-way analysis of variance with Bonferroni's multiple comparison post-hoc test under a 95% confidence interval. Error bars represent the standard error of measurement (SEM). Triplicate measurements were conducted for 2 mice in each scenario. Please click here to view a larger version of this figure.

Table 1: Summary of the four scenarios mimicked in animal behavioral assessment. Remarks: a The mixture was prepared by mixing 1 g of commercial wasabi in 4.5 g of peanut butter. b Chinese spirit that contains ~42% v/v alcohol. Please click here to download this Table.

Table 2: Summary of model performance. The hyperparameters of a batch size of 64, filter number of the 1st 2D convolution layer of 8 without following dropout, filter number of the 2nd 2D convolution layer of 8, filter number of the 3rd 2D convolution layer of 8 with 0.1 dropout, filter number of the 4th 2D convolution layer of 8, and filter number of the dense layer of 64 with 0.5 dropout yielded the lowest loss for the internal split testing set. These hyperparameters were then adopted to train a 3-group (scenario A, B, and D) classification model for predicting the images of scenario C. Scenario A was for the background study. Scenario B was for the positive control study. Scenario D was for the negative control study. Scenario C was for the wasabi-alcohol taste-smell interaction study. Please click here to download this Table.

Supplementary Figure 1: Example training materials for deep learning of the class of scenario A (background study). Please click here to download this File.

Supplementary Figure 2: Example training materials for deep learning of the class of scenario B (positive control study). Please click here to download this File.

Supplementary Figure 3: Example training materials for deep learning of the class of scenario D (negative control study). Please click here to download this File.

Supplementary Figure 4: Predictions and example independent materials of the class of scenario A (background study) for model validation. Please click here to download this File.

Supplementary Figure 5: Predictions and example independent materials of the class of scenario B (positive control study) for model validation. Please click here to download this File.

Supplementary Figure 6: Predictions and example independent materials of the class of scenario D (negative control study) for model validation. Please click here to download this File.

Supplementary Figure 7: Predictions and example materials of the class of scenario C (wasabi-alcohol taste-smell interaction study) for model application. Please click here to download this File.

Supplementary Figure 8: Predictive performance of the optimal deep learning model. (A) Training loss and internal validation loss along 7 epochs; (B) Precision-recall curves, (C) Precision, and (D) Recall of training and internal validation along 7 epochs. Please click here to download this File.

Supplementary Figure 9: Feature maps of each convolution layer. Feature maps of the (A) first, (B) second, (C) third, and (D) last convolution layer in scenarios A, B, and D, sorted order from upper to lower. Scenario A was for the background study. Scenario B was for the positive control study. Scenario D was for the negative control study. Please click here to download this File.

Supplementary Figure 10: Selected images for manual mouse grimace scale scoring for scenario A (background study). Please click here to download this File.

Supplementary Figure 11: Selected images for manual mouse grimace scale scoring for scenario B (positive control study). Please click here to download this File.

Supplementary Figure 12: Selected images for manual mouse grimace scale scoring for scenario C (wasabi-alcohol taste-smell interaction study). Please click here to download this File.

Supplementary Figure 13: Selected images for manual mouse grimace scale scoring for scenario D (negative control study). Please click here to download this File.

Supplementary Video 1: Grimaces of mice after wasabi consumption with the presence or absence of alcoholic beverages provided underneath. From 0 s to 8 s, the video clip presents the normal facial expressions of mice upon consuming peanut butter. From 8 s to 17 s, it illustrates the grimaces of mice upon wasabi ingestion; the mice showed orbital tightening, nose bulge, cheek bulge, ears tightening outward, and whisker change. From 17 s to 29 s, it shows that wasabi-induced nociception could be suppressed when the spirit was available underneath. From 29 s to the end, it shows a collection of grimaces from the different scenarios described above. Please note that the provided video is for illustration only; its quality differs from the quality requirements described for image recognition and modeling. Please click here to download this Video.

Supplementary File 1: shuffleSlides.pptm. Supplementary Microsoft PowerPoint macro-enabled presentation, named "shuffleSlides.pptm". This file was prepared for shuffling images for the blind test of mouse grimace scale scoring. Please click here to download this File.

Supplementary File 2: The CNN architecture. Please click here to download this File.

Discussion

The proposed method for studying taste-smell interaction in this work is based on the original method of behavioral coding of facial expressions of pain in mice developed by Langford et al.16. Several recently published articles have introduced CNNs for automatic mouse face tracking and subsequent MGS scoring21,26,27,28. Applying CNNs offers an advantage over the original method alone, as it allows for the generation of a series of video frame images that can be used to train a prediction model. In contrast, the original method requires pre-selecting a limited number of images for MGS scoring, which can introduce subjectivity and bias.

Among the previously published CNN-based methods, all have been developed for assessing the degree of distress or discomfort following the direct injection of a pain-causing agent into the blood circulation26,28. One disadvantage of this drug introduction approach is that significant effort is required to filter out video frame images that do not capture the mouse's face profile. In this study, the nociception-causing agent, commercial wasabi, is provided on an ad libitum basis. Besides masking the volatile ingredient AITC, the peanut butter also attracts the mouse subjects to present a clear front-view gesture, facilitating the capture of their facial profiles during video recording. Moreover, this approach eliminates any potential interference from pain caused by needle insertion. However, it is important to note that the ingesting motion of the mice may interfere with MGS scoring of the nose bulge and cheek bulge. To address this, a baseline control experiment must be conducted by providing the mice with pure peanut butter.

In certain CNN-based algorithms for animal behavioral studies, only a subset of mouse facial action units is selected from the training images and programmed to be captured21,22. These computational methods rely on pre-defined features associated with the sensation of nociception while potentially neglecting other important features specific to individual mouse subjects, which may lead to information loss. In contrast, this study takes a different approach by using the whole mouse face image to train a CNN model. This image-only usage (IOU)-based method has been shown to be more efficient for image classification25. Hence, the proposed CNN modeling approach in the present study is similar to methods that first train a CNN model using the entire image without region annotation and then validate the model through manual human evaluation27,29. This approach retains more information in the images and prevents the introduction of subjectivity.

However, a limitation of the experimental setup in this work is that the mouse subjects were housed as pairs of littermates rather than in individual habitats. This decision was intentional, considering that mice are social animals and typically thrive in group settings. Interestingly, during the animal training period, only the 2 littermates in 1 of the 3 cages adapted to the presence of the Chinese spirit. As a result, triplicate experiments were conducted exclusively on these 2 littermates. For the CNN training materials, an image containing both littermates was likewise treated as a single sample without any segmentation. This approach allowed the interaction between the littermates to be captured and their behaviors, together with their facial expressions, to be analyzed collectively.

Similar to the image-based MGS scoring method, the proposed method in this study is expected to be transferable to other kinds of animals, such as felines and horses30,31. By applying the proposed methodology, introducing enticing food chows in combination with computer vision could facilitate animal behavioral studies, specifically of taste-smell interactions, in various animals.

Disclosures

The authors have nothing to disclose.

Acknowledgements

Z. Cai would like to acknowledge the financial support from the Kwok Chung Bo Fun Charitable Fund for the establishment of the Kwok Yat Wai Endowed Chair of Environmental and Biological Analysis.

Materials

Absolute ethanol (EtOH) VWR Chemicals BDH CAS# 64-17-5
Acrylonitrile butadiene styrene bricks Jiahuifeng Flagship Store https://shop.paizi10.com/jiahuifeng/chanpin.html
Acrylonitrile butadiene styrene plates Jiahuifeng Flagship Store https://shop.paizi10.com/jiahuifeng/chanpin.html
Allyl isothiocyanate (AITC) Sigma-Aldrich CAS# 57-06-7
Anhydrous dimethyl sulfoxide Sigma-Aldrich CAS# 67-68-5
Chinese spirit Yanghe Qingci https://www.chinayanghe.com/article/45551.html
Commercial wasabi S&B FOODS INC. https://www.sbfoods-worldwide.com
Formic acid (FA) VWR Chemicals BDH CAS# 64-18-6
GraphPad Prism 5 GraphPad https://www.graphpad.com
HPLC-grade acetonitrile (ACN) VWR Chemicals BDH CAS# 75-05-8
HPLC-grade methanol (MeOH) VWR Chemicals BDH CAS# 67-56-1
Microsoft Excel 2016 Microsoft https://www.microsoft.com 
Microsoft PowerPoint 2016 Microsoft https://www.microsoft.com
Milli-Q water system Millipore https://www.merckmillipore.com
Mouse: ICR Laboratory Animal Services Centre (The Chinese University of Hong Kong, Hong Kong, China) N/A
Peanut butter Skippy https://www.peanutbutter.com/peanut-butter/creamy
Python v.3.10 Python Software Foundation https://www.python.org 
Transparent acrylic plates Taobao Store https://item.taobao.com/item.htm?_u=32l3b7k63381&id=609965457970&spm=a1z09.2.0.0.77572e8dFPMEHU

References

  1. Isshiki, K., Tokuoka, K., Mori, R., Chiba, S. Preliminary examination of allyl isothiocyanate vapor for food preservation. Biosci Biotechnol Biochem. 56 (9), 1476-1477 (1992).
  2. Li, X., Wen, Z., Bohnert, H. J., Schuler, M. A., Kushad, M. M. Myrosinase in horseradish (Armoracia rusticana) root: Isolation of a full-length cDNA and its heterologous expression in Spodoptera frugiperda insect cells. Plant Sci. 172 (6), 1095-1102 (2007).
  3. Depree, J. A., Howard, T. M., Savage, G. P. Flavour and pharmaceutical properties of the volatile sulphur compounds of Wasabi (Wasabia japonica). Food Res Int. 31 (5), 329-337 (1998).
  4. Hu, S. Q., Wei, W. Study on extraction of wasabi plant material bio-activity substances and anti-cancer activities. Adv Mat Res. 690-693, 1395-1399 (2013).
  5. Lee, H.-K., Kim, D.-H., Kim, Y.-S. Quality characteristics and allyl isothiocyanate contents of commercial wasabi paste products. J Food Hyg Saf. 31 (6), 426-431 (2016).
  6. Bacon, T. Wine, wasabi and weight loss: Examining taste in food writing. Food Cult Soc. 17 (2), 225-243 (2014).
  7. Fleming, P. A., et al. The contribution of qualitative behavioural assessment to appraisal of livestock welfare. Anim Prod Sci. 56, 1569-1578 (2016).
  8. Shi, X., et al. Behavioral assessment of sensory, motor, emotion, and cognition in rodent models of intracerebral hemorrhage. Front Neurol. 12, 667511 (2021).
  9. Stevenson, R. J., Prescott, J., Boakes, R. A. Confusing tastes and smells: How odours can influence the perception of sweet and sour tastes. Chem Senses. 24 (6), 627-635 (1999).
  10. Pfeiffer, J. C., Hollowood, T. A., Hort, J., Taylor, A. J. Temporal synchrony and integration of sub-threshold taste and smell signals. Chem Senses. 30 (7), 539-545 (2005).
  11. Simons, C. T., Klein, A. H., Carstens, E. Chemogenic subqualities of mouthfeel. Chem Senses. 44 (5), 281-288 (2019).
  12. Andrade, E. L., Luiz, A. P., Ferreira, J., Calixto, J. B. Pronociceptive response elicited by TRPA1 receptor activation in mice. Neuroscience. 152 (2), 511-520 (2008).
  13. Palazzo, E., Marabese, I., Gargano, F., Guida, F., Belardo, C., Maione, S. Methods for evaluating sensory, affective and cognitive disorders in neuropathic rodents. Curr Neuropharmacol. 19 (6), 736-746 (2020).
  14. Topley, M., Crotty, A. M., Boyle, A., Peller, J., Kawaja, M., Hendry, J. M. Evaluation of motor and sensory neuron populations in a mouse median nerve injury model. J Neurosci Methods. 396, 109937 (2023).
  15. Langford, D. J., et al. Mouse Grimace Scale (MGS): The Manual. (2015).
  16. Langford, D. J., et al. Coding of facial expressions of pain in the laboratory mouse. Nat Methods. 7 (6), 447-449 (2010).
  17. Liu, H., Fang, S., Zhang, Z., Li, D., Lin, K., Wang, J. MFDNet: Collaborative poses perception and matrix Fisher distribution for head pose estimation. IEEE Trans Multimedia. 24, 2449-2460 (2022).
  18. Liu, T., Wang, J., Yang, B., Wang, X. NGDNet: Nonuniform Gaussian-label distribution learning for infrared head pose estimation and on-task behavior understanding in the classroom. Neurocomputing. 436, 210-220 (2021).
  19. Liu, T., Liu, H., Yang, B., Zhang, Z. LDCNet: Limb direction cues-aware network for flexible human pose estimation in industrial behavioral biometrics systems. IEEE Trans Industr Inform. 20 (6), 8068-8078 (2023).
  20. Grant, E. P., et al. What can the quantitative and qualitative behavioural assessment of videos of sheep moving through an autonomous data capture system tell us about welfare? Appl Anim Behav Sci. 208, 31-39 (2018).
  21. Vidal, A., Jha, S., Hassler, S., Price, T., Busso, C. Face detection and grimace scale prediction of white furred mice. Mach Learn Appl. 8, 100312 (2022).
  22. Zylka, M. J., et al. Development and validation of PainFace, a software platform that simplifies and standardizes mouse grimace analyses. J Pain. 24 (4), 35-36 (2023).
  23. Liu, H., Zhang, C., Deng, Y., Liu, T., Zhang, Z., Li, Y. F. Orientation cues-aware facial relationship representation for head pose estimation via Transformer. IEEE Trans Image Process. 32, 6289-6302 (2023).
  24. Liu, H., Liu, T., Chen, Y., Zhang, Z., Li, Y. F. EHPE: Skeleton cues-based Gaussian coordinate encoding for efficient human pose estimation. IEEE Trans Multimedia. (2022).
  25. Liu, H., et al. TransIFC: Invariant cues-aware feature concentration learning for efficient fine-grained bird image classification. IEEE Trans Multimedia. (2023).
  26. Akkaya, I. B., Halici, U. Mouse face tracking using convolutional neural networks. IET Comput Vis. 12 (2), 153-161 (2018).
  27. Andresen, N., et al. Towards a fully automated surveillance of well-being status in laboratory mice using deep learning: Starting with facial expression analysis. PLoS One. 15 (4), e0228059 (2020).
  28. Ernst, L., et al. Improvement of the mouse grimace scale set-up for implementing a semi-automated Mouse Grimace Scale scoring (Part 1). Lab Anim. 54 (1), 83-91 (2020).
  29. Tuttle, A. H., et al. A deep neural network to assess spontaneous pain from mouse facial expressions. Mol Pain. 14, 1744806918763658 (2018).
  30. Lencioni, G. C., de Sousa, R. V., de Souza Sardinha, E. J., Corrêa, R. R., Zanella, A. J. Pain assessment in horses using automatic facial expression recognition through deep learning-based modeling. PLoS One. 16 (10), e0258672 (2021).
  31. Steagall, P. V., Monteiro, B. P., Marangoni, S., Moussa, M., Sautié, M. Fully automated deep learning models with smartphone applicability for prediction of pain using the Feline Grimace Scale. Sci Rep. 13, 21584 (2023).
Cite This Article
Ngan, H., Qi, Z., Yan, H., Song, Y., Wang, T., Cai, Z. Integration of Animal Behavioral Assessment and Convolutional Neural Network to Study Wasabi-Alcohol Taste-Smell Interaction. J. Vis. Exp. (210), e66981, doi:10.3791/66981 (2024).
