An Automated Squint Method for Time-syncing Behavior and Brain Dynamics in Mouse Pain Studies

Published: November 01, 2024
doi: 10.3791/67136

Summary

This protocol provides a method for tracking automated eye squint in rodents over time in a manner compatible with time-locking to neurophysiological measures. This protocol is expected to be useful to researchers studying mechanisms of pain disorders such as migraine.

Abstract

Spontaneous pain has been challenging to track in real time and quantify in a way that prevents human bias. This is especially true for metrics of head pain, as in disorders such as migraine. Eye squint has emerged as a continuous variable metric that can be measured over time and is effective for predicting pain states in such assays. This paper provides a protocol for the use of DeepLabCut (DLC) to automate and quantify eye squint (Euclidean distance between eyelids) in restrained mice with freely rotating head motions. This protocol enables unbiased quantification of eye squint to be paired with and compared directly against mechanistic measures such as neurophysiology. We provide an assessment of the AI training parameters necessary for achieving success, defined as discriminating squint from non-squint periods. We demonstrate the ability to reliably track and differentiate squint in a CGRP-induced migraine-like phenotype at sub-second resolution.

Introduction

Migraine is one of the most prevalent brain disorders worldwide, affecting more than one billion people1. Preclinical mouse models of migraine have emerged as an informative way to study the mechanisms of migraine as these studies can be more easily controlled than human studies, thus enabling causal study of migraine-related behavior2. Such models have demonstrated a strong and repeatable phenotypic response to migraine-inducing compounds, such as calcitonin-gene-related peptide (CGRP). The need for robust measurements of migraine-relevant behaviors in rodent models persists, especially those that may be coupled with mechanistic metrics such as imaging and electrophysiological approaches.

Migraine-like brain states have been phenotypically characterized by the presence of light aversion, paw allodynia, facial hyperalgesia to noxious stimuli, and facial grimace3. Such behaviors are measured by total time spent in light (light aversion) and paw or facial touch sensitivity thresholds (paw allodynia and facial hyperalgesia) and are restricted to a single readout over large periods of time (minutes or longer). Migraine-like behaviors can be elicited in animals by dosing with migraine-inducing compounds such as CGRP, mimicking symptoms experienced by human patients with migraine3 (i.e., demonstrating face validity). Such compounds also produce migraine symptoms when administered in humans, demonstrating the construct validity of these models4. Studies in which behavioral phenotypes were attenuated pharmacologically have led to discoveries related to the treatment of migraine and provide further substantiation of these models (i.e., demonstrating predictive validity)5,6.

For example, a monoclonal anti-CGRP antibody (ALD405) was shown to reduce light-aversive behavior5 and facial grimace in mice6 treated with CGRP, and other studies have demonstrated that CGRP antagonist drugs reduce nitric oxide-induced migraine-like behaviors in animals7,8. Recent clinical trials have shown success in treating migraine by blocking CGRP9,10, leading to multiple FDA-approved drugs targeting CGRP or its receptor. Preclinical assessment of migraine-related phenotypes has led to breakthroughs in clinical findings and is, therefore, essential to understanding some of the more complex aspects of migraine that are difficult to directly test in humans.

Despite numerous advantages, experiments using these rodent behavioral readouts of migraine are often restricted in the time points they can sample and can be subjective and prone to human experimental error. Many behavioral assays are limited at finer temporal resolutions, making it difficult to capture the more dynamic elements that occur on a sub-second timescale, such as at the level of brain activity. It has proven difficult to quantify the more spontaneous, naturally occurring elements of behavior over time at a temporal resolution meaningful for studying neurophysiological mechanisms. A way to identify migraine-like activity at faster timescales would allow external validation of migraine-like brain states. Such a readout, in turn, could be synchronized with brain activity to create more robust brain activity profiles of migraine.

One such migraine-related phenotype, facial grimace, is utilized across various contexts as a measurement of pain in animals that can be measured instantaneously and tracked over time11. Facial grimace is often used as an indicator of spontaneous pain based on the idea that humans (especially non-verbal humans) and other mammalian species display natural changes in facial expression when experiencing pain11. Studies measuring facial grimace as an indication of pain in mice in the last decade have utilized scales such as the Mouse Grimace Scale (MGS) to standardize the characterization of pain in rodents12. The facial expression variables of the MGS include orbital tightening (squint), nose bulge, cheek bulge, ear position, and whisker change. Even though the MGS has been shown to reliably characterize pain in animals13, it is notoriously subjective and relies on accurate scoring, which can vary across experimenters. Additionally, the MGS is limited in that it utilizes a non-continuous scale and lacks the temporal resolution needed to track naturally occurring behavior across time.

One way to combat this is by objectively quantifying a consistent facial feature. Squint is the most consistently trackable facial feature6 and accounts for the majority of the total variability in the data across all of the MGS variables (squint, nose bulge, cheek bulge, ear position, and whisker change)6. Because squint contributes most to the overall MGS score and reliably tracks the response to CGRP6,14, it is the most reliable way to track spontaneous pain in migraine mouse models, making squint a quantifiable, non-homeostatic behavior induced by CGRP. Several labs have used facial expression features, including squint, to represent potential spontaneous pain associated with migraine6,15.

Several challenges have remained in automating squint measurement in a way that can be coupled with mechanistic studies of migraine. For example, it has been difficult to reliably track squint without relying on a fixed position that must be calibrated in the same manner across sessions. Another challenge is carrying out this type of analysis on a continuous scale rather than on discrete scales like the MGS. To mitigate these challenges, we aimed to integrate machine learning, in the form of DeepLabCut (DLC), into our data analysis pipeline. DLC is a pose estimation machine learning model developed by Mathis and colleagues that has been applied to a wide range of behaviors16. Using their pose estimation software, we were able to train models that predict points on a mouse eye with near-human accuracy. This solves the issue of repetitive manual scoring while also drastically increasing temporal resolution. Further, by creating these models, we have produced a repeatable means to score squint and estimate migraine-like brain activity over larger experimental groups. Here, we present the development and validation of this method for tracking squint behaviors in a way that can be time-locked to other mechanistic measurements such as neurophysiology. The overarching goal is to catalyze mechanistic studies requiring time-locked squint behaviors in rodent models.

Protocol

NOTE: All animals utilized in these experiments were handled according to protocols approved by the Institutional Animal Care and Use Committee (IACUC) of the University of Iowa.

1. Prepare equipment for data collection

  1. Ensure the availability of all necessary equipment and that the hardware used to run DLC has at least 8 GB of GPU memory. See the Table of Materials for information related to hardware and software.
    NOTE: Data can be collected in any format but must be converted to a format readable by DLC before analysis. The most common formats are AVI and MP4.
  2. Configure at least one camera so that one eye of the animal can be detected. If both eyes are visible, apply additional filtering, as the second eye may interfere with tracking. See section 10 for an example of such filtering for the data provided here.
  3. Install DLC using the package found at Deeplabcut.github.io/DeepLabCut/docs/installation.
  4. In the camera setup, include a single camera at a side angle (~90°) to the mouse. To follow this example, sample at 10 Hz, with the mice restrained but free to move their heads through the full range of motion relative to the body. Position the camera 2-4 inches from the animal.

2. Set up DLC

  1. After installing DLC, create the conda environment to work in. To do this, navigate to the folder where the DLC software was downloaded using the change directory (cd) command:
    cd folder_name
    NOTE: This will be where the DEEPLABCUT.yaml file is located.
  2. Run the first command below to create the environment, then activate it with the second command.
    conda env create -f DEEPLABCUT.yaml
    conda activate Deeplabcut
    NOTE: Ensure the environment is activated before each use of DLC.
  3. After activating the environment, open the graphical user interface (GUI) with the following command and begin creating the model.
    python -m deeplabcut

3. Create the model

  1. After the GUI is opened, begin creating a model by clicking on Create New Project at the bottom.
  2. Give the project a meaningful, unique name so it can be identified later, and enter a name in the experimenter field. Check the Location section to see where the project will be saved.
  3. Select Browse folders and find the videos to train the model. Select Copy videos to project folder to place copies of the videos in the project directory rather than referencing them from their original location.
  4. Select Create to generate a new project on the computer.
    NOTE: The videos must cover the full range of the behavior you will observe (i.e., squint, non-squint, and all behaviors in between). The model will only be able to recognize behavior similar to that in the training data, and if some components of the behavior are missing, the model may have trouble recognizing it.
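    NOTE: The project creation in sections 2 and 3 can also be scripted from a Python session instead of the GUI. The following is a minimal sketch in which the project name, experimenter name, and video paths are placeholders to be replaced with your own.
    import deeplabcut

    # Hypothetical project name, experimenter, and training videos; replace with your own.
    config_path = deeplabcut.create_new_project(
        "squint-tracking",
        "experimenter1",
        ["videos/training_mouse01.avi", "videos/training_mouse02.avi"],
        copy_videos=True,  # equivalent to checking "Copy videos to project folder" in the GUI
    )
    print(config_path)  # path to the project's config.yaml, used in all later steps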

4. Configure the settings

NOTE: This is where details like what points to track, how many frames to extract from each training video, default labeling dot size, and variables relating to how the model will train can be defined.

  1. After creating the model, edit the configuration settings by selecting Edit config.yaml, then select Edit to open the configuration file and specify key settings relating to the model.
  2. Modify bodyparts to include all parts of the eye to track, then set numframes2pick to the number of frames needed per training video to reach 400 total frames. Lastly, set dotsize to 6 so that the default label size is small enough to be accurately placed around the edges of the eye.
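    NOTE: As an alternative to typing values into config.yaml, the same fields can be set programmatically. The sketch below assumes DLC's auxiliaryfunctions.edit_config helper and uses placeholder landmark names and a placeholder path; editing config.yaml directly in a text editor is equivalent.
    from deeplabcut.utils import auxiliaryfunctions

    config_path = "path/to/config.yaml"  # hypothetical path; returned by create_new_project above
    edits = {
        # Placeholder landmark names: two central top/bottom eyelid pairs plus framing points
        "bodyparts": ["eye_top1", "eye_bottom1", "eye_top2", "eye_bottom2",
                      "eye_front", "eye_back"],
        "numframes2pick": 50,  # e.g., 8 training videos x 50 frames = 400 total frames
        "dotsize": 6,          # small default dot size for labels along the eyelid margin
    }
    auxiliaryfunctions.edit_config(config_path, edits)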

5. Extract training frames

  1. Following configuration, navigate to the Extract Frames tab at the top of the GUI and select Extract Frames at the bottom right of the page.
  2. Monitor progress using the loading bar at the bottom of the GUI.

6. Label training frames

  1. Navigate to the Label Frames tab in the GUI and select Label Frames. Find the new window that shows folders for each of the selected training videos. Select the first folder, and a new labeling GUI will open.
  2. Label the points defined during configuration for every frame of the selected video. After all frames are labeled, save them and repeat the process for the next video.
  3. For adequate labeling of squint, place two points on the upper eyelid and two on the lower eyelid, as close to the widest part of the eye (center) as possible, forming two vertical top/bottom pairs. Approximate squint as the average of the two distances between these pairs.
    NOTE: When labeling, DLC does not automatically save progress. Periodic saving is recommended to avoid loss of labeled data.

7. Create a training dataset

  1. After manually labeling, navigate to the Train network tab and select Train network to prompt the software to start training the model.
  2. Monitor progress in the command window.

8. Evaluate the network

  1. After network training is complete, navigate to the Evaluate network tab and select Evaluate network. Wait a few moments until the blue loading circle disappears, indicating that the network has finished self-evaluating and the model is ready for use.

9. Analyze data/generate labeled videos

  1. To analyze videos, navigate to the Analyze videos tab. Select Add more videos and select the videos to be analyzed.
  2. Select Save result(s) as csv if a csv output of the data is sufficient.
  3. When the videos have all been acquired, select Analyze videos at the bottom to begin analysis of the videos.
    NOTE: This step must be completed before generating labeled videos in step 9.5.
  4. After the videos have been analyzed, navigate to the Create videos tab and select the analyzed videos.
  5. Select Create videos and the software will begin generating labeled videos that represent the data shown in the corresponding .csv.
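    NOTE: Sections 5-9 can also be run from a Python session; the calls below mirror the GUI workflow (frame extraction, labeling, training, evaluation, analysis, and labeled-video creation) with default parameters. The config path and video list are placeholders.
    import deeplabcut

    config_path = "path/to/config.yaml"        # hypothetical path to the project's config.yaml
    videos = ["videos/treatment_mouse01.avi"]  # hypothetical video(s) to analyze

    deeplabcut.extract_frames(config_path, mode="automatic", userfeedback=False)  # section 5
    deeplabcut.label_frames(config_path)                 # section 6: opens the labeling GUI
    deeplabcut.create_training_dataset(config_path)      # section 7: package labeled frames for training
    deeplabcut.train_network(config_path)                # section 7: monitor progress in the console
    deeplabcut.evaluate_network(config_path)             # section 8: reports train/test error
    deeplabcut.analyze_videos(config_path, videos, save_as_csv=True)  # section 9: writes .csv output
    deeplabcut.create_labeled_video(config_path, videos)              # section 9: labeled videos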

10. Process final data

  1. Apply the macros found at https://research-git.uiowa.edu/rainbo-hultman/facial-grimace-dlc to convert raw data into the format used for this analysis (i.e., Euclidean distance).
  2. Import and apply the macros labeled Step1 and Step2 to the .csv to filter out all suboptimal data points and convert the data to an averaged Euclidean distance for the centermost points at the top and bottom of the eye.
  3. Run the macro called Step3 to mark each point as 0 (no squint) or 1 (squint) based on the threshold value in the script, which is set to 75 pixels.
    NOTE: The parameters for these macros may require adjustment depending on the experimental setup (see Discussion). The threshold for squint and the automatic filter for the maximum value of the eye are parameters that may be changed depending on the size of the animal and the distance from the camera. The values used for removing suboptimal points may also be adjusted depending on how selectively the data need to be filtered.
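    NOTE: For reference, the following is a minimal pandas sketch of the filtering and conversion the macros perform, written with hypothetical body-part and file names; the macros at the link above are the reference implementation. The thresholds shown (0.92 likelihood, 200-pixel maximum, 75-pixel squint cutoff) are the values used in this study and may need adjustment.
    import numpy as np
    import pandas as pd

    LIKELIHOOD_MIN = 0.92   # drop frames where any eye point falls below this likelihood
    MAX_EYE_PX = 200        # drop distances larger than physically possible for the eye
    SQUINT_THRESHOLD = 75   # pixels; below this, a frame is scored as squint (1)

    # DLC writes a three-row column header (scorer / bodyparts / coords).
    df = pd.read_csv("treatment_mouse01DLC.csv", header=[0, 1, 2], index_col=0)  # hypothetical file
    df.columns = df.columns.droplevel(0)  # drop the scorer level, keep (bodypart, coordinate)

    # Keep only frames where every tracked point is reported with high likelihood.
    good = (df.loc[:, (slice(None), "likelihood")] >= LIKELIHOOD_MIN).all(axis=1)

    def pair_distance(top, bottom):
        """Euclidean distance between a top and bottom eyelid point in each frame."""
        return np.hypot(df[(top, "x")] - df[(bottom, "x")],
                        df[(top, "y")] - df[(bottom, "y")])

    # Average of the two central eyelid pairs (placeholder names from config.yaml).
    euclid = (pair_distance("eye_top1", "eye_bottom1")
              + pair_distance("eye_top2", "eye_bottom2")) / 2

    euclid = euclid.where(good & (euclid <= MAX_EYE_PX))  # suboptimal frames become NaN
    squint = (euclid < SQUINT_THRESHOLD).astype(int)      # 1 = squint, 0 = no squint (NaN -> 0)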

Representative Results

Here, we provide a method for the reliable detection of squint at high temporal resolution using DeepLabCut. We optimized training parameters, and we provide an evaluation of this method's strengths and weaknesses (Figure 1).

After training our models, we verified that they were able to correctly estimate the top and bottom points of the eyelid (Figure 2), which serve as the coordinate points for the Euclidean distance measure. Euclidean distance is defined as the average of the distances between the two pairs of top and bottom points of the eye. Our model was able to detect instances of non-squint (Figure 2A) and squint (Figure 2B). The blue dots indicate points used to determine the Euclidean distance for each frame. The green, yellow, orange, and purple dots were used to help the model correctly estimate the Euclidean distance and to decrease the likelihood value when the head is in a suboptimal position (i.e., accounting for head movement and changes in position across sessions). We then validated the accuracy of the model using a number of different methods.
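In formula form, with the two central top/bottom eyelid point pairs indexed by i, the per-frame squint metric is
\[
d = \frac{1}{2}\sum_{i=1}^{2}\sqrt{\left(x_{i}^{\mathrm{top}} - x_{i}^{\mathrm{bottom}}\right)^{2} + \left(y_{i}^{\mathrm{top}} - y_{i}^{\mathrm{bottom}}\right)^{2}}
\]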

To determine the ideal number of training frames for the model, we trained and tested four models with varying numbers of sampled frames (Figure 3). We first compared the root mean square error (RMSE) values between the test and training data to assess how well the models could predict test data they had not been trained on. This comparison showed that variability between the manually labeled points and the model-labeled points leveled off after 300 frames. This trend correlated with the reported average likelihoods, which also appeared to level off after 300 labeled frames. We used these reported likelihood values to filter out points with a likelihood below 0.92. These likelihood values indicate how confident the model is that a given point was labeled correctly based on the training data. We averaged these values for the points that contribute to the Euclidean distance metric to examine how well the models performed relative to one another. While there was no significant difference between 300 and 400 frames, we used 400 frames because the average likelihood exceeded 0.95, which is near our threshold for manual filtering and aligns with the threshold utilized in similar pose estimation models16.

Another way that we validated the accuracy of the model was with a confusion matrix comparing manually annotated frames to DLC-labeled frames. Two blinded individuals manually annotated 300 frames of the same eye in eight videos. We used these data to construct a confusion matrix to assess true and false positives and negatives (Figure 4), where manually scored data were used as the ground truth. For DLC, a positive squint value was recorded when the Euclidean distance was less than 75 pixels (i.e., the animal squints), and a negative value was recorded for values greater than 75 pixels (i.e., the animal does not squint). We found a positive predictive value of 96.96%, the percentage of DLC-predicted squint frames that were also manually annotated as squint, and a negative predictive value of 99.66%, the percentage of DLC-predicted non-squint frames that were also manually annotated as non-squint. These show the proportions of predicted positive and negative values that were correctly labeled. We also found a true positive rate of 98.1% and a true negative rate of 99.46%, which represent the model's accurate prediction of positive and negative values relative to all actual positive and negative values, respectively. Our Matthews correlation coefficient (MCC), the correlation between observed and predicted classifications, was 93.8%.
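These values follow the standard confusion-matrix definitions (with TP, FP, TN, and FN as in Figure 4):
\[
\mathrm{PPV} = \frac{TP}{TP+FP},\qquad \mathrm{NPV} = \frac{TN}{TN+FN},\qquad \mathrm{TPR} = \frac{TP}{TP+FN},\qquad \mathrm{TNR} = \frac{TN}{TN+FP}
\]
\[
\mathrm{MCC} = \frac{TP \cdot TN - FP \cdot FN}{\sqrt{(TP+FP)\,(TP+FN)\,(TN+FP)\,(TN+FN)}}
\]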

Once we were confident that our model reliably tracks squint, we compared this DLC method against a previously published squint tracking method using a preclinical migraine dataset14. We refer to this method as the area squint model (ASM) because it uses open eye area as the continuous variable measuring squint14. The ASM uses trained facial detection software combined with a custom MATLAB script to analyze the mean pixel area of the eye while excluding frames with a tracking error rate of >15%14. One major limitation is that the ASM is not open source and, therefore, not widely accessible. DLC allows for increased optimization and adaptability without requiring significant purchases of software and hardware.

We used a data set of 10 female and 10 male CD1 mice. Experimentally, all animals were acclimated in gentle restraints for 30 min over a total of 3 days prior to the start of recordings. Each animal was recorded for 5 min at baseline and then for 5 min during the treatment session. During treatment sessions, animals were treated with either PBS (vehicle) or 0.1 mg/kg CGRP (treatment) intraperitoneally to induce a migraine-like state. Data were collected in a well-lit room using cameras equipped with infrared light to illuminate the face, ensuring accurate landmark detection. The infrared camera included a Kowa LM35JC 2/3" 35 mm F1.6 manual iris C-mount lens with a focal distance of 254 mm and an appropriately adjusted aperture. After we collected the data, we utilized the ASM and DLC to analyze them. Since manual scoring has been conventionally utilized in the field to quantify facial grimace, with squint being one component of the facial grimace14, we also compared our data to manually scored data.

Based on previous findings that peripheral injection of CGRP induces a squint response in mice, we expected to observe significant differences in the squint response between vehicle and CGRP treatment6,14. We compared the ASM, manual, and DLC methods and found that our model robustly detected a squint phenotype, as did the manual and ASM methods (Figure 5). It is important to note that the ASM was previously used to assess CGRP-induced pain and squint. In that study, Rea et al. compared the squint response following CGRP to the squint response following formalin injection of the hind paw as a "more traditional" pain induction assay14. Moreover, CGRP is well documented as inducing touch hypersensitivity in mice, as measured with von Frey filaments3,17. Consistent with the field, we normalized the average squint during the treatment session to a 5 min pretreatment baseline for each animal and compared PBS-treated (n = 10) versus CGRP-treated (n = 10) animals. Statistical analyses of the PBS versus CGRP-treated groups are as follows. We found that CGRP-treated animals exhibited decreased mean pixel area using the area squint method of tracking (p = 0.012, Figure 5A) and exhibited decreased Euclidean distance when manually scored (p = 0.0007, Figure 5B) and when using our DLC model (p = 0.007, Figure 5C). When we compared each method over time in a single representative animal, the same pattern was observed (Figure 5). This animal showed a very clear squint phenotype in response to CGRP treatment but not to PBS. All models were able to detect these differences, but the data were most clearly represented in our DLC model (Figure 5). Precise and accurate metrics are especially important when data must be analyzed at finer resolutions where averaging is not indicative of the complete behavioral readout (e.g., brain activity). The DLC method of detecting squint in mice allows us to collect data on a millisecond timescale and time-lock it to measures of brain activity (e.g., local field potentials), which occur on a millisecond timescale. We can then utilize this technique to build a more robust profile of a brain state indicative of spontaneous pain in the context of migraine and other complex brain disorders.
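To illustrate the normalization and group comparison described above, the following minimal sketch runs an unpaired comparison on randomly generated placeholder ratios (one value per animal); it is not the authors' analysis code and does not reproduce the published statistics.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    # One value per animal: mean squint metric during treatment / mean during the 5-min baseline.
    # These values are random placeholders, not real data.
    pbs_norm = rng.normal(1.00, 0.05, size=10)   # hypothetical PBS-treated animals
    cgrp_norm = rng.normal(0.80, 0.10, size=10)  # hypothetical CGRP-treated animals

    t, p = stats.ttest_ind(pbs_norm, cgrp_norm)  # unpaired two-sample t-test (consistent with the df in Figure 5)
    print(f"t({pbs_norm.size + cgrp_norm.size - 2}) = {t:.3f}, p = {p:.4g}")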

Figure 1
Figure 1: Overview of the procedure for generating a trained network with DLC. General schematic of the process by which eye features of an animal are tracked and then analyzed using machine learning. Abbreviation: DLC = DeepLabCut.

Figure 2
Figure 2: Example of automated squint tracking in a representative CD1 mouse. (A) Example of a frame showing DLC tracking squint (colored dots) on the outline of the eye during the treatment day when the mouse is not squinting. (B) Example of a frame showing automated detection of squint on the treatment day, using our DLC model. Euclidean distance was measured using the average distance between B and C, the blue dots, on the top and bottom of the eye. The blue sets of dots at the top and bottom of the eye are used when tracking Euclidean distance. The other points (green, yellow, orange, purple) are framing landmarks used to both help the model estimate the Euclidean distance points and filter out suboptimal head positioning after data collection. Abbreviation: DLC = DeepLabCut.

Figure 3
Figure 3: Justification for the number of frames used to train the model. (A) Root-mean-square error (RMSE) analysis indicates the average distance between predicted and observed values for the test and training data sets. The training data set represents the frames sampled when training the model, and the test data set represents the non-training frames used to validate how well the model could identify similar but different images. We used five sets of training and test data and found that RMSE values leveled off around 300 frames for the test group. (B) The likelihood that a given point is correctly labeled (mean + SEM). This showed that 400 manually labeled frames were ideal because those data sets averaged above 0.95 likelihood while having an RMSE score closest to that of the training data. This meant the model was able to closely approximate the points it had been trained on while also reporting most of the frames with a high likelihood. Abbreviation: RMSE = root-mean-square error.

Figure 4
Figure 4: Confusion matrix for DLC squint measurements. We sampled 300 s from eight videos (five CGRP and three PBS) and compared those points to a manually labeled binary yes or no score for squint. We quantified predicted values as those identified by DLC and actual values as those scored manually by a human. We then compared these to the manually scored data to see how often squint was correctly identified relative to the manually scored binary yes or no squint score. Abbreviations: DLC = DeepLabCut; CGRP = calcitonin-gene-related peptide; PBS = phosphate-buffered saline; TP = true positives; FP = false positives; FN = false negatives; TN = true negatives; PPV = positive predictive value; NPV = negative predictive value; TPR = true positive rate; TNR = true negative rate; MCC = Matthews correlation coefficient.

Figure 5
Figure 5: Squint phenotype across three different models for detecting squint. The top two rows contain the same representative animal under each condition (PBS or CGRP) across the three models for detecting squint. The bottom row reflects averages across all animals. (A) There was a decrease in mean pixel area (mean overall pixel area/baseline) in CGRP-treated versus PBS-treated mice (t(18) = 2.805, p = 0.012) after processing all data using the previously published and validated area squint model14. (B) There was a similar response in manually scored data (t(18) = 4.064, p = 0.0007). (C) CGRP-treated mice showed a decreased average eyelid-to-eyelid distance (treatment Euclidean distance/pretreatment baseline Euclidean distance) compared with PBS-treated mice (t(18) = 3.040, p = 0.007) when utilizing DLC to process all data. N = 20 (10 females, 10 males). Error bars indicate mean ± SEM.

Discussion

This protocol provides an easily accessible, in-depth method for using machine-learning-based tools that can differentiate squint at near-human accuracy while maintaining the same (or better) temporal resolution as prior approaches. Primarily, it makes evaluation of automated squint more readily available to a wider audience. Our new method for evaluating automated squint has several improvements compared to previous models. First, it provides a more robust metric than the ASM by having fewer points directly contribute to the quantification of squint. This lessens the likelihood of false positives and negatives because the analysis relies on fewer points when generating the values that denote squint. In other words, the DLC model makes each point around the eye necessary but not sufficient for the inclusion of a time point. This allows us to filter suboptimal data using the same number of points as the ASM without relying on the greater variability that comes from so many constituent points. Additionally, we reduced potential human error by designing models that do not rely entirely on the accuracy of trained individuals.

When processing data, we found that our method accurately filtered suboptimal points and outlier points that were larger than what was possible, given the maximum size of the mouse eye (protocol section 10). We utilized macros that checked whether each of the 10 points surrounding the eye individually had a likelihood value greater than 0.92 and filtered any below that value. In the future, this can be adjusted to make the processed data more or less selective. The macros also filtered any Euclidean distance values greater than 200 pixels, as we found the greatest possible distance between the top and bottom of the eye was 150 pixels. This may need to change depending on the experimental setup. If the camera is not the same distance from the eye, then the maximum value could be significantly more or less. The strength of these macros is that they allowed us to extract measurements between the top and bottom of the eye in a way that was dependent on the model reporting a higher likelihood for all constituent points surrounding the eye.

DLC and the ASM are both limited in that they rely on the mouse being in a fixed position at a predetermined distance from the camera to allow consistent magnification scaling between baseline and treatment conditions. Thus, movement of the animal itself, incorrect positioning within the apparatus, or a change in the experimental procedure would compromise the ability of the model to detect the total area of the eye. Our model partially improves on these limitations by utilizing the Euclidean distance, that is, the vertical distance between the upper and lower eyelids, which allows for improved tracking despite differences in camera angle, movement of the animal, and experimental variation across sessions, without requiring additional recalibration. However, we acknowledge that improvements in normalization to account for head movement might result in even better tracking of squint in moving animals.

Another limitation of our method is that it filtered out points where the Euclidean distance approached zero, denoting closing of the eye. Despite filtering out these significant contributors to squint, we were still able to detect a CGRP-induced squint response more robustly than with prior methods (p = 0.007). Removing this component of squint becomes especially limiting when trying to compare with additional measures of interest, such as brain activity. We think that finding significance while removing those points shows the robustness of this method, but we acknowledge that removing these components of squint is not ideal. Future studies utilizing this method should include a greater number of outlier frames to better train the models to recognize squint as it approaches zero. Overall, the development of a method for reliably tracking automated squint may enable studies aimed at associating important features of naturally occurring behavior with brain states, allowing for robust investigation of brain activity profiles in contexts such as migraine.

Disclosures

The authors have nothing to disclose.

Acknowledgements

Thanks to Rajyashree Sen for insightful conversations. Thanks to the McKnight Foundation Neurobiology of Disease Award (RH), NIH 1DP2MH126377-01 (RH), the Roy J. Carver Charitable Trust (RH), NINDS T32NS007124 (MJ), Ramon D. Buckley Graduate Student Award (MJ), and VA-ORD (RR&D) MERIT 1 I01 RX003523-0 (LS).

Materials

CUDA toolkit 11.8
cuDNN SDK 8.6.0
Intel computer with Windows 11, 13th gen Intel processor
LabFaceX 2D Eyelid Tracker Add-on Module for a Free Roaming Mouse; FaceX LLC; NA; any camera that can record an animal's eye is sufficient, but this is our eye-tracking hardware
NVIDIA GPU driver, version 450.80.02 or higher
NVIDIA RTX A5500, 24 GB GDDR6; NVIDIA; 490-BHXV; any GPU that meets the minimum requirements specified for your version of DLC (currently 8 GB) is sufficient; we used an NVIDIA GeForce RTX 3080 Ti GPU
Python 3.9-3.11
TensorFlow version 2.10

References

  1. GBD 2017 Disease and Injury Incidence and Prevalence Collaborators. Global, regional, and national incidence, prevalence, and years lived with disability for 354 diseases and injuries for 195 countries and territories, 1990-2017: A systematic analysis for the Global Burden of Disease Study 2017. Lancet. 392 (10159), 1789-1858 (2018).
  2. Russo, A. F. CGRP as a neuropeptide in migraine: Lessons from mice. Br J Clin Pharmacol. 80 (3), 403-414 (2015).
  3. Wattiez, A. S., Wang, M., Russo, A. F. CGRP in animal models of migraine. Handb Exp Pharmacol. 255, 85-107 (2019).
  4. Hansen, J. M., Hauge, A. W., Olesen, J., Ashina, M. Calcitonin gene-related peptide triggers migraine-like attacks in patients with migraine with aura. Cephalalgia. 30 (10), 1179-1186 (2010).
  5. Mason, B. N., et al. Induction of migraine-like photophobic behavior in mice by both peripheral and central CGRP mechanisms. J Neurosci. 37 (1), 204-216 (2017).
  6. Rea, B. J., et al. Peripherally administered CGRP induces spontaneous pain in mice: Implications for migraine. Pain. 159 (11), 2306-2317 (2018).
  7. Kopruszinski, C. M., et al. Prevention of stress- or nitric oxide donor-induced medication overuse headache by a calcitonin gene-related peptide antibody in rodents. Cephalalgia. 37 (6), 560-570 (2017).
  8. Juhasz, G., et al. NO-induced migraine attack: Strong increase in plasma calcitonin gene-related peptide (CGRP) concentration and negative correlation with platelet serotonin release. Pain. 106 (3), 461-470 (2003).
  9. Aditya, S., Rattan, A. Advances in CGRP monoclonal antibodies as migraine therapy: A narrative review. Saudi J Med Med Sci. 11 (1), 11-18 (2023).
  10. Goadsby, P. J., et al. A controlled trial of erenumab for episodic migraine. N Engl J Med. 377 (22), 2123-2132 (2017).
  11. Mogil, J. S., Pang, D. S. J., Silva Dutra, G. G., Chambers, C. T. The development and use of facial grimace scales for pain measurement in animals. Neurosci Biobehav Rev. 116, 480-493 (2020).
  12. Whittaker, A. L., Liu, Y., Barker, T. H. Methods used and application of the mouse grimace scale in biomedical research 10 years on: A scoping review. Animals (Basel). 11 (3), 673 (2021).
  13. Langford, D. J., et al. Coding of facial expressions of pain in the laboratory mouse. Nat Methods. 7 (6), 447-449 (2010).
  14. Rea, B. J., et al. Automated detection of squint as a sensitive assay of sex-dependent calcitonin gene-related peptide and amylin-induced pain in mice. Pain. 163 (8), 1511-1519 (2022).
  15. Tuttle, A. H., et al. A deep neural network to assess spontaneous pain from mouse facial expressions. Mol Pain. 14, 1744806918763658 (2018).
  16. Mathis, A., et al. DeepLabCut: Markerless pose estimation of user-defined body parts with deep learning. Nat Neurosci. 21 (9), 1281-1289 (2018).
  17. Wattiez, A. S., et al. Different forms of traumatic brain injuries cause different tactile hypersensitivity profiles. Pain. 162 (4), 1163-1175 (2021).

Cite This Article
McCutcheon, N., Johnson, M. S., Rea, B., Ghumman, M., Sowers, L., Hultman, R. An Automated Squint Method for Time-syncing Behavior and Brain Dynamics in Mouse Pain Studies. J. Vis. Exp. (213), e67136, doi:10.3791/67136 (2024).
