Bioinformatics, Computational and Systems Biology
Beyonca Mitchell
Undergraduate
Georgia Institute of Technology
Atlanta, Georgia, United States
Gregory Carr
Principal Investigator
Lieber Institute for Brain Development/Johns Hopkins University
Baltimore, Maryland, United States
Ye Li
Staff Scientist
Lieber Institute for Brain Development/Johns Hopkins University
Baltimore, Maryland, United States
Specialty Track: Data Analysis and Deep Learning
Sustained attention is the ability to focus on a specific task for a prolonged period of time. In psychiatric disorders such as schizophrenia and ADHD, sustained attention is impaired. Attention deficits can exacerbate other symptoms of these disorders, including social isolation, learning disabilities, and overall diminished cognitive function. In humans, attention is usually measured using continuous performance tests (CPTs): participants produce a behavioral response when presented with one set of stimuli (S+, or targets) and withhold the response when shown a different set (S-, or non-targets). A mouse version, the rodent continuous performance test (rCPT), was developed for use within Bussey-Saksida touchscreen chambers. Measuring task engagement is straightforward in human CPTs but difficult in the rCPT. Monitoring task engagement throughout attention tasks is vital for understanding the cognitive processes driving changes in performance: for example, a mouse can fail to respond either because it saw the stimulus but did not identify it as a target, or because it was asleep in the corner and missed the trial entirely. The goal of my project is to develop a machine learning-based method for tracking task engagement over time and to understand how task engagement affects performance in the rCPT. We used DeepLabCut (DLC), a machine learning platform for markerless pose estimation, together with Visual Field Analysis (VFA) to determine each mouse's physical location within the touchscreen chamber and the portions of the chamber within its visual field, as a proxy for task engagement.
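As a minimal sketch of this idea (not the actual DLC/VFA pipeline used here), DLC keypoints for the nose and ears can be converted into a per-frame head-direction vector, and a frame can be scored as oriented when the screen falls within an assumed visual-field half-angle; the keypoint inputs, screen coordinate, and half-angle below are illustrative assumptions.

    import numpy as np

    def oriented_to_screen(nose, left_ear, right_ear,
                           screen_xy=(640.0, 0.0), half_angle_deg=90.0):
        """Flag frames where the screen lies within the assumed visual field.

        nose, left_ear, right_ear: (n_frames, 2) arrays of DLC keypoint
        coordinates; screen_xy and half_angle_deg are placeholder values.
        """
        head = nose - (left_ear + right_ear) / 2      # head-direction vector
        to_screen = np.asarray(screen_xy) - nose      # nose-to-screen vector
        cos_a = np.sum(head * to_screen, axis=1) / (
            np.linalg.norm(head, axis=1) * np.linalg.norm(to_screen, axis=1))
        angle = np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))
        return angle <= half_angle_deg                # True = oriented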
For this study, we used 24 C57BL/6J mice (16 males and 8 females), group-housed (4/cage), maintained on a 12-hour light/dark cycle (lights on at 0600 hours), and food-restricted to 85-90% of free-feeding body weight. Mice were handled for 2-3 days before being introduced to a highly palatable food reward, Nesquik strawberry milk. Following chamber habituation, mice advanced to rCPT training. During Stage 1 of the rCPT, mice were trained to press a white square on the screen for strawberry milk rewards (20 µL) on a fixed-ratio 1 schedule for 45 minutes. If a mouse misses the target, a 2-second inter-trial interval begins and the next trial starts. After mice earn 60 rewards in two Stage 1 sessions, they advance to Stage 2, where 60 rewards are also required. An array of horizontal or vertical bars (counterbalanced within the group) replaces the white square from Stage 1 as the target (S+). In Stage 3, a non-target (S-) is added and mice must discriminate between S+ and S-; performance is documented with the sensitivity measure d' from signal detection theory, a composite measure that is robust to response bias. Seven Stage 3 sessions, with a minimum d' of 0.6 on the last two, are required to progress to the 90-minute Time-On-Task (TOT) probe stage. Amphetamine, a psychostimulant, is administered at small doses 30 minutes prior to these sessions. DeepLabCut and Visual Field Analysis were combined into a package that analyzes recorded TOT sessions and reports whether the animal was oriented or nonoriented toward the screen during each miss.
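For reference, d' is the z-transformed hit rate minus the z-transformed false-alarm rate. The illustration below uses one common correction for rates of exactly 0 or 1 (not necessarily the convention used in this study), and the example counts are hypothetical.

    from scipy.stats import norm

    def d_prime(hits, misses, false_alarms, correct_rejections):
        # d' = z(hit rate) - z(false-alarm rate), with rates nudged away
        # from 0 and 1 so the inverse-normal transform stays finite.
        n_t = hits + misses
        n_nt = false_alarms + correct_rejections
        hr = min(max(hits / n_t, 0.5 / n_t), 1 - 0.5 / n_t)
        far = min(max(false_alarms / n_nt, 0.5 / n_nt), 1 - 0.5 / n_nt)
        return norm.ppf(hr) - norm.ppf(far)

    # Example: 70 hits, 30 misses, 20 false alarms, 80 correct rejections
    print(round(d_prime(70, 30, 20, 80), 2))  # 1.37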
During the TOT probe stage, we expected non-orientation to the screen to increase over time, reflecting the well-described vigilance decrement. In Figure 1, for both male and female mice, there was no significant difference among target groups; the 50% target group showed the lowest oriented duration and the highest nonoriented duration, while the 80% and 20% target groups demonstrated similar rates of orientation. Target stimuli appear most often for the 80% group and least often for the 20% group, providing more opportunities to miss a target and greater motivation to maintain engagement so that a rare target is not missed, respectively. Using output from the DLC and VFA package, total hits were broken down into oriented and nonoriented hits, and total misses into oriented and nonoriented misses, in a nested bar graph comparing control data (hits) with non-control data (misses); this comparison measures the validity of the package. Figure 2 shows that the package has an error rate of about 10%: because an animal must be oriented toward the screen for the target stimulus to fall within its visual field, nonoriented hits should be 0%, yet the program classified 10% of hits as nonoriented. Some of this error may reflect trials in which a mouse touched the screen with the side of its body or tail while nonoriented. Sessions that combine high hit rates with high nonoriented readouts will be analyzed further to investigate these cases. Additionally, about 55% of misses were oriented and 45% were nonoriented; these nearly equal rates suggest that disengagement contributed to misses during the TOT probe. These calculations were made across all target groups for the TOT probe stage. An additional cohort is currently running the same experiment, and results are expected to replicate these findings.
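A hedged sketch of this validity check, with hypothetical counts: because every genuine hit requires orientation to the screen, the share of hits labeled nonoriented estimates the package's misclassification rate.

    def classifier_error_rate(hits_oriented, hits_nonoriented):
        # Hits labeled nonoriented should not occur, so their share of
        # all hits estimates the DLC/VFA package's error rate.
        return hits_nonoriented / (hits_oriented + hits_nonoriented)

    print(classifier_error_rate(hits_oriented=90, hits_nonoriented=10))  # 0.1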
Research Project funded by the Lieber Institute for Brain Development
We declare no other conflicts of interest.
[1] Josserand, Mathilde, et al. “Visual Field Analysis: A Reliable Method to Score Left and Right Eye Use Using Automated Tracking.” Behavior Research Methods, vol. 54, no. 4, 2021, pp. 1715–1724, https://doi.org/10.3758/s13428-021-01702-6.
[2] Kim, Chi Hun, et al. “The Continuous Performance Test (rCPT) for Mice: A Novel Operant Touchscreen Test of Attentional Function.” Psychopharmacology, vol. 232, no. 21–22, 2015, https://doi.org/10.1007/s00213-015-4081-0.
[3] Mathis, Alexander, et al. “DeepLabCut: Markerless Pose Estimation of User-Defined Body Parts with Deep Learning.” Nature Neuroscience, vol. 21, 2018, www.nature.com/articles/s41593-018-0209-y.