Object recognition analysis in mice using nose-point digital video tracking

https://doi.org/10.1016/j.jneumeth.2007.11.002

Abstract

Preferential exploration of novel locations and objects by rodents has been used to test the effects of various manipulations on object recognition memory. However, manual scoring is time-consuming, requires extensive training, and is subject to inter-observer variability. Since rodents explore primarily by sniffing, we assessed the ability of new nose-point video tracking software (NPVT) to collect object recognition data automatically. Mice performed a novel object/novel location (NO/NL) task, and data collected by NPVT, two expert observers, and one inexperienced observer were compared. Percent time spent exploring the objects was correlated between the two expert observers and between NPVT and each expert observer. In contrast, the inexperienced observer's scores correlated with neither expert observer nor with NPVT; NPVT thus collected more reliable data than the inexperienced observer. NPVT and the expert observers gave similar group averages for arbitrarily assigned groups of mice, whereas the inexperienced observer's analysis gave different results. Finally, NPVT generated valid results in a NO/NL experiment comparing mice expressing human apolipoprotein E3 versus E4, a risk factor for age-related cognitive decline. Video tracking with nose-point detection generates useful analyses of rodent object recognition task performance and may prove useful for other behavioral tests.

Introduction

Digital video tracking has revolutionized animal behavior analysis. Until recently, highly trained observers had to score behavioral variables manually, either in real time during behavioral experiments or from video recordings (Adams and Jones, 1984, Lamberty and Gower, 1991, Save et al., 1992a, Save et al., 1992b). Behavioral analysis therefore required a large resource investment, both for training observers and for manual scoring. Digital video tracking programs greatly reduce this burden: they are easy to use, which reduces training requirements, and they collect data automatically, which frees up human resources (Grossmann and Skinner, 1996, Morris, 1984). However, conventional video tracking programs track only the center point of an animal. They are therefore poorly suited to tests such as object recognition, in which multiple body points are involved in the behavior and should be tracked simultaneously, and their use compromises the reliability of such tests.

Video tracking programs have recently been developed to include nose-point detection capabilities. The CinePlex software (Plexon, Dallas, TX) can simultaneously track the center and nose-point of a rat by tracking LED lights affixed to the animal. In contrast to this approach, newer software allows simultaneous tracking of multiple body points without the need for LEDs (Ethovision XT, Noldus Information Technology, Wageningen, Netherlands). Other programs can automatically calculate outcome variables, such as object ‘sniffing’, that incorporate the position of the nose-point of an animal (ObjectScan, Clever Sys., Reston, VA and ViewPoint Life Sciences, Montreal, Canada). However, to our knowledge, no studies have been published that use such a system to automatically collect and analyze rodent object recognition data. In this study, we determined whether the new nose-point video tracking (NPVT) capabilities would allow automatic analysis of performance in the novel object/novel location test (NO/NL). The NO/NL is an object recognition task that assesses both hippocampus-independent (novel object recognition) and hippocampus-dependent (novel location recognition) forms of object recognition memory in mice (Benice et al., 2006), and is sensitive to effects of genetic factors (Malleret et al., 2001), age (Benice et al., 2006), and circulating androgen levels in combination with genetic factors (Pfankuch et al., 2005). Manual analysis of object recognition performance has proved very labor-intensive and highly vulnerable to inter-observer differences in data collection. To obtain useful data, observers must have extensive experience recognizing when mice are actually exploring the objects. For example, a nose-first approach to within 2–4 cm of the object surface is scored as exploring the object.
Other behaviors such as climbing on the objects, using the objects as a platform to attempt escape from the arena, or exploring under or around the objects, are not scored as exploration (Benice et al., 2006). The prediction in this experiment was that NPVT could emulate the manual method of NO/NL scoring by counting object exploration when the nose-point of the mouse was detected within a 2–4 cm radius ‘object zone’ surrounding an object, with one object zone defined per object. In addition, climbing and exploring underneath the objects could be excluded by simultaneously removing data points in which the center point of the mouse was also in the object zone. Furthermore, NPVT data were compared to data collected by two expert observers and one inexperienced observer. We hypothesized that NO/NL data collected by NPVT would more closely resemble data collected by expert observers compared to data collected by an inexperienced observer.
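The zone-based rule described above can be expressed compactly in code. The sketch below is an illustrative reconstruction, not the authors' or the NPVT software's actual implementation: the coordinate format, function names, and the specific 3 cm zone radius (chosen from within the 2–4 cm range mentioned above) are all assumptions.

```python
import math

ZONE_RADIUS_CM = 3.0  # assumed value within the 2-4 cm range used for manual scoring


def is_exploring(nose_xy, center_xy, object_xy, radius=ZONE_RADIUS_CM):
    """Score a frame as object exploration if the nose-point falls inside
    the circular 'object zone' while the center-point does not; the
    center-point exclusion removes frames where the mouse is climbing on
    or sitting over the object."""
    nose_in = math.dist(nose_xy, object_xy) <= radius
    center_in = math.dist(center_xy, object_xy) <= radius
    return nose_in and not center_in


def percent_time_exploring(frames, object_xy, radius=ZONE_RADIUS_CM):
    """frames: sequence of (nose_xy, center_xy) tuples sampled at a fixed
    frame rate, so the fraction of frames equals the fraction of time.
    Returns the percent of frames scored as exploration."""
    frames = list(frames)
    hits = sum(is_exploring(n, c, object_xy, radius) for n, c in frames)
    return 100.0 * hits / len(frames)
```

For example, a frame with the nose 1 cm from the object and the body center 10 cm away would count as exploration, whereas a frame with both points inside the zone (animal on top of the object) would not.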

Section snippets

Animals

Thirty-two 6-month-old male mice of various apolipoprotein E (APOE) genotypes (n = 1 mouse lacking apoE, n = 3 human apoE2 targeted replacement mice, n = 12 human apoE3 targeted replacement mice, and n = 16 human apoE4 targeted replacement mice; Sullivan et al., 1997), bred in our laboratory, were tested. All mice shared the same C57BL/6J background. The mice were kept on a 12:12 h light–dark schedule (lights on at 6 a.m.) with lab chow (PicoLab Rodent Diet 20, #5053; PMI Nutrition International, St.

NPVT, inexperienced observer, expert observers, and correlation

The scores for percentage time exploring were highly and significantly correlated between the two expert observers for each object (Table 1). NPVT scores for each object were significantly correlated with both of the expert observers (p < 0.01 for all correlations). In contrast, the inexperienced observer's scores only correlated significantly with one expert observer for the ‘horse’ object (p < 0.01), but failed to correlate with either expert observer for any other object (p > 0.10). Similar to the
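Inter-observer agreement of this kind is typically quantified as a Pearson correlation over the paired per-mouse percent-time scores. The sketch below is generic and illustrative; the function name is an assumption and the example data in the test are invented, not values from the study.

```python
import math


def pearson_r(xs, ys):
    """Pearson correlation coefficient between two observers' percent-time
    scores for the same set of mice (paired by mouse)."""
    if len(xs) != len(ys):
        raise ValueError("observers must score the same mice")
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

In practice one would compute r separately for each object and each observer pair (expert vs. expert, NPVT vs. each expert, inexperienced vs. each expert), as reported in Table 1.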

Discussion

In this study, we assessed the utility of multi-point video tracking for automatically scoring exploratory behavior in the NO/NL task. Automating the collection of NO/NL data can save hours of human labor and greatly increase test throughput, enabling the use of this test for genetic studies, while maintaining reliability comparable to expert manual scoring. Object exploration data collected by NPVT were compared to data manually scored by two expert

Acknowledgements

We would like to thank Peter van Meer, Megan Warner, and Sarah Kazcmarek for help with the manual scoring of object recognition data and all members of the Noldus team, and especially Ruud Tegelenbosch and Will van Dommelen, for their continuous technical support for this project. This work was supported by EMF AG-NS-0201, NASA NNJ05HE63G, Alzheimer's Association IIRG-05-14021, and T32 NS007 466-05.
