First comparative approach to touchscreen-based visual object–location paired-associates learning in humans (<em>Homo sapiens</em>) and a nonhuman primate (<em>Microcebus murinus</em>).

A recent study suggests that a specific, touchscreen-based task on visual object–location paired-associates learning (PAL), the so-called Different PAL (dPAL) task, allows effective translation from animal models to humans. Here, we adapted the task to a nonhuman primate (NHP), the gray mouse lemur, and provide the first evidence for the successful comparative application of the task to humans and NHPs. Young human adults reach the learning criterion in considerably fewer sessions (by one order of magnitude) than young adult NHPs, which is likely due to humans' faster, deliberate rejection of ineffective learning strategies and their almost immediate rule generalization. At criterion, however, all human subjects solved the task either by applying a visuospatial rule or, more rarely, by memorizing all possible stimulus combinations and responding correctly based on global visual information. An error-profile analysis in humans and NHPs suggests that successful learning in NHPs likewise rests either on the formation of visuospatial associative links or on more reflexive, visually guided stimulus–response learning. This classification of the NHPs is further supported by an analysis of individual response latencies, which are considerably longer in NHPs classified as spatial learners. Our results therefore support the high translational potential of the standardized, touchscreen-based dPAL task by providing the first comparable empirical evidence for two different cognitive processes underlying dPAL performance in primates. (PsycINFO Database Record (c) 2018 APA, all rights reserved)