A team of researchers led by Emory University has decoded visual images from a dog’s brain, using a machine-learning algorithm to analyze the patterns in the neural data in order to better understand how the canine mind reconstructs what it sees. The findings suggest that dogs are more attuned to actions in their environment than to who or what is performing those actions.
The scientists recorded fMRI neural data from two awake, unrestrained dogs as they watched videos in three 30-minute sessions, for a total of 90 minutes. The videos were shot from a dog’s perspective (at about waist height to a human) and contained scenes relevant to dogs’ experiences: dogs sniffing, playing, eating, or walking on a leash; cars, bikes, and motorcycles passing on a road; a cat entering a house; a deer crossing a path; and people sitting, hugging, kissing, eating, or offering a rubber bone or ball to the camera. The researchers then segmented the video data by time stamps into different classes, such as objects (for instance, dog, car, human, cat) and actions (sniffing, playing, eating).
Only two of the dogs trained for fMRI experiments had the focus and temperament to lie still and watch the three half-hour videos. Two humans also underwent the same experiment while lying in the fMRI scanner. The researchers then applied a machine-learning algorithm to the data, training it to map the brain data onto the object- and action-based classes.
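The decoding step described above can be sketched in code. The following is a minimal, hypothetical illustration (not the study’s actual pipeline or data): it simulates fMRI volumes as vectors of voxel activations, assumes three action classes, and trains a simple classifier to map brain data onto those classes, assessed with cross-validation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical setup: each row is one fMRI volume flattened into voxel
# activations; each label marks which action class was on screen.
n_volumes, n_voxels = 300, 50
labels = rng.integers(0, 3, size=n_volumes)  # 0=sniffing, 1=playing, 2=eating

# Simulate a distinct voxel pattern per class, plus measurement noise.
class_patterns = rng.normal(size=(3, n_voxels))
X = class_patterns[labels] + rng.normal(scale=1.0, size=(n_volumes, n_voxels))

# Decode: learn a mapping from brain data to action classes.
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, labels, cv=5)
print(f"mean decoding accuracy: {scores.mean():.2f}")  # chance level is ~0.33
```

Above-chance cross-validated accuracy is the usual evidence that the class information is actually present in the neural signal, rather than the model memorizing noise.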
The analysis revealed major differences between how human and dog brains function. “We humans are very object oriented,” said study senior author Gregory Berns, a professor of psychology at Emory. “There are ten times as many nouns as there are verbs in the English language because we have a particular obsession with naming objects. Dogs appear to be less concerned with who or what they are seeing and more concerned with the action itself.”
According to Professor Berns, dogs and humans have major differences in their visual systems too. Although dogs see only in shades of blue and yellow, they have a slightly higher density of vision receptors designed to detect movement. “It makes perfect sense that dogs’ brains are going to be highly attuned to actions first and foremost,” he explained. “Animals have to be very concerned with things happening in their environment to avoid being eaten or to monitor animals they might want to hunt. Action and movement are paramount.”
“While our work is based on just two dogs, it offers proof of concept that these methods work on canines,” added study lead author Erin Phillips, who conducted the research in Berns’ Canine Cognitive Neuroscience Lab. “I hope this paper helps pave the way for other researchers to apply these methods on dogs, as well as on other species, so we can get more data and bigger insights into how the minds of different animals work.”
The study is published in the Journal of Visualized Experiments.