
New Tool for Analyzing Mouse Vocalizations May Provide Additional Insights for Autism Modeling

Signal processing technique improves analysis of ultrasonic vocalizations

Vocalization plays a significant role in social communication across species, from speech in humans to song in birds. Male mice produce ultrasonic vocalizations in the presence of females, and both sexes sing during friendly social encounters. Mice are genetically well characterized and have been used extensively in autism research, among other areas, but until now there have been limitations to studying their ultrasonic vocalizations.

In a unique collaboration between Children’s Hospital Los Angeles and the USC Viterbi School of Engineering, researchers have developed and demonstrated a novel signal-processing tool that enables unbiased, data-driven analysis of these sounds. The study was published in the journal Neuron on May 3.

Research into the underlying neurobiological basis and heritable nature of vocalizations in humans and animals has identified promising genes and neural networks involved in vocal production, auditory processing and social communication. “Understanding the complicated vocalizations of mice – and how they relate to their social behavior – will be crucial to advancing vocal and social communication research, including understanding how genes that affect vocal communication relate to children with developmental disorders like autism,” said Pat Levitt, PhD, Simms/Mann Chair in Developmental Neurogenetics at Children’s Hospital Los Angeles and the W.M. Keck Provost Professor in Neurogenetics at the Keck School of Medicine of USC.

The novel signal-processing tool provides rapid, automated, unsupervised and time- and date-stamped analysis of the ultrasonic vocalizations of mice. Because each vocalization carries a time and date stamp, the investigators expect the tool to be useful for correlating vocalizations with video-recorded behavioral interactions, allowing additional information to be mined from mouse models relevant to the social deficits experienced by persons with autism.

According to Allison Knoll, PhD, of CHLA, a first co-author on the study, researchers in the field have long worked to interpret the meaning of mouse vocalizations by categorizing the sounds with a syllable classification system, in which discrete sounds are defined as syllables. Because mice produce such a wide variety of ultrasonic vocalizations, researchers analyzing this information have had to develop manual or semi-automated ways of categorizing and combining sounds they perceived to be similar.

“This tool removes bias by fully automating the processing of vocalizations using signal-processing methods employed in human speech and language analysis,” said Knoll. The signal-processing tool, called Mouse Ultrasonic Profile ExTraction (MUPET), is available through open-access software.
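
To give a rough sense of what automated syllable detection involves, the core idea can be sketched in a few lines of Python. This is only a simplified illustration, not MUPET’s actual pipeline; the sampling rate, frequency band and energy threshold below are assumptions chosen for the example.

import numpy as np
from scipy.signal import chirp, spectrogram

FS = 250_000  # assumed sampling rate, typical for ultrasonic recordings

# Synthesize 1 s of low-level noise with two chirp "syllables" at 40-80 kHz.
t = np.arange(FS) / FS
audio = 0.01 * np.random.randn(FS)
for start in (0.2, 0.6):
    seg = (t >= start) & (t < start + 0.05)
    audio[seg] += chirp(t[seg] - start, f0=40_000, f1=80_000, t1=0.05)

# Short-time spectrogram; keep only energy in the ultrasonic band.
freqs, frames, sxx = spectrogram(audio, fs=FS, nperseg=512, noverlap=256)
band = (freqs >= 35_000) & (freqs <= 110_000)
energy = sxx[band].sum(axis=0)

# Flag frames well above the noise floor, then merge consecutive active
# frames into (onset, offset) syllable intervals. The first transition is
# a rising edge here because the recording starts with silence.
active = energy > 10 * np.median(energy)
edges = np.flatnonzero(np.diff(active.astype(int))) + 1
for on, off in zip(edges[::2], edges[1::2]):
    print(f"syllable: {frames[on]*1000:.1f}-{frames[off]*1000:.1f} ms")

In practice, each detected syllable would then be described by spectral features before any comparison or classification step.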

First co-author Maarten Van Segbroeck, PhD, of the USC Viterbi School of Engineering’s Signal Analysis and Interpretation Laboratory (SAIL), said, “Researching animal vocalizations is traditionally a very time-consuming manual effort, as one needs to annotate and analyze the data. With MUPET, researchers can now automatically process many hours of mouse ultrasonic vocalizations in just minutes. By combining our expertise in human speech processing with unsupervised machine learning, we can rapidly and automatically process large amounts of audio recordings and capture them into a repertoire of ‘syllable’ units. By comparing these repertoires across mice and across different studies, MUPET permits researchers to find patterns and differences in their vocalizations. We are very excited to see how MUPET can help researchers open new avenues to understanding and interpreting animal vocalization behavior.”
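
The repertoire-building step Van Segbroeck describes can likewise be pictured with a short, hypothetical clustering sketch. MUPET’s own feature extraction and model are not reproduced here; the k-means algorithm, feature dimensions and cluster count below are illustrative assumptions.

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Stand-in data: pretend each detected syllable has been reduced to a
# 64-dimensional spectral feature vector, with three underlying types.
centers = rng.normal(size=(3, 64))
features = np.vstack([c + 0.1 * rng.normal(size=(200, 64)) for c in centers])

# Cluster the syllables into a repertoire of K units without any labels;
# each centroid stands for one repertoire "syllable" type.
K = 3
model = KMeans(n_clusters=K, n_init=10, random_state=0).fit(features)

# Usage counts per repertoire unit can then be compared across animals,
# strains or studies.
print("syllable usage per unit:", np.bincount(model.labels_, minlength=K))

Comparing such per-unit usage counts between groups of animals is one simple way repertoire differences could surface.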

Shrikanth Narayanan, PhD, the Niki & C. L. Max Nikias Chair in Engineering at USC, is an electrical engineer, computer scientist and trained linguist who oversees the SAIL lab that developed the software. He said, “The ability to uncover patterns automatically from vast amounts of multimodal behavioral data (animal vocalizations and beyond) in an objective, scalable and efficient manner using mathematical principles and machine learning algorithms opens up exciting possibilities for scientific discovery. It enhances expert analysis with machine intelligence and can accelerate both basic research and its translation. The promise and impact of such engineering methods in biomedical and clinical realms continue to be profound.”

The tool can be found at: http://sail.usc.edu/mupet.