
CRY WOLF Project

Correlating Wolf Behavior and Wolf Communication

Wapiti Pack at Elk Creek

Footage Courtesy of Bob Landis

How can new artificial intelligence algorithms help humans understand wolf communication? Yellowstone National Park, Grizzly Systems, and other partners have deployed autonomous recording units (ARUs) to monitor wolf presence and distribution across vast landscapes. Accurate population and occupancy estimates play a vital role in shaping state and federal management policies. Using advanced artificial intelligence algorithms, scientists can efficiently analyze enormous data sets to identify wolf vocalizations and track the presence of other species native to the Rocky Mountains. Because wolf vocalizations carry over relatively long distances, ARUs can enhance existing census efforts.


Passive acoustic monitoring (PAM) has emerged as a cost-effective, noninvasive technique for wolf surveys, providing detection probabilities that exceed those attained through camera trapping. We are building ARUs with onboard classifiers for real-time detection, as well as ML models for post-processing analysis of the behavioral functions of wolf vocalizations. While bioacoustic monitoring is not a novel concept, the advent of advanced AI algorithms has opened up new possibilities to reduce costs and enhance researcher productivity in telemetry monitoring (for more information, see Using machine learning to decode animal communication). The Greater Yellowstone Ecosystem (GYE) holds realistic, lower-cost potential for bioacoustic research because of the long-term knowledge already gained from radio collaring, flight surveys, camera traps, and field surveys. This collaborative research project therefore aims to collect 24x7x365 bioacoustic data at predetermined locations in the GYE that can be set aside, much like genetic data, and used later for research on any species that vocalizes below 12 kHz.

Questions

 

Some of the fundamental questions driving the research objectives include:
 

  1. When and how often do wolves vocalize?

  2. Can we identify individuals or packs via idiolects and dialects?

  3. Are there different functions for different types of wolf howls or chorus howls?

  4. Can we count the number of wolves in a chorus howl?

  5. Can low-cost and low-touch acoustic recorders inform population estimates?

  6. Can acoustic recorders be practically used for mitigation of livestock conflict?


Objectives

 

The aim of this collaborative research project is to explore and evaluate bioacoustic parameters of wolf vocalizations that will:
 

  1. build a 24x7x365 bioacoustics and observations dataset in the GYE for any species or soundscape below 12 kHz

  2. test systems that automate real-time and non-real-time collection and classification (by species and individual) of bioacoustics and imagery data in the cloud (see t.ly/2dQ0q and t.ly/o0_xO)

  3. model wolf occupancy, distribution, and abundance from acoustic data

  4. evaluate behavioral and ecological questions about the purpose and flexibility of communication in wolves ("come here", "where are you", "this is me/us") - (see t.ly/JeZBD)

  5. create GAN AI models to identify sound elements, patterns, and groupings in wolf howls that will facilitate identification of ecological and behavioral correlates, and thereby the sounds’ potential communicative significance (see t.ly/o9ke1)

  6. provide opportunities for education and outreach on this aspect of animal communication and its applications for conservation and stewardship

  7. test non-lethal use cases for livestock-conflict scenarios

Management & Financial Benefits

  • Reduce labor costs associated with manually gathering population data.

  • Reduce redundancy of acoustic data collection across species.

  • Reduce the costs and safety issues associated with using aircraft to gather population information.

  • Expand the growing list of tools for predator-livestock conflict mitigation.

 

Conservation Benefits


Collaboration Partners

Partner logos include Turner, the University of Cambridge, the Gordon and Betty Moore Foundation, and Conservation Nation, among others.

Cloud Data Platform

The notion that wolves howl at the full moon is a popular cultural myth, perpetuated in folklore, literature, and even some movies, despite a lack of evidence. Less mythic in nature is the general opinion in the scientific community that wolves tend to vocalize most at dusk and dawn. The graph above shows a dot for each wolf vocalization event over a six-month period; the y-axis runs from 0 (midnight) to 23 (11 pm), and each dot appears at the hour of day the event occurred. As the chart shows, during these months the three wolf packs monitored across the GYE vocalized mostly during darkness.
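As a toy illustration of the aggregation behind such a chart, the sketch below bins detection timestamps by hour of day and computes the fraction falling outside daylight hours. The event list and the dawn/dusk cutoffs are hypothetical stand-ins, not the project's actual data or settings.

```python
from collections import Counter
from datetime import datetime

# Hypothetical detection timestamps standing in for the real ARU event log.
events = [
    "2023-05-01T04:12:00", "2023-05-01T05:03:00", "2023-05-02T21:40:00",
    "2023-05-03T23:15:00", "2023-05-04T02:55:00", "2023-05-04T12:30:00",
]

def events_by_hour(timestamps):
    """Count vocalization events per hour of day (0 = midnight .. 23 = 11 pm)."""
    return Counter(datetime.fromisoformat(t).hour for t in timestamps)

def fraction_in_darkness(counts, dawn=6, dusk=20):
    """Fraction of events outside an approximate daylight window [dawn, dusk)."""
    dark = sum(n for hour, n in counts.items() if hour < dawn or hour >= dusk)
    return dark / sum(counts.values())

counts = events_by_hour(events)
dark_fraction = fraction_in_darkness(counts)  # 5 of the 6 toy events fall in darkness
```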

Click the arrows at the bottom of the portal to view the various types of analytics we are performing on the data.

And click the links in the left column of the table to download and play any of the vocalizations we have recorded.


Gabe, a high school intern, annotating a wolf chorus howl for our machine learning algorithm

A Little about the Technology

 

Supervised Wolf Bioacoustic Detection

There is extensive precedent for applying ML for supervised bioacoustic detection tasks; examples include a sperm whale click detector, a humpback detector, and a model that detects and classifies birdsong, among many others. Employing similar methods, we can train a convolutional neural network (CNN) either from scratch or using pretrained weights to classify an acoustic window as non-signal or wolf signal depending on the absence or presence of a wolf vocalization in the given acoustic segment.
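As one concrete piece of such a pipeline, the sketch below shows how annotated howl intervals could be turned into per-window wolf / non-wolf training labels. The window length, overlap threshold, and annotation format here are illustrative assumptions, not the project's actual settings.

```python
def label_windows(duration_s, howl_intervals, window_s=3.0, min_overlap_s=0.5):
    """Label fixed-length windows of a recording as wolf signal (1) or non-signal (0).

    A window is positive when at least min_overlap_s seconds of it
    intersect an annotated (start, end) howl interval.
    """
    labels = []
    t = 0.0
    while t + window_s <= duration_s:
        # Total overlap between this window and all annotated howls.
        overlap = sum(
            max(0.0, min(t + window_s, end) - max(t, start))
            for start, end in howl_intervals
        )
        labels.append((t, 1 if overlap >= min_overlap_s else 0))
        t += window_s
    return labels

# A 12-second recording with one annotated howl from 4.0 s to 7.0 s.
windows = label_windows(12.0, [(4.0, 7.0)])  # [(0.0, 0), (3.0, 1), (6.0, 1), (9.0, 0)]
```

These (window, label) pairs are what a CNN would consume as spectrogram/label training examples.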

 

Further, we take advantage of collaborative work done by other teams, such as PNW_Cnet, Synature, and BirdNET (D. Sassover), regarding AI-based wolf detection. We encourage academic institutions to combine efforts with our public and private institutions to iterate more quickly on the best general classifier, focusing on a common pipeline and growing the dataset (and its relevant ambient correlates) across canids and eventually all large carnivore species.


Supervised Wolf Chorus Counting

To our knowledge, there have been no attempts at automated acoustic counting of overlapping signals, though several approaches may be promising.

  • Train a model (e.g., an LSTM-CRF) to predict the number of overlapping spectral elements at fine timescales using open-source data, and assess the model’s ability to generalize to new datasets. Alternatively, train a model to predict the number of wolves in a chorus directly from human annotations of how many wolves are vocalizing concurrently.

  • Train a model to align video with acoustic data, as has been done for humans playing musical instruments.
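For either annotation-based approach above, the training target can be derived by counting how many howls overlap at each time step. A minimal sweep over hypothetical per-wolf (start, end) intervals:

```python
def concurrency_profile(intervals, step_s=1.0, duration_s=None):
    """Number of simultaneously vocalizing wolves at each time step,
    derived from per-wolf (start_s, end_s) howl annotations."""
    if duration_s is None:
        duration_s = max(end for _, end in intervals)
    profile = []
    t = 0.0
    while t < duration_s:
        # Count intervals active at time t.
        profile.append(sum(1 for start, end in intervals if start <= t < end))
        t += step_s
    return profile

# Three annotated howls; two overlap between 2.0 s and 3.0 s.
howls = [(0.0, 3.0), (2.0, 5.0), (6.0, 7.0)]
profile = concurrency_profile(howls)  # [1, 1, 2, 1, 1, 0, 1]
peak = max(profile)                   # at most 2 wolves heard at once
```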

 

Unsupervised Wolf Source Separation

Building on previous work in source separation, particularly the unsupervised MixIT training algorithm used to separate overlapping birdsong mixtures, we can attempt to separate wolf choruses into predictions for the individuals present in the chorus. Though the approach is not functionally limited in the number of sources it can handle, it is unclear how the model will perform as the number of concurrently vocalizing wolves increases.
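To make the MixIT idea concrete, the toy sketch below scores a set of estimated sources by the best assignment of sources back to two reference mixtures, which is the core of the MixIT objective. The brute-force search and list-based "signals" are simplifications for illustration; a real separator network is omitted.

```python
import itertools

def mixit_loss(mix1, mix2, est_sources):
    """Minimum reconstruction error over all 2^M assignments of
    estimated sources to the two reference mixtures (MixIT objective)."""
    n = len(mix1)
    best = float("inf")
    for bits in itertools.product([0, 1], repeat=len(est_sources)):
        err = 0.0
        for i in range(n):
            # Remix the estimated sources according to this assignment.
            r1 = sum(s[i] for s, b in zip(est_sources, bits) if b == 0)
            r2 = sum(s[i] for s, b in zip(est_sources, bits) if b == 1)
            err += (mix1[i] - r1) ** 2 + (mix2[i] - r2) ** 2
        best = min(best, err)
    return best

# Toy 1-D "signals" standing in for howl waveforms.
s1 = [1.0, 0.0, 2.0, -1.0]
s2 = [0.5, 1.5, -0.5, 0.0]
s3 = [-1.0, 2.0, 0.0, 1.0]
mix1 = [a + b for a, b in zip(s1, s2)]  # two wolves recorded together
mix2 = s3[:]                            # a third wolf recorded separately
loss = mixit_loss(mix1, mix2, [s1, s2, s3])  # perfect estimates give loss 0.0
```

In training, this loss would be backpropagated through the separator so it learns to output sources that can be regrouped into the input mixtures.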

Unsupervised Meaning Discovery in Wolf Vocalizations

The CETI project has produced ML methods suggesting that vocalizations can be analyzed to reveal meaningful units in the sounds even with little or no understanding of their meanings. The approach in the paper "Approaching an Unknown Communication System by Latent Space Exploration and Causal Inference," modified for wolf vocalizations, is promising.
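As a cartoon of the latent-space idea, the sketch below clusters 2-D feature points so that recurring "sound elements" group together. The feature points are synthetic stand-ins; real work would cluster learned embeddings of howl segments rather than hand-made coordinates.

```python
def kmeans_2d(points, k=2, iters=20):
    """Tiny k-means with deterministic farthest-point initialization."""
    centers = [points[0]]
    while len(centers) < k:
        # Next center: the point farthest from all existing centers.
        far = max(points, key=lambda p: min(
            (p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2 for c in centers))
        centers.append(far)
    labels = []
    for _ in range(iters):
        # Assign each point to its nearest center, then recompute centers.
        labels = [min(range(k), key=lambda j: (p[0] - centers[j][0]) ** 2
                      + (p[1] - centers[j][1]) ** 2) for p in points]
        for j in range(k):
            members = [p for p, lab in zip(points, labels) if lab == j]
            if members:
                centers[j] = (sum(p[0] for p in members) / len(members),
                              sum(p[1] for p in members) / len(members))
    return labels

# Two tight synthetic clusters of "howl element" features.
cluster_a = [(0.0, 0.1), (0.1, 0.0), (0.05, 0.05)]
cluster_b = [(5.0, 5.1), (5.1, 5.0), (5.05, 5.05)]
labels = kmeans_2d(cluster_a + cluster_b)  # [0, 0, 0, 1, 1, 1]
```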

 

Our Data Set

  • Wolf Recordings (video optional) annotated with start and stop times of wolf howl events.

    • Chorus Howls: 19.6 hours of uninterrupted recordings spanning 20 years
    • Individual Howls: 5.3 hours of uninterrupted recordings spanning 20 years
  • Ambient and Similar-to-Wolf Data: 

    • The classifier was also trained on 10 hours of ambient recordings from 5 locations in the GYE, as well as on elk and coyote vocalizations, aircraft, and vehicle noise, to enable optimal performance in classifying wolf vs. non-wolf/background sounds. Aircraft are among the top sources of false positives.

Our Github Repositories (email us for access)

Related Scientific Research

Some Types of Wolf Vocalizations


Some situations that can evoke wolf howls
