Google Translate for the zoo? How humans might talk to animals

The writer is founder of Sifted, an FT-backed media company covering European start-ups

New technological tools often enable fresh scientific discoveries. Take the case of Antonie van Leeuwenhoek, the 17th-century Dutch amateur scientist and pioneering microscopist, who built at least 25 single-lens microscopes with which he studied fleas, weevils, red blood cells, bacteria and his own spermatozoa, among other things.

In hundreds of letters to the Royal Society and other scientific institutions, van Leeuwenhoek meticulously recorded his observations and discoveries, not always for a receptive readership. But he has since been recognised as the father of microbiology, having helped us understand and fight all manner of diseases.

Centuries later, new technological tools are enabling a global community of biologists and amateur scientists to explore the natural world of sound in richer detail and at greater scale than ever before. Just as microscopes helped humans observe things not visible to the naked eye, so ubiquitous microphones and machine-learning models enable us to listen to sounds we cannot otherwise hear. We can eavesdrop on an astonishing soundscape of planetary “conversations” among bats, whales, honey bees, elephants, plants and coral reefs. “Sonics is the new optics,” Karen Bakker, a professor at the University of British Columbia, tells me.

Billions of dollars are pouring into so-called generative artificial intelligence, such as OpenAI’s ChatGPT, with scores of start-ups being launched to commercialise these foundation models. But in one sense, generative AI is something of a misnomer: these models are mostly used to rehash existing human knowledge in novel combinations rather than to generate anything genuinely new.

What may have a bigger scientific and societal impact is “additive AI”: using machine learning to explore specific, newly created data sets — derived, for example, from satellite imagery, genome sequencing, quantum sensing or bio-acoustic recordings — and extend the frontiers of human knowledge. When it comes to sonic data, Bakker even raises the tantalising possibility that, within the next two decades, humans could achieve interspecies communication by using machines to translate and replicate animal sounds, creating a kind of Google Translate for the zoo. “We do not yet possess a dictionary of Sperm Whalish, but we now have the raw ingredients to create one,” Bakker writes in her book The Sounds of Life.

This sonic revolution has been triggered by advances in both hardware and software. Cheap, durable, long-lasting microphones and sensors can be attached to trees in the Amazon, rocks in the Arctic or dolphins’ backs, enabling real-time monitoring. That stream of bioacoustic data is then processed by machine-learning algorithms, which can detect patterns in infrasonic (low-frequency) or ultrasonic (high-frequency) natural sounds inaudible to the human ear.
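To make that concrete, here is a minimal sketch, in Python, of how the software half of such a pipeline might flag ultrasonic activity in a field recording. It is an illustration rather than any real project's code: the file name, sample rate and detection threshold are invented, and it simply compares the energy above the limit of human hearing with the total energy in each time slice.

```python
from scipy.io import wavfile
from scipy.signal import spectrogram

HUMAN_HEARING_LIMIT_HZ = 20_000  # sounds above this are ultrasonic

# Hypothetical field recording; the sensor must sample faster than
# 40 kHz for any ultrasonic content to be captured at all.
rate, audio = wavfile.read("sensor_recording.wav")
if audio.ndim > 1:            # mix stereo down to mono
    audio = audio.mean(axis=1)

freqs, times, power = spectrogram(audio, fs=rate, nperseg=1024)

# Share of each time slice's energy that sits above human hearing.
ultrasonic = power[freqs > HUMAN_HEARING_LIMIT_HZ].sum(axis=0)
ratio = ultrasonic / (power.sum(axis=0) + 1e-12)

# Flag windows dominated by ultrasonic energy, e.g. bat echolocation.
for t, r in zip(times, ratio):
    if r > 0.5:               # illustrative threshold
        print(f"possible ultrasonic event at {t:.2f}s (ratio {r:.2f})")
```

A real system would layer species classification on top, but the principle is the same: the machine listens where our ears cannot.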

But, Bakker stresses, this data only makes sense when combined with human observations of natural behaviour gained from painstaking fieldwork by biologists or crowdsourced analysis by amateurs. For example, Zooniverse, the citizen-science research initiative that can mobilise more than 1mn volunteers, has helped gather all kinds of data and training sets for machine-learning models. “People think that AI is like magical fairy dust that you can sprinkle on everything, but that is not really how it works,” Bakker says. “We are using machine learning to automate and accelerate what humans were already doing.”
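The division of labour Bakker describes (humans label a manageable sample of clips, machines scale those labels up) can be sketched in a few lines. In the toy example below, random numbers stand in for the acoustic features and the volunteer labels; only the workflow, not the data, is meant to be realistic.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Stand-in for acoustic features of 200 clips tagged by volunteers
# through a Zooniverse-style project: 64 features per clip.
features = rng.normal(size=(200, 64))
labels = rng.integers(0, 2, size=200)  # 1 = "call heard", 0 = background

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(features, labels)   # the human-made labels are the training set

# The trained model can then sweep through months of unlabelled audio,
# far more than volunteers could ever review by hand.
new_clips = rng.normal(size=(5, 64))
print(clf.predict(new_clips))
```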

These research projects have also led to some practical and commercial spin-offs. Studies of honeybee communication inspired scientists at Georgia Tech to create a “hive mind” algorithm to optimise the efficiency of servers in internet hosting centres. Cryptographers have been studying the buzzes, clicks, creaks and squeaks of whales to understand whether their “bionic Morse code” could be mimicked to encrypt communications.
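The bee-inspired idea behind that server work is easy to caricature in code. The published algorithm is more elaborate, but the toy sketch below, with invented sites and numbers, captures the gist: servers, like foraging bees, are periodically recruited towards the request queues currently offering the most “nectar”, in proportion to demand.

```python
import random

random.seed(1)

# Pending requests per hosted site (illustrative numbers).
queues = {"site_a": 120, "site_b": 45, "site_c": 300}
servers = [f"srv{i}" for i in range(10)]
assignment = {s: random.choice(list(queues)) for s in servers}

for _ in range(20):                # each round, some servers "watch the dance floor"
    for srv in servers:
        if random.random() < 0.3:  # chance of abandoning the current patch
            # Recruitment in proportion to demand, the way a waggle
            # dance recruits foragers to the richest flower patches.
            assignment[srv] = random.choices(
                list(queues), weights=list(queues.values()))[0]

counts = {site: sum(1 for s in servers if assignment[s] == site)
          for site in queues}
print(counts)  # busier sites end up with more servers
```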

Bakker also champions real-time protection of the biodiversity of endangered regions. Machine-learning systems monitoring rainforest microphones can flag the sounds of buzz saws as well as the cries of panicked animals.

It is hard to reconcile this emerging field of bioacoustic data with the argument that scientific research is no longer disruptive. Bakker counters that even if our current paradigm of scientific understanding is exhausted, that only means we need to develop a new one. “It is just a failure of our imagination,” she says. We are only at the very beginning of investigating our sonic universe. Who knows what we may find?
