Sound imaging system for visualizing multiple sound sources from two species
1 Kyoto University, Graduate School of Informatics, Japan
2 Kyoto University, Graduate School of Informatics, Japan
3 RIKEN, Brain Science Institute, Japan
We present a sound imaging system that visualizes the spatio-temporal structure of choruses, that is, when and where small nocturnal animals call in their habitat. A precise investigation method is crucial for ethological studies because the spatio-temporal structure of a chorus is affected by many factors. Although choruses of many individuals are widely observed across species, investigating their structure in the field is difficult because many competing sound sources exist. Observation is especially difficult for small nocturnal animals, such as frogs and crickets, because they call at night at the same time and stop calling if observers come too close.
Our solution is a sound imaging system. It consists of dozens of sound-to-light converting devices called Fireflies, each of which captures nearby sounds and drives a light-emitting diode (LED), and an off-the-shelf video camera. The Fireflies are deployed in the habitat, so multiple sound sources are captured by different Fireflies. The observers' influence on the habitat is minimal because they can leave the Fireflies and the video camera in the field. The system can also capture different spatial distributions of animals by changing the deployment of the Fireflies. Results of indoor experiments for performance evaluation and of outdoor field tests with Japanese tree frogs (Hyla japonica) were reported in Mizumoto et al., Journal of Comparative Physiology A, 2011.
In this presentation, we evaluate the current version of the Firefly and then describe the concept and preliminary results of a new Firefly with band-pass filters for distinguishing different species. The Firefly is evaluated with respect to three characteristics. (1) Volume to light intensity: we measured the light intensity of Fireflies using five sensitivity configurations and ten sound volumes (dB SPL). Based on this measurement, we can estimate the volume of a sound from the intensity of the light. (2) Battery life: the results of indoor experiments suggest that the system works for at least three hours. (3) Waterproofing: two years of experiments under various humidity and weather conditions demonstrated that the system works at humidities from 40% to 100%, including light rain, because each Firefly consists of robust discrete parts enclosed in a plastic box.
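As a rough illustration of how the volume-to-light calibration can be inverted, the following Python sketch interpolates a measured calibration curve to estimate the sound pressure level from the LED intensity observed in a video frame. The calibration numbers and the intensity scale are hypothetical placeholders, not the values measured in our experiments.

import numpy as np

# Hypothetical calibration for one sensitivity configuration:
# LED intensity (8-bit pixel value) measured at known sound levels.
calib_db  = np.array([60, 65, 70, 75, 80, 85, 90])     # dB SPL (assumed values)
calib_led = np.array([12, 30, 55, 90, 140, 200, 250])  # pixel intensity (assumed values)

def estimate_db_spl(led_intensity):
    """Estimate the sound volume (dB SPL) from an observed LED intensity
    by linear interpolation of the calibration curve."""
    return np.interp(led_intensity, calib_led, calib_db)

# Example: a Firefly blob with a mean pixel value of 120 in one video frame.
print(estimate_db_spl(120.0))  # roughly 78 dB SPL with these placeholder values

In practice one such curve would be needed for each sensitivity configuration of the Firefly.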
The new Firefly, named Firefly2, is designed to distinguish the calls of different species solely from the time series of blinking. It has two LEDs and band-pass filters. Because the original Firefly has only one LED, we sometimes had difficulty distinguishing the calls of different species. If two species have different frequency components, Firefly2 indicates each calling pattern with its own color. The band-pass filters can be customized to the target animals because Firefly2 is implemented on a programmable microcomputer. Both indoor and outdoor experiments with Firefly2 were conducted on H. japonica and Schlegel's green tree frog (Rhacophorus schlegelii) because their mating seasons overlap. The band-pass filters were set to 2450-2550 Hz for R. schlegelii and 2950-3050 Hz for H. japonica. The results confirmed that the calls of the two frog species can be distinguished using the two LEDs of Firefly2.
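As an offline illustration of the band-pass concept (not the firmware running on Firefly2's microcomputer), the following Python sketch filters a short audio frame through the two bands given above and decides which of the two LEDs should be lit. The sampling rate, energy threshold, and LED-to-species assignment are assumptions made for the example.

import numpy as np
from scipy.signal import butter, sosfilt

FS = 16000  # sampling rate in Hz (assumed)

def bandpass_sos(low_hz, high_hz, fs=FS, order=4):
    """Design a Butterworth band-pass filter for one species' call band."""
    return butter(order, [low_hz, high_hz], btype="bandpass", fs=fs, output="sos")

# Band-pass filters matching the abstract: R. schlegelii and H. japonica.
SOS_SCHLEGELII = bandpass_sos(2450, 2550)
SOS_JAPONICA   = bandpass_sos(2950, 3050)

def led_states(frame, threshold=1e-3):
    """Return (schlegelii_led_on, japonica_led_on) for one short audio frame:
    each LED is lit when the mean energy in its band exceeds the threshold."""
    e_schlegelii = np.mean(sosfilt(SOS_SCHLEGELII, frame) ** 2)
    e_japonica   = np.mean(sosfilt(SOS_JAPONICA, frame) ** 2)
    return e_schlegelii > threshold, e_japonica > threshold

# Example: a synthetic 3000 Hz tone should light only the H. japonica LED.
t = np.arange(0, 0.05, 1.0 / FS)
print(led_states(0.1 * np.sin(2 * np.pi * 3000 * t)))  # expected: (False, True)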
Acknowledgements
This study was partially supported by Grant-in-Aid for Challenging Exploratory Research (No.23650097), Grant-in-Aid for JSPS Fellows (No. 23.6572), and RIKEN's Special Postdoctoral Researcher Program.
References
Takeshi Mizumoto, Ikkyu Aihara, Takuma Otsuka, Ryu Takeda, Kazuyuki Aihara, and Hiroshi G. Okuno, Sound Imaging of Nocturnal Animal Calls in Their Natural Habitat, Journal of Comparative Physiology A, Vol. 197 No. 9, pp. 915-921, 2011. DOI: 10.1007/s00359-011-0652-7
Keywords: acoustic communication, nocturnal animal, sound imaging, visualization
Conference: Tenth International Congress of Neuroethology, College Park, Maryland, United States, 5 Aug - 10 Aug 2012.
Presentation Type: Poster (but consider for participant symposium and student poster award)
Topic: Communication
Citation: Mizumoto T, Awano H, Aihara I, Otsuka T and Okuno HG (2012). Sound imaging system for visualizing multiple sound sources from two species. Conference Abstract: Tenth International Congress of Neuroethology. doi: 10.3389/conf.fnbeh.2012.27.00247
Received: 30 Apr 2012; Published Online: 07 Jul 2012.
*Correspondence: Mr. Takeshi Mizumoto, Kyoto University, Graduate School of Informatics, Kyoto, Kyoto, 606-8501, Japan, mizumoto@kuis.kyoto-u.ac.jp