ICASSP 2015 in Brisbane

MICbots

I’m flying tomorrow from Tokyo to Brisbane to attend the ICASSP 2015 conference. Who would have guessed I’d be back in Brisbane, at the very same conference center, 7 years after Interspeech 2008… If I’d had to pick a conference location to revisit, I’d probably have gone with Honolulu, but anyway.

I’ll be chairing a special session on Wednesday morning, “Audio for Robots – Robots for Audio”, which I am co-organizing with Emmanuel Vincent (INRIA) and Walter Kellermann (Friedrich-Alexander-Universität Erlangen-Nürnberg). I will also present the following two papers:

  • “MICbots: collecting large realistic datasets for speech and audio research using mobile robots,” with Emmanuel Vincent, John R. Hershey, and Daniel P. W. Ellis. [.pdf] [.bib]
    Abstract: Speech and audio signal processing research is a tale of data collection efforts and evaluation campaigns. Large benchmark datasets for automatic speech recognition (ASR) have been instrumental in the advancement of speech recognition technologies. However, when it comes to robust ASR, source separation, and localization, especially using microphone arrays, the perfect dataset is out of reach, and many data collection efforts have each made different compromises between the conflicting factors of realism, ground truth, and cost. Our goal here is to escape some of the most difficult trade-offs by proposing MICbots, a low-cost method for collecting large amounts of realistic data where annotations and ground truth are readily available. Our key idea is to use freely moving robots equipped with microphones and loudspeakers, playing recorded utterances from existing (already annotated) speech datasets. We give an overview of previous data collection efforts and the trade-offs they make, and describe the benefits of our robot-based approach. We finally explain how this method can be used to collect room impulse response measurements.
    (A generic sketch of the impulse response deconvolution step appears after this list.)
  • “Deep NMF for Speech Separation,” with John R. Hershey and Felix Weninger. [.pdf] [.bib]
    Abstract: Non-negative matrix factorization (NMF) has been widely used for challenging single-channel audio source separation tasks. However, inference in NMF-based models relies on iterative inference methods, typically formulated as multiplicative updates. We propose “deep NMF”, a novel non-negative deep network architecture which results from unfolding the NMF iterations and untying its parameters. This architecture can be discriminatively trained for optimal separation performance. To optimize its non-negative parameters, we show how a new form of back-propagation, based on multiplicative updates, can be used to preserve non-negativity without the need for constrained optimization. We show on a challenging speech separation task that deep NMF improves upon NMF in terms of accuracy and is competitive with conventional sigmoid deep neural networks, while requiring a tenth of the number of parameters.
    (A rough sketch of the unfolding idea also appears after this list.)
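For readers wondering what “unfolding the NMF iterations” looks like in practice, here is a minimal, unofficial sketch, not the paper’s exact formulation, loss, or training procedure: the classic KL-divergence multiplicative update for the activations is applied a fixed number of times, with one (potentially untied) dictionary per “layer”. The function name, shapes, and variable names below are my own illustrative assumptions.

    import numpy as np

    def unfolded_nmf(V, W_layers, eps=1e-12):
        """Infer non-negative activations H for a non-negative spectrogram V
        (F x T) by unrolling multiplicative KL-divergence NMF updates, one per
        "layer"; passing identical dictionaries gives ordinary iterative NMF
        inference, untying them per layer is the step toward a deep network."""
        F, T = V.shape
        H = np.ones((W_layers[0].shape[1], T))   # non-negative initialization
        ones = np.ones((F, T))
        for W in W_layers:                        # one layer per NMF iteration
            WH = W @ H + eps                      # current reconstruction
            # Multiplicative update: H stays non-negative if it starts non-negative
            H *= (W.T @ (V / WH)) / (W.T @ ones + eps)
        return H

    # Toy usage: random non-negative "spectrogram" and a shared dictionary
    rng = np.random.default_rng(0)
    V = np.abs(rng.standard_normal((257, 100)))   # F=257 bins, T=100 frames
    W = np.abs(rng.standard_normal((257, 40)))    # 40 basis spectra
    H = unfolded_nmf(V, [W] * 10)                 # 10 unfolded iterations

In deep NMF, the per-layer dictionaries are then trained discriminatively for separation, with a multiplicative form of back-propagation keeping them non-negative.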

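On the MICbots side, the paper’s exact impulse response measurement procedure is not spelled out in this post. As a generic illustration of how a robot playing a known signal through its loudspeaker can yield a room impulse response estimate at a microphone, here is a standard regularized frequency-domain deconvolution sketch; the function and parameter names are hypothetical choices of mine, not the paper’s.

    import numpy as np

    def estimate_rir(played, recorded, fs, rir_len_s=1.0, reg=1e-3):
        """Estimate a room impulse response by regularized frequency-domain
        deconvolution of a known excitation (played) from its recording at
        one microphone (recorded), both sampled at fs."""
        n = len(played) + len(recorded) - 1       # linear-convolution length
        P = np.fft.rfft(played, n)
        R = np.fft.rfft(recorded, n)
        # Regularized inverse filtering avoids blow-ups at frequencies where
        # the excitation carries little energy (Tikhonov-style regularization).
        H = R * np.conj(P) / (np.abs(P) ** 2 + reg)
        h = np.fft.irfft(H, n)
        return h[: int(rir_len_s * fs)]           # keep the first rir_len_s seconds

With a logarithmic sine sweep or white noise as the played signal, this kind of deconvolution is a common way to measure the impulse response between a loudspeaker and a microphone.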
If you are attending the conference, don’t hesitate to come by and ask the hard questions…

(The photo above shows me happily posing with Dot, Hot, and Lot, our first three MICbots.)
