Journal Papers

2021

  • K. Kamo, Y. Mitsui, Y. Kubo, N. Takamune, D. Kitamura, H. Saruwatari, Y. Takahashi, and K. Kondo, “Joint-diagonalizability-constrained multichannel nonnegative matrix factorization based on time-variant multivariate complex sub-Gaussian distribution,” Elsevier Signal Processing, vol. 188, p. 108183, Jun. 2021.
  • T. Nakamura, S. Kozuka, and H. Saruwatari, “Time-Domain Audio Source Separation with Neural Networks Based on Multiresolution Analysis,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 29, pp. 1687–1701, 2021.
  • T. Nakamura and H. Kameoka, “Harmonic-Temporal Factor Decomposition for Unsupervised Monaural Separation of Harmonic Sounds,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 29, pp. 68–82, 2021.
  • Y. Saito, T. Nakamura, Y. Ijima, K. Nishida, and S. Takamichi, “Non-parallel and many-to-many voice conversion using variational autoencoders integrating speech recognition and speaker verification,” Acoustical Science and Technology, vol. 42, no. 1, pp. 1–11, 2021.
  • Y. Saito, S. Takamichi, and H. Saruwatari, “Perceptual-similarity-aware deep speaker representation learning for multi-speaker generative modeling,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 29, pp. 1033–1048, 2021.
  • T. Saeki, Y. Saito, S. Takamichi, and H. Saruwatari, “Real-time full-band voice conversion with sub-band modeling and data-driven phase estimation of spectral differentials,” IEICE Transactions on Information and Systems, vol. E104.D, no. 7, pp. 1002–1016, 2021.
  • A. Aiba, M. Yoshida, D. Kitamura, S. Takamichi, and H. Saruwatari, “Noise Robust Acoustic Anomaly Detection System with Nonnegative Matrix Factorization Based on Generalized Gaussian Distribution,” IEICE Transactions on Information and Systems, vol. E104.D, no. 3, pp. 441–449, 2021.
  • T. Saeki, S. Takamichi, and H. Saruwatari, “Incremental Text-to-Speech Synthesis Using Pseudo Lookahead with Large Pretrained Language Model,” IEEE Signal Processing Letters, vol. 28, pp. 857–861, 2021.
  • N. Ueno, S. Koyama, and H. Saruwatari, “Directionally weighted wave field estimation exploiting prior information on source direction,” IEEE Transactions on Signal Processing, vol. 69, pp. 2383–2395, 2021.
  • Y. Mitsufuji, N. Takamune, S. Koyama, and H. Saruwatari, “Multichannel blind source separation based on evanescent-region-aware non-negative tensor factorization in spherical harmonic domain,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 29, pp. 607–617, 2021.
  • K. Mitsui, T. Koriyama, and H. Saruwatari, “Deep Gaussian Process Based Multi-speaker Speech Synthesis with Latent Speaker Representation,” Elsevier Speech Communication, vol. 132, pp. 132–145, 2021.
  • S. Mizoguchi, Y. Saito, S. Takamichi, and H. Saruwatari, “DNN-based low-musical-noise single-channel speech enhancement based on higher-order-moments matching,” IEICE Transactions on Information and Systems, 2021. (accepted)

2020

  • N. Makishima, Y. Mitsui, N. Takamune, D. Kitamura, H. Saruwatari, Y. Takahashi, and K. Kondo, “Independent deeply learned matrix analysis with automatic selection of stable microphone-wise update and fast sourcewise update of demixing matrix,” Elsevier Signal Processing, vol. 178, p. 107753, Sep. 2020.
  • M. Aso, S. Takamichi, N. Takamune, and H. Saruwatari, “Acoustic model-based subword tokenization and prosodic-context extraction without language knowledge for text-to-speech synthesis,” Elsevier Speech Communication, vol. 125, pp. 53–60, Sep. 2020.
  • Y. Kubo, N. Takamune, D. Kitamura, and H. Saruwatari, “Blind Speech Extraction Based on Rank-Constrained Spatial Covariance Matrix Estimation With Multivariate Generalized Gaussian Distribution,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 28, pp. 1948–1963, Jun. 2020.
  • H. Tamaru, Y. Saito, S. Takamichi, T. Koriyama, and H. Saruwatari, “Generative moment matching network-based neural double-tracking for synthesized and natural singing voices,” IEICE Transactions on Information and Systems, vol. E103-D, no. 3, pp. 639–647, May 2020.
  • J. Koguchi, S. Takamichi, M. Morise, H. Saruwatari, and S. Sagayama, “DNN-based full-band speech synthesis using GMM approximation of spectral envelope,” IEICE Transactions on Information and Systems, vol. E103.D, no. 12, pp. 2673–2681, 2020.
  • Y. Saito, K. Akuzawa, and K. Tachibana, “Joint adversarial training of speech recognition and synthesis models for many-to-one voice conversion using phonetic posteriorgrams,” IEICE Transactions on Information and Systems, vol. E103.D, no. 9, pp. 1978–1987, 2020.
  • H. Tamaru, S. Takamichi, and H. Saruwatari, “Perception analysis of inter-singer similarity in Japanese song,” Acoustical Science and Technology, vol. 41, no. 5, pp. 804–807, 2020.
  • S. Takamichi, Y. Saito, N. Takamune, D. Kitamura, and H. Saruwatari, “Phase Reconstruction from Amplitude Spectrograms Based on Directional-Statistics Deep Neural Networks,” Elsevier Signal Processing, vol. 169, 2020.
  • S. Takamichi, R. Sonobe, K. Mitsui, Y. Saito, T. Koriyama, N. Tanji, and H. Saruwatari, “JSUT and JVS: free Japanese voice corpora for accelerating speech synthesis research,” Acoustical Science and Technology, vol. 41, no. 5, pp. 761–768, 2020.
  • Y. Takida, S. Koyama, N. Ueno, and H. Saruwatari, “Reciprocity gap functional in spherical harmonic domain for gridless sound field decomposition,” Elsevier Signal Processing, vol. 169, 2020.

2019

  • D. Sekizawa, S. Takamichi, and H. Saruwatari, “Prosody correction preserving speaker individuality for Chinese-accented Japanese HMM-based text-to-speech synthesis,” IEICE Transactions on Information and Systems, vol. E102.D, no. 6, pp. 1218–1221, Jun. 2019.
  • S. Takamichi and D. Morikawa, “Perceived azimuth-based creditability and self-reported confidence for sound localization experiments using crowdsourcing,” Acoustical Science and Technology, vol. 40, no. 2, pp. 142–143, Mar. 2019.
  • H. Nakajima, D. Kitamura, N. Takamune, H. Saruwatari, and N. Ono, “Bilevel optimization using stationary point of lower-level objective function for discriminative basis learning in nonnegative matrix factorization,” IEEE Signal Processing Letters, vol. 26, no. 6, pp. 818–822, 2019.
  • S. Mogami, N. Takamune, D. Kitamura, H. Saruwatari, Y. Takahashi, K. Kondo, and N. Ono, “Independent low-rank matrix analysis based on time-variant sub-Gaussian source model for determined blind source separation,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 28, pp. 503–518, 2019.
  • N. Makishima, S. Mogami, N. Takamune, D. Kitamura, H. Sumino, S. Takamichi, H. Saruwatari, and N. Ono, “Independent Deeply Learned Matrix Analysis for Determined Audio Source Separation,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 27, no. 10, pp. 1601–1615, 2019.
  • Y. Saito, S. Takamichi, and H. Saruwatari, “Vocoder-free text-to-speech synthesis incorporating generative adversarial networks using low-/multi-frequency STFT amplitude spectra,” Computer Speech & Language, vol. 58, pp. 347–363, 2019.
  • Y. Mitsufuji, S. Uhlich, N. Takamune, D. Kitamura, S. Koyama, and H. Saruwatari, “Multichannel non-negative matrix factorization using banded spatial covariance matrices in wavenumber domain,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 28, pp. 49–60, 2019.
  • H. Sawada, N. Ono, H. Kameoka, D. Kitamura, and H. Saruwatari, “A review of blind source separation methods: two converging routes to ILRMA originating from ICA and NMF,” APSIPA Transactions on Signal and Information Processing, vol. 8, no. E12, 2019.
  • S. Koyama and L. Daudet, “Sparse Representation of a Spatial Sound Field in a Reverberant Environment,” IEEE Journal of Selected Topics in Signal Processing, vol. 13, no. 1, pp. 172–184, 2019.
  • T. Koriyama and T. Kobayashi, “Statistical parametric speech synthesis using deep Gaussian processes,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 27, no. 5, pp. 948–959, 2019.
  • N. Maikusa, R. Sonobe, S. Kinoshita, N. Kawada, S. Yagishi, T. Masuoka, T. Kinoshita, S. Takamichi, and A. Homma, “Automatic detection of Alzheimer’s dementia using speech features of the revised Hasegawa’s Dementia Scale,” Geriatric Medicine, vol. 57, no. 2, pp. 1117–1125, 2019.
  • N. Ueno, S. Koyama, and H. Saruwatari, “Three-Dimensional Sound Field Reproduction Based on Weighted Mode-Matching Method,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 27, no. 12, pp. 1852–1867, 2019.

2018

  • T. Kano, S. Takamichi, S. Sakti, G. Neubig, T. Toda, and S. Nakamura, “An End-to-end Model for Cross-Lingual Transformation of Paralinguistic Information,” Machine Translation, pp. 1–16, Apr. 2018.
  • Y. Saito, S. Takamichi, and H. Saruwatari, “Statistical Parametric Speech Synthesis Incorporating Generative Adversarial Networks,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 26, no. 1, pp. 84–96, Jan. 2018. [第34回 電気通信普及財団 テレコムシステム技術学生賞]
  • D. Kitamura, S. Mogami, Y. Mitsui, N. Takamune, H. Saruwatari, N. Ono, and Y. Takahashi, “Generalized independent low-rank matrix analysis using heavy-tailed distributions for blind source separation,” EURASIP Journal on Advances in Signal Processing, 2018. (accepted)
  • N. Murata, S. Koyama, N. Takamune, and H. Saruwatari, “Sparse Representation Using Multidimensional Mixed-Norm Penalty With Application to Sound Field Decomposition,” IEEE Transactions on Signal Processing, vol. 66, no. 12, pp. 3327–3338, 2018.
  • S. Koyama, N. Murata, and H. Saruwatari, “Sparse Sound Field Decomposition for Super-resolution in Recording and Reproduction,” Journal of the Acoustical Society of America, vol. 143, no. 6, pp. 3780–3895, 2018.
  • N. Ueno, S. Koyama, and H. Saruwatari, “Sound Field Recording Using Distributed Microphones Based on Harmonic Analysis of Infinite Order,” IEEE Signal Processing Letters, vol. 25, no. 1, pp. 135–139, 2018.

2017

  • Y. Saito, S. Takamichi, and H. Saruwatari, “Voice Conversion Using Input-to-Output Highway Networks,” IEICE Transactions on Information and Systems, 2017.
  • Y. Bando, H. Saruwatari, N. Ono, S. Makino, K. Itoyama, D. Kitamura, M. Ishimura, M. Takakusaki, N. Mae, K. Yamaoka, Y. Matsui, Y. Ambe, M. Konyo, S. Tadokoro, K. Yoshii, and H. G. Okuno, “Low-latency and high-quality two-stage human-voice-enhancement system for a hose-shaped rescue robot,” Journal of Robotics and Mechatronics, vol. 29, no. 1, 2017.

2016

  • S. Takamichi, T. Toda, G. Neubig, S. Sakti, and S. Nakamura, “A statistical sample-based approach to GMM-based voice conversion using tied-covariance acoustic models,” IEICE Transactions on Information and Systems, vol. E99-D, no. 10, pp. 2490–2498, Oct. 2016.
  • D. Kitamura, N. Ono, H. Sawada, H. Kameoka, and H. Saruwatari, “Determined blind source separation unifying independent vector analysis and nonnegative matrix factorization,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 24, no. 9, pp. 1626–1641, Sep. 2016.
  • S. Takamichi, T. Toda, A. W. Black, G. Neubig, S. Sakti, and S. Nakamura, “Post-filters to Modify the Modulation Spectrum for Statistical Parametric Speech Synthesis,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 24, no. 4, pp. 755–767, Apr. 2016. [日本音響学会 独創研究奨励賞 板倉記念対象論文]
  • S. Koyama, K. Furuya, K. Wakayama, S. Shimauchi, and H. Saruwatari, “Analytical approach to transforming filter design for sound field recording and reproduction using circular arrays with a spherical baffle,” Journal of the Acoustical Society of America, vol. 139, no. 3, pp. 1024–1036, Mar. 2016.
  • Y. Oshima, S. Takamichi, T. Toda, G. Neubig, S. Sakti, and S. Nakamura, “Non-Native Text-To-Speech Preserving Speaker Individuality Based on Partial Correction of Prosodic and Phonetic Characteristics,” IEICE Transactions on Information and Systems, vol. E99-D, no. 12, 2016.

2015

  • S. Koyama, K. Furuya, Y. Haneda, and H. Saruwatari, “Source-location-informed sound field recording and reproduction,” IEEE Journal of Selected Topics in Signal Processing, vol. 9, no. 5, pp. 881–894, Aug. 2015.
  • D. Kitamura, H. Saruwatari, H. Kameoka, Y. Takahashi, K. Kondo, and S. Nakamura, “Multichannel signal separation combining directional clustering and nonnegative matrix factorization with spectrogram restoration,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 23, no. 4, pp. 654–669, Apr. 2015.
  • F. D. Aprilyanti, J. Even, H. Saruwatari, K. Shikano, S. Nakamura, and T. Takatani, “Suppression of noise and late reverberation based on blind signal extraction and Wiener filtering,” Acoustical Science and Technology, vol. 36, no. 6, pp. 302–313, Jan. 2015.

2014

  • S. Koyama, K. Furuya, Y. Hiwasaki, Y. Haneda, and Y. Suzuki, “Wave Field Reconstruction Filtering in Cylindrical Harmonic Domain for With-Height Recording and Reproduction,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 22, no. 10, pp. 1546–1557, Oct. 2014.
  • R. Miyazaki, H. Saruwatari, S. Nakamura, K. Shikano, K. Kondo, J. Blanchette, and M. Bouchard, “Musical-noise-free blind speech extraction integrating microphone array and iterative spectral subtraction,” Elsevier Signal Processing, vol. 102, pp. 226–239, Sep. 2014.
  • S. Koyama, K. Furuya, H. Uematsu, Y. Hiwasaki, and Y. Haneda, “Real-time Sound Field Transmission System by Using Wave Field Reconstruction Filter and Its Evaluation,” IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, vol. E97-A, no. 9, pp. 1840–1848, Sep. 2014.
  • T. Aketo, H. Saruwatari, and S. Nakamura, “Robust sound field reproduction against listener’s movement utilizing image sensor,” Journal of Signal Processing, vol. 18, no. 4, pp. 213–216, Jul. 2014.
  • T. Miyauchi, D. Kitamura, H. Saruwatari, and S. Nakamura, “Depth estimation of sound images using directional clustering and activation-shared nonnegative matrix factorization,” Journal of Signal Processing, vol. 18, no. 4, pp. 217–220, Jul. 2014.
  • D. Kitamura, H. Saruwatari, K. Yagi, K. Shikano, Y. Takahashi, and K. Kondo, “Music signal separation based on supervised nonnegative matrix factorization with orthogonality and maximum-divergence penalties,” IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, vol. E97-A, no. 5, pp. 1113–1118, May 2014.


Books

2021

  • Y. Ishikawa, S. Takamichi, T. Umemoto, Y. Tsubota, M. Aikawa, K. Sakamoto, K. Yui, S. Fujiwara, A. Suto, and K. Nishiyama, “Team-based flipped learning framework: Achieving high student engagement in learning,” in Blended Language Learning: Evidence-based Trends and Applications (book chapter), Aug. 2021. (to appear)
  • 山本 龍一 and 高道 慎之介, “Pythonで学ぶ音声合成,” インプレス社, Aug. 2021.

2018

  • H. Saruwatari and R. Miyazaki, “Musical-noise-free blind speech extraction based on higher-order statistics analysis,” in Audio Source Separation, S. Makino, Ed. Springer, 2018, pp. 333–364.
  • D. Kitamura, N. Ono, H. Sawada, H. Kameoka, and H. Saruwatari, “Determined blind source separation with independent low-rank matrix analysis,” in Audio Source Separation, S. Makino, Ed. Springer, 2018, pp. 125–155.

2017

  • 小山 翔一, “Q01 サンプリング定理をやさしく教えてください,” in 音響学入門ペディア, 羽田 陽一, 大川 茂樹, and 木谷 俊介, Eds. コロナ社, 2017.
  • 高道 慎之介, “Q04 z変換をやさしく教えてください,” in 音響学入門ペディア, 羽田 陽一, 大川 茂樹, and 木谷 俊介, Eds. コロナ社, 2017.
  • 北村 大地, “Q11 ビームフォーミングって何ですか?,” in 音響学入門ペディア, 羽田 陽一, 大川 茂樹, and 木谷 俊介, Eds. コロナ社, 2017.
  • 高道 慎之介, “音声合成,” in 音声言語の自動翻訳 -コンピュータによる自動翻訳を目指して-, 日本音響学会, Ed. コロナ社, 2017.

2016

  • 猿渡洋, “ブラインド音源分離,” in 音響キーワードブック, 日本音響学会, Ed. コロナ社, 2016, pp. 386–387.

2015

  • 猿渡洋, “音響の波動論とデジタルモデル基礎,” in 機械力学ハンドブック ―動力学・振動・制御・解析―, 金子成彦 and 大熊政明, Eds. 朝倉書店, 2015, pp. 523–527.
  • 猿渡洋, “音源分離,” in 機械力学ハンドブック ―動力学・振動・制御・解析―, 金子成彦 and 大熊政明, Eds. 朝倉書店, 2015, pp. 533–535.

2014

  • H. Saruwatari and R. Miyazaki, “Statistical analysis and evaluation of blind speech extraction algorithms,” in Advances in Modern Blind Source Separation Techniques: Theory and Applications, G. Naik and W. Wang, Eds. Springer, May 2014, pp. 291–322.
  • 小山翔一, “波面再構成フィルタに基づくリアルタイム音場伝送,” in 感覚デバイス開発 〜機器が担うヒト感覚の生成・拡張・代替技術〜, エヌ・ティー・エス, 2014.


Review Articles, etc.

2021

  • 高道慎之介, “2021年総合大会報告,” 電子情報通信学会 情報・システムソサイエティ誌, May 2021. (to appear)
  • 高道慎之介, “音声アバターを選ぶ時代~ボイスチェンジャー技術の動向~,” 電気学会誌, vol. 141, no. 2, pp. 93–96, Feb. 2021.

2019

  • 高道慎之介, “位相のための確率分布と深層学習について教えて下さい,” 日本音響学会誌, vol. 75, no. 3, Mar. 2019.

2018

  • 高道慎之介, “研究と実応用の話,” 日本音響学会誌, vol. 74, no. 1, Jan. 2018.
  • 高道慎之介 and 戸田智基, “音声翻訳システムにおける音声変換の利用,” 日本音響学会誌, vol. 74, no. 9, 2018.

2017

  • 小山翔一, “未来の音の収録・再生・編集技術の実現に向けて(創立100周年記念特集 「基礎・境界」が支えた100年,これからの100年),” 電子情報通信学会誌, vol. 100, no. 6, pp. 474–478, 2017.

2015

  • 小山翔一, “スパース音場表現に基づく超解像型収音・再現 (〈小特集〉スパース表現に基づく音響信号処理),” 日本音響学会誌, vol. 71, no. 11, pp. 632–638, 2015.

2014

  • 猿渡洋, “小特集「マイクロホンアレーの新しい技術展開」にあたって,” 日本音響学会誌, vol. 70, no. 7, pp. 371–372, Jul. 2014.
  • 古家賢一 and 小山翔一, “波面合成技術の原理と応用 (特集: 立体音響技術),” 映像情報メディア学会誌, vol. 68, no. 8, pp. 621–624, 2014.
  • 小山翔一, “サイバースペースにおける空間 (ちょっとしたエッセイ, コーヒーブレーク),” 日本音響学会誌, vol. 70, no. 4, p. 231, 2014.
  • 猿渡洋, “逆フィルタによる音場再現技術 (特集: 立体音響技術),” 映像情報メディア学会誌, vol. 68, no. 8, pp. 612–615, 2014.


Invited Talks

2021

  • 高道 慎之介, “音声合成のコーパスをつくろう,” in Tokyo BISH Bash #5, Jun. 2021.
  • 高道慎之介, “音声認識の基礎と音声合成,” in 日本ロボット学会第132回セミナー, Feb. 2021.
  • 高道慎之介, “ここまで来た&これから来る音声合成,” in 明治大学 先端メディアコロキウム, Jan. 2021.
  • H. Saruwatari, “Multichannel audio source separation based on unsupervised and semi-supervised learning,” in Proceedings of Chinese Computer Federation, Jan. 2021.

2020

  • H. Saruwatari, “Multichannel audio source separation based on unsupervised and semi-supervised learning,” in Proceedings of Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), Dec. 2020.

2019

  • 高道慎之介, “人間を利用する音声言語処理の試み,” in 第242回自然言語処理研究会, Oct. 2019.
  • 高道慎之介, “音声合成・変換の国際コンペティションへの参加を振り返って,” in FIT2019 企画セッション「コンペの覇者」, Sep. 2019.
  • 猿渡洋, “教師無し学習に基づく自律的な音メディア信号処理とその応用,” in Software Defined Media シンポジウム2019, Jul. 2019.
  • 高道慎之介, “統計的ボイチェン研究事情,” in GREE #VRSionUp!6, Jul. 2019.
  • Y. Takida, S. Koyama, N. Ueno, and H. Saruwatari, “Comparison of Interpolation Methods for Gridless Sound Field Decomposition Based on Reciprocity Gap Functional,” in Proceedings of International Congress on Sound and Vibration (ICSV), Montreal, Jul. 2019. (to appear)
  • 高道慎之介, “音声コーパス設計と次世代の音声研究に向けた提言,” in 2019年6月音声研究会, Jun. 2019.
  • S. Takamichi, “Group-delay modelling based on deep neural network with sine-skewed generalized cardioid distribution,” in Proceedings of International Conference on Soft Computing & Machine Learning (SCML), Wuhan, China, Apr. 2019. (invited)

2018

  • 高道慎之介 and 亀岡弘和, “音声分野における敵対的学習の可能性と展望,” in IBIS2018 企画セッション, Nov. 2018.
  • M. Une, Y. Saito, S. Takamichi, D. Kitamura, R. Miyazaki, and H. Saruwatari, “Generative approach using the noise generation models for DNN-based speech synthesis trained from noisy speech,” in Proceedings of Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), Hawaii, Nov. 2018.
  • S. Koyama, “Sparsity-based sound field reconstruction,” in Tohoku Universal Acoustical Communication Month, Seminar on the spatial aspects of hearing and their applications, keynote lecture, Sendai, Oct. 2018.
  • 高道慎之介, “分布あるいはモーメント間距離最小化に基づく統計的音声合成,” in 第18回ステアラボ人工知能セミナー招待講演, Oct. 2018.
  • S. Takamichi, “What can GAN and GMMN do for augmented speech communication?,” in GMI workshop, Hiroshima, Aug. 2018.
  • 猿渡洋, “教師無し最適化に基づくブラインド音源分離とその応用,” in 東京大学先端研・富士電機第二回産学連携技術交流会, Feb. 2018.
  • 高道慎之介, “騙し騙され音声合成,” in IPSJ-ONE, 情報処理学会 第80回全国大会, 2018.

2017

  • S. Takamichi, “Modulation spectrum-based speech parameter trajectory smoothing for DNN-based speech Synthesis using FFT spectra,” in Proceedings of Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), Kuala Lumpur, Dec. 2017.
  • D. Kitamura, N. Ono, and H. Saruwatari, “Experimental analysis of optimal window length for independent low-rank matrix analysis,” in Proceedings of the 25th European Signal Processing Conference (EUSIPCO), Kos, Greece, Aug. 2017.
  • S. Koyama, N. Murata, and H. Saruwatari, “Effect of multipole dictionary in sparse sound field decomposition for super-resolution in recording and reproduction,” in Proceedings of International Congress on Sound and Vibration (ICSV), London, Jul. 2017.
  • 高道慎之介, “最先端の統計的音声処理,” in 徳山高専テクノ・アカデミア 技術セミナー, 2017.
  • 高道慎之介, “深層学習を深く学習するための基礎,” in 日本音響学会2017年秋季研究発表会ビギナーズセミナー, 2017.
  • 高道慎之介, “音声なりすまし検出技術とそれを騙す音声合成技術,” in セコムIS研究所トーク, 2017.
  • 高道慎之介, “音声合成・変換あれこれ 〜声のゆらぎの再現と音声翻訳応用〜,” in 第1回若手異分野交流研究会, 2017.
  • 猿渡洋, “ブラインド音源分離 ~時空間スモールデータの非ガウス・低ランクモデリングとその最適化の数理~,” in 人工知能学会合同研究会, 2017.
  • 猿渡洋, “ブラインド音源分離再考 -時空間の非ガウス・スパース・低ランクモデリング-,” in 日本音響学会2017年春季研究発表会講演論文集, 2017, pp. 1–8-12.
  • 亀岡弘和, 小野順貴, and 猿渡洋, “音響分野におけるブラインド適応信号処理の展開,” in 電子情報通信学会総合大会, 2017, no. AI-2-2.

2016

  • H. Nakajima, D. Kitamura, N. Takamune, S. Koyama, H. Saruwatari, Y. Takahashi, and K. Kondo, “Audio signal separation using supervised NMF with time-variant all-pole-model-based basis deformation,” in Proceedings of Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), Jeju, Dec. 2016.
  • S. Takamichi, “Speech synthesis that deceives anti-spoofing verification,” in NII Talk, Dec. 2016.
  • 高道慎之介, “なりすましセキュリティを騙す高品質音声合成・変換,” in 4th Microsoft Research Intern Alumni Networking, Dec. 2016.
  • 高道慎之介, “統計的音声合成・変換の高品質化と言語教育応用,” in 京都大学 学術情報メディアセンター セミナー, Nov. 2016.
  • 高道慎之介, “統計的音声合成・変換の高品質化とその応用,” in 山梨大学 テニュアトラック普及・定着事業 第56回サイエンスカフェ講演会, Nov. 2016.
  • S. Koyama, N. Murata, and H. Saruwatari, “Super-resolution in sound field recording and reproduction based on sparse representation,” in Proceedings of 5th Joint Meeting of the Acoustical Society of America and Acoustical Society of Japan, Honolulu, Nov. 2016.
  • H. Saruwatari, K. Takata, N. Ono, and S. Makino, “Flexible microphone array based on multichannel nonnegative matrix factorization and statistical signal estimation,” in The 22nd International Congress on Acoustics (ICA2016), Sep. 2016, no. ICA2016-312.
  • S. Koyama, “Source-Location-Informed Sound Field Recording and Reproduction: A Generalization to Arrays of Arbitrary Geometry,” in Proceedings of 2016 AES International Conference on Sound Field Control, Guildford, Jul. 2016 [Online]. Available at: http://www.aes.org/e-lib/browse.cfm?elib=18303

2015

  • S. Koyama, A. Matsubayashi, N. Murata, and H. Saruwatari, “Sparse Sound Field Decomposition Using Group Sparse Bayesian Learning,” in Proceedings of Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), Dec. 2015, pp. 850–855.
  • 小山翔一, “スパース信号表現って何?,” in 日本音響学会秋季研究発表会, ビギナーズセミナー:今話題のあの技術ってどうなってるの?, Sep. 2015.
  • D. Kitamura, N. Ono, H. Sawada, H. Kameoka, and H. Saruwatari, “Relaxation of rank-1 spatial constraint in overdetermined blind source separation,” in Proceedings of The 2015 European Signal Processing Conference (EUSIPCO2015), Nice, Sep. 2015, pp. 1271–1275.
  • H. Saruwatari, “Statistical-model-based speech enhancement with musical-noise-free properties,” in Proceedings of 2015 IEEE International Conference on Digital Signal Processing (DSP2015), Singapore, 2015.
  • 猿渡洋, “統計的バイノーラル信号表現とその音源分離への応用,” in 電子情報通信学会技術研究報告(電気/応用音響), 2015.

2014

  • D. Kitamura, H. Saruwatari, S. Nakamura, Y. Takahashi, K. Kondo, and H. Kameoka, “Hybrid multichannel signal separation using supervised nonnegative matrix factorization with spectrogram restoration,” in Proceedings of Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), Siem Reap, Dec. 2014.
  • 猿渡洋, “高次統計量は何を語る? ~ 教師無し学習に基づく自律的な音メディア信号処理 ~,” in 音学シンポジウム2014, May 2014.
  • 小山翔一, “音場再現技術の基本原理と展開,” in 音学シンポジウム2014, May 2014.


International Conferences

2021

  • S. Koyama, T. Nishida, K. Kimura, T. Abe, N. Ueno, and J. Brunnström, “MeshRIR: A Dataset of Room Impulse Responses on Meshed Grid Points for Evaluating Sound Field Analysis and Synthesis Methods,” in Proceedings of IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA), Oct. 2021. (accepted)
  • R. Horiuchi, S. Koyama, J. G. C. Ribeiro, N. Ueno, and H. Saruwatari, “Kernel learning for sound field estimation with L1 and L2 regularizations,” in Proceedings of IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA), Oct. 2021. (accepted)
  • R. Ominato, N. Wakui, S. Takamichi, and S. Yano, “Discriminating between left and right ears using linear and nonlinear dimensionality reduction,” in SmaSys2021, Oct. 2021. (accepted)
  • K. Kimura, S. Koyama, N. Ueno, and H. Saruwatari, “Mean-Square-Error-Based Secondary Source Placement in Sound Field Synthesis With Prior Information on Desired Field,” in Proceedings of IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA), Oct. 2021. (accepted)
  • R. Arakawa, Z. Kashino, S. Takamichi, A. A. Verhulst, and M. Inami, “Digital Speech Makeup: Voice Conversion Based Altered Auditory Feedback for Transforming Self-Representation,” in ACM ICMI, Oct. 2021. (accepted)
  • T. Hasumi, T. Nakamura, N. Takamune, H. Saruwatari, D. Kitamura, Y. Takahashi, and K. Kondo, “Empirical Bayesian Independent Deeply Learned Matrix Analysis For Multichannel Audio Source Separation,” in Proceedings of European Signal Processing Conference (EUSIPCO), Aug. 2021. (accepted)
  • K. Saito, T. Nakamura, K. Yatabe, Y. Koizumi, and H. Saruwatari, “Sampling-Frequency-Independent Audio Source Separation Using Convolution Layer Based on Impulse Invariant Method,” in Proceedings of European Signal Processing Conference (EUSIPCO), Aug. 2021. (accepted)
  • N. Narisawa, R. Ikeshita, N. Takamune, D. Kitamura, T. Nakamura, H. Saruwatari, and T. Nakatani, “Independent Deeply Learned Tensor Analysis for Determined Audio Source Separation,” in Proceedings of European Signal Processing Conference (EUSIPCO), Aug. 2021. (accepted)
  • T. Nakamura, T. Koriyama, and H. Saruwatari, “Sequence-to-Sequence Learning for Deep Gaussian Process Based Speech Synthesis Using Self-Attention GP Layer,” in Proceedings of Interspeech, Aug. 2021. (accepted)
  • D. Xin, Y. Saito, S. Takamichi, T. Koriyama, and H. Saruwatari, “Cross-lingual Speaker Adaptation using Domain Adaptation and Speaker Consistency Loss for Text-To-Speech Synthesis,” in Proceedings of Interspeech, Aug. 2021. (accepted)
  • K. Mizuta, T. Koriyama, and H. Saruwatari, “Harmonic WaveGAN: GAN-Based Speech Waveform Generation Model with Harmonic Structure Discriminator,” in Proceedings of Interspeech, Aug. 2021. (accepted)
  • K. Yufune, T. Koriyama, S. Takamichi, and H. Saruwatari, “Accent Modeling of Low-Resourced Dialect in Pitch Accent Language Using Variational Autoencoder,” in Proceedings of The 11th ISCA SSW, Aug. 2021. (accepted)
  • W. Nakata, T. Koriyama, S. Takamichi, N. Tanji, Y. Ijima, R. Masumura, and H. Saruwatari, “Audiobook Speech Synthesis Conditioned by Cross-Sentence Context-Aware Word Embeddings,” in Proceedings of The 11th ISCA SSW, Aug. 2021. (accepted)
  • Y. Ueda, K. Fujii, Y. Saito, S. Takamichi, Y. Baba, and H. Saruwatari, “HumanACGAN: conditional generative adversarial network with human-based auxiliary classifier and its evaluation in phoneme perception,” in Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Jun. 2021. (accepted)
  • D. Xin, T. Komatsu, S. Takamichi, and H. Saruwatari, “Disentangled Speaker and Language Representations using Mutual Information Minimization and Domain Adaptation for Cross-Lingual TTS,” in Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Jun. 2021. (accepted)
  • Y. Ishikawa, S. Takamichi, T. Umemoto, M. Aikawa, K. Sakamoto, K. Yui, S. Fujiwara, A. Suto, and K. Nishiyama, “Japanese EFL learners’ speaking practice utilizing text-to-speech technology within a team-based flipped learning framework,” in Proceedings of International Conference on Human-Computer Interaction (HCII), Jun. 2021. (accepted)
  • Y. Kondo, Y. Kubo, N. Takamune, D. Kitamura, and H. Saruwatari, “Deficient basis estimation of noise spatial covariance matrix for rank-constrained spatial covariance matrix estimation method in blind speech extraction,” in Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Jun. 2021. (accepted)
  • H. Kai, S. Takamichi, S. Shiota, and H. Kiya, “Lightweight voice anonymization based on data-driven optimization of cascaded voice modification modules,” in Proceedings of IEEE Spoken Language Technology Workshop (SLT), Jan. 2021.
  • T. Nishida, N. Ueno, S. Koyama, and H. Saruwatari, “Sensor Placement in Arbitrarily Restricted Region for Field Estimation Based on Gaussian Process,” in Proceedings of European Signal Processing Conference (EUSIPCO), Jan. 2021, pp. 2289–2293.
  • J. Brunnström and S. Koyama, “Kernel-Interpolation-Based Filtered-X Least Mean Square for Spatial Active Noise Control in Time Domain,” in Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2021.
  • N. Ueno, S. Koyama, and H. Saruwatari, “Convex and Differentiable Formulation for Inverse Problems in Hilbert Spaces with Nonlinear Clipping Effects,” in IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, 2021. (accepted)
  • S. Koyama, K. Kimura, and N. Ueno, “Sound Field Reproduction With Weighted Mode Matching and Infinite-Dimensional Harmonic Analysis: An Experimental Evaluation,” in International Conference on Immersive and 3D Audio (I3DA), 2021. (invited)
  • S. Koyama, T. Amakasu, N. Ueno, and H. Saruwatari, “Amplitude Matching: Majorization-Minimization Algorithm for Sound Field Control Only With Amplitude Constraint,” in Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2021.
  • S. Koyama, J. Brunnström, H. Ito, N. Ueno, and H. Saruwatari, “Spatial Active Noise Control Based on Kernel Interpolation of Sound Field,” in IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2021. (accepted)
  • T. Hasumi, T. Nakamura, N. Takamune, H. Saruwatari, D. Kitamura, Y. Takahashi, and K. Kondo, “Multichannel Audio Source Separation with Independent Deeply Learned Matrix Analysis Using Product of Source Models,” in Proceedings of Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), 2021. (accepted)
  • X. Luo, S. Takamichi, T. Koriyama, Y. Saito, and H. Saruwatari, “Emotion-Controllable Speech Synthesis Using Emotion Soft Labels and Fine-Grained Prosody Factors,” in Proceedings of Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), 2021. (accepted)
  • S. Misawa, N. Takamune, T. Nakamura, D. Kitamura, H. Saruwatari, M. Une, and S. Makino, “Speech enhancement by noise self-supervised rank-constrained spatial covariance matrix estimation via independent deeply learned matrix analysis,” in Proceedings of Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), 2021. (accepted)
  • T. Saeki, S. Takamichi, and H. Saruwatari, “Low-Latency Incremental Text-to-Speech Synthesis with Distilled Context Prediction Network,” in Proceedings of IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), 2021. (accepted)

2020

  • K. Kamo, Y. Kubo, N. Takamune, D. Kitamura, H. Saruwatari, Y. Takahashi, and K. Kondo, “Joint-Diagonalizability-Constrained Multichannel Nonnegative Matrix Factorization Based on Multivariate Complex Student’s t-distribution,” in Proceedings of Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), Dec. 2020, pp. 869–874.
  • J. Koguchi, S. Takamichi, and M. Morise, “PJS: phoneme-balanced Japanese singing-voice corpus,” in Proceedings of Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), Dec. 2020, pp. 487–491.
  • T. Saeki, Y. Saito, S. Takamichi, and H. Saruwatari, “Real-time, full-band, online DNN-based voice conversion system using a single CPU,” in Proceedings of Interspeech, Oct. 2020, pp. 1021–1022.
  • Y. Okamoto, K. Imoto, S. Takamichi, R. Yamanishi, T. Fukumori, and Y. Yamashita, “RWCP-SSD-Onomatopoeia: Onomatopoeic Word Dataset for Environmental Sound Synthesis,” in Proceedings of Detection and Classification of Acoustic Scenes and Events (DCASE), Oct. 2020, pp. 125–129.
  • M. Aso, S. Takamichi, and H. Saruwatari, “End-to-end text-to-speech synthesis with unaligned multiple language units based on attention,” in Proceedings of Interspeech, Oct. 2020, pp. 4009–4013.
  • D. Xin, Y. Saito, S. Takamichi, T. Koriyama, and H. Saruwatari, “Cross-lingual Text-To-Speech Synthesis via Domain Adaptation and Perceptual Similarity Regression in Speaker Space,” in Proceedings of Interspeech, Oct. 2020, pp. 2947–2951.
  • N. Kimura, Z. Su, and T. Saeki, “End-to-End Deep Learning Speech Recognition Model for Silent Speech Challenge,” in Proceedings of Interspeech, Oct. 2020, pp. 1025–1026.
  • Y. Yamashita, T. Koriyama, Y. Saito, S. Takamichi, Y. Ijima, R. Masumura, and H. Saruwatari, “Investigating Effective Additional Contextual Factors in DNN-based Spontaneous Speech Synthesis,” in Proceedings of Interspeech, Oct. 2020, pp. 3201–3205.
  • S. Goto, K. Ohnishi, Y. Saito, K. Tachibana, and K. Mori, “Face2Speech: towards multi-speaker text-to-speech synthesis using an embedding vector predicted from a face image,” in Proceedings of Interspeech, Oct. 2020, pp. 1321–1325.
  • K. Mitsui, T. Koriyama, and H. Saruwatari, “Multi-speaker Text-to-speech Synthesis Using Deep Gaussian Processes,” in Proceedings of Interspeech, Oct. 2020, pp. 2032–2036.
  • H. Takeuchi, K. Kashino, Y. Ohishi, and H. Saruwatari, “Harmonic Lowering for Accelerating Harmonic Convolution for Audio Signals,” in Proc. Interspeech, Sep. 2020, pp. 185–189.
  • N. Iijima, S. Koyama, and H. Saruwatari, “Binaural Rendering From Distributed Microphone Signals Considering Loudspeaker Distance in Measurements,” in Proceedings of IEEE International Workshop on Multimedia Signal Processing (MMSP), Sep. 2020, pp. 1–6.
  • S. Kozuka, T. Nakamura, and H. Saruwatari, “Investigation on Wavelet Basis Function of DNN-based Time Domain Audio Source Separation Inspired by Multiresolution Analysis,” in Proceedings of Internoise, Aug. 2020.
  • R. Okamoto, S. Yano, N. Wakui, and S. Takamichi, “Visualization of differences in ear acoustic characteristics using t-SNE,” in Proceedings of AES convention, May 2020.
  • T. Koriyama and H. Saruwatari, “Utterance-level Sequential Modeling For Deep Gaussian Process Based Speech Synthesis Using Simple Recurrent Unit,” in Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), May 2020, pp. 7249–7253.
  • T. Saeki, Y. Saito, S. Takamichi, and H. Saruwatari, “Lifter training and sub-band modeling for computationally efficient and high-quality voice conversion using spectral differentials,” in Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), May 2020, pp. 7784–7788.
  • T. Nakamura and H. Saruwatari, “Time-domain Audio Source Separation based on Wave-U-Net Combined with Discrete Wavelet Transform,” in Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), May 2020, pp. 386–390.
  • K. Kamo, Y. Kubo, N. Takamune, D. Kitamura, H. Saruwatari, Y. Takahashi, and K. Kondo, “Regularized Fast Multichannel Nonnegative Matrix Factorization with ILRMA-based Prior Distribution of Joint-Diagonalization Process,” in Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), May 2020, pp. 606–610.
  • T. Kondo, K. Fukushige, N. Takamune, D. Kitamura, H. Saruwatari, R. Ikeshita, and T. Nakatani, “Convergence-Guaranteed Independent Positive Semidefinite Tensor Analysis Based on Student’s T Distribution,” in Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), May 2020, pp. 681–685.
  • Y. Saito, S. Takamichi, and H. Saruwatari, “SMASH corpus: a spontaneous speech corpus recording third-person audio commentaries on gameplay,” in Proceedings of Language Resources and Evaluation Conference (LREC), May 2020, pp. 6571–6577.
  • K. Ariga, T. Nishida, S. Koyama, N. Ueno, and H. Saruwatari, “Mutual-Information-Based Sensor Placement for Spatial Sound Field Recording,” in Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), May 2020, pp. 166–170.
  • Y. Yamashita, T. Koriyama, Y. Saito, S. Takamichi, Y. Ijima, R. Masumura, and H. Saruwatari, “DNN-based Speech Synthesis Using Abundant Tags of Spontaneous Speech Corpus,” in Proceedings of Language Resources and Evaluation Conference (LREC), May 2020, pp. 6438–6443.
  • H. Ito, S. Koyama, N. Ueno, and H. Saruwatari, “Spatial Active Noise Control Based on Kernel Interpolation with Directional Weighting,” in Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), May 2020, pp. 8404–8408. (invited)
  • S. Koyama, G. Chardon, and L. Daudet, “Optimizing Source and Sensor Placement for Sound Field Control: An Overview,” in IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2020. (overview)
  • G. Chardon, S. Koyama, and L. Daudet, “Numerical Evaluation of Source and Sensor Placement Methods For Sound Field Control,” in Forum Acusticum, 2020.

2019

  • N. Makishima, N. Takamune, D. Kitamura, H. Saruwatari, Y. Takahashi, and K. Kondo, “Robust demixing filter update algorithm based on microphone-wise coordinate descent for independent deeply learned matrix analysis,” in Proceedings of Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), Lanzhou, Nov. 2019, pp. 1868–1873.
  • Y. Kubo, N. Takamune, D. Kitamura, and H. Saruwatari, “Acceleration of rank-constrained spatial covariance matrix estimation for blind speech extraction,” in Proceedings of Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), Lanzhou, Nov. 2019, pp. 332–338.
  • M. Une, Y. Kubo, N. Takamune, D. Kitamura, H. Saruwatari, and S. Makino, “Evaluation of multichannel hearing aid system using rank-constrained spatial covariance matrix estimation,” in Proceedings of Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), Lanzhou, Nov. 2019, pp. 1874–1879.
  • M. Nakanishi, N. Ueno, S. Koyama, and H. Saruwatari, “Two-dimensional sound field recording with multiple circular microphone arrays considering multiple scattering,” in Proceedings of IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA), New Paltz, Oct. 2019.
  • R. Arakawa, S. Takamichi, and H. Saruwatari, “TransVoice: Real-Time Voice Conversion for Augmenting Near-Field Speech Communication,” in Proceedings of UIST, New Orleans, Oct. 2019.
  • Y. Kubo, N. Takamune, D. Kitamura, and H. Saruwatari, “Efficient full-rank spatial covariance estimation using independent low-rank matrix analysis for blind source separation,” in Proceedings of European Signal Processing Conference (EUSIPCO), A Coruña, Sep. 2019.
  • N. Makishima, N. Takamune, D. Kitamura, H. Saruwatari, Y. Takahashi, and K. Kondo, “Column-wise update algorithm for independent deeply learned matrix analysis,” in Proceedings of International Congress on Acoustics (ICA), Aachen, Sep. 2019, pp. 2805–2812. [Young Scientist Conference Attendance Grant]
  • I. H. Parmonangan, H. Tanaka, S. Sakriani, S. Takamichi, and S. Nakamura, “Speech Quality Evaluation of Synthesized Japanese Speech using EEG,” in Proceedings of Interspeech, Graz, Sep. 2019, pp. 1228–1232.
  • T. Nakamura, Y. Saito, S. Takamichi, Y. Ijima, and H. Saruwatari, “V2S attack: building DNN-based voice conversion from automatic speaker verification,” in Proceedings of The 10th ISCA SSW, Vienna, Sep. 2019.
  • H. Ito, S. Koyama, N. Ueno, and H. Saruwatari, “Three-dimensional spatial active noise control based on kernel-induced sound field interpolation,” in Proceedings of International Congress on Acoustics (ICA), Aachen, Sep. 2019.
  • M. Aso, S. Takamichi, N. Takamune, and H. Saruwatari, “Subword tokenization based on DNN-based acoustic model for end-to-end prosody generation,” in Proceedings of The 10th ISCA SSW, Vienna, Sep. 2019.
  • Y. Saito, S. Takamichi, and H. Saruwatari, “DNN-based Speaker Embedding Using Subjective Inter-speaker Similarity for Multi-speaker Modeling in Speech Synthesis,” in Proceedings of The 10th ISCA SSW, Vienna, Sep. 2019.
  • R. Arakawa, S. Takamichi, and H. Saruwatari, “Implementation of DNN-based real-time voice conversion and its improvements by audio data augmentation and mask-shaped device,” in Proceedings of The 10th ISCA SSW, Vienna, Sep. 2019.
  • T. Koriyama, S. Takamichi, and T. Kobayashi, “Sparse Approximation of Gram Matrices for GMMN-based Speech Synthesis,” in Proceedings of The 10th ISCA SSW, Vienna, Aug. 2019.
  • Y. Takida, S. Koyama, N. Ueno, and H. Saruwatari, “Comparison of Interpolation Methods for Gridless Sound Field Decomposition Based on Reciprocity Gap Functional,” in Proceedings of International Congress on Sound and Vibration (ICSV), Montreal, Jul. 2019. (to appear) [Invited]
  • I. H. Parmonangan, H. Tanaka, S. Sakriani, S. Takamichi, and S. Nakamura, “EEG Analysis towards Evaluating Synthesized Speech Quality,” in Proceedings of IEEE EMBC, Berlin, Jul. 2019.
  • N. Makishima, N. Takamune, D. Kitamura, H. Saruwatari, Y. Takahashi, K. Kondo, and H. Nakajima, “Generalized-Gaussian-distribution-based independent deeply learned matrix analysis for multichannel audio source separation,” in Proceedings of International Congress and Exhibition on Noise Control Engineering (INTERNOISE), Madrid, Jun. 2019.
  • H. Tamaru, Y. Saito, S. Takamichi, T. Koriyama, and H. Saruwatari, “Generative moment matching network-based random modulation post-filter for DNN-based singing voice synthesis and neural double-tracking,” in Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Brighton, May 2019.
  • K. Naruse, S. Yoshida, S. Takamichi, T. Narumi, T. Tanikawa, and M. Hirose, “Estimating Confidence in Voices using Crowdsourcing for Alleviating Tension with Altered Auditory Feedback,” in Proceedings of Asian CHI Symposium: Emerging HCI Research Collection in ACM Conference on Human Factors in Computing Systems (CHI), Glasgow, May 2019.
  • T. Koriyama and T. Kobayashi, “A Training Method Using DNN-guided Layerwise Pretraining for Deep Gaussian Processes,” in Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Brighton, May 2019.
  • H. Ito, S. Koyama, N. Ueno, and H. Saruwatari, “Feedforward Spatial Active Noise Control Based on Kernel Interpolation of Sound Field,” in Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Brighton, May 2019. (to appear)
  • Y. Takida, S. Koyama, N. Ueno, and H. Saruwatari, “Robust Gridless Sound Field Decomposition Based on Structured Reciprocity Gap Functional in Spherical Harmonic Domain,” in Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Brighton, May 2019. (to appear)
  • K. Yoshino, Y. Murase, N. Lubis, K. Sugiyama, H. Tanaka, S. Sakriani, S. Takamichi, and S. Nakamura, “Spoken Dialogue Robot for Watching Daily Life of Elderly People,” in Proceedings of IWSDS, Sicily, Apr. 2019.

2018

  • M. Une, Y. Saito, S. Takamichi, D. Kitamura, R. Miyazaki, and H. Saruwatari, “Generative approach using the noise generation models for DNN-based speech synthesis trained from noisy speech,” in Proceedings of Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), Hawaii, Nov. 2018, pp. 99–103.
  • T. Akiyama, S. Takamichi, and H. Saruwatari, “Prosody-aware subword embedding considering Japanese intonation systems and its application to DNN-based multi-dialect speech synthesis,” in Proceedings of Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), Hawaii, Nov. 2018.
  • S. Mogami, N. Takamune, D. Kitamura, H. Saruwatari, Y. Takahashi, K. Kondo, H. Nakajima, and N. Ono, “Independent low-rank matrix analysis based on time-variant sub-Gaussian source model,” in Proceedings of Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), Hawaii, Nov. 2018. [APSIPA ASC 2018 Best Paper Award]
  • H. Suda, G. Kotani, S. Takamichi, and D. Saito, “A revisit to feature handling for high-quality voice conversion,” in Proceedings of Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), Hawaii, Nov. 2018.
  • S. Shiota, S. Takamichi, and T. Matsui, “Data augmentation with moment-matching networks for i-vector based speaker verification,” in Proceedings of Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), Hawaii, Nov. 2018.
  • S. Koyama, “Sparsity-based sound field reconstruction,” in Tohoku Universal Acoustical Communication Month, Seminar on the spatial aspects of hearing and their applications, keynote lecture, Sendai, Oct. 2018. [Invited]
  • N. Ueno, S. Koyama, and H. Saruwatari, “Kernel Ridge Regression With Constraint of Helmholtz Equation for Sound Field Interpolation,” in Proceedings of International Workshop on Acoustic Signal Enhancement (IWAENC), Tokyo, Sep. 2018, pp. 436–440.
  • S. Takamichi, Y. Saito, N. Takamune, D. Kitamura, and H. Saruwatari, “Phase reconstruction from amplitude spectrograms based on von-Mises-distribution deep neural network,” in Proceedings of International Workshop on Acoustic Signal Enhancement (IWAENC), Tokyo, Sep. 2018.
  • Y. Takida, S. Koyama, and H. Saruwatari, “Exterior and Interior Sound Field Separation Using Convex Optimization: Comparison of Signal Models,” in Proceedings of European Signal Processing Conference (EUSIPCO), Rome, Sep. 2018, pp. 2567–2571.
  • S. Mogami, H. Sumino, D. Kitamura, N. Takamune, S. Takamichi, H. Saruwatari, and N. Ono, “Independent Deeply Learned Matrix Analysis for Multichannel Audio Source Separation,” in Proceedings of European Signal Processing Conference (EUSIPCO), Rome, Sep. 2018.
  • Y. Takida, S. Koyama, N. Ueno, and H. Saruwatari, “Gridless Sound Field Decomposition Based on Reciprocity Gap Functional in Spherical Harmonic Domain,” in Proceedings of IEEE sensor array and multichannel signal processing workshop (SAM), Sheffield, Jul. 2018, pp. 627–631. [Best Student Paper Award, ONRG sponsored student travel grants]
  • S. Takamichi and H. Saruwatari, “CPJD Corpus: Crowdsourced Parallel Speech Corpus of Japanese Dialects,” in Proceedings of Language Resources and Evaluation Conference (LREC), Miyazaki, May 2018, pp. 434–437.
  • S. Koyama, G. Chardon, and L. Daudet, “Joint Source and Sensor Placement for Sound Field Control Based on Empirical Interpolation Method,” in Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Calgary, Apr. 2018, pp. 501–505.
  • N. Ueno, S. Koyama, and H. Saruwatari, “Sound Field Reproduction with Exterior Radiation Cancellation Using Analytical Weighting of Harmonic Coefficients,” in Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Calgary, Apr. 2018, pp. 466–470. [IEEE SPS Japan Student Conference Paper Award]
  • Y. Saito, S. Takamichi, and H. Saruwatari, “Text-to-speech synthesis using STFT spectra based on low-/multi-resolution generative adversarial networks,” in Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Calgary, Apr. 2018, pp. 5299–5303.
  • Y. Saito, Y. Ijima, K. Nishida, and S. Takamichi, “Non-parallel voice conversion using variational autoencoders conditioned by phonetic posteriorgrams and d-vectors,” in Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Calgary, Apr. 2018, pp. 5274–5278.

2017

  • N. Mae, Y. Mitsui, S. Makino, D. Kitamura, N. Ono, T. Yamada, and H. Saruwatari, “Sound source localization using binaural difference for hose-shaped rescue robot,” in Proceedings of Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), Kuala Lumpur, Dec. 2017.
  • Y. Mitsui, D. Kitamura, N. Takamune, H. Saruwatari, Y. Takahashi, and K. Kondo, “Independent low-rank matrix analysis based on parametric majorization-equalization algorithm,” in Proceedings of IEEE International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP), Curaçao, Dec. 2017.
  • S. Koyama and L. Daudet, “Comparison of Reverberation Models for Sparse Sound Field Decomposition,” in Proceedings of IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA), New Paltz, Oct. 2017, pp. 214–218.
  • S. Takamichi, D. Saito, H. Saruwatari, and N. Minematsu, “The UTokyo speech synthesis system for Blizzard Challenge 2017,” in Proceedings of Blizzard Challenge Workshop, Stockholm, Aug. 2017.
  • S. Takamichi, “Modulation spectrum-based speech parameter trajectory smoothing for DNN-based speech synthesis using FFT spectra,” in Proceedings of Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), Kuala Lumpur, Aug. 2017. [Invited]
  • D. Kitamura, N. Ono, and H. Saruwatari, “Experimental analysis of optimal window length for independent low-rank matrix analysis,” in Proceedings of the 25th European Signal Processing Conference (EUSIPCO), Kos, Aug. 2017. [Invited]
  • S. Takamichi, T. Koriyama, and H. Saruwatari, “Sampling-based speech parameter generation using moment-matching network,” in Proceedings of Interspeech, Stockholm, Aug. 2017.
  • H. Miyoshi, Y. Saito, S. Takamichi, and H. Saruwatari, “Voice Conversion Using Sequence-to-Sequence Learning of Context Posterior Probabilities,” in Proceedings of Interspeech, Stockholm, Aug. 2017.
  • S. Koyama, N. Murata, and H. Saruwatari, “Effect of Multipole Dictionary in Sparse Sound Field Decomposition for Super-resolution in Recording and Reproduction,” in Proceedings of International Congress on Sound and Vibration (ICSV), London, Jul. 2017. [Invited]
  • Y. Mitsui, D. Kitamura, S. Takamichi, N. Ono, and H. Saruwatari, “Blind source separation based on independent low-rank matrix analysis with sparse regularization for time-series activity,” in Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), New Orleans, Mar. 2017, pp. 21–25. [Student Paper Contest Finalist]
  • N. Ueno, S. Koyama, and H. Saruwatari, “Listening-area-informed Sound Field Reproduction Based On Circular Harmonic Expansion,” in Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), New Orleans, Mar. 2017, pp. 111–115.
  • N. Murata, S. Koyama, N. Takamune, and H. Saruwatari, “Spatio-temporal Sparse Sound Field Decomposition Considering Acoustic Source Signal Characteristics,” in Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), New Orleans, Mar. 2017, pp. 441–445.
  • N. Ueno, S. Koyama, and H. Saruwatari, “Listening-area-informed Sound Field Reproduction With Gaussian Prior Based On Circular Harmonic Expansion,” in Proceedings of Hands-free Speech Communication and Microphone Arrays (HSCMA), San Francisco, Mar. 2017, pp. 196–200.
  • R. Sato, H. Kameoka, and K. Kashino, “Fast algorithm for statistical phrase/accent command estimation based on generative model incorporating spectral features,” in Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), New Orleans, Mar. 2017, pp. 5595–5599.
  • Y. Saito, S. Takamichi, and H. Saruwatari, “Training algorithm to deceive anti-spoofing verification for dnn-based speech synthesis,” in Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), New Orleans, Mar. 2017, pp. 4900–4904. [Spoken Language Processing Student Grant]
  • N. Mae, M. Ishimura, D. Kitamura, N. Ono, T. Yamada, S. Makino, and H. Saruwatari, “Ego noise reduction for hose-shaped rescue robot combining independent low-rank matrix analysis and multichannel noise cancellation,” in Proceedings of International Conference on Latent Variable Analysis and Signal Separation (LVA/ICA), Grenoble, Feb. 2017, pp. 141–151.

2016

  • H. Nakajima, D. Kitamura, N. Takamune, S. Koyama, H. Saruwatari, Y. Takahashi, and K. Kondo, “Audio Signal Separation using Supervised NMF with Time-variant All-Pole-Model-Based Basis Deformation,” in Proceedings of Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), Jeju, Dec. 2016. [Invited]
  • S. Koyama, N. Murata, and H. Saruwatari, “Super-resolution in sound field recording and reproduction based on sparse representation,” in Proceedings of 5th Joint Meeting of the Acoustical Society of America and Acoustical Society of Japan, Honolulu, Nov. 2016. [Invited]
  • M. Ishimura, S. Makino, T. Yamada, N. Ono, and H. Saruwatari, “Noise reduction using independent vector analysis and noise cancellation for a hose-shaped rescue robot,” in Proceedings of International Workshop on Acoustic Signal Enhancement (IWAENC), Xi'an, Sep. 2016, no. PS-III-04.
  • D. Kitamura, N. Ono, H. Saruwatari, Y. Takahashi, and K. Kondo, “Discriminative and reconstructive basis training for audio source separation with semi-supervised nonnegative matrix factorization,” in Proceedings of International Workshop on Acoustic Signal Enhancement (IWAENC), Xi'an, Sep. 2016, no. PS-III-02.
  • M. Takakusaki, D. Kitamura, N. Ono, T. Yamada, S. Makino, and H. Saruwatari, “Ego-noise reduction for a hose-shaped rescue robot using determined rank-1 multichannel nonnegative matrix factorization,” in Proceedings of International Workshop on Acoustic Signal Enhancement (IWAENC), Xi'an, Sep. 2016, no. PS-II-02.
  • K. Kobayashi, S. Takamichi, S. Nakamura, and T. Toda, “The NU-NAIST voice conversion system for the Voice Conversion Challenge 2016,” in Proceedings of Interspeech, San Francisco, Sep. 2016, pp. 1667–1671.
  • L. Li, H. Kameoka, T. Higuchi, and H. Saruwatari, “Semi-supervised joint enhancement of spectral and cepstral sequences of noisy speech,” in Proceedings of Interspeech, San Francisco, Sep. 2016, pp. 3753–3757.
  • H. Nakajima, D. Kitamura, N. Takamune, S. Koyama, H. Saruwatari, N. Ono, Y. Takahashi, and K. Kondo, “Music signal separation using supervised NMF with all-pole-model-based discriminative basis deformation,” in Proceedings of The 2016 European Signal Processing Conference (EUSIPCO), Budapest, Aug. 2016, pp. 1143–1147.
  • N. Murata, H. Kameoka, K. Kinoshita, S. Araki, T. Nakatani, S. Koyama, and H. Saruwatari, “Reverberation-robust underdetermined source separation with non-negative tensor double deconvolution,” in Proceedings of The 2016 European Signal Processing Conference (EUSIPCO), Budapest, Aug. 2016, pp. 1648–1652.
  • S. Koyama, “Source-Location-Informed Sound Field Recording and Reproduction: A Generalization to Arrays of Arbitrary Geometry,” in Proceedings of 2016 AES International Conference on Sound Field Control, Guildford, Jul. 2016 [Online]. Available at: http://www.aes.org/e-lib/browse.cfm?elib=18303 [Invited]
  • Y. Mitsufuji, S. Koyama, and H. Saruwatari, “Multichannel blind source separation based on non-negative tensor factorization in wavenumber domain,” in Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Shanghai, Mar. 2016, pp. 56–60.
  • N. Murata, S. Koyama, H. Kameoka, N. Takamune, and H. Saruwatari, “Sparse sound field decomposition with multichannel extension of complex NMF,” in Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Shanghai, Mar. 2016, pp. 345–349.
  • S. Koyama and H. Saruwatari, “Sound field decomposition in reverberant environment using sparse and low-rank signal models,” in Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Shanghai, Mar. 2016, pp. 395–399.

2015

  • S. Koyama, A. Matsubayashi, N. Murata, and H. Saruwatari, “Sparse Sound Field Decomposition Using Group Sparse Bayesian Learning,” in Proceedings of Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), Dec. 2015, pp. 850–855. [Invited]
  • N. Murata, S. Koyama, N. Takamune, and H. Saruwatari, “Sparse Sound Field Decomposition with Parametric Dictionary Learning for Super-Resolution Recording and Reproduction,” in Proceedings of IEEE International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP), Dec. 2015.
  • S. Koyama, K. Ito, and H. Saruwatari, “Source-location-informed sound field recording and reproduction with spherical arrays,” in Proceedings of IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA), New Paltz, Oct. 2015.
  • D. Kitamura, N. Ono, H. Sawada, H. Kameoka, and H. Saruwatari, “Efficient multichannel nonnegative matrix factorization exploiting rank-1 spatial model,” in Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Brisbane, Apr. 2015, pp. 276–280.
  • S. Koyama, N. Murata, and H. Saruwatari, “Structured sparse signal models and decomposition algorithm for super-resolution in sound field recording and reproduction,” in Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Brisbane, Apr. 2015, pp. 619–623.
  • Y. Murota, D. Kitamura, S. Koyama, H. Saruwatari, and S. Nakamura, “Statistical modeling of binaural signal and its application to binaural source separation,” in Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Brisbane, Apr. 2015, pp. 494–498.
  • D. Kitamura, N. Ono, H. Sawada, H. Kameoka, and H. Saruwatari, “Relaxation of rank-1 spatial constraint in overdetermined blind source separation,” in Proceedings of European Signal Processing Conference (EUSIPCO), Nice, 2015, pp. 1261–1265. [Invited]
  • H. Saruwatari, “Statistical-model-based speech enhancement with musical-noise-free properties,” in Proceedings of 2015 IEEE International Conference on Digital Signal Processing (DSP2015), Singapore, 2015. [Invited]

2014

  • D. Kitamura, H. Saruwatari, S. Nakamura, Y. Takahashi, K. Kondo, and H. Kameoka, “Hybrid multichannel signal separation using supervised nonnegative matrix factorization with spectrogram restoration,” in Proceedings of Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), Siem Reap, Dec. 2014. [Invited]
  • S. Koyama, P. Srivastava, K. Furuya, S. Shimauchi, and H. Ohmuro, “STSP: Space-Time Stretched Pulse for Measuring Spatio-Temporal Impulse Response,” in Proceedings of International Workshop on Acoustic Signal Enhancement (IWAENC), Sep. 2014, pp. 309–313.
  • D. Kitamura, H. Saruwatari, S. Nakamura, Y. Takahashi, K. Kondo, and H. Kameoka, “Divergence optimization in nonnegative matrix factorization with spectrogram restoration for multichannel signal separation,” in Proceedings of Hands-free Speech Communication and Microphone Arrays (HSCMA), Nancy, May 2014, no. 1569905839.
  • F. Aprilyanti, H. Saruwatari, K. Shikano, S. Nakamura, and T. Takatani, “Optimized joint noise suppression and dereverberation based on blind signal extraction for hands-free speech recognition system,” in Proceedings of Hands-free Speech Communication and Microphone Arrays (HSCMA), Nancy, May 2014, no. 1569905697.
  • S. Nakai, H. Saruwatari, R. Miyazaki, S. Nakamura, and K. Kondo, “Theoretical analysis of biased MMSE short-time spectral amplitude estimator and its extension to musical-noise-free speech enhancement,” in Proceedings of Hands-free Speech Communication and Microphone Arrays (HSCMA), Nancy, May 2014, no. 1569905751.
  • Y. Murota, D. Kitamura, S. Nakai, H. Saruwatari, S. Nakamura, Y. Takahashi, and K. Kondo, “Music signal separation based on Bayesian spectral amplitude estimator with automatic target prior adaptation,” in Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Florence, May 2014, pp. 7540–7544.
  • S. Koyama, S. Shimauchi, and H. Ohmuro, “Sparse Sound Field Representation in Recording and Reproduction for Reducing Spatial Aliasing Artifacts,” in Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Florence, May 2014, pp. 4476–4480.
  • Y. Haneda, K. Furuya, S. Koyama, and K. Niwa, “Close-talking spherical microphone array using sound pressure interpolation based on spherical harmonic expansion,” in Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Florence, May 2014, pp. 604–608.


Domestic Conferences (国内会議)

2021

  • 宇田川 健太, 齋藤 佑樹, and 猿渡 洋, “人間の知覚評価フィードバックによる音声合成の話者適応,” in 音声研究会 (SP), Oct. 2021.
  • 藤井 一貴, 齋藤 佑樹, and 猿渡 洋, “韻律情報で条件付けされた非自己回帰型End-to-End日本語音声合成の検討,” in 第138回音声言語情報処理研究会, Oct. 2021.
  • 川村 真也, 中村 友彦, 北村 大地, 猿渡 洋, 高橋 祐, and 近藤 多伸, “楽譜情報を援用した音楽音響信号に対する混合Differentiable DSPモデルの合成パラメータ推定,” in 第132回音楽情報科学研究会, Sep. 2021, vol. 2021–MUS–132, no. 24.
  • 渡辺 瑠伊, 北村 大地, 中村 友彦, 猿渡 洋, 高橋 祐, and 近藤 多伸, “深層学習に基づく間引きインジケータ付き周波数帯域補間手法による音源分離処理の高速化,” in 日本音響学会秋季研究発表会講演論文集, Sep. 2021, pp. 155–158.
  • 溝渕 悠朔, 北村 大地, 中村 友彦, 猿渡 洋, 高橋 祐, and 近藤 多伸, “非負値行列因子分解を用いた被り音の抑圧,” in 第132回音楽情報科学研究会, Sep. 2021, vol. 2021–MUS–132, no. 24.
  • 松永 裕太, 佐伯 高明, 高道 慎之介, and 猿渡 洋, “講演音声におけるフィラーの出現傾向と個人性に関する分析,” in 日本音声学会第35回全国大会, Aug. 2021. (to appear)
  • 蓮実拓也, 中村友彦, 高宗典玄, 猿渡洋, 北村大地, 高橋祐, and 近藤多伸, “非負値行列因子分解を導入したproduct of experts型音源モデルに基づく独立深層学習行列分析による多チャネル音源分離,” in 第131回音楽情報科学研究会, Jun. 2021. (to appear)
  • 齋藤 弘一, 中村 友彦, 矢田部浩平, and 猿渡洋, “周波数領域でのフィルタ設計に基づくサンプリング周波数非依存畳み込み層を用いたDNN音源分離,” in 情報処理学会研究報告, Jun. 2021. (to appear)
  • 高道 慎之介, 中田 亘, 郡山 知樹, 丹治 尚子, 井島 勇祐, 増村 亮, and 猿渡 洋, “J-KAC:日本語オーディオブック・紙芝居朗読音声コーパス,” in 情報処理学会研究報告, Jun. 2021. (to appear)
  • 中村 友彦 and 猿渡 洋, “多重解像度深層分析を用いた楽音分離の実験的評価,” in 情報処理学会研究報告, 2021–MUS–131, Jun. 2021, pp. 1–11. (to appear)
  • 中田亘, 郡山知樹, 高道慎之介, 井島勇祐, 増村亮, and 猿渡洋, “言語モデルによる文横断情報を用いたオーディオブック音声合成の検討,” in 日本音響学会春季研究発表会講演論文集, Mar. 2021, pp. 953–956.
  • 中村泰貴, 郡山知樹, and 猿渡洋, “深層ガウス過程を用いたsequence-to-sequence音声合成のモデル構造の評価,” in 日本音響学会春季研究発表会講演論文集, Mar. 2021, pp. 1035–1036.
  • 岡本 悠希, 井本 桂右, 高道 慎之介, 山西 良典, and 山下 洋一, “Onoma-to-wave:オノマトペを利用した環境音合成手法の提案,” in 日本音響学会春季研究発表会講演論文集, Mar. 2021, pp. 843–846.
  • 甲斐 優人, 高道 慎之介, 塩田 さやか, and 貴家 仁志, “プライバシー保護のためのカスケード型音声加工法を用いた音声仮名化,” in 日本音響学会春季研究発表会講演論文集, Mar. 2021, pp. 869–872.
  • 近藤 樹, 高宗 典玄, 北村 大地, 猿渡 洋, 池下 林太郎, and 中谷 智広, “スタガードモデル化三重対角型共分散行列を用いた独立半正定値テンソル分析によるブラインド音源分離,” in 日本音響学会春季研究発表会講演論文集, Mar. 2021, pp. 257–260.
  • 蓮実拓也, 中村友彦, 高宗典玄, 猿渡洋, 北村大地, 高橋祐, and 近藤多伸, “経験ベイズ独立深層学習行列分析による多チャネル音源分離,” in 日本音響学会春季研究発表会講演論文集, Mar. 2021, pp. 217–220.
  • 木村圭佑, 小山翔一, 植野夏樹, and 猿渡洋, “音場合成のための所望音場の事前情報を用いた二乗誤差期待値最小化規準スピーカ配置最適化法,” in 日本音響学会春季研究発表会講演論文集, Mar. 2021.
  • 上田陽太, 藤井一貴, 齋藤佑樹, 高道慎之介, 馬場雪乃, and 猿渡洋, “HumanACGAN:人間の知覚を補助分類器に用いた条件付き敵対的生成ネットワークと音素知覚における評価,” in 日本音響学会春季研究発表会講演論文集, Mar. 2021, pp. 819–822.
  • 宋裕進, 塩田さやか, 高道慎之介, 村上大輔, 松井知子, and 猿渡 洋, “短時間発話を用いた話者照合のための音声加工の効果に関する検討,” in 音声言語情報処理研究発表会 (SLP), Mar. 2021.
  • 倉田将希, 高道慎之介, 佐伯高明, 荒川陸, 齋藤佑樹, 樋口啓太, and 猿渡 洋, “リアルタイムDNN音声変換フィードバックによるキャラクタ性の獲得手法,” in 音声言語情報処理研究発表会 (SLP), Mar. 2021.
  • 佐伯高明, 高道慎之介, and 猿渡洋, “大規模言語モデルによる未観測文の生成機構を持つEnd-to-Endインクリメンタル音声合成,” in 音声研究会 (SP), Mar. 2021, pp. 85–90.
  • 齋藤 弘一, 中村 友彦, 矢田部浩平, 小泉悠馬, and 猿渡洋, “アンチエイリアシング機構を導入したサンプリング周波数非依存な畳み込み層を用いた音源分離,” in 情報処理学会研究報告, Mar. 2021, pp. 1–6.
  • 成澤直輝, 池下林太郎, 高宗典玄, 北村大地, 中村友彦, 猿渡洋, and 中谷智広, “独立深層学習テンソル分析に基づく多チャネル音源分離,” in 日本音響学会春季研究発表会講演論文集, Mar. 2021, pp. 117–120.
  • 辛 徳泰, 小松 達也, 高道 慎之介, and 猿渡 洋, “ドメイン適応と相互情報量最小化によるdisentangledな話者・言語表現に基づいたクロスリンガル音声合成,” in 日本音響学会春季研究発表会講演論文集, Mar. 2021, pp. 923–926.
  • 近藤祐斗, 久保優騎, 高宗典玄, 北村大地, and 猿渡洋, “ランク制約付き空間共分散行列推定法における補助関数法に基づく雑音欠落ランク空間基底に対する新しい更新則,” in 日本音響学会春季研究発表会講演論文集, Mar. 2021, pp. 137–140.
  • 竹内博俊, 大石康智, 柏野邦夫, and 猿渡洋, “対称調波畳み込み,” in 日本音響学会春季研究発表会講演論文集, Mar. 2021, pp. 367–370.
  • 齋藤 弘一, 中村 友彦, 矢田部浩平, 小泉悠馬, and 猿渡洋, “潜在アナログフィルタ表現に基づく畳み込み層を用いたサンプリング周波数非依存なDNN音源分離,” in 日本音響学会春季研究発表会講演論文集, Mar. 2021, pp. 125–128.
  • 水田和輝, 郡山知樹, and 猿渡洋, “音声の周波数特性を考慮した畳み込み層を持つ波形生成モデルの検討,” in 日本音響学会春季研究発表会講演論文集, Mar. 2021, pp. 851–852.
  • 齋藤佑樹, 高道慎之介, and 猿渡洋, “主観的話者間類似度を考慮したDNN話者埋め込みのためのActive Learning,” in SLP研究会, Mar. 2021, pp. 1–6.
  • 加茂佳吾, 久保優騎, 高宗典玄, 北村大地, 猿渡洋, 高橋祐, and 近藤多伸, “多変量複素Sub-Gauss分布に基づく同時対角化制約付き多チャネル非負値行列因子分解におけるmajorization-equalizationアルゴリズムを用いた更新則,” in 日本音響学会春季研究発表会講演論文集, Mar. 2021, pp. 261–264.
  • 郡山知樹 and 猿渡洋, “活性化関数とカーネル関数の関係性を用いたガウス過程音声合成の評価,” in 日本音響学会春季研究発表会講演論文集, Mar. 2021, pp. 815–818.
  • 高道 慎之介 and 佐伯 高明, “知覚モデルに基づくストレスフリーなリアルタイム高帯域声質変換の研究,” in 彩の国ビジネスアリーナ, Jan. 2021.

2020

  • 高道 慎之介, 塩田 さやか, 稲熊 寛文, and 柳田 智也, “国際会議Interspeech2020報告,” in 情報処理学会研究報告, Dec. 2020.
  • 竹内雅樹, 安在帥, 松藤圭亮, 李根学, 小笠原佑樹, 高木健, 伊福部達, 藪謙一郎, 高道慎之介, 上羽瑠美, 関野正樹, and 小野寺宏, “LPC残差波を用いたハンズフリー型電気式人工喉頭の開発及び従来型の電気式人工喉頭との音声比較,” in 日本生体医工学会関東支部若手研究者発表会2020, Dec. 2020.
  • 竹内雅樹, 安 在帥, 松藤圭亮, 李 根学, 小笠原 佑樹, 高木 健, 伊福部達, 藪謙一郎, 高道慎之介, 上羽瑠美, 関野正樹, and 小野寺宏, “LPC残差波を用いて自然発声に近い音声を得るハンズフリー型電気式人工喉頭の開発,” in 電気学会マグネティクス研究会, Nov. 2020, pp. 7–12.
  • 高道 慎之介 and 佐伯 高明, “知覚モデルに基づくストレスフリーなリアルタイム高帯域声質変換の研究,” in CEATEC2021, Nov. 2020.
  • 齋藤 佑樹, 高道 慎之介, and 猿渡 洋, “主観的話者間類似度のグラフ埋め込みに基づくDNN話者埋め込み,” in 日本音響学会秋季研究発表会講演論文集, Sep. 2020, pp. 697–698.
  • 中村泰貴, 郡山知樹, and 猿渡洋, “深層ガウス過程音声合成におけるsequence-to-sequence 学習の初期検討,” in 日本音響学会秋季研究発表会講演論文集, Sep. 2020, pp. 753–754.
  • 近藤祐斗, 久保優騎, 高宗典玄, 北村大地, and 猿渡洋, “ブラインド音声抽出のためのランク制約付き空間共分散行列推定法における雑音欠落ランク空間基底推定,” in 日本音響学会秋季研究発表会講演論文集, Sep. 2020, pp. 135–138.
  • 三井健太郎, 郡山知樹, and 猿渡洋, “多話者音声合成のための深層ガウス過程潜在変数モデルを用いた音響モデル・話者表現の同時学習,” in 日本音響学会秋季研究発表会講演論文集, Sep. 2020, pp. 691–694.
  • 湯舟航耶, 郡山知樹, 高道慎之介, and 猿渡洋, “変分オートエンコーダを用いたアクセントの潜在変数表現の検討,” in 日本音響学会秋季研究発表会講演論文集, Sep. 2020, pp. 727–730.
  • 成澤直輝, 高宗典玄, 北村大地, 中村友彦, and 猿渡洋, “音源分離のための周波数間相関を考慮した多変量複素Gauss分布に基づく深層学習による分散共分散行列推定の検討,” in 日本音響学会秋季研究発表会講演論文集, Sep. 2020, pp. 315–318.
  • 佐伯高明, 齋藤佑樹, 高道慎之介, and 猿渡洋, “サブバンドフィルタリングに基づくリアルタイム広帯域DNN声質変換の実装と評価,” in 日本音響学会秋季研究発表会講演論文集, Sep. 2020, pp. 715–718.
  • 西田智哉, 植野夏樹, 小山翔一, and 猿渡洋, “ガウス過程に基づく音場計測のためのセンサ配置最適化,” in 日本音響学会秋季研究発表会講演論文集, Sep. 2020, pp. 211–214.
  • 加茂佳吾, 久保優騎, 高宗典玄, 北村大地, 猿渡洋, 高橋祐, and 近藤多伸, “多変量複素Sub-Gauss分布に基づく同時対角化制約付き多チャネル非負値行列因子分解の様々な残響条件下における実験的評価,” in 日本音響学会秋季研究発表会講演論文集, Sep. 2020, pp. 147–150.
  • 竹内博俊, 大石康智, 柏野邦夫, and 猿渡洋, “調波畳み込みの高速化,” in 日本音響学会秋季研究発表会講演論文集, Sep. 2020, pp. 307–310.
  • 甲斐 優人, 高道 慎之介, 塩田 さやか, and 貴家 仁志, “音声プライバシーのためのブラックボックス型音声加工法,” in 日本音響学会秋季研究発表会講演論文集, Sep. 2020, pp. 767–770.
  • 藤井 一貴, 齋藤 佑樹, 高道 慎之介, 馬場 雪乃, and 猿渡 洋, “人間GAN:人間による知覚的識別に基づく敵対的生成ネットワーク,” in 情報処理学会研究報告, Jun. 2020.
  • 小口 純矢 and 高道 慎之介, “PJS:音素バランスを考慮した日本語歌声コーパス,” in 情報処理学会研究報告, 2020-SLP-132, Jun. 2020, no. 34, pp. 1–3.
  • 内藤 悟嗣, 齋藤 佑樹, 高道 慎之介, 齋藤 康之, and 猿渡 洋, “VOCALOID曲の歌唱におけるブレス位置の自動推定,” in 情報処理学会研究報告, 2020-SLP-132, Jun. 2020, no. 33, pp. 1–3.
  • 大森 陽, 小口 純矢, and 高道 慎之介, “Life-M: ランドマーク画像を題材としたフリーの音楽コーパス,” in 情報処理学会研究報告, 2020-MUS-127, Jun. 2020, no. 28, pp. 1–4.
  • 高橋 勇希, 小口 純矢, 高道 慎之介, 矢野 昌平, and 猿渡 洋, “聴覚印象を考慮したインパルス応答測定信号設計,” in 情報処理学会研究報告, 2020-SLP-132, Jun. 2020, no. 22, pp. 1–3.
  • 小塚 詩穂里, 中村 友彦, and 猿渡 洋, “ニューラルネットワークとウェーブレット基底関数の同時学習に基づく多重解像度深層分析を用いた時間領域音源分離,” in 電子情報通信学会技術研究報告, EA2019-119, Mar. 2020, no. 439, pp. 279–284.
  • 小口純矢, 高道慎之介, 猿渡洋, and 嵯峨山茂樹, “広帯域DNN音声合成のためのスペクトル包絡のGMM近似,” in 日本音響学会春季研究発表会講演論文集, Mar. 2020.
  • 阿曽真至, 高道慎之介, 高宗典玄, and 猿渡洋, “音響モデル尤度に基づく subword 分割の韻律推定精度における評価,” in 日本音響学会春季研究発表会講演論文集, Mar. 2020.
  • 小塚 詩穂里, 中村 友彦, and 猿渡 洋, “リフティングスキームによる離散ウェーブレット変換を導入した深層ニューラルネットに基づく時間領域音源分離,” in 日本音響学会春季研究発表会講演論文集, Mar. 2020, pp. 325–328.
  • 郡山知樹 and 猿渡洋, “深層ガウス過程音声合成における関数の確率微分方程式表現の利用の検討,” in 日本音響学会春季研究発表会講演論文集, Mar. 2020, pp. 1127–1128.
  • 西田智哉, 植野夏樹, 小山翔一, and 猿渡洋, “ガウス過程に基づく場の計測のための推定・候補領域を独立に設定可能なセンサ配置法,” in 電子情報通信学会技術研究報告, Mar. 2020, pp. 147–152.
  • 植野夏樹, 小山翔一, and 猿渡洋, “微分可能な凸損失関数を用いたオーバーサンプリングされたクリップ信号の復元,” in 電子情報通信学会技術研究報告, Mar. 2020, pp. 147–152.
  • 飯島尚仁, 小山翔一, and 猿渡洋, “HRTF測定距離を考慮した分散マイクロフォンアレイ信号からのバイノーラル信号生成,” in 日本音響学会春季研究発表会講演論文集, Mar. 2020, pp. 229–232.
  • 宇根昌和, 久保優騎, 高宗典玄, 北村大地, 猿渡洋, and 牧野 昭二, “基底共有型半教師あり独立低ランク行列分析に基づく多チャネル補聴器システム,” in 日本音響学会春季研究発表会講演論文集, Mar. 2020, pp. 217–220.
  • 久保優騎, 高宗典玄, 北村大地, and 猿渡洋, “ランク制約付き空間共分散行列推定法に基づく拡散性雑音存在下でのブラインド複数方向性音源分離,” in 日本音響学会春季研究発表会講演論文集, Mar. 2020, pp. 301–304.
  • 近藤樹, 高宗典玄, 北村大地, 猿渡洋, 池下林太郎, and 中谷智広, “三重対角型周波数共分散行列を用いた独立半正定値テンソル分析によるブラインド音源分離,” in 日本音響学会春季研究発表会講演論文集, Mar. 2020, pp. 293–296.
  • 三井健太郎, 郡山知樹, and 猿渡洋, “深層ガウス過程に基づく多話者音声合成,” in 日本音響学会春季研究発表会講演論文集, Mar. 2020, pp. 1043–1044.
  • 加茂佳吾, 久保優騎, 高宗典玄, 北村大地, 猿渡洋, 高橋祐, and 近藤多伸, “一般化Gauss分布に基づく同時対角化制約付き多チャネルNMFを用いたブラインド音源分離,” in 電子情報通信学会技術研究報告, EA2019-119, Mar. 2020, no. 439, pp. 13–19.
  • 加茂佳吾, 久保優騎, 高宗典玄, 北村大地, 猿渡洋, 高橋祐, and 近藤多伸, “同時対角化行列の事前分布を用いた高速多チャネル非負値行列因子分解によるブラインド音源分離,” in 日本音響学会春季研究発表会講演論文集, Mar. 2020, pp. 297–300.
  • 山下優樹, 郡山知樹, 齋藤佑樹, 高道慎之介, 井島勇祐, 増村亮, and 猿渡洋, “DNNに基づく話し言葉音声合成における追加コンテキストの効果,” in 電子情報通信学会技術研究報告, EA2019-119, Mar. 2020, no. 441, pp. 65–70.
  • 芹川武尊, 郡山知樹, and 猿渡洋, “Attentionに基づく音声変換のためのアラインメント予測モデルの検討,” in 日本音響学会春季研究発表会講演論文集, Mar. 2020, pp. 1077–1078.
  • 蛭田宜樹, 郡山知樹, 太刀岡勇気, and 小林隆夫, “スタイル適応したDNN音声合成における話者性の検討,” in 日本音響学会春季研究発表会講演論文集, Mar. 2020, pp. 1103–1104.
  • 牧島直輝, 高宗典玄, 北村大地, 猿渡洋, 高橋祐, and 近藤多伸, “独立深層学習行列分析におけるマイクロホン毎及び音源毎の座標降下法に基づく分離行列更新法の周波数別自動選択法,” in 日本音響学会春季研究発表会講演論文集, Mar. 2020, pp. 321–324.
  • 佐伯高明, 齋藤佑樹, 高道慎之介, and 猿渡洋, “差分スペクトル法に基づくDNN声質変換のためのリフタ学習およびサブバンド処理,” in 情報処理学会研究報告, Feb. 2020.
  • 三井健太郎, 郡山知樹, and 猿渡洋, “深層ガウス過程とアクセントの潜在変数表現に基づく音声合成の検討,” in 電子情報通信学会技術研究報告, EA2019-119, Jan. 2020, no. 398, pp. 31–36.
  • J. G. C. Ribeiro, N. Ueno, S. Koyama, and H. Saruwatari, “Region-to-region acoustic transfer function estimation with distributed sources and receivers based on kernel interpolation,” in 電子情報通信学会技術研究報告, Jan. 2020, pp. 83–88.
  • S. Koyama, “Sparsity-based sound field reconstruction,” in Acoustical Science and Technology, 2020. (overview, invited)

2019

  • 久保優騎, 高宗典玄, 北村大地, and 猿渡洋, “ブラインド音声抽出のための多変量複素一般化Gauss分布に基づくランク制約付き空間共分散行列推定法及びその高速化,” in 電子情報通信学会技術研究報告, EA2019-119, Dec. 2019, no. 334, pp. 85–92.
  • 藤井一貴, 齋藤佑樹, 高道慎之介, 馬場雪乃, and 猿渡洋, “人間GAN:人間による知覚的識別に基づく敵対的生成ネットワーク,” in 第22回情報論的学習理論ワークショップ, Nov. 2019.
  • 中村泰貴, 齋藤佑樹, 高道慎之介, 井島勇祐, and 猿渡洋, “話者V2S攻撃:話者認証から構築される声質変換とその音声なりすまし可能性の評価,” in コンピュータセキュリティシンポジウム 2019, Nov. 2019, no. 4, pp. 697–703.
  • 中村 友彦 and 猿渡 洋, “Haar変換を導入した時間領域深層ニューラルネットに基づく音源分離,” in 電子情報通信学会技術研究報告, EA2019-119, Nov. 2019, no. 306, pp. 41–48.
  • 高道慎之介, 三井健太郎, 齋藤佑樹, 郡山知樹, 丹治尚子, and 猿渡洋, “JVS:フリーの日本語多数話者音声コーパス,” in 情報処理学会研究報告, Oct. 2019, vol. 2019-SLP-129, no. 4, pp. 1–4.
  • 加茂佳吾, 久保優騎, 高宗典玄, 北村大地, 猿渡洋, 高橋祐, and 近藤多伸, “多変量複素Student’s t分布に基づくFastMNMF を用いたブラインド音源分離,” in 電子情報通信学会技術研究報告, EA2019-119, Oct. 2019, no. 253, pp. 23–29.
  • 近藤樹, 高宗典玄, 北村大地, 猿渡洋, 池下林太郎, and 中谷智広, “多変量複素Student’s t分布に基づく独立半正定値テンソル分析によるブラインド音源分離,” in 日本音響学会秋季研究発表会講演論文集, Sep. 2019, pp. 153–156.
  • 伊東勇登, 小山翔一, 植野夏樹, and 猿渡洋, “音場のカーネル補間に基づくフィードバック型三次元空間能動騒音制御,” in 日本音響学会秋季研究発表会講演論文集, Sep. 2019, pp. 183–186.
  • 成瀬加菜, 吉田成朗, 高道慎之介, 鳴海拓志, 谷川智洋, and 廣瀬通孝, “自信声フィードバックによる緊張緩和手法の提案:クラウドソーシングを利用した自信声加工パラメータの推定,” in 第24回バーチャルリアリティ学会大会, Sep. 2019.
  • 牧島直輝, 高宗典玄, 北村大地, 猿渡洋, 高橋祐, and 近藤多伸, “独立深層学習行列分析におけるマイクロホン毎の座標降下法に基づく分離行列更新,” in 日本音響学会秋季研究発表会講演論文集, Sep. 2019, pp. 157–160.
  • 宇根昌和, 久保優騎, 高宗典玄, 北村大地, 猿渡洋, and 牧野昭二, “ランク制約付き空間共分散モデル推定を用いた多チャネル補聴器システムの評価,” in 日本音響学会秋季研究発表会講演論文集, Sep. 2019, pp. 161–164.
  • 久保優騎, 高宗典玄, 北村大地, and 猿渡洋, “ランク制約付き空間共分散モデル推定法の逆行列展開による高速化,” in 日本音響学会秋季研究発表会講演論文集, Sep. 2019, pp. 287–290.
  • 佐伯高明, 齋藤佑樹, 高道慎之介, and 猿渡洋, “差分スペクトル法に基づくDNN声質変換の計算量削減に向けたフィルタ推定,” in 日本音響学会秋季研究発表会講演論文集, Sep. 2019.
  • 田丸浩気, 齋藤佑樹, 高道慎之介, 郡山知樹, and 猿渡洋, “ユーザ歌唱のための generative moment matching network に基づくneural double-tracking,” in 日本音響学会秋季研究発表会講演論文集, Sep. 2019.
  • 郡山知樹 and 猿渡洋, “深層ガウス過程に基づく音声合成におけるリカレント構造を用いた系列モデリングの検討,” in 日本音響学会秋季研究発表会講演論文集, Sep. 2019, pp. 1025–1026.
  • 岡本悠希, 柳生拓巳, 井本桂右, 小松達也, 高道慎之介, 山西良典, and 山下洋一, “多様な環境音の合成と変換のための基礎検討,” in 日本音響学会秋季研究発表会講演論文集, Sep. 2019.
  • 齋藤佑樹, 高道慎之介, and 猿渡洋, “主観的話者間類似度に基づくDNN話者埋め込みを用いた多数話者DNN音声合成の実験的評価,” in 日本音響学会秋季研究発表会講演論文集, Sep. 2019.
  • 阿曽真至, 高道慎之介, 高宗典玄, and 猿渡洋, “End-to-end 韻律推定に向けたDNN音響モデルに基づくsubword分割,” in 日本音響学会秋季研究発表会講演論文集, Sep. 2019.
  • 伊東 勇登, 小山 翔一, 植野 夏樹, and 猿渡 洋, “音場の補間に基づく分散配置マイクロフォンを用いたフィードフォワード型空間能動騒音制御,” in 日本音響学会春季研究発表会講演論文集, Mar. 2019, no. 3-6-12, pp. 279–282.
  • 久保優騎, 高宗典玄, 北村大地, and 猿渡洋, “乗算型更新式に基づくランク制約付き空間共分散モデルの推定,” in 日本音響学会2019年春季研究発表会講演論文集, Mar. 2019, pp. 2–6-1.
  • 最上伸一, 高宗典玄, 北村大地, 猿渡洋, 高橋祐, 近藤多伸, and 中嶋広明, “独立低ランク行列分析におけるmajorization-equalizationアルゴリズムを用いた空間パラメータの高速更新,” in 日本音響学会2019年春季研究発表会講演論文集, Mar. 2019, pp. 2–6-2.
  • 齋藤佑樹, 高道慎之介, and 猿渡洋, “DNN音声合成に向けた主観的話者間類似度を考慮したDNN話者埋め込み,” in 日本音響学会2019年春季研究発表会講演論文集, Mar. 2019, pp. 3–10-7.
  • 郡山知樹, 高道慎之介, and 小林隆夫, “グラム行列のスパース近似を用いた生成的モーメントマッチングネットに基づく音声合成の検討,” in 日本音響学会2019年春季研究発表会講演論文集, Mar. 2019, pp. 3–10-6.
  • 高道慎之介 and 猿渡洋, “正弦関数摂動 von Mises 分布DNNに基づく位相復元における群遅延の最尤推定値の近似法,” in 日本音響学会2019年春季研究発表会講演論文集, Mar. 2019, pp. 3–10-2.
  • 牧島直輝, 高宗典玄, 北村大地, 猿渡洋, 高橋祐, 近藤多伸, and 中嶋広明, “時変複素一般化ガウス分布に基づく独立深層学習行列分析,” in 日本音響学会2019年春季研究発表会講演論文集, Mar. 2019, pp. 1–6-7.
  • 牧島直輝, 最上伸一, 高宗典玄, 高道慎之介, 北村大地, 猿渡洋, 高橋祐, 近藤多伸, and 中嶋広明, “教師あり及び半教師あり条件下における独立深層学習行列分析の実験的評価,” in 日本音響学会2019年春季研究発表会講演論文集, Mar. 2019, pp. 2-Q-28.
  • 高道慎之介 and 齋藤大輔, “歌声合成に向けた酒酔い歌声の特徴量分析に関する初期検討,” in 日本音響学会2019年春季研究発表会講演論文集, Mar. 2019, pp. 2-P-36.
  • 中村泰貴, 齋藤佑樹, 西田京介, 井島勇祐, and 高道慎之介, “音素事後確率とd-vector を用いたノンパラレル多対多VAE音声変換における学習データ量とd-vector 次元数に関する評価,” in 日本音響学会2019年春季研究発表会講演論文集, Mar. 2019, pp. 2-P-30.
  • 田丸浩気, 齋藤佑樹, 高道慎之介, 郡山知樹, and 猿渡洋, “Generative moment matching net に基づく歌声のランダム変調ポストフィルタと double-tracking への応用,” in 日本音響学会2019年春季研究発表会講演論文集, Mar. 2019, pp. 2–10-5.
  • 関澤太樹, 高道慎之介, 高宗典玄, and 猿渡洋, “外国人留学生日本語の音声合成における話者性を保持した韻律補正,” in 日本音響学会2019年春季研究発表会講演論文集, Mar. 2019, pp. 2–10-2.
  • 阿曽真至, 高道慎之介, 高宗典玄, and 猿渡洋, “End-to-end 韻律推定に向けたsubword lattice構造を考慮したDNN音響モデル学習,” in 日本音響学会2019年春季研究発表会講演論文集, Mar. 2019, pp. 2–10-1.
  • 荒川陸, 高道慎之介, and 猿渡洋, “リアルタイムDNN音声変換の実装とデータ拡張法による音質改善法,” in 日本音響学会2019年春季研究発表会講演論文集, Mar. 2019, pp. 1–10-4.
  • 溝口聡, 齋藤佑樹, 高道慎之介, and 猿渡洋, “多様なカートシスを持つ雑音に対応した低ミュージカルノイズDNN音声強調,” in 日本音響学会2019年春季研究発表会講演論文集, Mar. 2019, pp. 1–6-6.
  • 久保優騎, 高宗典玄, 北村大地, and 猿渡洋, “ブラインド音源分離における多変量複素Student’s t分布に基づくランク制約付き空間共分散モデルの推定,” in 電子情報通信学会技術研究報告, Mar. 2019, pp. 173–178.
  • 福重敢太, 高宗典玄, 北村大地, 猿渡洋, 池下林太郎, and 中谷智広, “収束保証型独立半正定値テンソル分析に基づくブラインド音源分離,” in 電子情報通信学会技術研究報告, Mar. 2019, pp. 167–172.
  • 高道慎之介 and 猿渡洋, “正弦関数摂動von Mises分布DNNのモード近似を用いた位相復元,” in 情報処理学会研究報告, 2018-SLP-126, Feb. 2019, no. 9, pp. 1–6.
  • 郡山知樹, 高道慎之介, and 小林隆夫, “グラム行列のスパース近似を用いた生成的モーメントマッチングネットワークに基づく音声合成の検討,” in 情報処理学会研究報告, 2018-SLP-126, Feb. 2019.

2018

  • 田丸浩気, 齋藤佑樹, 高道慎之介, 郡山知樹, and 猿渡洋, “モーメントマッチングに基づくDNN合成歌声のランダム変調ポストフィルタとニューラルダブルトラッキングへの応用,” in 情報処理学会研究報告, 2018-SLP-125, Dec. 2018, no. 20, pp. 1–6.
  • 牧島直輝, 最上伸一, 高宗典玄, 北村大地, 猿渡洋, 高橋祐, 近藤多伸, and 中嶋広明, “ヘビーテイル生成モデルに基づく独立深層学習行列分析による多チャネル音源分離,” in 第33回 信号処理シンポジウム, Nov. 2018, pp. 202–207.
  • 溝口聡, 齋藤佑樹, 高道慎之介, and 猿渡洋, “カートシスマッチングに基づく低ミュージカルノイズDNN音声強調の評価,” in 電子情報通信学会技術研究報告, EA2018-121, Nov. 2018, no. 4, pp. 1–6.
  • 瀧田 雄太, 小山 翔一, 植野 夏樹, and 猿渡 洋, “Annihilatingフィルタを用いた球面調和関数領域reciprocity gap functional: 複数周波数を用いたグリッドレス音場分解,” in 第33回 信号処理シンポジウム, Nov. 2018, pp. 184–189.
  • 溝口聡, 齋藤佑樹, 高道慎之介, and 猿渡洋, “カートシスマッチングと深層学習に基づく低ミュージカルノイズ音声強調,” in 日本音響学会2018年秋季研究発表会講演論文集, Sep. 2018, pp. 2–1-7. [日本音響学会 第18回学生優秀発表賞]
  • 瀧田 雄太, 小山 翔一, 植野 夏樹, and 猿渡 洋, “球面調和領域におけるreciprocity gap functional に基づくグリッドレス音場分解に関する実験的評価,” in 日本音響学会秋季研究発表会講演論文集, Sep. 2018, no. 1-P-38, pp. 341–344.
  • 植野 夏樹, 小山 翔一, and 猿渡 洋, “微分で求める!積分を用いない高次アンビソニックス係数の導出,” in 日本音響学会秋季研究発表会講演論文集, Sep. 2018, no. 1-P-25, pp. 307–310.
  • 久保優騎, 高宗典玄, 北村大地, and 猿渡洋, “独立低ランク行列分析を用いたフルランク空間共分散モデルに基づくブラインド音源分離,” in 日本音響学会2018年秋季研究発表会講演論文集, Sep. 2018, pp. 3–1-9.
  • 牧島直輝, 高宗典玄, 高道慎之介, 北村大地, 猿渡洋, 高橋祐, 近藤多伸, and 中嶋広明, “半教師あり独立深層学習行列分析におけるデータ拡張に基づく音源モデル適応,” in 日本音響学会2018年秋季研究発表会講演論文集, Sep. 2018, pp. 2–1-8. [日本音響学会 第18回学生優秀発表賞]
  • 高道慎之介, 齋藤佑樹, 高宗典玄, 北村大地, and 猿渡洋, “方向統計DNNに基づく振幅スペクトログラムからの位相復元,” in 日本音響学会2018年秋季研究発表会講演論文集, Sep. 2018, pp. 2–4-2.
  • 高道慎之介 and 森川大輔, “クラウドソーシング音像定位実験における確信度・感覚レベルに基づく信頼度の効果,” in 日本音響学会2018年秋季研究発表会講演論文集, Sep. 2018, pp. 1-Q-4.
  • 最上伸一, 高宗典玄, 北村大地, 猿渡洋, 高橋祐, 近藤多伸, 中嶋広明, and 小野順貴, “一般化反復射影法に基づく時変劣ガウス独立低ランク行列分析,” in 日本音響学会2018年秋季研究発表会講演論文集, Sep. 2018, pp. 3–1-10.
  • コンヴェールマクシム, 深山覚, 中野倫靖, 高道慎之介, 猿渡洋, and 後藤真孝, “ニューラルネットワークによる自動和声付けのための和音表現方法の検討,” in 情報処理学会研究報告, 2018-MUS-120, Aug. 2018, no. 1, pp. 1–8.
  • 齋藤大輔, 森勢将雅, 塩田さやか, 木谷俊介, 小橋川哲, 高道慎之介, 武岡成人, and 橘亮輔, “音学シンポジウム2018の開催にあたって,” in 情報処理学会研究報告, 2018-SLP-122, Jun. 2018, no. 1, pp. 1–2.
  • 相場亮人, 吉田実, 後藤理, 北村大地, 高道慎之介, and 猿渡洋, “雑音下異常検知における前処理としてのNMF音源抽出手法の検討,” in 情報処理学会研究報告, 2018-MUS-122, Jun. 2018, no. 34, pp. 1–5.
  • 高道慎之介, 齋藤佑樹, 高宗典玄, 北村大地, and 猿渡洋, “Von Mises分布DNNに基づく振幅スペクトログラムからの位相復元,” in 情報処理学会研究報告, 2018-MUS-122, Jun. 2018, no. 54, pp. 1–6. [音学シンポジウム優秀賞]
  • 高道慎之介, 秋山 貴則, and 猿渡洋, “日本語韻律構造を考慮したprosody-aware subword embeddingとDNN多方言音声合成への適用,” in 情報処理学会研究報告, 2018-SLP-***, May 2018, no. ***, pp. ***–***.
  • 北村大地, 角野隼斗, 高宗典玄, 高道慎之介, 猿渡 洋, and 小野順貴, “独立深層学習行列分析に基づく多チャネル音源分離の実験的評価,” in 電子情報通信学会技術研究報告, EA2017-104, Mar. 2018, no. 515, pp. 13–20.
  • 植野夏樹, 小山翔一, and 猿渡洋, “調和スペクトルの解析的重み付けに基づく音場再現と放射抑圧,” in 日本音響学会2018年春季研究発表会講演論文集, Mar. 2018, no. 3-4-6, pp. 485–488.
  • 瀧田雄太, 小山翔一, and 猿渡洋, “内部・外部音場分離のための信号モデルの比較,” in 日本音響学会2018年春季研究発表会講演論文集, Mar. 2018, pp. 1–4-14.
  • 北村大地, 高宗典玄, 最上伸一, 三井祥幹, 猿渡洋, 高橋祐, and 近藤多伸, “ヘビーテイルな分布に基づく非負値行列因子分解を用いたスパース雑音除去,” in 日本音響学会2018年春季研究発表会講演論文集, Mar. 2018, pp. 469–472.
  • 角野隼斗, 北村大地, 高宗典玄, 高道慎之介, 猿渡洋, and 小野順貴, “独立深層学習行列分析に基づく多チャネル音源分離,” in 日本音響学会2018年春季研究発表会講演論文集, Mar. 2018, pp. 1–4-16. [日本音響学会, 学生優秀発表賞]
  • 園部良介, 高道慎之介, and 猿渡洋, “JSUTコーパス:End-to-End音声合成に向けたフリーの大規模日本語音声コーパス,” in 日本音響学会2018年春季研究発表会講演論文集, Mar. 2018, pp. 1-Q-37.
  • 園部良介, 高道慎之介, 猿渡洋, 矢岸進, 増岡厳, and 本間昭, “改訂 長谷川式簡易知能評価スケール(HDS-R)の音声言語特徴量を用いたアルツハイマー型認知症の識別,” in 日本音響学会2018年春季研究発表会講演論文集, Mar. 2018, pp. 1-Q-44.
  • 秋山貴則, 高道慎之介, and 猿渡洋, “日本語音声合成のためのsubword内モーラを考慮したProsody-aware subword embedding,” in 日本音響学会2018年春季研究発表会講演論文集, Mar. 2018, pp. 2–9-3. [日本音響学会, 学生優秀発表賞]
  • 塩田さやか, 高道慎之介, and 松井知子, “Moment-matching networkによるi-vector生成を用いた話者照合,” in 日本音響学会2018年春季研究発表会講演論文集, Mar. 2018, pp. 2–8-3.
  • 宇根昌和, 齋藤佑樹, 高道慎之介, 北村大地, 宮崎亮一, and 猿渡洋, “雑音環境下音声を用いたDNN音声合成のための雑音生成モデルの敵対的学習,” in 日本音響学会2018年春季研究発表会講演論文集, Mar. 2018, pp. 3–8-8.
  • 高道慎之介 and 森川大輔, “クラウドソーシング参加者の信頼度と音像定位精度の関係,” in 日本音響学会2018年春季研究発表会講演論文集, Mar. 2018, pp. 1-P-16.
  • 齋藤佑樹, 高道慎之介, and 猿渡洋, “多重周波数解像度のSTFTスペクトルを用いた敵対的DNN音声合成,” in 日本音響学会2018年春季研究発表会講演論文集, Mar. 2018, pp. 3–8-14.
  • 須田 仁志, 小谷 岳, 高道 慎之介, and 齋藤 大輔, “高品質声質変換のための特徴量分析再訪,” in 日本音響学会2018年春季研究発表会講演論文集, Mar. 2018, pp. 2-Q-27.
  • 最上伸一, 三井祥幹, 高宗典玄, 北村大地, 猿渡洋, 高橋祐, 近藤多伸, 中嶋広明, and 亀岡弘和, “Iダイバージェンスに基づく独立低ランク行列分析の実験的評価,” in 日本音響学会2018年春季研究発表会講演論文集, Mar. 2018, pp. 1–4-8. [日本音響学会, 学生優秀発表賞]
  • 矢田部浩平 and 北村大地, “近接分離最適化によるブラインド音源分離,” in 日本音響学会2018年春季研究発表会講演論文集, Mar. 2018, pp. 1–4-10.
  • 三井祥幹, 高宗典玄, 北村大地, 猿渡洋, 高橋祐, 近藤多伸, and 中嶋広明, “空間モデル正則化を用いた独立低ランク行列分析に基づくブラインド音源分離,” in 日本音響学会2018年春季研究発表会講演論文集, Mar. 2018, pp. 1–4-15.
  • 小山 翔一, G. Chardon, and L. Daudet, “Empirical interpolation method に基づく音場制御における音源・センサ配置法,” in 日本音響学会2018年春季研究発表会講演論文集, Mar. 2018, pp. 481–484.
  • 大野涼平, 高道慎之介, 森勢将雅, and 北原鉄朗, “話者適応型RBMを用いたユーザが所望するかわいい音声への声質変換,” in 日本音響学会2018年春季研究発表会講演論文集, Mar. 2018, pp. 2-Q-33.
  • 齋藤佑樹, 井島勇祐, 西田京介, and 高道慎之介, “音素事後確率とd-vectorを用いたVariational Autoencoderによるノンパラレル多対多音声変換,” in 電子情報通信学会技術研究報告, SP2017-88, Mar. 2018, no. 517, pp. 21–26. [平成29年度音声研究会 研究奨励賞]
  • 瀧田雄太, 小山翔一, 植野夏樹, and 猿渡洋, “凸最適化による信号分離を用いたグリッドレス音場分解の性能向上,” in 電子情報通信学会技術研究報告, Jan. 2018, pp. 19–24.

2017

  • 三井祥幹, 高宗典玄, 北村大地, 猿渡洋, 高橋祐, and 近藤多伸, “空間事前情報を用いた独立低ランク行列分析,” in 第32回 信号処理シンポジウム, Nov. 2017, pp. 360–365.
  • 最上伸一, 三井祥幹, 高宗典玄, 北村大地, 猿渡洋, 高橋祐, and 近藤多伸, “Iダイバージェンスを用いた独立低ランク行列分析,” in 第32回 信号処理シンポジウム, Nov. 2017, pp. 354–359.
  • 最上伸一, 北村大地, 高宗典玄, 三井祥幹, 猿渡洋, 小野順貴, 高橋祐, and 近藤多伸, “複素Student’s t 分布に基づく独立低ランク行列分析の実験的評価,” in 日本音響学会2017年秋季研究発表会講演論文集, Sep. 2017, pp. 3–12-12.
  • 三井祥幹, 北村大地, 高宗典玄, 猿渡洋, 高橋祐, and 近藤多伸, “パラメトリック補助関数法に基づく独立低ランク行列分析を用いたブラインド音源分離,” in 日本音響学会2017年秋季研究発表会講演論文集, Sep. 2017, pp. 3–12-13.
  • 宇根昌和, 齋藤佑樹, 高道慎之介, 北村大地, 宮崎亮一, and 猿渡洋, “雑音環境下音声を用いた音声合成のための雑音生成モデルの敵対的学習,” in 電子情報通信学会技術研究報告, Sep. 2017, pp. 1–6.
  • 植野夏樹, 小山翔一, and 猿渡洋, “無限次元調和解析に基づく音場補間とそのカーネル回帰としての解釈,” in 日本音響学会2017年秋季研究発表会講演論文集, Sep. 2017, no. 1-12-5, pp. 453–456.
  • 瀧田雄太, 小山翔一, 植野夏樹, and 猿渡洋, “複数球状マイクロフォンアレイを用いたグリッドレス音場分解の検討,” in 日本音響学会2017年秋季研究発表会講演論文集, Sep. 2017, no. 1-12-3, pp. 447–450. [日本音響学会, 学生優秀発表賞]
  • 高道慎之介 and 猿渡洋, “クラウドソーシングを利用した対訳方言音声コーパスの構築,” in 日本音響学会2017年秋季研究発表会講演論文集, Sep. 2017, pp. 1–8-4.
  • 齋藤佑樹, 高道慎之介, and 猿渡洋, “敵対的DNN 音声合成におけるダイバージェンスの影響の調査,” in 日本音響学会2017年秋季研究発表会講演論文集, Sep. 2017, pp. 1–8-7.
  • 高道慎之介, 郡山知樹, 齋藤佑樹, and 猿渡洋, “Moment-matching network に基づく一期一会音声合成における発話間ゆらぎの評価,” in 日本音響学会2017年秋季研究発表会講演論文集, Sep. 2017, pp. 1–8-9.
  • 最上伸一, 北村大地, 高宗典玄, 三井祥幹, 猿渡 洋, 小野順貴, 高橋 祐, and 近藤多伸, “複素Student’s t分布に基づく独立低ランク行列分析,” in 電子情報通信学会技術研究報告, Jul. 2017, pp. 131–136.
  • 三好裕之, 齋藤佑樹, 高道慎之介, and 猿渡洋, “コンテキスト事後確率のSequence-to-Sequence学習を用いた音声変換とDual Learningの評価,” in 電子情報通信学会技術研究報告, Jul. 2017.
  • 猿渡洋, “ブラインド音源分離再考 -時空間の非ガウス・スパース・低ランクモデリング-,” in 日本音響学会2017年春季研究発表会講演論文集, Jun. 2017, pp. 1–8-12.
  • 齋藤大輔, 森勢将雅, 饗庭絵里子, 伊藤貴之, 大谷真, 北原鉄朗, 塩田さやか, 高道慎之介, 滝口哲也, 深山覚, 吉井和佳, and 渡邉貫治, “音学シンポジウム2017の開催にあたって,” in 情報処理学会研究報告, Jun. 2017.
  • 南條浩輝, 高道慎之介, 北原鉄朗, and 森勢将雅, “外国語音声を好みの声質にかえる技術の検討– 聞きつづけたくなる外国語教材をめざして–,” in 情報処理学会研究報告, Jun. 2017.
  • 安部祐一, 坂東宜昭, 永野光, 昆陽雅司, 山崎公俊, 糸山克寿, 猿渡洋, 岡谷貴之, 奥乃博, and 田所論, “感覚機能統合型能動スコープカメラの開発,” in 日本機械学会 ロボティクス・メカトロニクス講演会 (Robomech 2017), May 2017, pp. 1P2–P01.
  • 三井祥幹, 溝口聡, 猿渡洋, 越智景子, 北村大地, 小野順貴, 石村大, 前成美, 高草木萌, 松井裕太郎, 山岡洸瑛, and 牧野昭二, “柔軟索状ロボットにおける独立低ランク行列分析と統計的音声強調に基づく高品質ブラインド音源分離の開発,” in 日本機械学会 ロボティクス・メカトロニクス講演会 (Robomech 2017), May 2017, pp. 1P2-P04.
  • 高道慎之介, 郡山知樹, and 猿渡洋, “Moment-matching networkに基づく音声合成における音声パラメータのランダム生成,” in 情報処理学会研究報告, Mar. 2017.
  • 齋藤佑樹, 高道慎之介, and 猿渡洋, “敵対的DNN音声合成におけるF0・継続長の生成,” in 日本音響学会2017年春季研究発表会講演論文集, Mar. 2017, pp. 2–6-6.
  • 遠藤宣明, 中嶋広明, 高宗典玄, 高道慎之介, 猿渡洋, 小野順貴, 高橋祐, and 近藤多伸, “NMFにおける識別的基底学習のための停留点条件を利用した2段階最適化,” in 日本音響学会2017年春季研究発表会講演論文集, Mar. 2017, pp. 1–1-5. [日本音響学会, 学生優秀発表賞]
  • 齋藤佑樹, 高道慎之介, and 猿渡洋, “Highway networkを用いた差分スペクトル法に基づく敵対的DNN音声変換,” in 日本音響学会2017年春季研究発表会講演論文集, Mar. 2017, pp. 1–6-14.
  • 三好裕之, 齋藤佑樹, 高道慎之介, and 猿渡洋, “コンテキスト事後確率のSequence-to-Sequence学習を用いた音声変換,” in 日本音響学会2017年春季研究発表会講演論文集, Mar. 2017, pp. 1–6-15.
  • 高道慎之介, 郡山知樹, and 猿渡洋, “Moment matching networkを用いた音声パラメータのランダム生成の検討,” in 日本音響学会2017年春季研究発表会講演論文集, Mar. 2017, pp. 2–6-9.
  • 三井祥幹, 溝口聡, 猿渡洋, 越智景子, 北村大地, 小野順貴, 石村大, 前成美, 高草木萌, 松井裕太郎, 山岡洸瑛, and 牧野昭二, “独立低ランク行列分析と統計的音声強調を用いた柔軟索状ロボットにおけるブラインド音源分離システムの開発,” in 日本音響学会2017年春季研究発表会講演論文集, Mar. 2017, pp. 1-P-3.
  • 大野涼平, 高道慎之介, 森勢将雅, and 北原鉄朗, “統計的声質変換における印象変化の調査,” in 日本音響学会2017年春季研究発表会講演論文集, Mar. 2017, pp. 2-P-39.
  • 佐藤遼太郎, 亀岡弘和, and 柏野邦夫, “韻律指令列推定のための基本周波数・音韻特徴量系列同時生成モデルの検討,” in 日本音響学会2017年春季研究発表会講演論文集, Mar. 2017, pp. 1–6-1.
  • 高道慎之介 and 中村哲, “GMMに基づく固有声変換のための変調スペクトル制約付きトラジェクトリ学習・適応,” in 日本音響学会2017年春季研究発表会講演論文集, Mar. 2017, pp. 1–6-9.
  • 浅見太一, 小川厚徳, 小川哲司, 大谷大和, 倉田岳人, 齋藤大輔, 塩田さやか, 篠原雄介, 高道慎之介, 南條浩輝, 橋本佳, 樋口卓哉, 増村亮, 吉野幸一郎, and 渡部晋治, “国際会議INTERSPEECH2016報告,” in 情報処理学会研究報告, Feb. 2017, pp. 1–7.
  • 齋藤佑樹, 高道慎之介, and 猿渡洋, “DNNテキスト音声合成のための Anti-spoofing に敵対する学習アルゴリズム,” in 情報処理学会研究報告, Feb. 2017, pp. 1–6.
  • 三井祥幹, 北村大地, 高道慎之介, 小野順貴, and 猿渡洋, “スパース時系列正則化付き独立低ランク行列分析における効率的な解法の検討,” in 電子情報通信学会技術研究報告, Jan. 2017, pp. 25–30.
  • 植野夏樹, 小山翔一, and 猿渡洋, “受聴エリア事前情報を用いた音場再現 ~任意のスピーカ配置と指向特性における検証~,” in 電子情報通信学会技術研究報告, Jan. 2017, pp. 7–12. [電気音響研究会, 学生研究奨励賞]
  • 亀岡弘和, 小野順貴, and 猿渡洋, “音響分野におけるブラインド適応信号処理の展開,” in 電子情報通信学会総合大会, 2017, no. AI-2-2.

2016

  • 佐藤遼太郎, 亀岡弘和, and 柏野邦夫, “基本周波数パターンと音韻特徴量系列の同時生成モデルによる韻律指令列推定,” in 電子情報通信学会技術研究報告, Dec. 2016, pp. 43–48. [音声研究会, 学生ポスター賞]
  • 村田直毅, 小山翔一, 高宗典玄, and 猿渡洋, “音源信号の特性を考慮した時空間スパース音場分解の検討,” in 第31回 信号処理シンポジウム, Nov. 2016, pp. 311–316.
  • 束村陽, 小山翔一, and 猿渡洋, “残響環境下音場分解のための信号モデルと分離アルゴリズムの検討,” in 日本音響学会秋季研究発表会講演論文集, Toyama, Sep. 2016, pp. 389–392 (3–7-9).
  • 齋藤佑樹, 高道慎之介, and 猿渡洋, “DNN 音声合成のための Anti-Spoofing を考慮した学習アルゴリズム,” in 日本音響学会秋季研究発表会講演論文集, Toyama, Sep. 2016, pp. 149–150 (3–5-1). [日本音響学会, 学生優秀発表賞]
  • 三井祥幹, 北村大地, 高道慎之介, and 猿渡洋, “スパース時系列正則化に基づく独立低ランク行列分析を用いたブラインド音源分離,” in 日本音響学会秋季研究発表会講演論文集, Toyama, Sep. 2016, pp. 325–328 (1–7-3).
  • 植野夏樹, 小山翔一, and 猿渡洋, “受聴エリア事前情報を用いた音場再現 ―直線状アレイによる検証―,” in 日本音響学会秋季研究発表会講演論文集, Toyama, Sep. 2016, pp. 415–418 (3–7-17). [日本音響学会, 学生優秀発表賞]
  • 村田直毅, 小山翔一, 高宗典玄, and 猿渡洋, “多重極辞書を用いたスパース音場分解の検討,” in 日本音響学会秋季研究発表会講演論文集, Toyama, Sep. 2016, pp. 393–396 (3–7-10).
  • 竹川佳成, 植村あい子, 奥村健太, 高道慎之介, 中村友彦, 平井辰典, 森尻有貴, and 矢澤櫻子, “新博士によるパネルディスカッションV「新博士さんいらっしゃい!」,” in 情報処理学会研究報告, Jul. 2016, pp. 1–6 (2016-MUS-112).
  • 齋藤佑樹, 高道慎之介, and 猿渡洋, “Anti–spoofingに敵対するDNN音声変換の評価,” in 電子情報通信学会技術研究報告, Jan. 2016, pp. 29–34. [音声研究会, 学生ポスター賞]

2015

  • 猿渡洋, “統計的バイノーラル信号表現とその音源分離への応用,” in 電子情報通信学会技術研究報告(電気/応用音響), 2015. [招待講演]
  • 室田勇騎, 北村大地, 小山翔一, 猿渡洋, and 中村哲, “バイノーラル音源分離における時系列事前分布モデルとスペクトル基底の同時適応,” in 電子情報通信学会技術研究報告(電気/応用音響), 2015, pp. 27–32.
  • 室田勇騎, 北村大地, 小山翔一, 猿渡洋, and 中村哲, “時系列事前分布モデルとスペクトル基底の同時適応を用いたバイノーラル音源分離の実験的評価,” in 日本音響学会春季研究発表会講演論文集, 2015, pp. 549–552 (1–10-12).
  • 北村大地, 小野順貴, 澤田宏, 亀岡弘和, and 猿渡洋, “過決定条件BSSにおけるランク1空間制約の緩和,” in 日本音響学会春季研究発表会講演論文集, 2015, pp. 629–632 (3–10-11).
  • 小山翔一, 古家賢一, 羽田陽一, and 猿渡洋, “音源位置事前情報を用いた超解像型音場収音・再現における空間相関分布の導入,” in 日本音響学会春季研究発表会講演論文集, 2015, pp. 635–638 (3–10-13).
  • 小山翔一, 村田直毅, and 猿渡洋, “超解像型音場収音・再現のためのグループスパース信号表現と分解アルゴリズム,” in 日本音響学会春季研究発表会講演論文集, 2015, pp. 639–642 (3–10-14).
  • 北村大地, 小野順貴, 澤田宏, 亀岡弘和, and 猿渡洋, “多チャネル非負値行列因子分解におけるランク1空間モデルの音源分離性能評価,” in 音学シンポジウム2015(第107回音楽情報科学研究会), 2015.
  • 村田直毅, 小山翔一, 高宗典玄, and 猿渡洋, “スパース音場分解とパラメトリック辞書学習による超解像型収音・再現,” in 日本音響学会秋季研究発表会講演論文集, 2015, pp. 671–674 (1-P-32).
  • 小山翔一 and 猿渡洋, “球・円状マイクロフォンアレイ信号からバイノーラル信号への変換に対する解析的アプローチ,” in 日本音響学会秋季研究発表会講演論文集, 2015, pp. 685–686 (2-P-3).
  • 小山翔一 and 猿渡洋, “スパース性と低ランク性に基づく信号表現による音場分解,” in 日本音響学会秋季研究発表会講演論文集, 2015, pp. 565–568 (3–6-5).
  • 村田直毅, 小山翔一, 亀岡弘和, 高宗典玄, and 猿渡洋, “スパース音場分解における時間周波数領域低ランクモデルの導入,” in 日本音響学会秋季研究発表会講演論文集, 2015, pp. 569–572 (3-6-6). [日本音響学会, 学生優秀発表賞]
  • 中嶋広明, 北村大地, 高宗典玄, 小山翔一, 猿渡洋, 小野順貴, 高橋祐, and 近藤多伸, “全極モデルを用いた基底変形型教師ありNMFによる音楽信号分離,” in 日本音響学会秋季研究発表会講演論文集, 2015, pp. 573–576 (3-6-7). [日本音響学会, 学生優秀発表賞]
  • 北村大地, 猿渡洋, 小野順貴, 澤田宏, and 亀岡弘和, “ランク1空間近似を用いたBSSにおける音源及び空間モデルの考察,” in 日本音響学会秋季研究発表会講演論文集, 2015, pp. 583–586 (3–6-10).

2014

  • 小山翔一, 島内末廣, and 大室仲, “スパース音場表現に基づく音場収音・再現の超解像化,” in 日本音響学会秋季研究発表会講演論文集, Sep. 2014, pp. 571–574 (2–1-9).
  • 室田勇騎, 北村大地, 小山翔一, 猿渡洋, and 中村哲, “バイノーラル信号音源分離における両耳事前分布モデルの考察,” in 日本音響学会秋季研究発表会講演論文集, Sep. 2014, pp. 595–598 (1–1-3). [日本音響学会, 学生優秀発表賞]
  • D. Kitamura, N. Ono, H. Sawada, H. Kameoka, and H. Saruwatari, “Efficient multichannel nonnegative matrix factorization with rank-1 spatial model,” in 日本音響学会秋季研究発表会講演論文集, Sep. 2014, pp. 579–582 (2–1-11).
  • 小山翔一, 島内末廣, and 猿渡洋, “超解像型音場収音・再現のためのスパース音場表現とパラメトリック辞書学習,” in 第29回 信号処理シンポジウム, 2014, pp. 478–483.
  • 室田勇騎, 北村大地, 小山翔一, 猿渡洋, and 中村哲, “チャネル別事前分布推定と両耳共通スペクトルゲインを用いた定位保持型バイノーラル音源分離,” in 第29回 信号処理シンポジウム, 2014, pp. 486–491.

Preprints

2020

  • S. Takamichi, M. Komachi, N. Tanji, and H. Saruwatari, “JSSS: free Japanese speech corpus for summarization and simplification,” in arXiv, Oct. 2020.