Papers - Akinori Ito -
Number of papers: 214
[2018]
1.Dialog-based interactive movie recommendation: Comparison of dialog strategies.[Smart Innovation, Systems and Technologies,82,(2018),77-83]Mori, H. and Chiba, Y. and Nose, T. and Ito, A.
10.1007/978-3-319-63859-1_10
http://www.scopus.com/inward/record.url?eid=2-s2.0-85026643237&partnerID=40
2.A study on 2D photo-realistic facial animation generation using 3D facial feature points and deep neural networks.[Smart Innovation, Systems and Technologies,82,(2018),113-118]Sato, K. and Nose, T. and Ito, A. and Chiba, Y. and Ito, A. and Shinozaki, T.
10.1007/978-3-319-63859-1_15
http://www.scopus.com/inward/record.url?eid=2-s2.0-85026660860&partnerID=40
3.Voice conversion from arbitrary speakers based on deep neural networks with adversarial learning.[Smart Innovation, Systems and Technologies,82,(2018),97-103]Miyamoto, S. and Nose, T. and Ito, S. and Koike, H. and Chiba, Y. and Ito, A. and Shinozaki, T.
10.1007/978-3-319-63859-1_13
http://www.scopus.com/inward/record.url?eid=2-s2.0-85026657460&partnerID=40
4.Development and evaluation of Julius-compatible interface for Kaldi ASR.[Smart Innovation, Systems and Technologies,82,(2018),91-96]Yamada, Y. and Nose, T. and Chiba, Y. and Ito, A. and Shinozaki, T.
10.1007/978-3-319-63859-1_12
http://www.scopus.com/inward/record.url?eid=2-s2.0-85026650752&partnerID=40
5.Evaluation of nonlinear tempo modification methods based on sinusoidal modeling.[Smart Innovation, Systems and Technologies,82,(2018),104-111]Nakamura, K. and Chiba, Y. and Nose, T. and Ito, A.
10.1007/978-3-319-63859-1_14
http://www.scopus.com/inward/record.url?eid=2-s2.0-85026650013&partnerID=40
6.Detection of singing mistakes from singing voice.[Smart Innovation, Systems and Technologies,82,(2018),130-136]Miyagawa, I. and Chiba, Y. and Nose, T. and Ito, A.
10.1007/978-3-319-63859-1_17
http://www.scopus.com/inward/record.url?eid=2-s2.0-85026642663&partnerID=40
7.Response selection of interview-based dialog system using user focus and semantic orientation.[Smart Innovation, Systems and Technologies,82,(2018),84-90]Tada, S. and Chiba, Y. and Nose, T. and Ito, A.
10.1007/978-3-319-63859-1_11
http://www.scopus.com/inward/record.url?eid=2-s2.0-85026658197&partnerID=40
[2017]
8.A study on tailor-made speech synthesis based on deep neural networks.[Smart Innovation, Systems and Technologies,63,(2017),159-166]Yamada, S. and Nose, T. and Ito, A.
10.1007/978-3-319-50209-0_20
http://www.scopus.com/inward/record.url?eid=2-s2.0-85006074119&partnerID=40
9.Development of an easy Japanese writing support system with text-to-speech function.[Smart Innovation, Systems and Technologies,64,(2017),221-228]Nagano, T. and Prafianto, H. and Nose, T. and Ito, A.
10.1007/978-3-319-50212-0_27
http://www.scopus.com/inward/record.url?eid=2-s2.0-85006077842&partnerID=40
10.Synthesis of photo-realistic facial animation from text based on HMM and DNN with animation unit.[Smart Innovation, Systems and Technologies,64,(2017),29-36]Sato, K. and Nose, T. and Ito, A.
10.1007/978-3-319-50212-0_4
http://www.scopus.com/inward/record.url?eid=2-s2.0-85006010347&partnerID=40
11.Collection of example sentences for non-task-oriented dialog using a spoken dialog system and comparison with hand-crafted DB.[Communications in Computer and Information Science,713,(2017),458-464]Kageyama, Y. and Chiba, Y. and Nose, T. and Ito, A.
10.1007/978-3-319-58750-9_63
http://www.scopus.com/inward/record.url?eid=2-s2.0-85024499760&partnerID=40
12.Estimation of user's willingness to talk about the topic: Analysis of interviews between humans.[Lecture Notes in Electrical Engineering,999 LNEE,(2017),411-419]Chiba, Y. and Ito, A.
10.1007/978-981-10-2585-3_34
http://www.scopus.com/inward/record.url?eid=2-s2.0-85009476470&partnerID=40
13.Demonstration experiment of data hiding into OOXML document for suppression of plagiarism.[Smart Innovation, Systems and Technologies,63,(2017),3-10]Ito, A.
10.1007/978-3-319-50209-0_1
http://www.scopus.com/inward/record.url?eid=2-s2.0-85006054847&partnerID=40
14.A precise evaluation method of prosodic quality of non-native speakers using average voice and prosody substitution.[ICALIP 2016 - 2016 International Conference on Audio, Language and Image Processing - Proceedings,(2017),7846620-]Prafianto, H. and Nose, T. and Ito, A.
10.1109/ICALIP.2016.7846620
http://www.scopus.com/inward/record.url?eid=2-s2.0-85016097292&partnerID=40
15.Recognition of sounds using square Cauchy mixture distribution.[2016 IEEE International Conference on Signal and Image Processing, ICSIP 2016,(2017),7888359-]Ito, A.
10.1109/SIPROCESS.2016.7888359
http://www.scopus.com/inward/record.url?eid=2-s2.0-85018714138&partnerID=40
16.Construction and analysis of phonetically and prosodically balanced emotional speech database.[2016 Conference of the Oriental Chapter of International Committee for Coordination and Standardization of Speech Databases and Assessment Techniques, O-COCOSDA 2016,(2017),7918977-]Takeishi, E. and Nose, T. and Chiba, Y. and Ito, A.
10.1109/ICSDA.2016.7918977
http://www.scopus.com/inward/record.url?eid=2-s2.0-85020215768&partnerID=40
17.Cluster-based approach to discriminate the user's state whether a user is embarrassed or thinking to an answer to a prompt.[Journal on Multimodal User Interfaces,11(2),(2017),185-196]Chiba, Y. and Nose, T. and Ito, A.
10.1007/s12193-017-0238-y
http://www.scopus.com/inward/record.url?eid=2-s2.0-85009237851&partnerID=40
[2016]
18.Evaluation of a spoken dialog system with cooperative emotional speech synthesis based on utterance state estimation.[IEICE Transactions A (Japanese edition),J99-A(1),(2016),25-35]Kase, T. and Nose, T. and Chiba, Y. and Ito, A.
19.Multiple description vector quantizer design based on redundant representation of central code.[European Signal Processing Conference,2016-November,(2016),7760219-]Ito, A.
10.1109/EUSIPCO.2016.7760219
http://www.scopus.com/inward/record.url?eid=2-s2.0-85006074472&partnerID=40
20.Influence of the height of a robot on comfortableness of verbal interaction.[IAENG International Journal of Computer Science,43(4),(2016),447-455]Hiroi, Y. and Ito, A.
http://www.scopus.com/inward/record.url?eid=2-s2.0-85007586931&partnerID=40