The IEEE International Joint Conference on Neural Networks (IJCNN 2019) will be held July 14–19, 2019 in Budapest, Hungary. IJCNN is the premier international conference for researchers and other professionals in neural networks and related fields, and is also recommended by CCF as a Class C conference in the artificial intelligence category.

 

Qiuai Fu (Class of 2017), under the supervision of Prof. Kai Lei in our laboratory, completed a full paper, "Multi-Task Learning with Capsule Networks", which has been accepted by IJCNN 2019. A brief introduction to the accepted paper follows:

 

Title: Multi-Task Learning with Capsule Networks

 

Authors: Kai Lei, Qiuai Fu, Yuzhi Liang*

 

Abstract: Multi-task learning is a machine learning approach that learns multiple tasks jointly while exploiting the commonalities and differences across tasks. Multi-task learning learns a shared representation, and what is learned for each task can help the other tasks be learned better. Most existing multi-task learning methods adopt a deep neural network as the classifier for each task. However, a deep neural network can exploit its strong curve-fitting capability to achieve high accuracy on the training data even when the learned representation is not good enough, which contradicts the purpose of multi-task learning. In this paper, we propose a framework named multi-task capsule (MT-Capsule), which improves multi-task learning with capsule networks. A capsule network is a new architecture that can intelligently model part-whole relationships to build viewpoint-invariant knowledge and automatically extend the learned knowledge to new scenarios. Experimental results on large real-world datasets show that MT-Capsule significantly outperforms state-of-the-art methods.
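The "shared representation" idea in the abstract is commonly realized as hard parameter sharing: one encoder shared by all tasks, with a separate classification head per task. The sketch below illustrates only that general scheme in NumPy; the layer sizes, the two-task setup, and all names are illustrative assumptions, not the actual MT-Capsule architecture.

```python
import numpy as np

# Illustrative sketch of hard parameter sharing for multi-task learning.
# NOTE: this is NOT the MT-Capsule model from the paper, just the generic
# shared-encoder / per-task-head pattern the abstract refers to.

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

# Shared encoder: a single weight matrix used by every task,
# so gradients from all tasks would update the same representation.
W_shared = rng.standard_normal((16, 8)) * 0.1

# Task-specific heads: each task owns its classifier on top of
# the shared representation (sizes are made up for illustration).
W_task = {
    "task_a": rng.standard_normal((8, 3)) * 0.1,  # 3-way classification
    "task_b": rng.standard_normal((8, 2)) * 0.1,  # binary classification
}

def forward(x, task):
    """Map inputs through the shared encoder, then the head for `task`."""
    h = relu(x @ W_shared)    # shared representation, learned jointly
    return h @ W_task[task]   # task-specific logits

x = rng.standard_normal((4, 16))   # a batch of 4 examples
print(forward(x, "task_a").shape)  # (4, 3)
print(forward(x, "task_b").shape)  # (4, 2)
```

The paper's contribution, per the abstract, is to replace the plain deep-network classifiers in this pattern with capsule networks, whose part-whole modeling is meant to prevent the heads from fitting the training data well despite a poor shared representation.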

 
