
Interpretation of ACL 2017 Chinese Research Papers: Understanding the Frontier of Natural Language Processing in China


Original article from Machine Heart

Reporter: Gao Jingyi

ACL, the annual meeting of the Association for Computational Linguistics, is the most influential and dynamic academic conference in the field of natural language processing and computational linguistics. It is organized by the Association for Computational Linguistics and held once a year. This year's 55th ACL annual meeting will be held in Vancouver, Canada, from July 30 to August 4, 2017.

Recently, ACL 2017 announced its accepted papers, including 194 long papers, 107 short papers, 21 software demonstrations, and 21 papers accepted and published by TACL (Transactions of the Association for Computational Linguistics) that will be presented at ACL 2017.

On April 22, the Youth Working Committee of the Chinese Information Processing Society of China and Tencent held the "ACL 2017 Paper Report Symposium" in Beijing, inviting some of the Chinese authors of accepted papers to give thematic reports on their work. The symposium aims to promote the development of natural language processing research in China and exchange among researchers, and to jointly explore new developments and technologies in the field.

The reports are divided into four themes: machine translation; parsing / semantics / discourse; sentiment / information extraction; and social media / word segmentation / question answering (Q & A).

The following lists the papers presented at the ACL 2017 paper report meeting and provides open download links for the related slides (the links have been shortened for convenience). Interested readers can download them for study. It is worth mentioning that the Natural Language Processing Group of Tsinghua University (THUNLP) had 7 papers accepted by ACL; the group's brief introductions to these papers are quoted in this article in the hope that they will be helpful.

Part One: Machine Translation

Report 1: Prior Knowledge Integration for Neural Machine Translation Using Posterior Regularization

By: Jiacheng Zhang, Yang Liu, Huanbo Luan, Jingfang Xu and Maosong Sun

Slide address: http://t.cn/rxmngeg

In this paper, posterior regularization provides a unified framework for combining discrete prior knowledge with continuous neural networks: arbitrary prior knowledge can be added in the form of feature functions without changing the neural network architecture.
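To make the idea concrete, here is a minimal sketch of a posterior-regularization-style objective, assuming a toy candidate set, a hypothetical feature function and illustrative weights (none of them from the paper): the prior knowledge defines a log-linear distribution q over candidate translations, and the model distribution p is pulled toward q through a KL regularizer added to the training loss.

```python
# Minimal sketch (not the authors' implementation) of posterior regularization:
# prior knowledge is expressed as a feature function over candidate translations,
# turned into a log-linear distribution q, and the NMT model distribution p is
# pulled toward q via a KL term added to the loss.
import numpy as np

def softmax(scores):
    scores = np.asarray(scores, dtype=float)
    scores -= scores.max()                      # numerical stability
    exp = np.exp(scores)
    return exp / exp.sum()

def kl(q, p, eps=1e-12):
    return float(np.sum(q * (np.log(q + eps) - np.log(p + eps))))

# Candidate translations for one source sentence (hypothetical example).
candidates = ["the cat sat", "the cat sit", "cat the sat"]

# A toy feature function standing in for real prior knowledge (for example,
# knowledge from a bilingual dictionary); here it simply rewards candidates
# that contain the expected word "sat".
def feature(cand):
    return 1.0 if "sat" in cand.split() else 0.0

gamma = 2.0                                              # feature weight
q = softmax([gamma * feature(c) for c in candidates])    # knowledge distribution
p = softmax([0.2, 0.5, 0.1])                             # stand-in for NMT model scores

# Training would minimize: NLL of the reference + lambda * KL(q || p),
# so gradients push the model toward translations the prior prefers.
lam = 0.5
reg = lam * kl(q, p)
print(f"KL(q || p) regularizer = {reg:.4f}")
```

In the actual method, the feature functions encode genuine prior knowledge and their weights are part of the model; the toy "contains sat" feature above only stands in for that idea.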

Report 2: Visualizing and Understanding Neural Machine Translation

By: Yanzhuo Ding, Yang Liu, Huanbo Luan and Maosong Sun

Slide address: http://t.cn/rxmnbu8

In this paper, layer-wise relevance propagation is used to visually analyze neural machine translation. It can compute the relevance between any two nodes in the neural network and does not require the functions in the network to be differentiable, providing an important computational tool for understanding and debugging neural machine translation systems in depth.
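As a rough illustration, here is a minimal sketch (assumed, not the authors' code) of the relevance-redistribution rule for a single linear layer: the relevance assigned to each output neuron is split among the inputs in proportion to their contributions, and no gradients are involved.

```python
# Minimal sketch of layer-wise relevance propagation through one linear layer:
# output relevance is redistributed to inputs in proportion to each input's
# contribution z_ij = x_i * w_ij, with a small epsilon term for stability.
import numpy as np

def lrp_linear(x, W, relevance_out, eps=1e-6):
    """x: (d_in,), W: (d_in, d_out), relevance_out: (d_out,)."""
    z = x[:, None] * W                                   # contributions z_ij
    col_sums = z.sum(axis=0)
    denom = col_sums + eps * np.sign(col_sums + 1e-12)   # stabilized column sums
    return (z / denom) @ relevance_out                   # relevance_in: (d_in,)

rng = np.random.default_rng(0)
x = rng.normal(size=4)                          # e.g. a source word vector
W = rng.normal(size=(4, 3))                     # one layer of the network
R_out = np.array([0.7, 0.2, 0.1])               # relevance of the output neurons

R_in = lrp_linear(x, W, R_out)
print("input relevances:", np.round(R_in, 3))
print("conserved total:", round(float(R_in.sum()), 3))  # ≈ R_out.sum() up to eps
```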

Report 3: Incorporating Word Reordering Knowledge into Attention-based Neural Machine Translation

By: Jinchao Zhang, Mingxuan Wang, Qun Liu and Jie Zhou

Slide address: http://t.cn/rxmnsh7

Report 4: Modeling Source Syntax for Neural Machine Translation

By: Junhui Li, Deyi Xiong, Zhaopeng Tu, Muhua Zhu and Guodong Zhou

Slide address: http://t.cn/rxmmw9j

Report 5: Improved Neural Machine Translation with a Syntax-Aware Encoder and Decoder

By: Huadong Chen, Shujian Huang, David Chiang and Jiajun Chen

Slide address: http://t.cn/rxmm54z

Report 6: Sequence-to-Dependency Neural Machine Translation

By: Shuangzhi Wu, Dongdong Zhang, Nan Yang, Mu Li and Ming Zhou

Slide address: http://t.cn/rxmmjne

Part Two: Parsing / Semantics / Discourse

Report 1: Parsing to 1-Endpoint-Crossing, Pagenumber-2 Graphs

By: Junjie Cao, Sheng Huang, Weiwei Sun and Xiaojun Wan

Slide address: http://t.cn/rxmmngj

Report 2: A Progressive Learning Approach to Chinese SRL Using Heterogeneous Data

By: Qiaolin Xia, Zhifang Sui and Baobao Chang

Slide address: http://t.cn/rxmm8fd

Report 3: Discourse Mode Identification in Essays

By: Wei Song, Dong Wang, Ruiji Fu, Lizhen Liu, Ting Liu, Guoping Hu

Slide address: http://t.cn/rxmm127

Report 4: Generating and Exploiting Large-scale Pseudo Training Data for Zero Pronoun Resolution

By: Ting Liu, Yiming Cui, Qingyu Yin, Weinan Zhang, Shijin Wang and Guoping Hu

Slide address: http://t.cn/rxmuwrr

Part Three: Sentiment / Information Extraction

Report 1: Linguistically Regularized LSTM for Sentiment Classification

By: Qiao Qian, Minlie Huang and Xiaoyan Zhu

Slide address: http://t.cn/rxmu4j2

Report 2: Prerequisite Relation Learning for Concepts in MOOCs

By: Liangming Pan, Chengjiang Li, Juanzi Li and Jie Tang

Slide address: http://t.cn/rxmust6

Report 3: Learning with Noise: Enhance Distantly Supervised Relation Extraction with Dynamic Transition Matrix

By: Bingfeng Luo, Yansong Feng, Zheng Wang, Zhanxing Zhu, Songfang Huang, Rui Yan and Dongyan Zhao

Slide address: http://t.cn/rxmulbq

Report 4: Joint Extraction of Entities and Relations Based on a Novel Tagging Scheme

By: Suncong Zheng, Feng Wang and Hongyun Bao

Slide address: http://t.cn/rxmunzm

Report 5: Automatically Labeled Data Generation for Large Scale Event Extraction

By: Yubo Chen, Kang Liu and Jun Zhao

Slide address: http://t.cn/rxm3vz0

Part Four: Social Media / Word Segmentation / Q & A

Report 1: CANE: Context-Aware Network Embedding for Relation Modeling

By: Cunchao Tu, Han Liu, Zhiyuan Liu and Maosong Sun

Slide address: http://t.cn/rxm3aeq

Addressing the problem of network representation learning, this paper proposes a context-aware model for learning network node representations, which performs well on social network link prediction and other tasks.
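A minimal sketch of the context-aware idea follows, using a hypothetical toy vocabulary and node texts (this is not the authors' CANE implementation): a node's embedding is recomputed for each neighbor by attending over its own word vectors with the neighbor's text as the query, so the same node receives different representations on different edges.

```python
# Minimal sketch of context-aware node embeddings: a node's text embedding is
# not fixed, but is recomputed per neighbor by attention over the node's word
# vectors, using the neighbor's average word vector as the query. The link
# score is the dot product of the two context-aware embeddings.
import numpy as np

rng = np.random.default_rng(1)
vocab = {w: rng.normal(size=8) for w in
         "deep learning graphs parsing translation attention".split()}

def avg(words):
    return np.mean([vocab[w] for w in words], axis=0)

def context_aware(words, query):
    """Attention-weighted embedding of `words`, conditioned on `query`."""
    vecs = np.stack([vocab[w] for w in words])
    scores = vecs @ query
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ vecs

# Toy node texts (assumed for illustration only).
text_u = ["deep", "learning", "translation"]
text_v = ["attention", "translation"]
text_w = ["graphs", "parsing"]

# u's embedding differs depending on which neighbor it is scored against.
u_given_v = context_aware(text_u, avg(text_v))
u_given_w = context_aware(text_u, avg(text_w))

score_uv = float(u_given_v @ context_aware(text_v, avg(text_u)))
score_uw = float(u_given_w @ context_aware(text_w, avg(text_u)))
print(f"link score (u,v)={score_uv:.3f}  (u,w)={score_uw:.3f}")
```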

Report 2: Adversarial Multi-Criteria Learning for Chinese Word Segmentation

By: Xinchi Chen, Zhan Shi, Xipeng Qiu and Xuanjing Huang

Slide address: http://t.cn/rxm3tn9

Report 3: Generating Natural Answers by Incorporating Copying and Retrieving Mechanisms in Sequence-to-Sequence Learning

By: Shizhu He, Kang Liu and Jun Zhao

Slide address: http://t.cn/rxm3xmb

Report 4: Attention-over-Attention Neural Networks for Reading Comprehension

By: Yiming Cui, Zhipeng Chen, Si Wei, Shijin Wang, Ting Liu and Guoping Hu

Slide address: http://t.cn/rxm3k3k

Report 5: Sequential Matching Network: A New Architecture for Multi-turn Response Selection in Retrieval-Based Chatbots

By: Yu Wu, Wei Wu, Chen Xing, Ming Zhou and Zhoujun Li

Slide address: http://t.cn/rxm3wbm

This article is original content from Machine Heart. Please contact the official account for authorization before reprinting.
