Get to know the Amazon research community at ACL 2019!

Amazon’s research teams are looking forward to meeting you at ACL 2019. Come and visit us at the Amazon booth, and read on for more information about academic collaboration, career opportunities, and our teams.

Publications and Talks at ACL

General Chair

  • Lluís Màrquez

Publications

  • Multi-task Learning with Task, Group, and Universe Feature Learning
    Shiva Pentyala, Mengwen Liu and Markus Dreyer
  • A Large-Scale Corpus for Conversation Disentanglement
    Jonathan K. Kummerfeld, Sai R. Gouravajhala, Joseph J. Peper, Vignesh Athreya, Chulaka Gunasekara, Jatin Ganhotra, Siva Sankalp Patel, Lazaros C Polymenakos and Walter Lasecki
  • A Wind of Change: Detecting and Evaluating Lexical Semantic Change across Times and Domains
    Dominik Schlechtweg, Anna Hätty, Marco Del Tredici and Sabine Schulte im Walde
  • Choosing Transfer Languages for Cross-Lingual Learning
    Yu-Hsiang Lin, Chian-Yu Chen, Jean Lee, Zirui Li, Yuyan Zhang, Mengzhou Xia, Shruti Rijhwani, Junxian He, Zhisong Zhang, Xuezhe Ma, Antonios Anastasopoulos, Patrick Littell and Graham Neubig
  • Cross-lingual Knowledge Graph Alignment via Graph Matching Neural Network
    Kun Xu, Liwei Wang, Mo Yu, Yansong Feng, Yan Song, Zhiguo Wang and Dong Yu
  • Deep Neural Model Inspection and Comparison via Functional Neuron Pathways
    James Fiacco, Samridhi Choudhary and Carolyn Rose
  • Determining Relative Argument Specificity and Stance for Complex Argumentative Structures
    Esin Durmus, Faisal Ladhak and Claire Cardie
  • Disentangled Representation Learning for Non-Parallel Text Style Transfer
    Vineet John, Lili Mou, Hareesh Bahuleyan and Olga Vechtomova
  • Inducing Document Structure for Aspect-based Summarization
    Lea Frermann and Alexandre Klementiev
  • Joint Entity Extraction and Assertion Detection for Clinical Text
    Parminder Bhatia, Busra Celikkaya and Mohammed Khalilia
  • Learning Transferable Feature Representations Using Neural Networks
    Himanshu Sharad Bhatt, Shourya Roy, Arun Rajkumar and Sriranjani Ramakrishnan
  • Lightweight and Efficient Neural Natural Language Processing with Quaternion Networks
    Yi Tay, Aston Zhang, Anh Tuan Luu, Jinfeng Rao, Shuai Zhang, Shuohang Wang, Jie Fu and Siu Cheung Hui
  • Simple and Effective Curriculum Pointer-Generator Networks for Reading Comprehension over Long Narratives
    Yi Tay, Shuohang Wang, Anh Tuan Luu, Jie Fu, Minh C. Phan, Xingdi Yuan, Jinfeng Rao, Siu Cheung Hui and Aston Zhang
  • Multimodal Abstractive Summarization for How2 Videos
    Shruti Palaskar, Jindřich Libovický, Spandana Gella and Florian Metze
  • Poetry to Prose Conversion in Sanskrit as a Linearisation Task: A case for Low-Resource Languages
    Amrith Krishna, Vishnu Sharma, Bishal Santra, Aishik Chakraborty, Pavankumar Satuluri and Pawan Goyal
  • Ranking Generated Summaries by Correctness: An Interesting but Challenging Application for Natural Language Inference
    Tobias Falke, Leonardo F. R. Ribeiro, Prasetya Ajie Utama, Ido Dagan and Iryna Gurevych
  • Robust Zero-Shot Cross-Domain Slot Filling with Example Values
    Darsh Shah, Raghav Gupta, Amir Fayazi and Dilek Hakkani-Tur
  • Span-Level Model for Relation Extraction
    Kalpit Dixit and Yaser Al-Onaizan
  • Training Neural Machine Translation To Apply Terminology Constraints
    Georgiana Dinu, Prashant Mathur, Marcello Federico and Yaser Al-Onaizan
  • Topic Modeling with Wasserstein Autoencoders
    Feng Nan, Ran Ding, Ramesh Nallapati and Bing Xiang
  • Translating Translationese: A Two-Step Approach to Unsupervised Machine Translation
    Nima Pourdamghani, Nada Aldarrab, Marjan Ghazvininejad, Kevin Knight and Jonathan May
  • Why Didn’t You Listen to Me? Comparing User Control of Human-in-the-Loop Topic Models
    Varun Kumar, Alison Smith-Renner, Leah Findlater, Kevin Seppi and Jordan Boyd-Graber
  • Multimodal and Multi-view Models for Emotion Recognition
    Gustavo Aguilar, Viktor Rozgic, Weiran Wang and Chao Wang

Internships for PhD Students

We offer 3-6 month internships year-round, with opportunities in Aachen, Atlanta, Austin, Bangalore, Barcelona, Berlin, Boston, Cambridge, Cupertino, Graz, Haifa, Herzliya, Manhattan Beach, New York, Palo Alto, Pasadena, Pittsburgh, San Francisco, Shanghai, Seattle, Sunnyvale, Tel Aviv, Tübingen, Turin, and Vancouver. To apply, email your resume to ACL2019@amazon.com, and let us know if there are any specific locations, teams, or research leaders that you are interested in working with. 

Job Opportunities for Graduating Students and Experienced Researchers

We are looking for results-driven individuals who apply advanced computer vision and machine learning techniques, love to work with data, and are deeply technical and highly innovative. If you long for the opportunity to invent and build solutions to challenging problems that directly impact the way Amazon transforms the consumer experience, we are the place for you. To apply, email your resume to ACL2019@amazon.com and let us know if there are any specific locations, teams, or research leaders you are interested in working with.

Amazon Scholars

Amazon Scholars is a new program for academic leaders to work with Amazon in a flexible capacity, ranging from part-time to full-time research roles. Learn more at amazon.jobs/scholars.

BAIR Lab Opening

Amazon and the Berkeley Artificial Intelligence Research (BAIR) Lab at the University of California, Berkeley have partnered to open the BAIR Open Research Commons, a new industrial affiliate program launched to accelerate cutting-edge AI research. The Commons is designed to streamline collaborative research by students, faculty, and corporate research scholars.

Amazon and NSF Collaborate to Accelerate Fairness in AI Research

NSF and Amazon are partnering to jointly support computational research focused on fairness in AI, with the goal of contributing to trustworthy AI systems that are readily accepted and deployed to tackle grand challenges facing society. NSF has long supported transformative research in artificial intelligence (AI) and machine learning (ML). The resulting innovations offer new levels of economic opportunity and growth, safety and security, and health and wellness. 

Check out the details here.

Amazon Web Services (AWS) Research Grants

In partnership with Machine Learning@Amazon, AWS offers up to $20,000 in compute tokens each quarter to professors and students. Academics have used these grants for projects ranging from hackathon weekends to large-scale MRI imaging studies. AWS provides building blocks for developing applications ranging from Elastic MapReduce for Hadoop analytics to fast and scalable storage with Amazon DynamoDB. Learn more & apply here.
 
Amazon Research Awards

ARA provides unrestricted gifts to recognize exceptional faculty and to fund projects leading toward a PhD degree or conducted as part of post-doctoral work. Each selected proposal is assigned an Amazon research contact, as we believe both sides benefit from direct interaction on the research topic. We invite ARA recipients to visit Amazon offices worldwide to give talks related to their work and meet with our research groups face-to-face. We encourage ARA recipients to publish the outcomes of their projects and commit any related code to open-source repositories. Learn more here.

Applications open on September 10, 2019, with a submission deadline of October 4, 2019. Proposals are invited in the following research areas:

  • Computer vision
  • Fairness in artificial intelligence
  • Knowledge management and data quality
  • Machine learning algorithms and theory
  • Natural language processing
  • Online advertising
  • Operations research and optimization
  • Personalization
  • Robotics
  • Search and information retrieval
  • Security, privacy and abuse prevention

Publishing at Amazon

Amazon is committed to innovating at the frontiers of machine learning and artificial intelligence. Our scientists are encouraged to engage with the research community through written publications, open-source code, and public datasets. We have instituted a new fast-track publication approval process to help share our research as quickly as possible while maintaining the highest standards of quality. Check out some of our most recent publications here.

Diversity at Amazon

We are a company of builders working on behalf of a global customer base. Diversity is core to our leadership principles, as we seek diverse perspectives so that we can be “Right, A Lot”. We welcome people from all backgrounds and perspectives to innovate with us. Learn more at amazon.com/diversity.

Questions about career opportunities or academic partnerships? Contact us at ACL2019@amazon.com. 

Meet Amazonians working in NLP

Natural Language Processing at Amazon

"I spoke to the future and it listened" - Gizmodo

Meet the team of world-class scientists behind Alexa.

Graduate Research Careers

Learn more about graduate research careers for PhD and Masters students.

Amazon Lex - Quickly Build Conversational Interfaces

With Amazon Lex, the same deep learning technologies that power Amazon Alexa are now available to any developer.
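To give a concrete sense of how a developer might use it, here is a minimal Python sketch that sends one turn of text to a Lex (V1) bot with boto3. It assumes AWS credentials are configured, and the bot name "OrderFlowers", alias "prod", and user ID are hypothetical example values.

    import boto3

    # Runtime client for Amazon Lex (V1); assumes AWS credentials are configured.
    lex = boto3.client("lex-runtime", region_name="us-east-1")

    # Send one turn of text to a hypothetical bot named "OrderFlowers" with alias "prod".
    response = lex.post_text(
        botName="OrderFlowers",
        botAlias="prod",
        userId="demo-user-123",
        inputText="I would like to order two dozen roses",
    )

    # Lex returns the recognized intent, filled slots, and the bot's reply text.
    print(response.get("intentName"), response.get("slots"))
    print(response.get("message"))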

Learn more about Amazon Polly

Amazon Polly is a service that turns text into lifelike speech, making it easy to add voice to your website, mobile app, or device.
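As an illustration, here is a minimal Python sketch that synthesizes a short phrase with Polly via boto3 and saves it as an MP3. It assumes configured AWS credentials; the voice, text, and output file name are example choices.

    import boto3

    # Polly client; assumes AWS credentials are configured.
    polly = boto3.client("polly", region_name="us-east-1")

    # Convert a short text snippet to MP3 using one of Polly's stock voices.
    response = polly.synthesize_speech(
        Text="Welcome to the Amazon booth at ACL 2019.",
        OutputFormat="mp3",
        VoiceId="Joanna",
    )

    # The audio comes back as a streaming body; write it to a local file.
    with open("welcome.mp3", "wb") as f:
        f.write(response["AudioStream"].read())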

Introducing the Alexa Prize

The Alexa Prize is an annual competition for university students dedicated to accelerating the field of conversational AI.

2017 Alexa Prize Finals

Learn more about the Alexa Prize: https://developer.amazon.com/alexaprize