Alexa Machine Learning
212 open jobs
Making spoken language the next user interface paradigm
What is Alexa?
Amazon Alexa is leading the way in making spoken language the next user interface paradigm. Alexa is the voice service that powers Amazon’s family of Echo products, Amazon Fire TV, and other third-party products. Echo is a device that you can talk to from across the room to play music, get the news, control lights and thermostats, and more.
Technologies We Focus On
The Alexa Science and Machine Learning team made the magic of Alexa possible, but that was just the beginning. Our goal is to make voice interfaces ubiquitous and as natural as speaking to a human. We have a relentless focus on the customer experience and customer feedback. We use many real-world data sources including customer interactions and a variety of techniques like highly scalable deep learning. Learning at this massive scale requires new research and development. The team is responsible for cutting-edge research and development in virtually all fields of Human Language Technology: Automatic Speech Recognition (ASR), Artificial Intelligence (AI), Natural Language Understanding (NLU), Question Answering, Dialog Management, and Text-to-Speech (TTS). See an interview with VP Rohit Prasad here.
Alexa Scientists and Developers have large-scale impact on customers’ lives and on the industry-wide shift to voice user interfaces. Scientists and engineers in the Alexa team also invent new tools and APIs to accelerate development of voice services, empowering developers through the Alexa Skills Kit and the Alexa Voice Service. For example, developers can now create a new voice experience by simply providing a few sample sentences.
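As a rough sketch of what "providing a few sample sentences" looks like in practice, an Alexa Skills Kit interaction model pairs an intent with a handful of sample utterances; the skill and intent names below are hypothetical, and this only illustrates the publicly documented schema shape:

```python
import json

# Hypothetical skill: the developer supplies a few sample sentences per
# intent, and the Alexa service builds the voice model from them.
interaction_model = {
    "interactionModel": {
        "languageModel": {
            "invocationName": "weather helper",  # assumed skill name
            "intents": [
                {
                    "name": "GetForecastIntent",  # hypothetical intent
                    "slots": [
                        # AMAZON.US_CITY is a documented built-in slot type
                        {"name": "city", "type": "AMAZON.US_CITY"}
                    ],
                    "samples": [
                        "what is the forecast for {city}",
                        "tell me the weather in {city}",
                        "will it rain in {city} today",
                    ],
                }
            ],
        }
    }
}

# Serialize the model as it would appear in a skill's JSON definition
print(json.dumps(interaction_model, indent=2))
```

Three short utterances are enough for the service to generalize to many phrasings of the same request, which is what lowers the barrier for third-party skill developers.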
We also look outward. Your discoveries in speech recognition, natural language understanding, deep learning, and other disciplines of machine learning can fuel new ideas and applications that have direct impact on people’s lives. We firmly believe that our team must engage deeply with the academic community and be part of the scientific discourse. There are many opportunities for presentations at internal Machine Learning conferences, which act as a springboard for publications at premier conferences. We also partner with universities through the Alexa Prize.
Research recently published by the Alexa science team is listed below.
- Roland Maas, Ariya Rastrow, Kyle Goehner, Gautam Tiwari, Shaun Joseph, Bjorn Hoffmeister, "Domain-Specific Utterance End-Point Detection for Speech Recognition," to appear at Interspeech 2017.
- Anjishnu Kumar, Pavankumar Muddireddy, Markus Dreyer, Bjorn Hoffmeister, "Zero-Shot Learning across Heterogeneous Overlapping Domains," to appear at Interspeech 2017.
- Brian King, I-Fan Chen, Yonatan Vaizman, Yuzong Liu, Roland Maas, Sree Hari Krishnan Parthasarathi, Bjorn Hoffmeister, "Robust Speech Recognition via Anchor Word Representations," to appear at Interspeech 2017.
- Harish Arsikere, Sri Garimella, "Robust online i-vectors for unsupervised adaptation of DNN acoustic models: A study in the context of digital voice assistants," to appear at Interspeech 2017.
- Ming Sun, David Snyder, Yixin Gao, Varun Nagaraja, Mike Rodehorst, Sankaran Panchapagesan, Nikko Strom, Spyros Matsoukas, Shiv Vitaladevuni, "Compressed time delay neural network for small-footprint keyword spotting," to appear at Interspeech 2017.
- Xing Fan, Emilio Monti, Lambert Mathias, Markus Dreyer, "Transfer Learning for Neural Semantic Parsing," to appear at ACL 2017 Workshop on Representation Learning for NLP.
- Roland Maas, Sree Hari Krishnan Parthasarathi, Brian King, Ruitong Huang, Bjorn Hoffmeister, "Anchored Speech Detection," Interspeech 2016.
- Sankaran Panchapagesan, Ming Sun, Aparna Khare, Spyros Matsoukas, Arindam Mandal, Bjorn Hoffmeister, Shiv Vitaladevuni, "Multi-task Learning and Weighted Cross-entropy for DNN-based Keyword Spotting," Interspeech 2016.
- Faisal Ladhak, Ankur Gandhe, Markus Dreyer, Lambert Mathias, Ariya Rastrow, Bjorn Hoffmeister, "LatticeRNN: Recurrent Neural Networks over Lattices," Interspeech 2016.
- Janne Pylkkonen, Thomas Drugman, Max Bisani, "Optimizing Speech Recognition Evaluation Using Stratified Sampling," Interspeech 2016.
- Thomas Drugman, Janne Pylkkonen, Reinhard Kneser, "Active and Semi-Supervised Learning in ASR: Benefits on the Acoustic and Language Models," Interspeech 2016.
- George Tucker, Minhua Wu, Ming Sun, Sankaran Panchapagesan, Gengshen Fu, Shiv Vitaladevuni, "Model Compression Applied to Small-Footprint Keyword Spotting," Interspeech 2016.
- Francois Mairesse, Paul Raccuglia, Shiv Vitaladevuni, "Search-based Evaluation from Truth Transcripts for Voice Search Applications," SIGIR 2016.
- Sri Garimella, Arindam Mandal, Nikko Strom, Bjorn Hoffmeister, Spyros Matsoukas, Sree Hari Krishnan Parthasarathi, "Robust i-vector Based Adaptation of DNN Acoustic Model for Speech Recognition," Interspeech 2015.
- Nikko Strom, "Scalable Distributed DNN Training Using Commodity GPU Cloud Computing," Interspeech 2015.
- Sree Hari Krishnan Parthasarathi, Bjorn Hoffmeister, Spyros Matsoukas, Arindam Mandal, Nikko Ström, Sri Garimella, "fMLLR Based Feature-space Speaker Adaptation of DNN Acoustic Models," Interspeech 2015.
- Baiyang Liu, Bjorn Hoffmeister, Ariya Rastrow, "Accurate Endpointing with Expected Pause Duration," Interspeech 2015.
Do you have the chops to tutor Alexa? Visit our cheat sheet on Machine Learning positions with our team, and check out our open positions below in areas from speech science to technical program management. We have global opportunities available in the following locations:
Meet Amazonians working in Alexa Machine Learning
Alexa Machine Learning & Science
"I spoke to the future and it listened" - Gizmodo
Meet the team of world-class scientists behind Alexa.
Introducing the Alexa Prize
The Alexa Prize is an annual competition for university students dedicated to accelerating the field of conversational AI.
State of the Union: Alexa and Advances in Conversational AI
Alexa Head Scientist Rohit Prasad presents how Amazon uses machine learning & cloud computing to fuel AI innovation, making Alexa smarter.
2016 MobileBeat Conference Interview
Alexa Head Scientist Rohit Prasad's interview at VentureBeat's 2016 MobileBeat Conference.
Keynote: Conversational AI in Amazon Alexa
A talk by Ashwin Ram, Senior Manager of AI Science, at Udacity Intersect 2017.