
Get to know the Amazon research community at CVPR 2019!

Amazon’s computer vision teams are looking forward to meeting you at CVPR 2019. Come and visit us at the Amazon booth, and read on for more information about academic collaboration, career opportunities, and our teams.

Conference Chairs

  • General Chair | Larry Davis
  • Area Chairs | James Rehg, Subhransu Maji

 

Orals

  • Tuesday, June 18 | 0900-1015 | Promenade Ballroom | Oral Session 1-1C: Action & Video

STEP: Spatio-Temporal Progressive Learning for Video Action Detection

Xitong Yang, Xiaodong Yang, Ming-Yu Liu, Fanyi Xiao, Larry S. Davis, Jan Kautz

  • Tuesday, June 18 | 1330-1520 | Promenade Ballroom | Oral Session 1-2C: Scenes & Representation

d-SNE: Domain Adaptation using Stochastic Neighborhood Embedding

Xiang Xu, Xiong Zhou, Ragav Venkatesan, Gurumurthy Swaminathan, Orchid Majumder

  • Wednesday, June 19 | 0830-1000 | Promenade Ballroom | Oral Session 2-1C: Motion & Biometrics

Taking a Deeper Look at the Inverse Compositional Algorithm

Zhaoyang Lv, Frank Dellaert, James Rehg, Andreas Geiger

  • Wednesday, June 19 | 1330-1520 | Promenade Ballroom | Oral Session 2-2C: Computational Photography & Graphics

Fast Spatially-Varying Indoor Lighting Estimation

Mathieu Garon, Kalyan Sunkavalli, Sunil Hadap, Nathan Carr, Jean-François Lalonde

  • Wednesday, June 19 | 1330-1520 | Grand Ballroom | Oral Session 2-2B: Language & Reasoning

Streamlined Dense Video Captioning

Jonghwan Mun, Linjie Yang, Zhou Ren, Ning Xu, Bohyung Han

  • Thursday, June 20 | 0830-1000 | Grand Ballroom | Oral Session 3-1B: Learning, Physics, Theory & Datasets

Incremental Object Learning From Contiguous Views

Stefan Stojanov, Samarth Mishra, Ngoc Anh Thai, Nikhil Dhanda, Ahmad Humayun, Chen Yu, Linda B. Smith, and James M. Rehg

  • Thursday, June 20 | 0830-1000 | Promenade Ballroom | Oral Session 3-1C: Segmentation & Grouping

Semantic Correlation Promoted Shape-Variant Context for Segmentation

Henghui Ding, Xudong Jiang, Bing Shuai, Ai Qun Liu, Gang Wang

  • Thursday, June 20 | 1330-1520 | Terrace Theater | Oral Session 3-2A: Deep Learning

Meta-Learning with Differentiable Convex Optimization

Kwonjoon Lee, Subhransu Maji, Avinash Ravichandran, Stefano Soatto

  • Thursday, June 20 | 1330-1520 | Grand Ballroom | Oral Session 3-2B: Face & Body

Expressive Body Capture: 3D Hands, Face, and Body From a Single Image

Georgios Pavlakos, Vasileios Choutas, Nima Ghorbani, Timo Bolkart, Ahmed A. A. Osman, Dimitrios Tzionas, Michael J. Black

Workshops

  • Sunday, June 16 | 0800-1230 | Hyatt Shoreline A

EMC2: Energy Efficient Machine Learning and Cognitive Computing for Embedded Applications (3rd Edition)

Raj Parihar, Michael Goldfarb, Satyam Srivastava, Mahdi Bojnordi, Tao Sheng, Krishna Nagar, Debu Pal

  • Sunday, June 16 | 1330-1800 | Hyatt Beacon B

Benchmarking Multi-Target Tracking: How crowded can it get?

Laura Leal-Taixe, Hamid Rezatofighi, Anton Milan, Javen Shi, Konrad Schindler, Patrick Dendorfer, Daniel Cremers, Stefan Roth, Ian Reid

  • Monday, June 17 | 1330-1800 | Hyatt Beacon A

Uncertainty and Robustness in Deep Visual Learning

Sergey Prokudin, Kevin Murphy, Peter Gehler, Zeynep Akata, Sebastian Nowozin

  • Monday, June 17 | 0900-1730 | Seaside Ballroom A

The Sixth Fine-Grained Visual Categorization Workshop (FGVC6)

Ryan Farrell, Oisin MacAodha, Subhransu Maji, Serge Belongie

  • Monday, June 17 | 0900-1800 | Seaside 7

Egocentric Perception, Interaction, and Computing

Dima Damen, Antonino Furnari, Walterio Mayol-Cuevas, David Crandall, Giovanni Maria Farinella, Kristen Grauman

Workshop Keynotes

  • Fourth International Workshop on Egocentric Perception, Interaction, and Computing
    • Egocentric Perception of Social and Health-Related Behaviors – James Rehg
  • Workshop on Benchmarking Multi-Target Tracking: How crowded can it get?
    • Multi-Target Tracking: From Classical to Modern – James Rehg

Publications

Tuesday, June 18 | 1015-1300 | Exhibit Hall | Poster Session 1-1P

  • Bag of Tricks for Image Classification with Convolutional Neural Networks

Tong He, Zhi Zhang, Hang Zhang, Zhongyue Zhang, Junyuan Xie, Mu Li

  • Co-occurrent Features in Semantic Segmentation

Hang Zhang, Han Zhang, Chenguang Wang, Junyuan Xie

  • AdaFrame: Adaptive Frame Selection for Fast Video Recognition

Zuxuan Wu, Caiming Xiong, Chih-Yao Ma, Richard Socher, Larry S. Davis

  • End-To-End Time-Lapse Video Synthesis From a Single Outdoor Image

Seonghyeon Nam, Chongyang Ma, Menglei Chai, William Brendel, Ning Xu, Seon Joo Kim

  • MAN: Moment Alignment Network for Natural Language Moment Retrieval via Iterative Graph Adjustment

Da Zhang, Xiyang Dai, Xin Wang, Yuan-Fang Wang, Larry S. Davis

  • Learning to Generate Synthetic Data via Compositing

Shashank Tripathi, Siddhartha Chandra, Amit Agrawal, Ambrish Tyagi, James Rehg, Visesh Chari

  • STEP: Spatio-Temporal Progressive Learning for Video Action Detection

Xitong Yang, Xiaodong Yang, Ming-Yu Liu, Fanyi Xiao, Larry S. Davis, Jan Kautz

  • Modeling Local Geometric Structure of 3D Point Clouds Using Geo-CNN

Shiyi Lan, Ruichi Yu, Gang Yu, Larry Davis

Tuesday, June 18 | 1520-1800 | Exhibit Hall | Poster Session 1-2P

  • d-SNE: Domain Adaptation using Stochastic Neighborhood Embedding

Xiang Xu, Xiong Zhou, Ragav Venkatesan, Gurumurthy Swaminathan, Orchid Majumder

  • OCGAN: One-class Novelty Detection Using GANs with Constrained Latent Representations

Pramuditha Perera, Ramesh Nallapati, Bing Xiang

  • Region Proposal by Guided Anchoring

Jiaqi Wang, Kai Chen, Shuo Yang, Chen Change Loy, Dahua Lin

  • Unifying Heterogeneous Classifiers with Distillation

Jayakorn Vongkulbhisal, Phongtharin Vinayavekhin, Marco Visentini-Scarzanella

Wednesday, June 19 | 1000-1245 | Exhibit Hall | Poster Session 2-1P

  • Sensitive-Sample Fingerprinting of Deep Neural Networks

Zecheng He, Tianwei Zhang, Ruby Lee

  • Unsupervised 3D Pose Estimation with Geometric Self-Supervision

Ching-Hang Chen, Ambrish Tyagi, Amit Agrawal, Dylan Drover, Rohith Mysore Vijaya Kumar, Stefan Stojanov, James Rehg

  • Taking a Deeper Look at the Inverse Compositional Algorithm

Zhaoyang Lv, Frank Dellaert, James Rehg, Andreas Geiger

  • A Bayesian Perspective on the Deep Image Prior

Zezhou Cheng, Matheus Gadelha, Subhransu Maji, and Daniel Sheldon

Wednesday, June 19 | 1520-1800 | Exhibit Hall | Poster Session 2-2P

  • Streamlined Dense Video Captioning

Jonghwan Mun, Linjie Yang, Zhou Ren, Ning Xu, Bohyung Han

  • Bayesian Hierarchical Dynamic Model for Human Action Recognition

Rui Zhao, Wanru Xu, Hui Su, Qiang Ji

  • The Pros and Cons: Rank-Aware Temporal Attention for Skill Determination in Long Videos

Hazel Doughty, Walterio Mayol-Cuevas, Dima Damen

  • Learning to Regress 3D Face Shape and Expression From an Image Without 3D Supervision

Soubhik Sanyal, Timo Bolkart, Haiwen Feng, Michael J. Black

  • FA-RPN: Floating Region Proposals for Face Detection

Mahyar Najibi, Bharat Singh, Larry S. Davis

Thursday, June 20 | 1000-1245 | Exhibit Hall | Poster Session 3-1P

  • Explicit Bias Discovery in Visual Question Answering Models

Varun Manjunatha, Nirat Saini, Larry S. Davis

  • Variational Information Distillation for Knowledge Transfer

Sungsoo Ahn, Shell Xu Hu, Andreas Damianou, Neil Lawrence, Zhenwen Dai

  • Capture, Learning, and Synthesis of 3D Speaking Styles

Daniel Cudeiro, Timo Bolkart, Cassidy Laidlaw, Anurag Ranjan, Michael J. Black

  • EIGEN: Ecologically-Inspired GENetic Approach for Neural Network Structure Searching from Scratch

Jian Ren, Zhe Li, Jianchao Yang, Ning Xu, Tianbao Yang, David Foran

  • All-Weather Deep Outdoor Lighting Estimation

Jinsong Zhang, Kalyan Sunkavalli, Yannick Hold-Geoffroy, Sunil Hadap, Jonathan Eisenman, Jean-François Lalonde

  • Semantic Correlation Promoted Shape-Variant Context for Segmentation

Henghui Ding, Xudong Jiang, Bing Shuai, Ai Qun Liu, Gang Wang

Thursday, June 20 | 1520-1800 | Exhibit Hall | Poster Session 3-2P

  • Trust Region Based Adversarial Attack on Neural Networks

Zhewei Yao, Amir Gholami, Peng Xu, Kurt Keutzer, Michael Mahoney

  • Generalizing Eye Tracking with Bayesian Adversarial Learning

Kang Wang, Rui Zhao, Hui Su, Qiang Ji

  • Expressive Body Capture: 3D Hands, Face, and Body From a Single Image

Georgios Pavlakos, Vasileios Choutas, Nima Ghorbani, Timo Bolkart, Ahmed A. A. Osman, Dimitrios Tzionas, Michael J. Black

  • Learning Joint Reconstruction of Hands and Manipulated Objects

Yana Hasson, Gül Varol, Dimitrios Tzionas, Igor Kalevatykh, Michael J. Black, Ivan Laptev, Cordelia Schmid

  • Competitive Collaboration: Joint Unsupervised Learning of Depth, Camera Motion, Optical Flow and Motion Segmentation

Anurag Ranjan, Varun Jampani, Lukas Balles, Kihwan Kim, Deqing Sun, Jonas Wulff, Michael J. Black

  • Meta-Learning with Differentiable Convex Optimization

Kwonjoon Lee, Subhransu Maji, Avinash Ravichandran, Stefano Soatto

  • Large-scale Distributed Second-order Optimization Using Kronecker-factored Approximate Curvature for Deep Convolutional Neural Networks

Kazuki Osawa, Yohei Tsuji, Yuichiro Ueno, Akira Naruse, Rio Yokota, Satoshi Matsuoka

 

Internships for PhD Students

We offer 3-6 month internships year-round, with opportunities in Aachen, Atlanta, Austin, Bangalore, Barcelona, Berlin, Boston, Cambridge, Cupertino, Graz, Haifa, Herzliya, Manhattan Beach, New York, Palo Alto, Pasadena, Pittsburgh, San Francisco, Shanghai, Seattle, Sunnyvale, Tel Aviv, Tübingen, Turin, and Vancouver. To apply, email your resume to CVPR2019@amazon.com, and let us know if there are any specific locations, teams, or research leaders that you are interested in working with. 

Job Opportunities for Graduating Students and Experienced Researchers

We are looking for results-driven individuals who apply advanced computer vision and machine learning techniques, love working with data, and are deeply technical and highly innovative. If you want the opportunity to invent and build solutions to challenging problems that directly shape how Amazon transforms the consumer experience, we are the place for you. To apply, email your resume to CVPR2019@amazon.com and let us know if there are any specific locations, teams, or research leaders you are interested in working with.

Amazon Scholars

Amazon Scholars is a new program for academic leaders to work with Amazon in a flexible capacity, ranging from part-time to full-time research roles. Learn more at amazon.jobs/scholars.

Amazon and NSF Collaborate to Accelerate Fairness in AI Research

NSF and Amazon are partnering to jointly support computational research focused on fairness in AI, with the goal of contributing to trustworthy AI systems that are readily accepted and deployed to tackle grand challenges facing society. NSF has long supported transformative research in artificial intelligence (AI) and machine learning (ML). The resulting innovations offer new levels of economic opportunity and growth, safety and security, and health and wellness. 

Check out the details here.

Amazon Web Services (AWS) Research Grants

In partnership with Machine Learning@Amazon, AWS offers up to $20,000 in compute tokens each quarter to professors and students. Academics have used these grants for projects ranging from weekend hackathons to massive MRI imaging projects. AWS provides building blocks for developing applications, ranging from Elastic MapReduce for Hadoop analytics to fast and scalable storage with Amazon DynamoDB. Learn more & apply here.
 
Amazon Research Awards

ARA is an unrestricted gift that recognizes exceptional faculty and funds projects leading toward a PhD degree or conducted as part of post-doctoral work. Each selected proposal is assigned an Amazon research contact, as we believe both sides benefit from direct interaction on the research topic. We invite ARA recipients to visit Amazon offices worldwide to give talks related to their work and meet with our research groups face-to-face. We encourage ARA recipients to publish the outcomes of their projects and commit any related code to open-source repositories. Learn more here.

Publishing at Amazon

Amazon is committed to innovating at the frontiers of machine learning and artificial intelligence. Our scientists are encouraged to engage with the research community through written publications, open-source code, and public datasets. We have instituted a new fast-track publication approval process to help share our research as quickly as possible while maintaining the highest standards of quality. Check out some of our most recent publications here.

Diversity at Amazon

We are a company of builders working on behalf of a global customer base. Diversity is core to our leadership principles, as we seek diverse perspectives so that we can be “Right, A Lot”. We welcome people from all backgrounds and perspectives to innovate with us. Learn more at amazon.com/diversity.

Meet Amazonians at Amazon@CVPR

Learn more about Computer Vision teams and products at Amazon.

Amazon Go

Amazon Go is a new kind of store featuring the world’s most advanced shopping technology. No lines, no checkout – just grab and go!

Echo Look

“Alexa, take a photo.” Introducing Echo Look—hands-free camera and style assistant.

Amazon Rekognition - Deep Learning-Based Image Analysis

AWS Webinar: Amazon Rekognition is a service that makes it easy to add image analysis to your applications.
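To give a concrete sense of what "adding image analysis to your applications" looks like, below is a minimal sketch using the AWS SDK for Python (boto3) to run label detection with Amazon Rekognition on an image stored in S3. The bucket and object names are placeholders, and it assumes you have AWS credentials with Rekognition access configured locally.

```python
# Minimal sketch: detect labels in an S3-hosted image with Amazon Rekognition.
# Assumes AWS credentials are configured; "my-bucket" and "photos/shelf.jpg"
# are placeholders, not real resources.
import boto3

rekognition = boto3.client("rekognition", region_name="us-west-2")

response = rekognition.detect_labels(
    Image={"S3Object": {"Bucket": "my-bucket", "Name": "photos/shelf.jpg"}},
    MaxLabels=10,        # return at most 10 labels
    MinConfidence=80.0,  # drop labels below 80% confidence
)

for label in response["Labels"]:
    print(f"{label['Name']}: {label['Confidence']:.1f}%")
```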

Prime Air

We're excited about Prime Air, a delivery system designed to safely get packages to customers in 30 minutes or less using drones.

Core AI

Amazon has developed an automated system that uses machine learning to determine the maturity of fruits and vegetables.

Augmented Reality

AR view lets you visualize products in your home, before you buy them. Visit amazon.com/adlp/arview to learn more.