Big Data Engineer

Job ID: 1289381 | Amazon Web Services, Inc.


The AWS WWRO (World Wide Revenue Ops) team is looking for a Big Data Engineer to play a key role in building its industry-leading Customer Information Analytics Platform. Are you passionate about Big Data and highly scalable data platforms? Do you enjoy building end-to-end analytics solutions that help drive business decisions? If you have experience building and maintaining highly scalable data warehouses and data pipelines with high transaction volumes, then we need you!
The full-stack Data Engineer will design, develop, implement, test, document, and operate large-scale, high-volume, high-performance data structures for analytics and deep learning. They will implement both real-time and batch data ingestion routines using best practices in data modeling and ETL/ELT processes, leveraging AWS technologies and Big Data tools; provide online reporting and analysis using business intelligence tools and a logical abstraction layer over large, multi-dimensional data sets from multiple sources; gather business and functional requirements and translate them into robust, scalable, operable solutions that work well within the overall data architecture; produce comprehensive, usable data-set documentation and metadata; and provide input and recommendations on technical issues to the project manager.


Basic Qualifications

· This position requires a Bachelor's Degree in Computer Science or a related technical field, and 5+ years of meaningful employment experience.
· 5+ years of experience with detailed knowledge of data warehouse technical architectures, infrastructure components, ETL/ELT, and reporting/analytic tools.
· 2+ years of experience with data modeling concepts.
· Demonstrated strength in architecting data warehouse solutions and integrating technical components.
· Expert-level skills in writing and optimizing SQL.
· Proficiency in a scripting language such as Python, Ruby, Linux shell scripting, or similar.
· Experience operating very large data warehouses, Big Data technologies, and data lakes.


Preferred Qualifications

· Master's Degree in Computer Science or a related field.
· Experience writing MapReduce and/or Spark jobs.
· Strong analytical skills with excellent knowledge of SQL.
· Experience in gathering requirements and formulating business metrics for reporting.
· Experience with Kafka, Flume, and the AWS tool stack (such as Redshift and Kinesis) is preferred.
· Experience building on AWS using S3, EC2, Redshift, DynamoDB, Lambda, QuickSight, etc.
· Experience using software version control and build tools (Git, Jenkins, Apache Subversion).
· AWS certifications or other related professional technical certifications.
· Experience with cloud or on-premises middleware and other enterprise integration technologies.
· Meets/exceeds Amazon's Leadership Principles requirements for this role.
· Meets/exceeds Amazon's functional/technical depth and complexity requirements for this role.

Amazon is committed to a diverse and inclusive workplace. Amazon is an equal opportunity employer and does not discriminate on the basis of race, national origin, gender, gender identity, sexual orientation, protected veteran status, disability, age, or other legally protected status. For individuals with disabilities who would like to request an accommodation, please visit: