At Amazon, we are committed to being the most customer-centric company on earth. The North American Supply Chain Organization (NASCO) comprises dynamic, high-powered teams that are shaping network execution through the development and application of innovative supply chain management technology. Our goal is to improve and enhance the Amazon fulfillment network to drive the best customer experience in a reliable and cost-efficient manner. Within NASCO, the North American Inbound Operations & Technology (IOT) team is looking for self-motivated, experienced, and highly curious individuals with data engineering and analytical skills to join our newly formed Inbound Visibility & Prediction Software Team.
We are looking for talented, passionate, startup-minded Data Engineers who have experience building innovative, mission-critical, high-volume, data-driven applications. You will have a tremendous opportunity to shape the design, architecture, and implementation of transformative technologies that revolutionize how the Amazon Inbound Network operates.
In this role, you will:
· Own system architecture and development for product initiatives and feature development.
· Learn to develop globalized web services at scale.
· Investigate, prototype, and deliver innovative system solutions.
· Work with diverse teams throughout Amazon to deliver mission-critical systems.
· Build the future and have fun doing it!
Basic qualifications:
· Bachelor's degree in computer science, data engineering, or a related field.
· 2+ years of relevant experience in dimensional data modeling, Extract-Transform-Load (ETL) development, and data warehousing.
· Data warehousing experience with Oracle, Redshift, Teradata, etc.
· Experience with relevant big data technologies (e.g., Hadoop, Hive, HBase, Pig, Spark).
· Strong fluency and hands-on experience in programming languages commonly used for data work (e.g., Scala, Python, Perl).
· Experience translating business questions into analytical questions, and using quantitative techniques to arrive at a solution with the available data.
· Experience processing, filtering, and presenting large quantities (millions to billions of rows) of data.
· Excellent written and verbal communication skills on quantitative topics.
Preferred qualifications:
· Industry experience as a data engineer or in a related specialty (e.g., software engineer, business intelligence engineer, data scientist), with a track record of manipulating, processing, and extracting value from large datasets.
· Experience building and operating highly available, distributed systems for extracting, ingesting, and processing large data sets, including data and host availability monitoring.
· Experience leading large-scale data warehousing and analytics projects, including using AWS technologies (e.g., Redshift, S3, EC2, Apollo, Data Pipeline, etc.) and other big data technologies.
· Experience automating host and storage management through Python or other programming languages.
· Excellent communication skills for working with business owners to develop and define key business questions and to build data sets that answer those questions.
· Familiarity with supply chain management concepts.