Senior Data Engineer (Python, AWS, Spark, Big Data)
Location: Remote
Duration: 4 months with possible extension

Enterprise Data Machine Learning (EDML) employs innovative minds like yours to design and develop software systems that can meet the demands of our ever-growing customer base. Like a startup inside an enterprise, EDML takes a customer-centric approach to building its product, enabling data-driven conversations with our customers.

As one of our Business Intelligence Data Engineers, you’ll work closely with customers, product management, and other subject matter experts across the technology industry to drive solutions with an immediate impact on how data scientists and machine learning engineers productionize their models. You’ll do this by iteratively improving how we instrument our software systems for observability, which in turn helps enable infrastructure automation, product analysis, and application performance monitoring.

Our team culture lets you work in a highly collaborative group of professionals in a horizontal job role. Strong written and verbal communication skills are essential so that instrumentation, practices, processes, and tools are documented and released in a way that helps others understand the stability and functionality of each application. In this role, you’ll work directly with application performance and observability team members to help shape the direction of our software products at the highest level by visualizing key product performance indicators and making them transparent, enabling better business intelligence. To do this, you’ll need to leverage your depth of knowledge and expertise in data engineering, interactive dashboards, and the software development lifecycle, along with previous experience creating workflow-related software products.

What You’ll Do
● Collaborate with and across Agile teams to design, develop, test, implement, and support technical solutions using full-stack development tools and technologies
● Partner with business teams and communicate with product managers to automate business processes, build visualizations of key performance indicators, and provide real-time business insights that improve business performance
● Perform ad hoc analysis on data and provide recommendations on ways to refine existing data models and improve both management and governance of data in cooperation with backend teams
● Utilize relational databases such as PostgreSQL, NoSQL databases, and cloud-based database and data warehousing services such as RDS, Redshift, and Snowflake
● Write code in languages like Python with high-quality unit tests, and conduct reviews with other team members to make sure your code is rigorously designed, elegantly coded, and effectively tuned for performance (a brief sketch of this style of work follows this list)
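
By way of illustration, here is a minimal sketch of that kind of work: a small pandas aggregation paired with a pytest-style unit test. The function, column names, and KPI logic are hypothetical examples, not taken from EDML's codebase.

```python
# Hypothetical sketch only: a small pandas transformation with a unit test.
# Column names and the KPI logic are illustrative assumptions.
import pandas as pd


def daily_error_rate(events: pd.DataFrame) -> pd.DataFrame:
    """Aggregate raw request events into a per-day error-rate KPI."""
    grouped = events.groupby("date").agg(
        total=("status", "size"),
        errors=("status", lambda s: (s >= 500).sum()),
    )
    grouped["error_rate"] = grouped["errors"] / grouped["total"]
    return grouped.reset_index()


def test_daily_error_rate():
    # Four requests on one day, two of which are server errors.
    events = pd.DataFrame(
        {"date": ["2024-01-01"] * 4, "status": [200, 200, 500, 503]}
    )
    result = daily_error_rate(events)
    assert result.loc[0, "error_rate"] == 0.5
```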

Basic Qualifications
● Data Engineering Experience
○ Previous project experience with big data technologies
○ Knowledge of Agile engineering practices
○ PostgreSQL 11.3 or later
● Application Development Experience
○ UNIX/Linux including basic commands and shell scripting
○ Strong programming fundamentals using Python 3
○ Working knowledge of Python libraries like pandas, NumPy, SQLAlchemy, and Dask
● AWS Cloud Experience
○ Experience with visualization tools like Tableau and Amazon QuickSight
○ Data warehousing experience (Redshift or Snowflake); see the sketch after this list
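
For context on how these pieces typically fit together, here is a minimal sketch of pulling a dashboard-ready extract from a warehouse with SQLAlchemy and pandas. The connection URL, schema, table, and column names are illustrative assumptions, and Redshift is reached here through its PostgreSQL-compatible endpoint.

```python
# Hypothetical sketch only: pulling a KPI extract from a warehouse with SQLAlchemy + pandas.
# The connection URL, table, and column names below are illustrative assumptions.
import pandas as pd
from sqlalchemy import create_engine

# Redshift exposes a PostgreSQL-compatible endpoint, so the psycopg2 driver works here.
engine = create_engine("postgresql+psycopg2://user:password@warehouse-host:5439/analytics")

query = """
    SELECT order_date, region, SUM(revenue) AS revenue
    FROM sales.daily_orders
    GROUP BY order_date, region
    ORDER BY order_date
"""

# Load the aggregate into a DataFrame ready for a Tableau/QuickSight extract or further analysis.
kpi = pd.read_sql(query, engine)
print(kpi.head())
```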

Preferred Qualifications
● Certifications
○ AWS Certified Developer - Associate
○ AWS Certified Big Data - Specialty
● Prior Projects
○ Hands-on experience building end-to-end data pipelines for operational dashboards
○ Hands-on experience building interactive dashboards for operational reporting and decision making