The DataOps Engineer will be responsible for building, automating, and optimizing scalable data pipelines and ensuring data quality, availability, and governance. This role requires a strong foundation in cloud ecosystems, particularly Azure and AWS, and experience in establishing CI/CD pipelines and real-time data processing systems. The ideal candidate will work closely with cross-functional teams to enhance data reliability and operational efficiency.
Experience:
- 6+ years of experience in data engineering, DataOps, or related roles.
- Proven experience with cloud ecosystems such as Azure (Azure Data Factory, Synapse, Databricks) and AWS (Glue, S3, Redshift, Lambda).
- Strong programming skills in Python and proficiency in SQL; experience with Scala or Java is a plus.
- Hands-on experience with CI/CD tools like Jenkins, Azure DevOps, or GitHub Actions.
- Familiarity with big data technologies like Apache Spark, Hadoop, and Kafka.
Required Skills:
- Expertise in building and automating data pipelines in cloud environments.
- Proficiency in data governance, lineage, and quality management frameworks.
- Experience with real-time data streaming platforms like Kafka or Azure Event Hubs.
- Deep understanding of Infrastructure as Code (IaC) tools like Terraform or CloudFormation.
- Strong debugging and performance optimization skills.
Soft Skills:
- Analytical mindset with excellent problem-solving capabilities.
- Strong communication skills to work effectively with cross-functional teams.
- Proactive, self-driven, and focused on continuous improvement.
- Ability to manage multiple tasks and meet deadlines in a fast-paced environment.
Education:
- Bachelor’s or Master’s degree in Computer Science, Data Science, Information Systems, or a related field.