At Dynatrace, Information Systems Engineering manages data and transforms it into information for decision-makers. This includes the assessment, design, acquisition, and/or implementation of tools, data stores, and pipelines for turning data into information.
We are seeking a Lead Data Engineer to provide technical direction for, and hands-on contributions alongside, a small team of data engineers supporting our Business Intelligence function.
A core part of the role will be directing, and helping to implement, pipelines that transform business data into our Snowflake environment. The ideal candidate will have demonstrable skill with Snowflake, Snowpark, and Spark using Python. We are interested in candidates who can demonstrate technical leadership of small teams of data engineers, including mentoring and upskilling more junior team members.
Key responsibilities:
Lead the design, implementation, and maintenance of scalable data pipelines in the Snowflake ecosystem, including third-party platforms and tools such as AWS and Fivetran
Serve as a key contributor to the Data Engineering strategy, ensuring efficient data management for operations and enterprise analytics
Act as the key technical expert for stakeholder engagement on business data initiatives
Collaborate with colleagues on the Data Modeling, BI, and Data Governance teams on platform initiatives
Serve as the primary technical interface to data engineering vendors
Ensure data engineering standards align with industry best practices for data governance, data quality, and data security
Evaluate and recommend new data technologies and tools to improve data engineering processes and outcomes
Qualifications:
Significant experience in a hands-on data engineering role, especially in relation to business operations data
Bachelor’s degree in Computer Science, Information Systems or related field, or equivalent experience
Experience managing stakeholder engagement, collaborating across teams, and working on multiple simultaneous projects
Hands-on experience implementing robust, scalable data pipelines
Extensive experience acquiring data from REST APIs
Strong background in Python/Spark programming, with the ability to write efficient, maintainable, and scalable data pipeline code
Solid understanding of data warehousing, data lakes, MPP data platforms, and data processing frameworks
Strong understanding of database technologies, including SQL and NoSQL databases
Experience with CI/CD pipelines and DevOps practices for data engineering
Excellent problem-solving and analytical skills
Snowflake certification or other relevant data engineering certification is a plus