At Dynatrace, Information Systems Engineering manages and transforms data into information for decision-makers. This includes the assessment, design, acquisition, and/or implementation of tools, stores, and pipelines for turning data into information.
We are seeking a Lead Data Engineer to provide key technical direction for, and hands-on contribution to, a small team of data engineers supporting our Business Intelligence function.
A core part of the role will be directing and helping to implement pipelines that transform business data into our Snowflake environment. The ideal candidate will have experience and demonstrable skill with Snowflake, and with Snowpark and Spark using Python. We are interested in candidates who can demonstrate technical leadership of small teams of data engineers, including mentoring and upskilling more junior team members.
Key responsibilities:
Lead the design, implementation, and maintenance of scalable data pipelines in the Snowflake ecosystem, including third-party vendor tools such as AWS and Fivetran
Act as a key contributor to the Data Engineering strategy, ensuring efficient data management for operations and enterprise analytics
Serve as the key technical expert for business stakeholder engagement on business data initiatives
Collaborate with colleagues on the Data Modeling, BI, and Data Governance teams on platform initiatives
Serve as the technical interface to data engineering vendors
Ensure data engineering standards align with industry best practices for data governance, data quality, and data security
Evaluate and recommend new data technologies and tools to improve data engineering processes and outcomes
What will help you succeed
Qualifications:
Significant experience in a hands-on data engineering role, especially in relation to business operations data
Bachelor’s degree in Computer Science, Information Systems, or a related field, or equivalent experience
Experience managing stakeholder engagement, collaborating across teams, and working on multiple simultaneous projects
Hands-on experience implementing robust, scalable data pipelines
Extensive experience acquiring data from REST APIs
Strong background in Python/Spark programming, with the ability to write efficient, maintainable, and scalable data pipeline code
Solid understanding of data warehousing, data lakes, MPP data platforms, and data processing frameworks
Strong understanding of database technologies, including SQL and NoSQL databases
Experience with CI/CD pipelines and DevOps practices for data engineering
Excellent problem-solving and analytical skills
Snowflake certification or other relevant data engineering certification is a plus
Dynatrace believes that potential is defined by more than qualifications or background. If you're passionate about this role, excited to work in a tech environment, and eager to learn, we invite you to apply.
Why you will love being a Dynatracer
A one-product software company creating real value for the largest enterprises and millions of end customers globally, striving for a world where software works perfectly.
Working with the latest technologies at the forefront of innovation in tech at scale, but also in other areas like marketing, design, and research.
A team that thinks outside the box, welcomes unconventional ideas, and pushes boundaries.
An environment that fosters innovation, enables creative collaboration, and allows you to grow.
A globally unique and tailor-made career development program recognizing your potential, promoting your strengths, and supporting you in achieving your career goals.
A truly international mindset, with Dynatracers from different countries and cultures all over the world, and English as the corporate language that connects us all.
A culture that is being shaped by the diverse personalities, expertise, and backgrounds of our global team.