Emburse data engineers develop the data pipelines and systems in the central platform powering Emburse’s SaaS products. As a data engineer, you will build the pipelines that populate the data warehouse and data lakes, implement tenant data security, support the data science platforms and techniques, and integrate data solutions and APIs with Emburse products and analytics. The role sits within the Emburse Platform analytics team, a fast-moving, product-focused team responsible for delivering next-generation business intelligence and data science capabilities across the business. Emburse, known for its innovation and award-winning technologies, employs a modern stack including Snowflake, Databricks/Spark, AWS, and Looker. In this role you will have access to some of the best and brightest minds in our industry to grow your experience and career within Emburse.
What You'll Do
Develops code (e.g., Python), infrastructure, and tools for the extraction, transformation, and loading of data from a wide variety of data sources, using SQL, streaming, and related data lake and data warehouse technologies
Builds analytical tools to utilize, model, and visualize data
Assembles large, complex data sets to meet business needs, both for ad-hoc requests and as part of ongoing software engineering projects
Develops scripts to automate manual processes, address data quality, enable integrations, and monitor processes
Maintains a strong understanding of data security as it applies to multi-tenant environments, multiple regions, and financial industry data
Takes personal responsibility for quality and maintainability of the product and actively identifies areas for improvement
Identifies problems/risks of own work and others
Identifies viable alternative solutions and presents them
Follows SDLC processes, including adopting agile-based processes, peer code-reviews, and technical preparations required for scheduled releases and demos
Partners with product, analytics, and data science teams to drive requirements that take all parties' needs into account
Establishes monitoring, responds to alerts, and resolves issues within data pipelines
Develops sophisticated data-oriented software or systems with minimum supervision
On-boards and mentors less experienced team members
Makes complex contributions to technical documentation, knowledge bases, data dictionaries, and team/engineering presentations
Optimizes processes, fixes complex bugs, and demonstrates advanced debugging skills
Collaborates with product owners, software developers, data scientists, DevOps engineers, and analysts
Takes on expanded code review responsibilities
Performs advanced refactoring
What We're Looking For
Bachelor’s degree in Computer Science or a related field, or equivalent experience
Advanced working SQL knowledge and experience working with a variety of relational databases
Experience working with modern, scalable data lakes or data warehouses
Experience working with a modern data pipeline or data workflow management tool
Experience working in a product-oriented environment alongside software engineers and product managers
Experience with Python in a full SDLC/production deployment environment
Preferred: Experience with AWS services
Experience working with Snowflake
Experience working with Looker or an equivalent Business Intelligence suite
Experience working with Fivetran or an equivalent ETL/ELT suite
Experience with Databricks or an equivalent Spark-based suite
Financial industry experience preferred