smartphones, tablets, and other IoT-related devices. We are continually innovating so that our customers can deploy advanced and effective tools to protect their communities, countries and companies.
Headquartered in Waterloo, ON, with over 400 employees spread globally, Magnet is continuing to expand and grow. Where we are today is not where we want to be tomorrow.
The Data Engineer will build the data pipelines needed to support analytics for the sales, service, and marketing teams to achieve the corporate revenue and contribution objectives. This individual will use strong software development and DevOps skills to ensure the pipelines and analytics platform are optimized and efficient. Their ability to identify inefficiencies will drive continuous improvement and innovation in our processes and operations.
What You Will Accomplish
Develop and maintain data pipelines from various data sources to the Azure data lake
Maintain analytics platform uptime and stability of pipelines and integration points
Identify areas of improvement in the data pipelines and provide solutions to optimize them
Assist in product/tool discovery and assessment
Assist in process development pertaining to business engagement with analytics platform
Ensure governance is followed within the analytics platform
Maintain design documentation across all components of the platform
Ensure all code developed is maintained in GitHub
Ensure development follows DataOps methodology
Ensure infrastructure deployment follows Infrastructure as Code best practices
Own the PR review process for peer and third-party code before production deployment
Develop required transformation models in DBT
Support production pipelines and analytics infrastructure
Deploy new domains into Snowflake
Develop and maintain a Quick Start deployment guide covering Azure, Python, Terraform, DBT, and Snowflake
Maintain and monitor data pipelines proactively to ensure high service availability
Ensure all data is tagged and catalogued in the data lake and data warehouse
Develop data quality checks in pipelines so that high-quality data is loaded to the data warehouse
What We Are Looking For
We’re looking for someone who checks off most, but not necessarily all, of the boxes listed in “skills and experiences”. It’s more important to us to find candidates who show indicators of success through the skills they have developed and the experiences they have been part of than to find folks who have “been there, done that”. We want to be part of your development journey, and we’ll learn as much from you as you learn from us.
There are a couple of must-haves, but we will keep that list short:
3–5 years with both traditional and modern data architectures, including ETL, data migration and processing, relational databases, and cloud data warehousing
3+ years of experience in IT platform implementation in a highly technical and analytical role
Proficiency in data modeling with the Kimball methodology
Experience with Infrastructure as Code (Terraform)
Proficient in Python and SQL
Deep knowledge of Azure architecture / services
ETL/ELT experience (Talend, Fivetran, Azure Data Factory, Glue, Lambda, DBT)
Experience with Cloud Data Warehouse technologies (Redshift, Snowflake, BigQuery)
Proficient in CI/CD processes and source control tools such as GitHub
Experience with JIRA and Confluence a plus