Job Description
<h3>Description</h3>
• Design, develop, and maintain scalable ELT/ETL pipelines using Matillion and Snowflake to support diverse data integration and transformation needs across the company.
• Architect end-to-end data workflows that ensure high performance, reliability, and data integrity for both batch and near real-time use cases.
• Collaborate with cross-functional teams including Data Analysts, DevOps, and business stakeholders to gather requirements and deliver data solutions that drive value.
• Define and implement best practices for data modeling, metadata management, data lineage, and governance, utilizing the features of Snowflake and Matillion.
• Optimize data storage, retrieval, and computation to ensure efficient processing and cost control within our cloud infrastructure.
• Monitor, troubleshoot, and resolve issues related to data pipelines, performance bottlenecks, and data quality challenges.
<h3>Requirements</h3>
• 3+ years of continuous experience working with Matillion ELT/ETL for cloud data warehouses, including designing complex orchestration jobs, transformation components, and API integrations.
• Advanced knowledge of Snowflake, including schema design, security, performance tuning, streams and tasks, and cost optimization strategies.
• Bachelor's or Master's degree in Computer Science, Information Systems, Engineering, or a related field, or equivalent work experience.
• 5+ years of professional experience in data engineering, with at least 2 years in a senior role.
• Proven experience in architecting large-scale, distributed data systems and implementing data lakes, warehouses, and marts.
• Deep understanding of data modeling (dimensional, normalized, and denormalized), data governance, and data quality frameworks.
• It would be a plus if you also have previous experience in:
• Strong grasp of cloud platforms (AWS, Azure, or GCP) as they relate to data storage, processing, and security.
• Certification, or progress toward certification, in Matillion or Snowflake.
• Hands-on experience with data cataloging tools and metadata management frameworks.
• Exposure to machine learning workflows and MLOps in a data engineering context.
<h3>Benefits</h3>
• Flexible working hours
• Professional onboarding and training options
• A strong team looking forward to working with you
• Career coaching and development opportunities
• Health benefits
• 401(k)