Job Description
<h3>Description</h3>
• Spark is seeking a Senior Data Engineer to join our data team.
• You can be a key part of this work by helping Spark build the robust data infrastructure that powers our platforms and insights.
• In this role, you will be responsible for architecting, implementing, and maintaining the core data infrastructure that enables our business operations and analytics capabilities.
• As a Senior Data Engineer, you'll focus on building scalable backend systems, implementing modern data tools, and improving the data pipelines that form the foundation of our data ecosystem.
• You'll work closely with product, engineering, and the broader data team to ensure our data infrastructure meets current needs and supports future growth.
• Build and maintain critical data infrastructure components, including (but not limited to) SFTP servers, cloud-based data warehouses, transformation and orchestration tools, and data visualization tools.
• Establish monitoring, alerting, and logging systems to ensure data pipeline reliability and performance.
• Develop and enforce data governance policies, including data quality checks and access controls.
• Build and maintain data pipelines in dbt alongside analytics engineers.
• Collaborate with analysts and analytics engineers to understand their data requirements and build appropriate backend solutions.
• Mentor and upskill junior engineers, fostering a collaborative and growth-oriented environment.
• Design, implement, and optimize scalable data pipelines using modern ETL/ELT frameworks such as Airflow and GCP-native tooling (see the sketch after this list).
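To make the orchestration work above concrete, here is a minimal sketch of a daily ELT pipeline using Airflow's TaskFlow API (Airflow 2.x). The DAG name, task bodies, and table name are illustrative assumptions, not part of this posting.

```python
import pendulum
from airflow.decorators import dag, task

@dag(
    schedule="@daily",  # Airflow 2.4+; older versions use schedule_interval
    start_date=pendulum.datetime(2024, 1, 1, tz="UTC"),
    catchup=False,
)
def daily_events_elt():
    @task
    def extract() -> list[dict]:
        # Stand-in for pulling raw records from a source system
        # (an API, an SFTP drop, etc.).
        return [{"event_id": 1, "amount": 42.0}, {"event_id": 2, "amount": None}]

    @task
    def transform(records: list[dict]) -> list[dict]:
        # Light in-pipeline cleanup; heavier modeling would live in dbt.
        return [r for r in records if r["amount"] is not None]

    @task
    def load(records: list[dict]) -> None:
        # Stand-in for loading into a warehouse table (name is hypothetical).
        print(f"would load {len(records)} rows into analytics.raw_events")

    load(transform(extract()))

daily_events_elt()
```

Chaining the decorated tasks this way lets Airflow infer the extract → transform → load dependencies without explicit downstream declarations.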
<h3>Requirements</h3>
• 5+ years of experience in data engineering roles, with a focus on infrastructure and backend systems
• Strong Python programming skills for building data processing applications and automation
• Experience designing, implementing, and troubleshooting workflows using orchestration tools like Airflow or Prefect, including custom operators and scheduling optimization
• Advanced proficiency in SQL and experience implementing dbt/Dataform in production environments
• Proven experience designing and implementing cloud-based data infrastructure on GCP or AWS
• Experience setting up secure data transfer protocols (SFTP, API integrations); a short sketch follows this list
• Strong understanding of version control, CI/CD pipelines, and infrastructure-as-code principles
• Excellent problem-solving skills and the ability to debug complex data pipeline issues
• Strong communication skills for cross-functional collaboration and technical documentation
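As a companion to the secure data transfer requirement above, here is a minimal sketch of a key-authenticated SFTP pull using paramiko. The host, username, key path, and file paths are hypothetical placeholders.

```python
import paramiko

def fetch_partner_file(host: str, username: str, key_path: str,
                       remote_path: str, local_path: str) -> None:
    """Download one file over SFTP using key-based authentication."""
    client = paramiko.SSHClient()
    # In production you'd pin the server's host key in known_hosts
    # instead of auto-adding it.
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(hostname=host, username=username, key_filename=key_path)
    try:
        sftp = client.open_sftp()
        sftp.get(remote_path, local_path)
        sftp.close()
    finally:
        client.close()

# Example call with hypothetical values:
# fetch_partner_file("sftp.partner.example", "spark-etl",
#                    "/home/etl/.ssh/id_ed25519",
#                    "/outbound/events.csv", "/tmp/events.csv")
```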
<h3>Benefits</h3>
• Equity compensation
• Health care, including dental and vision, through our PEO, Sequoia
• Flexible work location; co-working space available
• 401k
• Paid Time Off
• Monthly Remote Work Stipend (to help cover home-office costs)
• Paid Parental Leave
  • Up to 12 weeks for birthing parents
  • Up to 8 weeks for non-birth parents
• 11 paid holidays
• 2-week sabbatical at 5 years of employment
• Wellbeing Perks through SpringHealth, OneMedical, PerkSpot, and SoFi