
Senior Data Engineer

Spark
🇺🇸 United States – Remote
Full-time
$145K–$177K (estimated)
Remote

Required Skills

⏰ Full Time
🟠 Senior
🚰 Data Engineer
🦅 H1B Visa Sponsor
Airflow
AWS
Cloud
Docker
ETL
Google Cloud Platform
Kubernetes
Python
Spark
SQL
JavaScript
Kafka
Node.js
React
TypeScript
Go
Amazon Redshift
Apache
Azure
BigQuery
Cassandra
NoSQL
NumPy
Pandas
scikit-learn
Tableau
R
Scala
Data Visualization
Excel
Analytics
Communication
Cross-functional Collaboration

Job Description

📋 Description

• Spark is seeking a Senior Data Engineer to join our data team.
• You can be a key part of the change by helping Spark build robust data infrastructure that powers our platforms and insights.
• In this role, you will be responsible for architecting, implementing, and maintaining the core data infrastructure that enables our business operations and analytics capabilities.
• As a Senior Data Engineer, you'll focus on building scalable backend systems, implementing modern data tools, and improving the data pipelines that serve as the foundation of our data ecosystem.
• You'll work closely with product, engineering, and the broader data team to ensure our data infrastructure meets both current needs and future growth.
• Build and maintain critical data infrastructure components, including (but not limited to) SFTP servers, cloud-based data warehouses, transformation and orchestration tools, and data visualization tools.
• Establish monitoring, alerting, and logging systems to ensure data pipeline reliability and performance.
• Develop and enforce data governance policies, including data quality checks and access controls.
• Build and maintain data pipelines in dbt alongside analytics engineers.
• Collaborate with analysts and analytics engineers to understand their data requirements and build appropriate backend solutions.
• Mentor and upskill junior engineers, fostering a collaborative and growth-oriented environment.
• Design, implement, and optimize scalable data pipelines using modern ETL/ELT frameworks (Airflow, GCP tools).
🎯 Requirements

• 5+ years of experience in data engineering roles with a focus on infrastructure and backend systems
• Strong Python programming skills for building data processing applications and automation
• Experience designing, implementing, and troubleshooting workflows using orchestration tools like Airflow or Prefect, including custom operators and scheduling optimization
• Advanced proficiency in SQL and experience implementing dbt/Dataform in production environments
• Proven experience designing and implementing cloud-based data infrastructure on GCP/AWS
• Experience setting up secure data transfer protocols (SFTP, API integrations)
• Strong understanding of version control, CI/CD pipelines, and infrastructure-as-code principles
• Excellent problem-solving skills and the ability to debug complex data pipeline issues
• Strong communication skills for cross-functional collaboration and technical documentation

🏖️ Benefits

• Equity compensation
• Health care, including dental and vision, through our PEO, Sequoia
• Flexible work location; co-working available
• 401(k)
• Paid time off
• Monthly remote-work stipend (helps cover home-office costs)
• Paid parental leave: up to 12 weeks for birthing parents, up to 8 weeks for non-birth parents
• 11 paid holidays
• 2-week sabbatical at 5 years of employment
• Wellbeing perks through SpringHealth, OneMedical, PerkSpot, and SoFi

Job Details

Employment Type

Full-time

Salary Range

$145K–$177K (estimated)

Location

🇺🇸 United States – Remote

Remote Work

Remote Friendly