Data Engineer – Cloud Data Pipelines

Overview:

We’re seeking a Data Engineer with expertise in building robust, scalable cloud-based data pipelines. You’ll play a key role in transforming raw data into meaningful insights across the organization.

Responsibilities:

  • Design and implement cloud-native data pipelines using Spark, Kafka, and cloud services
  • Build ETL workflows to ingest and process structured and unstructured data
  • Work closely with Data Scientists and Analysts to enable self-serve analytics
  • Optimize performance, reliability, and scalability of big data platforms

Requirements:

  • BS/MS in Computer Science, Engineering, or a related field
  • 3+ years of experience with Python, SQL, and distributed systems
  • Hands-on experience with AWS (Glue, Redshift, S3), GCP, or Azure
  • Proficiency in Apache Spark, Airflow, and Kafka
  • Experience with data modeling, data warehousing, and pipeline orchestration

Job Category: Data Engineer
Job Type: Full Time
Job Location: San Francisco

Apply for this position
