Description:
Data Engineer - Lakehouse (PySpark / Databricks / Airflow / AWS)

Role Overview
Responsible for designing, developing, and maintaining data pipelines into a Lakehouse architecture to support analytics, reporting, and regulatory needs. Works extensively with PySpark on Databricks, orchestrates workflows with Apache Airflow, and leverages AWS cloud services for storage, compute, and security.

Key Responsibilities
- Design, build, and optimize ETL/ELT pipelines using PySpark on Databricks (see the sketch below).
- Manage Apache Airflow workflows.
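As a rough illustration of the pattern these responsibilities describe, here is a minimal PySpark sketch of an ETL step landing raw S3 data into a Delta table, followed by an Airflow DAG that could orchestrate it. All paths, table names, columns, connection IDs, and job IDs are hypothetical assumptions for illustration, not details from the posting.

```python
# Minimal PySpark ETL sketch: raw JSON on S3 -> cleaned Delta table.
# Bucket, paths, and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Extract: read raw JSON landed in an S3 bucket.
raw = spark.read.json("s3://example-bucket/raw/orders/")

# Transform: deduplicate, fix types, derive a partition column, drop bad rows.
orders = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("order_date", F.to_date("order_ts"))
       .filter(F.col("order_total") >= 0)
)

# Load: append to a date-partitioned Delta table in the Lakehouse layer.
(orders.write.format("delta")
       .mode("append")
       .partitionBy("order_date")
       .save("s3://example-bucket/lakehouse/orders/"))
```

Orchestration of a job like this with Airflow might look as follows, assuming the Databricks provider package is installed and a Databricks job already wraps the ETL notebook or script:

```python
# Minimal Airflow DAG sketch triggering an existing Databricks job daily.
# The connection ID and job ID are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.providers.databricks.operators.databricks import DatabricksRunNowOperator

with DAG(
    dag_id="orders_etl_daily",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",  # Airflow 2.4+ style; older versions use schedule_interval
    catchup=False,
) as dag:
    run_etl = DatabricksRunNowOperator(
        task_id="run_orders_etl",
        databricks_conn_id="databricks_default",
        job_id=12345,  # hypothetical Databricks job ID
    )
```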
Sep 23, 2025
from: dice.com