About the job
- Design and implement robust, scalable data pipelines and processing solutions on Azure and Microsoft Fabric.
- Enhance and optimize ETL/ELT workflows for large-scale datasets.
- Work with Lakehouse architectures following the Medallion pattern (Bronze, Silver, Gold layers) in Fabric or Azure Databricks.
- Integrate diverse data sources via Azure Data Factory and Fabric Data Pipelines.
- Construct and maintain data models and Lakehouse structures within OneLake.
- Ensure data quality, reliability, and performance across the entire platform.
- Tune Spark workloads and data transformations using PySpark and SQL.
- Collaborate with Data Architects, BI teams, and Data Scientists to enable insightful analytics and reporting.
- Contribute actively to DataOps methodologies, CI/CD pipelines, and automation processes.
- Support data governance and adhere to security best practices throughout the platform.

