Job Details

ID #54933054
State New Mexico
City Guadalajara
Job type Full-time
Salary USD TBD
Source KMS Technology
Showed 2025-12-08
Date 2025-12-08
Deadline 2026-02-06
Category Et cetera

Senior Data Engineer

New Mexico, Guadalajara 00000, USA

We are seeking a highly experienced Senior Data Engineer specializing in the Microsoft Fabric platform. This role leads the setup, design, and development of scalable, high-performance data transformation solutions across a unified Lakehouse architecture. The ideal candidate will excel at building dynamic ELT pipelines, optimizing workspace performance, and enforcing governance standards using Fabric Data Pipelines, Synapse SQL, and OneLake. You will play a key role in shaping foundational architecture, implementing best practices, and ensuring reliable, high-quality data operations across Fabric environments.

Responsibilities:

- Design and develop dynamic, parameterized, and reusable data pipelines within Microsoft Fabric for efficient data movement and control flow.
- Implement advanced data transformations by developing complex ELT logic using T-SQL within Synapse SQL endpoints and stored procedures.
- Lead the technical setup and foundational configuration of Fabric environments, including resource provisioning, security setup, and governance best practices.
- Define and enforce metadata management standards, including lineage, tagging, and discoverability requirements across Fabric.
- Architect OneLake ingestion and organization patterns, ensuring data is correctly structured (e.g., Delta format) for downstream Fabric workloads.
- Ensure workspace compatibility and performance optimization across both Dedicated Premium and Shared Fabric capacity environments.
- Translate complex business data models into scalable, resilient architectural solutions using Fabric components such as Lakehouses, Warehouses, and Notebooks.
- Optimize T-SQL logic and pipeline execution to reduce latency, improve performance, and manage compute resource costs.
- Implement robust monitoring, alerting, and error-handling frameworks to ensure operational reliability and data integrity for production pipelines.
- (Nice-to-have) Integrate PySpark/Spark notebooks into Fabric pipelines for advanced data processing or machine learning workloads.
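For candidates unfamiliar with the term, the "parameterized, reusable pipeline" pattern in the first responsibility can be sketched in plain Python. This is an illustration only, not part of the posting or a real Fabric API: the names `PipelineConfig` and `build_copy_activity` are hypothetical, standing in for one pipeline template instantiated per source table rather than a separate hand-built pipeline for each.

```python
from dataclasses import dataclass

# Hypothetical sketch of a parameter-driven pipeline definition.
# One reusable template is instantiated once per source table.

@dataclass
class PipelineConfig:
    source_table: str        # table to ingest
    destination_path: str    # OneLake folder for the Delta output
    incremental_column: str  # watermark column for incremental loads

def build_copy_activity(cfg: PipelineConfig) -> dict:
    """Render one copy-activity definition from a config object."""
    return {
        "name": f"copy_{cfg.source_table}",
        "source": {
            "table": cfg.source_table,
            # watermark filter makes the load incremental, not full
            "filter": f"{cfg.incremental_column} > @last_watermark",
        },
        "sink": {"path": cfg.destination_path, "format": "delta"},
    }

# One template, many tables: the essence of a reusable pipeline.
configs = [
    PipelineConfig("sales", "Files/sales", "modified_at"),
    PipelineConfig("customers", "Files/customers", "updated_at"),
]
activities = [build_copy_activity(c) for c in configs]
```

Adding a new source then means appending one `PipelineConfig` entry instead of authoring and maintaining another pipeline by hand.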

 
