Key responsibilities include:
Architect, build, and optimize scalable, cloud-agnostic data solutions using Azure, Databricks, Spark, Lakehouse architectures, and Delta Lake tables.
Develop, implement, and maintain big data pipelines for ingesting, processing, and storing large volumes of structured and unstructured data.
Manage and optimize data lake and data warehouse architectures for performance, cost, and scalability.
Work within Azure environments (Azure Synapse, Data Factory, ADLS, etc.) to develop and maintain cloud-based data solutions.
Implement DevOps best practices for CI/CD pipelines, infrastructure as code, and automation.
Utilize Spark, Databricks, and distributed computing to process and analyze large datasets efficiently.
Write advanced Python and T-SQL scripts for data transformations, ETL/ELT processes, and real-time data processing.