It's a 6-month contract role, extendable further at the client's discretion.
A minimum of 2 years of experience is required. Budget: 10k to 12k AED, plus visa, medical insurance, and work permit. Please let me know if you would be interested in the role or have any friends looking for a job.
What You'll Do:
Design, develop, and maintain data pipelines using PySpark and Azure Data Factory (ADF) for ingestion, transformation, and loading of data into the data warehouse.
Implement data governance frameworks and ensure data quality, security, and compliance with industry standards and regulations.
Develop complex SQL queries and manage relational databases to ensure data accuracy and performance.
Establish and maintain data lineage tracking within the data fabric to ensure transparency and traceability of data flows.
Implement ETL processes to ensure the integrity and quality of data.
Optimize data pipelines for performance, scalability, and reliability.
Develop data transformation processes and algorithms to standardize, cleanse, and enrich data for analysis. Apply data quality checks and validation rules to ensure the accuracy and reliability of data.
Mentor junior team members, review code, and drive best practices in data engineering methodologies.
Collaborate with cross-functional teams, including data scientists, business analysts, and software engineers, to understand data requirements and deliver solutions that meet business objectives. Work closely with stakeholders to prioritize and execute data initiatives.
Maintain comprehensive documentation of data infrastructure designs, ETL processes, and data lineage. Ensure compliance with data governance policies, security standards, and regulatory requirements.
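The pipeline duties above (ingest, transform, apply quality checks, load) can be sketched in miniature. This is an illustrative toy, not the client's stack: it uses Python's standard library and an in-memory SQLite table in place of PySpark/ADF and a real warehouse, and the `sales` schema, sample records, and validation rule are all assumptions.

```python
import sqlite3

# Hypothetical raw records as they might arrive from an ingestion source.
raw_rows = [
    {"id": "1", "name": "  Alice ", "amount": "120.50"},
    {"id": "2", "name": "Bob", "amount": "not-a-number"},  # malformed: fails cleansing
    {"id": "3", "name": "carol", "amount": "75.00"},
]

def transform(row):
    """Standardize and cleanse a single record (trim, case-normalize, cast types)."""
    return {
        "id": int(row["id"]),
        "name": row["name"].strip().title(),
        "amount": float(row["amount"]),
    }

def is_valid(row):
    """Data-quality rule (assumed): amount must be non-negative."""
    return row["amount"] >= 0

def run_pipeline(rows):
    """Extract -> transform -> validate -> load into an in-memory 'warehouse'."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE sales (id INTEGER PRIMARY KEY, name TEXT, amount REAL)")
    loaded, rejected = 0, 0
    for raw in rows:
        try:
            clean = transform(raw)
        except (KeyError, ValueError):
            rejected += 1  # quarantine records that fail cleansing
            continue
        if not is_valid(clean):
            rejected += 1  # quarantine records that fail the quality rule
            continue
        conn.execute(
            "INSERT INTO sales VALUES (?, ?, ?)",
            (clean["id"], clean["name"], clean["amount"]),
        )
        loaded += 1
    conn.commit()
    return conn, loaded, rejected

conn, loaded, rejected = run_pipeline(raw_rows)
print(loaded, rejected)  # 2 records loaded, 1 rejected
```

In a PySpark/ADF deployment the same shape applies: transformations become DataFrame operations, quality rules become filters with a quarantine path, and the load step writes to the warehouse instead of SQLite.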
Qualifications:
What You'll Bring:
Strong proficiency in SQL and at least one programming language (e.g., Python) for data manipulation and scripting.
Strong experience with PySpark, ADF, Databricks, and SQL.
Preferred: experience with MS Fabric.
Proficiency in data warehousing concepts and methodologies.
Strong knowledge of Azure Synapse and Azure Databricks.
Hands-on experience with data warehouse platforms (e.g., Snowflake, Redshift, BigQuery) and ETL tools (e.g., Informatica, Talend, Apache Spark).
Deep understanding of data modeling principles, data integration techniques, and data governance best practices.
Preferred: experience with Power BI or other data visualization tools to develop dashboards and reports.
Remote Work:
Yes
Employment Type:
Full-time