




**Main Duties and Responsibilities**

* Develop and maintain robust, efficient, and scalable ETL processes for data extraction, transformation, and loading tasks.
* Design, implement, and optimize data models to meet business requirements and ensure data integrity and accuracy.
* Collaborate with cross-functional teams, including data scientists, software engineers, and business analysts, to understand data needs and provide data-related solutions.
* Perform data analysis to identify and resolve data quality issues, inconsistencies, and performance bottlenecks.
* Build and maintain data pipeline architecture to enable data ingestion from multiple sources into data warehouses or data lakes.
* Work closely with stakeholders to understand business objectives, identify data-related opportunities, and deliver actionable insights.
* Develop and maintain documentation related to data processes, data models, and system architecture.
* Collaborate with the Data Governance team to ensure compliance with data privacy regulations and implement appropriate security measures.

**Desired Qualifications**

* Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
* Proven work experience (3 years) as a Data Engineer or in a similar role.
* Strong knowledge and hands-on experience with ETL processes, data modeling, and data warehousing concepts.
* Proficiency in SQL, with the ability to write complex queries and optimize their performance.
* Experience with the Python programming language and its associated data libraries (e.g., Pandas, NumPy) for data manipulation and analysis.
* In-depth knowledge of relational database systems (e.g., PostgreSQL, MySQL) and experience with query optimization techniques.
* Familiarity with the Azure cloud platform and experience with its data services (e.g., Azure Data Factory).


