Role Description:
We are seeking an experienced Data Engineering Specialist to join our dynamic team. The ideal candidate should have a strong background in Python and Spark, with a minimum of 3 years of relevant experience. The primary focus of this role is to design, develop, and maintain robust data pipelines and data integration solutions. The candidate should have a solid understanding of big data fundamentals, including Hive, Hadoop, and related technologies. Experience in agile delivery and working with cloud-based platforms is highly desirable, and knowledge of the insurance domain and data modeling is a definite plus.
Responsibilities:
- Design, develop, and maintain scalable data pipelines and data integration solutions.
- Collaborate with cross-functional teams to gather and analyze data requirements.
- Extract, transform, and load (ETL) data from various sources into the data warehouse or data lake.
- Develop and optimize data processing jobs using Python and Spark.
- Implement and maintain data governance and data quality standards.
- Perform data profiling and analysis to identify data quality issues and provide solutions.
- Work closely with stakeholders to understand business needs and translate them into technical requirements.
- Collaborate with data scientists and analysts to support their data needs and ensure data availability.
- Monitor and optimize the performance of data processing workflows.
Requirements:
- Bachelor’s degree in Computer Science, Engineering, or a related field.
- Minimum of 3 years of experience in data engineering, with a focus on Python, SQL, and Spark.
- Experience in agile software development methodologies and delivering data engineering projects in an agile environment.
- Familiarity with cloud-based platforms (e.g., AWS, Azure, Google Cloud) and hands-on experience in deploying data solutions in the cloud is a plus.
- Understanding of data modeling concepts and experience working with relational databases and data warehouses.