Responsibilities:
Designing and implementing data pipelines to collect, clean, and transform data from various sources
Building and maintaining data storage and processing systems, such as databases, data warehouses, and data lakes
Ensuring data is properly secured and protected
Developing and implementing data governance policies and procedures
Collaborating with business analysts, data analysts, data scientists, and other stakeholders to understand their data needs and ensure they have access to the data they require
Sharing knowledge with the wider business, working with other BAs and technology teams to ensure that processes and ways of working are documented
Collaborating with Big Data Solution Architects to design, prototype, implement, and optimize data ingestion pipelines so that data is shared effectively across various business systems
Ensuring the design, code, and procedural aspects of the solution are production-ready in terms of operational, security, and compliance standards
Participating in day-to-day project and agile meetings, and providing technical support for faster resolution of issues
Communicating clearly and concisely to the business on the status of items and blockers
Maintaining end-to-end knowledge of the data landscape within the company
Skills & Experience:
10+ years of design and development experience with big data technologies such as Azure, AWS, or GCP
Azure and Databricks preferred, along with experience in Azure DevOps
Experience with data visualisation technology over the data lake (DL), such as Power BI
Proficient in Python, PySpark, and SQL
Proficient in querying and manipulating data across various databases (relational and big data)
Experience writing effective and maintainable unit and integration tests for ingestion pipelines
Experience using static analysis and code quality tools, and building CI/CD pipelines
Excellent communication, problem-solving, and leadership skills, with the ability to work well in a fast-paced, dynamic environment