Responsibilities
Create and maintain optimal data pipeline architecture.
Assemble large, complex data sets that meet functional and non-functional business requirements.
Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using open-source and AWS big data technologies.
Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics.
Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.
Work with data and analytics experts to strive for greater functionality in our data systems.
Qualifications
Experience building and optimizing big data pipelines, architectures and datasets.
Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
Experience interacting with customers and various stakeholders.
Strong analytical skills related to working with unstructured datasets.
Experience building processes that support data transformation, data structures, metadata, dependency management, and workload management.
Working knowledge of message queuing, stream processing, and highly scalable data lakes.
Strong project management and organizational skills.
Experience supporting and working with cross-functional teams in a dynamic environment.
Candidates should also have experience using the following software and tools:
Big data technologies: Hadoop, Spark, Kafka, etc.
Relational SQL and NoSQL databases, including Postgres and Cassandra.
Data pipeline and workflow management tools: Airflow, NiFi, etc.
Cloud services: AWS (EMR, RDS, Redshift, Glue), Azure (Databricks, Data Factory), and GCP (Dataproc, Pub/Sub).
Stream-processing systems: Storm, Spark Streaming, Flink, etc.