Responsibilities:
Design, develop, and maintain scalable data pipelines using modern data engineering tools and technologies.
Extract, transform, and load (ETL) data from various sources (e.g., databases, APIs, cloud storage) into data warehouses or data lakes.
Gather data requirements from business teams and ensure data quality and consistency.
Optimize data pipelines for performance and efficiency.
Optimize data infrastructure for ML model training and deployment.
Explore and experiment with new data engineering techniques and technologies.
Qualifications:
Bachelor's degree in Computer Science, Data Science, or a related field.
1-2 years of hands-on experience in data engineering or a similar role.
Strong proficiency in Python and common data engineering tools (e.g., Apache Spark, Hadoop, Airflow).
Knowledge of SQL and experience working with relational and NoSQL databases.
Understanding of data warehousing and data lake concepts.
Familiarity with AI/ML and LLM workflows and data requirements.
Excellent problem-solving, analytical, and communication skills.
Passion for data engineering and a desire to contribute to AI/ML projects.
Experience with cloud platforms (e.g., AWS, GCP, Azure) and cloud-based data engineering services is a big plus.
Knowledge of data visualization tools (e.g., Tableau, Power BI).
Experience with machine learning frameworks (e.g., TensorFlow, PyTorch).
Familiarity with LLM architectures and applications.
Experience: 0 - 2 Years
No. of Openings: 1
Education: B.Sc, B.E, B.Tech, M.C.A, M.Sc
Role: Data Engineer
Industry Type: IT-Hardware & Networking / IT-Software / Software Services
Gender: [ Male / Female ]
Job Country: India
Type of Job: Full Time
Work Location Type: Work from Home