Data Engineering Intern

DeHaat

Updated on: 06 July 2025

Additional Details

Work Location

Gurgaon, India

Job Type

Internship + FTE

Batch

2026 | 2025

Stream Required

Bachelor's or Master's in Computer Science, Information Systems, Data Science

Salary

4–6 LPA (Expected)

Job Description

Overview: We’re looking for a highly motivated and technically proficient Data Engineering Intern to join our team. This internship offers hands-on, impactful experience that goes well beyond observational work. You will contribute directly to building and optimizing our data infrastructure, focusing on pipeline robustness, warehouse performance, and data reliability. Your work will enable our Data Science, Business Intelligence, and Product teams to make informed, data-driven decisions.

Key Responsibilities:

  • Pipeline Development & Maintenance: Build and maintain robust ETL/ELT pipelines to ingest data into our Amazon Redshift warehouse.
  • Database Management & Optimization: Handle schema design, data modeling, and SQL performance tuning on Redshift and Postgres.
  • Troubleshooting & Resolution: Investigate and resolve data integrity issues, pipeline failures, and performance bottlenecks.
  • Data Loading & Quality Assurance: Oversee data ingestion, implement quality checks, and ensure data accuracy and integrity.
  • BI & Analytics Enablement: Collaborate with analysts to structure data for seamless reporting in Tableau, Metabase, or similar BI tools.

Qualifications:

  • Currently pursuing a Bachelor's or Master's in Computer Science, Information Systems, Data Science, or a related quantitative field.
  • Strong foundational knowledge of computer science and algorithms.

Must-Have Technical Skills:

  • Amazon Redshift: Strong understanding of columnar storage, query optimization using EXPLAIN plans, and COPY-based data ingestion.
  • Advanced SQL: Ability to write and optimize complex queries using joins, CTEs, window functions, and aggregations.
  • Database Design: Practical knowledge of schema design and data modeling (e.g., normalization, star/snowflake schemas).
  • ETL/ELT Concepts: Sound grasp of designing pipelines for data movement from source to warehouse.
  • Scripting (Python preferred): Proficient in scripting for data manipulation, automation, or API interaction.

Preferred Skills:

  • Other AWS Services: Experience with S3, Glue, Lambda, or RDS.
  • Relational Databases: Exposure to PostgreSQL or MySQL.
  • NoSQL: Understanding of NoSQL systems such as DynamoDB or MongoDB.
  • BI Tools: Experience with Tableau, Metabase, Looker, or Power BI.
  • Data Science Curiosity: Interest in tools like Pandas or platforms like SageMaker.
  • Version Control: Experience with Git and collaborative coding workflows.

Disclaimer: The Job Company is an independent platform dedicated to providing information about job openings. We are not affiliated with, nor do we represent, any company, agency, or agent mentioned in the job listings. Please refer to our Terms of Service for further details.

Important: If an employer asks you to pay any kind of fee, please notify us immediately. The Job Company does not charge applicants any fee, and we do not post jobs from companies that ask candidates to pay.

Click the Apply Now button to apply for this role at DeHaat.