Data Engineer (SQL & Snowflake)
Location: Bangalore
Employment Type: Full-Time
Experience Level: Mid-Senior
Reports To: Team Lead
Job Description:
We are looking for a skilled Data Engineer with expertise in SQL and Snowflake to join our data team. The ideal candidate has 5+ years of experience with data pipelines, ETL processes, and cloud-based data warehousing. You will build and optimize data pipelines, ensure data accuracy, and work with cross-functional teams to support data-driven initiatives. You should be proficient in designing data models and have a deep understanding of data engineering best practices.
Key Responsibilities:
- Design and develop efficient data pipelines using SQL and Snowflake for data ingestion, transformation, and storage.
- Build and maintain ETL processes to move large datasets from various sources into the data warehouse.
- Work closely with data analysts and business stakeholders to understand data requirements and deliver appropriate solutions.
- Write complex SQL queries and scripts for data analysis and reporting.
- Optimize Snowflake performance through clustering keys, micro-partition pruning, and query tuning (see the sketch after this list).
- Ensure data accuracy and consistency by implementing robust validation and testing procedures.
- Collaborate with the team to design and implement scalable data models that meet the needs of business stakeholders.
- Monitor and troubleshoot data pipeline issues, ensuring high availability and reliability of the data platform.
- Create and maintain technical documentation for data flows and processes.
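To give candidates a flavor of the performance-tuning work above, here is a minimal sketch using the snowflake-connector-python library. The connection details, the SALES_FACT table, and the clustering columns are hypothetical placeholders, not our actual environment:

```python
# Minimal sketch: define a clustering key and inspect clustering health
# on a hypothetical SALES_FACT table (all names/credentials are placeholders).
import snowflake.connector

conn = snowflake.connector.connect(
    account="your_account",    # placeholder account identifier
    user="your_user",
    password="your_password",
    warehouse="ANALYTICS_WH",  # hypothetical warehouse
    database="ANALYTICS",
    schema="PUBLIC",
)

cur = conn.cursor()
try:
    # Cluster large fact tables on commonly filtered columns so the
    # optimizer can prune micro-partitions at query time.
    cur.execute("ALTER TABLE SALES_FACT CLUSTER BY (SALE_DATE, REGION_ID)")

    # SYSTEM$CLUSTERING_INFORMATION reports how well-clustered the table
    # currently is for the given columns.
    cur.execute(
        "SELECT SYSTEM$CLUSTERING_INFORMATION('SALES_FACT', '(SALE_DATE, REGION_ID)')"
    )
    print(cur.fetchone()[0])
finally:
    cur.close()
    conn.close()
```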
Required Skills & Qualifications:
- 5+ years of experience in data engineering, working with large-scale data systems.
- Strong proficiency in SQL for data querying, analysis, and optimization.
- Experience with Snowflake, including SnowSQL, data warehousing concepts, and performance optimization techniques.
- Experience building ETL pipelines with tools such as Talend, Informatica, or Python-based ETL frameworks (a toy Python example follows this list).
- Understanding of data modeling techniques and best practices.
- Proficiency with cloud platforms like AWS, Azure, or Google Cloud for data storage and management.
- Experience in working with version control systems like Git.
- Excellent problem-solving skills and the ability to work both independently and as part of a team.
- Bachelor’s degree in Computer Science, Information Systems, or a related field preferred.
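For candidates wondering what "Python-based ETL" means in this context, the sketch below shows a toy extract-transform-load step with pandas and the Snowflake connector's write_pandas helper. The file path, table names, and connection details are illustrative assumptions, not our actual stack:

```python
# Toy ETL step: read a CSV, clean it, and load it into Snowflake.
# All paths, table names, and credentials are illustrative placeholders.
import pandas as pd
import snowflake.connector
from snowflake.connector.pandas_tools import write_pandas

def extract(path: str) -> pd.DataFrame:
    """Extract: read raw order data from a CSV file."""
    return pd.read_csv(path)

def transform(df: pd.DataFrame) -> pd.DataFrame:
    """Transform: drop duplicates and normalize column names."""
    df = df.drop_duplicates()
    df.columns = [c.strip().upper() for c in df.columns]
    return df

def load(df: pd.DataFrame, table: str) -> None:
    """Load: bulk-insert the frame into a Snowflake table."""
    conn = snowflake.connector.connect(
        account="your_account",  # placeholder connection details
        user="your_user",
        password="your_password",
        warehouse="LOAD_WH",
        database="RAW",
        schema="ORDERS",
    )
    try:
        # auto_create_table creates the target table if it does not
        # already exist (supported in recent connector versions).
        write_pandas(conn, df, table_name=table, auto_create_table=True)
    finally:
        conn.close()

if __name__ == "__main__":
    load(transform(extract("orders.csv")), "ORDERS_STG")
```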
Nice to Have:
- Experience with data orchestration tools like Apache Airflow (a minimal DAG sketch follows this list).
- Knowledge of Python or R for data manipulation and automation.
- Familiarity with data visualization tools like Power BI or Tableau.
- Understanding of DevOps and CI/CD practices in data engineering.
- Experience with machine learning models and data science workflows.
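To illustrate the orchestration skills above, here is a minimal Apache Airflow DAG sketch chaining three stubbed pipeline steps into a daily schedule. The DAG name, task bodies, and schedule are assumptions for illustration only, and the `schedule` argument assumes Airflow 2.4+:

```python
# Minimal Airflow DAG sketch: a daily pipeline chaining three steps.
# Step bodies are stubs; in practice they would call real ETL code.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("extract raw data")     # stub: pull data from a source system

def transform():
    print("transform data")       # stub: clean and reshape the data

def load():
    print("load into Snowflake")  # stub: write results to the warehouse

with DAG(
    dag_id="daily_orders_pipeline",  # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_transform >> t_load  # linear dependency chain
```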