Cloud Data Engineer

Location: Bangalore, India

As a Cloud Data Engineer, you will design, build, and optimise data warehouses, data lakes, and data marts for large financial institutions. You will work closely with technology and business leads to enhance critical enterprise applications across on‑prem environments and AWS, leveraging the Modern Data Stack including Snowflake and Starburst to deliver scalable, high‑quality data solutions.

What You’ll Do and How You’ll Succeed

  • Manage data analysis and integration across disparate systems.
  • Translate functional specifications from business users into technical designs for implementation and deployment.
  • Extract, transform, and load large volumes of structured and unstructured data into AWS data lakes or data warehouses.
  • Work with cross‑functional teams to develop prototypes, produce design artefacts, develop components, support system integration testing (SIT) and user acceptance testing (UAT), triage issues, and perform bug fixes.
  • Optimise and fine‑tune data pipeline jobs for performance and scalability.
  • Implement data quality and validation processes to ensure data accuracy and integrity.
  • Provide problem‑solving expertise and perform complex analysis to design business intelligence integrations.
  • Convert physical data integration models and design specifications into source code.
  • Ensure high performance and quality of data integration systems to meet business requirements.

We’d Love to Hear From You If…

Experience

  • You have a bachelor’s degree in IT, Information Systems, Computer Science, Software Engineering, or a related field.
  • You have 5+ years of experience as a Data Engineer, with strong experience building data pipelines and processing large datasets.
  • You have experience working with data lakes, data warehouses, and distributed computing systems.
  • You have experience in the financial services or banking industry (preferred).

Technical Expertise

  • You have 5+ years of hands‑on experience with AWS services such as AWS Glue, Redshift, EMR, RDS, Kinesis, S3, Athena, DynamoDB, Step Functions, and Lambda.
  • You have 3+ years of experience with Informatica for building data pipelines.
  • You have 2+ years of experience with Snowflake Data Platform (highly desirable).
  • You have exposure to data virtualisation platforms such as Starburst or Denodo.
  • You are skilled in HiveQL, Python, Spark, Scala, and big data processing.
  • You understand data modelling, database design, and ETL principles.
  • You are familiar with data governance, data security, and cloud compliance practices.
  • You can optimise and fine‑tune Spark jobs and data pipelines for performance.
  • AWS data‑related certifications are a plus.

Ways of Working

  • You communicate effectively and collaborate well within cross‑functional teams.
  • You are committed to high‑quality delivery, continuous improvement, and elegant solution design.
  • You approach problem‑solving with analytical depth and technical rigour.

Apply Now

Realise your potential at Thakral One.
