Senior Data Engineer (Hadoop Ecosystem)


Job Location:

Bangalore - India

Monthly Salary: Not Disclosed
Experience Required: 5 years
Posted on: 5 hours ago
Vacancies: 1 Vacancy

Job Summary


  • Minimum 5 years of experience in Data Engineering.
  • Design, develop, and maintain robust data pipelines in Hadoop and related ecosystems, ensuring data reliability, scalability, and performance.
  • Implement ETL processes for batch and streaming analytics requirements.
  • Optimize and troubleshoot distributed systems for ingestion, storage, and processing.
  • Collaborate with data engineers, analysts, and platform engineers to align solutions with business needs.
  • Ensure data security, integrity, and compliance throughout the infrastructure.
  • Maintain documentation and contribute to architecture reviews.
  • Participate in incident response and operational excellence initiatives for the data warehouse.
  • Maintain a continuous-learning mindset and apply new Hadoop ecosystem tools and data technologies.
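The pipeline responsibilities above decompose into the familiar extract, transform, and load stages. A minimal stdlib-only sketch of that structure (the function names and sample records are illustrative, not from the posting; in a real Hadoop deployment these stages would run on Spark against HDFS/Hive rather than in-memory Python):

```python
# Illustrative batch ETL skeleton. Each stage is a small, testable unit,
# which is what "robust, reliable pipelines" usually comes down to.

def extract(rows):
    """Extract stage: yield raw records (an in-memory stand-in for an HDFS read)."""
    yield from rows

def transform(records):
    """Transform stage: drop malformed rows and normalise field types."""
    for rec in records:
        if rec.get("user_id") is None:
            continue  # discard malformed records rather than failing the whole batch
        yield {"user_id": rec["user_id"], "amount": float(rec.get("amount", 0))}

def load(records):
    """Load stage: materialise into a target store (a list stands in for a Hive table)."""
    return list(records)

raw = [{"user_id": 1, "amount": "9.5"}, {"user_id": None}, {"user_id": 2}]
result = load(transform(extract(raw)))
```

Keeping each stage a pure function makes the pipeline easy to unit-test and to port between batch and streaming execution engines.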




Requirements


  • Proficiency in Hadoop ecosystem technologies such as Spark, HDFS, Hive, Iceberg, and Spark SQL.
  • Extensive experience with Apache Kafka, Apache Flink, and other relevant streaming technologies.
  • Proven ability to design and implement automated data pipelines and materialized views.
  • Proficiency in Python, Unix shell scripting, or similar languages.
  • Good understanding of SQL databases such as Oracle, SQL Server, or similar.
  • Ops & CI/CD: monitoring (Prometheus/Grafana), logging, and CI/CD pipelines (Jenkins/GitHub Actions).
  • Core Engineering: data structures/algorithms, testing (JUnit/pytest), Git, and clean code.
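The "Core Engineering" requirement above pairs testing (pytest) with clean code. A short example of what that looks like in practice: a small, pure transformation function plus a pytest-style test (the function and data are hypothetical, chosen to resemble a typical pipeline step):

```python
def dedupe_by_key(records, key):
    """Keep the first occurrence of each key value, preserving input order."""
    seen = set()
    out = []
    for rec in records:
        k = rec[key]
        if k not in seen:
            seen.add(k)
            out.append(rec)
    return out

def test_dedupe_by_key():
    # pytest discovers and runs functions named test_*; plain asserts suffice.
    rows = [{"id": 1, "v": "a"}, {"id": 2, "v": "b"}, {"id": 1, "v": "c"}]
    result = dedupe_by_key(rows, "id")
    assert [r["id"] for r in result] == [1, 2]
    assert result[0]["v"] == "a"  # first occurrence wins
```

Because the function has no I/O or hidden state, the test runs instantly and the same logic can later be lifted into a Spark job unchanged.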

Benefits


  • Competitive salary and performance-based bonuses.
  • Comprehensive insurance plans.
  • Collaborative and supportive work environment.
  • Chance to learn and grow with a talented team.
  • A positive and fun work environment.




Company Industry

IT Services and IT Consulting

Key Skills

  • Apache Hive
  • S3
  • Hadoop
  • Redshift
  • Spark
  • AWS
  • Apache Pig
  • NoSQL
  • Big Data
  • Data Warehouse
  • Kafka
  • Scala