Acceldata is the market leader in Enterprise Data Observability, providing solutions that help build and operate great data products. The Product Support Engineer will take ownership of complex customer environments, mentor junior team members, and drive the reliability of Hadoop and Spark-based data processing systems.
Responsibilities:
- Ownership: Tackle hard technical problems in Hadoop with little or no direct daily supervision
- Troubleshooting: Provide tier-2/3 support for data or performance issues in Hadoop clusters across the entire technical stack
- Debugging: Conduct deep-dive debugging and optimisation of Hadoop clusters, including NiFi, Impala, and Spark jobs
- Migration: Lead product support during ODP Hadoop migrations and upgrades, ensuring post-migration stability and addressing evolving technical hurdles
- Optimisation: Design and optimise distributed Hadoop-based applications to ensure low-latency, high-throughput performance for big data workloads
- Availability: Bridge the gap between EST and PST hours; this position involves weekend coverage, so flexibility in working hours is essential
Requirements:
- 5+ years of hands-on experience working with Hadoop environments
- Technical proficiency in core Hadoop services (HDFS, YARN, and Hive/Impala) and good working knowledge of Kafka, NiFi, Ambari, and Cloudera Manager internals
- Extensive experience in troubleshooting and debugging Hadoop components
- Advanced skills in configuring, tuning, and troubleshooting Red Hat and Debian-based distributions
- Ability to pivot working hours to cover weekend shifts
- Proficiency in Python, Bash, or Scala for system automation and performance monitoring