The developer will be responsible for analyzing requirements; prototyping data analysis solutions (primarily in Hive SQL or Spark, plus UNIX scripting); designing, developing, and unit testing solutions; and facilitating solution deployment and support.
Candidates need strong capabilities in HiveQL and UNIX scripting.
Candidates should have experience with the Hadoop ecosystem and working with large data sets.
The system will consist of batch analytic processing on large sets of data.
Experience with Spark is preferred.
7+ years of IT experience
Strong HiveQL and SQL development skills
Performance tuning of MapReduce/Hive jobs
The candidate will be interviewing for a big data project that processes over 3 billion records daily. Experience with MapReduce and MRUnit is preferred. Experience with Spark and Scala is a plus, as is experience leading a small team.