SQL & HIVE analytical developer for BIG data

Developer will be responsible for the design and development of new surveillances and enhancements to existing surveillances in the Stock Market Regulation Technology domain. Review and analyze complex process, system, and/or data requirements and specifications. Serve as the technical subject matter expert for systems. Serve as the primary designer for complex component designs for systems. Build, test, deploy, and document complex software components for systems. Create software engineering strategies that help identify and mitigate risks. Lead other team members in peer review of code and identify reusable frameworks. Document and communicate development status in a timely manner, including metric reporting.

MapReduce & MRUnit, Spark, Hadoop

Developer will be responsible for analyzing requirements, prototyping data analysis solutions (primarily in Hive SQL or Spark and UNIX scripting), designing, developing and unit testing solutions, and facilitating solution deployment and support.

Candidates need strong capabilities in HiveQL and UNIX scripting.

Candidates should have experience with the Hadoop ecosystem and working with large data sets.

The system will consist of batch analytic processing on large sets of data.

Experience with Spark is preferred.
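
For illustration only, here is a minimal sketch of the kind of Hive-backed batch job this posting describes, written in Spark/Scala. The table and column names (trade_events, daily_symbol_volume, trade_date, symbol, quantity) and the date-partitioning scheme are assumptions, not details from the posting.

import org.apache.spark.sql.SparkSession

object DailyVolumeBatch {
  def main(args: Array[String]): Unit = {
    // The run date would typically be passed in by a UNIX wrapper script,
    // e.g. "2024-01-31" (hypothetical convention).
    val runDate = args(0)

    val spark = SparkSession.builder()
      .appName("daily-volume-batch")
      .enableHiveSupport() // lets spark.sql() read and write Hive tables
      .getOrCreate()

    // Batch aggregation over one partition of a large, date-partitioned table.
    val daily = spark.sql(
      s"""SELECT trade_date, symbol, SUM(quantity) AS total_qty
         |FROM trade_events
         |WHERE trade_date = '$runDate'
         |GROUP BY trade_date, symbol""".stripMargin)

    // Write the result to a Hive table for downstream analysis jobs.
    daily.write.mode("overwrite").saveAsTable("daily_symbol_volume")

    spark.stop()
  }
}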

Spark/Scala, Hive SQL, and UNIX Scripting

Developer will be responsible for analyzing requirements, designing, developing and unit testing solutions (primarily in Spark/Scala, Hive SQL, and UNIX scripting), performance tuning, and facilitating solution deployment and support. Candidates need to have strong capabilities in programming (Java or Scala), understanding of big data processing, UNIX scripting, and experience in SQL (preferably Oracle or Postgres). Candidates should have experience with the Hadoop ecosystem and working with large data sets. The system will consist of batch analytic processing on large sets of data.
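
As a rough illustration of the batch processing and performance-tuning work described above, the sketch below joins a large fact dataset against a small reference table in Spark/Scala. The paths, schema, partition count, and broadcast-join choice are assumptions made for the example, not requirements from the posting.

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{broadcast, col, sum}

object BatchAnalytics {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("batch-analytics")
      .getOrCreate()

    // Hypothetical inputs: large event data plus a small reference table.
    val events  = spark.read.parquet("/data/events")
    val symbols = spark.read.parquet("/data/symbol_ref")

    val result = events
      .repartition(200, col("symbol"))           // partition count tuned to cluster size
      .join(broadcast(symbols), Seq("symbol"))   // broadcast so the large side is not shuffled
      .groupBy("symbol", "exchange")
      .agg(sum("notional").alias("total_notional"))

    result.write.mode("overwrite").parquet("/data/out/notional_by_symbol")
    spark.stop()
  }
}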

Skills/Qualifications:

Required: 3+ years IT experience
Required: Programming – Spark / Scala
Required: SQL Development
Required: Unix / Shell scripting
Required: Designing distributed solutions for parallel processing of large data
Required: Full SDLC Experience (requirements analysis, design, development, unit testing, deployment, support)
Required: Good communication skills
Preferred: Agile Scrum process experience
Preferred: Big-Data technologies, Cloud Computing
Preferred: Test-driven development (see the sketch after this list)
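
To show what the test-driven development item above might look like in this stack, here is a small ScalaTest sketch. The TradeMath.notionalValue helper and its expected behaviour are invented for illustration; they are not part of the posting.

import org.scalatest.funsuite.AnyFunSuite

object TradeMath {
  // Pure business logic kept separate from Spark plumbing so it is easy to unit test.
  def notionalValue(quantity: Long, price: BigDecimal): BigDecimal =
    price * BigDecimal(quantity)
}

class TradeMathTest extends AnyFunSuite {
  test("notional value is price times quantity") {
    assert(TradeMath.notionalValue(100, BigDecimal("12.50")) == BigDecimal("1250.00"))
  }

  test("zero quantity yields zero notional") {
    assert(TradeMath.notionalValue(0, BigDecimal("99.99")) == BigDecimal("0.00"))
  }
}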

Hadoop Hive Expert – Hive SQL, Spark, UNIX Scripting

Hive Expert – Pattern Development. Job description: Developer will be responsible for analyzing requirements, prototyping data analysis solutions (primarily in Hive SQL or Spark and UNIX scripting), designing, developing and unit testing solutions, and facilitating solution deployment and support. Candidates need strong capabilities in HiveQL and UNIX scripting. Candidates should have experience with the Hadoop ecosystem and working with large data sets. The system will consist of batch analytic processing on large sets of data. Experience with Spark is preferred.
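
As a hedged example of the pattern-development work this posting mentions, the sketch below runs a Hive window-function query as a Spark/Scala batch step to flag simple order-quantity spikes. The orders table, its columns, and the 3x-quantity rule are invented for illustration.

import org.apache.spark.sql.SparkSession

object PatternScan {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("pattern-scan")
      .enableHiveSupport()
      .getOrCreate()

    // Flag orders whose quantity is more than 3x the previous order
    // for the same account and symbol, ordered by event time.
    val flagged = spark.sql(
      """SELECT account_id, symbol, event_time, quantity
        |FROM (
        |  SELECT account_id, symbol, event_time, quantity,
        |         LAG(quantity) OVER (PARTITION BY account_id, symbol
        |                             ORDER BY event_time) AS prev_qty
        |  FROM orders
        |) t
        |WHERE prev_qty IS NOT NULL AND quantity > 3 * prev_qty""".stripMargin)

    flagged.write.mode("overwrite").saveAsTable("order_quantity_spikes")
    spark.stop()
  }
}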