Big Data engineer with solid functional Scala and distributed programming experience (2 or more years)
Other requirements include:
· Proven in-depth knowledge of the following functional and technical topics:
· Architecting shared-service platforms on MPP appliances (Netezza, Teradata) or on Hadoop-based platforms
· In-memory distributed data grids, distributed file systems, and cluster and parallel compute architectures
· Traditional relational databases (IBM DB2, MS SQL Server, MySQL, etc.)
· Logical and Physical Data modeling, Data Dictionaries
· High availability and contingency solutions
· Knowledge of Data Quality Management and Master Data Management concepts
· Demonstrable knowledge of Continuous Integration, Test-Driven Development, and code analysis, including their application within the software development lifecycle
· Excellent written and spoken communication skills.
· A track record of making complex business decisions with authority, even in times of ambiguity, considering the potential long-term risks and implications
· Proponent of innovation, best practices, sound design with data and information optimization in mind, strong development habits, and efficient team/project structures
· Capable of articulating complex designs, code, and applications for large-scale projects in simple terms for a non-technical, client-facing audience
· Hadoop distributions, preferably Hortonworks or Cloudera
· Experience with languages such as Python and Java, and with Spark
· Knowledge of AWS
To apply for this job, please visit www.linkedin.com.