Big Data Architect (with Kafka)

Chandler, AZ

Job Description:

Project Summary: Architecture, Design and Proofs of Concept.

 

Responsibilities:

  • Architecture and design of large programs
  • Lead the team and take responsibility for overall deliverables
  • Analyze, design, and support SIT, UAT, and PFIX phases
  • Create Proof of Concept/Technology as required

Technical Skills - In Detail

• Good understanding of data pipelines, including extraction, acquisition, transformation and visualization

• Prior experience working with RDBMS and Big Data distributions

• Experience with requirements gathering, systems development, systems integration and designing/developing APIs

• Experience with Linux and shell programming

• Experience with frameworks like Anaconda and developing ETL using PySpark on any major Big Data distribution

• Good understanding of XML processing using Python, Spark RDDs and DataFrames (a brief illustrative sketch follows this list)

• Performance tuning, unit testing and integration testing

• Excellent communication and articulation skills

• Self-starter with the ability to work in a dynamic and agile environment

• Experience working with the Hadoop ecosystem – MapReduce, Hive, HBase

• Experience working with at least one NoSQL database like Cassandra (C*) or MongoDB

 

• Experience with ElasticSearch
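For illustration only, a minimal sketch of the kind of PySpark ETL and XML handling referenced in the skills above. The record schema (id, name, amount), the sample records, the app name and the output path are hypothetical assumptions, not part of this role's specification:

import xml.etree.ElementTree as ET

from pyspark.sql import Row, SparkSession

# Illustrative sketch only; schema, sample data and output path are assumptions.
spark = SparkSession.builder.appName("xml-etl-sketch").getOrCreate()

def parse_record(xml_string):
    # Extract a few assumed fields from one XML record.
    root = ET.fromstring(xml_string)
    return Row(
        id=root.findtext("id"),
        name=root.findtext("name"),
        amount=float(root.findtext("amount") or 0.0),
    )

# Each element of the source RDD is assumed to hold one complete XML record.
raw_rdd = spark.sparkContext.parallelize([
    "<record><id>1</id><name>alpha</name><amount>10.5</amount></record>",
    "<record><id>2</id><name>beta</name><amount>20.0</amount></record>",
])

# Transform the RDD of parsed rows into a DataFrame, filter, and load.
df = spark.createDataFrame(raw_rdd.map(parse_record))
df.filter(df.amount > 0).write.mode("overwrite").parquet("/tmp/etl_output")

spark.stop()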




