Data Engineer
Location: Redwood City, CA

Job Description:

Gyft.com is looking for a Data Engineer to join our team. Gyft is a fast-paced startup on the cutting edge of the $100bn gift card market. Our digital gift card platform enables customers to manage their gift cards: users can upload, send, and redeem gift cards from their mobile phones. Gyft has a powerful backend system that integrates with POS systems for hundreds (and soon thousands) of merchants. Gyft is seamlessly integrated with Facebook to make sending gift cards convenient and fun!

Within your role you will:
- Gather functional requirements, develop technical specifications, and handle project and test planning
- Support and maintain all existing data-related functionality, including reports and dashboards
- Design and develop ETL jobs across multiple platforms and tools, including Hadoop and MapReduce
- Resolve defects/bugs during QA testing, pre-production, production, and post-release patches
- Work cross-functionally with various Intuit teams: Product Management, Project Management, Data Architects, Data Scientists, Data Analysts, Software Engineers, and other Data Engineers

You thrive when given broad objectives with minimal supervision, enjoy a fast-paced "startup" environment, and have excellent written and verbal communication skills.

Key Qualifications and Experience:
- BS/MS in computer science or equivalent work experience
- Demonstrated ability to explain complex technical issues to both technical and non-technical audiences
- Strong understanding of the software design/architecture process
- 5+ years of software development experience on large-scale distributed systems
- Familiarity with the Hadoop open source stack, including Hive, Pig, and Sqoop, and with RDBMS such as MySQL
- Ability to work with others to develop, refine, and scale data management and analytics procedures, systems, workflows, and best practices
- Experience building data structures to support analytical/research/actuarial functions strongly preferred
- Experience with RDBMS systems (Oracle, DB2, Teradata, SQL Server) strongly preferred
- Experience in ETL development (DataStage, Informatica, Ab Initio, SSIS, SAS DI)
- Experience in C#, F#, Python, or Java development a plus
- Ability to gather and process raw data at scale (including writing scripts, web scraping, calling APIs, and writing SQL queries)
- Proficiency in programming languages such as Python, Perl, Ruby, or Scala
- Experience with Hadoop-based technologies such as MapReduce and Hive, and with MongoDB or Cassandra
- Experience with AWS a strong plus

