Senior Data Engineer
Location: Seattle, WA


$100,000 - $200,000

Job Description:


A day in the life... Responsible for building real-time streaming applications that enable streaming analytics, business intelligence, and analytical and operational reporting using modern open-source big data technologies.

Ingest data from heterogeneous data sources and publish them as enterprise business events in a Kafka stream.

Building raw data stores and data processing capabilities using big data technologies.


Responsible for building and consuming APIs.

  • Build scalable data pipelines for generating training data sets for machine learning deliverables.
  • Mentor junior engineers and drive end-to-end design, implementation, and delivery of engineering components.
  • Experience deploying, maintaining, and building applications on distributed stream-processing engines such as Storm, Flink, NiFi, and Spark.
  • Experience building data transformation layers and ETL frameworks using big data technologies such as Hive, Spark, and Presto.
  • Experience working with container management technologies such as Docker and Kubernetes.
  • Strive for continuous improvement of code quality and development practices.
  • Willingness to adapt to and self-learn new technologies and deliver on them.
  • Build and maintain solutions in highly available environments.
  • Working knowledge of CI/CD.
  • Working knowledge of building telemetry and data-integrity checks as part of application delivery.
  • Technical expertise to build code that is performant as well as secure.
  • Technical depth and vision to perform POCs and evaluate different technologies.
  • Translate business issues into technical terms.
  • Understand, leverage, and apply best practices effectively.
  • Lead by example and establish coding standards and best practices for the technology.
  • Collaborate with cross-functional teams (business stakeholders, engineers, program management, project management, etc.) to produce the best possible solutions.


  • Anticipate system/application challenges and propose solutions for them.
  • Contribute to story sizing and work estimates for implementation, validation, delivery, and documentation.
  • Review user stories to ensure a quality user experience, well-defined acceptance criteria and thorough test coverage.
  • Participate in design and code reviews to ensure quality and testability of feature code.
  • Implement test automation to validate new and existing code.
  • Adjust positively to quickly changing priorities and shifting goals.
You own this if you…

  • 5-7 years of professional experience in the practice area.
  • Proven, high-level expertise in Java and related technology stacks.
  • Cloud computing experience (e.g., AWS, Azure) is a plus.
  • Good working knowledge of Apache Kafka.
  • Good working knowledge of containerized technologies such as Docker and Kubernetes.
  • Working knowledge of big data technologies such as Apache Storm, Flink, NiFi, Spark, Presto, Elasticsearch, and DynamoDB, as well as relational data stores.
  • Experience building data warehouses/data marts is a plus.
  • Past experience building a data lake on Hadoop/AWS is a plus.
  • Proven proficiency in API development (REST).
  • BS or MS in Computer Science or equivalent.
  • Agile software development experience.
  • Experience working with Elasticsearch and NoSQL data stores is a plus.
  • Experience working in a metrics-driven environment is a plus.
  • Prior experience with machine learning algorithm implementation is a plus.

