Job Description:
Position: Sr Data Architect
Location: Chicago, IL (Hybrid)
Mandatory Skills:
Azure, Medallion Architecture, SQL, Databricks, and Data Lake
Job Description
As a Data Architect, you will be part of an Agile team building healthcare applications and
implementing new features while adhering to development best practices and coding standards.
Responsibilities:
• Design and architect scalable, high-performance, and cost-effective cloud-based solutions
using Azure services and Databricks. Develop comprehensive architecture blueprints that
include data pipelines, data warehousing, and advanced analytics solutions.
• Architect and implement big data processing solutions using Databricks, leveraging Apache
Spark for data transformation, ETL processes, and machine learning workflows. Design data
pipelines that handle large-scale data ingestion, processing, and storage.
• Implement data security, data governance, and compliance measures within the Databricks
environment using Unity Catalog.
• Work closely with Data engineers, DevOps teams, and other stakeholders to ensure
seamless integration of solutions. Collaborate with project managers and business analysts to
gather and interpret requirements, ensuring technical feasibility and alignment with business
objectives.
• Analyze and optimize system performance, identifying bottlenecks and implementing
solutions to enhance processing speed and efficiency. Implement monitoring tools and
strategies to proactively manage the health and performance of the Azure and Databricks
environments.
• Ensure that all solutions adhere to security best practices and compliance requirements,
including data encryption, access control, and logging. Implement strategies to protect
sensitive data and ensure compliance with relevant regulations, such as GDPR, HIPAA, etc.
Requirements:
• Deep understanding of Azure services, including but not limited to Azure Data Factory,
Azure Storage, Azure SQL, Azure Functions, etc. Proficiency in Databricks, Apache Spark,
and related big data technologies.
• Strong scripting skills in Python and SQL, and experience with Terraform or CloudFormation.
• Excellent problem-solving skills and the ability to work in a fast-paced environment.
• Experience with Kubernetes, Docker, and containerized applications. Familiarity with data
lake architectures, Delta Lake, and advanced analytics solutions. Strong understanding of
CI/CD pipelines, DevOps practices, and Infrastructure as Code (IaC).
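For candidates unfamiliar with the Medallion Architecture named in the mandatory skills, the idea is a layered refinement of data: raw records land in a "bronze" layer, are cleaned and conformed into "silver", and aggregated into business-ready "gold" tables. The toy sketch below illustrates the flow in plain Python; in a real Databricks deployment each layer would be a Delta Lake table and the transforms would run as Spark jobs (the field names and sample records here are purely illustrative).

```python
# Toy sketch of the medallion (bronze/silver/gold) pattern.
# Plain dicts and lists stand in for Delta tables; the patient/charge
# fields are hypothetical examples, not part of any real schema.

def to_bronze(raw_records):
    """Bronze: ingest records as-is, tagging each with a layer marker."""
    return [{**r, "_layer": "bronze"} for r in raw_records]

def to_silver(bronze):
    """Silver: drop malformed rows and normalize types."""
    silver = []
    for r in bronze:
        if r.get("patient_id") and r.get("charge") is not None:
            silver.append({"patient_id": str(r["patient_id"]),
                           "charge": float(r["charge"])})
    return silver

def to_gold(silver):
    """Gold: business-level aggregate (total charge per patient)."""
    totals = {}
    for r in silver:
        totals[r["patient_id"]] = totals.get(r["patient_id"], 0.0) + r["charge"]
    return totals

raw = [{"patient_id": 1, "charge": "100.5"},
       {"patient_id": None, "charge": 10},   # malformed: dropped in silver
       {"patient_id": 1, "charge": 49.5}]
gold = to_gold(to_silver(to_bronze(raw)))
print(gold)  # {'1': 150.0}
```

The key property this sketch preserves is that each layer is derived from the one below it, so bronze can always be replayed to rebuild silver and gold after a logic change.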
Key Skills:
- Python