Analytics DevOps and Platform Engineer (Flex - Hybrid)
UCLA Health Systems
Posted Friday, September 20, 2024
Posting ID: 18190_crt:1726872521912
Los Angeles, CA
Description
UCLA Health IT is looking for an outstanding Analytics DevOps and Platform Engineer (IT Architect) to join the Solutions Architecture and Engineering (SAE) group. The SAE team's goal is to keep our cloud and on-premises infrastructure ahead of our rapid customer growth while ensuring service reliability, performance, efficiency, and security. This role calls for a highly skilled and experienced IT professional with a strong foundation in cloud computing, Windows and Linux administration, Citrix virtualization, DevOps principles, and automation. The ideal candidate will possess a well-rounded skillset encompassing software development, knowledge of HPC and Citrix environments, and relevant cloud certifications. We are looking for a creative technical expert to support the transformation of various services from on-premises solutions to cloud technologies.
The Analytics DevOps and Platform Engineer will be part of the team responsible for designing, developing, and operating the infrastructure and applications that make up the Analytics Platform (VDI and related technologies, both on-premises and in the cloud). The engineer will have a strong technical background combining development engineering and IT skillsets, and will be responsible for troubleshooting, diagnosing, and fixing environment issues as well as developing the monitoring solutions and tools that automate daily operational activities.
The Analytics DevOps and Platform Engineer will work closely with clients and OHIA team members to understand the stakeholder requirements that drive the analysis and design of technical solutions in our cloud (Azure) environment. The engineer will be responsible for designing and implementing infrastructure and application build, release, deployment, and configuration activities, as well as pipeline automation, optimization, and management. Other responsibilities include architecting, orchestrating, and automating the Citrix infrastructure and service deployment, and working independently and collaboratively with the rest of the team and internal OHIA groups to gather requirements, prototype, architect, implement and update solutions, build and execute test plans, perform quality reviews, manage operations, and triage and fix operational issues.
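To give a flavor of the infrastructure and pipeline automation described above, here is a minimal, hypothetical Python sketch that wraps the Terraform CLI to plan and apply an environment definition. It assumes Terraform is installed and that a directory of .tf files exists; the directory name and flow are illustrative only, not part of the actual UCLA Health tooling.

```python
"""Hypothetical sketch: automate a Terraform plan/apply cycle from Python.

Assumes the `terraform` CLI is on PATH and that ./infra contains the
environment's .tf definitions. Names and paths are illustrative only.
"""
import subprocess
import sys

INFRA_DIR = "./infra"  # assumed location of the IaC definitions


def run(*args: str) -> None:
    """Run a terraform subcommand and fail loudly on a non-zero exit."""
    result = subprocess.run(["terraform", *args], cwd=INFRA_DIR)
    if result.returncode != 0:
        sys.exit(f"terraform {' '.join(args)} failed ({result.returncode})")


if __name__ == "__main__":
    run("init", "-input=false")                 # download providers/modules
    run("plan", "-input=false", "-out=tfplan")  # record the proposed changes
    run("apply", "-input=false", "tfplan")      # apply exactly the saved plan
```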
This flexible hybrid role allows for a blend of remote and on-site work, requiring on-site presence at least 5% (3 days) to 10% (6 days) per quarter, or as needed based on operational requirements. Please note, travel to the "home office" location is not reimbursed. Each employee will complete a FlexWork Agreement with their manager to outline expectations and ensure mutual understanding. These arrangements are periodically reviewed and may be adjusted or terminated as necessary.
Salary offers are based on a variety of factors including qualifications, experience, and internal equity. The full salary range for this position is $124,600 - $289,400 annually. The University anticipates offering a salary between the minimum and $180,000 annually.
Job Qualifications and Experience
- Bachelor's or Master's degree in Computer Science (or equivalent)
- 10+ years of extensive experience with enterprise-scale Linux and Windows technologies (server platforms, desktop platforms, Active Directory, IIS, Windows Clustering, virtualization, and collaboration tools).
- 7+ years of experience as an Azure/AWS/GCP cloud engineer, with in-depth knowledge of core services and offerings.
- 5+ years of development experience using languages such as Python, Java, Node.js (JavaScript), or Ruby, relevant scripting languages, and formats such as JSON; additional development experience is a plus.
- AWS or Azure cloud certifications (Cloud Engineer, Architect, or Administrator) are required.
- Working knowledge of DevOps principles and practices, or experience in a real-time operational role.
- Proven ability to automate processes using tools like Terraform, Chef, Puppet, Jenkins, Azure DevOps, or AWS CloudFormation.
- Experience with containerization technologies like Docker and orchestration tools like Kubernetes for efficient application deployment and management.
- Design, develop, and deploy robust environments using IaC tools (ARM templates, Terraform) and frameworks (Chef, Ansible).
- Experience working with distributed and heterogeneous technology environments (Linux and Windows).
- Deep understanding of core Azure services (VMs, Blobs, Key Vaults, networking) and AWS services (EC2, networking, S3/EBS); see the cloud SDK sketch after this list.
- Expertise in deploying and supporting cloud computing services (IaaS, PaaS, SaaS) across both platforms.
- Automate infrastructure provisioning, configuration, scaling, and monitoring across cloud platforms (Azure, AWS) and on-premises data centers.
- Possess expert knowledge of system administration and security protocols for both Windows and Linux environments.
- Experience with containerization (Docker, Apptainer, Singularity) and serverless computing.
- Experience with Agile/Scrum methodologies, software development lifecycle (SDLC), CI/CD, and DevOps principles. Participate in code and architecture reviews to ensure best practices are followed (peer testing, unit testing, documentation).
- Experience with version control systems (DevOps/Git), project management tools (Jira), and testing tools (Jenkins, Maven, Selenium).
- Build and manage state-of-the-art monitoring and log analysis tools for infrastructure health and performance (Wiz.io or similar).
- Experience deploying and managing monitoring tools (Prometheus, Grafana) to ensure infrastructure and application health (see the monitoring sketch after this list).
- Manage and monitor both cloud and on-premises environments, including proactive troubleshooting of complex issues across systems, applications, and cloud platforms.
- Understanding of security best practices for cloud environments and data protection.
- Working knowledge of databases
- Familiarity with data storage solutions (data lakes, data warehouses) on cloud platforms and understanding of data governance principles.
- Experience building and maintaining data pipelines for data ingestion, transformation, and preparation for analytics and ML models (see the data pipeline sketch after this list).
- Basic understanding of ML libraries and frameworks (TensorFlow, PyTorch) is a plus.
- Expertise in setting up and managing continuous integration/continuous delivery (CI/CD) pipelines for data science and ML workflows using tools like Git, Jenkins, Azure DevOps, or AWS CodePipeline.
- Familiarity with HPC products/platforms and Citrix implementations is a plus
- Extensive experience managing both cloud (AWS/Azure) and on-premises infrastructure, including Windows Server, Linux (RHEL/CentOS/Ubuntu), Active Directory, Group Policy Objects, virtualization (VMware, Citrix), networking, security (firewalls, VPNs), and storage.
- Possess a detailed understanding of architectural dependencies, including clustering, redundancy, and disaster recovery.
- Strong knowledge of IP networking, DNS, load balancing, and best practices for securing cloud and on-premises environments.
- Proficient in automating deployments, configurations, and security settings using scripting languages (JavaScript, Python, C#) and tools (Chef, Puppet, Ansible).
- Working knowledge of database design (MS-SQL, MySQL, NoSQL, MongoDB), administration, and technologies.
- Working knowledge of data modeling, data warehousing, and ETL processing techniques.
- Strong understanding of cloud infrastructure (Azure/AWS) and its data storage options (SQL databases, NoSQL databases).
- Experience with data integration methodologies like ETL (Extract-Transform-Load) using SQL Server Integration Services (SSIS) (a plus).
- Experience in developing data transformation scripts (various messaging formats) and managing APIs (RESTful or others) for data exchange.
- Experience in handling large datasets with data mapping, validation, and data cleansing techniques.
- Familiarity with standard development tools (source code repositories, version control systems like Git, IDEs, SQL interpreters).
- Experience with CI/CD tools (Git) for continuous deployment and delivery.
- Experience with scripting languages (PHP, Bash, PowerShell, Perl, Python, Ruby) for data manipulation and automation.
- Experience with MLOps tools (Kubeflow, MLflow) for managing the ML lifecycle.
- Experience with big data technologies (Spark, Hadoop) for large-scale data processing.
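The following is a minimal, hypothetical Python sketch of the kind of core-service familiarity the AWS bullet above refers to: enumerating EC2 instances and S3 buckets with boto3. It assumes AWS credentials are already configured; the region is illustrative only.

```python
"""Hypothetical sketch: enumerate core AWS resources (EC2, S3) with boto3.

Assumes AWS credentials are configured (environment, profile, or role);
the region below is illustrative only.
"""
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")
s3 = boto3.client("s3")

# List every EC2 instance and its current state.
for reservation in ec2.describe_instances()["Reservations"]:
    for instance in reservation["Instances"]:
        print(instance["InstanceId"], instance["State"]["Name"])

# List every S3 bucket in the account.
for bucket in s3.list_buckets()["Buckets"]:
    print(bucket["Name"])
```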
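Below is a minimal, hypothetical sketch of the monitoring work referenced above: exposing a custom infrastructure health metric with the Python prometheus_client library so Prometheus can scrape it and Grafana can chart it. The metric name, mount point, and port are assumptions for illustration.

```python
"""Hypothetical sketch: expose a disk-usage gauge for Prometheus to scrape.

The metric name, mount point, and port are illustrative assumptions.
"""
import shutil
import time

from prometheus_client import Gauge, start_http_server

disk_used_ratio = Gauge(
    "analytics_disk_used_ratio",
    "Fraction of disk space used on the analytics volume",
)

if __name__ == "__main__":
    start_http_server(8000)  # Prometheus scrapes http://host:8000/metrics
    while True:
        usage = shutil.disk_usage("/")
        disk_used_ratio.set(usage.used / usage.total)
        time.sleep(15)  # refresh roughly once per scrape interval
```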
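Finally, a minimal, hypothetical Python/pandas sketch of a data-pipeline step of the kind described above: ingest a raw CSV extract, apply simple cleansing and transformation, and write an analytics-ready Parquet file. The file paths and column names are illustrative only.

```python
"""Hypothetical sketch: a single ingest -> cleanse -> publish pipeline step.

File paths and column names are illustrative assumptions only.
Requires pandas (and pyarrow or fastparquet for Parquet output).
"""
import pandas as pd


def run_step(raw_csv: str, out_parquet: str) -> None:
    df = pd.read_csv(raw_csv)

    # Normalize column names so downstream steps see a consistent schema.
    df.columns = [c.strip().lower().replace(" ", "_") for c in df.columns]

    # Cleansing: drop exact duplicates and rows missing the record identifier.
    df = df.drop_duplicates()
    df = df.dropna(subset=["record_id"])

    # Transformation: parse the event timestamp, coercing bad values to NaT.
    df["event_time"] = pd.to_datetime(df["event_time"], errors="coerce")

    # Publish an analytics-ready file for downstream models and dashboards.
    df.to_parquet(out_parquet, index=False)


if __name__ == "__main__":
    run_step("raw_extract.csv", "analytics_ready.parquet")
```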
The company is an equal opportunity employer and will consider all applications without regard to race, sex, age, color, religion, national origin, veteran status, disability, sexual orientation, gender identity, genetic information, or any other characteristic protected by law.