We are seeking a skilled Engineer with expertise in Red Hat OpenShift and Apache Spark. This role focuses on designing, deploying, and managing scalable data processing solutions in a cloud-native environment. You will collaborate closely with data scientists, software engineers, and DevOps teams to ensure robust, high-performance data pipelines and analytics platforms.
Contract Duration: 12 Months
Responsibilities:
Platform Management: Deploy, configure, and maintain OpenShift clusters to support containerized Spark applications.
Data Pipeline Development: Design and implement large-scale data processing workflows using Apache Spark.
Optimization: Tune Spark jobs for performance, leveraging OpenShift’s resource management capabilities (e.g., Kubernetes orchestration, auto-scaling; see the configuration sketch after this list).
Integration: Integrate Spark with upstream data sources (e.g., Kafka, cloud storage) and downstream sinks (e.g., databases, data lakes); see the streaming sketch after this list.
CI/CD Implementation: Build and maintain CI/CD pipelines for deploying Spark applications in OpenShift using tools like GitHub Actions, Sonar, and Harness.
Monitoring & Troubleshooting: Monitor cluster health, Spark job performance, and resource utilization using OpenShift tools (e.g., Prometheus, Grafana) and resolve issues proactively.
Security: Ensure compliance with security standards, implementing role-based access control (RBAC) and encryption for data in transit and at rest.
Collaboration: Work with cross-functional teams to define requirements, architect solutions, and support production deployments.
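For illustration only, a minimal PySpark sketch of the kind of Spark-on-OpenShift configuration the Platform Management and Optimization responsibilities describe. The API endpoint, namespace, container image, and resource values are hypothetical placeholders, not details from this posting; production jobs would more typically be launched with spark-submit in cluster mode.

# Illustrative sketch: all endpoint, namespace, and image values are
# hypothetical placeholders.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("etl-example")
    # OpenShift is Kubernetes-based, so Spark's standard k8s:// master
    # URL applies (client-mode sketch for brevity).
    .master("k8s://https://api.example-cluster:6443")
    .config("spark.kubernetes.namespace", "data-eng")  # hypothetical
    .config("spark.kubernetes.container.image",
            "registry.example/spark-py:3.5")           # hypothetical
    .config("spark.kubernetes.authenticate.driver.serviceAccountName",
            "spark")
    # Per-executor resource tuning, as in the Optimization bullet:
    .config("spark.executor.memory", "4g")
    .config("spark.executor.cores", "2")
    # Auto-scaling via dynamic allocation; shuffle tracking is required
    # on Kubernetes because there is no external shuffle service.
    .config("spark.dynamicAllocation.enabled", "true")
    .config("spark.dynamicAllocation.shuffleTracking.enabled", "true")
    .config("spark.dynamicAllocation.maxExecutors", "20")
    .getOrCreate()
)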
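Likewise, a minimal sketch of the Kafka-to-data-lake integration named above, using Spark Structured Streaming. The broker address, topic, and bucket paths are hypothetical, and the job assumes the spark-sql-kafka-0-10 package is on the classpath.

# Illustrative sketch: broker, topic, and bucket paths are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("kafka-to-lake").getOrCreate()

# Read a Kafka topic as a streaming source.
events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "kafka.example:9092")  # hypothetical
    .option("subscribe", "events")                            # hypothetical
    .option("startingOffsets", "latest")
    .load()
    .select(col("key").cast("string"),
            col("value").cast("string"),
            "timestamp")
)

# Sink to cloud storage as Parquet; the checkpoint location gives the
# job exactly-once file output across restarts.
query = (
    events.writeStream
    .format("parquet")
    .option("path", "s3a://example-bucket/events/")            # hypothetical
    .option("checkpointLocation", "s3a://example-bucket/chk/events/")
    .start()
)
query.awaitTermination()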
Qualifications:
Experience:
5+ years working with Apache Spark for big data processing.
3+ years of Django development experience.
2+ years of creating and maintaining conda environments.
4+ years managing containerized environments with OpenShift or Kubernetes.
Technical Skills:
Proficiency with Spark APIs in Python (PySpark), Scala, or Java.
Hands-on experience with OpenShift administration (e.g., cluster setup, networking, storage).
Proficiency in creating and maintaining conda environments and dependencies.
Familiarity with Docker and Kubernetes concepts (e.g., pods, deployments, services, and images).
Knowledge of distributed systems, cloud platforms (AWS, GCP, Azure), and data storage solutions (e.g., S3, HDFS).
Programming: Strong coding skills in Python, Scala, or Java; experience with shell scripting is a plus.
Tools: Experience with GitHub Actions, Helm, Harness, and related CI/CD tooling.
Problem-Solving: Ability to debug complex issues across distributed systems and optimize resource usage.
Education: Bachelor’s degree in Computer Science, Engineering, or a related field.
Location: Charlotte, NC
Employment Type: Contract
Pay Rate: $53.56/hr - $60.35/hr
Motion Recruitment Partners (MRP) is an Equal Opportunity Employer. All applicants must be currently authorized to work on a full-time basis in the country for which they are applying, and no sponsorship is currently available. Employment is subject to the successful completion of a pre-employment screening. Accommodation will be provided in all parts of the hiring process as required under MRP’s Employment Accommodation policy. Applicants need to make their needs known in advance.