Senior Data Engineer/Architect (Java, Python, PostgreSQL, Spark, Kafka)

Arlington, VA


Full Time

$150k - $190k

Job Description: Our client is a leading political analytics company that leverages cutting-edge technology to provide comprehensive solutions for data-driven decision-making. They specialize in harnessing the power of big data to deliver actionable insights for clients in and around the bipartisan arena.

They are seeking a highly skilled and experienced Senior Data Engineer/Architect to join their dynamic team. This role is a blend of data engineering, software engineering, and big data responsibilities; the ideal candidate will have a strong background in building and optimizing data pipelines, designing robust architectures, and implementing scalable solutions to support growing data needs.

This role is hybrid in Arlington, VA, with four days on-site and remote Fridays.

Responsibilities:
- Design, develop, and maintain scalable data pipelines to process and analyze large datasets.
- Ensure data integrity, quality, and security across all data workflows.
- Implement data validation and transformation processes to support data analytics and reporting.
- Develop and maintain high-quality, reusable code in Java and Python.
- Collaborate with software engineering teams to integrate data solutions into existing applications.
- Optimize code performance and scalability to handle large-scale data processing.
- Architect and manage big data infrastructure using technologies such as Spark and Kafka.
- Implement real-time data processing and streaming solutions to support business needs.
- Develop and maintain data lakes and warehouses to ensure efficient data storage and retrieval.
- Design and manage both SQL and NoSQL databases to support diverse data requirements.
- Design and implement data models and architectures that support data analytics and business intelligence.
- Collaborate with stakeholders to define data requirements and ensure alignment with business goals.
- Develop and maintain documentation for data architecture and processes.
Required Skills/Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 5+ years of experience in data engineering, software engineering, or a related role.
- Proficiency in Java and Python programming languages.
- Strong experience with SQL and NoSQL databases (e.g., MySQL, PostgreSQL, MongoDB, Cassandra).
- Hands-on experience with big data technologies such as Spark and Kafka.
- Familiarity with the ELK stack (Elasticsearch, Logstash, Kibana) for log analysis and monitoring.
- Strong understanding of data architecture principles and best practices.
- Excellent problem-solving skills and the ability to work independently or as part of a team.
- Strong communication skills to collaborate effectively with technical and non-technical stakeholders.
Preferred Skills/Qualifications:
- Experience with cloud platforms (e.g., AWS, Azure, Google Cloud) and their data services.
- Knowledge of containerization and orchestration tools (e.g., Docker, Kubernetes).
- Familiarity with machine learning and data science concepts.
- Experience with CI/CD pipelines and DevOps practices.
Benefits:
- Competitive salary and performance-based bonuses.
- Comprehensive health, dental, and vision insurance.
- 401(k) retirement plan with company match.
- Professional development opportunities and ongoing training.

**Applicants must be currently authorized to work in the US on a full-time basis now and in the future.**


Posted by: Lindsay Troyer