Senior AI Security Engineer
New York, New York
Direct Hire
$150k - $180k
Our client is seeking a Senior Security Engineer with expertise in both cybersecurity and AI-driven systems to join their team. In this role, you will design, implement, and optimize security solutions leveraging agentic AI architectures and prompt engineering techniques. You will work at the forefront of AI + security, building systems that can autonomously detect, investigate, and respond to threats while ensuring safe and reliable AI behavior in high-stakes security contexts.
This role is ideal for a seasoned security engineer who is excited about harnessing the power of LLMs and agentic frameworks to transform how our client defends modern infrastructure.
Responsibilities
AI Security Engineering
- Research, design, and deploy agentic AI systems to augment threat detection, incident response, and vulnerability management.
- Develop prompt engineering strategies that improve the accuracy, reliability, and adversarial robustness of AI-driven security workflows.
- Fine-tune LLMs and design structured reasoning patterns for automated security playbooks.
Threat Detection & Response
- Integrate AI agents with SIEM, SOAR, and endpoint security platforms to enable autonomous or semi-autonomous response.
- Build pipelines that allow AI systems to collect, interpret, and correlate telemetry across infrastructure, applications, and cloud services.
- Validate AI-driven detections against red team and adversarial simulation outputs.
Safety, Reliability & Governance
- Anticipate and mitigate prompt injection, model manipulation, and other AI-specific attack vectors.
- Implement guardrails, evaluation frameworks, and human-in-the-loop systems for responsible AI deployment in security operations.
- Contribute to security policies and best practices for AI-assisted decision-making.
Collaboration & Enablement
- Partner with ML, Platform, and Security teams to ensure seamless integration of AI security agents into existing infrastructure.
- Mentor engineers on safe prompt design, AI system evaluation, and security-first deployment practices.
- Contribute thought leadership on the emerging discipline of AI + cybersecurity convergence.
Qualifications
Required:
- 5–7+ years in security engineering, incident response, or infrastructure security.
- Strong background in threat detection, security automation, and adversarial defense.
- Hands-on experience with LLMs, prompt engineering, or agentic frameworks (LangChain, AutoGPT, OpenAI function calling, etc.).
- Familiarity with common AI vulnerabilities (prompt injection, data poisoning, jailbreaks).
- Proficiency with Python and experience integrating APIs, orchestration layers, and security tools.
Preferred:
- Experience building AI-enabled SOC automation or threat hunting platforms.
- Familiarity with the ML lifecycle (training, fine-tuning, evaluation).
- Knowledge of compliance and governance considerations for AI in security (NIST AI RMF, EU AI Act, etc.).
- Contributions to research, open source, or publications on AI + security.