AI Security Architect

At BNY, our culture allows us to run our company better and enables employees' growth and success. As a leading global financial services company at the heart of the global financial system, we influence nearly 20% of the world's investible assets. Every day, our teams harness cutting-edge AI and breakthrough technologies to collaborate with clients, driving transformative solutions that redefine industries and uplift communities worldwide. Recognized as a top destination for innovators, BNY is where bold ideas meet advanced technology and exceptional talent. Together, we power the future of finance - and this is what #LifeAtBNY is all about. Join us and be part of something extraordinary.

We're seeking a future team member for the role of AI Security Architect to join our Cybersecurity team. This role can be based in Pittsburgh, PA; Lake Mary, FL; or New York, NY.

Overview
BNY is seeking an AI Security Architect to lead the design, implementation, and governance of security controls for AI/ML systems across the enterprise. This role will define the target architecture and security patterns for AI-enabled products and platforms, ensuring resilient, compliant, and trustworthy AI. The ideal candidate combines deep expertise in cybersecurity and cloud with hands-on knowledge of modern AI/ML infrastructure, data protection, adversarial threat models, and secure MLOps.

Primary Responsibilities
- Define enterprise AI security architecture: develop reference architectures, guardrails, and standards for secure data pipelines, model training/inference, and AI-integrated applications across on-prem and cloud.
- Secure MLOps/ML platforms: architect identity, secrets management, network segmentation, and least-privilege access for feature stores, model registries, orchestration, and deployment pipelines.
- Data protection by design: establish controls for sensitive data ingestion, anonymization/pseudonymization, encryption (at rest/in transit), tokenization, and lineage across AI workflows.
- Adversarial ML defense: design controls and tests for model poisoning, evasion, model theft/exfiltration, prompt injection, jailbreaking, data leakage, and output manipulation.
- AI supply chain security: govern third-party models, APIs, and datasets; enforce SBOMs for AI components; evaluate provenance, licensing, and dependency risk.
- Policy and governance integration: translate AI security requirements into actionable standards and control evidence; align with enterprise risk, compliance, and model governance processes.
- Threat modeling and security testing: lead threat modeling for AI systems; design red-teaming and secure evaluation methods for models and agents; integrate chaos/resilience testing.
- Secure development lifecycle: embed AI-specific security checks (static/dynamic scans, IaC policy-as-code, data quality gates, bias/robustness checks) into CI/CD and change management.
- Runtime protection: implement guardrails, content filters, output validation, rate limiting, anomaly detection, and monitoring for AI services and agentic workflows.
- Observability and incident response: define logging/telemetry (model inputs/outputs, drift, performance, safety events); integrate AI-specific playbooks into SOC operations.
- Zero Trust for AI: design identity-aware access, micro-segmentation, and continuous verification for data scientists, services, and agents.
- Privacy and ethics controls: partner with privacy and legal to operationalize consent, minimization, purpose limitation, and responsible AI guardrails, including human-in-the-loop where appropriate.
- Resilience and continuity: design disaster recovery, backup/restore, model reproducibility, and contingency plans for AI platforms and critical use cases.
- Vendor/platform assessments: evaluate cloud AI services, open-source frameworks, and commercial tools for security posture, compliance, and fit-for-purpose.
- Risk management: lead control testing and risk assessments for AI initiatives; document residual risks and remediation plans; support audits and regulatory queries.
- Reference implementations: deliver secure patterns, sample code, and automation (e.g., reusable Terraform/Policy-as-Code, secrets patterns, logging schemas) to accelerate adoption.
- Stakeholder leadership: partner with platform engineering, data science, enterprise architecture, cyber operations, and product teams to drive end-to-end secure outcomes.
- Coaching and enablement: build education and guidance for architects, data scientists, and engineers on secure AI practices, design patterns, and common pitfalls.
- Continuous improvement: track emerging threats, standards, and best practices; lead updates to architecture and controls; measure effectiveness via KPIs and control health.
Required Qualifications
- 12+ years in cybersecurity/enterprise security architecture with 3+ years focused on AI/ML or data platform security at scale.
- Expertise in cloud security (AWS/Azure/GCP) including identity, secrets management, key management (KMS/HSM), network segmentation, and policy-as-code.
- Strong knowledge of AI/ML workflows: data ingestion/feature engineering, model training/inference, MLOps tooling (model registry, orchestrators, serving).
- Practical experience with adversarial ML concepts and defenses; familiarity with model robustness, prompt injection risks, and secure evaluation methods.
- Proficiency in designing observability/telemetry for AI systems (e.g., logging prompts/outputs, drift/quality metrics, safety events) with SIEM/SOAR integration.
- Hands-on with infrastructure-as-code (Terraform/CloudFormation), CI/CD, and secure SDLC practices tailored to data/ML systems.
- Deep understanding of data protection (encryption, tokenization, anonymization), privacy by design, and secure data lifecycle management.
- Strong stakeholder management and communication skills; ability to convert complex risks into clear architecture decisions and implementation guidance.
Preferred Qualifications
- Experience architecting secure AI agents and LLM applications including guardrails, content filters, and output validation.
- Familiarity with standards and frameworks relevant to AI and data (e.g., NIST AI RMF, cloud CIS benchmarks, OWASP for ML/LLM, privacy controls).
- Background in model governance and risk management (e.g., testing for drift, bias, stability, and explainability) and integration with enterprise control frameworks.
- Programming/scripting proficiency (Python preferred) for reference implementations, automation, and security tooling integrations.
- Experience with container security, Kubernetes, service mesh, and microservices patterns in AI platforms.
- Prior leadership in enterprise-scale transformations, enabling secure adoption of AI across multiple business lines.
At BNY, our culture speaks for itself. Check out the latest BNY news at the BNY Newsroom and on BNY LinkedIn. Here are a few of our recent awards:
- America's Most Innovative Companies, Fortune, 2025
- World's Most Admired Companies, Fortune, 2025
- Most Just Companies, Just Capital and CNBC, 2025
Our Benefits and Rewards
BNY offers highly competitive compensation, benefits, and wellbeing programs rooted in a strong culture of excellence and our pay-for-performance philosophy. We provide access to flexible global resources and tools for your life's journey. Focus on your health, foster your personal resilience, and reach your financial goals as a valued member of our team, supported by generous paid leave, including paid volunteer time, that can see you and your family through the moments that matter.

BNY assesses market data to ensure a competitive compensation package for our employees. The base salary for this position is expected to be between $142,000 and $259,000 per year at the commencement of employment. However, base salary, if hired, will be determined on an individualized basis, including as to experience and market location, and is only part of the BNY total compensation package, which, depending on the position, may also include commission earnings, discretionary bonuses, short- and long-term incentive packages, and Company-sponsored benefit programs.
This position is at-will, and the Company reserves the right to modify base salary (as well as any other discretionary payment or compensation) at any time, including for reasons related to individual performance, change in geographic location, Company or individual department/team performance, and market factors.