Responsible AI Solution Risk Assessor
United States, Washington, Redmond
Overview

Are you passionate about advancing the positive impact of AI on a global scale? In this position, you will play a pivotal role in ensuring that our AI initiatives are implemented responsibly and effectively. Our Trust and Integrity Protection (TrIP) team works with other parts of Microsoft to ensure we continue to be one of the most trusted companies in the world.

We are seeking a detail-oriented and principled Responsible AI Solution Risk Assessor to join a team evaluating AI use cases across our organization. This role is critical in ensuring that AI solutions are developed and deployed in alignment with Microsoft's Responsible AI principles, regulatory requirements, and ethical standards. You will work closely with the Office of Responsible AI and other governance bodies to identify, assess, and mitigate risks associated with AI technologies.

In alignment with our Microsoft values, we are committed to cultivating an inclusive work environment for all employees to positively impact our culture every day. Microsoft's mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.
Responsibilities

- Risk Assessment Ownership: Lead the Responsible AI risk assessment process for AI projects within your purview.
- Use Case Evaluation: Analyze proposed AI solutions for ethical, privacy, and security risks, including identifying sensitive use cases (e.g., facial recognition, biometric analysis, or legally sensitive applications).
- Escalation Management: Determine when use cases require escalation to internal review boards such as the Deployment Safety Board or other governance entities.
- Approval Coordination: Ensure all necessary approvals are obtained before development or deal sign-off, maintaining alignment with internal Responsible AI policies.
- Documentation & Compliance: Maintain thorough documentation of risk assessments, approvals, and mitigation strategies to support audit readiness and compliance.
- Stakeholder Engagement: Collaborate with product teams, legal, compliance, and engineering to ensure risk considerations are addressed early in the development lifecycle.
- Policy Integration: Translate Responsible AI policies into actionable assessment criteria and workflows.
- Continuous Improvement: Contribute to the evolution of risk assessment frameworks and tools based on emerging technologies and regulatory changes.