Staff Applied Research Scientist
ServiceNow Research conducts both fundamental and applied research to future-proof AI-powered experiences. We are a group of researchers, applied scientists, and developers who lay the foundations for, research, experiment with, and de-risk the AI technologies that will unlock new work experiences. In this role, you will be part of the dynamic AI Trust & Governance Lab, whose objective is to drive breakthroughs and advances in trustworthy AI. Our work stretches across the main pillars that drive trust, including safety, reliability, robustness, and security. We work on methods to identify, understand, and measure existing and emergent risks and capabilities. We do this through research, experimentation, prototyping, and advising, helping teams throughout the company strengthen and deepen their approach to trustworthy AI.
Role:
Applied research involves applying concepts and methods emerging from fundamental research to real-world contexts, exploring how they might need to change to become more relevant and scalable for teams throughout the company. In this role, you will work alongside other Trust & Governance applied researchers in the field of trustworthy AI. Overwhelmingly, this will involve working on challenges associated with large generative models across a wide variety of use cases. In some cases, you might work on training new models; in others, you might focus on risk detection and measurement for models built or fine-tuned by others. You will thrive in this role by quickly understanding the problem at hand, considering potential solutions, and validating your ideas. As part of the Trust & Governance Lab, strong communication skills are a must, as you will bridge gaps between teams in fundamental research, product, governance, and beyond.
Qualifications:
To be successful in this role you have:
- A Master’s degree with 4 years of relevant experience (preferably in industry) or a Ph.D. in a relevant field with 2 years of relevant experience;
- Passion for and demonstrated experience in trustworthiness-related topics;
- Familiarity and experience with a range of approaches to evaluating, benchmarking, and testing the performance and risks of large language models and other generative AI models;
- Familiarity with adversarial testing approaches, including but not limited to red teaming;
- Experience using state-of-the-art generative models for inference; experience fine-tuning and evaluating such models is an important asset;
- Experience building and working with prototypes of models and measurement approaches, as well as the capacity to quickly validate whether a given avenue is worth pursuing;
- Experience conducting literature reviews and evaluating and implementing (sometimes from scratch) the promising solutions identified;
- Familiarity with popular deep learning frameworks and tools, such as PyTorch, Hugging Face models, DeepSpeed, etc.;
- Ability to work on large projects with self-directed initiative;
- Capacity to deal with ambiguity, prioritize needs, and deliver results in a dynamic environment;
- Ability to communicate research findings to a non-research audience; and
- A ‘getting things done’ mindset.