Senior Machine Learning Engineer – LLM

Moveworks is the AI copilot that takes the friction out of work.

It unifies every business system, giving employees one place to find information and automate tasks, simplifying work and increasing productivity.

Powered by a genAI infrastructure that leverages the world’s most advanced LLMs and our proprietary MoveLM models, the Moveworks Copilot understands employee requests, devises intelligent plans, and then executes actions to get work done across application boundaries.

The world’s most recognizable brands, including Databricks, Broadcom, Hearst, and Palo Alto Networks, trust Moveworks to automate repetitive support issues, provide a universal search interface, and handle common use cases across different applications.

Founded in 2016, Moveworks has raised a total of $315 million in funding and was most recently valued at $2.1 billion, thanks to our award-winning product and team. In 2023, we were included in the Forbes Cloud 100 list as well as the Forbes AI 50 for the fifth consecutive year. We were also recognized by the 2023 Edison Awards for AI Optimized Productivity, and were included on Fast Company’s Most Innovative Companies list for 2024!

Moveworks has over 500 employees in six offices around the world, and is backed by some of the world’s most prominent investors, including Kleiner Perkins, Lightspeed, Bain Capital Ventures, Sapphire Ventures, Iconiq, and more.

Come join one of the most innovative teams on the planet!

What You Will Do:

We are looking for a Machine Learning Engineer to help build cutting-edge ML infrastructure for building and serving LLMs at Moveworks. This role will be critical in building, optimizing, and scaling end-to-end machine learning systems. The ML infra team’s responsibilities include distributed training and inference pipelines for large language models (LLMs), model evaluation and monitoring frameworks, LLM latency optimization, and more. These frameworks serve as a strong foundation for the hundreds of ML and NLP models we run in production, serving hundreds of millions of enterprise employees. We are solving challenges in both the scalability of our services and the optimization of core algorithms.

In this role you will work closely with our machine learning team, data infrastructure team, and other core teams. Above all, your work will impact the way our customers experience AI. Put another way, this role is absolutely critical to the long-term scalability of our core AI product and, ultimately, the company. You will be responsible for building and productionizing ML infrastructure that runs state-of-the-art models. If you are looking for a high-impact, fast-moving role to take your work to the next level, we should have a conversation.

  • Design, build, and optimize scalable machine learning infrastructure to support training, evaluation, and deployment of large language models
  • Build abstractions that automate steps across different ML workflows
  • Collaborate with cross-functional teams of engineers, data analysts, machine learning experts, and product managers to build new features
  • Leverage your experience to drive best practices in ML and data engineering

What You Bring To The Table:

  • 2+ years of industry experience in machine learning, infrastructure, or related fields
  • Experience with deep learning frameworks such as PyTorch or Hugging Face, or LLM serving frameworks such as vLLM or TensorRT-LLM
  • Experience building and scaling end-to-end machine learning systems
  • Experience building scalable microservices and ETL pipelines
  • Expertise in Python and experience with a performant language such as C++ or Go
  • Bachelor’s degree in Computer Science, Computer Engineering, Mathematics, or a related field
  • An interest in following research publications from the machine learning and software engineering communities
  • Effective communication skills and experience collaborating cross-functionally with other teams

Nice To Haves:

  • Experience with ML inference optimization using TensorRT
  • Experience with distributed training frameworks such as DeepSpeed
  • Experience managing and scaling GPU inference services via Kubernetes

Compensation Range: $129,000 – $257,000

  • Our compensation package includes a market-competitive salary, equity for all full-time roles, exceptional benefits, and, for applicable roles, commissions or bonus plans.

Ultimately, in determining pay, final offers may vary from the amount listed based on geography, the role’s scope and complexity, the candidate’s experience and expertise, and other factors.

Moveworks Is An Equal Opportunity Employer:

  • Moveworks is proud to be an equal opportunity employer. We provide employment opportunities without regard to age, race, color, ancestry, national origin, religion, disability, sex, gender identity or expression, sexual orientation, veteran status, or any other characteristics protected by law.
