About Us

Welcome to Fast-LLM! We are a global team of engineers, researchers, and AI professionals led by the Foundation Models Lab at ServiceNow Research, dedicated to advancing large language models (LLMs) and providing the highest-performance tools for serious users. Designed with professionals, research institutions, and enterprises in mind, Fast-LLM offers the speed, scalability, and flexibility needed to train the largest and most complex models. Our commitment to open source ensures that you have full control over your workflows, without the limitations or compromises of commercial frameworks.

🚀 Our Mission

Our mission is to deliver a best-in-class library for training large-scale language models, combining cutting-edge performance with robust, customizable features. Fast-LLM is built to meet the needs of researchers and organizations who push the boundaries of generative AI, enabling them to train state-of-the-art models more efficiently. By optimizing training workflows and scaling to massive compute clusters, we help professionals unlock the full potential of LLMs, reducing costs and time-to-deployment for ambitious AI projects.

🌍 Our Vision

We envision Fast-LLM as the go-to solution for serious AI practitioners who require more than what typical frameworks can offer. Our goal is to empower research institutions, corporate AI teams, and universities to train sophisticated models that exceed the capabilities of standard tools. By creating a highly performant and customizable library, we aim to be the backbone of cutting-edge AI research and development, equipping experts with the tools they need to tackle the toughest training challenges.

🎯 Our Values

At Fast-LLM, we adhere to a set of guiding principles that define our approach:

  • Performance-Driven: We are relentless in our pursuit of speed and efficiency. Fast-LLM is built to reduce training time and scale to the largest clusters, enabling our users to achieve breakthrough results faster.
  • Professional-Grade Customization: We understand that serious AI work demands flexibility. Fast-LLM is designed for extensive customization, allowing users to tailor every aspect of the training process to their unique needs.
  • Open Innovation: While we cater to advanced users, our commitment to open source ensures that innovation remains accessible. We believe in building a community where professionals can collaborate and contribute to shaping the future of AI.
  • Reliability at Scale: Fast-LLM is built with rigorous standards to support production-level workloads. We prioritize stability, reproducibility, and robustness, ensuring that your models can scale from research to real-world applications seamlessly.

👥 Meet the Team

Fast-LLM is led by the Foundation Models Lab at ServiceNow Research, with development driven by a dedicated group of professionals who bring extensive expertise in AI, machine learning, and distributed systems. While the project direction is guided by the Foundation Models Lab, contributions come from a growing network of researchers, developers, and industry experts worldwide. Here are some of the key members leading the project:

  • Joel Lamy Poirier - Lead Developer and Maintainer, ServiceNow Research: Joel spearheads the core development, ensuring that Fast-LLM delivers on its promise of speed and scalability.
  • Sean Hughes - Ecosystem Director, ServiceNow Research: Sean focuses on building partnerships and open scientific collaborations to advance Fast-LLM's capabilities and reach.
  • Torsten Scholak - Research Lead, ServiceNow Research: Torsten leads our research efforts, driving the scientific innovations that keep Fast-LLM at the forefront of AI training.

Our core team includes members affiliated with ServiceNow Research, as well as other contributors who bring unique perspectives and skills to the project. We welcome new participants from the broader AI community who share our vision of creating the best tools for training large-scale language models.