Anyscale
Fast LLM Serving with vLLM and PagedAttention
1 year ago - 32:07
The Linux Foundation
Scalable and Efficient LLM Serving With the vLLM Production Stack - Junchen Jiang & Yue Zhu
2 weeks ago - 39:36
PyTorch
vLLM: Easy, Fast, and Cheap LLM Serving for Everyone - Woosuk Kwon & Xiaoxuan Liu, UC Berkeley
9 months ago - 23:33
LMCache Team
Create your multi-node LLM serving K8s cluster with one click
4 months ago - 0:31
Predibase
What Production-Grade LLM Serving Actually Requires (Infrastructure Deep Dive)
2 months ago - 5:58
Ahmed Tremo
How to Efficiently Serve an LLM?
11 months ago - 12:13
InfoQ
LLM Serving: The 4 Hard Truths No One Tells You
13 days ago - 49:59
PyTorch
vLLM: Easy, Fast, and Cheap LLM Serving for Everyone - Kaichao You, Tsinghua University
2 days ago - 15:05
AMD Developer Central
Simon Mo on vLLM: Easy, Fast, and Cost-Effective LLM Serving for Everyone
2 weeks ago - 18:08
Jianchang Su
[MLArchSys 2025] | Runtime Attestation for Secure LLM Serving in Cloud-Native TEE
1 month ago - 8:26
ACMMobiSys
MobiSys 25 Teaser - EdgeLoRA: An Efficient Multi-Tenant LLM Serving System on Edge Devices
2 weeks ago - 1:31
Fuhai Gao
LLM Serving (Rust) demo
7 months ago - 5:06
AI Insight News
Simplify Your Open-Source LLM Serving with Anyscale's Aviary: Ray Serve Automation & Autoscaling
2 years ago - 0:53
kexin.chu2017
[MLArchSys 2025] | SafeKV: Safe KV-Cache Sharing in LLM Serving
1 month ago - 11:27
Anyscale
Enabling Cost-Efficient LLM Serving with Ray Serve
1 year ago - 30:28
PyTorch
SGLang: An Efficient Open-Source Framework for Large-Scale LLM Serving - Liangsheng Yin
2 days ago - 19:37
MLSys Singapore
E15 | MuxServe: Flexible Multiplexing for Efficient Multiple LLM Serving (ICML'24) [In Chinese]
1 year ago - 35:14
John Snow Labs
Ray Aviary: Open-Source Multi-LLM Serving
1 year ago - 19:16
MIT HAN Lab
MLSys'25 - LServe: Efficient Long-sequence LLM Serving with Unified Sparse Attention
1 month ago - 11:36
Junchen Jiang
Reducing Prefill Delay for LLM Serving in RAG By Sharing Knowledge
1 year ago - 19:10
HotCarbon
Offline Energy-Optimal LLM Serving: Workload-Based Energy Models for LLM Inference on Heterogeneous Systems
1 year ago - 10:47
AMD Developer Central
Introducing Lemonade Server: Local LLM Serving with GPU and NPU Acceleration
2 days ago - 6:55
MIT HAN Lab
MLSys'25 - QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving
1 month ago - 13:45
The Prompt Index
Efficient LLM Serving on Hybrid Real-time and Best-effort Requests
2 months ago - 3:02
IBM Technology
What is vLLM? Efficient AI Inference for Large Language Models
1 month ago - 4:58
Legion Programming System
Legion Retreat 2024 - Low-Latency, High-Performance LLM Serving and Fine-tuning - Zhihao Jia
6 months ago - 30:35
Anyscale
Introducing Ray Aviary | 🦜🔍 Open Source Multi-LLM Serving
2 years ago - 13:33
Keyur
TPI-LLM: Serving 70B-scale LLMs Efficiently on Low-resource Edge Devices
9 months ago - 8:46
Fahd Mirza
Mélange - Cost Efficient LLM Serving by Using Mixture of GPUs - Hands on Demo
1 year ago - 10:58
Red Hat AI
Unlock LLM Speed: vLLM Crushes the Competition!
1 month ago - 0:48
MLSys Singapore
E07 | Fast LLM Serving with vLLM and PagedAttention
1 year ago - 55:36
GOSIM Foundation
GOSIM CHINA 2024 - Kaichao You: vLLM: Easy, Fast, and Cheap LLM Serving for Everyone
8 months ago - 31:42
Charan H U
vLLM Inference Engine [In Kannada] | Easy, Fast, and Cheap LLM Serving with PagedAttention
1 year ago - 15:45
USENIX
OSDI '24 - dLoRA: Dynamically Orchestrating Requests and Adapters for LoRA LLM Serving
10 months ago - 14:34
GOSIM Foundation
GOSIM CHINA 2024 - Kaichao You: vLLM - Easy, Fast, and Cheap LLM Serving for Everyone
7 months ago - 30:12
AI Insight News
vLLM: Fast & Affordable LLM Serving with PagedAttention | UC Berkeley's Open-Source Library
2 years ago - 2:25
Fahd Mirza
LitServe - LLM Serving Inference Engine - Install and Test Locally
10 months ago - 10:29
Fahd Mirza
InstCache - A Predictive Cache for LLM Serving
5 months ago - 7:08
Neural Magic
[vLLM Office Hours #27] Intro to llm-d for Distributed LLM Inference
1 month ago - 1:19:57
YanAITalk
LLM inference optimization: Architecture, KV cache and Flash attention
10 months ago - 44:06
GOSIM Foundation
[GOSIM AI Paris 2025] Erwan Gallen & Eldar Kurtic: vLLM: Multi-Accelerator & Quantized LLM Serving
4 weeks ago - 21:08
Sway Ducky
R&B song about Anyscale's Aviary, LLM serving library (AI music video) - Sway Ducky
1 year ago - 0:55
TRYEXCEPT
Large Language Model Serving - ML Systems Design Interview
3 months ago - 12:59
AMD Developer Central
vLLM: Easy, Fast, and Cheap LLM Serving, Woosuk Kwon, UC Berkeley
6 months ago - 22:30
AI Engineer
Mastering LLM Inference Optimization: From Theory to Cost-Effective Deployment - Mark Moyou
6 months ago - 33:39
PyCon Lithuania
Isaac Chung - Speed up open source LLM-serving with llama-cpp-python
1 year ago - 23:41
Arxiv Papers
Autellix: An Efficient Serving Engine for LLM Agents as General Programs
4 months ago - 36:28
Trelis Research
Serve a Custom LLM for Over 100 Customers
1 year ago - 51:56
UCFCompArch
Andes: Defining and Enhancing Quality-of-Experience in LLM-Based Text Streaming Services
4 months ago - 46:59
The Prompt Index
BROS: Revolutionizing LLM Request Handling!
2 months ago - 0:43
NDSS Symposium
NDSS 2025 - I Know What You Asked: Prompt Leakage via KV-Cache Sharing in Multi-Tenant LLM Serving
2 months ago - 16:22
S.P.I.T. Media
Task Scheduling for Decentralized LLM Serving | Dr. Sanjaya Kumar Panda | GenLang 5.0
Streamed 8 days ago - 4:06:11
Arxiv Papers
[short] Infinite-LLM: Efficient LLM Service for Long Context with DistAttention and Distributed KVCache
1 year ago - 2:59
Fahd Mirza
How LLMs Use Large Context Windows
1 year ago - 3:33
Vultr
Scaling LLM Inference Globally: Novita AI & Vultr in Partnership
2 weeks ago - 13:44
Arxiv Papers
[QA] Autellix: An Efficient Serving Engine for LLM Agents as General Programs
4 months ago - 8:20