PyTorch
vLLM: Easy, Fast, and Cheap LLM Serving for Everyone - Woosuk Kwon & Xiaoxuan Liu, UC Berkeley
9 months ago - 23:33
Anyscale
Fast LLM Serving with vLLM and PagedAttention
1 year ago - 32:07
InfoQ
LLM Serving: The 4 Hard Truths No One Tells You
11 days ago - 49:59
The Linux Foundation
Scalable and Efficient LLM Serving With the vLLM Production Stack - Junchen Jiang & Yue Zhu
11 days ago - 39:36
Ahmed Tremo
How to Efficiently Serve an LLM?
11 months ago - 12:13
Predibase
What Production-Grade LLM Serving Actually Requires (Infrastructure Deep Dive)
1 month ago - 5:58
LMCache Team
Create your multi-node LLM serving K8s cluster with one click
4 months ago - 0:31
AMD Developer Central
Simon Mo on vLLM: Easy, Fast, and Cost-Effective LLM Serving for Everyone
13 days ago - 18:08
Anyscale
Enabling Cost-Efficient LLM Serving with Ray Serve
1 year ago - 30:28
ACMMobiSys
MobiSys 25 Teaser - EdgeLoRA: An Efficient Multi-Tenant LLM Serving System on Edge Devices
2 weeks ago - 1:31
Jianchang Su
[MLArchSys 2025] | Runtime Attestation for Secure LLM Serving in Cloud-Native TEE
1 month ago - 8:26
AI Insight News
Simplify Your Open-Source LLM Serving with Anyscale's Aviary: Ray Serve Automation & Autoscaling
2 years ago - 0:53
Fuhai Gao
LLM Serving (Rust) demo
7 months ago - 5:06
kexin.chu2017
[MLArchSys 2025] | SafeKV: Safe KV-Cache Sharing in LLM Serving
1 month ago - 11:27
MLSys Singapore
E15 | MuxServe: Flexible Multiplexing for Efficient Multiple LLM Serving (ICML'24) [in Chinese]
1 year ago - 35:14
John Snow Labs
Ray Aviary: Open-Source Multi-LLM Serving
1 year ago - 19:16
Junchen Jiang
Reducing Prefill Delay for LLM Serving in RAG By Sharing Knowledge
1 year ago - 19:10
MLSys Singapore
E07 | Fast LLM Serving with vLLM and PagedAttention
1 year ago - 55:36
MIT HAN Lab
MLSys'25 - LServe: Efficient Long-sequence LLM Serving with Unified Sparse Attention
1 month ago - 11:36
The Prompt Index
Efficient LLM Serving on Hybrid Real-time and Best-effort Requests
2 months ago - 3:02
MIT HAN Lab
MLSys'25 - QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving
1 month ago - 13:45
Anyscale
Introducing Ray Aviary | 🦜🔍 Open Source Multi-LLM Serving
2 years ago - 13:33
DevConf
PagedAttention: Revolutionizing LLM Inference with Efficient Memory Management - DevConf.CZ 2025
2 weeks ago - 28:05
Legion Programming System
Legion Retreat 2024 - Low-Latency, High-Performance LLM Serving and Fine-tuning - Zhihao Jia
6 months ago - 30:35
HotCarbon
Offline Energy-Optimal LLM Serving: Workload-Based Energy Models for LLM Inference on Heterogeneous Systems
1 year ago - 10:47
Keyur
TPI-LLM: Serving 70B-scale LLMs Efficiently on Low-resource Edge Devices
9 months ago - 8:46
USENIX
OSDI '24 - dLoRA: Dynamically Orchestrating Requests and Adapters for LoRA LLM Serving
10 months ago - 14:34
Red Hat AI
Unlock LLM Speed: vLLM Crushes the Competition!
1 month ago - 0:48
Fahd Mirza
Mélange - Cost Efficient LLM Serving by Using Mixture of GPUs - Hands on Demo
1 year ago - 10:58
Neural Magic
[vLLM Office Hours #27] Intro to llm-d for Distributed LLM Inference
4 weeks ago - 1:19:57
AI Insight News
vLLM: Fast & Affordable LLM Serving with PagedAttention | UC Berkeley's Open-Source Library
2 years ago - 2:25
GOSIM Foundation
GOSIM CHINA 2024 - Kaichao You: vLLM: Easy, Fast, and Cheap LLM Serving for Everyone
8 months ago - 31:42
GOSIM Foundation
GOSIM CHINA 2024 - Kaichao You: vLLM - Easy, Fast, and Cheap LLM Serving for Everyone
7 months ago - 30:12
Charan H U
vLLM Inference Engine [in Kannada] | Easy, Fast, and Cheap LLM Serving with PagedAttention
1 year ago - 15:45
YanAITalk
LLM inference optimization: Architecture, KV cache and Flash attention
10 months ago - 44:06
S.P.I.T. Media
Task Scheduling for Decentralized LLM Serving | Dr. Sanjaya Kumar Panda | GenLang 5.0
Streamed 5 days ago - 4:06:11