vLLM is an open-source, high-throughput, memory-efficient inference and serving engine for LLMs. vLLM 0.5.4 contains a security vulnerability: completions API requests with an empty prompt cause the vLLM API server to crash, resulting in a denial of service. This rule helps defend against A06:2021 - Vulnerable and Outdated Components in the OWASP Top 10 - 2021. Other references: None
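As a minimal sketch of the kind of check such a rule performs, the function below validates an OpenAI-style completions request body and rejects empty prompts before they reach the backend. The function name and rejection logic are illustrative assumptions, not vLLM code; only the "prompt" field name follows the completions API that vLLM exposes.

```python
# Illustrative pre-filter (hypothetical helper, not part of vLLM):
# reject completion requests whose "prompt" field is missing or empty,
# so malformed requests never reach the vulnerable API server.
import json

def is_valid_completion_request(raw_body: str) -> bool:
    """Return True only if the request body carries a non-empty prompt."""
    try:
        body = json.loads(raw_body)
    except json.JSONDecodeError:
        return False
    prompt = body.get("prompt")
    # Reject missing, empty-string, or empty-list prompts.
    if prompt is None:
        return False
    if isinstance(prompt, str):
        return len(prompt.strip()) > 0
    if isinstance(prompt, list):
        return bool(prompt) and all(
            isinstance(p, str) and p.strip() for p in prompt
        )
    return False

# An empty prompt is rejected instead of being forwarded:
print(is_valid_completion_request('{"model": "m", "prompt": ""}'))   # False
print(is_valid_completion_request('{"model": "m", "prompt": "hi"}')) # True
```

A gateway or WAF applying this rule would drop or reject requests for which the check returns False, returning an error to the client rather than forwarding the request.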