A Proof of Concept of vLLM at the Edge with MCP Tool Calling
#!/bin/bash
# Demo script to run the MCP-enabled LLM client
# Make sure the vLLM server is running first!
cd "$(dirname "$0")"
# Activate virtual environment if it exists
if [ -d "venv" ]; then
source venv/bin/activate
fi
# Check if vLLM server is running
if ! curl -s http://127.0.0.1:8000/health > /dev/null 2>&1; then
    echo "⚠️ Warning: vLLM server doesn't seem to be running on port 8000"
    echo "Please start it first with: ./start_server.sh"
    echo ""
    read -p "Continue anyway? (y/n) " -n 1 -r
    echo
    if [[ ! $REPLY =~ ^[Yy]$ ]]; then
        exit 1
    fi
fi
# Run the client
echo "Starting MCP client..."
echo ""
if [ $# -eq 0 ]; then
    # Default question
    python client/mcp_client.py "What's the weather like in Paris?"
else
    # Custom question
    python client/mcp_client.py "$@"
fi
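
The health check above probes the /health endpoint that vLLM's OpenAI-compatible server exposes, and the warning points at ./start_server.sh, which is not reproduced here. Below is a minimal sketch of what such a launcher could look like, assuming the standard vllm serve CLI; the model name, context length, and tool-call parser are illustrative assumptions, not values taken from this repository.

#!/bin/bash
# Hypothetical start_server.sh: starts a vLLM OpenAI-compatible server on the
# same host/port the demo script checks (127.0.0.1:8000).
cd "$(dirname "$0")"
# Reuse the same virtual environment convention as the demo script
if [ -d "venv" ]; then
    source venv/bin/activate
fi
# "vllm serve" exposes /health and /v1/chat/completions on the given port.
# --enable-auto-tool-choice and --tool-call-parser make the server emit
# OpenAI-style tool calls, which an MCP-bridging client can translate into
# MCP tool invocations. The model below is a placeholder for a small model
# suitable for edge hardware; pick one your device can hold in memory.
vllm serve Qwen/Qwen2.5-1.5B-Instruct \
    --host 127.0.0.1 \
    --port 8000 \
    --max-model-len 4096 \
    --enable-auto-tool-choice \
    --tool-call-parser hermes

With the server running, the demo script above can be invoked with no arguments to ask the default question ("What's the weather like in Paris?") or with a custom question passed as its arguments.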