## Overview
This guide walks you through setting up and running an LLMule provider node. By running a node, you contribute computing power to the network and earn tokens.
## Prerequisites
- Node.js 20 or higher
- Ollama or LM Studio installed
- 4GB+ RAM (model dependent)
- Stable internet connection
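
Before moving on, you can sanity-check these prerequisites from a terminal. A minimal preflight sketch; the last line applies only if you plan to use LM Studio (Option 2) and assumes its default port of 1234:

```bash
# Node.js 20 or higher
node --version

# One of the two local model servers should respond:
ollama list                              # Ollama installed and running
curl -s http://localhost:1234/v1/models  # LM Studio's local server
```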
## Setup Options
### Option 1: Ollama Setup
- Install Ollama:

```bash
# macOS/Linux
curl -fsSL https://ollama.ai/install.sh | sh

# Windows: download the installer from https://ollama.ai/download
```
- Pull supported models:

```bash
# Tiny tier
ollama pull tinyllama

# Small tier
ollama pull mistral:latest

# Medium tier
ollama pull phi4:latest
```
- Verify the Ollama installation:

```bash
ollama list
```
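
With at least one model pulled, you can also confirm the Ollama server responds. A quick smoke test, assuming the default port (11434) and the tinyllama model pulled above:

```bash
# Request a short, non-streaming completion from the local Ollama server.
curl http://localhost:11434/api/generate -d '{
  "model": "tinyllama",
  "prompt": "Say hello in one sentence.",
  "stream": false
}'
```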
### Option 2: LM Studio Setup
- Download LM Studio from https://lmstudio.ai
- Configure LM Studio:
  - Open LM Studio
  - Go to Settings > API
  - Enable OpenAI API compatibility
  - Note your port (default: 1234)
- Load models:
  - Download supported models
  - Import them into LM Studio
  - Start the local server (you can verify it with the check below)
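
Once the server is running, confirm the OpenAI-compatible endpoint answers. A minimal check, assuming the default port of 1234:

```bash
# List the models LM Studio currently serves through its OpenAI-compatible API.
curl http://localhost:1234/v1/models
```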
## LLMule Client Setup