Compare commits

..

8 Commits

| Author | SHA1 | Message | Date |
|---|---|---|---|
| Saifeddine ALOUI | 6b63597b8b | Update README.md | 2025-04-14 14:11:21 +02:00 |
| Saifeddine ALOUI | 72699065a1 | Merge pull request #25 from sebdotv/minor-fixes (Minor fixes) | 2025-04-14 14:09:48 +02:00 |
| sebdotv | 4a320f0929 | update readme: fix Docker run, add Bearer example using curl | 2025-04-10 15:20:57 +02:00 |
| sebdotv | 618cb57dc9 | update example config.ini: queue_size is actually not implemented (yet) | 2025-04-10 15:17:58 +02:00 |
| sebdotv | 6880c40d7a | update Dockerfile: run Python unbuffered to output logs immediately | 2025-04-10 15:16:59 +02:00 |
| Saifeddine ALOUI | 28ebc14020 | Update README.md | 2025-04-09 08:56:48 +02:00 |
| Saifeddine ALOUI | c923a7860e | Update README.md | 2025-03-26 22:09:45 +01:00 |
| Saifeddine ALOUI | 349cd117b8 | Update README.md | 2025-03-26 22:08:40 +01:00 |
9 changed files with 238 additions and 204 deletions

View File

@@ -21,5 +21,8 @@ COPY authorized_users.txt .
# Start the proxy server as entrypoint
ENTRYPOINT ["ollama_proxy_server"]
# Do not buffer output, e.g. logs to stdout
ENV PYTHONUNBUFFERED=1
# Set command line parameters
CMD ["--config", "./config.ini", "--users_list", "./authorized_users.txt", "--port", "8080"]
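
The added `PYTHONUNBUFFERED=1` makes the proxy's log lines show up in the container output immediately. As a quick check (assuming the `ollama-proxy-server` container name used in the README below), you can follow the logs with:

```bash
# Stream the proxy's stdout/stderr; `podman logs -f` works the same way.
docker logs -f ollama-proxy-server
```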

README.md
View File

@@ -1,75 +1,162 @@
# Ollama Proxy Server

[![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](LICENSE)
[![Python Version](https://img.shields.io/badge/python-3.11-green.svg)](https://www.python.org/downloads/release/python-311/)
[![GitHub Stars](https://img.shields.io/github/stars/ParisNeo/ollama_proxy_server?style=social)](https://github.com/ParisNeo/ollama_proxy_server)

Ollama Proxy Server is a lightweight, secure proxy server designed to add a security layer to one or multiple Ollama servers. It routes incoming requests to the backend server with the lowest load, minimizing server strain and improving responsiveness. Built with Python, this project is ideal for managing distributed Ollama instances with authentication and logging capabilities. It acts as a lightweight reverse proxy for load balancing and rate limiting, is licensed under the Apache 2.0 license, and can be installed using pip. This README covers setting up, installing, and using the Ollama Proxy Server.

**Author:** ParisNeo
**License:** Apache 2.0
**Repository:** [https://github.com/ParisNeo/ollama_proxy_server](https://github.com/ParisNeo/ollama_proxy_server)
## Features
* **Load Balancing:** Routes requests to the Ollama server with the fewest ongoing requests (a small sketch of this selection follows the list below).
* **Security:** Implements bearer token authentication using a `user:key` format.
* **Asynchronous Logging:** Logs access and errors to a CSV file without blocking request handling.
* **Connection Pooling:** Uses persistent HTTP connections for faster backend communication.
* **Streaming Support:** Properly forwards streaming responses from Ollama servers.
* **Command-Line Tools:** Includes utilities to run the server and manage users.
* **Cross-Platform:** Runs on any OS supporting Python 3.11.
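
The load-balancing rule is the simplest possible one; here is a minimal sketch of the selection step (simplified from the `min(...)` call in `main.py`, with an illustrative server list):

```python
# Each entry mirrors the (name, state) pairs built by get_config() in main.py;
# state['ongoing_requests'] counts requests currently being proxied to that backend.
servers = [
    ("server0", {"url": "http://localhost:11434", "ongoing_requests": 2}),
    ("server1", {"url": "http://localhost:11435", "ongoing_requests": 0}),
]

# Route the next request to the backend with the fewest in-flight requests.
least_loaded = min(servers, key=lambda s: s[1]["ongoing_requests"])
print(least_loaded[0])  # -> server1
```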
## Project Structure
```
ollama_proxy_server/
|- add_user.py # Script to add users to the authorized list
|- authorized_users.txt.example # Example authorized users file
|- config.ini.example # Example configuration file
|- main.py # Main proxy server script
.gitignore # Git ignore file
Dockerfile # Docker configuration
LICENSE # Apache 2.0 license text
requirements.txt # Runtime dependencies
requirements_dev.txt # Development dependencies
setup.py # Setup script for installation
README.md # This file
```
## Installation
### Prerequisites
* Python 3.11 or higher (older revisions of this README listed Python >= 3.8)
* Git (optional, for cloning the repository)

The quickest route is an editable install from a clone:
1. Clone or download the `ollama_proxy_server` repository from GitHub: https://github.com/ParisNeo/ollama_proxy_server
2. Navigate to the cloned directory in the terminal and run `pip install -e .`

### Installation using the Dockerfile
1. Clone this repository as described above.
2. Build your container image with the Dockerfile provided by this repository.

#### Podman
`cd ollama_proxy_server`
`podman build -t ollama_proxy_server:latest .`

#### Docker
`cd ollama_proxy_server`
`docker build -t ollama_proxy_server:latest .`
### Option 1: Install from PyPI (Not Yet Published)
Once published, install using pip:
```bash
pip install ollama_proxy_server
```
### Option 2: Install from Source
Clone the repository:
```bash
git clone https://github.com/ParisNeo/ollama_proxy_server.git
cd ollama_proxy_server
```
Install dependencies:
```bash
pip install -r requirements.txt
```
Install the package:
```bash
pip install .
```
### Option 3: Use Docker
Build the Docker image:
```bash
docker build -t ollama_proxy_server .
```
Run the container:
```bash
docker run -p 8080:8080 -v $(pwd)/config.ini:/app/config.ini -v $(pwd)/authorized_users.txt:/app/authorized_users.txt ollama_proxy_server
```
Test that it works:
```bash
curl localhost:8080 -H "Authorization: Bearer user1:0XAXAXAQX5A1F"
```
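
The same check can be made from Python with the `requests` library (a sketch; it assumes the placeholder key `user1:0XAXAXAQX5A1F` above is actually present in your `authorized_users.txt`):

```python
import requests

# Query the proxy started by the docker run command above.
resp = requests.get(
    "http://localhost:8080",
    headers={"Authorization": "Bearer user1:0XAXAXAQX5A1F"},
)
print(resp.status_code, resp.text[:200])
```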
## Configuration
### Servers configuration (config.ini)
Copy `config.ini.example` to `config.ini` (in the same directory as your script) and edit it. Each backend server gets its own section, and the `url` key points to that Ollama backend:
```ini
[server0]
url = http://localhost:11434

# Add more servers as needed
# [server1]
# url = http://another-server:11434
```
* `url`: The URL of an Ollama backend server. Replace `http://localhost:11434` with the URL and port of your first server.

The older configuration format also carried a `queue_size` value, meant as the maximum number of requests that can be queued at a given time for a server (note that `queue_size` is not actually implemented yet):
```ini
[DefaultServer]
url = http://localhost:11434
queue_size = 5

[SecondaryServer]
url = http://localhost:3002
queue_size = 3

# Add as many servers as needed, in the same format as [DefaultServer] and [SecondaryServer].
```
### Authorized users (authorized_users.txt)
Copy `authorized_users.txt.example` to `authorized_users.txt` (in the same directory as your script) and edit it. The file holds one `user:key` pair per line:
```text
user1:key1
user2:key2
```
Replace `user1`, `key1`, `user2`, and `key2` with the desired username and API key for each user. The bundled example file uses the same format with placeholder entries:
```text
user:key
another_user:another_key
```
You can also use the `ollama_proxy_add_user` utility to add a user and generate a key automatically:
```bash
ollama_proxy_add_user --users_list [path to the authorized_users.txt file]
```
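
For reference, the proxy reads this file with Python's standard `configparser`; the following is a simplified sketch of `main.py`'s `get_config` (the per-server session/queue bookkeeping is omitted here):

```python
import configparser

# Every section in config.ini becomes one backend entry keyed by its 'url'.
config = configparser.ConfigParser()
config.read("config.ini")
backends = [(name, config[name]["url"]) for name in config.sections()]
print(backends)  # e.g. [('server0', 'http://localhost:11434')]
```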
## Usage
### Starting the server
Start the Ollama Proxy Server by running the following command in your terminal:
```bash
python3 ollama_proxy_server/main.py --config [configuration file path] --users_list [users list file path] --port [port number to access the proxy]
```
The server listens on the port passed with `--port` (the examples in this README use 8080; `main.py` defaults to port 8000 when the flag is omitted).
### Client requests
To send a request to the server, use the following command:
```bash
curl -X <METHOD> -H "Authorization: Bearer <USER_KEY>" http://localhost:<PORT>/<PATH> [--data <POST_DATA>]
```
Replace `<METHOD>` with the HTTP method (GET or POST), `<USER_KEY>` with a valid user:key pair from your `authorized_users.txt`, `<PORT>` with the port number of your running Ollama Proxy Server, and `<PATH>` with the target endpoint URL (e.g., "/api/generate"). If you are making a POST request, include the `--data <POST_DATA>` option to send data in the body.
For example:
```bash
curl -X POST -H "Authorization: Bearer user1:key1" http://localhost:8080/api/generate --data '{"model": "mixtral:latest", "prompt": "Once upon a time,", "stream": false, "temperature": 0.3, "max_tokens": 1024}'
```
### Running the Server
If you installed the package from source, you can also start the proxy directly:
```bash
python main.py --config config.ini --users_list authorized_users.txt
```
### Starting the server using the created Container-Image
To start the proxy in the background with the image created above, you can use either:
1) docker: `docker run -d --name ollama-proxy-server -p 8080:8080 ollama_proxy_server:latest`
2) podman: `podman run -d --name ollama-proxy-server -p 8080:8080 ollama_proxy_server:latest`
### Managing Users
Use the `add_user.py` script to add new users.
```bash
python add_user.py <username> <key>
```
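
`add_user.py` generates a random key for the new user via a small `generate_key(length=10)` helper; a rough sketch of such a generator is below (the exact character set used by the script is not shown in this diff, so the alphabet here is an assumption):

```python
import random
import string

def generate_key(length: int = 10) -> str:
    # Assumed alphabet of letters and digits; add_user.py's actual choice may differ.
    alphabet = string.ascii_letters + string.digits
    return "".join(random.choice(alphabet) for _ in range(length))

print(generate_key())  # e.g. 'K9mPqvL3aB'
```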
## Contributing
Contributions are welcome! Please follow these steps:
1. Fork the repository.
2. Create a feature branch (`git checkout -b feature/your-feature`).
3. Commit your changes (`git commit -am 'Add your feature'`).
4. Push to the branch (`git push origin feature/your-feature`).
5. Open a Pull Request.
See `CONTRIBUTING.md` for more details (to be added).
## License
This project is licensed under the Apache License 2.0. See the `LICENSE` file for details.
## Acknowledgments
Built by ParisNeo.
Thanks to the open-source community for tools like `requests` and `ascii_colors`.
See you soon!

View File

@@ -1,10 +1,8 @@
[DefaultServer]
url = http://localhost:11434
queue_size = 5
[SecondaryServer]
url = http://localhost:3002
queue_size = 3
# Add more servers as you need.

View File

@@ -1,14 +1,6 @@
"""
project: ollama_proxy_server
file: add_user.py
author: ParisNeo (Saifeddine ALOUI)
description: A utility to add users to the authorized_users.txt file for the Ollama Proxy Server.
license: Apache 2.0
repository: https://github.com/ParisNeo/ollama_proxy_server
"""
import sys
import random
from getpass import getuser
from pathlib import Path
def generate_key(length=10):

View File

@@ -1,11 +0,0 @@
# Example authorized users file for Ollama Proxy Server
# Project: ollama_proxy_server
# Author: ParisNeo (Saifeddine ALOUI)
# License: Apache 2.0
# Repository: https://github.com/ParisNeo/ollama_proxy_server
# Copy this file to authorized_users.txt and edit the entries to add your users and keys.
# Format: username:key
# Example user entries:
alice:abc123!@#XYZ
bob:K9$mPq&*vL

View File

@@ -1,22 +0,0 @@
# Example configuration file for Ollama Proxy Server
# Copy this file to config.ini and edit the values to match your environment.
# Section for backend server URLs
# Each server should have its own section, e.g., [server0], [server1], etc.
# The 'url' key specifies the URL of the backend Ollama server.
[server0]
url = http://localhost:11434
# Add additional servers as needed, e.g.:
# [server1]
# url = http://another-server:11434
# Section for logging configuration
[Logging]
# log_path: the path to the access log file (ensure the application has write permissions)
log_path = access_log.txt
# Section for user management
[Users]
# users_list: the path to the file containing authorized users and their keys
users_list = authorized_users.txt

View File

@@ -1,10 +1,8 @@
"""
project: ollama_proxy_server
file: main.py
author: ParisNeo (Saifeddine ALOUI)
description: A proxy server adding a security layer to one or multiple Ollama servers, routing requests to minimize server load.
license: Apache 2.0
repository: https://github.com/ParisNeo/ollama_proxy_server
author: ParisNeo
description: This is a proxy server that adds a security layer to one or multiple ollama servers and routes the requests to the right server in order to minimize the charge of the server.
"""
import configparser
@@ -19,95 +17,70 @@ from ascii_colors import ASCIIColors
from pathlib import Path
import csv
import datetime
import threading
import shutil
def get_config(filename):
config = configparser.ConfigParser()
config.read(filename)
return [(name, {
'url': config[name]['url'],
'session': requests.Session(),
'ongoing_requests': 0,
'lock': threading.Lock()
}) for name in config.sections()]
return [(name, {'url': config[name]['url'], 'queue': Queue()}) for name in config.sections()]
# Read the authorized users and their keys from a file
def get_authorized_users(filename):
with open(filename, 'r') as f:
lines = f.readlines()
authorized_users = {}
for line in lines:
if line.strip() == "":
if line=="":
continue
try:
user, key = line.strip().split(':')
authorized_users[user] = key
except:
ASCIIColors.red(f"User entry broken: {line.strip()}")
ASCIIColors.red(f"User entry broken:{line.strip()}")
return authorized_users
def log_writer(log_queue, log_file_path):
with open(log_file_path, mode='a', newline='') as csvfile:
fieldnames = ['time_stamp', 'event', 'user_name', 'ip_address', 'access', 'server', 'nb_queued_requests_on_server', 'error']
writer = csv.DictWriter(csvfile, fieldnames=fieldnames)
if csvfile.tell() == 0:
writer.writeheader()
while True:
log_entry = log_queue.get()
if log_entry is None: # Signal to exit
break
writer.writerow(log_entry)
csvfile.flush()
def main():
parser = argparse.ArgumentParser(description="Ollama Proxy Server by ParisNeo")
parser.add_argument('--config', default="config.ini", help='Path to the config file')
parser = argparse.ArgumentParser()
parser.add_argument('--config', default="config.ini", help='Path to the authorized users list')
parser.add_argument('--log_path', default="access_log.txt", help='Path to the access log file')
parser.add_argument('--users_list', default="authorized_users.txt", help='Path to the authorized users list')
parser.add_argument('--users_list', default="authorized_users.txt", help='Path to the config file')
parser.add_argument('--port', type=int, default=8000, help='Port number for the server')
parser.add_argument('-d', '--deactivate_security', action='store_true', help='Deactivates security')
args = parser.parse_args()
servers = get_config(args.config)
servers = get_config(args.config)
authorized_users = get_authorized_users(args.users_list)
deactivate_security = args.deactivate_security
ASCIIColors.red("Ollama Proxy Server")
ASCIIColors.red("Author: ParisNeo (Saifeddine ALOUI)")
ASCIIColors.red("License: Apache 2.0")
ASCIIColors.red("Repository: https://github.com/ParisNeo/ollama_proxy_server")
global log_queue
log_queue = Queue()
log_file_path = Path(args.log_path)
if not log_file_path.exists() or log_file_path.stat().st_size == 0:
with open(log_file_path, mode='w', newline='') as csvfile:
fieldnames = ['time_stamp', 'event', 'user_name', 'ip_address', 'access', 'server', 'nb_queued_requests_on_server', 'error']
writer = csv.DictWriter(csvfile, fieldnames=fieldnames)
writer.writeheader()
log_writer_thread = threading.Thread(target=log_writer, args=(log_queue, log_file_path))
log_writer_thread.daemon = True
log_writer_thread.start()
ASCIIColors.red("Ollama Proxy server")
ASCIIColors.red("Author: ParisNeo")
class RequestHandler(BaseHTTPRequestHandler):
def add_access_log_entry(self, event, user, ip_address, access, server, nb_queued_requests_on_server, error=""):
log_entry = {
'time_stamp': str(datetime.datetime.now()),
'event': event,
'user_name': user,
'ip_address': ip_address,
'access': access,
'server': server,
'nb_queued_requests_on_server': nb_queued_requests_on_server,
'error': error
}
log_queue.put(log_entry)
log_file_path = Path(args.log_path)
if not log_file_path.exists():
with open(log_file_path, mode='w', newline='') as csvfile:
fieldnames = ['time_stamp', 'event', 'user_name', 'ip_address', 'access', 'server', 'nb_queued_requests_on_server', 'error']
writer = csv.DictWriter(csvfile, fieldnames=fieldnames)
writer.writeheader()
with open(log_file_path, mode='a', newline='') as csvfile:
fieldnames = ['time_stamp', 'event', 'user_name', 'ip_address', 'access', 'server', 'nb_queued_requests_on_server', 'error']
writer = csv.DictWriter(csvfile, fieldnames=fieldnames)
row = {'time_stamp': str(datetime.datetime.now()), 'event':event, 'user_name': user, 'ip_address': ip_address, 'access': access, 'server': server, 'nb_queued_requests_on_server': nb_queued_requests_on_server, 'error': error}
writer.writerow(row)
def _send_response(self, response):
self.send_response(response.status_code)
for key, value in response.headers.items():
self.send_header(key, value)
if key.lower() not in ['content-length', 'transfer-encoding', 'content-encoding']:
self.send_header(key, value)
self.end_headers()
try:
shutil.copyfileobj(response.raw, self.wfile)
# Read the full content to avoid chunking issues
content = response.content
self.wfile.write(content)
self.wfile.flush()
except BrokenPipeError:
pass
@@ -126,11 +99,14 @@ def main():
def _validate_user_and_key(self):
try:
# Extract the bearer token from the headers
auth_header = self.headers.get('Authorization')
if not auth_header or not auth_header.startswith('Bearer '):
return False
token = auth_header.split(' ')[1]
user, key = token.split(':')
token = auth_header.split(' ')[1]
user, key = token.split(':')
# Check if the user and key are in the list of authorized users
if authorized_users.get(user) == key:
self.user = user
return True
@@ -139,73 +115,75 @@ def main():
return False
except:
return False
def proxy(self):
self.user = "unknown"
if not deactivate_security and not self._validate_user_and_key():
ASCIIColors.red('User is not authorized')
client_ip, _ = self.client_address
ASCIIColors.red(f'User is not authorized')
client_ip, client_port = self.client_address
# Extract the bearer token from the headers
auth_header = self.headers.get('Authorization')
if not auth_header or not auth_header.startswith('Bearer '):
self.add_access_log_entry(event='rejected', user="unknown", ip_address=client_ip, access="Denied", server="None", nb_queued_requests_on_server=-1, error="Authentication failed")
else:
token = auth_header.split(' ')[1]
token = auth_header.split(' ')[1]
self.add_access_log_entry(event='rejected', user=token, ip_address=client_ip, access="Denied", server="None", nb_queued_requests_on_server=-1, error="Authentication failed")
self.send_response(403)
self.end_headers()
return
return
url = urlparse(self.path)
path = url.path
get_params = parse_qs(url.query) or {}
post_params = {}
if self.command == "POST":
content_length = int(self.headers['Content-Length'])
post_params = self.rfile.read(content_length)
post_data = self.rfile.read(content_length)
post_params = post_data# parse_qs(post_data.decode('utf-8'))
else:
post_params = {}
min_queued_server = min(servers, key=lambda s: s[1]['ongoing_requests'])
if path in ['/api/generate', '/api/chat', '/v1/chat/completions']:
with min_queued_server[1]['lock']:
min_queued_server[1]['ongoing_requests'] += 1
client_ip, _ = self.client_address
self.add_access_log_entry(event="gen_request", user=self.user, ip_address=client_ip, access="Authorized", server=min_queued_server[0], nb_queued_requests_on_server=min_queued_server[1]['ongoing_requests'])
# Find the server with the lowest number of queue entries.
min_queued_server = servers[0]
for server in servers:
cs = server[1]
if cs['queue'].qsize() < min_queued_server[1]['queue'].qsize():
min_queued_server = server
# Apply the queuing mechanism only for a specific endpoint.
if path == '/api/generate' or path == '/api/chat' or path == '/v1/chat/completions':
que = min_queued_server[1]['queue']
client_ip, client_port = self.client_address
self.add_access_log_entry(event="gen_request", user=self.user, ip_address=client_ip, access="Authorized", server=min_queued_server[0], nb_queued_requests_on_server=que.qsize())
que.put_nowait(1)
try:
post_data_dict = json.loads(post_params.decode('utf-8')) if isinstance(post_params, bytes) else {}
response = min_queued_server[1]['session'].request(
self.command,
min_queued_server[1]['url'] + path,
params=get_params,
data=post_params,
stream=post_data_dict.get("stream", False)
)
post_data_dict = {}
if isinstance(post_data, bytes):
post_data_str = post_data.decode('utf-8')
post_data_dict = json.loads(post_data_str)
response = requests.request(self.command, min_queued_server[1]['url'] + path, params=get_params, data=post_params, stream=post_data_dict.get("stream", False))
self._send_response(response)
except Exception as ex:
self.add_access_log_entry(event="gen_error", user=self.user, ip_address=client_ip, access="Authorized", server=min_queued_server[0], nb_queued_requests_on_server=min_queued_server[1]['ongoing_requests'], error=str(ex))
self.add_access_log_entry(event="gen_error",user=self.user, ip_address=client_ip, access="Authorized", server=min_queued_server[0], nb_queued_requests_on_server=que.qsize(),error=ex)
finally:
with min_queued_server[1]['lock']:
min_queued_server[1]['ongoing_requests'] -= 1
self.add_access_log_entry(event="gen_done", user=self.user, ip_address=client_ip, access="Authorized", server=min_queued_server[0], nb_queued_requests_on_server=min_queued_server[1]['ongoing_requests'])
que.get_nowait()
self.add_access_log_entry(event="gen_done",user=self.user, ip_address=client_ip, access="Authorized", server=min_queued_server[0], nb_queued_requests_on_server=que.qsize())
else:
response = min_queued_server[1]['session'].request(
self.command,
min_queued_server[1]['url'] + path,
params=get_params,
data=post_params
)
# For other endpoints, just mirror the request.
response = requests.request(self.command, min_queued_server[1]['url'] + path, params=get_params, data=post_params)
self._send_response(response)
class ThreadedHTTPServer(ThreadingMixIn, HTTPServer):
pass
print('Starting server')
server = ThreadedHTTPServer(('', args.port), RequestHandler)
server = ThreadedHTTPServer(('', args.port), RequestHandler) # Set the entry port here.
print(f'Running server on port {args.port}')
try:
server.serve_forever()
except KeyboardInterrupt:
log_queue.put(None) # Signal log_writer to exit
server.server_close()
server.serve_forever()
if __name__ == "__main__":
main()

View File

@@ -1,2 +1,8 @@
requests>=2.31.0
ascii_colors>=0.5.2
ascii-colors==0.2.2
certifi==2024.7.4
charset-normalizer==3.3.2
configparser==6.0.1
idna==3.6
queues==0.6.3
requests==2.31.0
urllib3==2.2.1

View File

@@ -6,23 +6,26 @@ import setuptools
with open("README.md", "r") as fh:
long_description = fh.read()
def read_requirements(path: Union[str, Path]):
with open(path, "r") as file:
return file.read().splitlines()
requirements = read_requirements("requirements.txt")
requirements_dev = read_requirements("requirements_dev.txt")
setuptools.setup(
name="ollama_proxy_server",
version="7.1.0",
author="Saifeddine ALOUI (ParisNeo)",
author_email="aloui.saifeddine@gmail.com",
description="A proxy server adding a security layer to Ollama servers, routing requests to minimize server load",
description="A fastapi server for petals decentralized text generation",
long_description=long_description,
long_description_content_type="text/markdown",
url="https://github.com/ParisNeo/ollama_proxy_server",
packages=setuptools.find_packages(),
packages=setuptools.find_packages(),
include_package_data=True,
install_requires=requirements,
entry_points={