Compare commits

...

22 Commits
v0.7.0 ... main

Author SHA1 Message Date
Илья Глазунов
d03ade18c5 increase server backlog to improve connection handling
All checks were successful
Lint Code / lint (push) Successful in 44s
CI/CD Pipeline / lint (push) Successful in 0s
Run Tests / test (3.12) (push) Successful in 2m40s
Run Tests / test (3.13) (push) Successful in 2m32s
CI/CD Pipeline / test (push) Successful in 0s
CI/CD Pipeline / build-and-release (push) Has been skipped
CI/CD Pipeline / notify (push) Successful in 0s
2025-12-04 03:39:52 +03:00
Илья Глазунов
129785706c remove unnecessary blank lines in health and service command files
All checks were successful
Lint Code / lint (push) Successful in 47s
CI/CD Pipeline / lint (push) Successful in 0s
Run Tests / test (3.12) (push) Successful in 2m48s
Run Tests / test (3.13) (push) Successful in 2m40s
CI/CD Pipeline / test (push) Successful in 0s
CI/CD Pipeline / build-and-release (push) Has been skipped
CI/CD Pipeline / notify (push) Successful in 1s
2025-12-04 03:17:28 +03:00
Илья Глазунов
3b59994fc9 fixed pyservectl linter errors and formatting 2025-12-04 03:17:21 +03:00
Илья Глазунов
7662a7924a fixed flake8 lint errors 2025-12-04 03:06:58 +03:00
Илья Глазунов
cec6e927a7 tests for pyservectl 2025-12-04 03:00:56 +03:00
Илья Глазунов
80544d5b95 pyservectl init 2025-12-04 02:55:14 +03:00
Илья Глазунов
b4f63c6804 bump version to 0.9.10
Some checks failed
Run Tests / test (3.12) (push) Successful in 2m39s
Run Tests / test (3.13) (push) Successful in 2m31s
CI/CD Pipeline / lint (push) Successful in 0s
Build and Release / build (push) Successful in 36s
CI/CD Pipeline / test (push) Has been skipped
CI/CD Pipeline / build-and-release (push) Has been skipped
Build and Release / release (push) Successful in 6s
CI/CD Pipeline / notify (push) Successful in 1s
Lint Code / lint (push) Failing after 40s
2025-12-04 01:31:19 +03:00
Илья Глазунов
59d6ae2fd2 fix: correct return type in _load_wsgi_app function 2025-12-04 01:30:18 +03:00
Илья Глазунов
edaccb59bb lint fixes 2025-12-04 01:27:43 +03:00
Илья Глазунов
3454801be7 process_orchestration for asgi added 2025-12-04 01:25:13 +03:00
Илья Глазунов
bb2c3aa357 bump version to 0.9.1
All checks were successful
Build and Release / release (push) Successful in 5s
CI/CD Pipeline / build-and-release (push) Has been skipped
CI/CD Pipeline / notify (push) Successful in 1s
Lint Code / lint (push) Successful in 40s
Run Tests / test (3.12) (push) Successful in 1m3s
Run Tests / test (3.13) (push) Successful in 1m4s
CI/CD Pipeline / lint (push) Successful in 0s
Build and Release / build (push) Successful in 37s
CI/CD Pipeline / test (push) Has been skipped
2025-12-03 13:02:41 +03:00
Илья Глазунов
6761b791c3 version bump in __init__ 2025-12-03 13:02:10 +03:00
Илья Глазунов
fb87445cbd bump version to 0.9.0
Some checks failed
Run Tests / test (3.12) (push) Successful in 1m10s
Run Tests / test (3.13) (push) Successful in 1m9s
CI/CD Pipeline / lint (push) Successful in 0s
Build and Release / build (push) Successful in 41s
CI/CD Pipeline / test (push) Has been skipped
CI/CD Pipeline / build-and-release (push) Has been skipped
Build and Release / release (push) Successful in 5s
CI/CD Pipeline / notify (push) Successful in 1s
Lint Code / lint (push) Failing after 38s
2025-12-03 12:55:42 +03:00
Илья Глазунов
5d863bc97c cython path_matcher added to reduce time on hot operations 2025-12-03 12:54:45 +03:00
Илья Глазунов
6c50a35aa3 bump version to 0.8.0
Some checks failed
Lint Code / lint (push) Failing after 42s
Run Tests / test (3.12) (push) Successful in 1m10s
Run Tests / test (3.13) (push) Successful in 1m10s
CI/CD Pipeline / lint (push) Successful in 0s
Build and Release / build (push) Successful in 35s
CI/CD Pipeline / test (push) Has been skipped
Build and Release / release (push) Successful in 5s
CI/CD Pipeline / build-and-release (push) Has been skipped
CI/CD Pipeline / notify (push) Successful in 1s
2025-12-03 12:24:57 +03:00
Илья Глазунов
3e2704f870 fix linter errors 2025-12-03 12:24:41 +03:00
Илья Глазунов
40e39efa37 lint fix 2025-12-03 12:13:48 +03:00
Илья Глазунов
0d0d1aec80 asgi/wsgi mounting implemented 2025-12-03 12:10:28 +03:00
Илья Глазунов
831eee5d01 bump version to 0.7.1
All checks were successful
CI/CD Pipeline / lint (push) Successful in 0s
CI/CD Pipeline / test (push) Has been skipped
CI/CD Pipeline / build-and-release (push) Has been skipped
CI/CD Pipeline / notify (push) Successful in 0s
Lint Code / lint (push) Successful in 42s
Build and Release / build (push) Successful in 31s
Build and Release / release (push) Successful in 5s
Run Tests / test (3.12) (push) Successful in 1m0s
Run Tests / test (3.13) (push) Successful in 59s
2025-12-03 02:21:23 +03:00
Илья Глазунов
2f49ee576f test for routing module
Some checks failed
Lint Code / lint (push) Failing after 11s
CI/CD Pipeline / lint (push) Successful in 0s
CI/CD Pipeline / test (push) Successful in 0s
CI/CD Pipeline / build-and-release (push) Has been skipped
CI/CD Pipeline / notify (push) Successful in 0s
Run Tests / test (3.12) (push) Failing after 2s
Run Tests / test (3.13) (push) Failing after 1s
2025-12-03 02:20:42 +03:00
Илья Глазунов
2d445462c2 fix in routing
All checks were successful
Lint Code / lint (push) Successful in 39s
CI/CD Pipeline / lint (push) Successful in 0s
Run Tests / test (3.12) (push) Successful in 53s
Run Tests / test (3.13) (push) Successful in 51s
CI/CD Pipeline / test (push) Successful in 1s
CI/CD Pipeline / build-and-release (push) Has been skipped
CI/CD Pipeline / notify (push) Successful in 1s
2025-12-03 02:14:05 +03:00
Илья Глазунов
1b462bd5f0 bump version in __init__.py and some changes in .gitignore
All checks were successful
Lint Code / lint (push) Successful in 40s
CI/CD Pipeline / lint (push) Successful in 0s
Run Tests / test (3.12) (push) Successful in 54s
Run Tests / test (3.13) (push) Successful in 52s
CI/CD Pipeline / test (push) Successful in 0s
CI/CD Pipeline / build-and-release (push) Has been skipped
CI/CD Pipeline / notify (push) Successful in 0s
2025-12-03 00:30:18 +03:00
52 changed files with 12268 additions and 462 deletions

19
.gitignore vendored

@@ -10,4 +10,21 @@ logs/*
static/*
.DS_Store
.coverage
.coverage
docs/
dist/
build/
# Cython generated files
*.c
*.so
*.pyd
*.html
*.egg-info/
# IDE
.idea/
.vscode/
*.swp
*.swo

Makefile

@@ -1,4 +1,4 @@
.PHONY: help install build clean test lint format run dev-install dev-deps check release-patch release-minor release-major pipeline-check
.PHONY: help install build build-cython clean test lint format run dev-install dev-deps check release-patch release-minor release-major pipeline-check benchmark

PYTHON = python3
POETRY = poetry
@@ -21,12 +21,14 @@ help:
	@printf " $(YELLOW)%-20s$(CYAN) %s$(NC)\n" "install-package" "Installing package locally"
	@echo ""
	@echo "$(YELLOW)Building:$(NC)"
	@printf " $(YELLOW)%-20s$(CYAN) %s$(NC)\n" "build" "Building package"
	@printf " $(YELLOW)%-20s$(CYAN) %s$(NC)\n" "build" "Building package (with Cython)"
	@printf " $(YELLOW)%-20s$(CYAN) %s$(NC)\n" "build-cython" "Building Cython extensions only"
	@printf " $(YELLOW)%-20s$(CYAN) %s$(NC)\n" "clean" "Cleaning temporary files"
	@echo ""
	@echo "$(YELLOW)Testing:$(NC)"
	@printf " $(YELLOW)%-20s$(CYAN) %s$(NC)\n" "test" "Running tests"
	@printf " $(YELLOW)%-20s$(CYAN) %s$(NC)\n" "test-cov" "Running tests with coverage"
	@printf " $(YELLOW)%-20s$(CYAN) %s$(NC)\n" "benchmark" "Running performance benchmarks"
	@printf " $(YELLOW)%-20s$(CYAN) %s$(NC)\n" "lint" "Checking code with linters"
	@printf " $(YELLOW)%-20s$(CYAN) %s$(NC)\n" "format" "Formatting code"
	@printf " $(YELLOW)%-20s$(CYAN) %s$(NC)\n" "check" "Lint and test"
@@ -75,18 +77,27 @@ dev-deps:
	@echo "$(GREEN)Installing additional tools...$(NC)"
	$(POETRY) add --group dev pytest pytest-cov black isort mypy flake8

build: clean
build: clean build-cython
	@echo "$(GREEN)Building package...$(NC)"
	$(POETRY) build

build-cython:
	@echo "$(GREEN)Building Cython extensions...$(NC)"
	$(POETRY) run python scripts/build_cython.py build_ext --inplace || echo "$(YELLOW)Cython build skipped (optional)$(NC)"

clean:
	@echo "$(GREEN)Cleaning temporary files...$(NC)"
	rm -rf dist/
	rm -rf build/
	rm -rf *.egg-info/
	find . -type d -name __pycache__ -exec rm -rf {} +
	find . -type f -name "*.pyc" -delete
	find . -type f -name "*.pyo" -delete
	find . -type d -name __pycache__ -exec rm -rf {} + 2>/dev/null || true
	find . -type f -name "*.pyc" -delete 2>/dev/null || true
	find . -type f -name "*.pyo" -delete 2>/dev/null || true
	@# Cython artifacts
	find $(PACKAGE_NAME) -type f -name "*.c" -delete 2>/dev/null || true
	find $(PACKAGE_NAME) -type f -name "*.so" -delete 2>/dev/null || true
	find $(PACKAGE_NAME) -type f -name "*.pyd" -delete 2>/dev/null || true
	find $(PACKAGE_NAME) -type f -name "*.html" -delete 2>/dev/null || true

test:
	@echo "$(GREEN)Running tests...$(NC)"
@@ -98,16 +109,20 @@ test-cov:
lint:
	@echo "$(GREEN)Checking code with linters...$(NC)"
	$(POETRY) run flake8 $(PACKAGE_NAME)/
	$(POETRY) run flake8 $(PACKAGE_NAME)/ --exclude='*.pyx,*.pxd'
	$(POETRY) run mypy $(PACKAGE_NAME)/

format:
	@echo "$(GREEN)Formatting code...$(NC)"
	$(POETRY) run black $(PACKAGE_NAME)/
	$(POETRY) run isort $(PACKAGE_NAME)/
	$(POETRY) run black $(PACKAGE_NAME)/ --exclude='\.pyx$$'
	$(POETRY) run isort $(PACKAGE_NAME)/ --skip-glob='*.pyx'

check: lint test

benchmark: build-cython
	@echo "$(GREEN)Running benchmarks...$(NC)"
	$(POETRY) run python benchmarks/bench_path_matcher.py

run:
	@echo "$(GREEN)Starting server in development mode...$(NC)"
	$(POETRY) run python run.py --debug

172
README.md

@@ -1,145 +1,97 @@
# PyServe
PyServe is a modern, async HTTP server written in Python. Originally created for educational purposes, it has evolved into a powerful tool for rapid prototyping and serving web applications with unique features like AI-generated content.
Python application orchestrator and HTTP server. Runs multiple ASGI/WSGI applications through a single entry point with process isolation, health monitoring, and auto-restart.
<img src="https://raw.githubusercontent.com/ShiftyX1/PyServe/refs/heads/master/images/logo.png" alt="isolated" width="150"/>
<img src="https://raw.githubusercontent.com/ShiftyX1/PyServe/refs/heads/master/images/logo.png" alt="PyServe Logo" width="150"/>
[More on web page](https://pyserve.org/)
Website: [pyserve.org](https://pyserve.org) · Documentation: [docs.pyserve.org](https://docs.pyserve.org)
## Project Overview
## Overview
PyServe v0.6.0 introduces a completely refactored architecture with modern async/await syntax and new exciting features like **Vibe-Serving** - AI-powered dynamic content generation.
PyServe manages multiple Python web applications (FastAPI, Flask, Django, etc.) as isolated subprocesses behind a single gateway. Each app runs on its own port with independent lifecycle, health checks, and automatic restarts on failure.
### Key Features:
```
        PyServe Gateway (:8000)
  ┌────────────────┼────────────────┐
  ▼                ▼                ▼
FastAPI          Flask          Starlette
:9001            :9002            :9003
/api/*          /admin/*          /ws/*
```
- **Async HTTP Server** - Built with Python's asyncio for high performance
- **Advanced Configuration System V2** - Powerful extensible configuration with full backward compatibility
- **Regex Routing & SPA Support** - nginx-style routing patterns with Single Page Application fallback
- **Static File Serving** - Efficient serving with correct MIME types
- **Template System** - Dynamic content generation
- **Vibe-Serving Mode** - AI-generated content using language models (OpenAI, Claude, etc.)
- **Reverse Proxy** - Forward requests to backend services with advanced routing
- **SSL/HTTPS Support** - Secure connections with certificate configuration
- **Modular Extensions** - Plugin-like architecture for security, caching, monitoring
- **Beautiful Logging** - Colored terminal output with file rotation
- **Error Handling** - Styled error pages and graceful fallbacks
- **CLI Interface** - Command-line interface for easy deployment and configuration
## Getting Started
### Prerequisites
- Python 3.12 or higher
- Poetry (recommended) or pip
### Installation
#### Via Poetry (recommended)
## Installation
```bash
git clone https://github.com/ShiftyX1/PyServe.git
cd PyServe
make init # Initialize project
make init
```
#### Or install the package
## Quick Start
```bash
# local install
make install-package
```yaml
# config.yaml
server:
  host: 0.0.0.0
  port: 8000
# after installing the project you can use the pyserve command
pyserve --help
extensions:
  - type: process_orchestration
    config:
      apps:
        - name: api
          path: /api
          app_path: myapp.api:app
        - name: admin
          path: /admin
          app_path: myapp.admin:app
```
### Running the Server
#### Using Makefile (recommended)
```bash
# start in development mode
make run
# start in production mode
make run-prod
# show all available commands
make help
pyserve -c config.yaml
```
#### Using CLI directly
Requests to `/api/*` are proxied to the `api` process, and requests to `/admin/*` to the `admin` process.
## Process Orchestration
The primary use case for PyServe is orchestrating Python web applications. Each application runs as a separate uvicorn process on a dynamically or manually allocated port (9000-9999 by default), and PyServe proxies requests to the appropriate process based on the URL path.
For each application you can configure the number of workers, environment variables, health check endpoint path, and auto-restart parameters. If a process crashes or stops responding to health checks, PyServe automatically restarts it with exponential backoff.
WSGI applications (Flask, Django) are supported through automatic wrapping — just specify `app_type: wsgi`.
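The restart behavior described above can be sketched as follows. This is a minimal illustration of exponential backoff, not PyServe's actual implementation; the function name, delay base, cap, and jitter are assumptions:

```python
import asyncio
import random

async def restart_with_backoff(start_process, max_attempts=5,
                               base_delay=1.0, max_delay=60.0):
    """Retry start_process, doubling the wait after each failure."""
    for attempt in range(max_attempts):
        try:
            return await start_process()
        except Exception:
            # Exponential backoff: base, 2*base, 4*base, ... capped at max_delay,
            # with a little jitter so restarts don't synchronize.
            delay = min(base_delay * (2 ** attempt), max_delay)
            await asyncio.sleep(delay + random.uniform(0, 0.1 * delay))
    raise RuntimeError("process did not start after repeated restarts")
```

The cap matters: without it, a persistently failing app would back off into hour-long waits instead of settling at a steady retry interval.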
## In-Process Mounting
For simpler cases when process isolation is not needed, applications can be mounted directly into the PyServe process via the `asgi` extension. This is lighter and faster, but all applications share one process.
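An in-process mount might be configured like this. The exact schema of the `asgi` extension is not shown in this diff, so the `mounts` key and per-mount fields below are illustrative assumptions modeled on the orchestration example above:

```yaml
extensions:
  - type: asgi
    config:
      mounts:           # assumed key; check the extension docs
        - path: /api
          app_path: myapp.api:app
```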
## Static Files & Routing
PyServe can serve static files with nginx-like routing: regex patterns, SPA fallback for frontend applications, custom caching headers. Routes are processed in priority order — exact match, then regex, then default.
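The priority order can be illustrated with a small resolver sketch. This is a hypothetical simplification, not PyServe's routing code:

```python
import re

def resolve_route(path, exact, regex, default):
    """Resolve a path: exact match first, then regex patterns, then default."""
    if path in exact:
        return exact[path]
    for pattern, target in regex.items():
        if re.search(pattern, path):
            return target
    return default  # e.g. the SPA fallback serving index.html

# Illustrative route tables
exact_routes = {"/health": "health-handler"}
regex_routes = {r"\.(css|js)$": "static-handler"}
```

With these tables, `/health` hits the exact route, `/app/main.css` falls through to the regex route, and anything else lands on the default (SPA fallback).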
## Reverse Proxy
Requests can be proxied to external backends. Useful for integration with legacy services or microservices in other languages.
## CLI
```bash
# after installing package
pyserve
# or with Poetry
poetry run pyserve
# or legacy (for backward compatibility)
python run.py
```
#### CLI options
```bash
# help
pyserve --help
# path to config
pyserve -c /path/to/config.yaml
# override host and port
pyserve -c config.yaml
pyserve --host 0.0.0.0 --port 9000
# debug mode
pyserve --debug
# show version
pyserve --version
```
## Development
### Makefile Commands
```bash
make help # Show help for commands
make install # Install dependencies
make dev-install # Install development dependencies
make build # Build the package
make test # Run tests
make test-cov # Tests with code coverage
make lint # Check code with linters
make format # Format code
make clean # Clean up temporary files
make version # Show version
make publish-test # Publish to Test PyPI
make publish # Publish to PyPI
```
### Project Structure
```
pyserveX/
├── pyserve/ # Main package
│ ├── __init__.py
│ ├── cli.py # CLI interface
│ ├── server.py # Main server module
│ ├── config.py # Configuration system
│ ├── routing.py # Routing
│ ├── extensions.py # Extensions
│ └── logging_utils.py
├── tests/ # Tests
├── static/ # Static files
├── templates/ # Templates
├── logs/ # Logs
├── Makefile # Automation tasks
├── pyproject.toml # Project configuration
├── config.yaml # Server configuration
└── run.py # Entry point (backward compatibility)
make test # run tests
make lint # linting
make format # formatting
```
## License
This project is distributed under the MIT license.
[MIT License](./LICENSE)

benchmarks/bench_path_matcher.py

@@ -0,0 +1,215 @@
#!/usr/bin/env python3
"""
Benchmark script for path_matcher performance comparison.

Compares:
- Pure Python implementation
- Cython implementation (if available)
- Original MountedApp from asgi_mount.py

Usage:
    python benchmarks/bench_path_matcher.py
"""
import time
import statistics
from typing import Callable, List, Tuple

from pyserve._path_matcher_py import (
    FastMountedPath as PyFastMountedPath,
    FastMountManager as PyFastMountManager,
    path_matches_prefix as py_path_matches_prefix,
)

try:
    from pyserve._path_matcher import (
        FastMountedPath as CyFastMountedPath,
        FastMountManager as CyFastMountManager,
        path_matches_prefix as cy_path_matches_prefix,
    )
    CYTHON_AVAILABLE = True
except ImportError:
    CYTHON_AVAILABLE = False
    print("Cython module not compiled. Run: python setup_cython.py build_ext --inplace\n")

from pyserve.asgi_mount import MountedApp


def benchmark(func: Callable, iterations: int = 100000) -> Tuple[float, float]:
    times = []
    for _ in range(1000):
        func()
    for _ in range(iterations):
        start = time.perf_counter_ns()
        func()
        end = time.perf_counter_ns()
        times.append(end - start)
    return statistics.mean(times), statistics.stdev(times)


def format_time(ns: float) -> str:
    if ns < 1000:
        return f"{ns:.1f} ns"
    elif ns < 1_000_000:
        return f"{ns/1000:.2f} µs"
    else:
        return f"{ns/1_000_000:.2f} ms"


def run_benchmarks():
    print("=" * 70)
    print("PATH MATCHER BENCHMARK")
    print("=" * 70)
    print()

    # Test paths
    mount_path = "/api/v1"
    test_paths = [
        "/api/v1/users/123/posts",  # Matching - long
        "/api/v1",                  # Matching - exact
        "/api/v2/users",            # Not matching - similar prefix
        "/other/path",              # Not matching - completely different
    ]
    iterations = 100000

    # =========================================================================
    # Benchmark 1: Single path matching
    # =========================================================================
    print("BENCHMARK 1: Single Path Matching")
    print("-" * 70)
    print(f" Mount path: {mount_path}")
    print(f" Iterations: {iterations:,}")
    print()

    results = {}

    # Original MountedApp
    original_mount = MountedApp(mount_path, app=None, name="test")  # type: ignore

    for test_path in test_paths:
        print(f" Test path: {test_path}")

        # Original
        mean, std = benchmark(lambda: original_mount.matches(test_path), iterations)
        results[("Original", test_path)] = mean
        print(f" Original MountedApp: {format_time(mean):>12} ± {format_time(std)}")

        # Pure Python
        py_mount = PyFastMountedPath(mount_path)
        mean, std = benchmark(lambda: py_mount.matches(test_path), iterations)
        results[("Python", test_path)] = mean
        print(f" Pure Python: {format_time(mean):>12} ± {format_time(std)}")

        # Cython (if available)
        if CYTHON_AVAILABLE:
            cy_mount = CyFastMountedPath(mount_path)
            mean, std = benchmark(lambda: cy_mount.matches(test_path), iterations)
            results[("Cython", test_path)] = mean
            print(f" Cython: {format_time(mean):>12} ± {format_time(std)}")
        print()

    # =========================================================================
    # Benchmark 2: Mount Manager lookup
    # =========================================================================
    print()
    print("BENCHMARK 2: Mount Manager Lookup (10 mounts)")
    print("-" * 70)

    # Setup managers with 10 mounts
    mount_paths = [f"/api/v{i}" for i in range(10)]

    py_manager = PyFastMountManager()
    for p in mount_paths:
        py_manager.add_mount(PyFastMountedPath(p, name=p))

    if CYTHON_AVAILABLE:
        cy_manager = CyFastMountManager()
        for p in mount_paths:
            cy_manager.add_mount(CyFastMountedPath(p, name=p))

    test_lookups = [
        "/api/v5/users/123",  # Middle mount
        "/api/v0/items",      # First mount (longest)
        "/api/v9/data",       # Last mount
        "/other/not/found",   # No match
    ]

    for test_path in test_lookups:
        print(f" Lookup path: {test_path}")

        # Pure Python
        mean, std = benchmark(lambda: py_manager.get_mount(test_path), iterations)
        print(f" Pure Python: {format_time(mean):>12} ± {format_time(std)}")

        # Cython
        if CYTHON_AVAILABLE:
            mean, std = benchmark(lambda: cy_manager.get_mount(test_path), iterations)
            print(f" Cython: {format_time(mean):>12} ± {format_time(std)}")
        print()

    # =========================================================================
    # Benchmark 3: Combined match + modify
    # =========================================================================
    print()
    print("BENCHMARK 3: Combined Match + Modify Path")
    print("-" * 70)

    from pyserve._path_matcher_py import match_and_modify_path as py_match_modify
    if CYTHON_AVAILABLE:
        from pyserve._path_matcher import match_and_modify_path as cy_match_modify

    test_path = "/api/v1/users/123/posts"
    print(f" Test path: {test_path}")
    print(f" Mount path: {mount_path}")
    print()

    # Original (separate calls)
    def original_match_modify():
        if original_mount.matches(test_path):
            return original_mount.get_modified_path(test_path)
        return None

    mean, std = benchmark(original_match_modify, iterations)
    print(f" Original (2 calls): {format_time(mean):>12} ± {format_time(std)}")

    # Pure Python combined
    mean, std = benchmark(lambda: py_match_modify(test_path, mount_path), iterations)
    print(f" Pure Python (combined): {format_time(mean):>12} ± {format_time(std)}")

    # Cython combined
    if CYTHON_AVAILABLE:
        mean, std = benchmark(lambda: cy_match_modify(test_path, mount_path), iterations)
        print(f" Cython (combined): {format_time(mean):>12} ± {format_time(std)}")

    # =========================================================================
    # Summary
    # =========================================================================
    print()
    print("=" * 70)
    print("SUMMARY")
    print("=" * 70)
    if CYTHON_AVAILABLE:
        print("Cython module is available and was benchmarked")
    else:
        print("Cython module not available - only Pure Python was benchmarked")
        print(" To build Cython module:")
        print(" 1. Install Cython: pip install cython")
        print(" 2. Build: python setup_cython.py build_ext --inplace")
    print()
    print("The optimized path matcher provides:")
    print(" - Pre-computed path length and trailing slash")
    print(" - Boundary-aware prefix matching (prevents /api matching /api-v2)")
    print(" - Combined match+modify operation to reduce function calls")
    print(" - Longest-prefix-first ordering in MountManager")


if __name__ == "__main__":
    run_benchmarks()
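The boundary-aware matching listed in the benchmark's summary comes down to one check: a path matches a mount prefix only at a segment boundary, so `/api` never matches `/api-v2`. A minimal sketch of the idea (not the Cython implementation itself):

```python
def path_matches_prefix(path: str, prefix: str) -> bool:
    """True if path equals the mount prefix or continues it at a '/' boundary."""
    if prefix == "/":
        return True  # root mount matches everything
    return path == prefix or path.startswith(prefix + "/")
```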

32
config.docs.yaml Normal file

@@ -0,0 +1,32 @@
# PyServe configuration for serving documentation
# Usage: pyserve -c config.docs.yaml

http:
  static_dir: ./docs
  templates_dir: ./templates

server:
  host: 0.0.0.0
  port: 8000
  backlog: 1000
  default_root: false

ssl:
  enabled: false

logging:
  level: INFO
  console_output: true

extensions:
  - type: routing
    config:
      regex_locations:
        "~*\\.(css)$":
          root: "./docs"
          cache_control: "public, max-age=3600"
        "__default__":
          root: "./docs"
          index_file: "index.html"
          cache_control: "no-cache"


@@ -0,0 +1 @@
"""Example applications package."""


@@ -0,0 +1,194 @@
"""
Example custom ASGI application for PyServe ASGI mounting.

This demonstrates how to create a raw ASGI application without
any framework - similar to Python's http.server but async.
"""
from typing import Dict, Any, List, Callable, Awaitable, Optional
import json

Scope = Dict[str, Any]
Receive = Callable[[], Awaitable[Dict[str, Any]]]
Send = Callable[[Dict[str, Any]], Awaitable[None]]


class SimpleASGIApp:
    def __init__(self):
        self.routes: Dict[str, Callable] = {}
        self._setup_routes()

    def _setup_routes(self) -> None:
        self.routes = {
            "/": self._handle_root,
            "/health": self._handle_health,
            "/echo": self._handle_echo,
            "/info": self._handle_info,
            "/headers": self._handle_headers,
        }

    async def __call__(self, scope: Scope, receive: Receive, send: Send) -> None:
        if scope["type"] != "http":
            return

        path = scope.get("path", "/")
        method = scope.get("method", "GET")

        handler = self.routes.get(path)
        if handler is None:
            if path.startswith("/echo/"):
                handler = self._handle_echo_path
            else:
                await self._send_response(
                    send,
                    status=404,
                    body={"error": "Not found", "path": path}
                )
                return

        await handler(scope, receive, send)

    async def _handle_root(self, scope: Scope, receive: Receive, send: Send) -> None:
        await self._send_response(
            send,
            body={
                "message": "Welcome to Custom ASGI App mounted in PyServe!",
                "description": "This is a raw ASGI application without any framework",
                "endpoints": list(self.routes.keys()) + ["/echo/{message}"],
            }
        )

    async def _handle_health(self, scope: Scope, receive: Receive, send: Send) -> None:
        await self._send_response(
            send,
            body={"status": "healthy", "app": "custom-asgi"}
        )

    async def _handle_echo(self, scope: Scope, receive: Receive, send: Send) -> None:
        method = scope.get("method", "GET")
        if method == "POST":
            body = await self._read_body(receive)
            await self._send_response(
                send,
                body={"echo": body.decode("utf-8") if body else ""}
            )
        else:
            await self._send_response(
                send,
                body={"message": "Send a POST request to echo data"}
            )

    async def _handle_echo_path(self, scope: Scope, receive: Receive, send: Send) -> None:
        path = scope.get("path", "")
        message = path.replace("/echo/", "", 1)
        await self._send_response(
            send,
            body={"echo": message}
        )

    async def _handle_info(self, scope: Scope, receive: Receive, send: Send) -> None:
        await self._send_response(
            send,
            body={
                "method": scope.get("method"),
                "path": scope.get("path"),
                "query_string": scope.get("query_string", b"").decode("utf-8"),
                "root_path": scope.get("root_path", ""),
                "scheme": scope.get("scheme", "http"),
                "server": list(scope.get("server", ())),
                "asgi": scope.get("asgi", {}),
            }
        )

    async def _handle_headers(self, scope: Scope, receive: Receive, send: Send) -> None:
        headers = {}
        for name, value in scope.get("headers", []):
            headers[name.decode("utf-8")] = value.decode("utf-8")
        await self._send_response(
            send,
            body={"headers": headers}
        )

    async def _read_body(self, receive: Receive) -> bytes:
        body = b""
        more_body = True
        while more_body:
            message = await receive()
            body += message.get("body", b"")
            more_body = message.get("more_body", False)
        return body

    async def _send_response(
        self,
        send: Send,
        status: int = 200,
        body: Any = None,
        content_type: str = "application/json",
        headers: Optional[List[tuple]] = None,
    ) -> None:
        response_headers = [
            (b"content-type", content_type.encode("utf-8")),
        ]
        if headers:
            response_headers.extend(headers)

        if body is not None:
            if content_type == "application/json":
                body_bytes = json.dumps(body, ensure_ascii=False).encode("utf-8")
            elif isinstance(body, bytes):
                body_bytes = body
            else:
                body_bytes = str(body).encode("utf-8")
        else:
            body_bytes = b""

        response_headers.append(
            (b"content-length", str(len(body_bytes)).encode("utf-8"))
        )

        await send({
            "type": "http.response.start",
            "status": status,
            "headers": response_headers,
        })
        await send({
            "type": "http.response.body",
            "body": body_bytes,
        })


app = SimpleASGIApp()


async def simple_asgi_app(scope: Scope, receive: Receive, send: Send) -> None:
    if scope["type"] != "http":
        return

    response_body = json.dumps({
        "message": "Hello from minimal ASGI app!",
        "path": scope.get("path", "/"),
    }).encode("utf-8")

    await send({
        "type": "http.response.start",
        "status": 200,
        "headers": [
            (b"content-type", b"application/json"),
            (b"content-length", str(len(response_body)).encode("utf-8")),
        ],
    })
    await send({
        "type": "http.response.body",
        "body": response_body,
    })


if __name__ == "__main__":
    import uvicorn
    uvicorn.run(app, host="0.0.0.0", port=8004)


@@ -0,0 +1,118 @@
"""
Example FastAPI application for PyServe ASGI mounting.

This demonstrates how to create a FastAPI application that can be
mounted at a specific path in PyServe.
"""
from typing import Optional, Dict, Any

try:
    from fastapi import FastAPI, HTTPException
    from fastapi.responses import JSONResponse
    from pydantic import BaseModel
except ImportError:
    raise ImportError(
        "FastAPI is not installed. Install with: pip install fastapi"
    )

app = FastAPI(
    title="Example FastAPI App",
    description="This is an example FastAPI application mounted in PyServe",
    version="1.0.0",
)


class Item(BaseModel):
    name: str
    description: Optional[str] = None
    price: float
    tax: Optional[float] = None


class Message(BaseModel):
    message: str


items_db: Dict[int, Dict[str, Any]] = {
    1: {"name": "Item 1", "description": "First item", "price": 10.5, "tax": 1.05},
    2: {"name": "Item 2", "description": "Second item", "price": 20.0, "tax": 2.0},
}


@app.get("/")
async def root():
    return {"message": "Welcome to FastAPI mounted in PyServe!"}


@app.get("/health")
async def health_check():
    return {"status": "healthy", "app": "fastapi"}


@app.get("/items")
async def list_items():
    return {"items": list(items_db.values()), "count": len(items_db)}


@app.get("/items/{item_id}")
async def get_item(item_id: int):
    if item_id not in items_db:
        raise HTTPException(status_code=404, detail="Item not found")
    return items_db[item_id]


@app.post("/items", response_model=Message)
async def create_item(item: Item):
    new_id = max(items_db.keys()) + 1 if items_db else 1
    items_db[new_id] = item.model_dump()
    return {"message": f"Item created with ID {new_id}"}


@app.put("/items/{item_id}")
async def update_item(item_id: int, item: Item):
    if item_id not in items_db:
        raise HTTPException(status_code=404, detail="Item not found")
    items_db[item_id] = item.model_dump()
    return {"message": f"Item {item_id} updated"}


@app.delete("/items/{item_id}")
async def delete_item(item_id: int):
    if item_id not in items_db:
        raise HTTPException(status_code=404, detail="Item not found")
    del items_db[item_id]
    return {"message": f"Item {item_id} deleted"}


def create_app(debug: bool = False, **kwargs) -> FastAPI:
    application = FastAPI(
        title="Example FastAPI App (Factory)",
        description="FastAPI application created via factory function",
        version="2.0.0",
        debug=debug,
    )

    @application.get("/")
    async def factory_root():
        return {
            "message": "Welcome to FastAPI (factory) mounted in PyServe!",
            "debug": debug,
            "config": kwargs,
        }

    @application.get("/health")
    async def factory_health():
        return {"status": "healthy", "app": "fastapi-factory", "debug": debug}

    @application.get("/echo/{message}")
    async def echo(message: str):
        return {"echo": message}

    return application


if __name__ == "__main__":
    import uvicorn
    uvicorn.run(app, host="0.0.0.0", port=8001)

112
examples/apps/flask_app.py Normal file

@ -0,0 +1,112 @@
"""
Example Flask application for PyServe ASGI mounting.
This demonstrates how to create a Flask application that can be
mounted at a specific path in PyServe (via WSGI-to-ASGI adapter).
"""
from typing import Optional

try:
    from flask import Flask, jsonify, request
except ImportError:
    raise ImportError(
        "Flask is not installed. Install with: pip install flask"
    )

app = Flask(__name__)

users_db = {
    1: {"id": 1, "name": "Alice", "email": "alice@example.com"},
    2: {"id": 2, "name": "Bob", "email": "bob@example.com"},
}


@app.route("/")
def root():
    return jsonify({"message": "Welcome to Flask mounted in PyServe!"})


@app.route("/health")
def health_check():
    return jsonify({"status": "healthy", "app": "flask"})


@app.route("/users")
def list_users():
    return jsonify({"users": list(users_db.values()), "count": len(users_db)})


@app.route("/users/<int:user_id>")
def get_user(user_id: int):
    if user_id not in users_db:
        return jsonify({"error": "User not found"}), 404
    return jsonify(users_db[user_id])


@app.route("/users", methods=["POST"])
def create_user():
    data = request.get_json()
    if not data or "name" not in data:
        return jsonify({"error": "Name is required"}), 400
    new_id = max(users_db.keys()) + 1 if users_db else 1
    users_db[new_id] = {
        "id": new_id,
        "name": data["name"],
        "email": data.get("email", ""),
    }
    return jsonify({"message": f"User created with ID {new_id}", "user": users_db[new_id]}), 201


@app.route("/users/<int:user_id>", methods=["PUT"])
def update_user(user_id: int):
    if user_id not in users_db:
        return jsonify({"error": "User not found"}), 404
    data = request.get_json()
    if data:
        if "name" in data:
            users_db[user_id]["name"] = data["name"]
        if "email" in data:
            users_db[user_id]["email"] = data["email"]
    return jsonify({"message": f"User {user_id} updated", "user": users_db[user_id]})


@app.route("/users/<int:user_id>", methods=["DELETE"])
def delete_user(user_id: int):
    if user_id not in users_db:
        return jsonify({"error": "User not found"}), 404
    del users_db[user_id]
    return jsonify({"message": f"User {user_id} deleted"})


def create_app(config: Optional[dict] = None) -> Flask:
    application = Flask(__name__)
    if config:
        application.config.update(config)

    @application.route("/")
    def factory_root():
        return jsonify({
            "message": "Welcome to Flask (factory) mounted in PyServe!",
            "config": config or {},
        })

    @application.route("/health")
    def factory_health():
        return jsonify({"status": "healthy", "app": "flask-factory"})

    @application.route("/echo/<message>")
    def echo(message: str):
        return jsonify({"echo": message})

    return application


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8002, debug=True)
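The `app` object above is, under the hood, a plain WSGI callable, which is what lets PyServe wrap it for ASGI mounting (via a2wsgi, per the lock file). A minimal stdlib-only sketch of that calling convention — no Flask required; `tiny_wsgi_app` and `call_wsgi` are hypothetical names, not PyServe's API:

```python
import sys
from io import BytesIO


# Minimal WSGI app standing in for the Flask example above (hypothetical).
def tiny_wsgi_app(environ, start_response):
    body = b'{"status": "healthy"}'
    start_response("200 OK", [("Content-Type", "application/json"),
                              ("Content-Length", str(len(body)))])
    return [body]


# Drive the app the way a WSGI server (or a WSGI-to-ASGI bridge) would.
def call_wsgi(app, path):
    environ = {
        "REQUEST_METHOD": "GET",
        "PATH_INFO": path,
        "SERVER_NAME": "localhost",
        "SERVER_PORT": "8080",
        "wsgi.version": (1, 0),
        "wsgi.url_scheme": "http",
        "wsgi.input": BytesIO(b""),
        "wsgi.errors": sys.stderr,
    }
    captured = {}

    def start_response(status, headers):
        captured["status"] = status
        captured["headers"] = headers

    chunks = app(environ, start_response)
    return captured["status"], b"".join(chunks)


status, body = call_wsgi(tiny_wsgi_app, "/health")
print(status, body)  # 200 OK b'{"status": "healthy"}'
```

The environ dict and `start_response` callback are the whole contract; a bridge like a2wsgi builds the environ from the ASGI scope and relays the captured status and chunks as ASGI messages.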

@@ -0,0 +1,112 @@
"""
Example Starlette application for PyServe ASGI mounting.

This demonstrates how to create a Starlette application that can be
mounted at a specific path in PyServe.
"""
try:
    from starlette.applications import Starlette
    from starlette.responses import JSONResponse
    from starlette.routing import Route
    from starlette.requests import Request
except ImportError:
    raise ImportError(
        "Starlette is not installed. Install with: pip install starlette"
    )

tasks_db = {
    1: {"id": 1, "title": "Task 1", "completed": False},
    2: {"id": 2, "title": "Task 2", "completed": True},
}


async def homepage(request: Request) -> JSONResponse:
    return JSONResponse({
        "message": "Welcome to Starlette mounted in PyServe!"
    })


async def health_check(request: Request) -> JSONResponse:
    return JSONResponse({"status": "healthy", "app": "starlette"})


async def list_tasks(request: Request) -> JSONResponse:
    return JSONResponse({
        "tasks": list(tasks_db.values()),
        "count": len(tasks_db)
    })


async def get_task(request: Request) -> JSONResponse:
    task_id = int(request.path_params["task_id"])
    if task_id not in tasks_db:
        return JSONResponse({"error": "Task not found"}, status_code=404)
    return JSONResponse(tasks_db[task_id])


async def create_task(request: Request) -> JSONResponse:
    data = await request.json()
    if not data or "title" not in data:
        return JSONResponse({"error": "Title is required"}, status_code=400)
    new_id = max(tasks_db.keys()) + 1 if tasks_db else 1
    tasks_db[new_id] = {
        "id": new_id,
        "title": data["title"],
        "completed": data.get("completed", False),
    }
    return JSONResponse(
        {"message": f"Task created with ID {new_id}", "task": tasks_db[new_id]},
        status_code=201
    )


async def update_task(request: Request) -> JSONResponse:
    task_id = int(request.path_params["task_id"])
    if task_id not in tasks_db:
        return JSONResponse({"error": "Task not found"}, status_code=404)
    data = await request.json()
    if data:
        if "title" in data:
            tasks_db[task_id]["title"] = data["title"]
        if "completed" in data:
            tasks_db[task_id]["completed"] = data["completed"]
    return JSONResponse({
        "message": f"Task {task_id} updated",
        "task": tasks_db[task_id]
    })


async def delete_task(request: Request) -> JSONResponse:
    task_id = int(request.path_params["task_id"])
    if task_id not in tasks_db:
        return JSONResponse({"error": "Task not found"}, status_code=404)
    del tasks_db[task_id]
    return JSONResponse({"message": f"Task {task_id} deleted"})


routes = [
    Route("/", homepage),
    Route("/health", health_check),
    Route("/tasks", list_tasks, methods=["GET"]),
    Route("/tasks", create_task, methods=["POST"]),
    Route("/tasks/{task_id:int}", get_task, methods=["GET"]),
    Route("/tasks/{task_id:int}", update_task, methods=["PUT"]),
    Route("/tasks/{task_id:int}", delete_task, methods=["DELETE"]),
]

app = Starlette(debug=True, routes=routes)


def create_app(debug: bool = False) -> Starlette:
    return Starlette(debug=debug, routes=routes)


if __name__ == "__main__":
    import uvicorn

    uvicorn.run(app, host="0.0.0.0", port=8003)
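What Starlette hides behind `Route` handlers is the raw ASGI calling convention that PyServe's ASGI mounts ultimately speak: a coroutine taking `(scope, receive, send)`. A minimal stdlib-only sketch of that protocol, in the spirit of the `custom_asgi` example referenced by the configs (`tiny_asgi_app` and `call_asgi` are hypothetical names):

```python
import asyncio
import json


# Bare ASGI application: no framework, just the (scope, receive, send) contract.
async def tiny_asgi_app(scope, receive, send):
    assert scope["type"] == "http"
    body = json.dumps({"path": scope["path"]}).encode()
    await send({
        "type": "http.response.start",
        "status": 200,
        "headers": [(b"content-type", b"application/json")],
    })
    await send({"type": "http.response.body", "body": body})


# Drive the app the way an ASGI server would, capturing sent messages.
async def call_asgi(app, path):
    sent = []

    async def receive():
        return {"type": "http.request", "body": b"", "more_body": False}

    async def send(message):
        sent.append(message)

    await app({"type": "http", "method": "GET", "path": path}, receive, send)
    return sent


messages = asyncio.run(call_asgi(tiny_asgi_app, "/tasks"))
print(messages[0]["status"], messages[1]["body"])  # 200 b'{"path": "/tasks"}'
```

Everything PyServe needs to mount an app — FastAPI, Starlette, or hand-rolled — reduces to this callable shape; WSGI apps only differ in needing the bridge shown earlier.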

@@ -0,0 +1,103 @@
# Example configuration for ASGI application mounts
# This demonstrates how to mount various Python web frameworks

http:
  static_dir: ./static
  templates_dir: ./templates

server:
  host: 0.0.0.0
  port: 8080
  backlog: 5
  proxy_timeout: 30.0

logging:
  level: DEBUG
  console_output: true
  format:
    type: standard
    use_colors: true

extensions:
  # ASGI Application Mount Extension
  - type: asgi
    config:
      mounts:
        # FastAPI application
        - path: "/api"
          app_path: "examples.apps.fastapi_app:app"
          app_type: asgi
          name: "fastapi-api"
          strip_path: true

        # FastAPI with factory pattern
        - path: "/api/v2"
          app_path: "examples.apps.fastapi_app:create_app"
          app_type: asgi
          factory: true
          factory_args:
            debug: true
          name: "fastapi-api-v2"
          strip_path: true

        # Flask application (WSGI wrapped to ASGI)
        - path: "/flask"
          app_path: "examples.apps.flask_app:app"
          app_type: wsgi
          name: "flask-app"
          strip_path: true

        # Flask with factory pattern
        - path: "/flask-v2"
          app_path: "examples.apps.flask_app:create_app"
          app_type: wsgi
          factory: true
          name: "flask-app-factory"
          strip_path: true

        # Django application
        # Uncomment and configure for your Django project
        # - path: "/django"
        #   django_settings: "myproject.settings"
        #   module_path: "/path/to/django/project"
        #   name: "django-app"
        #   strip_path: true

        # Starlette application
        - path: "/starlette"
          app_path: "examples.apps.starlette_app:app"
          app_type: asgi
          name: "starlette-app"
          strip_path: true

        # Custom ASGI application (http.server style)
        - path: "/custom"
          app_path: "examples.apps.custom_asgi:app"
          app_type: asgi
          name: "custom-asgi"
          strip_path: true

  # Standard routing for other paths
  - type: routing
    config:
      regex_locations:
        # Health check
        "=/health":
          return: "200 OK"
          content_type: "text/plain"

        # Static files
        "~*\\.(js|css|png|jpg|gif|ico|svg|woff2?)$":
          root: "./static"
          cache_control: "public, max-age=31536000"

        # Root path
        "=/":
          root: "./static"
          index_file: "index.html"

        # Default fallback
        "__default__":
          spa_fallback: true
          root: "./static"
          index_file: "index.html"
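Each mount above sets `strip_path: true`, meaning the mount prefix is removed before the request reaches the mounted app (so the FastAPI app mounted at `/api` sees `/users/1`, not `/api/users/1`). A rough stdlib-only sketch of that rewrite — the function name is hypothetical, not PyServe's actual implementation:

```python
def strip_mount_prefix(path: str, mount: str) -> str:
    """Remove the mount prefix so the app sees paths relative to its root."""
    if path == mount:
        # Request for the mount point itself maps to the app's root.
        return "/"
    if path.startswith(mount + "/"):
        return path[len(mount):]
    return path  # not under this mount; leave unchanged


print(strip_mount_prefix("/api/users/1", "/api"))    # /users/1
print(strip_mount_prefix("/api", "/api"))            # /
print(strip_mount_prefix("/flask/health", "/flask")) # /health
```

Matching on `mount + "/"` rather than a bare prefix avoids accidentally stripping `/api` from a path like `/apiv2/users`.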

@@ -0,0 +1,113 @@
# PyServe Process Orchestration Example
#
# This configuration demonstrates running multiple ASGI/WSGI applications
# as isolated processes with automatic health monitoring and restart.

server:
  host: "0.0.0.0"
  port: 8000
  backlog: 2048
  proxy_timeout: 60.0

logging:
  level: DEBUG
  console_output: true
  format:
    type: standard
    use_colors: true

extensions:
  # Process Orchestration - runs each app in its own process
  - type: process_orchestration
    config:
      # Port range for worker processes
      port_range: [9000, 9999]
      # Enable health monitoring
      health_check_enabled: true
      # Proxy timeout for requests
      proxy_timeout: 60.0

      apps:
        # FastAPI application
        - name: api
          path: /api
          app_path: examples.apps.fastapi_app:app
          module_path: "."
          workers: 2
          health_check_path: /health
          health_check_interval: 10.0
          health_check_timeout: 5.0
          health_check_retries: 3
          max_restart_count: 5
          restart_delay: 1.0
          shutdown_timeout: 30.0
          strip_path: true
          env:
            APP_ENV: "production"
            DEBUG: "false"

        # Flask application (WSGI wrapped to ASGI)
        - name: admin
          path: /admin
          app_path: examples.apps.flask_app:app
          app_type: wsgi
          module_path: "."
          workers: 1
          health_check_path: /health
          strip_path: true

        # Starlette application
        - name: web
          path: /web
          app_path: examples.apps.starlette_app:app
          module_path: "."
          workers: 2
          health_check_path: /health
          strip_path: true

        # Custom ASGI application
        - name: custom
          path: /custom
          app_path: examples.apps.custom_asgi:app
          module_path: "."
          workers: 1
          health_check_path: /health
          strip_path: true

  # Routing for static files and reverse proxy
  - type: routing
    config:
      regex_locations:
        # Static files
        "^/static/.*":
          type: static
          root: "./static"
          strip_prefix: "/static"

        # Documentation
        "^/docs/?.*":
          type: static
          root: "./docs"
          strip_prefix: "/docs"

        # External API proxy
        "^/external/.*":
          type: proxy
          upstream: "https://api.example.com"
          strip_prefix: "/external"

  # Security headers
  - type: security
    config:
      security_headers:
        X-Content-Type-Options: "nosniff"
        X-Frame-Options: "DENY"
        X-XSS-Protection: "1; mode=block"
        Strict-Transport-Security: "max-age=31536000; includeSubDomains"

  # Monitoring
  - type: monitoring
    config:
      enable_metrics: true
@@ -12,6 +12,22 @@ warn_unused_ignores = True
warn_no_return = True
warn_unreachable = True
strict_equality = True
exclude = (?x)(
    ^pyserve/_path_matcher\.pyx$
  )

[mypy-tests.*]
disallow_untyped_defs = False

[mypy-pyserve._path_matcher]
ignore_missing_imports = True
follow_imports = skip

[mypy-django.*]
ignore_missing_imports = True

[mypy-a2wsgi]
ignore_missing_imports = True

[mypy-asgiref.*]
ignore_missing_imports = True

poetry.lock (generated)

@@ -1,5 +1,44 @@
# This file is automatically @generated by Poetry 2.1.2 and should not be changed by hand.
[[package]]
name = "a2wsgi"
version = "1.10.10"
description = "Convert WSGI app to ASGI app or ASGI app to WSGI app."
optional = true
python-versions = ">=3.8.0"
groups = ["main"]
markers = "extra == \"wsgi\" or extra == \"flask\" or extra == \"all-frameworks\""
files = [
{file = "a2wsgi-1.10.10-py3-none-any.whl", hash = "sha256:d2b21379479718539dc15fce53b876251a0efe7615352dfe49f6ad1bc507848d"},
{file = "a2wsgi-1.10.10.tar.gz", hash = "sha256:a5bcffb52081ba39df0d5e9a884fc6f819d92e3a42389343ba77cbf809fe1f45"},
]
[[package]]
name = "annotated-doc"
version = "0.0.4"
description = "Document parameters, class attributes, return types, and variables inline, with Annotated."
optional = true
python-versions = ">=3.8"
groups = ["main"]
markers = "extra == \"fastapi\" or extra == \"all-frameworks\""
files = [
{file = "annotated_doc-0.0.4-py3-none-any.whl", hash = "sha256:571ac1dc6991c450b25a9c2d84a3705e2ae7a53467b5d111c24fa8baabbed320"},
{file = "annotated_doc-0.0.4.tar.gz", hash = "sha256:fbcda96e87e9c92ad167c2e53839e57503ecfda18804ea28102353485033faa4"},
]
[[package]]
name = "annotated-types"
version = "0.7.0"
description = "Reusable constraint types to use with typing.Annotated"
optional = true
python-versions = ">=3.8"
groups = ["main"]
markers = "extra == \"fastapi\" or extra == \"all-frameworks\""
files = [
{file = "annotated_types-0.7.0-py3-none-any.whl", hash = "sha256:1f02e8b43a8fbbc3f3e0d4f0f4bfc8131bcb4eebe8849b8e5c773f3a1c582a53"},
{file = "annotated_types-0.7.0.tar.gz", hash = "sha256:aff07c09a53a08bc8cfccb9c85b05f1aa9a2a6f23728d790723543408344ce89"},
]
[[package]]
name = "anyio"
version = "4.10.0"
@@ -20,6 +59,22 @@ typing_extensions = {version = ">=4.5", markers = "python_version < \"3.13\""}
[package.extras]
trio = ["trio (>=0.26.1)"]
[[package]]
name = "asgiref"
version = "3.11.0"
description = "ASGI specs, helper code, and adapters"
optional = true
python-versions = ">=3.9"
groups = ["main"]
markers = "extra == \"django\" or extra == \"all-frameworks\""
files = [
{file = "asgiref-3.11.0-py3-none-any.whl", hash = "sha256:1db9021efadb0d9512ce8ffaf72fcef601c7b73a8807a1bb2ef143dc6b14846d"},
{file = "asgiref-3.11.0.tar.gz", hash = "sha256:13acff32519542a1736223fb79a715acdebe24286d98e8b164a73085f40da2c4"},
]
[package.extras]
tests = ["mypy (>=1.14.0)", "pytest", "pytest-asyncio"]
[[package]]
name = "black"
version = "25.1.0"
@@ -65,6 +120,19 @@ d = ["aiohttp (>=3.10)"]
jupyter = ["ipython (>=7.8.0)", "tokenize-rt (>=3.2.0)"]
uvloop = ["uvloop (>=0.15.2)"]
[[package]]
name = "blinker"
version = "1.9.0"
description = "Fast, simple object-to-object and broadcast signaling"
optional = true
python-versions = ">=3.9"
groups = ["main"]
markers = "extra == \"flask\" or extra == \"all-frameworks\""
files = [
{file = "blinker-1.9.0-py3-none-any.whl", hash = "sha256:ba0efaa9080b619ff2f3459d1d500c57bddea4a6b424b60a91141db6fd2f08bc"},
{file = "blinker-1.9.0.tar.gz", hash = "sha256:b4ce2265a7abece45e7cc896e98dbebe6cead56bcf805a3d23136d145f5445bf"},
]
[[package]]
name = "certifi"
version = "2025.11.12"
@@ -79,14 +147,14 @@ files = [
[[package]]
name = "click"
version = "8.2.1"
version = "8.3.1"
description = "Composable command line interface toolkit"
optional = false
python-versions = ">=3.10"
groups = ["main", "dev"]
files = [
{file = "click-8.2.1-py3-none-any.whl", hash = "sha256:61a3265b914e850b85317d0b3109c7f8cd35a670f963866005d6ef1d5175a12b"},
{file = "click-8.2.1.tar.gz", hash = "sha256:27c491cc05d968d271d5a1db13e3b5a184636d9d930f148c50b038f0d0646202"},
{file = "click-8.3.1-py3-none-any.whl", hash = "sha256:981153a64e25f12d547d3426c367a4857371575ee7ad18df2a6183ab0545b2a6"},
{file = "click-8.3.1.tar.gz", hash = "sha256:12ff4785d337a1bb490bb7e9c2b1ee5da3112e94a8622f26a6c77f5d2fc6842a"},
]
[package.dependencies]
@@ -206,6 +274,101 @@ files = [
[package.extras]
toml = ["tomli ; python_full_version <= \"3.11.0a6\""]
[[package]]
name = "cython"
version = "3.2.2"
description = "The Cython compiler for writing C extensions in the Python language."
optional = false
python-versions = ">=3.8"
groups = ["dev"]
files = [
{file = "cython-3.2.2-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:b5afac4e77e71a9010dc7fd3191ced00f9b12b494dd7525c140781054ce63a73"},
{file = "cython-3.2.2-cp310-cp310-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:9cd2ede6af225499ad22888dbfb13b92d71fc1016f401ee637559a5831b177c2"},
{file = "cython-3.2.2-cp310-cp310-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:8c9265b3e84ae2d999b7c3165c683e366bbbbbe4346468055ca2366fe013f2df"},
{file = "cython-3.2.2-cp310-cp310-win_amd64.whl", hash = "sha256:d7b3447b2005dffc5f276d420a480d2b57d15091242652d410b6a46fb00ed251"},
{file = "cython-3.2.2-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:d140c2701cbb8cf960300cf1b67f3b4fa9d294d32e51b85f329bff56936a82fd"},
{file = "cython-3.2.2-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:50bbaabee733fd2780985e459fc20f655e02def83e8eff10220ad88455a34622"},
{file = "cython-3.2.2-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:a9509f1e9c41c86b790cff745bb31927bbc861662a3b462596d71d3d2a578abb"},
{file = "cython-3.2.2-cp311-cp311-win_amd64.whl", hash = "sha256:034ab96cb8bc8e7432bc27491f8d66f51e435b1eb21ddc03aa844be8f21ad847"},
{file = "cython-3.2.2-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:692a41c8fe06fb2dc55ca2c8d71c80c469fd16fe69486ed99f3b3cbb2d3af83f"},
{file = "cython-3.2.2-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:098590c1dc309f8a0406ade031963a95a87714296b425539f9920aebf924560d"},
{file = "cython-3.2.2-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:a3898c076e9c458bcb3e4936187919fda5f5365fe4c567d35d2b003444b6f3fe"},
{file = "cython-3.2.2-cp312-cp312-win_amd64.whl", hash = "sha256:2b910b89a2a71004064c5e890b9512a251eda63fae252caa0feb9835057035f9"},
{file = "cython-3.2.2-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:aa24cd0bdab27ca099b2467806c684404add597c1108e07ddf7b6471653c85d7"},
{file = "cython-3.2.2-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:60f4aa425e1ff98abf8d965ae7020f06dd2cbc01dbd945137d2f9cca4ff0524a"},
{file = "cython-3.2.2-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:a473df474ba89e9fee81ee82b31062a267f9e598096b222783477e56d02ad12c"},
{file = "cython-3.2.2-cp313-cp313-win_amd64.whl", hash = "sha256:b4df52101209817fde7284cf779156f79142fb639b1d7840f11680ff4bb30604"},
{file = "cython-3.2.2-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:177faf4d61e9f2d4d2db61194ac9ec16d3fe3041c1b6830f871a01935319eeb3"},
{file = "cython-3.2.2-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:8db28aef793c81dc69383b619ca508668998aaf099cd839d3cbae85184cce744"},
{file = "cython-3.2.2-cp314-cp314-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:3de43a5786033a27fae1c882feb5ff0d023c38b83356e6800c1be0bcd6cf9f11"},
{file = "cython-3.2.2-cp314-cp314-win_amd64.whl", hash = "sha256:fed44d0ab2d36f1b0301c770b0dafec23bcb9700d58e7769cd6d9136b3304c11"},
{file = "cython-3.2.2-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:e7200309b81f4066cf36a96efeec646716ca74afd73d159045169263db891133"},
{file = "cython-3.2.2-cp38-cp38-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:8e72ee88a9a5381d30a6da116a3c8352730b9b038a49ed9bc5c3d0ed6d69b06c"},
{file = "cython-3.2.2-cp38-cp38-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:0e35ff0f1bb3a7a5c40afb8fb540e4178b6551909f10748bf39d323f8140ccf3"},
{file = "cython-3.2.2-cp38-cp38-win_amd64.whl", hash = "sha256:b223c1f84c3420c24f6a4858e979524bd35a79437a5839e29d41201c87ed119d"},
{file = "cython-3.2.2-cp39-abi3-macosx_10_9_x86_64.whl", hash = "sha256:a6387e3ad31342443916db9a419509935fddd8d4cbac34aab9c895ae55326a56"},
{file = "cython-3.2.2-cp39-abi3-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:436eb562d0affbc0b959f62f3f9c1ed251b9499e4f29c1d19514ae859894b6bf"},
{file = "cython-3.2.2-cp39-abi3-manylinux2014_armv7l.manylinux_2_17_armv7l.manylinux_2_31_armv7l.whl", hash = "sha256:f560ff3aea5b5df93853ec7bf1a1e9623d6d511f4192f197559aca18fca43392"},
{file = "cython-3.2.2-cp39-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:d8c93fe128b58942832b1fcac96e48f93c2c69b569eff0d38d30fb5995fecfa0"},
{file = "cython-3.2.2-cp39-abi3-musllinux_1_2_armv7l.whl", hash = "sha256:b4fe499eed7cd70b2aa4e096b9ce2588f5e6fdf049b46d40a5e55efcde6e4904"},
{file = "cython-3.2.2-cp39-abi3-musllinux_1_2_i686.whl", hash = "sha256:14432d7f207245a3c35556155873f494784169297b28978a6204f1c60d31553e"},
{file = "cython-3.2.2-cp39-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:820c4a99dbf6b3e6c0300be42b4040b501eff0e1feeb80cfa52c48a346fb0df2"},
{file = "cython-3.2.2-cp39-abi3-win32.whl", hash = "sha256:826cad0ad43ab05a26e873b5d625f64d458dc739ec6fdeecab848b60a91c4252"},
{file = "cython-3.2.2-cp39-abi3-win_arm64.whl", hash = "sha256:5f818d40bbcf17e2089e2de7840f0de1c0ca527acf9b044aba79d5f5d8a5bdba"},
{file = "cython-3.2.2-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:ff07e784ea748225bbdea07fec0ac451379e9e41a0a84cb57b36db19dd01ae71"},
{file = "cython-3.2.2-cp39-cp39-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:aff11412ed5fc78bd8b148621f4d1034fcad6cfcba468c20cd9f327b4f61ec3e"},
{file = "cython-3.2.2-cp39-cp39-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:ca18d9d53c0e2f0c9347478b37532b46e0dc34c704e052ab1b0d8b21a290fc0f"},
{file = "cython-3.2.2-cp39-cp39-win_amd64.whl", hash = "sha256:86b1d39a1ea974dd16fe3bcef0df7b64dadd0bd38d05a339f287b48d37cb109f"},
{file = "cython-3.2.2-py3-none-any.whl", hash = "sha256:13b99ecb9482aff6a6c12d1ca6feef6940c507af909914b49f568de74fa965fb"},
{file = "cython-3.2.2.tar.gz", hash = "sha256:c3add3d483acc73129a61d105389344d792c17e7c1cee24863f16416bd071634"},
]
[[package]]
name = "django"
version = "5.2.9"
description = "A high-level Python web framework that encourages rapid development and clean, pragmatic design."
optional = true
python-versions = ">=3.10"
groups = ["main"]
markers = "extra == \"django\" or extra == \"all-frameworks\""
files = [
{file = "django-5.2.9-py3-none-any.whl", hash = "sha256:3a4ea88a70370557ab1930b332fd2887a9f48654261cdffda663fef5976bb00a"},
{file = "django-5.2.9.tar.gz", hash = "sha256:16b5ccfc5e8c27e6c0561af551d2ea32852d7352c67d452ae3e76b4f6b2ca495"},
]
[package.dependencies]
asgiref = ">=3.8.1"
sqlparse = ">=0.3.1"
tzdata = {version = "*", markers = "sys_platform == \"win32\""}
[package.extras]
argon2 = ["argon2-cffi (>=19.1.0)"]
bcrypt = ["bcrypt"]
[[package]]
name = "fastapi"
version = "0.123.5"
description = "FastAPI framework, high performance, easy to learn, fast to code, ready for production"
optional = true
python-versions = ">=3.8"
groups = ["main"]
markers = "extra == \"fastapi\" or extra == \"all-frameworks\""
files = [
{file = "fastapi-0.123.5-py3-none-any.whl", hash = "sha256:a9c708e47c0fa424139cddb8601d0f92d3111b77843c22e9c8d0164d65fe3c97"},
{file = "fastapi-0.123.5.tar.gz", hash = "sha256:54bbb660ca231d3985474498b51c621ddcf8888d9a4c1ecb10aa40ec217e4965"},
]
[package.dependencies]
annotated-doc = ">=0.0.2"
pydantic = ">=1.7.4,<1.8 || >1.8,<1.8.1 || >1.8.1,<2.0.0 || >2.0.0,<2.0.1 || >2.0.1,<2.1.0 || >2.1.0,<3.0.0"
starlette = ">=0.40.0,<0.51.0"
typing-extensions = ">=4.8.0"
[package.extras]
all = ["email-validator (>=2.0.0)", "fastapi-cli[standard] (>=0.0.8)", "httpx (>=0.23.0,<1.0.0)", "itsdangerous (>=1.1.0)", "jinja2 (>=3.1.5)", "orjson (>=3.2.1)", "pydantic-extra-types (>=2.0.0)", "pydantic-settings (>=2.0.0)", "python-multipart (>=0.0.18)", "pyyaml (>=5.3.1)", "ujson (>=4.0.1,!=4.0.2,!=4.1.0,!=4.2.0,!=4.3.0,!=5.0.0,!=5.1.0)", "uvicorn[standard] (>=0.12.0)"]
standard = ["email-validator (>=2.0.0)", "fastapi-cli[standard] (>=0.0.8)", "httpx (>=0.23.0,<1.0.0)", "jinja2 (>=3.1.5)", "python-multipart (>=0.0.18)", "uvicorn[standard] (>=0.12.0)"]
standard-no-fastapi-cloud-cli = ["email-validator (>=2.0.0)", "fastapi-cli[standard-no-fastapi-cloud-cli] (>=0.0.8)", "httpx (>=0.23.0,<1.0.0)", "jinja2 (>=3.1.5)", "python-multipart (>=0.0.18)", "uvicorn[standard] (>=0.12.0)"]
[[package]]
name = "flake8"
version = "7.3.0"
@@ -223,6 +386,31 @@ mccabe = ">=0.7.0,<0.8.0"
pycodestyle = ">=2.14.0,<2.15.0"
pyflakes = ">=3.4.0,<3.5.0"
[[package]]
name = "flask"
version = "3.1.2"
description = "A simple framework for building complex web applications."
optional = true
python-versions = ">=3.9"
groups = ["main"]
markers = "extra == \"flask\" or extra == \"all-frameworks\""
files = [
{file = "flask-3.1.2-py3-none-any.whl", hash = "sha256:ca1d8112ec8a6158cc29ea4858963350011b5c846a414cdb7a954aa9e967d03c"},
{file = "flask-3.1.2.tar.gz", hash = "sha256:bf656c15c80190ed628ad08cdfd3aaa35beb087855e2f494910aa3774cc4fd87"},
]
[package.dependencies]
blinker = ">=1.9.0"
click = ">=8.1.3"
itsdangerous = ">=2.2.0"
jinja2 = ">=3.1.2"
markupsafe = ">=2.1.1"
werkzeug = ">=3.1.0"
[package.extras]
async = ["asgiref (>=3.2)"]
dotenv = ["python-dotenv"]
[[package]]
name = "h11"
version = "0.16.0"
@@ -382,6 +570,162 @@ files = [
colors = ["colorama"]
plugins = ["setuptools"]
[[package]]
name = "itsdangerous"
version = "2.2.0"
description = "Safely pass data to untrusted environments and back."
optional = true
python-versions = ">=3.8"
groups = ["main"]
markers = "extra == \"flask\" or extra == \"all-frameworks\""
files = [
{file = "itsdangerous-2.2.0-py3-none-any.whl", hash = "sha256:c6242fc49e35958c8b15141343aa660db5fc54d4f13a1db01a3f5891b98700ef"},
{file = "itsdangerous-2.2.0.tar.gz", hash = "sha256:e0050c0b7da1eea53ffaf149c0cfbb5c6e2e2b69c4bef22c81fa6eb73e5f6173"},
]
[[package]]
name = "jinja2"
version = "3.1.6"
description = "A very fast and expressive template engine."
optional = true
python-versions = ">=3.7"
groups = ["main"]
markers = "extra == \"flask\" or extra == \"all-frameworks\""
files = [
{file = "jinja2-3.1.6-py3-none-any.whl", hash = "sha256:85ece4451f492d0c13c5dd7c13a64681a86afae63a5f347908daf103ce6d2f67"},
{file = "jinja2-3.1.6.tar.gz", hash = "sha256:0137fb05990d35f1275a587e9aee6d56da821fc83491a0fb838183be43f66d6d"},
]
[package.dependencies]
MarkupSafe = ">=2.0"
[package.extras]
i18n = ["Babel (>=2.7)"]
[[package]]
name = "markdown-it-py"
version = "4.0.0"
description = "Python port of markdown-it. Markdown parsing, done right!"
optional = false
python-versions = ">=3.10"
groups = ["main"]
files = [
{file = "markdown_it_py-4.0.0-py3-none-any.whl", hash = "sha256:87327c59b172c5011896038353a81343b6754500a08cd7a4973bb48c6d578147"},
{file = "markdown_it_py-4.0.0.tar.gz", hash = "sha256:cb0a2b4aa34f932c007117b194e945bd74e0ec24133ceb5bac59009cda1cb9f3"},
]
[package.dependencies]
mdurl = ">=0.1,<1.0"
[package.extras]
benchmarking = ["psutil", "pytest", "pytest-benchmark"]
compare = ["commonmark (>=0.9,<1.0)", "markdown (>=3.4,<4.0)", "markdown-it-pyrs", "mistletoe (>=1.0,<2.0)", "mistune (>=3.0,<4.0)", "panflute (>=2.3,<3.0)"]
linkify = ["linkify-it-py (>=1,<3)"]
plugins = ["mdit-py-plugins (>=0.5.0)"]
profiling = ["gprof2dot"]
rtd = ["ipykernel", "jupyter_sphinx", "mdit-py-plugins (>=0.5.0)", "myst-parser", "pyyaml", "sphinx", "sphinx-book-theme (>=1.0,<2.0)", "sphinx-copybutton", "sphinx-design"]
testing = ["coverage", "pytest", "pytest-cov", "pytest-regressions", "requests"]
[[package]]
name = "markupsafe"
version = "3.0.3"
description = "Safely add untrusted strings to HTML/XML markup."
optional = true
python-versions = ">=3.9"
groups = ["main"]
markers = "extra == \"flask\" or extra == \"all-frameworks\""
files = [
{file = "markupsafe-3.0.3-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:2f981d352f04553a7171b8e44369f2af4055f888dfb147d55e42d29e29e74559"},
{file = "markupsafe-3.0.3-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:e1c1493fb6e50ab01d20a22826e57520f1284df32f2d8601fdd90b6304601419"},
{file = "markupsafe-3.0.3-cp310-cp310-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:1ba88449deb3de88bd40044603fafffb7bc2b055d626a330323a9ed736661695"},
{file = "markupsafe-3.0.3-cp310-cp310-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:f42d0984e947b8adf7dd6dde396e720934d12c506ce84eea8476409563607591"},
{file = "markupsafe-3.0.3-cp310-cp310-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:c0c0b3ade1c0b13b936d7970b1d37a57acde9199dc2aecc4c336773e1d86049c"},
{file = "markupsafe-3.0.3-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:0303439a41979d9e74d18ff5e2dd8c43ed6c6001fd40e5bf2e43f7bd9bbc523f"},
{file = "markupsafe-3.0.3-cp310-cp310-musllinux_1_2_riscv64.whl", hash = "sha256:d2ee202e79d8ed691ceebae8e0486bd9a2cd4794cec4824e1c99b6f5009502f6"},
{file = "markupsafe-3.0.3-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:177b5253b2834fe3678cb4a5f0059808258584c559193998be2601324fdeafb1"},
{file = "markupsafe-3.0.3-cp310-cp310-win32.whl", hash = "sha256:2a15a08b17dd94c53a1da0438822d70ebcd13f8c3a95abe3a9ef9f11a94830aa"},
{file = "markupsafe-3.0.3-cp310-cp310-win_amd64.whl", hash = "sha256:c4ffb7ebf07cfe8931028e3e4c85f0357459a3f9f9490886198848f4fa002ec8"},
{file = "markupsafe-3.0.3-cp310-cp310-win_arm64.whl", hash = "sha256:e2103a929dfa2fcaf9bb4e7c091983a49c9ac3b19c9061b6d5427dd7d14d81a1"},
{file = "markupsafe-3.0.3-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:1cc7ea17a6824959616c525620e387f6dd30fec8cb44f649e31712db02123dad"},
{file = "markupsafe-3.0.3-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:4bd4cd07944443f5a265608cc6aab442e4f74dff8088b0dfc8238647b8f6ae9a"},
{file = "markupsafe-3.0.3-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:6b5420a1d9450023228968e7e6a9ce57f65d148ab56d2313fcd589eee96a7a50"},
{file = "markupsafe-3.0.3-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:0bf2a864d67e76e5c9a34dc26ec616a66b9888e25e7b9460e1c76d3293bd9dbf"},
{file = "markupsafe-3.0.3-cp311-cp311-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:bc51efed119bc9cfdf792cdeaa4d67e8f6fcccab66ed4bfdd6bde3e59bfcbb2f"},
{file = "markupsafe-3.0.3-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:068f375c472b3e7acbe2d5318dea141359e6900156b5b2ba06a30b169086b91a"},
{file = "markupsafe-3.0.3-cp311-cp311-musllinux_1_2_riscv64.whl", hash = "sha256:7be7b61bb172e1ed687f1754f8e7484f1c8019780f6f6b0786e76bb01c2ae115"},
{file = "markupsafe-3.0.3-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:f9e130248f4462aaa8e2552d547f36ddadbeaa573879158d721bbd33dfe4743a"},
{file = "markupsafe-3.0.3-cp311-cp311-win32.whl", hash = "sha256:0db14f5dafddbb6d9208827849fad01f1a2609380add406671a26386cdf15a19"},
{file = "markupsafe-3.0.3-cp311-cp311-win_amd64.whl", hash = "sha256:de8a88e63464af587c950061a5e6a67d3632e36df62b986892331d4620a35c01"},
{file = "markupsafe-3.0.3-cp311-cp311-win_arm64.whl", hash = "sha256:3b562dd9e9ea93f13d53989d23a7e775fdfd1066c33494ff43f5418bc8c58a5c"},
{file = "markupsafe-3.0.3-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:d53197da72cc091b024dd97249dfc7794d6a56530370992a5e1a08983ad9230e"},
{file = "markupsafe-3.0.3-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:1872df69a4de6aead3491198eaf13810b565bdbeec3ae2dc8780f14458ec73ce"},
{file = "markupsafe-3.0.3-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:3a7e8ae81ae39e62a41ec302f972ba6ae23a5c5396c8e60113e9066ef893da0d"},
{file = "markupsafe-3.0.3-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:d6dd0be5b5b189d31db7cda48b91d7e0a9795f31430b7f271219ab30f1d3ac9d"},
{file = "markupsafe-3.0.3-cp312-cp312-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:94c6f0bb423f739146aec64595853541634bde58b2135f27f61c1ffd1cd4d16a"},
{file = "markupsafe-3.0.3-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:be8813b57049a7dc738189df53d69395eba14fb99345e0a5994914a3864c8a4b"},
{file = "markupsafe-3.0.3-cp312-cp312-musllinux_1_2_riscv64.whl", hash = "sha256:83891d0e9fb81a825d9a6d61e3f07550ca70a076484292a70fde82c4b807286f"},
{file = "markupsafe-3.0.3-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:77f0643abe7495da77fb436f50f8dab76dbc6e5fd25d39589a0f1fe6548bfa2b"},
{file = "markupsafe-3.0.3-cp312-cp312-win32.whl", hash = "sha256:d88b440e37a16e651bda4c7c2b930eb586fd15ca7406cb39e211fcff3bf3017d"},
{file = "markupsafe-3.0.3-cp312-cp312-win_amd64.whl", hash = "sha256:26a5784ded40c9e318cfc2bdb30fe164bdb8665ded9cd64d500a34fb42067b1c"},
{file = "markupsafe-3.0.3-cp312-cp312-win_arm64.whl", hash = "sha256:35add3b638a5d900e807944a078b51922212fb3dedb01633a8defc4b01a3c85f"},
{file = "markupsafe-3.0.3-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:e1cf1972137e83c5d4c136c43ced9ac51d0e124706ee1c8aa8532c1287fa8795"},
{file = "markupsafe-3.0.3-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:116bb52f642a37c115f517494ea5feb03889e04df47eeff5b130b1808ce7c219"},
{file = "markupsafe-3.0.3-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:133a43e73a802c5562be9bbcd03d090aa5a1fe899db609c29e8c8d815c5f6de6"},
{file = "markupsafe-3.0.3-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:ccfcd093f13f0f0b7fdd0f198b90053bf7b2f02a3927a30e63f3ccc9df56b676"},
{file = "markupsafe-3.0.3-cp313-cp313-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:509fa21c6deb7a7a273d629cf5ec029bc209d1a51178615ddf718f5918992ab9"},
{file = "markupsafe-3.0.3-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:a4afe79fb3de0b7097d81da19090f4df4f8d3a2b3adaa8764138aac2e44f3af1"},
{file = "markupsafe-3.0.3-cp313-cp313-musllinux_1_2_riscv64.whl", hash = "sha256:795e7751525cae078558e679d646ae45574b47ed6e7771863fcc079a6171a0fc"},
{file = "markupsafe-3.0.3-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:8485f406a96febb5140bfeca44a73e3ce5116b2501ac54fe953e488fb1d03b12"},
{file = "markupsafe-3.0.3-cp313-cp313-win32.whl", hash = "sha256:bdd37121970bfd8be76c5fb069c7751683bdf373db1ed6c010162b2a130248ed"},
{file = "markupsafe-3.0.3-cp313-cp313-win_amd64.whl", hash = "sha256:9a1abfdc021a164803f4d485104931fb8f8c1efd55bc6b748d2f5774e78b62c5"},
{file = "markupsafe-3.0.3-cp313-cp313-win_arm64.whl", hash = "sha256:7e68f88e5b8799aa49c85cd116c932a1ac15caaa3f5db09087854d218359e485"},
{file = "markupsafe-3.0.3-cp313-cp313t-macosx_10_13_x86_64.whl", hash = "sha256:218551f6df4868a8d527e3062d0fb968682fe92054e89978594c28e642c43a73"},
{file = "markupsafe-3.0.3-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:3524b778fe5cfb3452a09d31e7b5adefeea8c5be1d43c4f810ba09f2ceb29d37"},
{file = "markupsafe-3.0.3-cp313-cp313t-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:4e885a3d1efa2eadc93c894a21770e4bc67899e3543680313b09f139e149ab19"},
{file = "markupsafe-3.0.3-cp313-cp313t-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:8709b08f4a89aa7586de0aadc8da56180242ee0ada3999749b183aa23df95025"},
{file = "markupsafe-3.0.3-cp313-cp313t-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:b8512a91625c9b3da6f127803b166b629725e68af71f8184ae7e7d54686a56d6"},
{file = "markupsafe-3.0.3-cp313-cp313t-musllinux_1_2_aarch64.whl", hash = "sha256:9b79b7a16f7fedff2495d684f2b59b0457c3b493778c9eed31111be64d58279f"},
{file = "markupsafe-3.0.3-cp313-cp313t-musllinux_1_2_riscv64.whl", hash = "sha256:12c63dfb4a98206f045aa9563db46507995f7ef6d83b2f68eda65c307c6829eb"},
{file = "markupsafe-3.0.3-cp313-cp313t-musllinux_1_2_x86_64.whl", hash = "sha256:8f71bc33915be5186016f675cd83a1e08523649b0e33efdb898db577ef5bb009"},
{file = "markupsafe-3.0.3-cp313-cp313t-win32.whl", hash = "sha256:69c0b73548bc525c8cb9a251cddf1931d1db4d2258e9599c28c07ef3580ef354"},
{file = "markupsafe-3.0.3-cp313-cp313t-win_amd64.whl", hash = "sha256:1b4b79e8ebf6b55351f0d91fe80f893b4743f104bff22e90697db1590e47a218"},
{file = "markupsafe-3.0.3-cp313-cp313t-win_arm64.whl", hash = "sha256:ad2cf8aa28b8c020ab2fc8287b0f823d0a7d8630784c31e9ee5edea20f406287"},
{file = "markupsafe-3.0.3-cp314-cp314-macosx_10_13_x86_64.whl", hash = "sha256:eaa9599de571d72e2daf60164784109f19978b327a3910d3e9de8c97b5b70cfe"},
{file = "markupsafe-3.0.3-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:c47a551199eb8eb2121d4f0f15ae0f923d31350ab9280078d1e5f12b249e0026"},
{file = "markupsafe-3.0.3-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:f34c41761022dd093b4b6896d4810782ffbabe30f2d443ff5f083e0cbbb8c737"},
{file = "markupsafe-3.0.3-cp314-cp314-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:457a69a9577064c05a97c41f4e65148652db078a3a509039e64d3467b9e7ef97"},
{file = "markupsafe-3.0.3-cp314-cp314-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:e8afc3f2ccfa24215f8cb28dcf43f0113ac3c37c2f0f0806d8c70e4228c5cf4d"},
{file = "markupsafe-3.0.3-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:ec15a59cf5af7be74194f7ab02d0f59a62bdcf1a537677ce67a2537c9b87fcda"},
{file = "markupsafe-3.0.3-cp314-cp314-musllinux_1_2_riscv64.whl", hash = "sha256:0eb9ff8191e8498cca014656ae6b8d61f39da5f95b488805da4bb029cccbfbaf"},
{file = "markupsafe-3.0.3-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:2713baf880df847f2bece4230d4d094280f4e67b1e813eec43b4c0e144a34ffe"},
{file = "markupsafe-3.0.3-cp314-cp314-win32.whl", hash = "sha256:729586769a26dbceff69f7a7dbbf59ab6572b99d94576a5592625d5b411576b9"},
{file = "markupsafe-3.0.3-cp314-cp314-win_amd64.whl", hash = "sha256:bdc919ead48f234740ad807933cdf545180bfbe9342c2bb451556db2ed958581"},
{file = "markupsafe-3.0.3-cp314-cp314-win_arm64.whl", hash = "sha256:5a7d5dc5140555cf21a6fefbdbf8723f06fcd2f63ef108f2854de715e4422cb4"},
{file = "markupsafe-3.0.3-cp314-cp314t-macosx_10_13_x86_64.whl", hash = "sha256:1353ef0c1b138e1907ae78e2f6c63ff67501122006b0f9abad68fda5f4ffc6ab"},
{file = "markupsafe-3.0.3-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:1085e7fbddd3be5f89cc898938f42c0b3c711fdcb37d75221de2666af647c175"},
{file = "markupsafe-3.0.3-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:1b52b4fb9df4eb9ae465f8d0c228a00624de2334f216f178a995ccdcf82c4634"},
{file = "markupsafe-3.0.3-cp314-cp314t-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:fed51ac40f757d41b7c48425901843666a6677e3e8eb0abcff09e4ba6e664f50"},
{file = "markupsafe-3.0.3-cp314-cp314t-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:f190daf01f13c72eac4efd5c430a8de82489d9cff23c364c3ea822545032993e"},
{file = "markupsafe-3.0.3-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:e56b7d45a839a697b5eb268c82a71bd8c7f6c94d6fd50c3d577fa39a9f1409f5"},
{file = "markupsafe-3.0.3-cp314-cp314t-musllinux_1_2_riscv64.whl", hash = "sha256:f3e98bb3798ead92273dc0e5fd0f31ade220f59a266ffd8a4f6065e0a3ce0523"},
{file = "markupsafe-3.0.3-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:5678211cb9333a6468fb8d8be0305520aa073f50d17f089b5b4b477ea6e67fdc"},
{file = "markupsafe-3.0.3-cp314-cp314t-win32.whl", hash = "sha256:915c04ba3851909ce68ccc2b8e2cd691618c4dc4c4232fb7982bca3f41fd8c3d"},
{file = "markupsafe-3.0.3-cp314-cp314t-win_amd64.whl", hash = "sha256:4faffd047e07c38848ce017e8725090413cd80cbc23d86e55c587bf979e579c9"},
{file = "markupsafe-3.0.3-cp314-cp314t-win_arm64.whl", hash = "sha256:32001d6a8fc98c8cb5c947787c5d08b0a50663d139f1305bac5885d98d9b40fa"},
{file = "markupsafe-3.0.3-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:15d939a21d546304880945ca1ecb8a039db6b4dc49b2c5a400387cdae6a62e26"},
{file = "markupsafe-3.0.3-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:f71a396b3bf33ecaa1626c255855702aca4d3d9fea5e051b41ac59a9c1c41edc"},
{file = "markupsafe-3.0.3-cp39-cp39-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:0f4b68347f8c5eab4a13419215bdfd7f8c9b19f2b25520968adfad23eb0ce60c"},
{file = "markupsafe-3.0.3-cp39-cp39-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:e8fc20152abba6b83724d7ff268c249fa196d8259ff481f3b1476383f8f24e42"},
{file = "markupsafe-3.0.3-cp39-cp39-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:949b8d66bc381ee8b007cd945914c721d9aba8e27f71959d750a46f7c282b20b"},
{file = "markupsafe-3.0.3-cp39-cp39-musllinux_1_2_aarch64.whl", hash = "sha256:3537e01efc9d4dccdf77221fb1cb3b8e1a38d5428920e0657ce299b20324d758"},
{file = "markupsafe-3.0.3-cp39-cp39-musllinux_1_2_riscv64.whl", hash = "sha256:591ae9f2a647529ca990bc681daebdd52c8791ff06c2bfa05b65163e28102ef2"},
{file = "markupsafe-3.0.3-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:a320721ab5a1aba0a233739394eb907f8c8da5c98c9181d1161e77a0c8e36f2d"},
{file = "markupsafe-3.0.3-cp39-cp39-win32.whl", hash = "sha256:df2449253ef108a379b8b5d6b43f4b1a8e81a061d6537becd5582fba5f9196d7"},
{file = "markupsafe-3.0.3-cp39-cp39-win_amd64.whl", hash = "sha256:7c3fb7d25180895632e5d3148dbdc29ea38ccb7fd210aa27acbd1201a1902c6e"},
{file = "markupsafe-3.0.3-cp39-cp39-win_arm64.whl", hash = "sha256:38664109c14ffc9e7437e86b4dceb442b0096dfe3541d7864d9cbe1da4cf36c8"},
{file = "markupsafe-3.0.3.tar.gz", hash = "sha256:722695808f4b6457b320fdc131280796bdceb04ab50fe1795cd540799ebe1698"},
]
[[package]]
name = "mccabe"
version = "0.7.0"
@@ -394,6 +738,18 @@ files = [
{file = "mccabe-0.7.0.tar.gz", hash = "sha256:348e0240c33b60bbdf4e523192ef919f28cb2c3d7d5c7794f74009290f236325"},
]
[[package]]
name = "mdurl"
version = "0.1.2"
description = "Markdown URL utilities"
optional = false
python-versions = ">=3.7"
groups = ["main"]
files = [
{file = "mdurl-0.1.2-py3-none-any.whl", hash = "sha256:84008a41e51615a49fc9966191ff91509e3c40b939176e643fd50a5c2196b8f8"},
{file = "mdurl-0.1.2.tar.gz", hash = "sha256:bb413d29f5eea38f31dd4754dd7377d4465116fb207585f97bf925588687c1ba"},
]
[[package]]
name = "mypy"
version = "1.17.1"
@@ -523,6 +879,39 @@ files = [
dev = ["pre-commit", "tox"]
testing = ["coverage", "pytest", "pytest-benchmark"]
[[package]]
name = "psutil"
version = "7.1.3"
description = "Cross-platform lib for process and system monitoring."
optional = false
python-versions = ">=3.6"
groups = ["main"]
files = [
{file = "psutil-7.1.3-cp313-cp313t-macosx_10_13_x86_64.whl", hash = "sha256:0005da714eee687b4b8decd3d6cc7c6db36215c9e74e5ad2264b90c3df7d92dc"},
{file = "psutil-7.1.3-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:19644c85dcb987e35eeeaefdc3915d059dac7bd1167cdcdbf27e0ce2df0c08c0"},
{file = "psutil-7.1.3-cp313-cp313t-manylinux2010_x86_64.manylinux_2_12_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:95ef04cf2e5ba0ab9eaafc4a11eaae91b44f4ef5541acd2ee91d9108d00d59a7"},
{file = "psutil-7.1.3-cp313-cp313t-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:1068c303be3a72f8e18e412c5b2a8f6d31750fb152f9cb106b54090296c9d251"},
{file = "psutil-7.1.3-cp313-cp313t-win_amd64.whl", hash = "sha256:18349c5c24b06ac5612c0428ec2a0331c26443d259e2a0144a9b24b4395b58fa"},
{file = "psutil-7.1.3-cp313-cp313t-win_arm64.whl", hash = "sha256:c525ffa774fe4496282fb0b1187725793de3e7c6b29e41562733cae9ada151ee"},
{file = "psutil-7.1.3-cp314-cp314t-macosx_10_15_x86_64.whl", hash = "sha256:b403da1df4d6d43973dc004d19cee3b848e998ae3154cc8097d139b77156c353"},
{file = "psutil-7.1.3-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:ad81425efc5e75da3f39b3e636293360ad8d0b49bed7df824c79764fb4ba9b8b"},
{file = "psutil-7.1.3-cp314-cp314t-manylinux2010_x86_64.manylinux_2_12_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:8f33a3702e167783a9213db10ad29650ebf383946e91bc77f28a5eb083496bc9"},
{file = "psutil-7.1.3-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:fac9cd332c67f4422504297889da5ab7e05fd11e3c4392140f7370f4208ded1f"},
{file = "psutil-7.1.3-cp314-cp314t-win_amd64.whl", hash = "sha256:3792983e23b69843aea49c8f5b8f115572c5ab64c153bada5270086a2123c7e7"},
{file = "psutil-7.1.3-cp314-cp314t-win_arm64.whl", hash = "sha256:31d77fcedb7529f27bb3a0472bea9334349f9a04160e8e6e5020f22c59893264"},
{file = "psutil-7.1.3-cp36-abi3-macosx_10_9_x86_64.whl", hash = "sha256:2bdbcd0e58ca14996a42adf3621a6244f1bb2e2e528886959c72cf1e326677ab"},
{file = "psutil-7.1.3-cp36-abi3-macosx_11_0_arm64.whl", hash = "sha256:bc31fa00f1fbc3c3802141eede66f3a2d51d89716a194bf2cd6fc68310a19880"},
{file = "psutil-7.1.3-cp36-abi3-manylinux2010_x86_64.manylinux_2_12_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:3bb428f9f05c1225a558f53e30ccbad9930b11c3fc206836242de1091d3e7dd3"},
{file = "psutil-7.1.3-cp36-abi3-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:56d974e02ca2c8eb4812c3f76c30e28836fffc311d55d979f1465c1feeb2b68b"},
{file = "psutil-7.1.3-cp37-abi3-win_amd64.whl", hash = "sha256:f39c2c19fe824b47484b96f9692932248a54c43799a84282cfe58d05a6449efd"},
{file = "psutil-7.1.3-cp37-abi3-win_arm64.whl", hash = "sha256:bd0d69cee829226a761e92f28140bec9a5ee9d5b4fb4b0cc589068dbfff559b1"},
{file = "psutil-7.1.3.tar.gz", hash = "sha256:6c86281738d77335af7aec228328e944b30930899ea760ecf33a4dba66be5e74"},
]
[package.extras]
dev = ["abi3audit", "black", "check-manifest", "colorama ; os_name == \"nt\"", "coverage", "packaging", "pylint", "pyperf", "pypinfo", "pyreadline ; os_name == \"nt\"", "pytest", "pytest-cov", "pytest-instafail", "pytest-subtests", "pytest-xdist", "pywin32 ; os_name == \"nt\" and platform_python_implementation != \"PyPy\"", "requests", "rstcheck", "ruff", "setuptools", "sphinx", "sphinx_rtd_theme", "toml-sort", "twine", "validate-pyproject[all]", "virtualenv", "vulture", "wheel", "wheel ; os_name == \"nt\" and platform_python_implementation != \"PyPy\"", "wmi ; os_name == \"nt\" and platform_python_implementation != \"PyPy\""]
test = ["pytest", "pytest-instafail", "pytest-subtests", "pytest-xdist", "pywin32 ; os_name == \"nt\" and platform_python_implementation != \"PyPy\"", "setuptools", "wheel ; os_name == \"nt\" and platform_python_implementation != \"PyPy\"", "wmi ; os_name == \"nt\" and platform_python_implementation != \"PyPy\""]
[[package]]
name = "pycodestyle"
version = "2.14.0"
@@ -535,6 +924,164 @@ files = [
{file = "pycodestyle-2.14.0.tar.gz", hash = "sha256:c4b5b517d278089ff9d0abdec919cd97262a3367449ea1c8b49b91529167b783"},
]
[[package]]
name = "pydantic"
version = "2.12.5"
description = "Data validation using Python type hints"
optional = true
python-versions = ">=3.9"
groups = ["main"]
markers = "extra == \"fastapi\" or extra == \"all-frameworks\""
files = [
{file = "pydantic-2.12.5-py3-none-any.whl", hash = "sha256:e561593fccf61e8a20fc46dfc2dfe075b8be7d0188df33f221ad1f0139180f9d"},
{file = "pydantic-2.12.5.tar.gz", hash = "sha256:4d351024c75c0f085a9febbb665ce8c0c6ec5d30e903bdb6394b7ede26aebb49"},
]
[package.dependencies]
annotated-types = ">=0.6.0"
pydantic-core = "2.41.5"
typing-extensions = ">=4.14.1"
typing-inspection = ">=0.4.2"
[package.extras]
email = ["email-validator (>=2.0.0)"]
timezone = ["tzdata ; python_version >= \"3.9\" and platform_system == \"Windows\""]
[[package]]
name = "pydantic-core"
version = "2.41.5"
description = "Core functionality for Pydantic validation and serialization"
optional = true
python-versions = ">=3.9"
groups = ["main"]
markers = "extra == \"fastapi\" or extra == \"all-frameworks\""
files = [
{file = "pydantic_core-2.41.5-cp310-cp310-macosx_10_12_x86_64.whl", hash = "sha256:77b63866ca88d804225eaa4af3e664c5faf3568cea95360d21f4725ab6e07146"},
{file = "pydantic_core-2.41.5-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:dfa8a0c812ac681395907e71e1274819dec685fec28273a28905df579ef137e2"},
{file = "pydantic_core-2.41.5-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5921a4d3ca3aee735d9fd163808f5e8dd6c6972101e4adbda9a4667908849b97"},
{file = "pydantic_core-2.41.5-cp310-cp310-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:e25c479382d26a2a41b7ebea1043564a937db462816ea07afa8a44c0866d52f9"},
{file = "pydantic_core-2.41.5-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:f547144f2966e1e16ae626d8ce72b4cfa0caedc7fa28052001c94fb2fcaa1c52"},
{file = "pydantic_core-2.41.5-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:6f52298fbd394f9ed112d56f3d11aabd0d5bd27beb3084cc3d8ad069483b8941"},
{file = "pydantic_core-2.41.5-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:100baa204bb412b74fe285fb0f3a385256dad1d1879f0a5cb1499ed2e83d132a"},
{file = "pydantic_core-2.41.5-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:05a2c8852530ad2812cb7914dc61a1125dc4e06252ee98e5638a12da6cc6fb6c"},
{file = "pydantic_core-2.41.5-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:29452c56df2ed968d18d7e21f4ab0ac55e71dc59524872f6fc57dcf4a3249ed2"},
{file = "pydantic_core-2.41.5-cp310-cp310-musllinux_1_1_armv7l.whl", hash = "sha256:d5160812ea7a8a2ffbe233d8da666880cad0cbaf5d4de74ae15c313213d62556"},
{file = "pydantic_core-2.41.5-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:df3959765b553b9440adfd3c795617c352154e497a4eaf3752555cfb5da8fc49"},
{file = "pydantic_core-2.41.5-cp310-cp310-win32.whl", hash = "sha256:1f8d33a7f4d5a7889e60dc39856d76d09333d8a6ed0f5f1190635cbec70ec4ba"},
{file = "pydantic_core-2.41.5-cp310-cp310-win_amd64.whl", hash = "sha256:62de39db01b8d593e45871af2af9e497295db8d73b085f6bfd0b18c83c70a8f9"},
{file = "pydantic_core-2.41.5-cp311-cp311-macosx_10_12_x86_64.whl", hash = "sha256:a3a52f6156e73e7ccb0f8cced536adccb7042be67cb45f9562e12b319c119da6"},
{file = "pydantic_core-2.41.5-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:7f3bf998340c6d4b0c9a2f02d6a400e51f123b59565d74dc60d252ce888c260b"},
{file = "pydantic_core-2.41.5-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:378bec5c66998815d224c9ca994f1e14c0c21cb95d2f52b6021cc0b2a58f2a5a"},
{file = "pydantic_core-2.41.5-cp311-cp311-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:e7b576130c69225432866fe2f4a469a85a54ade141d96fd396dffcf607b558f8"},
{file = "pydantic_core-2.41.5-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:6cb58b9c66f7e4179a2d5e0f849c48eff5c1fca560994d6eb6543abf955a149e"},
{file = "pydantic_core-2.41.5-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:88942d3a3dff3afc8288c21e565e476fc278902ae4d6d134f1eeda118cc830b1"},
{file = "pydantic_core-2.41.5-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f31d95a179f8d64d90f6831d71fa93290893a33148d890ba15de25642c5d075b"},
{file = "pydantic_core-2.41.5-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:c1df3d34aced70add6f867a8cf413e299177e0c22660cc767218373d0779487b"},
{file = "pydantic_core-2.41.5-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:4009935984bd36bd2c774e13f9a09563ce8de4abaa7226f5108262fa3e637284"},
{file = "pydantic_core-2.41.5-cp311-cp311-musllinux_1_1_armv7l.whl", hash = "sha256:34a64bc3441dc1213096a20fe27e8e128bd3ff89921706e83c0b1ac971276594"},
{file = "pydantic_core-2.41.5-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:c9e19dd6e28fdcaa5a1de679aec4141f691023916427ef9bae8584f9c2fb3b0e"},
{file = "pydantic_core-2.41.5-cp311-cp311-win32.whl", hash = "sha256:2c010c6ded393148374c0f6f0bf89d206bf3217f201faa0635dcd56bd1520f6b"},
{file = "pydantic_core-2.41.5-cp311-cp311-win_amd64.whl", hash = "sha256:76ee27c6e9c7f16f47db7a94157112a2f3a00e958bc626e2f4ee8bec5c328fbe"},
{file = "pydantic_core-2.41.5-cp311-cp311-win_arm64.whl", hash = "sha256:4bc36bbc0b7584de96561184ad7f012478987882ebf9f9c389b23f432ea3d90f"},
{file = "pydantic_core-2.41.5-cp312-cp312-macosx_10_12_x86_64.whl", hash = "sha256:f41a7489d32336dbf2199c8c0a215390a751c5b014c2c1c5366e817202e9cdf7"},
{file = "pydantic_core-2.41.5-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:070259a8818988b9a84a449a2a7337c7f430a22acc0859c6b110aa7212a6d9c0"},
{file = "pydantic_core-2.41.5-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e96cea19e34778f8d59fe40775a7a574d95816eb150850a85a7a4c8f4b94ac69"},
{file = "pydantic_core-2.41.5-cp312-cp312-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:ed2e99c456e3fadd05c991f8f437ef902e00eedf34320ba2b0842bd1c3ca3a75"},
{file = "pydantic_core-2.41.5-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:65840751b72fbfd82c3c640cff9284545342a4f1eb1586ad0636955b261b0b05"},
{file = "pydantic_core-2.41.5-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:e536c98a7626a98feb2d3eaf75944ef6f3dbee447e1f841eae16f2f0a72d8ddc"},
{file = "pydantic_core-2.41.5-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:eceb81a8d74f9267ef4081e246ffd6d129da5d87e37a77c9bde550cb04870c1c"},
{file = "pydantic_core-2.41.5-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:d38548150c39b74aeeb0ce8ee1d8e82696f4a4e16ddc6de7b1d8823f7de4b9b5"},
{file = "pydantic_core-2.41.5-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:c23e27686783f60290e36827f9c626e63154b82b116d7fe9adba1fda36da706c"},
{file = "pydantic_core-2.41.5-cp312-cp312-musllinux_1_1_armv7l.whl", hash = "sha256:482c982f814460eabe1d3bb0adfdc583387bd4691ef00b90575ca0d2b6fe2294"},
{file = "pydantic_core-2.41.5-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:bfea2a5f0b4d8d43adf9d7b8bf019fb46fdd10a2e5cde477fbcb9d1fa08c68e1"},
{file = "pydantic_core-2.41.5-cp312-cp312-win32.whl", hash = "sha256:b74557b16e390ec12dca509bce9264c3bbd128f8a2c376eaa68003d7f327276d"},
{file = "pydantic_core-2.41.5-cp312-cp312-win_amd64.whl", hash = "sha256:1962293292865bca8e54702b08a4f26da73adc83dd1fcf26fbc875b35d81c815"},
{file = "pydantic_core-2.41.5-cp312-cp312-win_arm64.whl", hash = "sha256:1746d4a3d9a794cacae06a5eaaccb4b8643a131d45fbc9af23e353dc0a5ba5c3"},
{file = "pydantic_core-2.41.5-cp313-cp313-macosx_10_12_x86_64.whl", hash = "sha256:941103c9be18ac8daf7b7adca8228f8ed6bb7a1849020f643b3a14d15b1924d9"},
{file = "pydantic_core-2.41.5-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:112e305c3314f40c93998e567879e887a3160bb8689ef3d2c04b6cc62c33ac34"},
{file = "pydantic_core-2.41.5-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0cbaad15cb0c90aa221d43c00e77bb33c93e8d36e0bf74760cd00e732d10a6a0"},
{file = "pydantic_core-2.41.5-cp313-cp313-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:03ca43e12fab6023fc79d28ca6b39b05f794ad08ec2feccc59a339b02f2b3d33"},
{file = "pydantic_core-2.41.5-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:dc799088c08fa04e43144b164feb0c13f9a0bc40503f8df3e9fde58a3c0c101e"},
{file = "pydantic_core-2.41.5-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:97aeba56665b4c3235a0e52b2c2f5ae9cd071b8a8310ad27bddb3f7fb30e9aa2"},
{file = "pydantic_core-2.41.5-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:406bf18d345822d6c21366031003612b9c77b3e29ffdb0f612367352aab7d586"},
{file = "pydantic_core-2.41.5-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:b93590ae81f7010dbe380cdeab6f515902ebcbefe0b9327cc4804d74e93ae69d"},
{file = "pydantic_core-2.41.5-cp313-cp313-musllinux_1_1_aarch64.whl", hash = "sha256:01a3d0ab748ee531f4ea6c3e48ad9dac84ddba4b0d82291f87248f2f9de8d740"},
{file = "pydantic_core-2.41.5-cp313-cp313-musllinux_1_1_armv7l.whl", hash = "sha256:6561e94ba9dacc9c61bce40e2d6bdc3bfaa0259d3ff36ace3b1e6901936d2e3e"},
{file = "pydantic_core-2.41.5-cp313-cp313-musllinux_1_1_x86_64.whl", hash = "sha256:915c3d10f81bec3a74fbd4faebe8391013ba61e5a1a8d48c4455b923bdda7858"},
{file = "pydantic_core-2.41.5-cp313-cp313-win32.whl", hash = "sha256:650ae77860b45cfa6e2cdafc42618ceafab3a2d9a3811fcfbd3bbf8ac3c40d36"},
{file = "pydantic_core-2.41.5-cp313-cp313-win_amd64.whl", hash = "sha256:79ec52ec461e99e13791ec6508c722742ad745571f234ea6255bed38c6480f11"},
{file = "pydantic_core-2.41.5-cp313-cp313-win_arm64.whl", hash = "sha256:3f84d5c1b4ab906093bdc1ff10484838aca54ef08de4afa9de0f5f14d69639cd"},
{file = "pydantic_core-2.41.5-cp314-cp314-macosx_10_12_x86_64.whl", hash = "sha256:3f37a19d7ebcdd20b96485056ba9e8b304e27d9904d233d7b1015db320e51f0a"},
{file = "pydantic_core-2.41.5-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:1d1d9764366c73f996edd17abb6d9d7649a7eb690006ab6adbda117717099b14"},
{file = "pydantic_core-2.41.5-cp314-cp314-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:25e1c2af0fce638d5f1988b686f3b3ea8cd7de5f244ca147c777769e798a9cd1"},
{file = "pydantic_core-2.41.5-cp314-cp314-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:506d766a8727beef16b7adaeb8ee6217c64fc813646b424d0804d67c16eddb66"},
{file = "pydantic_core-2.41.5-cp314-cp314-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:4819fa52133c9aa3c387b3328f25c1facc356491e6135b459f1de698ff64d869"},
{file = "pydantic_core-2.41.5-cp314-cp314-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:2b761d210c9ea91feda40d25b4efe82a1707da2ef62901466a42492c028553a2"},
{file = "pydantic_core-2.41.5-cp314-cp314-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:22f0fb8c1c583a3b6f24df2470833b40207e907b90c928cc8d3594b76f874375"},
{file = "pydantic_core-2.41.5-cp314-cp314-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:2782c870e99878c634505236d81e5443092fba820f0373997ff75f90f68cd553"},
{file = "pydantic_core-2.41.5-cp314-cp314-musllinux_1_1_aarch64.whl", hash = "sha256:0177272f88ab8312479336e1d777f6b124537d47f2123f89cb37e0accea97f90"},
{file = "pydantic_core-2.41.5-cp314-cp314-musllinux_1_1_armv7l.whl", hash = "sha256:63510af5e38f8955b8ee5687740d6ebf7c2a0886d15a6d65c32814613681bc07"},
{file = "pydantic_core-2.41.5-cp314-cp314-musllinux_1_1_x86_64.whl", hash = "sha256:e56ba91f47764cc14f1daacd723e3e82d1a89d783f0f5afe9c364b8bb491ccdb"},
{file = "pydantic_core-2.41.5-cp314-cp314-win32.whl", hash = "sha256:aec5cf2fd867b4ff45b9959f8b20ea3993fc93e63c7363fe6851424c8a7e7c23"},
{file = "pydantic_core-2.41.5-cp314-cp314-win_amd64.whl", hash = "sha256:8e7c86f27c585ef37c35e56a96363ab8de4e549a95512445b85c96d3e2f7c1bf"},
{file = "pydantic_core-2.41.5-cp314-cp314-win_arm64.whl", hash = "sha256:e672ba74fbc2dc8eea59fb6d4aed6845e6905fc2a8afe93175d94a83ba2a01a0"},
{file = "pydantic_core-2.41.5-cp314-cp314t-macosx_10_12_x86_64.whl", hash = "sha256:8566def80554c3faa0e65ac30ab0932b9e3a5cd7f8323764303d468e5c37595a"},
{file = "pydantic_core-2.41.5-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:b80aa5095cd3109962a298ce14110ae16b8c1aece8b72f9dafe81cf597ad80b3"},
{file = "pydantic_core-2.41.5-cp314-cp314t-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3006c3dd9ba34b0c094c544c6006cc79e87d8612999f1a5d43b769b89181f23c"},
{file = "pydantic_core-2.41.5-cp314-cp314t-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:72f6c8b11857a856bcfa48c86f5368439f74453563f951e473514579d44aa612"},
{file = "pydantic_core-2.41.5-cp314-cp314t-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:5cb1b2f9742240e4bb26b652a5aeb840aa4b417c7748b6f8387927bc6e45e40d"},
{file = "pydantic_core-2.41.5-cp314-cp314t-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:bd3d54f38609ff308209bd43acea66061494157703364ae40c951f83ba99a1a9"},
{file = "pydantic_core-2.41.5-cp314-cp314t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2ff4321e56e879ee8d2a879501c8e469414d948f4aba74a2d4593184eb326660"},
{file = "pydantic_core-2.41.5-cp314-cp314t-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:d0d2568a8c11bf8225044aa94409e21da0cb09dcdafe9ecd10250b2baad531a9"},
{file = "pydantic_core-2.41.5-cp314-cp314t-musllinux_1_1_aarch64.whl", hash = "sha256:a39455728aabd58ceabb03c90e12f71fd30fa69615760a075b9fec596456ccc3"},
{file = "pydantic_core-2.41.5-cp314-cp314t-musllinux_1_1_armv7l.whl", hash = "sha256:239edca560d05757817c13dc17c50766136d21f7cd0fac50295499ae24f90fdf"},
{file = "pydantic_core-2.41.5-cp314-cp314t-musllinux_1_1_x86_64.whl", hash = "sha256:2a5e06546e19f24c6a96a129142a75cee553cc018ffee48a460059b1185f4470"},
{file = "pydantic_core-2.41.5-cp314-cp314t-win32.whl", hash = "sha256:b4ececa40ac28afa90871c2cc2b9ffd2ff0bf749380fbdf57d165fd23da353aa"},
{file = "pydantic_core-2.41.5-cp314-cp314t-win_amd64.whl", hash = "sha256:80aa89cad80b32a912a65332f64a4450ed00966111b6615ca6816153d3585a8c"},
{file = "pydantic_core-2.41.5-cp314-cp314t-win_arm64.whl", hash = "sha256:35b44f37a3199f771c3eaa53051bc8a70cd7b54f333531c59e29fd4db5d15008"},
{file = "pydantic_core-2.41.5-cp39-cp39-macosx_10_12_x86_64.whl", hash = "sha256:8bfeaf8735be79f225f3fefab7f941c712aaca36f1128c9d7e2352ee1aa87bdf"},
{file = "pydantic_core-2.41.5-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:346285d28e4c8017da95144c7f3acd42740d637ff41946af5ce6e5e420502dd5"},
{file = "pydantic_core-2.41.5-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a75dafbf87d6276ddc5b2bf6fae5254e3d0876b626eb24969a574fff9149ee5d"},
{file = "pydantic_core-2.41.5-cp39-cp39-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:7b93a4d08587e2b7e7882de461e82b6ed76d9026ce91ca7915e740ecc7855f60"},
{file = "pydantic_core-2.41.5-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:e8465ab91a4bd96d36dde3263f06caa6a8a6019e4113f24dc753d79a8b3a3f82"},
{file = "pydantic_core-2.41.5-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:299e0a22e7ae2b85c1a57f104538b2656e8ab1873511fd718a1c1c6f149b77b5"},
{file = "pydantic_core-2.41.5-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:707625ef0983fcfb461acfaf14de2067c5942c6bb0f3b4c99158bed6fedd3cf3"},
{file = "pydantic_core-2.41.5-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:f41eb9797986d6ebac5e8edff36d5cef9de40def462311b3eb3eeded1431e425"},
{file = "pydantic_core-2.41.5-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:0384e2e1021894b1ff5a786dbf94771e2986ebe2869533874d7e43bc79c6f504"},
{file = "pydantic_core-2.41.5-cp39-cp39-musllinux_1_1_armv7l.whl", hash = "sha256:f0cd744688278965817fd0839c4a4116add48d23890d468bc436f78beb28abf5"},
{file = "pydantic_core-2.41.5-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:753e230374206729bf0a807954bcc6c150d3743928a73faffee51ac6557a03c3"},
{file = "pydantic_core-2.41.5-cp39-cp39-win32.whl", hash = "sha256:873e0d5b4fb9b89ef7c2d2a963ea7d02879d9da0da8d9d4933dee8ee86a8b460"},
{file = "pydantic_core-2.41.5-cp39-cp39-win_amd64.whl", hash = "sha256:e4f4a984405e91527a0d62649ee21138f8e3d0ef103be488c1dc11a80d7f184b"},
{file = "pydantic_core-2.41.5-graalpy311-graalpy242_311_native-macosx_10_12_x86_64.whl", hash = "sha256:b96d5f26b05d03cc60f11a7761a5ded1741da411e7fe0909e27a5e6a0cb7b034"},
{file = "pydantic_core-2.41.5-graalpy311-graalpy242_311_native-macosx_11_0_arm64.whl", hash = "sha256:634e8609e89ceecea15e2d61bc9ac3718caaaa71963717bf3c8f38bfde64242c"},
{file = "pydantic_core-2.41.5-graalpy311-graalpy242_311_native-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:93e8740d7503eb008aa2df04d3b9735f845d43ae845e6dcd2be0b55a2da43cd2"},
{file = "pydantic_core-2.41.5-graalpy311-graalpy242_311_native-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f15489ba13d61f670dcc96772e733aad1a6f9c429cc27574c6cdaed82d0146ad"},
{file = "pydantic_core-2.41.5-graalpy312-graalpy250_312_native-macosx_10_12_x86_64.whl", hash = "sha256:7da7087d756b19037bc2c06edc6c170eeef3c3bafcb8f532ff17d64dc427adfd"},
{file = "pydantic_core-2.41.5-graalpy312-graalpy250_312_native-macosx_11_0_arm64.whl", hash = "sha256:aabf5777b5c8ca26f7824cb4a120a740c9588ed58df9b2d196ce92fba42ff8dc"},
{file = "pydantic_core-2.41.5-graalpy312-graalpy250_312_native-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c007fe8a43d43b3969e8469004e9845944f1a80e6acd47c150856bb87f230c56"},
{file = "pydantic_core-2.41.5-graalpy312-graalpy250_312_native-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:76d0819de158cd855d1cbb8fcafdf6f5cf1eb8e470abe056d5d161106e38062b"},
{file = "pydantic_core-2.41.5-pp310-pypy310_pp73-macosx_10_12_x86_64.whl", hash = "sha256:b5819cd790dbf0c5eb9f82c73c16b39a65dd6dd4d1439dcdea7816ec9adddab8"},
{file = "pydantic_core-2.41.5-pp310-pypy310_pp73-macosx_11_0_arm64.whl", hash = "sha256:5a4e67afbc95fa5c34cf27d9089bca7fcab4e51e57278d710320a70b956d1b9a"},
{file = "pydantic_core-2.41.5-pp310-pypy310_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ece5c59f0ce7d001e017643d8d24da587ea1f74f6993467d85ae8a5ef9d4f42b"},
{file = "pydantic_core-2.41.5-pp310-pypy310_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:16f80f7abe3351f8ea6858914ddc8c77e02578544a0ebc15b4c2e1a0e813b0b2"},
{file = "pydantic_core-2.41.5-pp310-pypy310_pp73-musllinux_1_1_aarch64.whl", hash = "sha256:33cb885e759a705b426baada1fe68cbb0a2e68e34c5d0d0289a364cf01709093"},
{file = "pydantic_core-2.41.5-pp310-pypy310_pp73-musllinux_1_1_armv7l.whl", hash = "sha256:c8d8b4eb992936023be7dee581270af5c6e0697a8559895f527f5b7105ecd36a"},
{file = "pydantic_core-2.41.5-pp310-pypy310_pp73-musllinux_1_1_x86_64.whl", hash = "sha256:242a206cd0318f95cd21bdacff3fcc3aab23e79bba5cac3db5a841c9ef9c6963"},
{file = "pydantic_core-2.41.5-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:d3a978c4f57a597908b7e697229d996d77a6d3c94901e9edee593adada95ce1a"},
{file = "pydantic_core-2.41.5-pp311-pypy311_pp73-macosx_10_12_x86_64.whl", hash = "sha256:b2379fa7ed44ddecb5bfe4e48577d752db9fc10be00a6b7446e9663ba143de26"},
{file = "pydantic_core-2.41.5-pp311-pypy311_pp73-macosx_11_0_arm64.whl", hash = "sha256:266fb4cbf5e3cbd0b53669a6d1b039c45e3ce651fd5442eff4d07c2cc8d66808"},
{file = "pydantic_core-2.41.5-pp311-pypy311_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:58133647260ea01e4d0500089a8c4f07bd7aa6ce109682b1426394988d8aaacc"},
{file = "pydantic_core-2.41.5-pp311-pypy311_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:287dad91cfb551c363dc62899a80e9e14da1f0e2b6ebde82c806612ca2a13ef1"},
{file = "pydantic_core-2.41.5-pp311-pypy311_pp73-musllinux_1_1_aarch64.whl", hash = "sha256:03b77d184b9eb40240ae9fd676ca364ce1085f203e1b1256f8ab9984dca80a84"},
{file = "pydantic_core-2.41.5-pp311-pypy311_pp73-musllinux_1_1_armv7l.whl", hash = "sha256:a668ce24de96165bb239160b3d854943128f4334822900534f2fe947930e5770"},
{file = "pydantic_core-2.41.5-pp311-pypy311_pp73-musllinux_1_1_x86_64.whl", hash = "sha256:f14f8f046c14563f8eb3f45f499cc658ab8d10072961e07225e507adb700e93f"},
{file = "pydantic_core-2.41.5-pp311-pypy311_pp73-win_amd64.whl", hash = "sha256:56121965f7a4dc965bff783d70b907ddf3d57f6eba29b6d2e5dabfaf07799c51"},
{file = "pydantic_core-2.41.5.tar.gz", hash = "sha256:08daa51ea16ad373ffd5e7606252cc32f07bc72b28284b6bc9c6df804816476e"},
]
[package.dependencies]
typing-extensions = ">=4.14.1"
[[package]]
name = "pyflakes"
version = "3.4.0"
@@ -702,6 +1249,46 @@ files = [
{file = "pyyaml-6.0.2.tar.gz", hash = "sha256:d584d9ec91ad65861cc08d42e834324ef890a082e591037abe114850ff7bbc3e"},
]
[[package]]
name = "rich"
version = "14.2.0"
description = "Render rich text, tables, progress bars, syntax highlighting, markdown and more to the terminal"
optional = false
python-versions = ">=3.8.0"
groups = ["main"]
files = [
{file = "rich-14.2.0-py3-none-any.whl", hash = "sha256:76bc51fe2e57d2b1be1f96c524b890b816e334ab4c1e45888799bfaab0021edd"},
{file = "rich-14.2.0.tar.gz", hash = "sha256:73ff50c7c0c1c77c8243079283f4edb376f0f6442433aecb8ce7e6d0b92d1fe4"},
]
[package.dependencies]
markdown-it-py = ">=2.2.0"
pygments = ">=2.13.0,<3.0.0"
[package.extras]
jupyter = ["ipywidgets (>=7.5.1,<9)"]
[[package]]
name = "setuptools"
version = "80.9.0"
description = "Easily download, build, install, upgrade, and uninstall Python packages"
optional = false
python-versions = ">=3.9"
groups = ["dev"]
files = [
{file = "setuptools-80.9.0-py3-none-any.whl", hash = "sha256:062d34222ad13e0cc312a4c02d73f059e86a4acbfbdea8f8f76b28c99f306922"},
{file = "setuptools-80.9.0.tar.gz", hash = "sha256:f36b47402ecde768dbfafc46e8e4207b4360c654f1f3bb84475f0a28628fb19c"},
]
[package.extras]
check = ["pytest-checkdocs (>=2.4)", "pytest-ruff (>=0.2.1) ; sys_platform != \"cygwin\"", "ruff (>=0.8.0) ; sys_platform != \"cygwin\""]
core = ["importlib_metadata (>=6) ; python_version < \"3.10\"", "jaraco.functools (>=4)", "jaraco.text (>=3.7)", "more_itertools", "more_itertools (>=8.8)", "packaging (>=24.2)", "platformdirs (>=4.2.2)", "tomli (>=2.0.1) ; python_version < \"3.11\"", "wheel (>=0.43.0)"]
cover = ["pytest-cov"]
doc = ["furo", "jaraco.packaging (>=9.3)", "jaraco.tidelift (>=1.4)", "pygments-github-lexers (==0.0.5)", "pyproject-hooks (!=1.1)", "rst.linker (>=1.9)", "sphinx (>=3.5)", "sphinx-favicon", "sphinx-inline-tabs", "sphinx-lint", "sphinx-notfound-page (>=1,<2)", "sphinx-reredirects", "sphinxcontrib-towncrier", "towncrier (<24.7)"]
enabler = ["pytest-enabler (>=2.2)"]
test = ["build[virtualenv] (>=1.0.3)", "filelock (>=3.4.0)", "ini2toml[lite] (>=0.14)", "jaraco.develop (>=7.21) ; python_version >= \"3.9\" and sys_platform != \"cygwin\"", "jaraco.envs (>=2.2)", "jaraco.path (>=3.7.2)", "jaraco.test (>=5.5)", "packaging (>=24.2)", "pip (>=19.1)", "pyproject-hooks (!=1.1)", "pytest (>=6,!=8.1.*)", "pytest-home (>=0.5)", "pytest-perf ; sys_platform != \"cygwin\"", "pytest-subprocess", "pytest-timeout", "pytest-xdist (>=3)", "tomli-w (>=1.0.0)", "virtualenv (>=13.0.0)", "wheel (>=0.44.0)"]
type = ["importlib_metadata (>=7.0.2) ; python_version < \"3.10\"", "jaraco.develop (>=7.21) ; sys_platform != \"cygwin\"", "mypy (==1.14.*)", "pytest-mypy"]
[[package]]
name = "sniffio"
version = "1.3.1"
@@ -714,6 +1301,23 @@ files = [
{file = "sniffio-1.3.1.tar.gz", hash = "sha256:f4324edc670a0f49750a81b895f35c3adb843cca46f0530f79fc1babb23789dc"},
]
[[package]]
name = "sqlparse"
version = "0.5.4"
description = "A non-validating SQL parser."
optional = true
python-versions = ">=3.8"
groups = ["main"]
markers = "extra == \"django\" or extra == \"all-frameworks\""
files = [
{file = "sqlparse-0.5.4-py3-none-any.whl", hash = "sha256:99a9f0314977b76d776a0fcb8554de91b9bb8a18560631d6bc48721d07023dcb"},
{file = "sqlparse-0.5.4.tar.gz", hash = "sha256:4396a7d3cf1cd679c1be976cf3dc6e0a51d0111e87787e7a8d780e7d5a998f9e"},
]
[package.extras]
dev = ["build"]
doc = ["sphinx"]
[[package]]
name = "starlette"
version = "0.47.3"
@@ -745,6 +1349,18 @@ files = [
{file = "structlog-25.4.0.tar.gz", hash = "sha256:186cd1b0a8ae762e29417095664adf1d6a31702160a46dacb7796ea82f7409e4"},
]
[[package]]
name = "types-psutil"
version = "7.1.3.20251202"
description = "Typing stubs for psutil"
optional = false
python-versions = ">=3.9"
groups = ["dev"]
files = [
{file = "types_psutil-7.1.3.20251202-py3-none-any.whl", hash = "sha256:39bfc44780de7ab686c65169e36a7969db09e7f39d92de643b55789292953400"},
{file = "types_psutil-7.1.3.20251202.tar.gz", hash = "sha256:5cfecaced7c486fb3995bb290eab45043d697a261718aca01b9b340d1ab7968a"},
]
[[package]]
name = "types-pyyaml"
version = "6.0.12.20250822"
@@ -769,6 +1385,35 @@ files = [
{file = "typing_extensions-4.15.0.tar.gz", hash = "sha256:0cea48d173cc12fa28ecabc3b837ea3cf6f38c6d1136f85cbaaf598984861466"},
]
[[package]]
name = "typing-inspection"
version = "0.4.2"
description = "Runtime typing introspection tools"
optional = true
python-versions = ">=3.9"
groups = ["main"]
markers = "extra == \"fastapi\" or extra == \"all-frameworks\""
files = [
{file = "typing_inspection-0.4.2-py3-none-any.whl", hash = "sha256:4ed1cacbdc298c220f1bd249ed5287caa16f34d44ef4e9c3d0cbad5b521545e7"},
{file = "typing_inspection-0.4.2.tar.gz", hash = "sha256:ba561c48a67c5958007083d386c3295464928b01faa735ab8547c5692e87f464"},
]
[package.dependencies]
typing-extensions = ">=4.12.0"
[[package]]
name = "tzdata"
version = "2025.2"
description = "Provider of IANA time zone data"
optional = true
python-versions = ">=2"
groups = ["main"]
markers = "(extra == \"django\" or extra == \"all-frameworks\") and sys_platform == \"win32\""
files = [
{file = "tzdata-2025.2-py2.py3-none-any.whl", hash = "sha256:1a403fada01ff9221ca8044d701868fa132215d84beb92242d9acd2147f667a8"},
{file = "tzdata-2025.2.tar.gz", hash = "sha256:b60a638fcc0daffadf82fe0f57e53d06bdec2f36c4df66280ae79bce6bd6f2b9"},
]
[[package]]
name = "uvicorn"
version = "0.35.0"
@@ -1046,10 +1691,34 @@ files = [
{file = "websockets-15.0.1.tar.gz", hash = "sha256:82544de02076bafba038ce055ee6412d68da13ab47f0c60cab827346de828dee"},
]
[[package]]
name = "werkzeug"
version = "3.1.4"
description = "The comprehensive WSGI web application library."
optional = true
python-versions = ">=3.9"
groups = ["main"]
markers = "extra == \"flask\" or extra == \"all-frameworks\""
files = [
{file = "werkzeug-3.1.4-py3-none-any.whl", hash = "sha256:2ad50fb9ed09cc3af22c54698351027ace879a0b60a3b5edf5730b2f7d876905"},
{file = "werkzeug-3.1.4.tar.gz", hash = "sha256:cd3cd98b1b92dc3b7b3995038826c68097dcb16f9baa63abe35f20eafeb9fe5e"},
]
[package.dependencies]
markupsafe = ">=2.1.1"
[package.extras]
watchdog = ["watchdog (>=2.3)"]
[extras]
all-frameworks = ["a2wsgi", "django", "fastapi", "flask"]
dev = ["black", "flake8", "isort", "mypy", "pytest", "pytest-asyncio", "pytest-cov"]
django = ["django"]
fastapi = ["fastapi"]
flask = ["a2wsgi", "flask"]
wsgi = ["a2wsgi"]
[metadata]
lock-version = "2.1"
python-versions = ">=3.12"
content-hash = "e68108657ddfdc07ac0c4f5dbd9c5d2950e78b8b0053e4487ebf2327bbf4e020"
content-hash = "653d7b992e2bb133abde2e8b1c44265e948ed90487ab3f2670429510a8aa0683"

View File

@@ -1,7 +1,7 @@
[project]
name = "pyserve"
version = "0.7.0"
description = "Simple HTTP Web server written in Python"
version = "0.9.10"
description = "Python Application Orchestrator & HTTP Server - unified gateway for multiple Python web apps"
authors = [
{name = "Илья Глазунов",email = "i.glazunov@sapiens.solutions"}
]
@@ -15,10 +15,14 @@ dependencies = [
"types-pyyaml (>=6.0.12.20250822,<7.0.0.0)",
"structlog (>=25.4.0,<26.0.0)",
"httpx (>=0.27.0,<0.28.0)",
"click (>=8.0)",
"rich (>=13.0)",
"psutil (>=5.9)",
]
[project.scripts]
pyserve = "pyserve.cli:main"
pyservectl = "pyserve.ctl:main"
[project.optional-dependencies]
dev = [
@@ -30,10 +34,29 @@ dev = [
"mypy",
"flake8"
]
wsgi = [
"a2wsgi>=1.10.0",
]
flask = [
"flask>=3.0.0",
"a2wsgi>=1.10.0",
]
fastapi = [
"fastapi>=0.115.0",
]
django = [
"django>=5.0",
]
all-frameworks = [
"fastapi>=0.115.0",
"flask>=3.0.0",
"django>=5.0",
"a2wsgi>=1.10.0",
]
[build-system]
requires = ["poetry-core>=2.0.0,<3.0.0"]
requires = ["poetry-core>=2.0.0,<3.0.0", "setuptools", "cython>=3.0.0"]
build-backend = "poetry.core.masonry.api"
[tool.black]
@@ -76,4 +99,7 @@ isort = "^6.0.1"
mypy = "^1.17.1"
flake8 = "^7.3.0"
pytest-asyncio = "^1.3.0"
cython = "^3.0.0"
setuptools = "^80.0.0"
types-psutil = "^7.1.3.20251202"

View File

@@ -2,10 +2,48 @@
PyServe - HTTP web server written in Python
"""
__version__ = "0.6.0"
__version__ = "0.10.0"
__author__ = "Ilya Glazunov"
from .server import PyServeServer
from .asgi_mount import (
ASGIAppLoader,
ASGIMountManager,
MountedApp,
create_django_app,
create_fastapi_app,
create_flask_app,
create_starlette_app,
)
from .config import Config
from .process_manager import (
ProcessConfig,
ProcessInfo,
ProcessManager,
ProcessState,
get_process_manager,
init_process_manager,
shutdown_process_manager,
)
from .server import PyServeServer
__all__ = ["PyServeServer", "Config", "__version__"]
__all__ = [
"PyServeServer",
"Config",
"__version__",
# ASGI mounting (in-process)
"ASGIAppLoader",
"ASGIMountManager",
"MountedApp",
"create_fastapi_app",
"create_flask_app",
"create_django_app",
"create_starlette_app",
# Process orchestration (multi-process)
"ProcessManager",
"ProcessConfig",
"ProcessInfo",
"ProcessState",
"get_process_manager",
"init_process_manager",
"shutdown_process_manager",
]

225
pyserve/_path_matcher.pyx Normal file
View File

@@ -0,0 +1,225 @@
# cython: language_level=3
# cython: boundscheck=False
# cython: wraparound=False
# cython: cdivision=True
"""
Fast path matching module for PyServe.
This Cython module provides optimized path matching operations
for ASGI mount routing, significantly reducing overhead on hot paths.
"""
from cpython.object cimport PyObject
cdef class FastMountedPath:
cdef:
str _path
str _path_with_slash
Py_ssize_t _path_len
bint _is_root
public str name
public bint strip_path
def __cinit__(self):
self._path = ""
self._path_with_slash = "/"
self._path_len = 0
self._is_root = 1
self.name = ""
self.strip_path = 1
def __init__(self, str path, str name="", bint strip_path=True):
cdef Py_ssize_t path_len
path_len = len(path)
if path_len > 1 and path[path_len - 1] == '/':
path = path[:path_len - 1]
self._path = path
self._path_len = len(path)
self._is_root = 1 if (path == "" or path == "/") else 0
self._path_with_slash = path + "/" if self._is_root == 0 else "/"
self.name = name if name else path
self.strip_path = 1 if strip_path else 0
@property
def path(self) -> str:
return self._path
cpdef bint matches(self, str request_path):
cdef Py_ssize_t req_len
if self._is_root:
return 1
req_len = len(request_path)
if req_len < self._path_len:
return 0
if req_len == self._path_len:
return 1 if request_path == self._path else 0
if request_path[self._path_len] == '/':
return 1 if request_path[:self._path_len] == self._path else 0
return 0
cpdef str get_modified_path(self, str original_path):
cdef str new_path
if not self.strip_path:
return original_path
if self._is_root:
return original_path
new_path = original_path[self._path_len:]
if not new_path:
return "/"
return new_path
def __repr__(self):
return f"FastMountedPath(path={self._path!r}, name={self.name!r})"
def _get_path_len_neg(mount):
return -len(mount.path)
cdef class FastMountManager:
cdef:
list _mounts
int _mount_count
def __cinit__(self):
self._mounts = []
self._mount_count = 0
def __init__(self):
self._mounts = []
self._mount_count = 0
cpdef void add_mount(self, FastMountedPath mount):
self._mounts.append(mount)
self._mounts = sorted(self._mounts, key=_get_path_len_neg, reverse=False)
self._mount_count = len(self._mounts)
cpdef FastMountedPath get_mount(self, str request_path):
cdef:
int i
FastMountedPath mount
for i in range(self._mount_count):
mount = <FastMountedPath>self._mounts[i]
if mount.matches(request_path):
return mount
return None
cpdef bint remove_mount(self, str path):
cdef:
int i
Py_ssize_t path_len
FastMountedPath mount
path_len = len(path)
if path_len > 1 and path[path_len - 1] == '/':
path = path[:path_len - 1]
for i in range(self._mount_count):
mount = <FastMountedPath>self._mounts[i]
if mount._path == path:
del self._mounts[i]
self._mount_count -= 1
return 1
return 0
@property
def mounts(self) -> list:
return list(self._mounts)
@property
def mount_count(self) -> int:
return self._mount_count
cpdef list list_mounts(self):
cdef:
list result = []
FastMountedPath mount
for mount in self._mounts:
result.append({
"path": mount._path,
"name": mount.name,
"strip_path": mount.strip_path,
})
return result
cpdef bint path_matches_prefix(str request_path, str mount_path):
cdef:
Py_ssize_t mount_len = len(mount_path)
Py_ssize_t req_len = len(request_path)
if mount_len == 0 or mount_path == "/":
return 1
if req_len < mount_len:
return 0
if req_len == mount_len:
return 1 if request_path == mount_path else 0
if request_path[mount_len] == '/':
return 1 if request_path[:mount_len] == mount_path else 0
return 0
cpdef str strip_path_prefix(str original_path, str mount_path):
cdef:
Py_ssize_t mount_len = len(mount_path)
str result
if mount_len == 0 or mount_path == "/":
return original_path
result = original_path[mount_len:]
if not result:
return "/"
return result
cpdef tuple match_and_modify_path(str request_path, str mount_path, bint strip_path=True):
cdef:
Py_ssize_t mount_len = len(mount_path)
Py_ssize_t req_len = len(request_path)
bint is_root = 1 if (mount_len == 0 or mount_path == "/") else 0
str modified
if is_root:
return (True, request_path)
if req_len < mount_len:
return (False, None)
if req_len == mount_len:
if request_path == mount_path:
return (True, "/" if strip_path else request_path)
return (False, None)
if request_path[mount_len] == '/' and request_path[:mount_len] == mount_path:
if strip_path:
modified = request_path[mount_len:]
return (True, modified if modified else "/")
return (True, request_path)
return (False, None)
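
Both the Cython and pure-Python managers keep mounts sorted by descending path length, so the most specific prefix wins. A minimal standalone sketch of that routing rule (illustrative only, not the package's API):

```python
from typing import List, Optional, Tuple

def first_match(mounts: List[Tuple[str, str]], request_path: str) -> Optional[str]:
    # longest mount path first, so "/api/v2" is tried before "/api"
    for path, name in sorted(mounts, key=lambda m: len(m[0]), reverse=True):
        if request_path == path or request_path.startswith(path + "/"):
            return name
    return None

mounts = [("/api", "api-v1"), ("/api/v2", "api-v2")]
print(first_match(mounts, "/api/v2/users"))  # api-v2
print(first_match(mounts, "/api/users"))     # api-v1
```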

168
pyserve/_path_matcher_py.py Normal file
View File

@@ -0,0 +1,168 @@
"""
Pure Python fallback for _path_matcher when Cython is not available.
This module provides the same interface as the Cython _path_matcher module,
allowing the application to run without compilation.
"""
from typing import Any, Dict, List, Optional, Tuple
class FastMountedPath:
__slots__ = ("_path", "_path_with_slash", "_path_len", "_is_root", "name", "strip_path")
def __init__(self, path: str, name: str = "", strip_path: bool = True):
if path.endswith("/") and len(path) > 1:
path = path[:-1]
self._path = path
self._path_len = len(path)
self._is_root = path == "" or path == "/"
self._path_with_slash = path + "/" if not self._is_root else "/"
self.name = name or path
self.strip_path = strip_path
@property
def path(self) -> str:
return self._path
def matches(self, request_path: str) -> bool:
if self._is_root:
return True
req_len = len(request_path)
if req_len < self._path_len:
return False
if req_len == self._path_len:
return request_path == self._path
if request_path[self._path_len] == "/":
return request_path[: self._path_len] == self._path
return False
def get_modified_path(self, original_path: str) -> str:
if not self.strip_path:
return original_path
if self._is_root:
return original_path
new_path = original_path[self._path_len :]
if not new_path:
return "/"
return new_path
def __repr__(self) -> str:
return f"FastMountedPath(path={self._path!r}, name={self.name!r})"
class FastMountManager:
__slots__ = ("_mounts", "_mount_count")
def __init__(self) -> None:
self._mounts: List[FastMountedPath] = []
self._mount_count: int = 0
def add_mount(self, mount: FastMountedPath) -> None:
self._mounts.append(mount)
self._mounts.sort(key=lambda m: len(m.path), reverse=True)
self._mount_count = len(self._mounts)
def get_mount(self, request_path: str) -> Optional[FastMountedPath]:
for mount in self._mounts:
if mount.matches(request_path):
return mount
return None
def remove_mount(self, path: str) -> bool:
if path.endswith("/") and len(path) > 1:
path = path[:-1]
for i, mount in enumerate(self._mounts):
if mount._path == path:
del self._mounts[i]
self._mount_count -= 1
return True
return False
@property
def mounts(self) -> List[FastMountedPath]:
return self._mounts.copy()
@property
def mount_count(self) -> int:
return self._mount_count
def list_mounts(self) -> List[Dict[str, Any]]:
return [
{
"path": mount._path,
"name": mount.name,
"strip_path": mount.strip_path,
}
for mount in self._mounts
]
def path_matches_prefix(request_path: str, mount_path: str) -> bool:
mount_len = len(mount_path)
req_len = len(request_path)
if mount_len == 0 or mount_path == "/":
return True
if req_len < mount_len:
return False
if req_len == mount_len:
return request_path == mount_path
if request_path[mount_len] == "/":
return request_path[:mount_len] == mount_path
return False
def strip_path_prefix(original_path: str, mount_path: str) -> str:
mount_len = len(mount_path)
if mount_len == 0 or mount_path == "/":
return original_path
result = original_path[mount_len:]
if not result:
return "/"
return result
def match_and_modify_path(request_path: str, mount_path: str, strip_path: bool = True) -> Tuple[bool, Optional[str]]:
mount_len = len(mount_path)
req_len = len(request_path)
is_root = mount_len == 0 or mount_path == "/"
if is_root:
return (True, request_path)
if req_len < mount_len:
return (False, None)
if req_len == mount_len:
if request_path == mount_path:
return (True, "/" if strip_path else request_path)
return (False, None)
if request_path[mount_len] == "/" and request_path[:mount_len] == mount_path:
if strip_path:
modified = request_path[mount_len:]
return (True, modified if modified else "/")
return (True, request_path)
return (False, None)
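
The fallback mirrors the Cython module's matching rules exactly. A condensed standalone copy of the `match_and_modify_path` logic makes the three cases easy to check (root mount, exact match, prefix followed by `/`):

```python
from typing import Optional, Tuple

def match_and_modify_path(request_path: str, mount_path: str,
                          strip_path: bool = True) -> Tuple[bool, Optional[str]]:
    mount_len = len(mount_path)
    if mount_len == 0 or mount_path == "/":
        return (True, request_path)  # a root mount matches every path
    if len(request_path) < mount_len:
        return (False, None)
    if len(request_path) == mount_len:
        if request_path == mount_path:
            return (True, "/" if strip_path else request_path)
        return (False, None)
    # a longer path must continue with "/" right after the prefix,
    # so "/apiary" does not match a "/api" mount
    if request_path[mount_len] == "/" and request_path[:mount_len] == mount_path:
        if strip_path:
            return (True, request_path[mount_len:] or "/")
        return (True, request_path)
    return (False, None)

print(match_and_modify_path("/api/users", "/api"))  # (True, '/users')
print(match_and_modify_path("/api", "/api"))        # (True, '/')
print(match_and_modify_path("/apiary", "/api"))     # (False, None)
```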

73
pyserve/_wsgi_wrapper.py Normal file
View File

@@ -0,0 +1,73 @@
"""
WSGI Wrapper Module for Process Orchestration.
This module provides a wrapper that allows WSGI applications to be run
via uvicorn by wrapping them with a2wsgi.
The WSGI app path is passed via environment variables:
- PYSERVE_WSGI_APP: The app path (e.g., "myapp:app" or "myapp.main:create_app")
- PYSERVE_WSGI_FACTORY: "1" if the app path points to a factory function
"""
import importlib
import os
from typing import Any, Callable, Optional, Type
WSGIMiddlewareType = Optional[Type[Any]]
WSGI_ADAPTER: Optional[str] = None
WSGIMiddleware: WSGIMiddlewareType = None
try:
from a2wsgi import WSGIMiddleware as _A2WSGIMiddleware
WSGIMiddleware = _A2WSGIMiddleware
WSGI_ADAPTER = "a2wsgi"
except ImportError:
try:
from asgiref.wsgi import WsgiToAsgi as _AsgirefMiddleware
WSGIMiddleware = _AsgirefMiddleware
WSGI_ADAPTER = "asgiref"
except ImportError:
pass
def _load_wsgi_app() -> Callable[..., Any]:
app_path = os.environ.get("PYSERVE_WSGI_APP")
is_factory = os.environ.get("PYSERVE_WSGI_FACTORY", "0") == "1"
if not app_path:
raise RuntimeError("PYSERVE_WSGI_APP environment variable not set. " "This module should only be used by PyServe process orchestration.")
if ":" in app_path:
module_name, attr_name = app_path.rsplit(":", 1)
else:
module_name = app_path
attr_name = "app"
try:
module = importlib.import_module(module_name)
except ImportError as e:
raise RuntimeError(f"Failed to import WSGI module '{module_name}': {e}")
try:
app_or_factory = getattr(module, attr_name)
except AttributeError:
raise RuntimeError(f"Module '{module_name}' has no attribute '{attr_name}'")
if is_factory:
result: Callable[..., Any] = app_or_factory()
return result
loaded_app: Callable[..., Any] = app_or_factory
return loaded_app
def _create_asgi_app() -> Any:
if WSGIMiddleware is None:
raise RuntimeError("No WSGI adapter available. " "Install a2wsgi (recommended) or asgiref: pip install a2wsgi")
wsgi_app = _load_wsgi_app()
return WSGIMiddleware(wsgi_app)
app = _create_asgi_app()
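
The wrapper resolves the application through the same `module:attr` convention uvicorn uses, defaulting the attribute to `app` when no colon is present. A standalone sketch of just the path-parsing step (the `myapp.main:create_app` value is a hypothetical example, not a module shipped with PyServe):

```python
from typing import Tuple

def parse_app_path(app_path: str) -> Tuple[str, str]:
    # "pkg.module:attr" -> ("pkg.module", "attr"); a bare "pkg.module"
    # defaults the attribute name to "app"
    if ":" in app_path:
        module_name, attr_name = app_path.rsplit(":", 1)
        return module_name, attr_name
    return app_path, "app"

print(parse_app_path("myapp.main:create_app"))  # ('myapp.main', 'create_app')
print(parse_app_path("myapp"))                  # ('myapp', 'app')
```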

307
pyserve/asgi_mount.py Normal file
View File

@@ -0,0 +1,307 @@
"""
ASGI Application Mount Module
This module provides functionality to mount external ASGI/WSGI applications
(FastAPI, Flask, Django, etc.) at specified paths within PyServe.
"""
import importlib
import sys
from pathlib import Path
from typing import Any, Callable, Dict, Optional, cast
from starlette.types import ASGIApp, Receive, Scope, Send
from .logging_utils import get_logger
logger = get_logger(__name__)
class ASGIAppLoader:
def __init__(self) -> None:
self._apps: Dict[str, ASGIApp] = {}
self._wsgi_adapters: Dict[str, ASGIApp] = {}
def load_app(
self,
app_path: str,
app_type: str = "asgi",
module_path: Optional[str] = None,
factory: bool = False,
factory_args: Optional[Dict[str, Any]] = None,
) -> Optional[ASGIApp]:
try:
if module_path:
module_dir = Path(module_path).resolve()
if str(module_dir) not in sys.path:
sys.path.insert(0, str(module_dir))
logger.debug(f"Added {module_dir} to sys.path")
if ":" in app_path:
module_name, attr_name = app_path.rsplit(":", 1)
else:
module_name = app_path
attr_name = "app"
module = importlib.import_module(module_name)
app_or_factory = getattr(module, attr_name)
if factory:
factory_args = factory_args or {}
app = app_or_factory(**factory_args)
logger.info(f"Created app from factory: {app_path}")
else:
app = app_or_factory
logger.info(f"Loaded app: {app_path}")
if app_type == "wsgi":
app = self._wrap_wsgi(app)
logger.info(f"Wrapped WSGI app: {app_path}")
self._apps[app_path] = app
return cast(ASGIApp, app)
except ImportError as e:
logger.error(f"Failed to import application {app_path}: {e}")
return None
except AttributeError as e:
logger.error(f"Failed to get attribute from {app_path}: {e}")
return None
except Exception as e:
logger.error(f"Failed to load application {app_path}: {e}")
return None
def _wrap_wsgi(self, wsgi_app: Callable) -> ASGIApp:
try:
from a2wsgi import WSGIMiddleware
return cast(ASGIApp, WSGIMiddleware(wsgi_app))
except ImportError:
logger.warning("a2wsgi not installed, trying asgiref")
try:
from asgiref.wsgi import WsgiToAsgi
return cast(ASGIApp, WsgiToAsgi(wsgi_app))
except ImportError:
logger.error("Neither a2wsgi nor asgiref installed. " "Install with: pip install a2wsgi or pip install asgiref")
raise ImportError("WSGI adapter not available. Install a2wsgi or asgiref.")
def get_app(self, app_path: str) -> Optional[ASGIApp]:
return self._apps.get(app_path)
def reload_app(self, app_path: str, **kwargs: Any) -> Optional[ASGIApp]:
if app_path in self._apps:
del self._apps[app_path]
if ":" in app_path:
module_name, _ = app_path.rsplit(":", 1)
else:
module_name = app_path
if module_name in sys.modules:
importlib.reload(sys.modules[module_name])
return self.load_app(app_path, **kwargs)
class MountedApp:
def __init__(
self,
path: str,
app: ASGIApp,
name: str = "",
strip_path: bool = True,
):
self.path = path.rstrip("/")
self.app = app
self.name = name or path
self.strip_path = strip_path
def matches(self, request_path: str) -> bool:
if self.path == "":
return True
return request_path == self.path or request_path.startswith(f"{self.path}/")
def get_modified_path(self, original_path: str) -> str:
if not self.strip_path:
return original_path
if self.path == "":
return original_path
new_path = original_path[len(self.path) :]
return new_path if new_path else "/"
class ASGIMountManager:
def __init__(self) -> None:
self._mounts: list[MountedApp] = []
self._loader = ASGIAppLoader()
def mount(
self,
path: str,
app: Optional[ASGIApp] = None,
app_path: Optional[str] = None,
app_type: str = "asgi",
module_path: Optional[str] = None,
factory: bool = False,
factory_args: Optional[Dict[str, Any]] = None,
name: str = "",
strip_path: bool = True,
) -> bool:
if app is None and app_path is None:
logger.error("Either 'app' or 'app_path' must be provided")
return False
if app is None:
app = self._loader.load_app(
app_path=app_path, # type: ignore
app_type=app_type,
module_path=module_path,
factory=factory,
factory_args=factory_args,
)
if app is None:
return False
mounted = MountedApp(
path=path,
app=app,
name=name or app_path or "unnamed",
strip_path=strip_path,
)
self._mounts.append(mounted)
self._mounts.sort(key=lambda m: len(m.path), reverse=True)
logger.info(f"Mounted application '{mounted.name}' at path '{path}'")
return True
def unmount(self, path: str) -> bool:
for i, mount in enumerate(self._mounts):
if mount.path == path.rstrip("/"):
del self._mounts[i]
logger.info(f"Unmounted application at path '{path}'")
return True
return False
def get_mount(self, request_path: str) -> Optional[MountedApp]:
for mount in self._mounts:
if mount.matches(request_path):
return mount
return None
async def handle_request(
self,
scope: Scope,
receive: Receive,
send: Send,
) -> bool:
if scope["type"] != "http":
return False
path = scope.get("path", "/")
mount = self.get_mount(path)
if mount is None:
return False
modified_scope = dict(scope)
if mount.strip_path:
modified_scope["path"] = mount.get_modified_path(path)
modified_scope["root_path"] = scope.get("root_path", "") + mount.path
logger.debug(f"Routing request to mounted app '{mount.name}': " f"{path} -> {modified_scope['path']}")
try:
await mount.app(modified_scope, receive, send)
return True
except Exception as e:
logger.error(f"Error in mounted app '{mount.name}': {e}")
raise
@property
def mounts(self) -> list[MountedApp]:
return self._mounts.copy()
def list_mounts(self) -> list[Dict[str, Any]]:
return [
{
"path": mount.path,
"name": mount.name,
"strip_path": mount.strip_path,
}
for mount in self._mounts
]
def create_fastapi_app(
app_path: str,
module_path: Optional[str] = None,
factory: bool = False,
factory_args: Optional[Dict[str, Any]] = None,
) -> Optional[ASGIApp]:
loader = ASGIAppLoader()
return loader.load_app(
app_path=app_path,
app_type="asgi",
module_path=module_path,
factory=factory,
factory_args=factory_args,
)
def create_flask_app(
app_path: str,
module_path: Optional[str] = None,
factory: bool = False,
factory_args: Optional[Dict[str, Any]] = None,
) -> Optional[ASGIApp]:
loader = ASGIAppLoader()
return loader.load_app(
app_path=app_path,
app_type="wsgi",
module_path=module_path,
factory=factory,
factory_args=factory_args,
)
def create_django_app(
settings_module: str,
module_path: Optional[str] = None,
) -> Optional[ASGIApp]:
import os
if module_path:
module_dir = Path(module_path).resolve()
if str(module_dir) not in sys.path:
sys.path.insert(0, str(module_dir))
os.environ.setdefault("DJANGO_SETTINGS_MODULE", settings_module)
try:
from django.core.asgi import get_asgi_application
return cast(ASGIApp, get_asgi_application())
except ImportError as e:
logger.error(f"Failed to load Django application: {e}")
return None
def create_starlette_app(
app_path: str,
module_path: Optional[str] = None,
factory: bool = False,
factory_args: Optional[Dict[str, Any]] = None,
) -> Optional[ASGIApp]:
loader = ASGIAppLoader()
return loader.load_app(
app_path=app_path,
app_type="asgi",
module_path=module_path,
factory=factory,
factory_args=factory_args,
)
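
`handle_request` rewrites the ASGI scope before delegating: when `strip_path` is set, the mount prefix is removed from `path` and appended to `root_path`, which is what the ASGI spec prescribes for sub-applications. A standalone sketch of that rewrite:

```python
from typing import Any, Dict

def rewrite_scope(scope: Dict[str, Any], mount_path: str) -> Dict[str, Any]:
    # shallow copy, as in handle_request, so the original scope is untouched
    new_scope = dict(scope)
    new_scope["path"] = scope["path"][len(mount_path):] or "/"
    new_scope["root_path"] = scope.get("root_path", "") + mount_path
    return new_scope

scope = {"type": "http", "path": "/api/v1/users", "root_path": ""}
print(rewrite_scope(scope, "/api"))  # path '/v1/users', root_path '/api'
```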

View File

@@ -1,38 +1,47 @@
import sys
"""
PyServe CLI - Server entry point
Simple CLI for running the PyServe HTTP server.
For service management, use pyservectl.
"""
import argparse
import sys
from pathlib import Path
from . import PyServeServer, Config, __version__
from . import Config, PyServeServer, __version__
def main() -> None:
parser = argparse.ArgumentParser(
description="PyServe - HTTP web server",
prog="pyserve",
epilog="For service management (start/stop/restart/logs), use: pyservectl",
)
parser.add_argument(
"-c", "--config",
"-c",
"--config",
default="config.yaml",
help="Path to configuration file (default: config.yaml)"
help="Path to configuration file (default: config.yaml)",
)
parser.add_argument(
"--host",
help="Host to bind the server to"
help="Host to bind the server to",
)
parser.add_argument(
"--port",
type=int,
help="Port to bind the server to"
help="Port to bind the server to",
)
parser.add_argument(
"--debug",
action="store_true",
help="Enable debug mode"
help="Enable debug mode",
)
parser.add_argument(
"--version",
action="version",
version=f"%(prog)s {__version__}"
version=f"%(prog)s {__version__}",
)
args = parser.parse_args()

View File

@@ -1,8 +1,10 @@
import yaml
import os
from typing import Dict, Any, List, cast
from dataclasses import dataclass, field
import logging
import os
from dataclasses import dataclass, field
from typing import Any, Dict, List, cast
import yaml
from .logging_utils import setup_logging
@@ -84,7 +86,7 @@ class Config:
@classmethod
def from_yaml(cls, file_path: str) -> "Config":
try:
with open(file_path, 'r', encoding='utf-8') as f:
with open(file_path, "r", encoding="utf-8") as f:
data = yaml.safe_load(f)
return cls._from_dict(data)
@@ -99,133 +101,117 @@ class Config:
def _from_dict(cls, data: Dict[str, Any]) -> "Config":
config = cls()
if 'http' in data:
http_data = data['http']
if "http" in data:
http_data = data["http"]
config.http = HttpConfig(
static_dir=http_data.get('static_dir', config.http.static_dir),
templates_dir=http_data.get('templates_dir', config.http.templates_dir)
static_dir=http_data.get("static_dir", config.http.static_dir),
templates_dir=http_data.get("templates_dir", config.http.templates_dir),
)
if 'server' in data:
server_data = data['server']
if "server" in data:
server_data = data["server"]
config.server = ServerConfig(
host=server_data.get('host', config.server.host),
port=server_data.get('port', config.server.port),
backlog=server_data.get('backlog', config.server.backlog),
default_root=server_data.get('default_root', config.server.default_root),
proxy_timeout=server_data.get('proxy_timeout', config.server.proxy_timeout),
redirect_instructions=server_data.get('redirect_instructions', {})
host=server_data.get("host", config.server.host),
port=server_data.get("port", config.server.port),
backlog=server_data.get("backlog", config.server.backlog),
default_root=server_data.get("default_root", config.server.default_root),
proxy_timeout=server_data.get("proxy_timeout", config.server.proxy_timeout),
redirect_instructions=server_data.get("redirect_instructions", {}),
)
if 'ssl' in data:
ssl_data = data['ssl']
if "ssl" in data:
ssl_data = data["ssl"]
config.ssl = SSLConfig(
enabled=ssl_data.get('enabled', config.ssl.enabled),
cert_file=ssl_data.get('cert_file', config.ssl.cert_file),
key_file=ssl_data.get('key_file', config.ssl.key_file)
enabled=ssl_data.get("enabled", config.ssl.enabled),
cert_file=ssl_data.get("cert_file", config.ssl.cert_file),
key_file=ssl_data.get("key_file", config.ssl.key_file),
)
if 'logging' in data:
log_data = data['logging']
format_data = log_data.get('format', {})
if "logging" in data:
log_data = data["logging"]
format_data = log_data.get("format", {})
global_format = LogFormatConfig(
type=format_data.get('type', 'standard'),
use_colors=format_data.get('use_colors', True),
show_module=format_data.get('show_module', True),
timestamp_format=format_data.get('timestamp_format', '%Y-%m-%d %H:%M:%S')
type=format_data.get("type", "standard"),
use_colors=format_data.get("use_colors", True),
show_module=format_data.get("show_module", True),
timestamp_format=format_data.get("timestamp_format", "%Y-%m-%d %H:%M:%S"),
)
console_data = log_data.get('console', {})
console_format_data = console_data.get('format', {})
console_data = log_data.get("console", {})
console_format_data = console_data.get("format", {})
console_format = LogFormatConfig(
type=console_format_data.get('type', global_format.type),
use_colors=console_format_data.get('use_colors', global_format.use_colors),
show_module=console_format_data.get('show_module', global_format.show_module),
timestamp_format=console_format_data.get('timestamp_format', global_format.timestamp_format)
)
console_config = LogHandlerConfig(
level=console_data.get('level', log_data.get('level', 'INFO')),
format=console_format
type=console_format_data.get("type", global_format.type),
use_colors=console_format_data.get("use_colors", global_format.use_colors),
show_module=console_format_data.get("show_module", global_format.show_module),
timestamp_format=console_format_data.get("timestamp_format", global_format.timestamp_format),
)
console_config = LogHandlerConfig(level=console_data.get("level", log_data.get("level", "INFO")), format=console_format)
files_config = []
if 'log_file' in log_data:
if "log_file" in log_data:
default_file_format = LogFormatConfig(
type=global_format.type,
use_colors=False,
show_module=global_format.show_module,
timestamp_format=global_format.timestamp_format
type=global_format.type, use_colors=False, show_module=global_format.show_module, timestamp_format=global_format.timestamp_format
)
default_file = LogFileConfig(
path=log_data['log_file'],
level=log_data.get('level', 'INFO'),
path=log_data["log_file"],
level=log_data.get("level", "INFO"),
format=default_file_format,
loggers=[],  # an empty list includes all loggers
max_bytes=10 * 1024 * 1024,
backup_count=5
backup_count=5,
)
files_config.append(default_file)
if 'files' in log_data:
for file_data in log_data['files']:
file_format_data = file_data.get('format', {})
if "files" in log_data:
for file_data in log_data["files"]:
file_format_data = file_data.get("format", {})
file_format = LogFormatConfig(
type=file_format_data.get('type', global_format.type),
use_colors=file_format_data.get('use_colors', False),
show_module=file_format_data.get('show_module', global_format.show_module),
timestamp_format=file_format_data.get('timestamp_format', global_format.timestamp_format)
type=file_format_data.get("type", global_format.type),
use_colors=file_format_data.get("use_colors", False),
show_module=file_format_data.get("show_module", global_format.show_module),
timestamp_format=file_format_data.get("timestamp_format", global_format.timestamp_format),
)
file_config = LogFileConfig(
path=file_data.get("path", "./logs/pyserve.log"),
level=file_data.get("level", log_data.get("level", "INFO")),
format=file_format,
loggers=file_data.get("loggers", []),
max_bytes=file_data.get("max_bytes", 10 * 1024 * 1024),
backup_count=file_data.get("backup_count", 5),
)
files_config.append(file_config)
if "show_module" in console_format_data:
print("\033[33mWARNING: Parameter 'show_module' in console.format is in development and may work incorrectly\033[0m")
console_config.format.show_module = console_format_data.get("show_module")
for i, file_data in enumerate(log_data.get("files", [])):
if "format" in file_data and "show_module" in file_data["format"]:
print(f"\033[33mWARNING: Parameter 'show_module' in files[{i}].format is in development and may work incorrectly\033[0m")
if not files_config:
default_file_format = LogFormatConfig(
type=global_format.type, use_colors=False, show_module=global_format.show_module, timestamp_format=global_format.timestamp_format
)
default_file = LogFileConfig(
path="./logs/pyserve.log",
level=log_data.get("level", "INFO"),
format=default_file_format,
loggers=[],
max_bytes=10 * 1024 * 1024,
backup_count=5,
)
files_config.append(default_file)
config.logging = LoggingConfig(
level=log_data.get("level", "INFO"),
console_output=log_data.get("console_output", True),
format=global_format,
console=console_config,
files=files_config,
)
if "extensions" in data:
for ext_data in data["extensions"]:
extension = ExtensionConfig(type=ext_data.get("type", ""), config=ext_data.get("config", {}))
config.extensions.append(extension)
return config
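The loader above resolves every handler level through the same fallback chain: the handler-specific value, then the global logging level, then the built-in default. A minimal sketch of that chain (`resolve_level` is an illustrative name, not a pyserve function):

```python
def resolve_level(console_data: dict, log_data: dict) -> str:
    # Handler-specific level wins, then the global logging level,
    # then the built-in default "INFO" -- mirroring
    # console_data.get("level", log_data.get("level", "INFO")) above.
    return console_data.get("level", log_data.get("level", "INFO"))
```

The same pattern is applied to format options, file paths, and rotation settings throughout the loader.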
@@ -245,14 +231,14 @@ class Config:
if not (1 <= self.server.port <= 65535):
errors.append(f"Invalid port: {self.server.port}")
valid_log_levels = ["DEBUG", "INFO", "WARNING", "ERROR", "CRITICAL"]
if self.logging.level.upper() not in valid_log_levels:
errors.append(f"Invalid logging level: {self.logging.level}")
if self.logging.console.level.upper() not in valid_log_levels:
errors.append(f"Invalid console logging level: {self.logging.console.level}")
valid_format_types = ["standard", "json"]
if self.logging.format.type not in valid_format_types:
errors.append(f"Invalid logging format type: {self.logging.format.type}")
@@ -283,40 +269,40 @@ class Config:
def setup_logging(self) -> None:
config_dict = {
"level": self.logging.level,
"console_output": self.logging.console_output,
"format": {
"type": self.logging.format.type,
"use_colors": self.logging.format.use_colors,
"show_module": self.logging.format.show_module,
"timestamp_format": self.logging.format.timestamp_format,
},
"console": {
"level": self.logging.console.level,
"format": {
"type": self.logging.console.format.type,
"use_colors": self.logging.console.format.use_colors,
"show_module": self.logging.console.format.show_module,
"timestamp_format": self.logging.console.format.timestamp_format,
},
},
"files": [],
}
for file_config in self.logging.files:
file_dict = {
"path": file_config.path,
"level": file_config.level,
"loggers": file_config.loggers,
"max_bytes": file_config.max_bytes,
"backup_count": file_config.backup_count,
"format": {
"type": file_config.format.type,
"use_colors": file_config.format.use_colors,
"show_module": file_config.format.show_module,
"timestamp_format": file_config.format.timestamp_format,
},
}
cast(List[Dict[str, Any]], config_dict["files"]).append(file_dict)
setup_logging(config_dict)
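The dictionary passed to `setup_logging` mirrors a YAML `logging` section. A hypothetical example assembled from the field names the loader above reads (the exact schema may differ; defaults shown are the loader's fallbacks):

```yaml
logging:
  level: INFO
  console_output: true
  format:
    type: standard        # or "json"
    use_colors: true
    show_module: false
  console:
    level: DEBUG
  files:
    - path: ./logs/pyserve.log
      level: INFO
      loggers: []          # empty list = include all loggers
      max_bytes: 10485760  # 10 MiB before rotation
      backup_count: 5
```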

pyserve/ctl/__init__.py Normal file

@@ -0,0 +1,26 @@
"""
PyServeCtl - Service management CLI
Docker-compose-like tool for managing PyServe services.
Usage:
pyservectl [OPTIONS] COMMAND [ARGS]...
Commands:
init Initialize a new project
config Configuration management
up Start all services
down Stop all services
start Start specific services
stop Stop specific services
restart Restart services
ps Show service status
logs View service logs
top Live monitoring dashboard
health Check service health
scale Scale services
"""
from .main import cli, main
__all__ = ["cli", "main"]

pyserve/ctl/_daemon.py Normal file

@@ -0,0 +1,93 @@
"""
PyServe Daemon Process
Runs pyserve services in background mode.
"""
import argparse
import asyncio
import logging
import os
import signal
import sys
from pathlib import Path
from types import FrameType
from typing import Optional
def main() -> None:
parser = argparse.ArgumentParser(description="PyServe Daemon")
parser.add_argument("--config", required=True, help="Configuration file path")
parser.add_argument("--state-dir", required=True, help="State directory path")
parser.add_argument("--services", default=None, help="Comma-separated list of services")
parser.add_argument("--scale", action="append", default=[], help="Scale overrides (name=workers)")
parser.add_argument("--force-recreate", action="store_true", help="Force recreate services")
args = parser.parse_args()
config_path = Path(args.config)
state_dir = Path(args.state_dir)
services = args.services.split(",") if args.services else None
scale_map = {}
for scale in args.scale:
name, workers = scale.split("=")
scale_map[name] = int(workers)
from ..config import Config
config = Config.from_yaml(str(config_path))
from .state import StateManager
state_manager = StateManager(state_dir)
log_file = state_dir / "logs" / "daemon.log"
log_file.parent.mkdir(parents=True, exist_ok=True)
logging.basicConfig(
level=logging.INFO,
format="%(asctime)s - %(name)s - %(levelname)s - %(message)s",
handlers=[
logging.FileHandler(log_file),
],
)
logger = logging.getLogger("pyserve.daemon")
pid_file = state_dir / "pyserve.pid"
pid_file.write_text(str(os.getpid()))
logger.info(f"Starting daemon with PID {os.getpid()}")
from ._runner import ServiceRunner
runner = ServiceRunner(config, state_manager)
def signal_handler(signum: int, frame: Optional[FrameType]) -> None:
logger.info(f"Received signal {signum}, shutting down...")
runner.stop()
signal.signal(signal.SIGTERM, signal_handler)
signal.signal(signal.SIGINT, signal_handler)
try:
asyncio.run(
runner.start(
services=services,
scale_map=scale_map,
force_recreate=args.force_recreate,
)
)
except Exception as e:
logger.error(f"Daemon error: {e}")
sys.exit(1)
finally:
if pid_file.exists():
pid_file.unlink()
logger.info("Daemon stopped")
if __name__ == "__main__":
main()
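Each `--scale` override arrives as a repeated `name=workers` string. The parsing step in `main` can be sketched standalone, with slightly stricter error handling (`parse_scale` is illustrative, not part of pyserve):

```python
def parse_scale(overrides: list[str]) -> dict[str, int]:
    # Each entry is "name=workers", as accepted by --scale above.
    scale_map: dict[str, int] = {}
    for item in overrides:
        name, sep, workers = item.partition("=")
        if not sep or not workers.isdigit():
            raise ValueError(f"invalid scale override: {item!r}")
        scale_map[name] = int(workers)
    return scale_map
```

Using `partition` instead of `split` keeps a malformed entry like `"api"` from raising a bare unpacking error.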

pyserve/ctl/_runner.py Normal file

@@ -0,0 +1,389 @@
"""
PyServe Service Runner
Handles starting, stopping, and managing services.
Integrates with ProcessManager for actual process management.
"""
import asyncio
import os
import sys
import time
from dataclasses import dataclass, field
from typing import Dict, List, Optional
from ..config import Config
from ..process_manager import ProcessConfig, ProcessManager, ProcessState
from .state import StateManager
@dataclass
class ServiceDefinition:
name: str
path: str
app_path: str
app_type: str = "asgi"
module_path: Optional[str] = None
workers: int = 1
health_check_path: str = "/health"
health_check_interval: float = 10.0
health_check_timeout: float = 5.0
health_check_retries: int = 3
max_restart_count: int = 5
restart_delay: float = 1.0
shutdown_timeout: float = 30.0
strip_path: bool = True
env: Dict[str, str] = field(default_factory=dict)
class ServiceRunner:
def __init__(self, config: Config, state_manager: StateManager):
self.config = config
self.state_manager = state_manager
self._process_manager: Optional[ProcessManager] = None
self._services: Dict[str, ServiceDefinition] = {}
self._running = False
self._parse_services()
def _parse_services(self) -> None:
for ext in self.config.extensions:
if ext.type == "process_orchestration":
apps = ext.config.get("apps", [])
for app_config in apps:
service = ServiceDefinition(
name=app_config.get("name", "unnamed"),
path=app_config.get("path", "/"),
app_path=app_config.get("app_path", ""),
app_type=app_config.get("app_type", "asgi"),
module_path=app_config.get("module_path"),
workers=app_config.get("workers", 1),
health_check_path=app_config.get("health_check_path", "/health"),
health_check_interval=app_config.get("health_check_interval", 10.0),
health_check_timeout=app_config.get("health_check_timeout", 5.0),
health_check_retries=app_config.get("health_check_retries", 3),
max_restart_count=app_config.get("max_restart_count", 5),
restart_delay=app_config.get("restart_delay", 1.0),
shutdown_timeout=app_config.get("shutdown_timeout", 30.0),
strip_path=app_config.get("strip_path", True),
env=app_config.get("env", {}),
)
self._services[service.name] = service
def get_services(self) -> Dict[str, ServiceDefinition]:
return self._services.copy()
def get_service(self, name: str) -> Optional[ServiceDefinition]:
return self._services.get(name)
async def start(
self,
services: Optional[List[str]] = None,
scale_map: Optional[Dict[str, int]] = None,
force_recreate: bool = False,
wait_healthy: bool = False,
timeout: int = 60,
) -> None:
from .output import console, print_error, print_info, print_success
scale_map = scale_map or {}
target_services = services or list(self._services.keys())
if not target_services:
print_info("No services configured. Add services to your config.yaml")
return
for name in target_services:
if name not in self._services:
print_error(f"Service '{name}' not found in configuration")
return
port_range = (9000, 9999)
for ext in self.config.extensions:
if ext.type == "process_orchestration":
port_range = tuple(ext.config.get("port_range", [9000, 9999]))
break
self._process_manager = ProcessManager(
port_range=port_range,
health_check_enabled=True,
)
await self._process_manager.start()
self._running = True
for name in target_services:
service = self._services[name]
workers = scale_map.get(name, service.workers)
proc_config = ProcessConfig(
name=name,
app_path=service.app_path,
app_type=service.app_type,
workers=workers,
module_path=service.module_path,
health_check_enabled=True,
health_check_path=service.health_check_path,
health_check_interval=service.health_check_interval,
health_check_timeout=service.health_check_timeout,
health_check_retries=service.health_check_retries,
max_restart_count=service.max_restart_count,
restart_delay=service.restart_delay,
shutdown_timeout=service.shutdown_timeout,
env=service.env,
)
try:
await self._process_manager.register(proc_config)
success = await self._process_manager.start_process(name)
if success:
info = self._process_manager.get_process(name)
if info:
self.state_manager.update_service(
name,
state="running",
pid=info.pid,
port=info.port,
workers=workers,
started_at=time.time(),
)
print_success(f"Started service: {name}")
else:
self.state_manager.update_service(name, state="failed")
print_error(f"Failed to start service: {name}")
except Exception as e:
print_error(f"Error starting {name}: {e}")
self.state_manager.update_service(name, state="failed")
if wait_healthy:
print_info("Waiting for services to be healthy...")
await self._wait_healthy(target_services, timeout)
console.print("\n[bold]Services running. Press Ctrl+C to stop.[/bold]\n")
try:
while self._running:
await asyncio.sleep(1)
await self._sync_state()
except asyncio.CancelledError:
pass
finally:
await self.stop_all()
async def _sync_state(self) -> None:
if not self._process_manager:
return
for name, info in self._process_manager.get_all_processes().items():
state_str = info.state.value
health_status = "healthy" if info.health_check_failures == 0 else "unhealthy"
self.state_manager.update_service(
name,
state=state_str,
pid=info.pid,
port=info.port,
)
service_state = self.state_manager.get_service(name)
if service_state:
service_state.health.status = health_status
service_state.health.failures = info.health_check_failures
self.state_manager.save()
async def _wait_healthy(self, services: List[str], timeout: int) -> None:
from .output import print_info, print_warning
start_time = time.time()
while time.time() - start_time < timeout:
all_healthy = True
for name in services:
if not self._process_manager:
continue
info = self._process_manager.get_process(name)
if not info or info.state != ProcessState.RUNNING:
all_healthy = False
break
if all_healthy:
print_info("All services healthy")
return
await asyncio.sleep(1)
print_warning("Timeout waiting for services to become healthy")
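`_wait_healthy` is a poll-until-deadline loop. The same shape as a generic async helper (a sketch under that framing, not a pyserve API):

```python
import asyncio
import time
from typing import Callable

async def wait_until(predicate: Callable[[], bool], timeout: float,
                     interval: float = 1.0) -> bool:
    # Poll until predicate() holds or the deadline passes,
    # mirroring _wait_healthy's loop above.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return True
        await asyncio.sleep(interval)
    return False
```

`time.monotonic()` is preferred over `time.time()` here because wall-clock adjustments would otherwise shift the deadline.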
async def stop_all(self, timeout: int = 30) -> None:
from .output import print_info
self._running = False
if self._process_manager:
print_info("Stopping all services...")
await self._process_manager.stop()
self._process_manager = None
for name in self._services:
self.state_manager.update_service(
name,
state="stopped",
pid=None,
)
def stop(self) -> None:
self._running = False
async def start_service(self, name: str, timeout: int = 60) -> bool:
from .output import print_error
service = self._services.get(name)
if not service:
print_error(f"Service '{name}' not found")
return False
if not self._process_manager:
self._process_manager = ProcessManager()
await self._process_manager.start()
proc_config = ProcessConfig(
name=name,
app_path=service.app_path,
app_type=service.app_type,
workers=service.workers,
module_path=service.module_path,
health_check_enabled=True,
health_check_path=service.health_check_path,
env=service.env,
)
try:
existing = self._process_manager.get_process(name)
if not existing:
await self._process_manager.register(proc_config)
success = await self._process_manager.start_process(name)
if success:
info = self._process_manager.get_process(name)
self.state_manager.update_service(
name,
state="running",
pid=info.pid if info else None,
port=info.port if info else 0,
started_at=time.time(),
)
return success
except Exception as e:
print_error(f"Error starting {name}: {e}")
return False
async def stop_service(self, name: str, timeout: int = 30, force: bool = False) -> bool:
if not self._process_manager:
self.state_manager.update_service(name, state="stopped", pid=None)
return True
try:
success = await self._process_manager.stop_process(name)
if success:
self.state_manager.update_service(
name,
state="stopped",
pid=None,
)
return success
except Exception as e:
from .output import print_error
print_error(f"Error stopping {name}: {e}")
return False
async def restart_service(self, name: str, timeout: int = 60) -> bool:
if not self._process_manager:
return False
try:
self.state_manager.update_service(name, state="restarting")
success = await self._process_manager.restart_process(name)
if success:
info = self._process_manager.get_process(name)
self.state_manager.update_service(
name,
state="running",
pid=info.pid if info else None,
port=info.port if info else 0,
started_at=time.time(),
)
return success
except Exception as e:
from .output import print_error
print_error(f"Error restarting {name}: {e}")
return False
async def scale_service(self, name: str, workers: int, timeout: int = 60, wait: bool = True) -> bool:
# For now, scaling requires a restart with the new worker count;
# in the future this could hot-reload workers without a full restart
service = self._services.get(name)
if not service:
return False
# Update service definition
service.workers = workers
# Restart with new configuration
return await self.restart_service(name, timeout)
def start_daemon(
self,
services: Optional[List[str]] = None,
scale_map: Optional[Dict[str, int]] = None,
force_recreate: bool = False,
) -> int:
import subprocess
cmd = [
sys.executable,
"-m",
"pyserve.ctl._daemon",
"--config",
str(self.state_manager.state_dir.parent / "config.yaml"),
"--state-dir",
str(self.state_manager.state_dir),
]
if services:
cmd.extend(["--services", ",".join(services)])
if scale_map:
for name, workers in scale_map.items():
cmd.extend(["--scale", f"{name}={workers}"])
if force_recreate:
cmd.append("--force-recreate")
env = os.environ.copy()
process = subprocess.Popen(
cmd,
env=env,
stdout=subprocess.DEVNULL,
stderr=subprocess.DEVNULL,
start_new_session=True,
)
return process.pid
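`start_daemon` above spawns a detached child running the daemon module. The argv assembly can be sketched standalone (assuming the module path `pyserve.ctl._daemon`, matching the new `pyserve/ctl/_daemon.py` file in this changeset):

```python
import sys
from typing import Dict, List, Optional

def build_daemon_cmd(
    config: str,
    state_dir: str,
    services: Optional[List[str]] = None,
    scale_map: Optional[Dict[str, int]] = None,
    force_recreate: bool = False,
) -> List[str]:
    # Same argv shape start_daemon builds before subprocess.Popen.
    cmd = [sys.executable, "-m", "pyserve.ctl._daemon",
           "--config", config, "--state-dir", state_dir]
    if services:
        cmd += ["--services", ",".join(services)]
    for name, workers in (scale_map or {}).items():
        cmd += ["--scale", f"{name}={workers}"]
    if force_recreate:
        cmd.append("--force-recreate")
    return cmd
```

`start_new_session=True` in the real call detaches the child into its own process group, so it survives the CLI exiting.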


@@ -0,0 +1,25 @@
from .config import config_cmd
from .down import down_cmd
from .health import health_cmd
from .init import init_cmd
from .logs import logs_cmd
from .scale import scale_cmd
from .service import restart_cmd, start_cmd, stop_cmd
from .status import ps_cmd
from .top import top_cmd
from .up import up_cmd
__all__ = [
"init_cmd",
"config_cmd",
"up_cmd",
"down_cmd",
"start_cmd",
"stop_cmd",
"restart_cmd",
"ps_cmd",
"logs_cmd",
"top_cmd",
"health_cmd",
"scale_cmd",
]


@@ -0,0 +1,419 @@
"""
pyserve config - Configuration management commands
"""
import json
from pathlib import Path
from typing import Any, Optional
import click
import yaml
@click.group("config")
def config_cmd() -> None:
"""
Configuration management commands.
\b
Commands:
validate Validate configuration file
show Display current configuration
get Get a specific configuration value
set Set a configuration value
diff Compare two configuration files
"""
pass
@config_cmd.command("validate")
@click.option(
"-c",
"--config",
"config_file",
default=None,
help="Path to configuration file",
)
@click.option(
"--strict",
is_flag=True,
help="Enable strict validation (warn on unknown fields)",
)
@click.pass_obj
def validate_cmd(ctx: Any, config_file: Optional[str], strict: bool) -> None:
"""
Validate a configuration file.
Checks for syntax errors, missing required fields, and invalid values.
\b
Examples:
pyserve config validate
pyserve config validate -c production.yaml
pyserve config validate --strict
"""
from ..output import console, print_error, print_success, print_warning
config_path = Path(config_file or ctx.config_file)
if not config_path.exists():
print_error(f"Configuration file not found: {config_path}")
raise click.Abort()
console.print(f"Validating [cyan]{config_path}[/cyan]...")
try:
with open(config_path) as f:
data = yaml.safe_load(f)
if data is None:
print_error("Configuration file is empty")
raise click.Abort()
from ...config import Config
config = Config.from_yaml(str(config_path))
errors = []
warnings = []
if not (1 <= config.server.port <= 65535):
errors.append(f"Invalid server port: {config.server.port}")
valid_levels = ["DEBUG", "INFO", "WARNING", "ERROR", "CRITICAL"]
if config.logging.level.upper() not in valid_levels:
errors.append(f"Invalid logging level: {config.logging.level}")
if config.ssl.enabled:
if not Path(config.ssl.cert_file).exists():
warnings.append(f"SSL cert file not found: {config.ssl.cert_file}")
if not Path(config.ssl.key_file).exists():
warnings.append(f"SSL key file not found: {config.ssl.key_file}")
valid_extension_types = [
"routing",
"process_orchestration",
"asgi_mount",
]
for ext in config.extensions:
if ext.type not in valid_extension_types:
warnings.append(f"Unknown extension type: {ext.type}")
if strict:
known_top_level = {"http", "server", "ssl", "logging", "extensions"}
for key in data.keys():
if key not in known_top_level:
warnings.append(f"Unknown top-level field: {key}")
if errors:
for error in errors:
print_error(error)
raise click.Abort()
if warnings:
for warning in warnings:
print_warning(warning)
print_success("Configuration is valid!")
except yaml.YAMLError as e:
print_error(f"YAML syntax error: {e}")
raise click.Abort()
except Exception as e:
print_error(f"Validation error: {e}")
raise click.Abort()
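The hard errors `validate` checks for reduce to two predicates; a standalone sketch (`validate_basics` is illustrative, not a pyserve function):

```python
from typing import List

VALID_LEVELS = {"DEBUG", "INFO", "WARNING", "ERROR", "CRITICAL"}

def validate_basics(port: int, level: str) -> List[str]:
    # Port must be a valid TCP port; level must be a known logging
    # level, compared case-insensitively as in validate_cmd above.
    errors = []
    if not (1 <= port <= 65535):
        errors.append(f"Invalid server port: {port}")
    if level.upper() not in VALID_LEVELS:
        errors.append(f"Invalid logging level: {level}")
    return errors
```

Missing SSL files and unknown extension types are treated as warnings rather than errors, so they never abort validation.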
@config_cmd.command("show")
@click.option(
"-c",
"--config",
"config_file",
default=None,
help="Path to configuration file",
)
@click.option(
"--format",
"output_format",
type=click.Choice(["yaml", "json", "table"]),
default="yaml",
help="Output format",
)
@click.option(
"--section",
"section",
default=None,
help="Show only a specific section (e.g., server, logging)",
)
@click.pass_obj
def show_cmd(ctx: Any, config_file: Optional[str], output_format: str, section: Optional[str]) -> None:
"""
Display current configuration.
\b
Examples:
pyserve config show
pyserve config show --format json
pyserve config show --section server
"""
from ..output import console, print_error
config_path = Path(config_file or ctx.config_file)
if not config_path.exists():
print_error(f"Configuration file not found: {config_path}")
raise click.Abort()
try:
with open(config_path) as f:
data = yaml.safe_load(f)
if section:
if section in data:
data = {section: data[section]}
else:
print_error(f"Section '{section}' not found in configuration")
raise click.Abort()
if output_format == "yaml":
from rich.syntax import Syntax
yaml_str = yaml.dump(data, default_flow_style=False, sort_keys=False)
syntax = Syntax(yaml_str, "yaml", theme="monokai", line_numbers=False)
console.print(syntax)
elif output_format == "json":
from rich.syntax import Syntax
json_str = json.dumps(data, indent=2)
syntax = Syntax(json_str, "json", theme="monokai", line_numbers=False)
console.print(syntax)
elif output_format == "table":
from rich.tree import Tree
def build_tree(data: Any, tree: Any) -> None:
if isinstance(data, dict):
for key, value in data.items():
if isinstance(value, (dict, list)):
branch = tree.add(f"[cyan]{key}[/cyan]")
build_tree(value, branch)
else:
tree.add(f"[cyan]{key}[/cyan]: [green]{value}[/green]")
elif isinstance(data, list):
for i, item in enumerate(data):
if isinstance(item, (dict, list)):
branch = tree.add(f"[dim][{i}][/dim]")
build_tree(item, branch)
else:
tree.add(f"[dim][{i}][/dim] [green]{item}[/green]")
tree = Tree(f"[bold]Configuration: {config_path}[/bold]")
build_tree(data, tree)
console.print(tree)
except Exception as e:
print_error(f"Error reading configuration: {e}")
raise click.Abort()
@config_cmd.command("get")
@click.argument("key")
@click.option(
"-c",
"--config",
"config_file",
default=None,
help="Path to configuration file",
)
@click.pass_obj
def get_cmd(ctx: Any, key: str, config_file: Optional[str]) -> None:
"""
Get a specific configuration value.
Use dot notation to access nested values.
\b
Examples:
pyserve config get server.port
pyserve config get logging.level
pyserve config get extensions.0.type
"""
from ..output import console, print_error
config_path = Path(config_file or ctx.config_file)
if not config_path.exists():
print_error(f"Configuration file not found: {config_path}")
raise click.Abort()
try:
with open(config_path) as f:
data = yaml.safe_load(f)
value = data
for part in key.split("."):
if isinstance(value, dict):
if part in value:
value = value[part]
else:
print_error(f"Key '{key}' not found")
raise click.Abort()
elif isinstance(value, list):
try:
index = int(part)
value = value[index]
except (ValueError, IndexError):
print_error(f"Invalid index '{part}' in key '{key}'")
raise click.Abort()
else:
print_error(f"Cannot access '{part}' in {type(value).__name__}")
raise click.Abort()
if isinstance(value, (dict, list)):
console.print(yaml.dump(value, default_flow_style=False))
else:
console.print(str(value))
except Exception as e:
print_error(f"Error: {e}")
raise click.Abort()
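The dot-notation traversal in `get` generalizes to a small helper (a sketch; `get_by_path` is not part of the CLI):

```python
from typing import Any

def get_by_path(data: Any, key: str) -> Any:
    # Walk "a.b.0.c"-style keys: dict fields by name, list elements
    # by integer index, as `pyserve config get` does above.
    value = data
    for part in key.split("."):
        if isinstance(value, dict):
            value = value[part]
        elif isinstance(value, list):
            value = value[int(part)]
        else:
            raise KeyError(f"cannot descend into {type(value).__name__} at {part!r}")
    return value
```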
@config_cmd.command("set")
@click.argument("key")
@click.argument("value")
@click.option(
"-c",
"--config",
"config_file",
default=None,
help="Path to configuration file",
)
@click.pass_obj
def set_cmd(ctx: Any, key: str, value: str, config_file: Optional[str]) -> None:
"""
Set a configuration value.
Use dot notation to access nested values.
\b
Examples:
pyserve config set server.port 8080
pyserve config set logging.level DEBUG
"""
from ..output import print_error, print_success
config_path = Path(config_file or ctx.config_file)
if not config_path.exists():
print_error(f"Configuration file not found: {config_path}")
raise click.Abort()
try:
with open(config_path) as f:
data = yaml.safe_load(f)
parsed_value: Any
if value.lower() == "true":
parsed_value = True
elif value.lower() == "false":
parsed_value = False
elif value.isdigit():
parsed_value = int(value)
else:
try:
parsed_value = float(value)
except ValueError:
parsed_value = value
parts = key.split(".")
current = data
for part in parts[:-1]:
if isinstance(current, dict):
if part not in current:
current[part] = {}
current = current[part]
elif isinstance(current, list):
index = int(part)
current = current[index]
final_key = parts[-1]
if isinstance(current, dict):
current[final_key] = parsed_value
elif isinstance(current, list):
current[int(final_key)] = parsed_value
with open(config_path, "w") as f:
yaml.dump(data, f, default_flow_style=False, sort_keys=False)
print_success(f"Set {key} = {parsed_value}")
except Exception as e:
print_error(f"Error: {e}")
raise click.Abort()
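`set` coerces its string VALUE argument before writing it back to YAML. The same heuristic as a standalone function (`coerce` is an illustrative name):

```python
from typing import Any

def coerce(value: str) -> Any:
    # Booleans first, then integers, then floats,
    # falling back to the raw string -- as in set_cmd above.
    if value.lower() == "true":
        return True
    if value.lower() == "false":
        return False
    if value.isdigit():
        return int(value)
    try:
        return float(value)
    except ValueError:
        return value
```

Note one quirk inherited from the command: negative integers fail `isdigit()` and come back as floats (`"-3"` becomes `-3.0`).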
@config_cmd.command("diff")
@click.argument("file1", type=click.Path(exists=True))
@click.argument("file2", type=click.Path(exists=True))
def diff_cmd(file1: str, file2: str) -> None:
"""
Compare two configuration files.
\b
Examples:
pyserve config diff config.yaml production.yaml
"""
from ..output import console, print_error
try:
with open(file1) as f:
data1 = yaml.safe_load(f)
with open(file2) as f:
data2 = yaml.safe_load(f)
def compare_dicts(d1: Any, d2: Any, path: str = "") -> list[tuple[str, str, Any, Any]]:
differences: list[tuple[str, str, Any, Any]] = []
all_keys = set(d1.keys() if d1 else []) | set(d2.keys() if d2 else [])
for key in sorted(all_keys):
current_path = f"{path}.{key}" if path else key
v1 = d1.get(key) if d1 else None
v2 = d2.get(key) if d2 else None
if key not in (d1 or {}):
differences.append(("added", current_path, None, v2))
elif key not in (d2 or {}):
differences.append(("removed", current_path, v1, None))
elif isinstance(v1, dict) and isinstance(v2, dict):
differences.extend(compare_dicts(v1, v2, current_path))
elif v1 != v2:
differences.append(("changed", current_path, v1, v2))
return differences
differences = compare_dicts(data1, data2)
if not differences:
console.print("[green]Files are identical[/green]")
return
console.print(f"\n[bold]Differences between {file1} and {file2}:[/bold]\n")
for diff_type, path, v1, v2 in differences:
if diff_type == "added":
console.print(f" [green]+ {path}: {v2}[/green]")
elif diff_type == "removed":
console.print(f" [red]- {path}: {v1}[/red]")
elif diff_type == "changed":
console.print(f" [yellow]~ {path}:[/yellow]")
console.print(f" [red]- {v1}[/red]")
console.print(f" [green]+ {v2}[/green]")
console.print()
except Exception as e:
print_error(f"Error: {e}")
raise click.Abort()


@@ -0,0 +1,123 @@
"""
pyserve down - Stop all services
"""
import signal
import time
from pathlib import Path
from typing import Any, cast
import click
@click.command("down")
@click.option(
"--timeout",
"timeout",
default=30,
type=int,
help="Timeout in seconds for graceful shutdown",
)
@click.option(
"-v",
"--volumes",
is_flag=True,
help="Remove volumes/data",
)
@click.option(
"--remove-orphans",
is_flag=True,
help="Remove orphaned services",
)
@click.pass_obj
def down_cmd(
ctx: Any,
timeout: int,
volumes: bool,
remove_orphans: bool,
) -> None:
"""
Stop and remove all services.
\b
Examples:
pyserve down # Stop all services
pyserve down --timeout 60 # Extended shutdown timeout
pyserve down -v # Remove volumes too
"""
from ..output import console, print_error, print_info, print_success, print_warning
from ..state import StateManager
state_manager = StateManager(Path(".pyserve"), ctx.project)
if state_manager.is_daemon_running():
daemon_pid = state_manager.get_daemon_pid()
if daemon_pid is None:
print_info("Daemon PID is unknown; clearing stale state")
state_manager.clear_daemon_pid()
return
console.print(f"[bold]Stopping PyServe daemon (PID: {daemon_pid})...[/bold]")
try:
import os
# daemon_pid is narrowed to int above, so cast() is unnecessary
os.kill(daemon_pid, signal.SIGTERM)
start_time = time.time()
while time.time() - start_time < timeout:
try:
# Signal 0 only checks whether the process still exists
os.kill(daemon_pid, 0)
time.sleep(0.5)
except ProcessLookupError:
break
else:
print_warning("Graceful shutdown timed out, forcing...")
try:
os.kill(daemon_pid, signal.SIGKILL)
except ProcessLookupError:
pass
state_manager.clear_daemon_pid()
print_success("PyServe daemon stopped")
except ProcessLookupError:
print_info("Daemon was not running")
state_manager.clear_daemon_pid()
except PermissionError:
print_error("Permission denied to stop daemon")
raise click.Abort()
else:
services = state_manager.get_all_services()
if not services:
print_info("No services are running")
return
console.print("[bold]Stopping services...[/bold]")
from ...config import Config
from .._runner import ServiceRunner
config_path = Path(ctx.config_file)
if config_path.exists():
config = Config.from_yaml(str(config_path))
else:
config = Config()
runner = ServiceRunner(config, state_manager)
import asyncio
try:
asyncio.run(runner.stop_all(timeout=timeout))
print_success("All services stopped")
except Exception as e:
print_error(f"Error stopping services: {e}")
if volumes:
console.print("Cleaning up state...")
state_manager.clear()
print_info("State cleared")
if remove_orphans:
# This would remove services that are in state but not in config
pass


@@ -0,0 +1,161 @@
"""
pyserve health - Check health of services
"""
import asyncio
from pathlib import Path
from typing import Any
import click
@click.command("health")
@click.argument("services", nargs=-1)
@click.option(
"--timeout",
"timeout",
default=5,
type=int,
help="Health check timeout in seconds",
)
@click.option(
"--format",
"output_format",
type=click.Choice(["table", "json"]),
default="table",
help="Output format",
)
@click.pass_obj
def health_cmd(ctx: Any, services: tuple[str, ...], timeout: int, output_format: str) -> None:
"""
Check health of services.
Performs active health checks on running services.
\b
Examples:
pyserve health # Check all services
pyserve health api admin # Check specific services
pyserve health --format json # JSON output
"""
from ..output import console, print_error, print_info
from ..state import StateManager
state_manager = StateManager(Path(".pyserve"), ctx.project)
all_services = state_manager.get_all_services()
if services:
all_services = {k: v for k, v in all_services.items() if k in services}
if not all_services:
print_info("No services to check")
return
results = asyncio.run(_check_health(all_services, timeout))
if output_format == "json":
import json
console.print(json.dumps(results, indent=2))
return
from rich.table import Table
from ..output import format_health
table = Table(show_header=True, header_style="bold")
table.add_column("SERVICE", style="cyan")
table.add_column("HEALTH")
table.add_column("CHECKS", justify="right")
table.add_column("LAST CHECK", style="dim")
table.add_column("RESPONSE TIME", justify="right")
for name, result in results.items():
health_str = format_health(result["status"])
checks = f"{result.get('successes', 0)}/{result.get('total', 0)}"
last_check = result.get("last_check", "-")
response_time = f"{result['response_time_ms']:.0f}ms" if result.get("response_time_ms") else "-"
table.add_row(name, health_str, checks, last_check, response_time)
console.print()
console.print(table)
console.print()
healthy = sum(1 for r in results.values() if r["status"] == "healthy")
unhealthy = sum(1 for r in results.values() if r["status"] == "unhealthy")
if unhealthy:
print_error(f"{unhealthy} service(s) unhealthy")
raise SystemExit(1)
else:
from ..output import print_success
print_success(f"All {healthy} service(s) healthy")
async def _check_health(services: dict, timeout: int) -> dict:
import time
try:
import httpx
except ImportError:
return {name: {"status": "unknown", "successes": 0, "total": 0, "error": "httpx not installed"} for name in services}
results = {}
for name, service in services.items():
if service.state != "running" or not service.port:
results[name] = {
"status": "unknown",
"successes": 0,
"total": 0,
"error": "Service not running",
}
continue
health_path = "/health"
url = f"http://127.0.0.1:{service.port}{health_path}"
start_time = time.time()
try:
async with httpx.AsyncClient(timeout=timeout) as client:
resp = await client.get(url)
response_time = (time.time() - start_time) * 1000
if resp.status_code < 500:
results[name] = {
"status": "healthy",
"successes": 1,
"total": 1,
"response_time_ms": response_time,
"last_check": "just now",
"status_code": resp.status_code,
}
else:
results[name] = {
"status": "unhealthy",
"successes": 0,
"total": 1,
"response_time_ms": response_time,
"last_check": "just now",
"status_code": resp.status_code,
}
except httpx.TimeoutException:
results[name] = {
"status": "unhealthy",
"successes": 0,
"total": 1,
"error": "timeout",
"last_check": "just now",
}
except Exception as e:
results[name] = {
"status": "unhealthy",
"successes": 0,
"total": 1,
"error": str(e),
"last_check": "just now",
}
return results


@@ -0,0 +1,432 @@
"""
pyserve init - Initialize a new pyserve project
"""
from pathlib import Path
import click
TEMPLATES = {
"basic": {
"description": "Basic configuration with static files and routing",
"filename": "config.yaml",
},
"orchestration": {
"description": "Process orchestration with multiple ASGI/WSGI apps",
"filename": "config.yaml",
},
"asgi": {
"description": "ASGI mount configuration for in-process apps",
"filename": "config.yaml",
},
"full": {
"description": "Full configuration with all features",
"filename": "config.yaml",
},
}
BASIC_TEMPLATE = """\
# PyServe Configuration
# Generated by: pyserve init
http:
static_dir: ./static
templates_dir: ./templates
server:
host: 0.0.0.0
port: 8080
backlog: 100
proxy_timeout: 30.0
ssl:
enabled: false
cert_file: ./ssl/cert.pem
key_file: ./ssl/key.pem
logging:
level: INFO
console_output: true
format:
type: standard
use_colors: true
show_module: true
timestamp_format: "%Y-%m-%d %H:%M:%S"
console:
level: INFO
format:
type: standard
use_colors: true
files:
- path: ./logs/pyserve.log
level: INFO
format:
type: standard
use_colors: false
extensions:
- type: routing
config:
regex_locations:
# Health check endpoint
"=/health":
return: "200 OK"
content_type: "text/plain"
# Static files
"^/static/":
root: "./static"
strip_prefix: "/static"
# Default fallback
"__default__":
spa_fallback: true
root: "./static"
index_file: "index.html"
"""
ORCHESTRATION_TEMPLATE = """\
# PyServe Process Orchestration Configuration
# Generated by: pyserve init --template orchestration
#
# This configuration runs multiple ASGI/WSGI apps as isolated processes
# with automatic health monitoring and restart.
server:
host: 0.0.0.0
port: 8080
backlog: 2048
proxy_timeout: 60.0
logging:
level: INFO
console_output: true
format:
type: standard
use_colors: true
files:
- path: ./logs/pyserve.log
level: DEBUG
format:
type: standard
use_colors: false
extensions:
# Process Orchestration - runs each app in its own process
- type: process_orchestration
config:
port_range: [9000, 9999]
health_check_enabled: true
proxy_timeout: 60.0
apps:
# Example: FastAPI application
- name: api
path: /api
app_path: myapp.api:app
module_path: "."
workers: 2
health_check_path: /health
health_check_interval: 10.0
health_check_timeout: 5.0
health_check_retries: 3
max_restart_count: 5
restart_delay: 1.0
strip_path: true
env:
APP_ENV: "production"
# Example: Flask application (WSGI)
# - name: admin
# path: /admin
# app_path: myapp.admin:app
# app_type: wsgi
# module_path: "."
# workers: 1
# health_check_path: /health
# strip_path: true
# Static files routing
- type: routing
config:
regex_locations:
"=/health":
return: "200 OK"
content_type: "text/plain"
"^/static/":
root: "./static"
strip_prefix: "/static"
"""
ASGI_TEMPLATE = """\
# PyServe ASGI Mount Configuration
# Generated by: pyserve init --template asgi
#
# This configuration mounts ASGI apps in-process (no separate worker processes).
# Lower overhead, but all apps share the same process and event loop.
server:
host: 0.0.0.0
port: 8080
backlog: 100
proxy_timeout: 30.0
logging:
level: INFO
console_output: true
format:
type: standard
use_colors: true
files:
- path: ./logs/pyserve.log
level: DEBUG
extensions:
- type: asgi_mount
config:
mounts:
# FastAPI app mounted at /api
- path: /api
app: myapp.api:app
# factory: false # Set to true if app is a factory function
# Starlette app mounted at /web
# - path: /web
# app: myapp.web:app
- type: routing
config:
regex_locations:
"=/health":
return: "200 OK"
content_type: "text/plain"
"^/static/":
root: "./static"
strip_prefix: "/static"
"__default__":
spa_fallback: true
root: "./static"
index_file: "index.html"
"""
FULL_TEMPLATE = """\
# PyServe Full Configuration
# Generated by: pyserve init --template full
#
# Comprehensive configuration showcasing all PyServe features.
http:
static_dir: ./static
templates_dir: ./templates
server:
host: 0.0.0.0
port: 8080
backlog: 2048
default_root: false
proxy_timeout: 60.0
redirect_instructions:
"/old-path": "/new-path"
ssl:
enabled: false
cert_file: ./ssl/cert.pem
key_file: ./ssl/key.pem
logging:
level: INFO
console_output: true
format:
type: standard
use_colors: true
show_module: true
timestamp_format: "%Y-%m-%d %H:%M:%S"
console:
level: DEBUG
format:
type: standard
use_colors: true
files:
# Main log file
- path: ./logs/pyserve.log
level: DEBUG
format:
type: standard
use_colors: false
# JSON logs for log aggregation
- path: ./logs/pyserve.json
level: INFO
format:
type: json
# Access logs
- path: ./logs/access.log
level: INFO
loggers: ["pyserve.access"]
max_bytes: 10485760 # 10MB
backup_count: 10
extensions:
# Process Orchestration for background services
- type: process_orchestration
config:
port_range: [9000, 9999]
health_check_enabled: true
proxy_timeout: 60.0
apps:
- name: api
path: /api
app_path: myapp.api:app
module_path: "."
workers: 2
health_check_path: /health
strip_path: true
env:
APP_ENV: "production"
# Advanced routing with regex
- type: routing
config:
regex_locations:
# API versioning
"~^/api/v(?P<version>\\\\d+)/":
proxy_pass: "http://localhost:9001"
headers:
- "API-Version: {version}"
- "X-Forwarded-For: $remote_addr"
# Static files with caching
"~*\\\\.(js|css|png|jpg|gif|ico|svg|woff2?)$":
root: "./static"
cache_control: "public, max-age=31536000"
headers:
- "Access-Control-Allow-Origin: *"
# Health check
"=/health":
return: "200 OK"
content_type: "text/plain"
# Static files
"^/static/":
root: "./static"
strip_prefix: "/static"
# SPA fallback
"__default__":
spa_fallback: true
root: "./static"
index_file: "index.html"
"""
def get_template_content(template: str) -> str:
templates = {
"basic": BASIC_TEMPLATE,
"orchestration": ORCHESTRATION_TEMPLATE,
"asgi": ASGI_TEMPLATE,
"full": FULL_TEMPLATE,
}
return templates.get(template, BASIC_TEMPLATE)
@click.command("init")
@click.option(
"-t",
"--template",
"template",
type=click.Choice(list(TEMPLATES.keys())),
default="basic",
help="Configuration template to use",
)
@click.option(
"-o",
"--output",
"output_file",
default="config.yaml",
help="Output file path (default: config.yaml)",
)
@click.option(
"-f",
"--force",
is_flag=True,
help="Overwrite existing configuration",
)
@click.option(
"--list-templates",
is_flag=True,
help="List available templates",
)
@click.pass_context
def init_cmd(
ctx: click.Context,
template: str,
output_file: str,
force: bool,
list_templates: bool,
) -> None:
"""
Initialize a new pyserve project.
Creates a configuration file with sensible defaults and directory structure.
\b
Examples:
pyserve init # Basic configuration
pyserve init -t orchestration # Process orchestration setup
pyserve init -t asgi # ASGI mount setup
pyserve init -t full # All features
pyserve init -o production.yaml # Custom output file
"""
from ..output import console, print_info, print_success, print_warning
if list_templates:
console.print("\n[bold]Available Templates:[/bold]\n")
for name, info in TEMPLATES.items():
console.print(f" [cyan]{name:15}[/cyan] - {info['description']}")
console.print()
return
output_path = Path(output_file)
if output_path.exists() and not force:
print_warning(f"Configuration file '{output_file}' already exists.")
if not click.confirm("Do you want to overwrite it?"):
raise click.Abort()
dirs_to_create = ["static", "templates", "logs"]
if template == "orchestration":
dirs_to_create.append("apps")
for dir_name in dirs_to_create:
dir_path = Path(dir_name)
if not dir_path.exists():
dir_path.mkdir(parents=True)
print_info(f"Created directory: {dir_name}/")
state_dir = Path(".pyserve")
if not state_dir.exists():
state_dir.mkdir()
print_info("Created directory: .pyserve/")
content = get_template_content(template)
output_path.write_text(content)
print_success(f"Created configuration file: {output_file}")
print_info(f"Template: {template}")
gitignore_path = Path(".pyserve/.gitignore")
if not gitignore_path.exists():
gitignore_path.write_text("*\n!.gitignore\n")
console.print()
console.print("[bold]Next steps:[/bold]")
console.print(f" 1. Edit [cyan]{output_file}[/cyan] to configure your services")
console.print(" 2. Run [cyan]pyserve config validate[/cyan] to check configuration")
console.print(" 3. Run [cyan]pyserve up[/cyan] to start services")
console.print()
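The orchestration template assigns each app a port from `port_range: [9000, 9999]`. The allocation strategy itself is not shown in this diff; a minimal stdlib sketch of what a port-range allocator might do:

```python
import socket

def find_free_port(low: int = 9000, high: int = 9999) -> int:
    """Return the first TCP port in [low, high] that can be bound locally.

    A sketch only: pyserve's actual port_range allocation logic is not
    part of this diff and may differ (e.g., random selection, reuse tracking).
    """
    for port in range(low, high + 1):
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            try:
                s.bind(("127.0.0.1", port))  # bind succeeds only if the port is free
                return port
            except OSError:
                continue
    raise RuntimeError(f"no free port in [{low}, {high}]")

port = find_free_port()
```

Binding and immediately closing leaves a small race window before the app claims the port, which is why orchestrators usually also retry on startup failure.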


@@ -0,0 +1,280 @@
"""
pyserve logs - View service logs
"""
import asyncio
import time
from pathlib import Path
from typing import Any, Optional
import click
@click.command("logs")
@click.argument("services", nargs=-1)
@click.option(
"-f",
"--follow",
is_flag=True,
help="Follow log output",
)
@click.option(
"--tail",
"tail",
default=100,
type=int,
help="Number of lines to show from the end",
)
@click.option(
"--since",
"since",
default=None,
help="Show logs since timestamp (e.g., '10m', '1h', '2024-01-01')",
)
@click.option(
"--until",
"until_time",
default=None,
help="Show logs until timestamp",
)
@click.option(
"-t",
"--timestamps",
is_flag=True,
help="Show timestamps",
)
@click.option(
"--no-color",
is_flag=True,
help="Disable colored output",
)
@click.option(
"--filter",
"filter_pattern",
default=None,
help="Filter logs by pattern",
)
@click.pass_obj
def logs_cmd(
ctx: Any,
services: tuple[str, ...],
follow: bool,
tail: int,
since: Optional[str],
until_time: Optional[str],
timestamps: bool,
no_color: bool,
filter_pattern: Optional[str],
) -> None:
"""
View service logs.
If no services are specified, shows logs from all services.
\b
Examples:
pyserve logs # All logs
pyserve logs api # Logs from api service
pyserve logs api admin # Logs from multiple services
pyserve logs -f # Follow logs
pyserve logs --tail 50 # Last 50 lines
pyserve logs --since "10m" # Logs from last 10 minutes
"""
from ..output import print_info
from ..state import StateManager
state_manager = StateManager(Path(".pyserve"), ctx.project)
if services:
log_files = [(name, state_manager.get_service_log_file(name)) for name in services]
else:
all_services = state_manager.get_all_services()
if not all_services:
main_log = Path("logs/pyserve.log")
if main_log.exists():
log_files = [("pyserve", main_log)]
else:
print_info("No logs available. Start services with 'pyserve up'")
return
else:
log_files = [(name, state_manager.get_service_log_file(name)) for name in all_services]
existing_logs = [(name, path) for name, path in log_files if path.exists()]
if not existing_logs:
print_info("No log files found")
return
since_time = _parse_time(since) if since else None
until_timestamp = _parse_time(until_time) if until_time else None
colors = ["cyan", "green", "yellow", "blue", "magenta"]
service_colors = {name: colors[i % len(colors)] for i, (name, _) in enumerate(existing_logs)}
if follow:
asyncio.run(
_follow_logs(
existing_logs,
service_colors,
timestamps,
no_color,
filter_pattern,
)
)
else:
_read_logs(
existing_logs,
service_colors,
tail,
since_time,
until_timestamp,
timestamps,
no_color,
filter_pattern,
)
def _parse_time(time_str: str) -> Optional[float]:
import re
from datetime import datetime
# Relative time (e.g., "10m", "1h", "2d"), optionally with an "ago" suffix
match = re.match(r"^(\d+)([smhd])(?:\s+ago)?$", time_str)
if match:
value = int(match.group(1))
unit = match.group(2)
units = {"s": 1, "m": 60, "h": 3600, "d": 86400}
return time.time() - (value * units[unit])
# ISO format
try:
dt = datetime.fromisoformat(time_str)
return dt.timestamp()
except ValueError:
pass
return None
def _read_logs(
log_files: list[tuple[str, Path]],
service_colors: dict[str, str],
tail: int,
since_time: Optional[float],
until_time: Optional[float],
timestamps: bool,
no_color: bool,
filter_pattern: Optional[str],
) -> None:
import re
from ..output import console
all_lines = []
for service_name, log_path in log_files:
try:
with open(log_path) as f:
lines = f.readlines()
# Take last N lines
lines = lines[-tail:] if tail else lines
for line in lines:
line = line.rstrip()
if not line:
continue
if filter_pattern and filter_pattern not in line:
continue
line_time = None
timestamp_match = re.match(r"^(\d{4}-\d{2}-\d{2}[T ]\d{2}:\d{2}:\d{2})", line)
if timestamp_match:
try:
from datetime import datetime
line_time = datetime.fromisoformat(timestamp_match.group(1).replace(" ", "T")).timestamp()
except ValueError:
pass
if since_time and line_time and line_time < since_time:
continue
if until_time and line_time and line_time > until_time:
continue
all_lines.append((line_time or 0, service_name, line))
except Exception as e:
console.print(f"[red]Error reading {log_path}: {e}[/red]")
all_lines.sort(key=lambda x: x[0])
for _, service_name, line in all_lines:
if len(log_files) > 1:
# Multiple services - prefix with service name
if no_color:
console.print(f"{service_name} | {line}")
else:
color = service_colors.get(service_name, "white")
console.print(f"[{color}]{service_name}[/{color}] | {line}")
else:
console.print(line)
async def _follow_logs(
log_files: list[tuple[str, Path]],
service_colors: dict[str, str],
timestamps: bool,
no_color: bool,
filter_pattern: Optional[str],
) -> None:
from ..output import console
positions = {}
for service_name, log_path in log_files:
if log_path.exists():
positions[service_name] = log_path.stat().st_size
else:
positions[service_name] = 0
console.print("[dim]Following logs... Press Ctrl+C to stop[/dim]\n")
try:
while True:
for service_name, log_path in log_files:
if not log_path.exists():
continue
current_size = log_path.stat().st_size
if current_size > positions[service_name]:
with open(log_path) as f:
f.seek(positions[service_name])
new_content = f.read()
positions[service_name] = f.tell()
for line in new_content.splitlines():
if filter_pattern and filter_pattern not in line:
continue
if len(log_files) > 1:
if no_color:
console.print(f"{service_name} | {line}")
else:
color = service_colors.get(service_name, "white")
console.print(f"[{color}]{service_name}[/{color}] | {line}")
else:
console.print(line)
await asyncio.sleep(0.5)
except KeyboardInterrupt:
console.print("\n[dim]Stopped following logs[/dim]")
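`--since`/`--until` values are resolved by `_parse_time` into epoch timestamps: relative offsets like "10m" subtract from the current time, and ISO strings go through `datetime.fromisoformat`. A standalone re-implementation of that parsing (with an injectable `now` for testability; the real function always uses the wall clock):

```python
import re
import time
from datetime import datetime
from typing import Optional

def parse_time(time_str: str, now: Optional[float] = None) -> Optional[float]:
    """Resolve '10m' / '2h ago' / ISO timestamps to an epoch float, else None."""
    now = time.time() if now is None else now
    match = re.match(r"^(\d+)([smhd])(?:\s+ago)?$", time_str)
    if match:
        units = {"s": 1, "m": 60, "h": 3600, "d": 86400}
        return now - int(match.group(1)) * units[match.group(2)]
    try:
        return datetime.fromisoformat(time_str).timestamp()
    except ValueError:
        return None  # unrecognized format: the command treats this as "no filter"
```

Returning `None` for garbage input means a mistyped `--since` silently disables the filter rather than erroring, which matches the command's current behavior.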


@@ -0,0 +1,88 @@
"""
pyserve scale - Scale services
"""
import asyncio
from pathlib import Path
from typing import Any
import click
@click.command("scale")
@click.argument("scales", nargs=-1, required=True)
@click.option(
"--timeout",
"timeout",
default=60,
type=int,
help="Timeout in seconds for scaling operation",
)
@click.option(
"--no-wait",
is_flag=True,
help="Don't wait for services to be ready",
)
@click.pass_obj
def scale_cmd(ctx: Any, scales: tuple[str, ...], timeout: int, no_wait: bool) -> None:
"""
Scale services to specified number of workers.
Use SERVICE=NUM format to specify scaling.
\b
Examples:
pyserve scale api=4 # Scale api to 4 workers
pyserve scale api=4 admin=2 # Scale multiple services
"""
from ...config import Config
from .._runner import ServiceRunner
from ..output import console, print_error, print_info, print_success
from ..state import StateManager
scale_map = {}
for scale in scales:
try:
service, num = scale.split("=")
scale_map[service] = int(num)
except ValueError:
print_error(f"Invalid scale format: {scale}. Use SERVICE=NUM")
raise click.Abort()
config_path = Path(ctx.config_file)
if not config_path.exists():
print_error(f"Configuration file not found: {config_path}")
raise click.Abort()
config = Config.from_yaml(str(config_path))
state_manager = StateManager(Path(".pyserve"), ctx.project)
all_services = state_manager.get_all_services()
for service in scale_map:
if service not in all_services:
print_error(f"Service '{service}' not found")
raise click.Abort()
runner = ServiceRunner(config, state_manager)
console.print("[bold]Scaling services...[/bold]")
async def do_scale() -> None:
for service, workers in scale_map.items():
current = all_services[service].workers or 1
print_info(f"Scaling {service}: {current} -> {workers} workers")
try:
success = await runner.scale_service(service, workers, timeout=timeout, wait=not no_wait)
if success:
print_success(f"Scaled {service} to {workers} workers")
else:
print_error(f"Failed to scale {service}")
except Exception as e:
print_error(f"Error scaling {service}: {e}")
try:
asyncio.run(do_scale())
except Exception as e:
print_error(f"Scaling failed: {e}")
raise click.Abort()
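The SERVICE=NUM parsing above relies on tuple unpacking of `split("=")` raising ValueError for malformed input. A slightly stricter standalone sketch (hypothetical helper; `str.partition` also rejects empty names and non-numeric counts, which the split-based version lets through as a crash on `int(num)`):

```python
def parse_scales(scales: tuple[str, ...]) -> dict[str, int]:
    """Parse SERVICE=NUM pairs into {service: workers}; raise ValueError on bad input."""
    scale_map: dict[str, int] = {}
    for scale in scales:
        service, sep, num = scale.partition("=")
        if not sep or not service or not num.isdigit():
            raise ValueError(f"Invalid scale format: {scale}. Use SERVICE=NUM")
        scale_map[service] = int(num)
    return scale_map
```

Later pairs overwrite earlier ones for the same service, so `api=2 api=4` resolves to 4 workers.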


@@ -0,0 +1,190 @@
"""
pyserve start/stop/restart - Service management commands
"""
import asyncio
from pathlib import Path
from typing import Any, Dict
import click
@click.command("start")
@click.argument("services", nargs=-1, required=True)
@click.option(
"--timeout",
"timeout",
default=60,
type=int,
help="Timeout in seconds for service startup",
)
@click.pass_obj
def start_cmd(ctx: Any, services: tuple[str, ...], timeout: int) -> None:
"""
Start one or more services.
\b
Examples:
pyserve start api # Start api service
pyserve start api admin # Start multiple services
"""
from ...config import Config
from .._runner import ServiceRunner
from ..output import console, print_error, print_success
from ..state import StateManager
config_path = Path(ctx.config_file)
if not config_path.exists():
print_error(f"Configuration file not found: {config_path}")
raise click.Abort()
config = Config.from_yaml(str(config_path))
state_manager = StateManager(Path(".pyserve"), ctx.project)
runner = ServiceRunner(config, state_manager)
console.print(f"[bold]Starting services: {', '.join(services)}[/bold]")
async def do_start() -> Dict[str, bool]:
results: Dict[str, bool] = {}
for service in services:
try:
success = await runner.start_service(service, timeout=timeout)
results[service] = success
if success:
print_success(f"Started {service}")
else:
print_error(f"Failed to start {service}")
except Exception as e:
print_error(f"Error starting {service}: {e}")
results[service] = False
return results
try:
results = asyncio.run(do_start())
if not all(results.values()):
raise click.Abort()
except Exception as e:
print_error(f"Error: {e}")
raise click.Abort()
@click.command("stop")
@click.argument("services", nargs=-1, required=True)
@click.option(
"--timeout",
"timeout",
default=30,
type=int,
help="Timeout in seconds for graceful shutdown",
)
@click.option(
"-f",
"--force",
is_flag=True,
help="Force stop (SIGKILL)",
)
@click.pass_obj
def stop_cmd(ctx: Any, services: tuple[str, ...], timeout: int, force: bool) -> None:
"""
Stop one or more services.
\b
Examples:
pyserve stop api # Stop api service
pyserve stop api admin # Stop multiple services
pyserve stop api --force # Force stop
"""
from ...config import Config
from .._runner import ServiceRunner
from ..output import console, print_error, print_success
from ..state import StateManager
config_path = Path(ctx.config_file)
config = Config.from_yaml(str(config_path)) if config_path.exists() else Config()
state_manager = StateManager(Path(".pyserve"), ctx.project)
runner = ServiceRunner(config, state_manager)
console.print(f"[bold]Stopping services: {', '.join(services)}[/bold]")
async def do_stop() -> Dict[str, bool]:
results: Dict[str, bool] = {}
for service in services:
try:
success = await runner.stop_service(service, timeout=timeout, force=force)
results[service] = success
if success:
print_success(f"Stopped {service}")
else:
print_error(f"Failed to stop {service}")
except Exception as e:
print_error(f"Error stopping {service}: {e}")
results[service] = False
return results
try:
results = asyncio.run(do_stop())
if not all(results.values()):
raise click.Abort()
except Exception as e:
print_error(f"Error: {e}")
raise click.Abort()
@click.command("restart")
@click.argument("services", nargs=-1, required=True)
@click.option(
"--timeout",
"timeout",
default=60,
type=int,
help="Timeout in seconds for restart",
)
@click.pass_obj
def restart_cmd(ctx: Any, services: tuple[str, ...], timeout: int) -> None:
"""
Restart one or more services.
\b
Examples:
pyserve restart api # Restart api service
pyserve restart api admin # Restart multiple services
"""
from ...config import Config
from .._runner import ServiceRunner
from ..output import console, print_error, print_success
from ..state import StateManager
config_path = Path(ctx.config_file)
if not config_path.exists():
print_error(f"Configuration file not found: {config_path}")
raise click.Abort()
config = Config.from_yaml(str(config_path))
state_manager = StateManager(Path(".pyserve"), ctx.project)
runner = ServiceRunner(config, state_manager)
console.print(f"[bold]Restarting services: {', '.join(services)}[/bold]")
async def do_restart() -> Dict[str, bool]:
results = {}
for service in services:
try:
success = await runner.restart_service(service, timeout=timeout)
results[service] = success
if success:
print_success(f"Restarted {service}")
else:
print_error(f"Failed to restart {service}")
except Exception as e:
print_error(f"Error restarting {service}: {e}")
results[service] = False
return results
try:
results = asyncio.run(do_restart())
if not all(results.values()):
raise click.Abort()
except Exception as e:
print_error(f"Error: {e}")
raise click.Abort()


@@ -0,0 +1,147 @@
"""
pyserve ps / status - Show service status
"""
import json
from pathlib import Path
from typing import Any, Optional
import click
@click.command("ps")
@click.argument("services", nargs=-1)
@click.option(
"-a",
"--all",
"show_all",
is_flag=True,
help="Show all services (including stopped)",
)
@click.option(
"-q",
"--quiet",
is_flag=True,
help="Only show service names",
)
@click.option(
"--format",
"output_format",
type=click.Choice(["table", "json", "yaml"]),
default="table",
help="Output format",
)
@click.option(
"--filter",
"filter_status",
default=None,
help="Filter by status (running, stopped, failed)",
)
@click.pass_obj
def ps_cmd(
ctx: Any,
services: tuple[str, ...],
show_all: bool,
quiet: bool,
output_format: str,
filter_status: Optional[str],
) -> None:
"""
Show status of services.
\b
Examples:
pyserve ps # Show running services
pyserve ps -a # Show all services
pyserve ps api admin # Show specific services
pyserve ps --format json # JSON output
pyserve ps --filter running # Filter by status
"""
from ..output import (
console,
create_services_table,
format_health,
format_status,
format_uptime,
print_info,
)
from ..state import StateManager
state_manager = StateManager(Path(".pyserve"), ctx.project)
all_services = state_manager.get_all_services()
# Check if daemon is running
daemon_running = state_manager.is_daemon_running()
# Filter services
if services:
all_services = {k: v for k, v in all_services.items() if k in services}
if filter_status:
all_services = {k: v for k, v in all_services.items() if v.state.lower() == filter_status.lower()}
if not show_all:
# By default, show only running/starting/failed services
all_services = {k: v for k, v in all_services.items() if v.state.lower() in ("running", "starting", "stopping", "failed", "restarting")}
if not all_services:
if daemon_running:
print_info("No services found. Daemon is running but no services are configured.")
else:
print_info("No services running. Use 'pyserve up' to start services.")
return
if quiet:
for name in all_services:
click.echo(name)
return
if output_format == "json":
data = {name: svc.to_dict() for name, svc in all_services.items()}
console.print(json.dumps(data, indent=2))
return
if output_format == "yaml":
import yaml
data = {name: svc.to_dict() for name, svc in all_services.items()}
console.print(yaml.dump(data, default_flow_style=False))
return
table = create_services_table()
for name, service in sorted(all_services.items()):
ports = f"{service.port}" if service.port else "-"
uptime = format_uptime(service.uptime) if service.state == "running" else "-"
health = format_health(service.health.status if service.state == "running" else "-")
pid = str(service.pid) if service.pid else "-"
workers = f"{service.workers}" if service.workers else "-"
table.add_row(
name,
format_status(service.state),
ports,
uptime,
health,
pid,
workers,
)
console.print()
console.print(table)
console.print()
total = len(all_services)
running = sum(1 for s in all_services.values() if s.state == "running")
failed = sum(1 for s in all_services.values() if s.state == "failed")
summary_parts = [f"[bold]{total}[/bold] service(s)"]
if running:
summary_parts.append(f"[green]{running} running[/green]")
if failed:
summary_parts.append(f"[red]{failed} failed[/red]")
if total - running - failed > 0:
summary_parts.append(f"[dim]{total - running - failed} stopped[/dim]")
console.print(" | ".join(summary_parts))
console.print()
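The summary line at the end of `ps_cmd` counts running and failed services and infers "stopped" as the remainder. A plain-text sketch of that logic with the Rich markup omitted:

```python
def summarize(states: list[str]) -> str:
    """Build the ps summary line from a list of service states."""
    total = len(states)
    running = sum(1 for s in states if s == "running")
    failed = sum(1 for s in states if s == "failed")
    parts = [f"{total} service(s)"]
    if running:
        parts.append(f"{running} running")
    if failed:
        parts.append(f"{failed} failed")
    if total - running - failed > 0:
        parts.append(f"{total - running - failed} stopped")
    return " | ".join(parts)
```

Transitional states (starting, stopping, restarting) fall into the "stopped" bucket under this arithmetic, which is worth keeping in mind when reading the summary.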

pyserve/ctl/commands/top.py

@@ -0,0 +1,183 @@
"""
pyserve top - Live monitoring dashboard
"""
import asyncio
import time
from pathlib import Path
from typing import Any, Optional
import click
@click.command("top")
@click.argument("services", nargs=-1)
@click.option(
"--refresh",
"refresh_interval",
default=2,
type=float,
help="Refresh interval in seconds",
)
@click.option(
"--no-color",
is_flag=True,
help="Disable colored output",
)
@click.pass_obj
def top_cmd(ctx: Any, services: tuple[str, ...], refresh_interval: float, no_color: bool) -> None:
"""
Live monitoring dashboard for services.
Shows real-time CPU, memory usage, and request metrics.
\b
Examples:
pyserve top # Monitor all services
pyserve top api admin # Monitor specific services
pyserve top --refresh 5 # Slower refresh rate
"""
from ..output import console, print_info
from ..state import StateManager
state_manager = StateManager(Path(".pyserve"), ctx.project)
if not state_manager.is_daemon_running():
print_info("No services running. Start with 'pyserve up -d'")
return
try:
asyncio.run(
_run_dashboard(
state_manager,
list(services) if services else None,
refresh_interval,
no_color,
)
)
except KeyboardInterrupt:
console.print("\n")
async def _run_dashboard(
state_manager: Any,
filter_services: Optional[list[str]],
refresh_interval: float,
no_color: bool,
) -> None:
from rich.layout import Layout
from rich.live import Live
from rich.panel import Panel
from rich.table import Table
from rich.text import Text
from ..output import console, format_bytes, format_uptime
try:
import psutil
except ImportError:
console.print("[yellow]psutil not installed. Install with: pip install psutil[/yellow]")
return
start_time = time.time()
def make_dashboard() -> Any:
all_services = state_manager.get_all_services()
if filter_services:
all_services = {k: v for k, v in all_services.items() if k in filter_services}
table = Table(
title=None,
show_header=True,
header_style="bold",
border_style="dim",
expand=True,
)
table.add_column("SERVICE", style="cyan", no_wrap=True)
table.add_column("STATUS", no_wrap=True)
table.add_column("CPU%", justify="right")
table.add_column("MEM", justify="right")
table.add_column("PID", style="dim")
table.add_column("UPTIME", style="dim")
table.add_column("HEALTH", no_wrap=True)
total_cpu = 0.0
total_mem = 0
running_count = 0
total_count = len(all_services)
for name, service in sorted(all_services.items()):
status_style = {
"running": "[green]● RUN[/green]",
"stopped": "[dim]○ STOP[/dim]",
"failed": "[red]✗ FAIL[/red]",
"starting": "[yellow]◐ START[/yellow]",
"stopping": "[yellow]◑ STOP[/yellow]",
}.get(service.state, service.state)
cpu_str = "-"
mem_str = "-"
if service.pid and service.state == "running":
try:
proc = psutil.Process(service.pid)
cpu = proc.cpu_percent(interval=0.1)
mem = proc.memory_info().rss
cpu_str = f"{cpu:.1f}%"
mem_str = format_bytes(mem)
total_cpu += cpu
total_mem += mem
running_count += 1
except (psutil.NoSuchProcess, psutil.AccessDenied):
pass
health_style = {
"healthy": "[green]✓[/green]",
"unhealthy": "[red]✗[/red]",
"degraded": "[yellow]⚠[/yellow]",
"unknown": "[dim]?[/dim]",
}.get(service.health.status, "[dim]-[/dim]")
uptime = format_uptime(service.uptime) if service.state == "running" else "-"
pid = str(service.pid) if service.pid else "-"
table.add_row(
name,
status_style,
cpu_str,
mem_str,
pid,
uptime,
health_style,
)
elapsed = format_uptime(time.time() - start_time)
summary = Text()
summary.append(f"Running: {running_count}/{total_count}", style="bold")
summary.append(" | ")
summary.append(f"CPU: {total_cpu:.1f}%", style="cyan")
summary.append(" | ")
summary.append(f"MEM: {format_bytes(total_mem)}", style="cyan")
summary.append(" | ")
summary.append(f"Session: {elapsed}", style="dim")
layout = Layout()
layout.split_column(
Layout(
Panel(
Text("PyServe Dashboard", style="bold cyan", justify="center"),
border_style="cyan",
),
size=3,
),
Layout(table),
Layout(Panel(summary, border_style="dim"), size=3),
)
return layout
with Live(make_dashboard(), refresh_per_second=1 / refresh_interval, console=console) as live:
while True:
await asyncio.sleep(refresh_interval)
live.update(make_dashboard())
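The dashboard's MEM column goes through `format_bytes`, imported from `..output` but not shown in this diff. A hypothetical equivalent (the real helper's thresholds and formatting may differ):

```python
def format_bytes(n: float) -> str:
    """Humanize a byte count: 512 -> '512B', 2048 -> '2.0KB', etc."""
    for unit in ("B", "KB", "MB", "GB", "TB"):
        if n < 1024 or unit == "TB":
            return f"{int(n)}B" if unit == "B" else f"{n:.1f}{unit}"
        n /= 1024  # step up to the next unit
    return f"{n:.1f}TB"  # unreachable; loop always returns
```

Since the dashboard sums `memory_info().rss` across processes before formatting, the total can legitimately exceed physical RAM when workers share pages.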

pyserve/ctl/commands/up.py

@@ -0,0 +1,175 @@
"""
pyserve up - Start all services
"""
import asyncio
import signal
import sys
import time
from pathlib import Path
from typing import Any
import click
@click.command("up")
@click.argument("services", nargs=-1)
@click.option(
"-d",
"--detach",
is_flag=True,
help="Run in background (detached mode)",
)
@click.option(
"--build",
is_flag=True,
help="Build/reload applications before starting",
)
@click.option(
"--force-recreate",
is_flag=True,
help="Recreate services even if configuration hasn't changed",
)
@click.option(
"--scale",
"scales",
multiple=True,
help="Scale SERVICE to NUM workers (e.g., --scale api=4)",
)
@click.option(
"--timeout",
"timeout",
default=60,
type=int,
help="Timeout in seconds for service startup",
)
@click.option(
"--wait",
is_flag=True,
help="Wait for services to be healthy before returning",
)
@click.option(
"--remove-orphans",
is_flag=True,
help="Remove services not defined in configuration",
)
@click.pass_obj
def up_cmd(
ctx: Any,
services: tuple[str, ...],
detach: bool,
build: bool,
force_recreate: bool,
scales: tuple[str, ...],
timeout: int,
wait: bool,
remove_orphans: bool,
) -> None:
"""
Start services defined in configuration.
If no services are specified, all services will be started.
\b
Examples:
pyserve up # Start all services
pyserve up -d # Start in background
pyserve up api admin # Start specific services
pyserve up --scale api=4 # Scale api to 4 workers
pyserve up --wait # Wait for healthy status
"""
from .._runner import ServiceRunner
from ..output import console, print_error, print_info, print_success, print_warning
from ..state import StateManager
config_path = Path(ctx.config_file)
if not config_path.exists():
print_error(f"Configuration file not found: {config_path}")
print_info("Run 'pyserve init' to create a configuration file")
raise click.Abort()
scale_map = {}
for scale in scales:
try:
service, num = scale.split("=")
scale_map[service] = int(num)
except ValueError:
print_error(f"Invalid scale format: {scale}. Use SERVICE=NUM")
raise click.Abort()
try:
from ...config import Config
config = Config.from_yaml(str(config_path))
except Exception as e:
print_error(f"Failed to load configuration: {e}")
raise click.Abort()
state_manager = StateManager(Path(".pyserve"), ctx.project)
if state_manager.is_daemon_running():
daemon_pid = state_manager.get_daemon_pid()
print_warning(f"PyServe daemon is already running (PID: {daemon_pid})")
if not click.confirm("Do you want to restart it?"):
raise click.Abort()
try:
import os
assert daemon_pid is not None  # guaranteed: is_daemon_running() returned True
os.kill(daemon_pid, signal.SIGTERM)
time.sleep(2)
except ProcessLookupError:
pass
state_manager.clear_daemon_pid()
runner = ServiceRunner(config, state_manager)
service_list = list(services) if services else None
if detach:
console.print("[bold]Starting PyServe in background...[/bold]")
try:
pid = runner.start_daemon(
service_list,
scale_map=scale_map,
force_recreate=force_recreate,
)
state_manager.set_daemon_pid(pid)
print_success(f"PyServe started in background (PID: {pid})")
print_info("Use 'pyserve ps' to see service status")
print_info("Use 'pyserve logs -f' to follow logs")
print_info("Use 'pyserve down' to stop")
except Exception as e:
print_error(f"Failed to start daemon: {e}")
raise click.Abort()
else:
console.print("[bold]Starting PyServe...[/bold]")
def signal_handler(signum: int, frame: Any) -> None:
console.print("\n[yellow]Received shutdown signal...[/yellow]")
runner.stop()
sys.exit(0)
signal.signal(signal.SIGINT, signal_handler)
signal.signal(signal.SIGTERM, signal_handler)
try:
asyncio.run(
runner.start(
service_list,
scale_map=scale_map,
force_recreate=force_recreate,
wait_healthy=wait,
timeout=timeout,
)
)
except KeyboardInterrupt:
console.print("\n[yellow]Shutting down...[/yellow]")
except Exception as e:
print_error(f"Failed to start services: {e}")
if ctx.debug:
raise
raise click.Abort()
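The `--scale SERVICE=NUM` parsing above can be sketched as a standalone helper (the `parse_scales` name is illustrative, not part of the codebase):

```python
def parse_scales(scales: tuple) -> dict:
    """Parse --scale options of the form SERVICE=NUM into {service: workers}."""
    scale_map = {}
    for scale in scales:
        service, sep, num = scale.partition("=")
        if not sep or not num.isdigit():
            raise ValueError(f"Invalid scale format: {scale}. Use SERVICE=NUM")
        scale_map[service] = int(num)
    return scale_map

print(parse_scales(("api=4", "admin=2")))  # {'api': 4, 'admin': 2}
```

`str.partition` rejects both a missing `=` and a non-numeric count in one pass; the `split`-based version above instead relies on the `ValueError` raised by tuple unpacking.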

pyserve/ctl/main.py (new file, 168 lines)

@ -0,0 +1,168 @@
"""
PyServeCTL - Main entry point
Usage:
pyservectl [OPTIONS] COMMAND [ARGS]...
"""
import sys
from pathlib import Path
from typing import TYPE_CHECKING, Optional
import click
from .. import __version__
from .commands import (
config_cmd,
down_cmd,
health_cmd,
init_cmd,
logs_cmd,
ps_cmd,
restart_cmd,
scale_cmd,
start_cmd,
stop_cmd,
top_cmd,
up_cmd,
)
if TYPE_CHECKING:
from ..config import Config
from .state import StateManager
DEFAULT_CONFIG = "config.yaml"
DEFAULT_STATE_DIR = ".pyserve"
class Context:
def __init__(self) -> None:
self.config_file: str = DEFAULT_CONFIG
self.state_dir: Path = Path(DEFAULT_STATE_DIR)
self.verbose: bool = False
self.debug: bool = False
self.project: Optional[str] = None
self._config: Optional["Config"] = None
self._state: Optional["StateManager"] = None
@property
def config(self) -> "Config":
if self._config is None:
from ..config import Config
if Path(self.config_file).exists():
self._config = Config.from_yaml(self.config_file)
else:
self._config = Config()
return self._config
@property
def state(self) -> "StateManager":
if self._state is None:
from .state import StateManager
self._state = StateManager(self.state_dir, self.project)
return self._state
pass_context = click.make_pass_decorator(Context, ensure=True)
@click.group(invoke_without_command=True)
@click.option(
"-c",
"--config",
"config_file",
default=DEFAULT_CONFIG,
envvar="PYSERVE_CONFIG",
help=f"Path to configuration file (default: {DEFAULT_CONFIG})",
type=click.Path(),
)
@click.option(
"-p",
"--project",
"project",
default=None,
envvar="PYSERVE_PROJECT",
help="Project name for isolation",
)
@click.option(
"-v",
"--verbose",
is_flag=True,
help="Enable verbose output",
)
@click.option(
"--debug",
is_flag=True,
help="Enable debug mode",
)
@click.version_option(version=__version__, prog_name="pyservectl")
@click.pass_context
def cli(ctx: click.Context, config_file: str, project: Optional[str], verbose: bool, debug: bool) -> None:
"""
PyServeCTL - Service management CLI for PyServe.
Docker-compose-like tool for managing PyServe services.
\b
Quick Start:
pyservectl init # Initialize a new project
pyservectl up # Start all services
pyservectl ps # Show service status
pyservectl logs -f # Follow logs
pyservectl down # Stop all services
\b
Examples:
pyservectl up -d # Start in background
pyservectl up -c prod.yaml # Use custom config
pyservectl logs api -f --tail 100 # Follow api logs
pyservectl restart api admin # Restart specific services
pyservectl scale api=4 # Scale api to 4 workers
"""
ctx.ensure_object(Context)
ctx.obj.config_file = config_file
ctx.obj.verbose = verbose
ctx.obj.debug = debug
ctx.obj.project = project
if ctx.invoked_subcommand is None:
click.echo(ctx.get_help())
cli.add_command(init_cmd, name="init")
cli.add_command(config_cmd, name="config")
cli.add_command(up_cmd, name="up")
cli.add_command(down_cmd, name="down")
cli.add_command(start_cmd, name="start")
cli.add_command(stop_cmd, name="stop")
cli.add_command(restart_cmd, name="restart")
cli.add_command(ps_cmd, name="ps")
cli.add_command(logs_cmd, name="logs")
cli.add_command(top_cmd, name="top")
cli.add_command(health_cmd, name="health")
cli.add_command(scale_cmd, name="scale")
# Alias 'status' -> 'ps'
cli.add_command(ps_cmd, name="status")
def main() -> None:
try:
cli(standalone_mode=False)
except click.ClickException as e:
e.show()
sys.exit(e.exit_code)
except KeyboardInterrupt:
click.echo("\nInterrupted by user")
sys.exit(130)
except Exception as e:
if "--debug" in sys.argv:
raise
click.secho(f"Error: {e}", fg="red", err=True)
sys.exit(1)
if __name__ == "__main__":
main()
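The `Context` class above defers loading `Config` and `StateManager` until first access. The caching pattern, reduced to a minimal sketch (the dict literal stands in for `Config.from_yaml`):

```python
class Context:
    def __init__(self) -> None:
        self._config = None

    @property
    def config(self):
        # Load lazily and cache: commands that never touch config pay nothing
        if self._config is None:
            self._config = {"loaded": True}  # placeholder for Config.from_yaml(...)
        return self._config

ctx = Context()
print(ctx.config is ctx.config)  # True: the same cached object is returned
```

This keeps `--help` and `init` fast even when the configured YAML file is large or absent.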


@ -0,0 +1,110 @@
"""
PyServe CLI Output utilities
Rich-based formatters and helpers for CLI output.
"""
from rich.console import Console
from rich.table import Table
from rich.theme import Theme
pyserve_theme = Theme(
{
"info": "cyan",
"warning": "yellow",
"error": "red bold",
"success": "green",
"service.running": "green",
"service.stopped": "dim",
"service.failed": "red",
"service.starting": "yellow",
}
)
console = Console(theme=pyserve_theme)
def print_error(message: str) -> None:
console.print(f"[error] {message}[/error]")
def print_warning(message: str) -> None:
console.print(f"[warning] {message}[/warning]")
def print_success(message: str) -> None:
console.print(f"[success] {message}[/success]")
def print_info(message: str) -> None:
console.print(f"[info] {message}[/info]")
def create_services_table() -> Table:
table = Table(
title=None,
show_header=True,
header_style="bold",
border_style="dim",
)
table.add_column("NAME", style="cyan", no_wrap=True)
table.add_column("STATUS", no_wrap=True)
table.add_column("PORTS", style="dim")
table.add_column("UPTIME", style="dim")
table.add_column("HEALTH", no_wrap=True)
table.add_column("PID", style="dim")
table.add_column("WORKERS", style="dim")
return table
def format_status(status: str) -> str:
status_styles = {
"running": "[service.running]● running[/service.running]",
"stopped": "[service.stopped]○ stopped[/service.stopped]",
"failed": "[service.failed]✗ failed[/service.failed]",
"starting": "[service.starting]◐ starting[/service.starting]",
"stopping": "[service.starting]◑ stopping[/service.starting]",
"restarting": "[service.starting]↻ restarting[/service.starting]",
"pending": "[service.stopped]○ pending[/service.stopped]",
}
return status_styles.get(status.lower(), status)
def format_health(health: str) -> str:
health_styles = {
"healthy": "[green]✓ healthy[/green]",
"unhealthy": "[red]✗ unhealthy[/red]",
"degraded": "[yellow]⚠ degraded[/yellow]",
"unknown": "[dim]? unknown[/dim]",
"-": "[dim]-[/dim]",
}
return health_styles.get(health.lower(), health)
def format_uptime(seconds: float) -> str:
if seconds <= 0:
return "-"
if seconds < 60:
return f"{int(seconds)}s"
elif seconds < 3600:
minutes = int(seconds / 60)
secs = int(seconds % 60)
return f"{minutes}m {secs}s"
elif seconds < 86400:
hours = int(seconds / 3600)
minutes = int((seconds % 3600) / 60)
return f"{hours}h {minutes}m"
else:
days = int(seconds / 86400)
hours = int((seconds % 86400) / 3600)
return f"{days}d {hours}h"
def format_bytes(num_bytes: int) -> str:
value = float(num_bytes)
for unit in ["B", "KB", "MB", "GB", "TB"]:
if abs(value) < 1024.0:
return f"{value:.1f}{unit}"
value /= 1024.0
return f"{value:.1f}PB"
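A quick check of the `format_bytes` semantics (the function is restated here verbatim so the example is self-contained):

```python
def format_bytes(num_bytes: int) -> str:
    # Same logic as in output.py above: divide by 1024 until under one unit
    value = float(num_bytes)
    for unit in ["B", "KB", "MB", "GB", "TB"]:
        if abs(value) < 1024.0:
            return f"{value:.1f}{unit}"
        value /= 1024.0
    return f"{value:.1f}PB"

print(format_bytes(500))          # 500.0B
print(format_bytes(1536))         # 1.5KB
print(format_bytes(3 * 1024**3))  # 3.0GB
```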


@ -0,0 +1,232 @@
"""
PyServe CLI State Management
Manages the state of running services.
"""
import json
import os
import time
from dataclasses import asdict, dataclass, field
from pathlib import Path
from typing import Any, Dict, Optional
@dataclass
class ServiceHealth:
status: str = "unknown" # healthy, unhealthy, degraded, unknown
last_check: Optional[float] = None
failures: int = 0
response_time_ms: Optional[float] = None
@dataclass
class ServiceState:
name: str
state: str = "stopped" # pending, starting, running, stopping, stopped, failed, restarting
pid: Optional[int] = None
port: int = 0
workers: int = 0
started_at: Optional[float] = None
restart_count: int = 0
health: ServiceHealth = field(default_factory=ServiceHealth)
config_hash: str = ""
@property
def uptime(self) -> float:
if self.started_at is None:
return 0.0
return time.time() - self.started_at
def to_dict(self) -> Dict[str, Any]:
return {
"name": self.name,
"state": self.state,
"pid": self.pid,
"port": self.port,
"workers": self.workers,
"started_at": self.started_at,
"restart_count": self.restart_count,
"health": asdict(self.health),
"config_hash": self.config_hash,
}
@classmethod
def from_dict(cls, data: Dict[str, Any]) -> "ServiceState":
health_data = data.pop("health", {})
health = ServiceHealth(**health_data) if health_data else ServiceHealth()
return cls(**data, health=health)
@dataclass
class ProjectState:
version: str = "1.0"
project: str = ""
config_file: str = ""
config_hash: str = ""
started_at: Optional[float] = None
daemon_pid: Optional[int] = None
services: Dict[str, ServiceState] = field(default_factory=dict)
def to_dict(self) -> Dict[str, Any]:
return {
"version": self.version,
"project": self.project,
"config_file": self.config_file,
"config_hash": self.config_hash,
"started_at": self.started_at,
"daemon_pid": self.daemon_pid,
"services": {name: svc.to_dict() for name, svc in self.services.items()},
}
@classmethod
def from_dict(cls, data: Dict[str, Any]) -> "ProjectState":
services_data = data.pop("services", {})
services = {name: ServiceState.from_dict(svc) for name, svc in services_data.items()}
return cls(**data, services=services)
class StateManager:
STATE_FILE = "state.json"
PID_FILE = "pyserve.pid"
SOCKET_FILE = "pyserve.sock"
LOGS_DIR = "logs"
def __init__(self, state_dir: Path, project: Optional[str] = None):
self.state_dir = Path(state_dir)
self.project = project or self._detect_project()
self._state: Optional[ProjectState] = None
def _detect_project(self) -> str:
return Path.cwd().name
@property
def state_file(self) -> Path:
return self.state_dir / self.STATE_FILE
@property
def pid_file(self) -> Path:
return self.state_dir / self.PID_FILE
@property
def socket_file(self) -> Path:
return self.state_dir / self.SOCKET_FILE
@property
def logs_dir(self) -> Path:
return self.state_dir / self.LOGS_DIR
def ensure_dirs(self) -> None:
self.state_dir.mkdir(parents=True, exist_ok=True)
self.logs_dir.mkdir(parents=True, exist_ok=True)
def load(self) -> ProjectState:
if self._state is not None:
return self._state
if self.state_file.exists():
try:
with open(self.state_file) as f:
data = json.load(f)
self._state = ProjectState.from_dict(data)
except (json.JSONDecodeError, KeyError):
self._state = ProjectState(project=self.project)
else:
self._state = ProjectState(project=self.project)
return self._state
def save(self) -> None:
if self._state is None:
return
self.ensure_dirs()
with open(self.state_file, "w") as f:
json.dump(self._state.to_dict(), f, indent=2)
def get_state(self) -> ProjectState:
return self.load()
def update_service(self, name: str, **kwargs: Any) -> ServiceState:
state = self.load()
if name not in state.services:
state.services[name] = ServiceState(name=name)
service = state.services[name]
for key, value in kwargs.items():
if hasattr(service, key):
setattr(service, key, value)
self.save()
return service
def remove_service(self, name: str) -> None:
state = self.load()
if name in state.services:
del state.services[name]
self.save()
def get_service(self, name: str) -> Optional[ServiceState]:
state = self.load()
return state.services.get(name)
def get_all_services(self) -> Dict[str, ServiceState]:
state = self.load()
return state.services.copy()
def clear(self) -> None:
self._state = ProjectState(project=self.project)
self.save()
def is_daemon_running(self) -> bool:
if not self.pid_file.exists():
return False
try:
pid = int(self.pid_file.read_text().strip())
# Check if process exists
os.kill(pid, 0)
return True
except (ValueError, ProcessLookupError):
return False
except PermissionError:
# The process exists but is owned by another user
return True
def get_daemon_pid(self) -> Optional[int]:
if not self.is_daemon_running():
return None
try:
return int(self.pid_file.read_text().strip())
except ValueError:
return None
def set_daemon_pid(self, pid: int) -> None:
self.ensure_dirs()
self.pid_file.write_text(str(pid))
state = self.load()
state.daemon_pid = pid
self.save()
def clear_daemon_pid(self) -> None:
if self.pid_file.exists():
self.pid_file.unlink()
state = self.load()
state.daemon_pid = None
self.save()
def get_service_log_file(self, service_name: str) -> Path:
self.ensure_dirs()
return self.logs_dir / f"{service_name}.log"
def compute_config_hash(self, config_file: str) -> str:
import hashlib
path = Path(config_file)
if not path.exists():
return ""
content = path.read_bytes()
return hashlib.sha256(content).hexdigest()[:16]
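The `is_daemon_running` check above relies on `os.kill(pid, 0)`, which delivers no signal but raises if the process is gone. As a standalone sketch (the `pid_alive` name is hypothetical), including the `PermissionError` case for processes owned by another user:

```python
import os

def pid_alive(pid: int) -> bool:
    """Return True if a process with this PID exists (POSIX)."""
    try:
        os.kill(pid, 0)  # signal 0: existence check only, nothing is delivered
        return True
    except ProcessLookupError:
        return False  # no such process
    except PermissionError:
        return True   # process exists but belongs to another user

print(pid_alive(os.getpid()))  # True: our own process certainly exists
```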


@ -1,7 +1,10 @@
import asyncio
from abc import ABC, abstractmethod
from typing import Dict, Any, List, Optional, Type
from typing import Any, Dict, List, Optional, Type
from starlette.requests import Request
from starlette.responses import Response
from .logging_utils import get_logger
logger = get_logger(__name__)
@ -36,6 +39,7 @@ class RoutingExtension(Extension):
default_proxy_timeout = config.get("default_proxy_timeout", 30.0)
self.router = create_router_from_config(regex_locations)
from .routing import RequestHandler
self.handler = RequestHandler(self.router, default_proxy_timeout=default_proxy_timeout)
async def process_request(self, request: Request) -> Optional[Response]:
@ -54,11 +58,9 @@ class SecurityExtension(Extension):
super().__init__(config)
self.allowed_ips = config.get("allowed_ips", [])
self.blocked_ips = config.get("blocked_ips", [])
self.security_headers = config.get("security_headers", {
"X-Content-Type-Options": "nosniff",
"X-Frame-Options": "DENY",
"X-XSS-Protection": "1; mode=block"
})
self.security_headers = config.get(
"security_headers", {"X-Content-Type-Options": "nosniff", "X-Frame-Options": "DENY", "X-XSS-Protection": "1; mode=block"}
)
async def process_request(self, request: Request) -> Optional[Response]:
client_ip = request.client.host if request.client else "unknown"
@ -66,11 +68,13 @@ class SecurityExtension(Extension):
if self.blocked_ips and client_ip in self.blocked_ips:
logger.warning(f"Blocked request from IP: {client_ip}")
from starlette.responses import PlainTextResponse
return PlainTextResponse("403 Forbidden", status_code=403)
if self.allowed_ips and client_ip not in self.allowed_ips:
logger.warning(f"Access denied for IP: {client_ip}")
from starlette.responses import PlainTextResponse
return PlainTextResponse("403 Forbidden", status_code=403)
return None
@ -108,36 +112,101 @@ class MonitoringExtension(Extension):
async def process_request(self, request: Request) -> Optional[Response]:
if self.enable_metrics:
self.request_count += 1
request.state.start_time = __import__('time').time()
request.state.start_time = __import__("time").time()
return None
async def process_response(self, request: Request, response: Response) -> Response:
if self.enable_metrics and hasattr(request.state, 'start_time'):
response_time = __import__('time').time() - request.state.start_time
if self.enable_metrics and hasattr(request.state, "start_time"):
response_time = __import__("time").time() - request.state.start_time
self.response_times.append(response_time)
if response.status_code >= 400:
self.error_count += 1
logger.info(f"Request: {request.method} {request.url.path} - "
f"Status: {response.status_code} - "
f"Time: {response_time:.3f}s")
logger.info(f"Request: {request.method} {request.url.path} - " f"Status: {response.status_code} - " f"Time: {response_time:.3f}s")
return response
def get_metrics(self) -> Dict[str, Any]:
avg_response_time = (sum(self.response_times) / len(self.response_times)
if self.response_times else 0)
avg_response_time = sum(self.response_times) / len(self.response_times) if self.response_times else 0
return {
"request_count": self.request_count,
"error_count": self.error_count,
"error_rate": self.error_count / max(self.request_count, 1),
"avg_response_time": avg_response_time,
"total_response_times": len(self.response_times)
"total_response_times": len(self.response_times),
}
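The aggregation in `get_metrics` reduces to a pure function over the counters, which makes the edge cases (no requests yet, empty response-time list) easy to verify in isolation — a sketch with an illustrative name:

```python
def compute_metrics(request_count: int, error_count: int, response_times: list) -> dict:
    # Empty-list guard and max(request_count, 1) avoid division by zero
    avg = sum(response_times) / len(response_times) if response_times else 0
    return {
        "request_count": request_count,
        "error_count": error_count,
        "error_rate": error_count / max(request_count, 1),
        "avg_response_time": avg,
    }

print(compute_metrics(10, 2, [0.1, 0.3])["error_rate"])  # 0.2
print(compute_metrics(0, 0, [])["error_rate"])           # 0.0, no ZeroDivisionError
```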
class ASGIExtension(Extension):
def __init__(self, config: Dict[str, Any]):
super().__init__(config)
from .asgi_mount import ASGIMountManager
self.mount_manager = ASGIMountManager()
self._load_mounts(config.get("mounts", []))
def _load_mounts(self, mounts: List[Dict[str, Any]]) -> None:
from .asgi_mount import create_django_app
for mount_config in mounts:
path = mount_config.get("path", "/")
if "django_settings" in mount_config:
app = create_django_app(
settings_module=mount_config["django_settings"],
module_path=mount_config.get("module_path"),
)
if app:
self.mount_manager.mount(
path=path,
app=app,
name=mount_config.get("name", f"django:{mount_config['django_settings']}"),
strip_path=mount_config.get("strip_path", True),
)
continue
self.mount_manager.mount(
path=path,
app_path=mount_config.get("app_path"),
app_type=mount_config.get("app_type", "asgi"),
module_path=mount_config.get("module_path"),
factory=mount_config.get("factory", False),
factory_args=mount_config.get("factory_args"),
name=mount_config.get("name", ""),
strip_path=mount_config.get("strip_path", True),
)
async def process_request(self, request: Request) -> Optional[Response]:
path = request.url.path
mount = self.mount_manager.get_mount(path)
if mount is not None:
# Store mount info in request state for middleware to use
request.state.asgi_mount = mount
# Returning None defers to the middleware, which fetches the mount via get_asgi_handler()
return None
return None
async def process_response(self, request: Request, response: Response) -> Response:
return response
def get_asgi_handler(self, request: Request) -> Optional[Any]:
path = request.url.path
return self.mount_manager.get_mount(path)
def get_metrics(self) -> Dict[str, Any]:
return {
"asgi_mounts": self.mount_manager.list_mounts(),
"asgi_mount_count": len(self.mount_manager.mounts),
}
def cleanup(self) -> None:
logger.info("Cleaning up ASGI mounts")
class ExtensionManager:
def __init__(self) -> None:
self.extensions: List[Extension] = []
@ -145,8 +214,18 @@ class ExtensionManager:
"routing": RoutingExtension,
"security": SecurityExtension,
"caching": CachingExtension,
"monitoring": MonitoringExtension
"monitoring": MonitoringExtension,
"asgi": ASGIExtension,
}
self._register_process_orchestration()
def _register_process_orchestration(self) -> None:
try:
from .process_extension import ProcessOrchestrationExtension
self.extension_registry["process_orchestration"] = ProcessOrchestrationExtension
except ImportError:
pass # Optional dependency
def register_extension_type(self, name: str, extension_class: Type[Extension]) -> None:
self.extension_registry[name] = extension_class
@ -165,6 +244,32 @@ class ExtensionManager:
except Exception as e:
logger.error(f"Error loading extension {extension_type}: {e}")
async def load_extension_async(self, extension_type: str, config: Dict[str, Any]) -> None:
"""Load extension with async setup support (for ProcessOrchestration)."""
if extension_type not in self.extension_registry:
logger.error(f"Unknown extension type: {extension_type}")
return
try:
extension_class = self.extension_registry[extension_type]
extension = extension_class(config)
setup_method = getattr(extension, "setup", None)
if setup_method is not None and asyncio.iscoroutinefunction(setup_method):
await setup_method(config)
else:
extension.initialize()
start_method = getattr(extension, "start", None)
if start_method is not None and asyncio.iscoroutinefunction(start_method):
await start_method()
# Insert at the beginning so process_orchestration is checked first
self.extensions.insert(0, extension)
logger.info(f"Loaded extension (async): {extension_type}")
except Exception as e:
logger.error(f"Error loading extension {extension_type}: {e}")
async def process_request(self, request: Request) -> Optional[Response]:
for extension in self.extensions:
if not extension.enabled:
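`load_extension_async` above probes for a coroutine `setup` with `asyncio.iscoroutinefunction` and falls back to a synchronous `initialize()`. The dispatch in isolation (class names illustrative):

```python
import asyncio

class AsyncExt:
    async def setup(self, config: dict) -> None:
        self.ready = True

class SyncExt:
    def initialize(self) -> None:
        self.ready = True

async def load(ext, config: dict) -> None:
    setup = getattr(ext, "setup", None)
    if setup is not None and asyncio.iscoroutinefunction(setup):
        await setup(config)  # async extensions get awaited setup
    else:
        ext.initialize()     # plain extensions keep the sync path

a, s = AsyncExt(), SyncExt()
asyncio.run(load(a, {}))
asyncio.run(load(s, {}))
print(a.ready, s.ready)  # True True
```

`getattr(..., None)` keeps the loader tolerant of extensions that define neither hook signature, matching how the manager treats `setup`/`start` as optional.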


@ -3,9 +3,10 @@ import logging.handlers
import sys
import time
from pathlib import Path
from typing import Dict, Any, List, cast, Callable
from typing import Any, Callable, Dict, List, cast
import structlog
from structlog.types import FilteringBoundLogger, EventDict
from structlog.types import EventDict, FilteringBoundLogger
from . import __version__
@ -21,15 +22,15 @@ class StructlogFilter(logging.Filter):
return True
for logger_name in self.logger_names:
if record.name == logger_name or record.name.startswith(logger_name + '.'):
if record.name == logger_name or record.name.startswith(logger_name + "."):
return True
return False
class UvicornStructlogFilter(logging.Filter):
def filter(self, record: logging.LogRecord) -> bool:
if hasattr(record, 'name') and 'uvicorn.access' in record.name:
if hasattr(record, 'getMessage'):
if hasattr(record, "name") and "uvicorn.access" in record.name:
if hasattr(record, "getMessage"):
msg = record.getMessage()
if ' - "' in msg and '" ' in msg:
parts = msg.split(' - "')
@ -56,14 +57,14 @@ def add_log_level(logger: FilteringBoundLogger, method_name: str, event_dict: Ev
def add_module_info(logger: FilteringBoundLogger, method_name: str, event_dict: EventDict) -> EventDict:
if hasattr(logger, '_context') and 'logger_name' in logger._context:
logger_name = logger._context['logger_name']
if logger_name.startswith('pyserve'):
if hasattr(logger, "_context") and "logger_name" in logger._context:
logger_name = logger._context["logger_name"]
if logger_name.startswith("pyserve"):
event_dict["module"] = logger_name
elif logger_name.startswith('uvicorn'):
event_dict["module"] = 'uvicorn'
elif logger_name.startswith('starlette'):
event_dict["module"] = 'starlette'
elif logger_name.startswith("uvicorn"):
event_dict["module"] = "uvicorn"
elif logger_name.startswith("starlette"):
event_dict["module"] = "starlette"
else:
event_dict["module"] = logger_name
return event_dict
@ -74,18 +75,19 @@ def filter_module_info(show_module: bool) -> Callable[[FilteringBoundLogger, str
if not show_module and "module" in event_dict:
del event_dict["module"]
return event_dict
return processor
def colored_console_renderer(use_colors: bool = True, show_module: bool = True) -> structlog.dev.ConsoleRenderer:
return structlog.dev.ConsoleRenderer(
colors=use_colors and hasattr(sys.stderr, 'isatty') and sys.stderr.isatty(),
colors=use_colors and hasattr(sys.stderr, "isatty") and sys.stderr.isatty(),
level_styles={
"critical": "\033[35m", # Magenta
"error": "\033[31m", # Red
"warning": "\033[33m", # Yellow
"info": "\033[32m", # Green
"debug": "\033[36m", # Cyan
"error": "\033[31m", # Red
"warning": "\033[33m", # Yellow
"info": "\033[32m", # Green
"debug": "\033[36m", # Cyan
},
pad_event=25,
)
@ -113,43 +115,35 @@ class PyServeLogManager:
if self.configured:
return
if 'format' not in config and 'console' not in config and 'files' not in config:
level = config.get('level', 'INFO').upper()
console_output = config.get('console_output', True)
log_file = config.get('log_file', './logs/pyserve.log')
if "format" not in config and "console" not in config and "files" not in config:
level = config.get("level", "INFO").upper()
console_output = config.get("console_output", True)
log_file = config.get("log_file", "./logs/pyserve.log")
config = {
'level': level,
'console_output': console_output,
'format': {
'type': 'standard',
'use_colors': True,
'show_module': True,
'timestamp_format': '%Y-%m-%d %H:%M:%S'
},
'files': [{
'path': log_file,
'level': level,
'loggers': [],
'max_bytes': 10 * 1024 * 1024,
'backup_count': 5,
'format': {
'type': 'standard',
'use_colors': False,
'show_module': True,
'timestamp_format': '%Y-%m-%d %H:%M:%S'
"level": level,
"console_output": console_output,
"format": {"type": "standard", "use_colors": True, "show_module": True, "timestamp_format": "%Y-%m-%d %H:%M:%S"},
"files": [
{
"path": log_file,
"level": level,
"loggers": [],
"max_bytes": 10 * 1024 * 1024,
"backup_count": 5,
"format": {"type": "standard", "use_colors": False, "show_module": True, "timestamp_format": "%Y-%m-%d %H:%M:%S"},
}
}]
],
}
main_level = config.get('level', 'INFO').upper()
console_output = config.get('console_output', True)
main_level = config.get("level", "INFO").upper()
console_output = config.get("console_output", True)
global_format = config.get('format', {})
console_config = config.get('console', {})
files_config = config.get('files', [])
global_format = config.get("format", {})
console_config = config.get("console", {})
files_config = config.get("files", [])
console_format = {**global_format, **console_config.get('format', {})}
console_level = console_config.get('level', main_level)
console_format = {**global_format, **console_config.get("format", {})}
console_level = console_config.get("level", main_level)
self._save_original_handlers()
self._clear_all_handlers()
@ -159,38 +153,33 @@ class PyServeLogManager:
console_output=console_output,
console_format=console_format,
console_level=console_level,
files_config=files_config
files_config=files_config,
)
self._configure_stdlib_loggers(main_level)
logger = self.get_logger('pyserve')
logger = self.get_logger("pyserve")
logger.info(
"PyServe logger initialized",
version=__version__,
level=main_level,
console_output=console_output,
console_format=console_format.get('type', 'standard')
console_format=console_format.get("type", "standard"),
)
for i, file_config in enumerate(files_config):
logger.info(
"File logging configured",
file_index=i,
path=file_config.get('path'),
level=file_config.get('level', main_level),
format_type=file_config.get('format', {}).get('type', 'standard')
path=file_config.get("path"),
level=file_config.get("level", main_level),
format_type=file_config.get("format", {}).get("type", "standard"),
)
self.configured = True
def _configure_structlog(
self,
main_level: str,
console_output: bool,
console_format: Dict[str, Any],
console_level: str,
files_config: List[Dict[str, Any]]
self, main_level: str, console_output: bool, console_format: Dict[str, Any], console_level: str, files_config: List[Dict[str, Any]]
) -> None:
shared_processors = [
structlog.stdlib.filter_by_level,
@ -202,57 +191,46 @@ class PyServeLogManager:
]
if console_output:
console_show_module = console_format.get('show_module', True)
console_show_module = console_format.get("show_module", True)
console_processors = shared_processors.copy()
console_processors.append(filter_module_info(console_show_module))
if console_format.get('type') == 'json':
if console_format.get("type") == "json":
console_processors.append(json_renderer())
else:
console_processors.append(
colored_console_renderer(
console_format.get('use_colors', True),
console_show_module
)
)
console_processors.append(colored_console_renderer(console_format.get("use_colors", True), console_show_module))
console_handler = logging.StreamHandler(sys.stdout)
console_handler.setLevel(getattr(logging, console_level))
console_handler.addFilter(UvicornStructlogFilter())
console_formatter = structlog.stdlib.ProcessorFormatter(
processor=colored_console_renderer(
console_format.get('use_colors', True),
console_show_module
)
if console_format.get('type') != 'json'
else json_renderer(),
processor=(
colored_console_renderer(console_format.get("use_colors", True), console_show_module)
if console_format.get("type") != "json"
else json_renderer()
),
)
console_handler.setFormatter(console_formatter)
root_logger = logging.getLogger()
root_logger.setLevel(logging.DEBUG)
root_logger.addHandler(console_handler)
self.handlers['console'] = console_handler
self.handlers["console"] = console_handler
for i, file_config in enumerate(files_config):
file_path = file_config.get('path', './logs/pyserve.log')
file_level = file_config.get('level', main_level)
file_loggers = file_config.get('loggers', [])
max_bytes = file_config.get('max_bytes', 10 * 1024 * 1024)
backup_count = file_config.get('backup_count', 5)
file_format = file_config.get('format', {})
file_show_module = file_format.get('show_module', True)
file_path = file_config.get("path", "./logs/pyserve.log")
file_level = file_config.get("level", main_level)
file_loggers = file_config.get("loggers", [])
max_bytes = file_config.get("max_bytes", 10 * 1024 * 1024)
backup_count = file_config.get("backup_count", 5)
file_format = file_config.get("format", {})
file_show_module = file_format.get("show_module", True)
self._ensure_log_directory(file_path)
file_handler = logging.handlers.RotatingFileHandler(
file_path,
maxBytes=max_bytes,
backupCount=backup_count,
encoding='utf-8'
)
file_handler = logging.handlers.RotatingFileHandler(file_path, maxBytes=max_bytes, backupCount=backup_count, encoding="utf-8")
file_handler.setLevel(getattr(logging, file_level))
if file_loggers:
@ -262,15 +240,13 @@ class PyServeLogManager:
file_processors.append(filter_module_info(file_show_module))
file_formatter = structlog.stdlib.ProcessorFormatter(
processor=json_renderer()
if file_format.get('type') == 'json'
else plain_console_renderer(file_show_module),
processor=json_renderer() if file_format.get("type") == "json" else plain_console_renderer(file_show_module),
)
file_handler.setFormatter(file_formatter)
root_logger = logging.getLogger()
root_logger.addHandler(file_handler)
self.handlers[f'file_{i}'] = file_handler
self.handlers[f"file_{i}"] = file_handler
base_processors = [
structlog.stdlib.filter_by_level,
@ -293,14 +269,14 @@ class PyServeLogManager:
def _configure_stdlib_loggers(self, main_level: str) -> None:
library_configs = {
"uvicorn": "DEBUG" if main_level == "DEBUG" else "WARNING",
"uvicorn.access": "DEBUG" if main_level == "DEBUG" else "WARNING",
"uvicorn.error": "DEBUG" if main_level == "DEBUG" else "ERROR",
"uvicorn.asgi": "DEBUG" if main_level == "DEBUG" else "WARNING",
"starlette": "DEBUG" if main_level == "DEBUG" else "WARNING",
"asyncio": "WARNING",
"concurrent.futures": "WARNING",
"multiprocessing": "WARNING",
}
for logger_name, level in library_configs.items():
@@ -309,7 +285,7 @@ class PyServeLogManager:
logger.propagate = True
def _save_original_handlers(self) -> None:
logger_names = ["", "uvicorn", "uvicorn.access", "uvicorn.error", "starlette"]
for name in logger_names:
logger = logging.getLogger(name)
@@ -320,7 +296,7 @@ class PyServeLogManager:
for handler in root_logger.handlers[:]:
root_logger.removeHandler(handler)
logger_names = ["uvicorn", "uvicorn.access", "uvicorn.error", "starlette"]
for name in logger_names:
logger = logging.getLogger(name)
for handler in logger.handlers[:]:
@@ -335,14 +311,17 @@ class PyServeLogManager:
def get_logger(self, name: str) -> structlog.stdlib.BoundLogger:
if not self._structlog_configured:
structlog.configure(
processors=cast(
Any,
[
structlog.stdlib.filter_by_level,
add_timestamp,
add_log_level,
structlog.processors.StackInfoRenderer(),
structlog.processors.format_exc_info,
structlog.stdlib.ProcessorFormatter.wrap_for_formatter,
],
),
context_class=dict,
logger_factory=structlog.stdlib.LoggerFactory(),
wrapper_class=structlog.stdlib.BoundLogger,
@@ -370,16 +349,8 @@ class PyServeLogManager:
handler.close()
del self.handlers[name]
def create_access_log(self, method: str, path: str, status_code: int, response_time: float, client_ip: str, user_agent: str = "") -> None:
access_logger = self.get_logger("pyserve.access")
access_logger.info(
"HTTP access",
method=method,
@@ -388,7 +359,7 @@ class PyServeLogManager:
response_time_ms=round(response_time * 1000, 2),
client_ip=client_ip,
user_agent=user_agent,
timestamp_format="access",
)
def shutdown(self) -> None:
@@ -416,14 +387,7 @@ def get_logger(name: str) -> structlog.stdlib.BoundLogger:
return log_manager.get_logger(name)
def create_access_log(method: str, path: str, status_code: int, response_time: float, client_ip: str, user_agent: str = "") -> None:
log_manager.create_access_log(method, path, status_code, response_time, client_ip, user_agent)
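The logging hunks above configure a size-based rotating file handler (PyServe's defaults are 10 MB per file with 5 backups). A minimal standalone sketch of the same mechanism, with illustrative paths and much smaller limits so the rollover is visible:

```python
import logging
import logging.handlers
import os
import tempfile

def make_rotating_logger(path: str, max_bytes: int, backup_count: int) -> logging.Logger:
    # Make sure the log directory exists before the handler opens the file
    os.makedirs(os.path.dirname(path) or ".", exist_ok=True)
    handler = logging.handlers.RotatingFileHandler(
        path, maxBytes=max_bytes, backupCount=backup_count, encoding="utf-8"
    )
    logger = logging.getLogger("demo.rotating")
    logger.setLevel(logging.INFO)
    logger.addHandler(handler)
    return logger

log_dir = tempfile.mkdtemp()
log_path = os.path.join(log_dir, "pyserve.log")
logger = make_rotating_logger(log_path, max_bytes=1024, backup_count=2)
for i in range(200):
    logger.info("message %d with padding to force a rollover", i)
# The base file plus capped numbered backups (.1, .2) now exist
files = sorted(os.listdir(log_dir))
print(files)
```

With `backupCount=2` the handler keeps at most the live file plus two backups, discarding older data, which is the same bounded-disk guarantee the PyServe config aims for.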

pyserve/path_matcher.py Normal file

@@ -0,0 +1,33 @@
"""
Path matcher module - uses Cython implementation if available, falls back to pure Python.
"""
try:
from pyserve._path_matcher import (
FastMountedPath,
FastMountManager,
match_and_modify_path,
path_matches_prefix,
strip_path_prefix,
)
CYTHON_AVAILABLE = True
except ImportError:
from pyserve._path_matcher_py import (
FastMountedPath,
FastMountManager,
match_and_modify_path,
path_matches_prefix,
strip_path_prefix,
)
CYTHON_AVAILABLE = False
__all__ = [
"FastMountedPath",
"FastMountManager",
"path_matches_prefix",
"strip_path_prefix",
"match_and_modify_path",
"CYTHON_AVAILABLE",
]
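The try/except import above is the standard optional-accelerator pattern: attempt the compiled module, and fall back to a pure-Python definition with identical semantics. A generic sketch — the module name below is deliberately nonexistent so the fallback path runs, and the prefix semantics are an assumption about what the real `path_matches_prefix` does:

```python
try:
    # Hypothetical compiled extension; absent here, so the fallback is used
    from _pyserve_speedups import path_matches_prefix  # type: ignore
    CYTHON_AVAILABLE = True
except ImportError:
    def path_matches_prefix(path: str, prefix: str) -> bool:
        # Pure-Python fallback: exact match, or match at a path-segment boundary
        return path == prefix or path.startswith(prefix + "/")
    CYTHON_AVAILABLE = False

print(CYTHON_AVAILABLE, path_matches_prefix("/api/users", "/api"))
```

Exporting the `CYTHON_AVAILABLE` flag, as the real module does, lets tests and diagnostics report which implementation is active without re-probing the import.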


@@ -0,0 +1,365 @@
"""Process Orchestration Extension
Extension that manages ASGI/WSGI applications as isolated processes
and routes requests to them via reverse proxy.
"""
import asyncio
import logging
import time
import uuid
from typing import Any, Dict, Optional
import httpx
from starlette.requests import Request
from starlette.responses import Response
from .extensions import Extension
from .logging_utils import get_logger
from .process_manager import ProcessConfig, ProcessManager
logger = get_logger(__name__)
class ProcessOrchestrationExtension(Extension):
"""
Extension that orchestrates ASGI/WSGI applications as separate processes.
Unlike ASGIExtension which runs apps in-process, this extension:
- Runs each app in its own isolated process
- Provides health monitoring and auto-restart
- Routes requests via HTTP reverse proxy
- Supports multiple workers per app
Configuration example:
```yaml
extensions:
- type: process_orchestration
config:
port_range: [9000, 9999]
health_check_enabled: true
apps:
- name: api
path: /api
app_path: myapp.api:app
workers: 4
health_check_path: /health
- name: admin
path: /admin
app_path: myapp.admin:create_app
factory: true
workers: 2
```
"""
name = "process_orchestration"
def __init__(self, config: Dict[str, Any]) -> None:
super().__init__(config)
self._manager: Optional[ProcessManager] = None
self._mounts: Dict[str, MountConfig] = {} # path -> config
self._http_client: Optional[httpx.AsyncClient] = None
self._started = False
self._proxy_timeout: float = config.get("proxy_timeout", 60.0)
self._pending_config = config # Store for async setup
logging_config = config.get("logging", {})
self._log_proxy_requests: bool = logging_config.get("proxy_logs", True)
self._log_health_checks: bool = logging_config.get("health_check_logs", False)
httpx_level = logging_config.get("httpx_level", "warning").upper()
logging.getLogger("httpx").setLevel(getattr(logging, httpx_level, logging.WARNING))
logging.getLogger("httpcore").setLevel(getattr(logging, httpx_level, logging.WARNING))
async def setup(self, config: Optional[Dict[str, Any]] = None) -> None:
if config is None:
config = self._pending_config
port_range = tuple(config.get("port_range", [9000, 9999]))
health_check_enabled = config.get("health_check_enabled", True)
self._proxy_timeout = config.get("proxy_timeout", 60.0)
self._manager = ProcessManager(
port_range=port_range,
health_check_enabled=health_check_enabled,
)
self._http_client = httpx.AsyncClient(
timeout=httpx.Timeout(self._proxy_timeout),
follow_redirects=False,
limits=httpx.Limits(
max_keepalive_connections=100,
max_connections=200,
),
)
apps_config = config.get("apps", [])
for app_config in apps_config:
await self._register_app(app_config)
logger.info(
"Process orchestration extension initialized",
app_count=len(self._mounts),
)
async def _register_app(self, app_config: Dict[str, Any]) -> None:
if not self._manager:
return
name = app_config.get("name")
path = app_config.get("path", "").rstrip("/")
app_path = app_config.get("app_path")
if not name or not app_path:
logger.error("App config missing 'name' or 'app_path'")
return
process_config = ProcessConfig(
name=name,
app_path=app_path,
app_type=app_config.get("app_type", "asgi"),
workers=app_config.get("workers", 1),
module_path=app_config.get("module_path"),
factory=app_config.get("factory", False),
factory_args=app_config.get("factory_args"),
env=app_config.get("env", {}),
health_check_enabled=app_config.get("health_check_enabled", True),
health_check_path=app_config.get("health_check_path", "/health"),
health_check_interval=app_config.get("health_check_interval", 10.0),
health_check_timeout=app_config.get("health_check_timeout", 5.0),
health_check_retries=app_config.get("health_check_retries", 3),
max_memory_mb=app_config.get("max_memory_mb"),
max_restart_count=app_config.get("max_restart_count", 5),
restart_delay=app_config.get("restart_delay", 1.0),
shutdown_timeout=app_config.get("shutdown_timeout", 30.0),
)
await self._manager.register(process_config)
self._mounts[path] = MountConfig(
path=path,
process_name=name,
strip_path=app_config.get("strip_path", True),
)
logger.info(f"Registered app '{name}' at path '{path}'")
async def start(self) -> None:
if self._started or not self._manager:
return
await self._manager.start()
results = await self._manager.start_all()
self._started = True
success = sum(1 for v in results.values() if v)
failed = len(results) - success
logger.info(
"Process orchestration started",
success=success,
failed=failed,
)
async def stop(self) -> None:
if not self._started:
return
if self._http_client:
await self._http_client.aclose()
self._http_client = None
if self._manager:
await self._manager.stop()
self._started = False
logger.info("Process orchestration stopped")
def cleanup(self) -> None:
try:
loop = asyncio.get_running_loop()
loop.create_task(self.stop())
except RuntimeError:
asyncio.run(self.stop())
async def process_request(self, request: Request) -> Optional[Response]:
if not self._started or not self._manager:
logger.debug(
"Process orchestration not ready",
started=self._started,
has_manager=self._manager is not None,
)
return None
mount = self._get_mount(request.url.path)
if not mount:
logger.debug(
"No mount found for path",
path=request.url.path,
available_mounts=list(self._mounts.keys()),
)
return None
upstream_url = self._manager.get_upstream_url(mount.process_name)
if not upstream_url:
logger.warning(
f"Process '{mount.process_name}' not running",
path=request.url.path,
)
return Response("Service Unavailable", status_code=503)
request_id = request.headers.get("X-Request-ID", str(uuid.uuid4())[:8])
start_time = time.perf_counter()
response = await self._proxy_request(request, upstream_url, mount, request_id)
latency_ms = (time.perf_counter() - start_time) * 1000
if self._log_proxy_requests:
logger.info(
"Proxy request completed",
request_id=request_id,
method=request.method,
path=request.url.path,
process=mount.process_name,
upstream=upstream_url,
status=response.status_code,
latency_ms=round(latency_ms, 2),
)
return response
def _get_mount(self, path: str) -> Optional["MountConfig"]:
for mount_path in sorted(self._mounts.keys(), key=len, reverse=True):
if mount_path == "":
return self._mounts[mount_path]
if path == mount_path or path.startswith(f"{mount_path}/"):
return self._mounts[mount_path]
return None
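`_get_mount` resolves a request path against the mount table longest-prefix-first, so `/api/v2` shadows `/api` and matches only land on path-segment boundaries (`/apiv2` does not match `/api`). The selection logic in isolation, with an illustrative mount table:

```python
from typing import Dict, Optional

def get_mount(path: str, mounts: Dict[str, str]) -> Optional[str]:
    # Sort prefixes longest-first so the most specific mount wins
    for mount_path in sorted(mounts, key=len, reverse=True):
        if mount_path == "":
            return mounts[mount_path]  # empty prefix acts as a catch-all
        if path == mount_path or path.startswith(f"{mount_path}/"):
            return mounts[mount_path]
    return None

mounts = {"/api": "api", "/api/v2": "api_v2", "/admin": "admin"}
print(get_mount("/api/v2/users", mounts))  # api_v2, not api
```

Because the empty prefix sorts shortest, a root (`""`) mount is only reached after every specific prefix has failed, which is exactly the catch-all behaviour the extension relies on.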
async def _proxy_request(
self,
request: Request,
upstream_url: str,
mount: "MountConfig",
request_id: str = "",
) -> Response:
path = request.url.path
if mount.strip_path and mount.path:
path = path[len(mount.path) :] or "/"
target_url = f"{upstream_url}{path}"
if request.url.query:
target_url += f"?{request.url.query}"
headers = dict(request.headers)
headers.pop("host", None)
headers["X-Forwarded-For"] = request.client.host if request.client else "unknown"
headers["X-Forwarded-Proto"] = request.url.scheme
headers["X-Forwarded-Host"] = request.headers.get("host", "")
if request_id:
headers["X-Request-ID"] = request_id
try:
if not self._http_client:
return Response("Service Unavailable", status_code=503)
body = await request.body()
response = await self._http_client.request(
method=request.method,
url=target_url,
headers=headers,
content=body,
)
response_headers = dict(response.headers)
for header in ["transfer-encoding", "connection", "keep-alive"]:
response_headers.pop(header, None)
return Response(
content=response.content,
status_code=response.status_code,
headers=response_headers,
)
except httpx.TimeoutException:
logger.error(f"Proxy timeout to {upstream_url}")
return Response("Gateway Timeout", status_code=504)
except httpx.ConnectError as e:
logger.error(f"Proxy connection error to {upstream_url}: {e}")
return Response("Bad Gateway", status_code=502)
except Exception as e:
logger.error(f"Proxy error to {upstream_url}: {e}")
return Response("Internal Server Error", status_code=500)
async def process_response(
self,
request: Request,
response: Response,
) -> Response:
return response
def get_metrics(self) -> Dict[str, Any]:
metrics = {
"process_orchestration": {
"enabled": self._started,
"mounts": len(self._mounts),
}
}
if self._manager:
metrics["process_orchestration"].update(self._manager.get_metrics())
return metrics
async def get_process_status(self, name: str) -> Optional[Dict[str, Any]]:
if not self._manager:
return None
info = self._manager.get_process(name)
return info.to_dict() if info else None
async def get_all_status(self) -> Dict[str, Any]:
if not self._manager:
return {}
return {name: info.to_dict() for name, info in self._manager.get_all_processes().items()}
async def restart_process(self, name: str) -> bool:
if not self._manager:
return False
return await self._manager.restart_process(name)
async def scale_process(self, name: str, workers: int) -> bool:
if not self._manager:
return False
info = self._manager.get_process(name)
if not info:
return False
info.config.workers = workers
return await self._manager.restart_process(name)
class MountConfig:
def __init__(
self,
path: str,
process_name: str,
strip_path: bool = True,
):
self.path = path
self.process_name = process_name
self.strip_path = strip_path
async def setup_process_orchestration(config: Dict[str, Any]) -> ProcessOrchestrationExtension:
ext = ProcessOrchestrationExtension(config)
await ext.setup(config)
await ext.start()
return ext
async def shutdown_process_orchestration(ext: ProcessOrchestrationExtension) -> None:
await ext.stop()

pyserve/process_manager.py Normal file

@@ -0,0 +1,553 @@
"""Process Manager Module
Orchestrates ASGI/WSGI applications as separate processes
"""
import asyncio
import logging
import os
import signal
import socket
import subprocess
import sys
import time
from dataclasses import dataclass, field
from enum import Enum
from pathlib import Path
from typing import Any, Dict, List, Optional
from .logging_utils import get_logger
logging.getLogger("httpx").setLevel(logging.WARNING)
logging.getLogger("httpcore").setLevel(logging.WARNING)
logger = get_logger(__name__)
class ProcessState(Enum):
PENDING = "pending"
STARTING = "starting"
RUNNING = "running"
STOPPING = "stopping"
STOPPED = "stopped"
FAILED = "failed"
RESTARTING = "restarting"
@dataclass
class ProcessConfig:
name: str
app_path: str
app_type: str = "asgi" # asgi, wsgi
host: str = "127.0.0.1"
port: int = 0 # 0 = auto-assign
workers: int = 1
module_path: Optional[str] = None
factory: bool = False
factory_args: Optional[Dict[str, Any]] = None
env: Dict[str, str] = field(default_factory=dict)
health_check_enabled: bool = True
health_check_path: str = "/health"
health_check_interval: float = 10.0
health_check_timeout: float = 5.0
health_check_retries: int = 3
max_memory_mb: Optional[int] = None
max_restart_count: int = 5
restart_delay: float = 1.0 # seconds
shutdown_timeout: float = 30.0 # seconds
@dataclass
class ProcessInfo:
config: ProcessConfig
state: ProcessState = ProcessState.PENDING
pid: Optional[int] = None
port: int = 0
start_time: Optional[float] = None
restart_count: int = 0
last_health_check: Optional[float] = None
health_check_failures: int = 0
process: Optional[subprocess.Popen] = None
@property
def uptime(self) -> float:
if self.start_time is None:
return 0.0
return time.time() - self.start_time
@property
def is_running(self) -> bool:
return self.state == ProcessState.RUNNING and self.process is not None
def to_dict(self) -> Dict[str, Any]:
return {
"name": self.config.name,
"state": self.state.value,
"pid": self.pid,
"port": self.port,
"uptime": round(self.uptime, 2),
"restart_count": self.restart_count,
"health_check_failures": self.health_check_failures,
"workers": self.config.workers,
}
class PortAllocator:
def __init__(self, start_port: int = 9000, end_port: int = 9999):
self.start_port = start_port
self.end_port = end_port
self._allocated: set[int] = set()
self._lock = asyncio.Lock()
async def allocate(self) -> int:
async with self._lock:
for port in range(self.start_port, self.end_port + 1):
if port in self._allocated:
continue
if self._is_port_available(port):
self._allocated.add(port)
return port
raise RuntimeError(f"No available ports in range {self.start_port}-{self.end_port}")
async def release(self, port: int) -> None:
async with self._lock:
self._allocated.discard(port)
def _is_port_available(self, port: int) -> bool:
try:
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
s.bind(("127.0.0.1", port))
return True
except OSError:
return False
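`_is_port_available` probes a port by attempting a local bind; pairing it with an OS-assigned port shows how the `port: 0` auto-assign case works. A standalone sketch:

```python
import socket

def is_port_available(port: int) -> bool:
    # A successful bind means nothing is listening on the port right now
    try:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            s.bind(("127.0.0.1", port))
            return True
    except OSError:
        return False

def pick_free_port() -> int:
    # Binding to port 0 asks the kernel for any free ephemeral port
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind(("127.0.0.1", 0))
        return s.getsockname()[1]

port = pick_free_port()
print(port, is_port_available(port))
```

Note the probe is inherently racy: another process may grab the port between the check and the eventual real bind, which is why `PortAllocator` also tracks the ports it has handed out in `_allocated` under a lock.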
class ProcessManager:
def __init__(
self,
port_range: tuple[int, int] = (9000, 9999),
health_check_enabled: bool = True,
):
self._processes: Dict[str, ProcessInfo] = {}
self._port_allocator = PortAllocator(*port_range)
self._health_check_enabled = health_check_enabled
self._health_check_task: Optional[asyncio.Task] = None
self._shutdown_event = asyncio.Event()
self._started = False
self._lock = asyncio.Lock()
async def start(self) -> None:
if self._started:
return
self._started = True
self._shutdown_event.clear()
if self._health_check_enabled:
self._health_check_task = asyncio.create_task(self._health_check_loop(), name="process_manager_health_check")
logger.info("Process manager started")
async def stop(self) -> None:
if not self._started:
return
logger.info("Stopping process manager...")
self._shutdown_event.set()
if self._health_check_task:
self._health_check_task.cancel()
try:
await self._health_check_task
except asyncio.CancelledError:
pass
await self.stop_all()
self._started = False
logger.info("Process manager stopped")
async def register(self, config: ProcessConfig) -> ProcessInfo:
async with self._lock:
if config.name in self._processes:
raise ValueError(f"Process '{config.name}' already registered")
info = ProcessInfo(config=config)
self._processes[config.name] = info
logger.info(f"Registered process '{config.name}'", app_path=config.app_path)
return info
async def unregister(self, name: str) -> None:
async with self._lock:
if name not in self._processes:
return
info = self._processes[name]
if info.is_running:
await self._stop_process(info)
if info.port:
await self._port_allocator.release(info.port)
del self._processes[name]
logger.info(f"Unregistered process '{name}'")
async def start_process(self, name: str) -> bool:
info = self._processes.get(name)
if not info:
logger.error(f"Process '{name}' not found")
return False
if info.is_running:
logger.warning(f"Process '{name}' is already running")
return True
return await self._start_process(info)
async def stop_process(self, name: str) -> bool:
info = self._processes.get(name)
if not info:
logger.error(f"Process '{name}' not found")
return False
return await self._stop_process(info)
async def restart_process(self, name: str) -> bool:
info = self._processes.get(name)
if not info:
logger.error(f"Process '{name}' not found")
return False
info.state = ProcessState.RESTARTING
if info.is_running:
await self._stop_process(info)
await asyncio.sleep(info.config.restart_delay)
return await self._start_process(info)
async def start_all(self) -> Dict[str, bool]:
results = {}
for name in self._processes:
results[name] = await self.start_process(name)
return results
async def stop_all(self) -> None:
tasks = []
for info in self._processes.values():
if info.is_running:
tasks.append(self._stop_process(info))
if tasks:
await asyncio.gather(*tasks, return_exceptions=True)
def get_process(self, name: str) -> Optional[ProcessInfo]:
return self._processes.get(name)
def get_all_processes(self) -> Dict[str, ProcessInfo]:
return self._processes.copy()
def get_process_by_port(self, port: int) -> Optional[ProcessInfo]:
for info in self._processes.values():
if info.port == port:
return info
return None
def get_upstream_url(self, name: str) -> Optional[str]:
info = self._processes.get(name)
if not info or not info.is_running:
return None
return f"http://{info.config.host}:{info.port}"
async def _start_process(self, info: ProcessInfo) -> bool:
config = info.config
try:
info.state = ProcessState.STARTING
if info.port == 0:
info.port = await self._port_allocator.allocate()
cmd = self._build_command(config, info.port)
env = os.environ.copy()
env.update(config.env)
if config.module_path:
python_path = env.get("PYTHONPATH", "")
module_dir = str(Path(config.module_path).resolve())
env["PYTHONPATH"] = f"{module_dir}:{python_path}" if python_path else module_dir
# For WSGI apps, pass configuration via environment variables
if config.app_type == "wsgi":
env["PYSERVE_WSGI_APP"] = config.app_path
env["PYSERVE_WSGI_FACTORY"] = "1" if config.factory else "0"
logger.info(
f"Starting process '{config.name}'",
command=" ".join(cmd),
port=info.port,
)
info.process = subprocess.Popen(
cmd,
env=env,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
preexec_fn=os.setsid if hasattr(os, "setsid") else None,
)
info.pid = info.process.pid
info.start_time = time.time()
if not await self._wait_for_ready(info):
raise RuntimeError(f"Process '{config.name}' failed to start")
info.state = ProcessState.RUNNING
logger.info(
f"Process '{config.name}' started successfully",
pid=info.pid,
port=info.port,
)
return True
except Exception as e:
logger.error(f"Failed to start process '{config.name}': {e}")
info.state = ProcessState.FAILED
if info.port:
await self._port_allocator.release(info.port)
info.port = 0
return False
async def _stop_process(self, info: ProcessInfo) -> bool:
if not info.process:
info.state = ProcessState.STOPPED
return True
config = info.config
info.state = ProcessState.STOPPING
try:
if hasattr(os, "killpg"):
try:
os.killpg(os.getpgid(info.process.pid), signal.SIGTERM)
except ProcessLookupError:
pass
else:
info.process.terminate()
try:
await asyncio.wait_for(asyncio.get_event_loop().run_in_executor(None, info.process.wait), timeout=config.shutdown_timeout)
except asyncio.TimeoutError:
logger.warning(f"Process '{config.name}' did not stop gracefully, forcing kill")
if hasattr(os, "killpg"):
try:
os.killpg(os.getpgid(info.process.pid), signal.SIGKILL)
except ProcessLookupError:
pass
else:
info.process.kill()
info.process.wait()
if info.port:
await self._port_allocator.release(info.port)
info.state = ProcessState.STOPPED
info.process = None
info.pid = None
logger.info(f"Process '{config.name}' stopped")
return True
except Exception as e:
logger.error(f"Error stopping process '{config.name}': {e}")
info.state = ProcessState.FAILED
return False
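`_stop_process` escalates from SIGTERM to SIGKILL once `shutdown_timeout` expires. A minimal single-process version of the same escalation (the process-group `killpg` handling from the real code is omitted for brevity):

```python
import subprocess
import sys

def stop_gracefully(proc: subprocess.Popen, timeout: float = 5.0) -> int:
    proc.terminate()  # SIGTERM first: let the process clean up
    try:
        return proc.wait(timeout=timeout)
    except subprocess.TimeoutExpired:
        proc.kill()  # escalate to SIGKILL if SIGTERM was ignored
        return proc.wait()

# Spawn a child that would sleep far longer than we are willing to wait
proc = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(30)"])
code = stop_gracefully(proc, timeout=5.0)
print(code)  # non-zero: the child was terminated by a signal
```

The manager signals the whole process group (`os.killpg`) rather than the single PID so that uvicorn's worker children die with their parent; that detail matters once `workers > 1`.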
async def _wait_for_ready(self, info: ProcessInfo, timeout: float = 30.0) -> bool:
import httpx
start_time = time.time()
url = f"http://{info.config.host}:{info.port}{info.config.health_check_path}"
while time.time() - start_time < timeout:
if info.process and info.process.poll() is not None:
stdout, stderr = info.process.communicate()
logger.error(
f"Process '{info.config.name}' exited during startup",
returncode=info.process.returncode,
stderr=stderr.decode() if stderr else "",
)
return False
try:
async with httpx.AsyncClient(timeout=2.0) as client:
resp = await client.get(url)
if resp.status_code < 500:
return True
except Exception:
pass
await asyncio.sleep(0.5)
return False
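Stripped of HTTP, `_wait_for_ready` is a poll-until-deadline loop; the generic shape can be sketched as:

```python
import time
from typing import Callable

def wait_until(predicate: Callable[[], bool], timeout: float = 5.0, interval: float = 0.05) -> bool:
    # Poll predicate() until it succeeds or the deadline passes
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    return False

ready_at = time.monotonic() + 0.2
print(wait_until(lambda: time.monotonic() >= ready_at, timeout=2.0))
```

The real method adds one refinement worth copying: inside the loop it also checks `process.poll()`, so a child that crashes during startup fails fast instead of burning the whole timeout.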
async def _health_check_loop(self) -> None:
while not self._shutdown_event.is_set():
try:
for info in list(self._processes.values()):
if not info.is_running or not info.config.health_check_enabled:
continue
await self._check_process_health(info)
try:
await asyncio.wait_for(
self._shutdown_event.wait(),
timeout=(
min(p.config.health_check_interval for p in self._processes.values() if p.config.health_check_enabled)
if self._processes
else 10.0
),
)
break
except asyncio.TimeoutError:
pass
except Exception as e:
logger.error(f"Error in health check loop: {e}")
await asyncio.sleep(5)
async def _check_process_health(self, info: ProcessInfo) -> bool:
import httpx
config = info.config
url = f"http://{config.host}:{info.port}{config.health_check_path}"
try:
async with httpx.AsyncClient(timeout=config.health_check_timeout) as client:
resp = await client.get(url)
if resp.status_code < 500:
info.health_check_failures = 0
info.last_health_check = time.time()
return True
else:
raise Exception(f"Health check returned status {resp.status_code}")
except Exception as e:
info.health_check_failures += 1
logger.warning(
f"Health check failed for '{config.name}'",
failures=info.health_check_failures,
error=str(e),
)
if info.health_check_failures >= config.health_check_retries:
logger.error(f"Process '{config.name}' is unhealthy, restarting...")
await self._handle_unhealthy_process(info)
return False
async def _handle_unhealthy_process(self, info: ProcessInfo) -> None:
config = info.config
if info.restart_count >= config.max_restart_count:
logger.error(f"Process '{config.name}' exceeded max restart count, marking as failed")
info.state = ProcessState.FAILED
return
info.restart_count += 1
info.health_check_failures = 0
delay = config.restart_delay * (2 ** (info.restart_count - 1))
delay = min(delay, 60.0)
logger.info(
f"Restarting process '{config.name}'",
restart_count=info.restart_count,
delay=delay,
)
await self._stop_process(info)
await asyncio.sleep(delay)
await self._start_process(info)
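The restart delay above grows as `restart_delay * 2^(n-1)` and is capped at 60 seconds; the resulting schedule for the default 1 s base:

```python
def backoff_delay(base: float, restart_count: int, cap: float = 60.0) -> float:
    # Exponential backoff: base * 2^(n-1), capped so waits stay bounded
    return min(base * (2 ** (restart_count - 1)), cap)

delays = [backoff_delay(1.0, n) for n in range(1, 9)]
print(delays)  # 1, 2, 4, 8, 16, 32, then capped at 60
```

Combined with `max_restart_count`, this gives a crashing app progressively more recovery time without letting a flapping process monopolise the health-check loop.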
def _build_command(self, config: ProcessConfig, port: int) -> List[str]:
if config.app_type == "wsgi":
wrapper_app = self._create_wsgi_wrapper_path(config)
app_path = wrapper_app
else:
app_path = config.app_path
cmd = [
sys.executable,
"-m",
"uvicorn",
app_path,
"--host",
config.host,
"--port",
str(port),
"--workers",
str(config.workers),
"--log-level",
"warning",
"--no-access-log",
]
if config.factory and config.app_type != "wsgi":
cmd.append("--factory")
return cmd
def _create_wsgi_wrapper_path(self, config: ProcessConfig) -> str:
"""
Since uvicorn can't directly run WSGI apps, we create a wrapper
that imports the WSGI app and wraps it with a2wsgi.
"""
# For WSGI apps, we'll use a special wrapper module
# The wrapper is: pyserve._wsgi_wrapper:create_app
# It will be called with app_path as environment variable
return "pyserve._wsgi_wrapper:app"
def get_metrics(self) -> Dict[str, Any]:
return {
"managed_processes": len(self._processes),
"running_processes": sum(1 for p in self._processes.values() if p.is_running),
"processes": {name: info.to_dict() for name, info in self._processes.items()},
}
_process_manager: Optional[ProcessManager] = None
def get_process_manager() -> ProcessManager:
global _process_manager
if _process_manager is None:
_process_manager = ProcessManager()
return _process_manager
async def init_process_manager(
port_range: tuple[int, int] = (9000, 9999),
health_check_enabled: bool = True,
) -> ProcessManager:
global _process_manager
_process_manager = ProcessManager(
port_range=port_range,
health_check_enabled=health_check_enabled,
)
await _process_manager.start()
return _process_manager
async def shutdown_process_manager() -> None:
global _process_manager
if _process_manager:
await _process_manager.stop()
_process_manager = None


@@ -1,11 +1,13 @@
import mimetypes
import re
from pathlib import Path
from typing import Any, Dict, Optional, Pattern
from urllib.parse import urlparse
import httpx
from starlette.requests import Request
from starlette.responses import FileResponse, PlainTextResponse, Response
from .logging_utils import get_logger
logger = get_logger(__name__)
@@ -100,8 +102,7 @@ class RequestHandler:
text = ""
content_type = config.get("content_type", "text/plain")
return PlainTextResponse(text, status_code=status_code, media_type=content_type)
if "proxy_pass" in config:
return await self._handle_proxy(request, config, route_match.params)
@@ -123,6 +124,10 @@ class RequestHandler:
file_path = root / index_file
else:
file_path = root / path
# If path is a directory, look for index file
if file_path.is_dir():
index_file = config.get("index_file", "index.html")
file_path = file_path / index_file
try:
file_path = file_path.resolve()
@@ -167,8 +172,7 @@ class RequestHandler:
return PlainTextResponse("404 Not Found", status_code=404)
async def _handle_proxy(self, request: Request, config: Dict[str, Any], params: Dict[str, str]) -> Response:
proxy_url = config["proxy_pass"]
for key, value in params.items():
@@ -193,9 +197,15 @@ class RequestHandler:
proxy_headers = dict(request.headers)
hop_by_hop_headers = [
"connection",
"keep-alive",
"proxy-authenticate",
"proxy-authorization",
"te",
"trailers",
"transfer-encoding",
"upgrade",
"host",
]
for header in hop_by_hop_headers:
proxy_headers.pop(header, None)
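HTTP's hop-by-hop headers apply only to a single connection and must not be forwarded by a proxy; `Host` is not technically hop-by-hop, but it is dropped here so the upstream request carries its own. The filter in isolation:

```python
from typing import Dict

HOP_BY_HOP = {
    "connection", "keep-alive", "proxy-authenticate", "proxy-authorization",
    "te", "trailers", "transfer-encoding", "upgrade", "host",
}

def strip_hop_by_hop(headers: Dict[str, str]) -> Dict[str, str]:
    # Header names are case-insensitive, so compare lowercased
    return {k: v for k, v in headers.items() if k.lower() not in HOP_BY_HOP}

incoming = {"Host": "example.com", "Connection": "keep-alive", "Accept": "text/html"}
print(strip_hop_by_hop(incoming))
```

A fully conforming proxy would also drop any headers named in the incoming `Connection` header's value; the static list above covers the common cases.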


@@ -1,18 +1,19 @@
import ssl
import time
from pathlib import Path
from typing import Any, Dict, Optional
import uvicorn
from starlette.applications import Starlette
from starlette.requests import Request
from starlette.responses import PlainTextResponse, Response
from starlette.routing import Route
from starlette.types import ASGIApp, Receive, Scope, Send
from . import __version__
from .config import Config
from .extensions import ASGIExtension, ExtensionManager
from .logging_utils import get_logger
logger = get_logger(__name__)
@@ -21,7 +22,7 @@ class PyServeMiddleware:
def __init__(self, app: ASGIApp, extension_manager: ExtensionManager):
self.app = app
self.extension_manager = extension_manager
self.access_logger = get_logger("pyserve.access")
async def __call__(self, scope: Scope, receive: Receive, send: Send) -> None:
if scope["type"] != "http":
@@ -30,6 +31,11 @@ class PyServeMiddleware:
start_time = time.time()
request = Request(scope, receive)
asgi_handled = await self._try_asgi_mount(scope, receive, send, request, start_time)
if asgi_handled:
return
response = await self.extension_manager.process_request(request)
if response is None:
@@ -39,6 +45,55 @@ class PyServeMiddleware:
response = await self.extension_manager.process_response(request, response)
response.headers["Server"] = f"pyserve/{__version__}"
self._log_access(request, response, start_time)
await response(scope, receive, send)
async def _try_asgi_mount(self, scope: Scope, receive: Receive, send: Send, request: Request, start_time: float) -> bool:
for extension in self.extension_manager.extensions:
if isinstance(extension, ASGIExtension):
mount = extension.get_asgi_handler(request)
if mount is not None:
modified_scope = dict(scope)
if mount.strip_path:
modified_scope["path"] = mount.get_modified_path(request.url.path)
modified_scope["root_path"] = scope.get("root_path", "") + mount.path
logger.debug(f"Routing to ASGI mount '{mount.name}': " f"{request.url.path} -> {modified_scope['path']}")
try:
response_started = False
status_code = 0
async def send_wrapper(message: Dict[str, Any]) -> None:
nonlocal response_started, status_code
if message["type"] == "http.response.start":
response_started = True
status_code = message.get("status", 0)
await send(message)
await mount.app(modified_scope, receive, send_wrapper)
process_time = round((time.time() - start_time) * 1000, 2)
self.access_logger.info(
"ASGI request",
client_ip=request.client.host if request.client else "unknown",
method=request.method,
path=str(request.url.path),
mount=mount.name,
status_code=status_code,
process_time_ms=process_time,
user_agent=request.headers.get("user-agent", ""),
)
return True
except Exception as e:
logger.error(f"Error in ASGI mount '{mount.name}': {e}")
error_response = PlainTextResponse("500 Internal Server Error", status_code=500)
await error_response(scope, receive, send)
return True
return False
def _log_access(self, request: Request, response: Response, start_time: float) -> None:
client_ip = request.client.host if request.client else "unknown"
method = request.method
path = str(request.url.path)
@@ -55,17 +110,16 @@ class PyServeMiddleware:
path=path,
status_code=status_code,
process_time_ms=process_time,
user_agent=request.headers.get("user-agent", "")
user_agent=request.headers.get("user-agent", ""),
)
await response(scope, receive, send)
class PyServeServer:
def __init__(self, config: Config):
self.config = config
self.extension_manager = ExtensionManager()
self.app: Optional[Starlette] = None
self._async_extensions_loaded = False
self._setup_logging()
self._load_extensions()
self._create_app()
@@ -80,30 +134,39 @@ class PyServeServer:
if ext_config.type == "routing":
config.setdefault("default_proxy_timeout", self.config.server.proxy_timeout)
self.extension_manager.load_extension(
ext_config.type,
config
)
if ext_config.type == "process_orchestration":
continue
self.extension_manager.load_extension(ext_config.type, config)
async def _load_async_extensions(self) -> None:
if self._async_extensions_loaded:
return
for ext_config in self.config.extensions:
if ext_config.type == "process_orchestration":
config = ext_config.config.copy()
await self.extension_manager.load_extension_async(ext_config.type, config)
self._async_extensions_loaded = True
def _create_app(self) -> None:
from contextlib import asynccontextmanager
from typing import AsyncIterator
@asynccontextmanager
async def lifespan(app: Starlette) -> AsyncIterator[None]:
await self._load_async_extensions()
logger.info("Async extensions loaded")
yield
routes = [
Route("/health", self._health_check, methods=["GET"]),
Route("/metrics", self._metrics, methods=["GET"]),
Route(
"/{path:path}",
self._catch_all,
methods=[
"GET",
"POST",
"PUT",
"DELETE",
"PATCH",
"OPTIONS"
]
),
Route("/{path:path}", self._catch_all, methods=["GET", "POST", "PUT", "DELETE", "PATCH", "OPTIONS"]),
]
self.app = Starlette(routes=routes)
self.app = Starlette(routes=routes, lifespan=lifespan)
self.app.add_middleware(PyServeMiddleware, extension_manager=self.extension_manager)
async def _health_check(self, request: Request) -> Response:
@@ -113,19 +176,16 @@ class PyServeServer:
metrics = {}
for extension in self.extension_manager.extensions:
if hasattr(extension, 'get_metrics'):
if hasattr(extension, "get_metrics"):
try:
ext_metrics = getattr(extension, 'get_metrics')()
ext_metrics = getattr(extension, "get_metrics")()
metrics.update(ext_metrics)
except Exception as e:
logger.error("Error getting metrics from extension",
extension=type(extension).__name__, error=str(e))
logger.error("Error getting metrics from extension", extension=type(extension).__name__, error=str(e))
import json
return Response(
json.dumps(metrics, ensure_ascii=False, indent=2),
media_type="application/json"
)
return Response(json.dumps(metrics, ensure_ascii=False, indent=2), media_type="application/json")
async def _catch_all(self, request: Request) -> Response:
return PlainTextResponse("404 Not Found", status_code=404)
@@ -144,10 +204,7 @@ class PyServeServer:
try:
context = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
context.load_cert_chain(
self.config.ssl.cert_file,
self.config.ssl.key_file
)
context.load_cert_chain(self.config.ssl.cert_file, self.config.ssl.key_file)
logger.info("SSL context created successfully")
return context
except Exception as e:
@@ -172,20 +229,17 @@ class PyServeServer:
}
if ssl_context:
uvicorn_config.update({
"ssl_keyfile": self.config.ssl.key_file,
"ssl_certfile": self.config.ssl.cert_file,
})
uvicorn_config.update(
{
"ssl_keyfile": self.config.ssl.key_file,
"ssl_certfile": self.config.ssl.cert_file,
}
)
protocol = "https"
else:
protocol = "http"
logger.info(
"Starting PyServe server",
protocol=protocol,
host=self.config.server.host,
port=self.config.server.port
)
logger.info("Starting PyServe server", protocol=protocol, host=self.config.server.host, port=self.config.server.port)
try:
assert self.app is not None, "App not initialized"
@@ -241,6 +295,7 @@ class PyServeServer:
self.extension_manager.cleanup()
from .logging_utils import shutdown_logging
shutdown_logging()
logger.info("Server stopped")
@@ -252,13 +307,12 @@ class PyServeServer:
metrics = {"server_status": "running"}
for extension in self.extension_manager.extensions:
if hasattr(extension, 'get_metrics'):
if hasattr(extension, "get_metrics"):
try:
ext_metrics = getattr(extension, 'get_metrics')()
ext_metrics = getattr(extension, "get_metrics")()
metrics.update(ext_metrics)
except Exception as e:
logger.error("Error getting metrics from extension",
extension=type(extension).__name__, error=str(e))
logger.error("Error getting metrics from extension", extension=type(extension).__name__, error=str(e))
return metrics
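The `_try_asgi_mount` change above re-roots the ASGI scope before delegating to the mounted app: `path` is stripped to be relative to the mount point while `root_path` grows by the mount prefix, as the ASGI spec expects. A minimal standalone sketch of that rewrite (the function name `rewrite_scope` and its signature are illustrative, not pyserve API):

```python
from typing import Any, Dict


def rewrite_scope(scope: Dict[str, Any], mount_path: str, strip_path: bool = True) -> Dict[str, Any]:
    """Copy an ASGI scope, re-rooting it at mount_path (illustrative helper)."""
    modified = dict(scope)
    if strip_path:
        # The path becomes relative to the mount point; an exact match maps to "/"
        modified["path"] = scope["path"][len(mount_path):] or "/"
        # root_path grows so the mounted app can still build absolute URLs
        modified["root_path"] = scope.get("root_path", "") + mount_path
    return modified
```

With this shape, a request for `/api/v1/users` mounted at `/api/v1` reaches the sub-app as `path="/users"`, `root_path="/api/v1"`.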

scripts/build_cython.py (new file, 72 lines)
@@ -0,0 +1,72 @@
"""
Build script for Cython extensions.
Usage:
python scripts/build_cython.py build_ext --inplace
Or via make:
make build-cython
"""
import os
import sys
from pathlib import Path
def build_extensions():
try:
from Cython.Build import cythonize
except ImportError:
print("Cython not installed. Skipping Cython build.")
print("Install with: pip install cython")
return False
try:
from setuptools import Extension
from setuptools.dist import Distribution
from setuptools.command.build_ext import build_ext
except ImportError:
print("setuptools not installed. Skipping Cython build.")
print("Install with: pip install setuptools")
return False
extensions = [
Extension(
"pyserve._path_matcher",
sources=["pyserve/_path_matcher.pyx"],
extra_compile_args=["-O3", "-ffast-math"],
define_macros=[("NPY_NO_DEPRECATED_API", "NPY_1_7_API_VERSION")],
),
]
ext_modules = cythonize(
extensions,
compiler_directives={
"language_level": "3",
"boundscheck": False,
"wraparound": False,
"cdivision": True,
"embedsignature": True,
},
annotate=True,
)
dist = Distribution({"ext_modules": ext_modules})
dist.package_dir = {"": "."}
cmd = build_ext(dist)
cmd.ensure_finalized()
cmd.inplace = True
cmd.run()
print("\nCython extensions built successfully!")
print(" - pyserve/_path_matcher" + (".pyd" if sys.platform == "win32" else ".so"))
return True
if __name__ == "__main__":
project_root = Path(__file__).parent.parent
os.chdir(project_root)
success = build_extensions()
sys.exit(0 if success else 1)
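The build script is deliberately optional: `tests/test_path_matcher.py` imports a `CYTHON_AVAILABLE` flag, which points at the usual try/except import pattern for optional compiled extensions. A hedged sketch of that pattern (`match_prefix` is an illustrative name; the actual `pyserve/path_matcher.py` wiring may differ):

```python
# Optional-acceleration pattern: prefer the compiled extension, fall back
# to a pure-Python implementation with identical semantics when it is
# not built.
try:
    from pyserve._path_matcher import match_prefix  # built by build_cython.py

    CYTHON_AVAILABLE = True
except ImportError:
    CYTHON_AVAILABLE = False

    def match_prefix(path: str, prefix: str) -> bool:
        # Same contract as the path-matcher tests: "/api" matches "/api"
        # and "/api/users", but not "/api-v2"
        return path == prefix or path.startswith(prefix + "/")
```

Callers import from the wrapper module only, so the compiled `.so`/`.pyd` is a pure performance add-on and never a hard dependency.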

tests/test_asgi_mount.py (new file, 897 lines)
@@ -0,0 +1,897 @@
"""
Integration tests for ASGI mount functionality.
These tests start PyServe with mounted ASGI applications and verify
that requests are correctly routed to the mounted apps.
"""
import asyncio
import pytest
import httpx
import socket
from typing import Dict, Any
import uvicorn
from starlette.applications import Starlette
from starlette.requests import Request
from starlette.responses import JSONResponse, PlainTextResponse, Response
from starlette.routing import Route
from pyserve.config import Config, ServerConfig, HttpConfig, LoggingConfig, ExtensionConfig
from pyserve.server import PyServeServer
from pyserve.asgi_mount import (
ASGIAppLoader,
MountedApp,
ASGIMountManager,
)
def get_free_port() -> int:
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
# SO_REUSEADDR must be set before bind() to take effect
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
s.bind(("", 0))
return s.getsockname()[1]
# ============== Test ASGI Applications ==============
def create_api_v1_app() -> Starlette:
"""Create a test API v1 application."""
async def root(request: Request) -> JSONResponse:
return JSONResponse({
"app": "api-v1",
"message": "Welcome to API v1",
"path": request.url.path,
"root_path": request.scope.get("root_path", ""),
})
async def health(request: Request) -> JSONResponse:
return JSONResponse({"status": "healthy", "app": "api-v1"})
async def users_list(request: Request) -> JSONResponse:
return JSONResponse({
"users": [
{"id": 1, "name": "Alice"},
{"id": 2, "name": "Bob"},
],
"app": "api-v1",
})
async def user_detail(request: Request) -> JSONResponse:
user_id = request.path_params.get("user_id")
return JSONResponse({
"user": {"id": user_id, "name": f"User {user_id}"},
"app": "api-v1",
})
async def create_user(request: Request) -> JSONResponse:
body = await request.json()
return JSONResponse({
"created": body,
"app": "api-v1",
}, status_code=201)
async def echo(request: Request) -> Response:
body = await request.body()
return Response(
content=body,
media_type=request.headers.get("content-type", "text/plain"),
)
routes = [
Route("/", root, methods=["GET"]),
Route("/health", health, methods=["GET"]),
Route("/users", users_list, methods=["GET"]),
Route("/users", create_user, methods=["POST"]),
Route("/users/{user_id:int}", user_detail, methods=["GET"]),
Route("/echo", echo, methods=["POST"]),
]
return Starlette(routes=routes)
def create_api_v2_app() -> Starlette:
"""Create a test API v2 application with different responses."""
async def root(request: Request) -> JSONResponse:
return JSONResponse({
"app": "api-v2",
"message": "Welcome to API v2 - Enhanced!",
"version": "2.0.0",
"path": request.url.path,
})
async def health(request: Request) -> JSONResponse:
return JSONResponse({
"status": "healthy",
"app": "api-v2",
"version": "2.0.0",
})
async def users_list(request: Request) -> JSONResponse:
return JSONResponse({
"data": {
"users": [
{"id": 1, "name": "Alice", "email": "alice@test.com"},
{"id": 2, "name": "Bob", "email": "bob@test.com"},
],
},
"meta": {"total": 2, "page": 1},
"app": "api-v2",
})
routes = [
Route("/", root, methods=["GET"]),
Route("/health", health, methods=["GET"]),
Route("/users", users_list, methods=["GET"]),
]
return Starlette(routes=routes)
def create_admin_app() -> Starlette:
"""Create a test admin application."""
async def dashboard(request: Request) -> JSONResponse:
return JSONResponse({
"app": "admin",
"page": "dashboard",
"path": request.url.path,
})
async def settings(request: Request) -> JSONResponse:
return JSONResponse({
"app": "admin",
"page": "settings",
"config": {"debug": True, "theme": "dark"},
})
async def stats(request: Request) -> JSONResponse:
return JSONResponse({
"app": "admin",
"stats": {
"requests": 1000,
"errors": 5,
"uptime": "24h",
},
})
routes = [
Route("/", dashboard, methods=["GET"]),
Route("/settings", settings, methods=["GET"]),
Route("/stats", stats, methods=["GET"]),
]
return Starlette(routes=routes)
def create_websocket_test_app() -> Starlette:
"""Create a test app that also has websocket endpoint info."""
async def root(request: Request) -> JSONResponse:
return JSONResponse({
"app": "ws-app",
"message": "WebSocket test app",
"ws_endpoint": "/ws",
})
async def info(request: Request) -> JSONResponse:
return JSONResponse({
"app": "ws-app",
"supports": ["http", "websocket"],
})
routes = [
Route("/", root, methods=["GET"]),
Route("/info", info, methods=["GET"]),
]
return Starlette(routes=routes)
# ============== PyServe Test Server ==============
class PyServeTestServer:
"""Test server wrapper for PyServe with ASGI mounts."""
def __init__(self, config: Config):
self.config = config
self.server = PyServeServer(config)
self._server_task = None
async def start(self) -> None:
assert self.server.app is not None, "Server app not initialized"
config = uvicorn.Config(
app=self.server.app,
host=self.config.server.host,
port=self.config.server.port,
log_level="critical",
access_log=False,
)
server = uvicorn.Server(config)
self._server_task = asyncio.create_task(server.serve())
# Wait for server to be ready
for _ in range(50):
try:
async with httpx.AsyncClient() as client:
await client.get(f"http://127.0.0.1:{self.config.server.port}/health")
return
except httpx.ConnectError:
await asyncio.sleep(0.1)
raise RuntimeError(f"PyServe server failed to start on port {self.config.server.port}")
async def stop(self) -> None:
if self._server_task:
self._server_task.cancel()
try:
await self._server_task
except asyncio.CancelledError:
pass
# ============== Fixtures ==============
@pytest.fixture
def pyserve_port() -> int:
"""Get a free port for PyServe."""
return get_free_port()
@pytest.fixture
def api_v1_app() -> Starlette:
"""Create API v1 test app."""
return create_api_v1_app()
@pytest.fixture
def api_v2_app() -> Starlette:
"""Create API v2 test app."""
return create_api_v2_app()
@pytest.fixture
def admin_app() -> Starlette:
"""Create admin test app."""
return create_admin_app()
# ============== Unit Tests ==============
class TestMountedApp:
"""Unit tests for MountedApp class."""
def test_matches_exact_path(self, api_v1_app):
"""Test exact path matching."""
mounted = MountedApp("/api", api_v1_app)
assert mounted.matches("/api") is True
assert mounted.matches("/api/users") is True
assert mounted.matches("/api/users/123") is True
assert mounted.matches("/other") is False
assert mounted.matches("/apiv2") is False
def test_matches_empty_path(self, api_v1_app):
"""Test root mount matching."""
mounted = MountedApp("", api_v1_app)
assert mounted.matches("/") is True
assert mounted.matches("/anything") is True
assert mounted.matches("/nested/path") is True
def test_get_modified_path_with_strip(self, api_v1_app):
"""Test path modification with strip_path=True."""
mounted = MountedApp("/api/v1", api_v1_app, strip_path=True)
assert mounted.get_modified_path("/api/v1") == "/"
assert mounted.get_modified_path("/api/v1/users") == "/users"
assert mounted.get_modified_path("/api/v1/users/123") == "/users/123"
def test_get_modified_path_without_strip(self, api_v1_app):
"""Test path modification with strip_path=False."""
mounted = MountedApp("/api/v1", api_v1_app, strip_path=False)
assert mounted.get_modified_path("/api/v1/users") == "/api/v1/users"
class TestASGIMountManager:
"""Unit tests for ASGIMountManager class."""
def test_mount_direct_app(self, api_v1_app):
"""Test mounting a direct ASGI app."""
manager = ASGIMountManager()
result = manager.mount(
path="/api",
app=api_v1_app,
name="api-v1"
)
assert result is True
assert len(manager.mounts) == 1
assert manager.mounts[0].name == "api-v1"
assert manager.mounts[0].path == "/api"
def test_mount_requires_app_or_path(self):
"""Test that mount requires either app or app_path."""
manager = ASGIMountManager()
result = manager.mount(path="/test")
assert result is False
assert len(manager.mounts) == 0
def test_mount_ordering_by_path_length(self, api_v1_app, api_v2_app, admin_app):
"""Test that mounts are ordered by path length (longest first)."""
manager = ASGIMountManager()
manager.mount(path="/api", app=api_v1_app, name="short")
manager.mount(path="/api/v1", app=api_v2_app, name="medium")
manager.mount(path="/api/v1/admin", app=admin_app, name="long")
# Verify ordering
assert manager.mounts[0].name == "long"
assert manager.mounts[1].name == "medium"
assert manager.mounts[2].name == "short"
# Should match the longest prefix first
mount = manager.get_mount("/api/v1/admin/dashboard")
assert mount is not None
assert mount.name == "long"
mount = manager.get_mount("/api/v1/users")
assert mount is not None
assert mount.name == "medium"
mount = manager.get_mount("/api/other")
assert mount is not None
assert mount.name == "short"
def test_unmount(self, api_v1_app):
"""Test unmounting an application."""
manager = ASGIMountManager()
manager.mount(path="/api", app=api_v1_app)
assert len(manager.mounts) == 1
result = manager.unmount("/api")
assert result is True
assert len(manager.mounts) == 0
def test_list_mounts(self, api_v1_app, api_v2_app):
"""Test listing all mounts."""
manager = ASGIMountManager()
manager.mount(path="/api/v1", app=api_v1_app, name="api-v1")
manager.mount(path="/api/v2", app=api_v2_app, name="api-v2")
mounts_info = manager.list_mounts()
assert len(mounts_info) == 2
names = {m["name"] for m in mounts_info}
assert "api-v1" in names
assert "api-v2" in names
class TestASGIAppLoader:
"""Unit tests for ASGIAppLoader class."""
def test_load_app_invalid_module(self):
"""Test loading app with invalid module path."""
loader = ASGIAppLoader()
app = loader.load_app("nonexistent.module:app")
assert app is None
def test_load_app_invalid_attribute(self):
"""Test loading app with invalid attribute."""
loader = ASGIAppLoader()
app = loader.load_app("starlette.applications:nonexistent")
assert app is None
def test_get_app_cached(self, api_v1_app):
"""Test getting a cached app."""
loader = ASGIAppLoader()
loader._apps["test:app"] = api_v1_app
app = loader.get_app("test:app")
assert app is api_v1_app
app = loader.get_app("nonexistent:app")
assert app is None
# ============== Integration Tests ==============
class TestASGIMountIntegration:
"""Integration tests for ASGIMountManager request handling."""
@pytest.mark.asyncio
async def test_handle_request_to_mounted_app(self, api_v1_app):
"""Test handling a request through mounted app."""
manager = ASGIMountManager()
manager.mount(path="/api", app=api_v1_app)
scope = {
"type": "http",
"asgi": {"version": "3.0"},
"http_version": "1.1",
"method": "GET",
"path": "/api/health",
"query_string": b"",
"headers": [],
"server": ("127.0.0.1", 8000),
}
received_messages = []
async def receive():
return {"type": "http.request", "body": b""}
async def send(message):
received_messages.append(message)
result = await manager.handle_request(scope, receive, send)
assert result is True
assert len(received_messages) == 2 # response.start + response.body
assert received_messages[0]["type"] == "http.response.start"
assert received_messages[0]["status"] == 200
@pytest.mark.asyncio
async def test_handle_request_no_match(self):
"""Test handling request with no matching mount."""
manager = ASGIMountManager()
scope = {
"type": "http",
"path": "/unmatched",
}
async def receive():
return {}
async def send(message):
pass
result = await manager.handle_request(scope, receive, send)
assert result is False
@pytest.mark.asyncio
async def test_handle_non_http_request(self, api_v1_app):
"""Test that non-HTTP requests are not handled."""
manager = ASGIMountManager()
manager.mount(path="/api", app=api_v1_app)
scope = {
"type": "websocket",
"path": "/api/ws",
}
async def receive():
return {}
async def send(message):
pass
result = await manager.handle_request(scope, receive, send)
assert result is False
@pytest.mark.asyncio
async def test_path_stripping(self, api_v1_app):
"""Test that mount path is correctly stripped from request."""
manager = ASGIMountManager()
manager.mount(path="/api/v1", app=api_v1_app, strip_path=True)
scope = {
"type": "http",
"asgi": {"version": "3.0"},
"http_version": "1.1",
"method": "GET",
"path": "/api/v1/users",
"query_string": b"",
"headers": [],
"server": ("127.0.0.1", 8000),
}
received_messages = []
async def receive():
return {"type": "http.request", "body": b""}
async def send(message):
received_messages.append(message)
result = await manager.handle_request(scope, receive, send)
assert result is True
assert received_messages[0]["status"] == 200
# ============== Full Server Integration Tests ==============
class TestPyServeWithASGIMounts:
"""Full integration tests with PyServe server and ASGI mounts."""
@pytest.mark.asyncio
async def test_basic_asgi_mount(self, pyserve_port, api_v1_app):
"""Test basic ASGI app mounting through PyServe."""
config = Config(
server=ServerConfig(host="127.0.0.1", port=pyserve_port),
http=HttpConfig(static_dir="./static", templates_dir="./templates"),
logging=LoggingConfig(level="ERROR", console_output=False),
extensions=[],
)
# Create server and manually add ASGI extension
server = PyServeServer(config)
# Add ASGI mount directly via extension manager
from pyserve.extensions import ASGIExtension
asgi_ext = ASGIExtension({"mounts": []})
asgi_ext.mount_manager.mount(path="/api", app=api_v1_app, name="api-v1")
server.extension_manager.extensions.insert(0, asgi_ext)
test_server = PyServeTestServer.__new__(PyServeTestServer)
test_server.config = config
test_server.server = server
test_server._server_task = None
try:
await test_server.start()
async with httpx.AsyncClient() as client:
# Test root endpoint of mounted app
response = await client.get(f"http://127.0.0.1:{pyserve_port}/api/")
assert response.status_code == 200
data = response.json()
assert data["app"] == "api-v1"
assert data["message"] == "Welcome to API v1"
# Test health endpoint
response = await client.get(f"http://127.0.0.1:{pyserve_port}/api/health")
assert response.status_code == 200
data = response.json()
assert data["status"] == "healthy"
assert data["app"] == "api-v1"
# Test users list
response = await client.get(f"http://127.0.0.1:{pyserve_port}/api/users")
assert response.status_code == 200
data = response.json()
assert "users" in data
assert len(data["users"]) == 2
# Test user detail
response = await client.get(f"http://127.0.0.1:{pyserve_port}/api/users/1")
assert response.status_code == 200
data = response.json()
assert data["user"]["id"] == 1
finally:
await test_server.stop()
@pytest.mark.asyncio
async def test_multiple_asgi_mounts(self, pyserve_port, api_v1_app, api_v2_app, admin_app):
"""Test multiple ASGI apps mounted at different paths."""
config = Config(
server=ServerConfig(host="127.0.0.1", port=pyserve_port),
http=HttpConfig(static_dir="./static", templates_dir="./templates"),
logging=LoggingConfig(level="ERROR", console_output=False),
extensions=[],
)
server = PyServeServer(config)
from pyserve.extensions import ASGIExtension
asgi_ext = ASGIExtension({"mounts": []})
asgi_ext.mount_manager.mount(path="/api/v1", app=api_v1_app, name="api-v1")
asgi_ext.mount_manager.mount(path="/api/v2", app=api_v2_app, name="api-v2")
asgi_ext.mount_manager.mount(path="/admin", app=admin_app, name="admin")
server.extension_manager.extensions.insert(0, asgi_ext)
test_server = PyServeTestServer.__new__(PyServeTestServer)
test_server.config = config
test_server.server = server
test_server._server_task = None
try:
await test_server.start()
async with httpx.AsyncClient() as client:
# Test API v1
response = await client.get(f"http://127.0.0.1:{pyserve_port}/api/v1/")
assert response.status_code == 200
assert response.json()["app"] == "api-v1"
# Test API v2
response = await client.get(f"http://127.0.0.1:{pyserve_port}/api/v2/")
assert response.status_code == 200
data = response.json()
assert data["app"] == "api-v2"
assert data["version"] == "2.0.0"
# Test API v2 users (different response format)
response = await client.get(f"http://127.0.0.1:{pyserve_port}/api/v2/users")
assert response.status_code == 200
data = response.json()
assert "data" in data
assert "meta" in data
# Test Admin
response = await client.get(f"http://127.0.0.1:{pyserve_port}/admin/")
assert response.status_code == 200
assert response.json()["app"] == "admin"
assert response.json()["page"] == "dashboard"
# Test Admin settings
response = await client.get(f"http://127.0.0.1:{pyserve_port}/admin/settings")
assert response.status_code == 200
data = response.json()
assert data["config"]["theme"] == "dark"
finally:
await test_server.stop()
@pytest.mark.asyncio
async def test_asgi_mount_post_request(self, pyserve_port, api_v1_app):
"""Test POST requests to mounted ASGI app."""
config = Config(
server=ServerConfig(host="127.0.0.1", port=pyserve_port),
http=HttpConfig(static_dir="./static", templates_dir="./templates"),
logging=LoggingConfig(level="ERROR", console_output=False),
extensions=[],
)
server = PyServeServer(config)
from pyserve.extensions import ASGIExtension
asgi_ext = ASGIExtension({"mounts": []})
asgi_ext.mount_manager.mount(path="/api", app=api_v1_app, name="api")
server.extension_manager.extensions.insert(0, asgi_ext)
test_server = PyServeTestServer.__new__(PyServeTestServer)
test_server.config = config
test_server.server = server
test_server._server_task = None
try:
await test_server.start()
async with httpx.AsyncClient() as client:
# Test POST to create user
response = await client.post(
f"http://127.0.0.1:{pyserve_port}/api/users",
json={"name": "Charlie", "email": "charlie@test.com"}
)
assert response.status_code == 201
data = response.json()
assert data["created"]["name"] == "Charlie"
# Test echo endpoint
response = await client.post(
f"http://127.0.0.1:{pyserve_port}/api/echo",
content=b"Hello, World!",
headers={"Content-Type": "text/plain"}
)
assert response.status_code == 200
assert response.content == b"Hello, World!"
finally:
await test_server.stop()
@pytest.mark.asyncio
async def test_asgi_mount_with_routing_extension(self, pyserve_port, api_v1_app):
"""Test ASGI mounts working alongside routing extension."""
config = Config(
server=ServerConfig(host="127.0.0.1", port=pyserve_port),
http=HttpConfig(static_dir="./static", templates_dir="./templates"),
logging=LoggingConfig(level="ERROR", console_output=False),
extensions=[
ExtensionConfig(
type="routing",
config={
"regex_locations": {
"=/health": {"return": "200 PyServe OK"},
"=/status": {"return": "200 Server Running"},
}
}
)
],
)
server = PyServeServer(config)
# Add ASGI extension BEFORE routing extension
from pyserve.extensions import ASGIExtension
asgi_ext = ASGIExtension({"mounts": []})
asgi_ext.mount_manager.mount(path="/api", app=api_v1_app, name="api")
server.extension_manager.extensions.insert(0, asgi_ext)
test_server = PyServeTestServer.__new__(PyServeTestServer)
test_server.config = config
test_server.server = server
test_server._server_task = None
try:
await test_server.start()
async with httpx.AsyncClient() as client:
# Test ASGI mounted app
response = await client.get(f"http://127.0.0.1:{pyserve_port}/api/users")
assert response.status_code == 200
assert response.json()["app"] == "api-v1"
# Test routing extension endpoints
response = await client.get(f"http://127.0.0.1:{pyserve_port}/status")
assert response.status_code == 200
assert "Server Running" in response.text
finally:
await test_server.stop()
@pytest.mark.asyncio
async def test_asgi_mount_path_not_stripped(self, pyserve_port):
"""Test ASGI mount with strip_path=False."""
# Create an app that expects full path
async def handler(request: Request) -> JSONResponse:
return JSONResponse({
"full_path": request.url.path,
"received": True,
})
app = Starlette(routes=[
Route("/mounted/data", handler, methods=["GET"]),
])
config = Config(
server=ServerConfig(host="127.0.0.1", port=pyserve_port),
http=HttpConfig(static_dir="./static", templates_dir="./templates"),
logging=LoggingConfig(level="ERROR", console_output=False),
extensions=[],
)
server = PyServeServer(config)
from pyserve.extensions import ASGIExtension
asgi_ext = ASGIExtension({"mounts": []})
asgi_ext.mount_manager.mount(
path="/mounted",
app=app,
name="full-path-app",
strip_path=False
)
server.extension_manager.extensions.insert(0, asgi_ext)
test_server = PyServeTestServer.__new__(PyServeTestServer)
test_server.config = config
test_server.server = server
test_server._server_task = None
try:
await test_server.start()
async with httpx.AsyncClient() as client:
response = await client.get(f"http://127.0.0.1:{pyserve_port}/mounted/data")
assert response.status_code == 200
data = response.json()
assert data["full_path"] == "/mounted/data"
finally:
await test_server.stop()
@pytest.mark.asyncio
async def test_asgi_mount_metrics(self, pyserve_port, api_v1_app):
"""Test that ASGI extension reports metrics correctly."""
config = Config(
server=ServerConfig(host="127.0.0.1", port=pyserve_port),
http=HttpConfig(static_dir="./static", templates_dir="./templates"),
logging=LoggingConfig(level="ERROR", console_output=False),
extensions=[],
)
server = PyServeServer(config)
from pyserve.extensions import ASGIExtension
asgi_ext = ASGIExtension({"mounts": []})
asgi_ext.mount_manager.mount(path="/api/v1", app=api_v1_app, name="api-v1")
asgi_ext.mount_manager.mount(path="/api/v2", app=create_api_v2_app(), name="api-v2")
server.extension_manager.extensions.insert(0, asgi_ext)
# Check metrics
metrics = asgi_ext.get_metrics()
assert metrics["asgi_mount_count"] == 2
assert len(metrics["asgi_mounts"]) == 2
mount_names = {m["name"] for m in metrics["asgi_mounts"]}
assert "api-v1" in mount_names
assert "api-v2" in mount_names
@pytest.mark.asyncio
async def test_asgi_mount_error_handling(self, pyserve_port):
"""Test error handling when mounted app raises exception."""
async def failing_handler(request: Request) -> JSONResponse:
raise ValueError("Intentional error for testing")
async def working_handler(request: Request) -> JSONResponse:
return JSONResponse({"status": "ok"})
app = Starlette(routes=[
Route("/fail", failing_handler, methods=["GET"]),
Route("/ok", working_handler, methods=["GET"]),
])
config = Config(
server=ServerConfig(host="127.0.0.1", port=pyserve_port),
http=HttpConfig(static_dir="./static", templates_dir="./templates"),
logging=LoggingConfig(level="ERROR", console_output=False),
extensions=[],
)
server = PyServeServer(config)
from pyserve.extensions import ASGIExtension
asgi_ext = ASGIExtension({"mounts": []})
asgi_ext.mount_manager.mount(path="/test", app=app, name="test-app")
server.extension_manager.extensions.insert(0, asgi_ext)
test_server = PyServeTestServer.__new__(PyServeTestServer)
test_server.config = config
test_server.server = server
test_server._server_task = None
try:
await test_server.start()
async with httpx.AsyncClient() as client:
# Working endpoint should work
response = await client.get(f"http://127.0.0.1:{pyserve_port}/test/ok")
assert response.status_code == 200
# Failing endpoint should return 500
response = await client.get(f"http://127.0.0.1:{pyserve_port}/test/fail")
assert response.status_code == 500
finally:
await test_server.stop()
@pytest.mark.asyncio
async def test_concurrent_requests_to_mounted_apps(self, pyserve_port, api_v1_app, api_v2_app):
"""Test concurrent requests to different mounted apps."""
config = Config(
server=ServerConfig(host="127.0.0.1", port=pyserve_port),
http=HttpConfig(static_dir="./static", templates_dir="./templates"),
logging=LoggingConfig(level="ERROR", console_output=False),
extensions=[],
)
server = PyServeServer(config)
from pyserve.extensions import ASGIExtension
asgi_ext = ASGIExtension({"mounts": []})
asgi_ext.mount_manager.mount(path="/v1", app=api_v1_app, name="v1")
asgi_ext.mount_manager.mount(path="/v2", app=api_v2_app, name="v2")
server.extension_manager.extensions.insert(0, asgi_ext)
test_server = PyServeTestServer.__new__(PyServeTestServer)
test_server.config = config
test_server.server = server
test_server._server_task = None
try:
await test_server.start()
async with httpx.AsyncClient() as client:
# Send concurrent requests
tasks = [
client.get(f"http://127.0.0.1:{pyserve_port}/v1/health"),
client.get(f"http://127.0.0.1:{pyserve_port}/v2/health"),
client.get(f"http://127.0.0.1:{pyserve_port}/v1/users"),
client.get(f"http://127.0.0.1:{pyserve_port}/v2/users"),
client.get(f"http://127.0.0.1:{pyserve_port}/v1/"),
client.get(f"http://127.0.0.1:{pyserve_port}/v2/"),
]
responses = await asyncio.gather(*tasks)
# All requests should succeed
for response in responses:
assert response.status_code == 200
# Verify correct app responded
assert responses[0].json()["app"] == "api-v1"
assert responses[1].json()["app"] == "api-v2"
finally:
await test_server.stop()
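`test_mount_ordering_by_path_length` above encodes the core dispatch rule: order mounts by descending prefix length so the most specific mount wins. A standalone sketch of that rule (function and variable names are illustrative, not the pyserve API):

```python
from typing import List, Optional, Tuple


def pick_mount(mounts: List[Tuple[str, str]], path: str) -> Optional[str]:
    """Return the name of the most specific mount whose prefix matches path."""
    # Longest prefix first, so "/api/v1/admin" beats "/api/v1" beats "/api"
    for prefix, name in sorted(mounts, key=lambda m: len(m[0]), reverse=True):
        if path == prefix or path.startswith(prefix + "/"):
            return name
    return None


mounts = [("/api", "short"), ("/api/v1", "medium"), ("/api/v1/admin", "long")]
```

The `prefix + "/"` check is what keeps `/api` from falsely matching `/api-v2`, mirroring `test_no_false_prefix_match` in the path-matcher suite.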

tests/test_path_matcher.py (new file, 273 lines)
@@ -0,0 +1,273 @@
"""
Tests for path_matcher module.
Run with: pytest tests/test_path_matcher.py -v
"""
import pytest
from pyserve.path_matcher import (
FastMountedPath,
FastMountManager,
path_matches_prefix,
strip_path_prefix,
match_and_modify_path,
CYTHON_AVAILABLE,
)
class TestFastMountedPath:
def test_root_mount_matches_everything(self):
"""Root mount should match all paths."""
mount = FastMountedPath("")
assert mount.matches("/") is True
assert mount.matches("/api") is True
assert mount.matches("/api/users") is True
assert mount.matches("/anything/at/all") is True
def test_slash_root_mount_matches_everything(self):
"""'/' mount should match all paths."""
mount = FastMountedPath("/")
assert mount.matches("/") is True
assert mount.matches("/api") is True
assert mount.matches("/api/users") is True
def test_exact_path_match(self):
"""Exact path should match."""
mount = FastMountedPath("/api")
assert mount.matches("/api") is True
assert mount.matches("/api/") is True
assert mount.matches("/api/users") is True
def test_no_false_prefix_match(self):
"""/api should not match /api-v2."""
mount = FastMountedPath("/api")
assert mount.matches("/api-v2") is False
assert mount.matches("/api2") is False
assert mount.matches("/apiv2") is False
def test_shorter_path_no_match(self):
"""Request path shorter than mount path should not match."""
mount = FastMountedPath("/api/v1")
assert mount.matches("/api") is False
assert mount.matches("/ap") is False
assert mount.matches("/") is False
def test_trailing_slash_normalized(self):
"""Trailing slashes should be normalized."""
mount1 = FastMountedPath("/api/")
mount2 = FastMountedPath("/api")
assert mount1.path == "/api"
assert mount2.path == "/api"
assert mount1.matches("/api/users") is True
assert mount2.matches("/api/users") is True
def test_get_modified_path_strips_prefix(self):
"""Modified path should have prefix stripped."""
mount = FastMountedPath("/api")
assert mount.get_modified_path("/api") == "/"
assert mount.get_modified_path("/api/") == "/"
assert mount.get_modified_path("/api/users") == "/users"
assert mount.get_modified_path("/api/users/123") == "/users/123"
def test_get_modified_path_no_strip(self):
"""With strip_path=False, path should not be modified."""
mount = FastMountedPath("/api", strip_path=False)
assert mount.get_modified_path("/api/users") == "/api/users"
assert mount.get_modified_path("/api") == "/api"
def test_root_mount_modified_path(self):
"""Root mount should return original path."""
mount = FastMountedPath("")
assert mount.get_modified_path("/api/users") == "/api/users"
assert mount.get_modified_path("/") == "/"
def test_name_property(self):
"""Name should be set correctly."""
mount1 = FastMountedPath("/api")
mount2 = FastMountedPath("/api", name="API Mount")
assert mount1.name == "/api"
assert mount2.name == "API Mount"
def test_repr(self):
"""Repr should be informative."""
mount = FastMountedPath("/api", name="API")
assert "FastMountedPath" in repr(mount)
assert "/api" in repr(mount)
class TestFastMountManager:
def test_empty_manager(self):
"""Empty manager should return None."""
manager = FastMountManager()
assert manager.get_mount("/api") is None
assert manager.mount_count == 0
def test_add_mount(self):
"""Adding mounts should work."""
manager = FastMountManager()
mount = FastMountedPath("/api")
manager.add_mount(mount)
assert manager.mount_count == 1
assert manager.get_mount("/api/users") is mount
def test_longest_prefix_matching(self):
"""Longer prefixes should match first."""
manager = FastMountManager()
api_mount = FastMountedPath("/api", name="api")
api_v1_mount = FastMountedPath("/api/v1", name="api_v1")
api_v2_mount = FastMountedPath("/api/v2", name="api_v2")
manager.add_mount(api_mount)
manager.add_mount(api_v2_mount)
manager.add_mount(api_v1_mount)
assert manager.get_mount("/api/v1/users").name == "api_v1"
assert manager.get_mount("/api/v2/items").name == "api_v2"
assert manager.get_mount("/api/v3/other").name == "api"
assert manager.get_mount("/api").name == "api"
def test_remove_mount(self):
"""Removing mounts should work."""
manager = FastMountManager()
manager.add_mount(FastMountedPath("/api"))
manager.add_mount(FastMountedPath("/admin"))
assert manager.mount_count == 2
result = manager.remove_mount("/api")
assert result is True
assert manager.mount_count == 1
assert manager.get_mount("/api/users") is None
assert manager.get_mount("/admin/users") is not None
def test_remove_nonexistent_mount(self):
"""Removing nonexistent mount should return False."""
manager = FastMountManager()
result = manager.remove_mount("/api")
assert result is False
def test_list_mounts(self):
"""list_mounts should return mount info."""
manager = FastMountManager()
manager.add_mount(FastMountedPath("/api", name="API"))
manager.add_mount(FastMountedPath("/admin", name="Admin"))
mounts = manager.list_mounts()
assert len(mounts) == 2
assert all("path" in m and "name" in m and "strip_path" in m for m in mounts)
def test_mounts_property_returns_copy(self):
"""mounts property should return a copy."""
manager = FastMountManager()
manager.add_mount(FastMountedPath("/api"))
mounts1 = manager.mounts
mounts2 = manager.mounts
assert mounts1 is not mounts2
assert mounts1 == mounts2
class TestUtilityFunctions:
"""Tests for standalone utility functions."""
def test_path_matches_prefix_basic(self):
"""Basic prefix matching."""
assert path_matches_prefix("/api/users", "/api") is True
assert path_matches_prefix("/api", "/api") is True
assert path_matches_prefix("/api-v2", "/api") is False
assert path_matches_prefix("/ap", "/api") is False
def test_path_matches_prefix_root(self):
"""Root prefix matches everything."""
assert path_matches_prefix("/anything", "") is True
assert path_matches_prefix("/anything", "/") is True
def test_strip_path_prefix_basic(self):
"""Basic path stripping."""
assert strip_path_prefix("/api/users", "/api") == "/users"
assert strip_path_prefix("/api", "/api") == "/"
assert strip_path_prefix("/api/", "/api") == "/"
def test_strip_path_prefix_root(self):
"""Root prefix doesn't strip anything."""
assert strip_path_prefix("/api/users", "") == "/api/users"
assert strip_path_prefix("/api/users", "/") == "/api/users"
def test_match_and_modify_combined(self):
"""Combined match and modify operation."""
matches, path = match_and_modify_path("/api/users", "/api")
assert matches is True
assert path == "/users"
matches, path = match_and_modify_path("/api", "/api")
assert matches is True
assert path == "/"
matches, path = match_and_modify_path("/other", "/api")
assert matches is False
assert path is None
def test_match_and_modify_no_strip(self):
"""Combined operation with strip_path=False."""
matches, path = match_and_modify_path("/api/users", "/api", strip_path=False)
assert matches is True
assert path == "/api/users"
class TestCythonAvailability:
def test_cython_available_is_bool(self):
"""CYTHON_AVAILABLE should be a boolean."""
assert isinstance(CYTHON_AVAILABLE, bool)
def test_module_works_regardless(self):
"""Module should work whether Cython is available or not."""
mount = FastMountedPath("/test")
assert mount.matches("/test/path") is True
class TestPerformance:
def test_many_matches(self):
"""Should handle many match operations."""
mount = FastMountedPath("/api/v1/users")
for _ in range(10000):
assert mount.matches("/api/v1/users/123/posts") is True
assert mount.matches("/other/path") is False
def test_many_mounts(self):
"""Should handle many mounts."""
manager = FastMountManager()
for i in range(100):
manager.add_mount(FastMountedPath(f"/api/v{i}"))
assert manager.mount_count == 100
mount = manager.get_mount("/api/v50/users")
assert mount is not None
assert mount.path == "/api/v50"
if __name__ == "__main__":
pytest.main([__file__, "-v"])
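The semantics this test module pins down are boundary-aware prefix matching: a root mount (`""` or `"/"`) matches everything, `/api` matches `/api` and `/api/users` but never `/api-v2`, and stripping the prefix always leaves at least `/` for the sub-app. As a minimal pure-Python sketch of those rules (the function names mirror the tested API, but this is an illustration under stated assumptions, not pyserve's implementation, which may be Cython-accelerated per `CYTHON_AVAILABLE`):

```python
def path_matches_prefix(path: str, prefix: str) -> bool:
    """True if `path` falls under mount `prefix` on a segment boundary."""
    prefix = prefix.rstrip("/")
    if not prefix:                       # "" or "/" is the root mount
        return True
    if path == prefix or path == prefix + "/":
        return True                      # exact match, with or without slash
    # require a "/" boundary so "/api" matches "/api/users" but not "/api-v2"
    return path.startswith(prefix + "/")

def strip_path_prefix(path: str, prefix: str) -> str:
    """Remove the mount prefix, keeping at least "/" for the sub-app root."""
    prefix = prefix.rstrip("/")
    if not prefix:                       # root mount strips nothing
        return path
    stripped = path[len(prefix):]
    return stripped if stripped not in ("", "/") else "/"
```

Longest-prefix dispatch, as `TestFastMountManager` asserts, then reduces to checking mounts sorted by descending prefix length and returning the first match.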

File diff suppressed because it is too large

1348
tests/test_pyservectl.py Normal file

File diff suppressed because it is too large

1327
tests/test_routing.py Normal file

File diff suppressed because it is too large