commit 48c5d18b71
@@ -0,0 +1,7 @@
/models/
/lora/
/.venv/
.idea
/out
__pycache__
run_web.sh
@@ -0,0 +1,341 @@
# Stable Diffusion Python WebUI

> This project is based on a script from https://github.com/imanslab/poc-uncensored-stable-diffusion/

A local web interface for generating images with Stable Diffusion, designed for fast iteration with the model kept loaded in memory.

This is basically a very simplified reimplementation of Automatic1111.

---

**Warning:** A lot of this project was written with Claude Code, because I just wanted a working SD web UI and Automatic1111 refused to work for me. The whole thing was made in two days, and it is not aiming to be a maintainable project. It's best to run Claude, tell it to read this README, and have it make the changes you want.

I have an AMD RX 6600 XT, which is not supported by ROCm but works fine with `HSA_OVERRIDE_GFX_VERSION=10.3.0` in the environment. That essentially lies about which GPU you have, and somehow a lie is sufficient here. Welcome to ML Python.
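
For example (a sketch; `10.3.0` is the value for RDNA 2 cards like mine - other cards may need a different value, or none at all):

```bash
export HSA_OVERRIDE_GFX_VERSION=10.3.0
python app.py
```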

This will work best with NVIDIA, but you will need some adjustments, as I optimized it for my AMD card.

---

## Legal Disclaimer

THIS GENERATOR IS FOR LOCAL USE ONLY AND MUST NOT BE EXPOSED TO THE INTERNET.

- **No Responsibility**: The creators of this project bear no responsibility for how the software is used.
- **An uncensored model has no guardrails**.
- **You are responsible for anything you do with the tool and the model you downloaded**, just as you are responsible for anything you do with any dangerous object.
- **Publishing anything this model generates is the same as publishing it yourself**.
- Ensure compliance with all applicable laws and regulations in your jurisdiction.
- If you unintentionally generate something illegal, delete it immediately and permanently (not to the recycle bin).
- All pictures are saved in the `out/` folder.
- Clear the browser cache and any other relevant caches.

### Safety checker

Some models have a built-in safety (NSFW) checker. You can try enabling it, but in my experience it blacks out even totally safe results.

By default, the safety checker is disabled. To enable it, remove `, safety_checker=None` from the pipeline call,

e.g. in `StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16, safety_checker=None)`
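
A minimal sketch of both variants (the model path is illustrative):

```python
from diffusers import StableDiffusionPipeline
import torch

model_id = "./models/realistic-vision-v51"

# Safety checker disabled (this project's default)
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16, safety_checker=None)

# Safety checker enabled (the model's bundled checker is used)
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
```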

## Prerequisites

- **Python 3.11+**
- Compatible GPU (the default config is for AMD with ROCm) - ROCm must be installed system-wide

Optional:
- **Git**
- **Git LFS** (Large File Storage) - required for downloading models. Install from [git-lfs.github.com](https://git-lfs.github.com)

## Setup

### Create directories

- `models`
- `lora` (if using)
- `out`

These are in `.gitignore`.
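
For example:

```bash
mkdir -p models lora out
```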

### Install Python dependencies

```bash
# Create virtual environment
python -m venv .venv
source .venv/bin/activate

# Install dependencies
pip install -r requirements.txt
```

You may need to edit the requirements file to fit your particular setup.

You also need a system-wide install of ROCm and `amdgpu.ids` - the app will complain if they are missing.

### Download a model

Models go in the `models/` directory.

**Supported formats:**
- `.safetensors` - single-file safetensors format
- `.ckpt` - single-file checkpoint format
- Directory - diffusers pretrained model directory

You can use any Stable Diffusion 1.5 or SDXL model.

The simplest way is to download them from civitai.com (safetensors format).

For diffusers format, e.g. from huggingface.co:

https://huggingface.co/stablediffusionapi/realistic-vision-v51

```bash
mkdir -p models
cd models
git lfs install  # Needed once

# Download using git lfs
git clone https://huggingface.co/stablediffusionapi/realistic-vision-v51
```

## Configuration

### Model Configuration

Use environment variables to configure which model to load:

| Variable | Values | Default | Description |
|----------|--------|---------|-------------|
| `SD_MODEL_PATH` | path | `./models/realistic-vision-v51` | Path to model file or directory |
| `SD_MODEL_TYPE` | `sd15`, `sdxl` | `sd15` | Model architecture type |
| `SD_LOW_VRAM` | `1`, `true`, `yes` | disabled | Enable for GPUs with <12GB VRAM |
| `SD_LORA_STACK` | see below | none | LoRA files to load, with weights |

```bash
# SD 1.5 safetensors file
SD_MODEL_PATH=./models/my_sd15_model.safetensors ./run_web.sh

# SDXL safetensors file
SD_MODEL_TYPE=sdxl SD_MODEL_PATH=./models/my_sdxl_model.safetensors ./run_web.sh

# SDXL on a GPU with <12GB VRAM (slower, but works)
SD_MODEL_TYPE=sdxl SD_MODEL_PATH=./models/my_sdxl_model.safetensors SD_LOW_VRAM=1 ./run_web.sh

# SD 1.5 ckpt checkpoint file
SD_MODEL_PATH=./models/my_model.ckpt ./run_web.sh

# Diffusers model directory
SD_MODEL_PATH=./models/my-diffusers-model ./run_web.sh
```

### LoRA Configuration

Load one or more LoRA files using the `SD_LORA_STACK` environment variable.

Note: I'm not sure this actually works; in my experience it mostly degraded picture quality.

**Format:** `path/to/lora.safetensors:WEIGHT,path/to/other.safetensors:WEIGHT`

- Entries are comma-separated
- Weight is optional (defaults to 1.0)
- Weight range: 0.0 to 1.0+ (higher values = stronger effect)

```bash
# Single LoRA with the default weight (1.0)
SD_LORA_STACK=./lora/style.safetensors ./run_web.sh

# Single LoRA with a custom weight
SD_LORA_STACK=./lora/style.safetensors:0.8 ./run_web.sh

# Multiple LoRAs stacked
SD_LORA_STACK=./lora/style.safetensors:0.7,./lora/character.safetensors:0.5 ./run_web.sh
```

LoRAs are loaded at startup and applied to all generations. Make sure your LoRAs are compatible with your base model type (SD 1.5 LoRAs for SD 1.5 models, SDXL LoRAs for SDXL models).

### Frontend

Frontend constants live in `templates/index.html`; normally these do not need changing.

```javascript
const CONFIG = {
    GUIDANCE_MIN: 1,
    GUIDANCE_MAX: 20,
    GUIDANCE_SPREAD: 2.5,      // Range spread when syncing to slider
    STEPS_MIN: 1,
    STEPS_MAX: 100,
    STEPS_SPREAD: 15,          // Range spread when syncing to slider
    DEFAULT_TIME_ESTIMATE: 20  // Seconds, for the first image's progress bar
};
```

## Run the POC to verify config

Run `run_poc.sh` (modify it as needed). With the realistic-vision-v51 model and default settings, it produces this picture:

![POC output](poc-output-my-diffusers-model.jpg)

## Run the Server

1. Copy `run_web_example.sh` to `run_web.sh` and customize the env vars to fit your needs - choice of model, special options for your GPU, etc.

2. Start the server:

```bash
./run_web.sh
```

Or manually:
```bash
source .venv/bin/activate
python app.py
```

Open http://localhost:5000 in your browser.

## Architecture

```
┌─────────────────────────────────────────────────────────────┐
│                           Browser                           │
│  ┌─────────────────────────────────────────────────────┐    │
│  │ index.html + style.css                              │    │
│  │ - Form controls for generation parameters           │    │
│  │ - Real-time progress bar with ETA                   │    │
│  │ - Streaming image display via SSE                   │    │
│  └─────────────────────────────────────────────────────┘    │
└─────────────────────────┬───────────────────────────────────┘
                          │ HTTP + Server-Sent Events
┌─────────────────────────▼───────────────────────────────────┐
│                    Flask Server (app.py)                     │
│  - GET /            → Serve UI                               │
│  - POST /generate   → Stream generated images via SSE        │
│  - GET /out/<file>  → Serve saved images                     │
└─────────────────────────┬───────────────────────────────────┘
                          │
┌─────────────────────────▼───────────────────────────────────┐
│              Pipeline Manager (sd_pipeline.py)               │
│  - Singleton pattern keeps model in GPU memory               │
│  - Thread-safe generation with locking                       │
│  - Yields images one-by-one for streaming                    │
└─────────────────────────┬───────────────────────────────────┘
                          │
┌─────────────────────────▼───────────────────────────────────┐
│                   Stable Diffusion Model                     │
│  - Loaded once at startup                                    │
│  - Persists between requests for fast regeneration          │
└─────────────────────────────────────────────────────────────┘
```

## Technologies

| Component    | Technology               | Purpose                             |
|--------------|--------------------------|-------------------------------------|
| Backend      | Flask                    | Lightweight Python web framework    |
| Frontend     | Vanilla HTML/CSS/JS      | No build step, minimal dependencies |
| ML Framework | PyTorch + Diffusers      | Stable Diffusion inference          |
| Streaming    | Server-Sent Events (SSE) | Real-time image delivery            |
| GPU          | CUDA/ROCm                | Hardware acceleration               |

## File Structure

```
poc-uncensored-stable-diffusion/
├── app.py               # Flask application with routes
├── sd_pipeline.py       # Singleton pipeline manager
├── models.py            # Data classes (GenerationOptions, ImageResult, etc.)
├── poc.py               # Minimal proof-of-concept script
├── templates/
│   └── index.html       # Main UI with embedded JS
├── static/
│   └── style.css        # Styling with dark theme
├── run_poc.sh           # Runs the POC script
├── run_web.sh           # Startup script with env vars
├── requirements.txt     # Python dependencies
├── lora/                # LoRA files (optional)
├── out/                 # Generated images output
└── models/              # Model files (SD 1.5, SDXL)
```

## Features

### Generation Parameters

- **Prompt**: Text description for image generation
- **Negative Prompt**: Things to avoid in the image
- **Seed**: Reproducible generation (the Random button generates a 9-digit seed)
- **Steps**: Number of inference steps (1-100, default 20)
- **Guidance Scale**: CFG scale (1-20, default 7.5)
- **Number of Images**: Batch generation (1-10)
- **Quality Keywords**: Optional suffix for enhanced quality

### Variation Modes

- **Increment Seed**: The seed increases by 1 for each image in the batch (default on)
- **Vary Guidance**: Sweep the guidance scale across a range
- **Vary Steps**: Sweep the step count across a range

When a vary mode is enabled, the corresponding slider hides and low/high range inputs appear. Range inputs stay synchronized with slider values while vary mode is off.

### Progress Indication

- Spinner animation during generation
- Progress bar with percentage and ETA countdown
- The first image assumes a 20s estimate; subsequent images use measured time
- After 90%, progress slows asymptotically until the image arrives (see the sketch below)
- Measured time persists across generations for accurate estimates
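
The post-90% slowdown has this shape (extracted from the frontend code; `overTime` is seconds past 90% of the estimate):

```javascript
const slowFactor = 1 + overTime * 0.5;
const percent = 90 + 9 * (1 - 1 / slowFactor);  // approaches 99% but never reaches it
```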

### Streaming Results

- Images appear immediately as they complete (Server-Sent Events)
- No waiting for the entire batch to finish
- Each image card shows: seed, steps, guidance scale, prompt, and a link to the saved file

### Settings Management

- **Export**: Download the current settings as a JSON file
- **Import**: Load settings from a JSON file
- All parameters are preserved, including vary-mode ranges

### Responsive Layout

- **Narrow screens**: Stacked layout (form above results)
- **Wide screens (>1200px)**: Side-by-side layout
  - Left panel: fixed-width control form with scrollbar
  - Right panel: scrollable results grid

### Output Files

Each generated image saves two files to `out/`:
- `YY-MM-DD_HH-MM-SS_SEED.jpg` - the image
- `YY-MM-DD_HH-MM-SS_SEED.json` - a metadata file in a format that can be imported into the web UI to re-apply the settings
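
A sketch of the metadata file's shape (field names from `models.py`; values illustrative):

```json
{
  "prompt": "a red fox in the snow",
  "negative_prompt": "blurry",
  "seed": 123456789,
  "steps": 20,
  "guidance_scale": 7.5,
  "width": 512,
  "height": 512,
  "add_quality_keywords": true,
  "full_prompt": "a red fox in the snow, hyper detail, ..."
}
```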

## API

### POST /generate

Request (JSON):
```json
{
  "prompt": "your prompt",
  "negative_prompt": "things to avoid",
  "seed": 12345,
  "steps": 20,
  "guidance_scale": 7.5,
  "count": 1,
  "width": 512,
  "height": 512,
  "add_quality_keywords": true,
  "increment_seed": true,
  "vary_guidance": false,
  "guidance_low": 5.0,
  "guidance_high": 12.0,
  "vary_steps": false,
  "steps_low": 20,
  "steps_high": 80
}
```

Response (SSE stream):
```
data: {"index":1,"total":1,"filename":"...","seed":12345,"steps":20,"guidance_scale":7.5,"width":512,"height":512,"prompt":"...","negative_prompt":"...","full_prompt":"...","url":"/out/...","base64":"data:image/jpeg;base64,..."}

data: {"done":true}
```
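
For example, to hit the endpoint from the command line (a sketch; `-N` disables curl's buffering so SSE events print as they arrive):

```bash
curl -N -X POST http://localhost:5000/generate \
  -H 'Content-Type: application/json' \
  -d '{"prompt": "a red fox in the snow", "count": 1}'
```
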
@@ -0,0 +1,78 @@
import json
import traceback
from flask import Flask, render_template, request, jsonify, send_from_directory, Response
from sd_pipeline import pipeline
from models import GenerationOptions

app = Flask(__name__)


@app.route("/")
def index():
    return render_template("index.html")


@app.route("/generate", methods=["POST"])
def generate():
    data = request.get_json()

    prompt = data.get("prompt", "")
    if not prompt:
        return jsonify({"success": False, "error": "Prompt is required"}), 400

    # Parse and validate seed
    seed = data.get("seed")
    if seed is not None and seed != "":
        try:
            seed = int(seed)
        except (TypeError, ValueError):
            return jsonify({"success": False, "error": "Seed must be an integer"}), 400
    else:
        seed = None

    # Parse and clamp numeric values
    width = data.get("width")
    height = data.get("height")
    if width:
        width = max(256, min(2048, int(width) // 8 * 8))  # must be a multiple of 8
    if height:
        height = max(256, min(2048, int(height) // 8 * 8))

    options = GenerationOptions(
        prompt=prompt,
        negative_prompt=data.get("negative_prompt", ""),
        seed=seed,
        steps=max(1, min(100, int(data.get("steps", 20)))),
        guidance_scale=max(1.0, min(20.0, float(data.get("guidance_scale", 7.5)))),
        count=max(1, min(10, int(data.get("count", 1)))),
        add_quality_keywords=data.get("add_quality_keywords", True),
        increment_seed=data.get("increment_seed", True),
        vary_guidance=data.get("vary_guidance", False),
        guidance_low=max(1.0, min(20.0, float(data.get("guidance_low", 5.0)))),
        guidance_high=max(1.0, min(20.0, float(data.get("guidance_high", 12.0)))),
        vary_steps=data.get("vary_steps", False),
        steps_low=max(1, min(100, int(data.get("steps_low", 20)))),
        steps_high=max(1, min(100, int(data.get("steps_high", 80)))),
        width=width,
        height=height,
    )

    def generate_events():
        try:
            for result in pipeline.generate_stream(options):
                yield f"data: {json.dumps(result.to_dict())}\n\n"
            yield f"data: {json.dumps({'done': True})}\n\n"
        except Exception as e:
            traceback.print_exc()
            yield f"data: {json.dumps({'error': str(e)})}\n\n"

    return Response(generate_events(), mimetype="text/event-stream")


@app.route("/out/<path:filename>")
def serve_image(filename):
    return send_from_directory("out", filename)


if __name__ == "__main__":
    print("Loading model on startup...")
    pipeline.load()
    print("Starting web server...")
    app.run(host="127.0.0.1", port=5000, debug=False, threaded=True)
@@ -0,0 +1,53 @@
#!/usr/bin/env python3
"""Convert a diffusers-format model to a single safetensors checkpoint."""

import argparse
import torch
from pathlib import Path


def convert(model_path: str, output_path: str, half: bool = True):
    from diffusers import StableDiffusionPipeline
    from safetensors.torch import save_file

    dtype = torch.float16 if half else torch.float32
    print(f"Loading diffusers model from {model_path}...")
    pipe = StableDiffusionPipeline.from_pretrained(model_path, torch_dtype=dtype)

    state_dict = {}

    print("Converting UNet...")
    for k, v in pipe.unet.state_dict().items():
        state_dict[f"model.diffusion_model.{k}"] = v

    print("Converting text encoder...")
    for k, v in pipe.text_encoder.state_dict().items():
        state_dict[f"cond_stage_model.transformer.{k}"] = v

    print("Converting VAE...")
    for k, v in pipe.vae.state_dict().items():
        state_dict[f"first_stage_model.{k}"] = v

    print(f"Saving to {output_path}...")
    save_file(state_dict, output_path)
    print("Done!")


def main():
    parser = argparse.ArgumentParser(description="Convert diffusers model to safetensors")
    parser.add_argument("model_path", help="Path to diffusers model directory")
    parser.add_argument("output_path", nargs="?", help="Output safetensors file path (default: model name in SD models dir)")
    parser.add_argument("--full", action="store_true", help="Use float32 instead of float16")
    args = parser.parse_args()

    model_path = Path(args.model_path)
    if args.output_path:
        output_path = args.output_path
    else:
        output_path = f"/var/opt/stable-diffusion-webui/data/models/Stable-diffusion/{model_path.name}.safetensors"

    convert(str(model_path), output_path, half=not args.full)


if __name__ == "__main__":
    main()
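
# Example invocation (this script's filename is not shown in the commit;
# "convert_to_safetensors.py" is a placeholder):
#   python convert_to_safetensors.py models/realistic-vision-v51 out.safetensors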
@@ -0,0 +1,65 @@
from dataclasses import dataclass, asdict


@dataclass
class GenerationOptions:
    """Input options for image generation."""
    prompt: str
    negative_prompt: str = ""
    seed: int | None = None
    steps: int = 20
    guidance_scale: float = 7.5
    count: int = 1
    add_quality_keywords: bool = True
    increment_seed: bool = True
    vary_guidance: bool = False
    guidance_low: float = 5.0
    guidance_high: float = 12.0
    vary_steps: bool = False
    steps_low: int = 20
    steps_high: int = 80
    width: int | None = None
    height: int | None = None


@dataclass
class ImageParams:
    """Computed parameters for a single image generation."""
    seed: int
    steps: int
    guidance_scale: float


@dataclass
class ImageMetadata:
    """Metadata saved alongside each generated image."""
    prompt: str
    negative_prompt: str
    seed: int
    steps: int
    guidance_scale: float
    width: int
    height: int
    add_quality_keywords: bool
    full_prompt: str = ""

    def to_dict(self) -> dict:
        return asdict(self)


@dataclass
class ImageResult:
    """Result returned for each generated image via SSE."""
    index: int
    total: int
    filename: str
    url: str
    base64: str
    metadata: ImageMetadata

    def to_dict(self) -> dict:
        """Flatten metadata into the result dict for JSON serialization."""
        result = asdict(self)
        metadata = result.pop("metadata")
        result.update(metadata)
        return result
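
# Example of the metadata flattening (illustrative values):
#   meta = ImageMetadata(prompt="cat", negative_prompt="", seed=1, steps=20,
#                        guidance_scale=7.5, width=512, height=512,
#                        add_quality_keywords=False)
#   ImageResult(index=1, total=1, filename="x.jpg", url="/out/x.jpg",
#               base64="...", metadata=meta).to_dict()
#   # -> {"index": 1, ..., "prompt": "cat", "seed": 1, ...}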
@@ -0,0 +1,56 @@
from diffusers import StableDiffusionPipeline
import torch
import random
import datetime


def random_seed(length):
    """Return a random seed with the given number of decimal digits."""
    random.seed()
    min_val = 10 ** (length - 1)
    max_val = 10 ** length - 1
    return random.randint(min_val, max_val)


device_type = "cuda"  # Using AMD GPU with ROCm


def load_model():
    model_id = "./models/realistic-vision-v51"
    pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16, safety_checker=None)

    pipe = pipe.to(device_type)
    return pipe


def generate_image(pipe, prompt, seed=None):
    generator = torch.Generator(device=device_type)

    if seed is not None:
        generator.manual_seed(seed)

    with torch.no_grad():
        image = pipe(prompt=prompt, num_inference_steps=20, guidance_scale=5, generator=generator).images[0]
    return image


quality_keywords = "Canon50, hyper detail, cinematic lighting, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited"


def generate(pipe, prompt, seed):
    image = generate_image(pipe, prompt, seed)
    dt = datetime.datetime.now().strftime("%y-%m-%d_%H-%M-%S")
    base_file = 'out/%s_%d' % (dt, seed)
    image_file = '%s.jpg' % base_file
    text_file = '%s.txt' % base_file

    image.save(image_file)
    with open(text_file, "w") as file:
        file.write("%d\n%s" % (seed, prompt))

    # Open in viewer
    image.show()


def main():
    pipe = load_model()
    prompt = "young adult woman, ((shoulder cut dark hair)), blue eyes, no makeup, white blouse, dressed, long sleeves, tilted head, enigmatic smile, %s" % quality_keywords

    seed = 673842166
    generate(pipe, prompt, seed)


if __name__ == "__main__":
    main()
@@ -0,0 +1,7 @@
--extra-index-url https://download.pytorch.org/whl/rocm6.3
torch==2.9.1+rocm6.3
torchvision==0.24.1+rocm6.3
transformers>=4.40.1
diffusers>=0.27.2
peft>=0.10.0
flask>=3.0.0
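# For NVIDIA you would instead drop the ROCm extra index and use the default
# CUDA wheels - a sketch, versions illustrative:
#   torch==2.9.1
#   torchvision==0.24.1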
@@ -0,0 +1,5 @@
#!/bin/bash

source .venv/bin/activate
export HSA_OVERRIDE_GFX_VERSION=10.3.0
python poc.py
@@ -0,0 +1,28 @@
#!/bin/bash
cd "$(dirname "$0")"
source .venv/bin/activate

# Needed for RDNA 2 cards
export HSA_OVERRIDE_GFX_VERSION=10.3.0


# Example use of an SD 1.5 model - these are fast and easy on VRAM, but produce only 512px

#export SD_MODEL_PATH=models/realistic-vision-v51
#export SD_MODEL_TYPE=sd15


# Example use of an SDXL model - these can produce 1024px but are slow and demand more VRAM

#export SD_MODEL_PATH=models/perfectdeliberate_v70.safetensors
export SD_MODEL_TYPE=sdxl
export SD_LOW_VRAM=1
export PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True


# Example of a LoRA stack

#export SD_LORA_STACK="lora/detailed style xl.safetensors:0.7,lora/perfection style xl.safetensors:0.5"


python app.py
@@ -0,0 +1,278 @@
import threading
import datetime
import random
import base64
import io
import json
import os
from diffusers import (
    StableDiffusionPipeline,
    StableDiffusionXLPipeline,
    DPMSolverMultistepScheduler,
)
import torch

from models import GenerationOptions, ImageParams, ImageMetadata, ImageResult


# --- Model Loaders ---
# To add a new model type, create a loader function and register it in MODEL_LOADERS

def load_sd15(model_path, device, is_single_file):
    """Load Stable Diffusion 1.5 model."""
    if is_single_file:
        pipe = StableDiffusionPipeline.from_single_file(
            model_path,
            torch_dtype=torch.float16,
        )
        pipe.safety_checker = None
        pipe.requires_safety_checker = False
    else:
        pipe = StableDiffusionPipeline.from_pretrained(
            model_path,
            torch_dtype=torch.float16,
            safety_checker=None,
        )
    return pipe


def load_sdxl(model_path, device, is_single_file):
    """Load Stable Diffusion XL model."""
    if is_single_file:
        pipe = StableDiffusionXLPipeline.from_single_file(
            model_path,
            torch_dtype=torch.float16,
        )
    else:
        pipe = StableDiffusionXLPipeline.from_pretrained(
            model_path,
            torch_dtype=torch.float16,
        )
    return pipe


MODEL_LOADERS = {
    "sd15": load_sd15,
    "sdxl": load_sdxl,
}
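
# A hypothetical example of extending this registry (illustrative only -
# "sd21" and load_sd21 are not part of this repo):
#
#     def load_sd21(model_path, device, is_single_file):
#         """Load a Stable Diffusion 2.1 model."""
#         ...
#
#     MODEL_LOADERS["sd21"] = load_sd21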


# --- Pipeline Manager ---

class SDPipeline:
    _instance = None
    _lock = threading.Lock()

    def __new__(cls):
        # Double-checked locking: only one instance is ever created
        if cls._instance is None:
            with cls._lock:
                if cls._instance is None:
                    cls._instance = super().__new__(cls)
                    cls._instance._initialized = False
        return cls._instance

    def __init__(self):
        if self._initialized:
            return
        self._initialized = True
        self._generation_lock = threading.Lock()
        self.device = "cuda"
        self.pipe = None
        self.model_path = os.environ.get("SD_MODEL_PATH", "./models/realistic-vision-v51")
        self.model_type = os.environ.get("SD_MODEL_TYPE", "sd15")
        self.low_vram = os.environ.get("SD_LOW_VRAM", "").lower() in ("1", "true", "yes")
        self.lora_stack = self._parse_lora_stack(os.environ.get("SD_LORA_STACK", ""))
        self.quality_keywords = "hyper detail, Canon50, cinematic lighting, realistic, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited"

    def _parse_lora_stack(self, lora_env: str) -> list[tuple[str, float]]:
        """Parse SD_LORA_STACK env var into a list of (path, weight) tuples.

        Format: path/to/lora.safetensors:0.8,path/to/other.safetensors:0.5
        """
        if not lora_env.strip():
            return []

        result = []
        for entry in lora_env.split(","):
            entry = entry.strip()
            if not entry:
                continue
            if ":" in entry:
                # rsplit, so colons earlier in the path survive
                path, weight_str = entry.rsplit(":", 1)
                weight = float(weight_str)
            else:
                path = entry
                weight = 1.0
            result.append((path, weight))
        return result

    def load(self):
        """Load the model into GPU memory."""
        if self.pipe is not None:
            return

        if not os.path.exists(self.model_path):
            raise FileNotFoundError(f"Model not found: {self.model_path}")

        if self.model_type not in MODEL_LOADERS:
            available = ", ".join(MODEL_LOADERS.keys())
            raise ValueError(f"Unknown model type '{self.model_type}'. Available: {available}")

        print(f"Loading model ({self.model_type}) from {self.model_path}...")

        is_single_file = self.model_path.endswith((".safetensors", ".ckpt"))
        loader = MODEL_LOADERS[self.model_type]
        self.pipe = loader(self.model_path, self.device, is_single_file)

        self.pipe.scheduler = DPMSolverMultistepScheduler.from_config(
            self.pipe.scheduler.config,
            use_karras_sigmas=True,
        )

        if self.low_vram:
            self.pipe.enable_sequential_cpu_offload()
            self.pipe.vae.enable_slicing()
            self.pipe.vae.enable_tiling()
            print("Low VRAM mode: enabled sequential CPU offload and VAE slicing/tiling")
        else:
            self.pipe = self.pipe.to(self.device)
            self.pipe.enable_attention_slicing()

        self._load_loras()

        print("Model loaded successfully!")

    def _load_loras(self):
        """Load LoRA weights from SD_LORA_STACK configuration."""
        if not self.lora_stack:
            return

        adapter_names = []
        adapter_weights = []

        for i, (path, weight) in enumerate(self.lora_stack):
            if not os.path.exists(path):
                print(f"Warning: LoRA not found, skipping: {path}")
                continue

            adapter_name = f"lora_{i}"
            print(f"Loading LoRA: {path} (weight={weight})")
            self.pipe.load_lora_weights(path, adapter_name=adapter_name)
            adapter_names.append(adapter_name)
            adapter_weights.append(weight)

        if adapter_names:
            self.pipe.set_adapters(adapter_names, adapter_weights=adapter_weights)
            print(f"Loaded {len(adapter_names)} LoRA(s)")

    def generate_stream(self, options: GenerationOptions):
        """Generate images and yield results one by one."""
        if self.pipe is None:
            self.load()

        seed = options.seed if options.seed is not None else self._random_seed()

        with self._generation_lock:
            for i in range(options.count):
                params = self._compute_params(options, seed, i)
                full_prompt = f"{options.prompt}, {self.quality_keywords}" if options.add_quality_keywords else options.prompt

                image = self._generate_image(full_prompt, options.negative_prompt, params, options.width, options.height)
                result = self._save_and_encode(image, options, params, full_prompt, i)
                yield result

    def _compute_params(self, options: GenerationOptions, seed: int, index: int) -> ImageParams:
        """Compute generation parameters for a single image."""
        current_seed = seed + index if options.increment_seed else seed

        # Linearly interpolate guidance/steps across the batch when a vary mode is on
        if options.vary_guidance and options.count > 1:
            t = index / (options.count - 1)
            current_guidance = options.guidance_low + t * (options.guidance_high - options.guidance_low)
        else:
            current_guidance = options.guidance_scale

        if options.vary_steps and options.count > 1:
            t = index / (options.count - 1)
            current_steps = int(options.steps_low + t * (options.steps_high - options.steps_low))
        else:
            current_steps = options.steps

        return ImageParams(
            seed=current_seed,
            steps=current_steps,
            guidance_scale=current_guidance,
        )

    def _generate_image(self, prompt: str, negative_prompt: str, params: ImageParams, width: int | None, height: int | None):
        """Run the diffusion pipeline to generate a single image."""
        if self.low_vram:
            torch.cuda.empty_cache()

        # With sequential CPU offload, the generator is kept on the CPU
        gen_device = "cpu" if self.low_vram else self.device
        generator = torch.Generator(device=gen_device)
        generator.manual_seed(params.seed)

        kwargs = {
            "prompt": prompt,
            "num_inference_steps": params.steps,
            "guidance_scale": params.guidance_scale,
            "generator": generator,
        }
        if negative_prompt:
            kwargs["negative_prompt"] = negative_prompt
        if width:
            kwargs["width"] = width
        if height:
            kwargs["height"] = height

        with torch.no_grad():
            result = self.pipe(**kwargs)
        return result.images[0]

    def _save_and_encode(self, image, options: GenerationOptions, params: ImageParams, full_prompt: str, index: int) -> ImageResult:
        """Save image to disk and encode as base64."""
        dt = datetime.datetime.now().strftime("%y-%m-%d_%H-%M-%S")
        base_file = f"out/{dt}_{params.seed}"

        image.save(f"{base_file}.jpg")

        width = options.width or image.width
        height = options.height or image.height

        metadata = ImageMetadata(
            prompt=options.prompt,
            negative_prompt=options.negative_prompt,
            seed=params.seed,
            steps=params.steps,
            guidance_scale=params.guidance_scale,
            width=width,
            height=height,
            add_quality_keywords=options.add_quality_keywords,
            full_prompt=full_prompt,
        )

        with open(f"{base_file}.json", "w") as f:
            json.dump(metadata.to_dict(), f, indent=2)

        buffer = io.BytesIO()
        image.save(buffer, format="JPEG")
        b64_image = base64.b64encode(buffer.getvalue()).decode("utf-8")

        return ImageResult(
            index=index + 1,
            total=options.count,
            filename=f"{dt}_{params.seed}.jpg",
            url=f"/out/{dt}_{params.seed}.jpg",
            base64=f"data:image/jpeg;base64,{b64_image}",
            metadata=metadata,
        )

    def _random_seed(self, length=9):
        """Generate a random seed with the specified number of digits."""
        random.seed()
        min_val = 10 ** (length - 1)
        max_val = 10 ** length - 1
        return random.randint(min_val, max_val)


pipeline = SDPipeline()
@@ -0,0 +1,386 @@
* {
    box-sizing: border-box;
}

body {
    font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', Roboto, Oxygen, Ubuntu, sans-serif;
    background-color: #1a1a2e;
    color: #eee;
    margin: 0;
    padding: 20px;
    min-height: 100vh;
}

.container {
    max-width: 900px;
    margin: 0 auto;
}

h1 {
    text-align: center;
    margin-bottom: 20px;
    margin-top: 0;
    color: #fff;
}

form {
    background: #16213e;
    padding: 15px;
    border-radius: 10px;
    margin-bottom: 20px;
}

.form-group {
    margin-bottom: 15px;
}

.form-row {
    display: grid;
    grid-template-columns: 1fr 1fr;
    gap: 15px;
}

label {
    display: block;
    margin-bottom: 8px;
    font-weight: 500;
}

textarea, input[type="number"] {
    width: 100%;
    padding: 12px;
    border: 1px solid #0f3460;
    border-radius: 6px;
    background: #1a1a2e;
    color: #fff;
    font-size: 14px;
}

textarea {
    resize: vertical;
    min-height: 80px;
}

input[type="range"] {
    width: 100%;
    height: 8px;
    -webkit-appearance: none;
    background: #0f3460;
    border-radius: 4px;
    outline: none;
}

input[type="range"]::-webkit-slider-thumb {
    -webkit-appearance: none;
    width: 20px;
    height: 20px;
    background: #e94560;
    border-radius: 50%;
    cursor: pointer;
}

input[type="range"]::-moz-range-thumb {
    width: 20px;
    height: 20px;
    background: #e94560;
    border-radius: 50%;
    cursor: pointer;
    border: none;
}

.seed-input {
    display: flex;
    gap: 10px;
}

.seed-input input {
    flex: 1;
}

.seed-input button {
    padding: 12px 20px;
    background: #0f3460;
    border: none;
    border-radius: 6px;
    color: #fff;
    cursor: pointer;
    font-size: 14px;
}

.seed-input button:hover {
    background: #1a4a7a;
}

.checkbox-group label {
    display: flex;
    align-items: center;
    gap: 10px;
    cursor: pointer;
}

.checkbox-group input[type="checkbox"] {
    width: 18px;
    height: 18px;
    cursor: pointer;
}

.range-inputs {
    display: flex;
    align-items: center;
    gap: 10px;
    margin-top: 8px;
    margin-left: 28px;
}

.range-inputs input {
    width: 70px;
    padding: 8px;
    border: 1px solid #0f3460;
    border-radius: 6px;
    background: #1a1a2e;
    color: #fff;
    font-size: 14px;
}

.range-inputs span {
    color: #aaa;
}

button[type="submit"] {
    width: 100%;
    padding: 15px;
    background: #e94560;
    border: none;
    border-radius: 6px;
    color: #fff;
    font-size: 16px;
    font-weight: 600;
    cursor: pointer;
    transition: background 0.2s;
}

button[type="submit"]:hover {
    background: #ff6b6b;
}

button[type="submit"]:disabled {
    background: #666;
    cursor: not-allowed;
}

.settings-buttons {
    display: flex;
    gap: 10px;
    margin-bottom: 20px;
}

.settings-buttons button {
    flex: 1;
    padding: 10px;
    background: #0f3460;
    border: none;
    border-radius: 6px;
    color: #fff;
    font-size: 14px;
    cursor: pointer;
    transition: background 0.2s;
}

.settings-buttons button:hover {
    background: #1a4a7a;
}

.status {
    display: none;
    align-items: center;
    justify-content: center;
    gap: 12px;
    text-align: center;
    padding: 15px;
    border-radius: 6px;
    margin-bottom: 20px;
}

.status.loading,
.status.success,
.status.error {
    display: flex;
}

.status.loading {
    background: #0f3460;
}

.status.success {
    background: #1e5128;
}

.status.error {
    background: #7b2d26;
}

.spinner {
    display: none;
    width: 20px;
    height: 20px;
    border: 3px solid rgba(255, 255, 255, 0.3);
    border-top-color: #fff;
    border-radius: 50%;
    animation: spin 0.8s linear infinite;
}

.status.loading .spinner {
    display: block;
}

@keyframes spin {
    to {
        transform: rotate(360deg);
    }
}

.progress-container {
    display: none;
    margin-bottom: 20px;
}

.progress-bar {
    height: 8px;
    background: #0f3460;
    border-radius: 4px;
    overflow: hidden;
}

.progress-fill {
    height: 100%;
    background: linear-gradient(90deg, #e94560, #ff6b6b);
    border-radius: 4px;
    width: 0%;
    transition: width 0.1s linear;
}

.progress-text {
    text-align: center;
    margin-top: 8px;
    font-size: 14px;
    color: #aaa;
}

.results {
    display: grid;
    grid-template-columns: repeat(auto-fill, minmax(400px, 1fr));
    gap: 20px;
}

.image-card {
    background: #16213e;
    border-radius: 10px;
    overflow: hidden;
}

.image-card img {
    width: 100%;
    height: auto;
    display: block;
}

.image-info {
    padding: 15px;
}

.image-info p {
    margin: 8px 0;
    font-size: 14px;
    word-break: break-word;
}

.image-info a {
    color: #e94560;
    text-decoration: none;
}

.image-info a:hover {
    text-decoration: underline;
}

@media (max-width: 600px) {
    .form-row {
        grid-template-columns: 1fr;
    }

    .results {
        grid-template-columns: 1fr;
    }
}

/* Wide screen mode */
@media (min-width: 1200px) {
    body {
        padding: 0;
        height: 100vh;
        overflow: hidden;
    }

    .container {
        display: flex;
        max-width: none;
        height: 100vh;
        margin: 0;
    }

    .panel-left {
        width: 600px;
        min-width: 600px;
        padding: 20px;
        overflow-y: auto;
        border-right: 1px solid #0f3460;
    }

    .panel-left h1 {
        font-size: 1.5rem;
        margin-bottom: 20px;
    }

    .panel-left form {
        margin-bottom: 20px;
    }

    .panel-left .form-row {
        grid-template-columns: 1fr 1fr;
    }

    /* Smaller scrollbar for left panel */
    .panel-left::-webkit-scrollbar {
        width: 6px;
    }

    .panel-left::-webkit-scrollbar-track {
        background: #1a1a2e;
    }

    .panel-left::-webkit-scrollbar-thumb {
        background: #0f3460;
        border-radius: 3px;
    }

    .panel-left::-webkit-scrollbar-thumb:hover {
        background: #1a4a7a;
    }

    /* Firefox scrollbar */
    .panel-left {
        scrollbar-width: thin;
        scrollbar-color: #0f3460 #1a1a2e;
    }

    .panel-right {
        flex: 1;
        padding: 20px;
        overflow-y: auto;
        background: #12121f;
    }

    .panel-right .results {
        grid-template-columns: repeat(auto-fill, minmax(400px, 1fr));
    }
}
@ -0,0 +1,625 @@ |
||||
<!DOCTYPE html> |
||||
<html lang="en"> |
||||
<head> |
||||
<meta charset="UTF-8"> |
||||
<meta name="viewport" content="width=device-width, initial-scale=1.0"> |
||||
<title>Stable Diffusion Generator</title> |
||||
<link rel="stylesheet" href="/static/style.css"> |
||||
</head> |
||||
<body> |
||||
<div class="container"> |
||||
<div class="panel-left"> |
||||
<h1>Stable Diffusion Generator</h1> |
||||
|
||||
<form id="generate-form"> |
||||
<div class="form-group"> |
||||
<label for="prompt">Prompt</label> |
||||
<textarea id="prompt" name="prompt" rows="3" placeholder="Enter your prompt here..."></textarea> |
||||
</div> |
||||
|
||||
<div class="form-group"> |
||||
<label for="negative-prompt">Negative Prompt</label> |
||||
<textarea id="negative-prompt" name="negative_prompt" rows="2" placeholder="Things to avoid in the image..."></textarea> |
||||
</div> |
||||
|
||||
<div class="form-row"> |
||||
<div class="form-group"> |
||||
<label for="seed">Seed</label> |
||||
<div class="seed-input"> |
||||
<input type="number" id="seed" name="seed" placeholder="Random"> |
||||
<button type="button" id="random-seed">Random</button> |
||||
</div> |
||||
</div> |
||||
|
||||
<div class="form-group"> |
||||
<label for="count">Number of Images</label> |
||||
<input type="number" id="count" name="count" value="1" min="1" max="50"> |
||||
</div> |
||||
</div> |
||||
|
||||
<div class="form-row"> |
||||
<div class="form-group"> |
||||
<label for="width">Width</label> |
||||
<select id="width" name="width"> |
||||
<option value="128">128</option> |
||||
<option value="256">256</option> |
||||
<option value="512" selected>512 (SD 1.5)</option> |
||||
<option value="768">768</option> |
||||
<option value="1024">1024 (SDXL)</option> |
||||
</select> |
||||
</div> |
||||
<div class="form-group"> |
||||
<label for="height">Height</label> |
||||
<select id="height" name="height"> |
||||
<option value="128">128</option> |
||||
<option value="256">256</option> |
||||
<option value="512" selected>512 (SD 1.5)</option> |
||||
<option value="768">768</option> |
||||
<option value="1024">1024 (SDXL)</option> |
||||
</select> |
||||
</div> |
||||
</div> |
||||
|
||||
<div class="form-row"> |
||||
<div class="form-group" id="steps-group"> |
||||
<label for="steps">Steps: <span id="steps-value">20</span></label> |
||||
<input type="range" id="steps" name="steps" min="1" max="100" value="20"> |
||||
</div> |
||||
|
||||
<div class="form-group" id="guidance-group"> |
||||
<label for="guidance">Guidance Scale: <span id="guidance-value">7.5</span></label> |
||||
<input type="range" id="guidance" name="guidance_scale" min="1" max="20" step="0.5" value="7.5"> |
||||
</div> |
||||
</div> |
||||
|
||||
<div class="form-group checkbox-group"> |
||||
<label> |
||||
<input type="checkbox" id="quality-keywords" name="add_quality_keywords" checked> |
||||
Add quality keywords |
||||
</label> |
||||
</div> |
||||
|
||||
<div class="form-group checkbox-group"> |
||||
<label> |
||||
<input type="checkbox" id="increment-seed" name="increment_seed" checked> |
||||
Increment seed |
||||
</label> |
||||
</div> |
||||
|
||||
<div class="form-group checkbox-group"> |
||||
<label> |
||||
<input type="checkbox" id="vary-guidance" name="vary_guidance"> |
||||
Vary guidance |
||||
</label> |
||||
<div class="range-inputs" id="guidance-range" style="display: none;"> |
||||
<input type="number" id="guidance-low" value="5" min="1" max="20" step="0.5"> |
||||
<span>to</span> |
||||
<input type="number" id="guidance-high" value="12" min="1" max="20" step="0.5"> |
||||
</div> |
||||
</div> |
||||
|
||||
<div class="form-group checkbox-group"> |
||||
<label> |
||||
<input type="checkbox" id="vary-steps" name="vary_steps"> |
||||
Vary steps |
||||
</label> |
||||
<div class="range-inputs" id="steps-range" style="display: none;"> |
||||
<input type="number" id="steps-low" value="20" min="1" max="100"> |
||||
<span>to</span> |
||||
<input type="number" id="steps-high" value="80" min="1" max="100"> |
||||
</div> |
||||
</div> |
||||
|
||||
<button type="submit" id="generate-btn">Generate</button> |
||||
</form> |
||||
|
||||
<div class="settings-buttons"> |
||||
<button type="button" id="export-btn">Export Settings</button> |
||||
<button type="button" id="import-btn">Import Settings</button> |
||||
<input type="file" id="import-file" accept=".json" hidden> |
||||
</div> |
||||
|
||||
<div id="status" class="status"> |
||||
<div class="spinner"></div> |
||||
<span id="status-text"></span> |
||||
</div> |
||||
|
||||
<div id="progress-container" class="progress-container"> |
||||
<div class="progress-bar"> |
||||
<div id="progress-fill" class="progress-fill"></div> |
||||
</div> |
||||
<div id="progress-text" class="progress-text"></div> |
||||
</div> |
||||
</div> |
||||
|
||||
<div class="panel-right"> |
||||
<div id="results" class="results"></div> |
||||
</div> |
||||
</div> |
||||
|
||||
<script> |
||||
// Configuration constants |
||||
const CONFIG = { |
||||
GUIDANCE_MIN: 1, |
||||
GUIDANCE_MAX: 20, |
||||
GUIDANCE_SPREAD: 2.5, |
||||
STEPS_MIN: 1, |
||||
STEPS_MAX: 100, |
||||
STEPS_SPREAD: 15, |
||||
DEFAULT_TIME_ESTIMATE: 20 |
||||
}; |
||||
|
||||
const form = document.getElementById('generate-form'); |
||||
const generateBtn = document.getElementById('generate-btn'); |
||||
const statusDiv = document.getElementById('status'); |
||||
const statusText = document.getElementById('status-text'); |
||||
const progressContainer = document.getElementById('progress-container'); |
||||
const progressFill = document.getElementById('progress-fill'); |
||||
const progressText = document.getElementById('progress-text'); |
||||
const results = document.getElementById('results'); |
||||
const stepsSlider = document.getElementById('steps'); |
||||
const stepsValue = document.getElementById('steps-value'); |
||||
const guidanceSlider = document.getElementById('guidance'); |
||||
const guidanceValue = document.getElementById('guidance-value'); |
||||
const randomSeedBtn = document.getElementById('random-seed'); |
||||
const seedInput = document.getElementById('seed'); |
||||
|
||||
let timePerImage = null; |
||||
let progressInterval = null; |
||||
let imageStartTime = null; |
||||
|
||||
const incrementSeedCheckbox = document.getElementById('increment-seed'); |
||||
const varyGuidanceCheckbox = document.getElementById('vary-guidance'); |
||||
const varyStepsCheckbox = document.getElementById('vary-steps'); |
||||
const guidanceRangeDiv = document.getElementById('guidance-range'); |
||||
const stepsRangeDiv = document.getElementById('steps-range'); |
||||
const guidanceGroup = document.getElementById('guidance-group'); |
||||
const stepsGroup = document.getElementById('steps-group'); |
||||
const guidanceLow = document.getElementById('guidance-low'); |
||||
const guidanceHigh = document.getElementById('guidance-high'); |
||||
const stepsLow = document.getElementById('steps-low'); |
||||
const stepsHigh = document.getElementById('steps-high'); |
||||
const countInput = document.getElementById('count'); |
||||
const incrementSeedGroup = incrementSeedCheckbox.closest('.form-group'); |
||||
const varyGuidanceGroup = varyGuidanceCheckbox.closest('.form-group'); |
||||
const varyStepsGroup = varyStepsCheckbox.closest('.form-group'); |
||||
|
||||
function updateVaryOptionsVisibility() { |
||||
const count = parseInt(countInput.value) || 1; |
||||
const showVary = count > 1; |
||||
incrementSeedGroup.style.display = showVary ? 'block' : 'none'; |
||||
varyGuidanceGroup.style.display = showVary ? 'block' : 'none'; |
||||
varyStepsGroup.style.display = showVary ? 'block' : 'none'; |
||||
} |
||||
|
||||
countInput.addEventListener('input', updateVaryOptionsVisibility); |
||||
updateVaryOptionsVisibility(); |
||||
|
||||
stepsSlider.addEventListener('input', () => { |
||||
stepsValue.textContent = stepsSlider.value; |
||||
timePerImage = null; |
||||
if (!varyStepsCheckbox.checked) { |
||||
const val = parseInt(stepsSlider.value); |
||||
stepsLow.value = Math.max(CONFIG.STEPS_MIN, val - CONFIG.STEPS_SPREAD); |
||||
stepsHigh.value = Math.min(CONFIG.STEPS_MAX, val + CONFIG.STEPS_SPREAD); |
||||
} |
||||
}); |
||||
|
||||
guidanceSlider.addEventListener('input', () => { |
||||
guidanceValue.textContent = guidanceSlider.value; |
||||
if (!varyGuidanceCheckbox.checked) { |
||||
const val = parseFloat(guidanceSlider.value); |
||||
guidanceLow.value = Math.max(CONFIG.GUIDANCE_MIN, val - CONFIG.GUIDANCE_SPREAD); |
||||
guidanceHigh.value = Math.min(CONFIG.GUIDANCE_MAX, val + CONFIG.GUIDANCE_SPREAD); |
||||
} |
||||
}); |
||||
|
||||
randomSeedBtn.addEventListener('click', () => { |
||||
seedInput.value = Math.floor(100000000 + Math.random() * 900000000); |
||||
}); |
||||
|
||||
document.getElementById('prompt').addEventListener('keydown', (e) => { |
||||
if (e.ctrlKey && e.key === 'Enter') { |
||||
e.preventDefault(); |
||||
form.requestSubmit(); |
||||
} |
||||
}); |
||||
|
||||
varyGuidanceCheckbox.addEventListener('change', () => { |
||||
guidanceRangeDiv.style.display = varyGuidanceCheckbox.checked ? 'flex' : 'none'; |
||||
guidanceGroup.style.display = varyGuidanceCheckbox.checked ? 'none' : 'block'; |
||||
}); |
||||
|
||||
varyStepsCheckbox.addEventListener('change', () => { |
||||
stepsRangeDiv.style.display = varyStepsCheckbox.checked ? 'flex' : 'none'; |
||||
stepsGroup.style.display = varyStepsCheckbox.checked ? 'none' : 'block'; |
||||
}); |
||||
|
||||
stepsLow.addEventListener('input', () => { timePerImage = null; }); |
||||
stepsHigh.addEventListener('input', () => { timePerImage = null; }); |
||||
document.getElementById('width').addEventListener('change', () => { timePerImage = null; }); |
||||
document.getElementById('height').addEventListener('change', () => { timePerImage = null; }); |
||||
|
||||
const exportBtn = document.getElementById('export-btn'); |
||||
const importBtn = document.getElementById('import-btn'); |
||||
const importFile = document.getElementById('import-file'); |
||||
|
||||
// Initialize range values based on default slider values |
||||
(function initRanges() { |
||||
const gVal = parseFloat(guidanceSlider.value); |
||||
guidanceLow.value = Math.max(CONFIG.GUIDANCE_MIN, gVal - CONFIG.GUIDANCE_SPREAD); |
||||
guidanceHigh.value = Math.min(CONFIG.GUIDANCE_MAX, gVal + CONFIG.GUIDANCE_SPREAD); |
||||
const sVal = parseInt(stepsSlider.value); |
||||
stepsLow.value = Math.max(CONFIG.STEPS_MIN, sVal - CONFIG.STEPS_SPREAD); |
||||
stepsHigh.value = Math.min(CONFIG.STEPS_MAX, sVal + CONFIG.STEPS_SPREAD); |
||||
})(); |
||||
|
||||
function getSettings() { |
||||
return { |
||||
prompt: document.getElementById('prompt').value, |
||||
negative_prompt: document.getElementById('negative-prompt').value, |
||||
seed: seedInput.value ? parseInt(seedInput.value) : null, |
||||
steps: parseInt(stepsSlider.value), |
||||
guidance_scale: parseFloat(guidanceSlider.value), |
||||
count: parseInt(document.getElementById('count').value), |
||||
width: parseInt(document.getElementById('width').value), |
||||
height: parseInt(document.getElementById('height').value), |
||||
add_quality_keywords: document.getElementById('quality-keywords').checked, |
||||
increment_seed: incrementSeedCheckbox.checked, |
||||
vary_guidance: varyGuidanceCheckbox.checked, |
||||
guidance_low: parseFloat(guidanceLow.value), |
||||
guidance_high: parseFloat(guidanceHigh.value), |
||||
vary_steps: varyStepsCheckbox.checked, |
||||
steps_low: parseInt(stepsLow.value), |
||||
steps_high: parseInt(stepsHigh.value) |
||||
}; |
||||
} |
||||
|
||||
function applySettings(settings) { |
||||
if (settings.prompt !== undefined) { |
||||
document.getElementById('prompt').value = settings.prompt; |
||||
} |
||||
if (settings.negative_prompt !== undefined) { |
||||
document.getElementById('negative-prompt').value = settings.negative_prompt; |
||||
} |
||||
if (settings.seed !== undefined && settings.seed !== null) { |
||||
seedInput.value = settings.seed; |
||||
} else { |
||||
seedInput.value = ''; |
||||
} |
||||
if (settings.steps !== undefined) { |
||||
stepsSlider.value = settings.steps; |
||||
stepsValue.textContent = settings.steps; |
||||
} |
||||
if (settings.guidance_scale !== undefined) { |
||||
guidanceSlider.value = settings.guidance_scale; |
||||
guidanceValue.textContent = settings.guidance_scale; |
||||
} |
||||
if (settings.count !== undefined) { |
||||
document.getElementById('count').value = settings.count; |
||||
} |
||||
if (settings.width !== undefined) { |
||||
document.getElementById('width').value = settings.width; |
||||
} |
||||
if (settings.height !== undefined) { |
||||
document.getElementById('height').value = settings.height; |
||||
} |
||||
if (settings.add_quality_keywords !== undefined) { |
||||
document.getElementById('quality-keywords').checked = settings.add_quality_keywords; |
||||
} |
||||
if (settings.increment_seed !== undefined) { |
||||
incrementSeedCheckbox.checked = settings.increment_seed; |
||||
} |
||||
if (settings.vary_guidance !== undefined) { |
||||
varyGuidanceCheckbox.checked = settings.vary_guidance; |
||||
guidanceRangeDiv.style.display = settings.vary_guidance ? 'flex' : 'none'; |
||||
guidanceGroup.style.display = settings.vary_guidance ? 'none' : 'block'; |
||||
} |
||||
if (settings.guidance_low !== undefined) { |
||||
guidanceLow.value = settings.guidance_low; |
||||
} |
||||
if (settings.guidance_high !== undefined) { |
||||
guidanceHigh.value = settings.guidance_high; |
||||
} |
||||
if (settings.vary_steps !== undefined) { |
||||
varyStepsCheckbox.checked = settings.vary_steps; |
||||
stepsRangeDiv.style.display = settings.vary_steps ? 'flex' : 'none'; |
||||
stepsGroup.style.display = settings.vary_steps ? 'none' : 'block'; |
||||
} |
||||
if (settings.steps_low !== undefined) { |
||||
stepsLow.value = settings.steps_low; |
||||
} |
||||
if (settings.steps_high !== undefined) { |
||||
stepsHigh.value = settings.steps_high; |
||||
} |
||||
|
||||
// Sync ranges to slider values if vary mode is off |
||||
if (!varyGuidanceCheckbox.checked) { |
||||
const gVal = parseFloat(guidanceSlider.value); |
||||
guidanceLow.value = Math.max(CONFIG.GUIDANCE_MIN, gVal - CONFIG.GUIDANCE_SPREAD); |
||||
guidanceHigh.value = Math.min(CONFIG.GUIDANCE_MAX, gVal + CONFIG.GUIDANCE_SPREAD); |
||||
} |
||||
if (!varyStepsCheckbox.checked) { |
||||
const sVal = parseInt(stepsSlider.value); |
||||
stepsLow.value = Math.max(CONFIG.STEPS_MIN, sVal - CONFIG.STEPS_SPREAD); |
||||
stepsHigh.value = Math.min(CONFIG.STEPS_MAX, sVal + CONFIG.STEPS_SPREAD); |
||||
} |
||||
} |
||||
|
||||
exportBtn.addEventListener('click', () => { |
||||
const settings = getSettings(); |
||||
const json = JSON.stringify(settings, null, 2); |
||||
const blob = new Blob([json], { type: 'application/json' }); |
||||
const url = URL.createObjectURL(blob); |
||||
const a = document.createElement('a'); |
||||
a.href = url; |
||||
a.download = 'sd-settings.json'; |
||||
a.click(); |
||||
URL.revokeObjectURL(url); |
||||
}); |
||||
|
||||
importBtn.addEventListener('click', () => { |
||||
importFile.click(); |
||||
}); |
||||
|
||||
importFile.addEventListener('change', (e) => { |
||||
const file = e.target.files[0]; |
||||
if (!file) return; |
||||
|
||||
const reader = new FileReader(); |
||||
reader.onload = (event) => { |
||||
try { |
||||
const settings = JSON.parse(event.target.result); |
||||
applySettings(settings); |
||||
timePerImage = null; |
||||
setStatus('Settings imported', 'success'); |
||||
} catch (err) { |
||||
setStatus('Invalid settings file', 'error'); |
||||
} |
||||
}; |
||||
reader.readAsText(file); |
||||
importFile.value = ''; |
||||
}); |
||||
|
||||
function setStatus(text, type) {
    statusText.textContent = text;
    statusDiv.className = 'status ' + type;
}

function showProgress(show) {
    progressContainer.style.display = show ? 'block' : 'none';
}

function updateProgress(percent, eta) {
    progressFill.style.width = Math.min(100, percent) + '%';
    if (eta !== null) {
        progressText.textContent = `${Math.round(percent)}% - ETA: ${Math.ceil(eta)}s`;
    } else {
        progressText.textContent = `${Math.round(percent)}%`;
    }
}

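// Client-side progress is an estimate: it advances linearly against the
// expected per-image time, then past the 90% mark it slows down
// asymptotically (capped at 99%) until the server reports the image is done.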
function startProgressTimer(estimatedTime) {
    imageStartTime = Date.now();
    stopProgressTimer();

    progressInterval = setInterval(() => {
        const elapsed = (Date.now() - imageStartTime) / 1000;
        let percent, eta;

        if (elapsed < estimatedTime * 0.9) {
            percent = (elapsed / estimatedTime) * 100;
            eta = estimatedTime - elapsed;
        } else {
            const overTime = elapsed - (estimatedTime * 0.9);
            const slowFactor = 1 + overTime * 0.5;
            percent = 90 + (9 * (1 - 1 / slowFactor));
            eta = null;
        }

        updateProgress(Math.min(99, percent), eta);
    }, 100);
}

function stopProgressTimer() {
    if (progressInterval) {
        clearInterval(progressInterval);
        progressInterval = null;
    }
}

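// Main generation flow: POST the form settings to /generate and consume the
// response as a stream of server-sent events, one per finished image.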
form.addEventListener('submit', async (e) => {
    e.preventDefault();

    const prompt = document.getElementById('prompt').value.trim();
    if (!prompt) {
        setStatus('Please enter a prompt', 'error');
        return;
    }

    generateBtn.disabled = true;
    generateBtn.textContent = 'Generating...';
    results.innerHTML = '';
    showProgress(true);

    const seedValue = seedInput.value.trim();
    let requestedCount = parseInt(document.getElementById('count').value);

    // If count > 1 but no vary options enabled, reduce to 1
    const hasVaryOption = incrementSeedCheckbox.checked || varyGuidanceCheckbox.checked || varyStepsCheckbox.checked;
    if (requestedCount > 1 && !hasVaryOption) {
        requestedCount = 1;
    }

    setStatus(requestedCount === 1 ? 'Generating image...' : `Generating image 1 of ${requestedCount}...`, 'loading');

    const data = {
        prompt: prompt,
        negative_prompt: document.getElementById('negative-prompt').value.trim(),
        seed: seedValue ? parseInt(seedValue) : null,
        steps: parseInt(stepsSlider.value),
        guidance_scale: parseFloat(guidanceSlider.value),
        count: requestedCount,
        width: parseInt(document.getElementById('width').value),
        height: parseInt(document.getElementById('height').value),
        add_quality_keywords: document.getElementById('quality-keywords').checked,
        increment_seed: incrementSeedCheckbox.checked,
        vary_guidance: varyGuidanceCheckbox.checked,
        guidance_low: parseFloat(document.getElementById('guidance-low').value),
        guidance_high: parseFloat(document.getElementById('guidance-high').value),
        vary_steps: varyStepsCheckbox.checked,
        steps_low: parseInt(document.getElementById('steps-low').value),
        steps_high: parseInt(document.getElementById('steps-high').value)
    };

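    // The ETA is calibrated by measurement: the first image of a run sets
    // timePerImage, which then seeds the progress timer for the rest; until
    // a measurement exists, a configured default estimate is used.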
    let imageCount = 0;
    let generationStartTime = Date.now();
    const estimate = timePerImage || CONFIG.DEFAULT_TIME_ESTIMATE;

    updateProgress(0, estimate);
    startProgressTimer(estimate);

    try {
        const response = await fetch('/generate', {
            method: 'POST',
            headers: {
                'Content-Type': 'application/json'
            },
            body: JSON.stringify(data)
        });

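        // Read the streamed body chunk by chunk. A chunk can split an SSE
        // frame mid-line, so decode with {stream: true}, split on newlines,
        // and keep the trailing partial line in the buffer for the next read.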
        const reader = response.body.getReader();
        const decoder = new TextDecoder();
        let buffer = '';

        while (true) {
            const { done, value } = await reader.read();
            if (done) break;

            buffer += decoder.decode(value, { stream: true });
            const lines = buffer.split('\n');
            buffer = lines.pop();

            for (const line of lines) {
                if (line.startsWith('data: ')) {
                    const jsonStr = line.slice(6);
                    try {
                        const eventData = JSON.parse(jsonStr);

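                        // Three event shapes: {error: ...} aborts the run,
                        // {done: true} ends it, and anything else is an image
                        // payload carrying index/total plus its settings.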
                        if (eventData.error) {
                            setStatus('Error: ' + eventData.error, 'error');
                            stopProgressTimer();
                            showProgress(false);
                        } else if (eventData.done) {
                            setStatus(`Generated ${imageCount} image(s)`, 'success');
                            stopProgressTimer();
                            showProgress(false);
                        } else {
                            const now = Date.now();

                            if (imageCount === 0) {
                                timePerImage = (now - generationStartTime) / 1000;
                            }

                            imageCount++;
                            stopProgressTimer();
                            updateProgress(100, null);
                            addImageCard(eventData);

                            if (eventData.index < eventData.total) {
                                const msg = eventData.total === 1
                                    ? 'Generating image...'
                                    : `Generating image ${eventData.index + 1} of ${eventData.total}...`;
                                setStatus(msg, 'loading');
                                updateProgress(0, timePerImage);
                                startProgressTimer(timePerImage);
                            }
                        }
                    } catch (parseErr) {
                        console.error('Failed to parse SSE data:', parseErr);
                    }
                }
            }
        }
    } catch (error) {
        setStatus('Error: ' + error.message, 'error');
        stopProgressTimer();
        showProgress(false);
    } finally {
        generateBtn.disabled = false;
        generateBtn.textContent = 'Generate';
    }
});

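// Build a result card for one generated image. Note: prompt text is injected
// via innerHTML without escaping, which is tolerable only because this UI is
// local-only and the text comes from the user's own form input.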
function addImageCard(img) {
    const card = document.createElement('div');
    card.className = 'image-card';

    const guidance = typeof img.guidance_scale === 'number'
        ? img.guidance_scale.toFixed(1)
        : img.guidance_scale;

    const negativePromptHtml = img.negative_prompt
        ? `<p><strong>Negative:</strong> ${img.negative_prompt}</p>`
        : '';

    card.innerHTML = `
        <a href="${img.url}" target="_blank"><img src="${img.base64}" alt="Generated image"></a>
        <div class="image-info">
            <p><strong>Seed:</strong> ${img.seed} | <strong>Steps:</strong> ${img.steps} | <strong>Guidance:</strong> ${guidance} | <strong>Size:</strong> ${img.width}x${img.height}</p>
            <p><strong>Prompt:</strong> ${img.prompt}</p>
            ${negativePromptHtml}
            <p><a href="${img.url}" target="_blank">Open saved image</a> | <button type="button" class="use-settings-btn">Use Settings</button> | <button type="button" class="download-settings-btn">Download Settings</button></p>
        </div>
    `;

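    // "Download Settings" saves this image's exact parameters as JSON in the
    // same shape the Import button accepts, so a result can be reproduced later.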
    card.querySelector('.download-settings-btn').addEventListener('click', () => {
        const settings = {
            prompt: img.prompt,
            negative_prompt: img.negative_prompt || '',
            seed: img.seed,
            steps: img.steps,
            guidance_scale: img.guidance_scale,
            width: img.width,
            height: img.height
        };
        const json = JSON.stringify(settings, null, 2);
        const blob = new Blob([json], { type: 'application/json' });
        const url = URL.createObjectURL(blob);
        const a = document.createElement('a');
        a.href = url;
        a.download = `sd-settings-${img.seed}.json`;
        a.click();
        URL.revokeObjectURL(url);
    });

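    // "Use Settings" copies this image's parameters back into the form and
    // turns off the vary modes, so the next run reproduces the image exactly.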
    card.querySelector('.use-settings-btn').addEventListener('click', () => {
        seedInput.value = img.seed;
        document.getElementById('negative-prompt').value = img.negative_prompt || '';
        stepsSlider.value = img.steps;
        stepsValue.textContent = img.steps;
        guidanceSlider.value = img.guidance_scale;
        guidanceValue.textContent = guidance;
        document.getElementById('width').value = img.width;
        document.getElementById('height').value = img.height;
        countInput.value = 1;
        updateVaryOptionsVisibility();

        // Disable vary modes to use exact settings
        varyGuidanceCheckbox.checked = false;
        guidanceRangeDiv.style.display = 'none';
        guidanceGroup.style.display = 'block';
        varyStepsCheckbox.checked = false;
        stepsRangeDiv.style.display = 'none';
        stepsGroup.style.display = 'block';

        // Steps may have changed, so discard the measured per-image time
        timePerImage = null;
    });

    results.appendChild(card);
}
</script>
</body>
</html>