# Doeling-v1 Model Card
Doeling-v1 is a fast, lightweight text-to-image model. It is a fine-tune of SSD-1B with a merged LCM LoRA, trained on images generated by the Faun model. The merged LCM LoRA lets it generate images in as few as 4 sampling steps, making it a good choice for users with low-end hardware who prioritize speed and low VRAM usage.
## Model Files
This repository provides two quantized versions of the model. Choose the one that best fits your hardware:
- `doeling-v1.q8_0.gguf`: The recommended version for higher quality.
- `doeling-v1.q4_k.gguf`: A more compressed version for systems with less memory.
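As a rough guide to which file fits your system, a GGUF quantization level mostly determines the bits stored per weight. A minimal sketch of the size trade-off (the ~1.3B weight count and the bits-per-weight figures are approximations, not measured values for these files):

```python
def approx_gguf_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Rough on-disk size estimate: parameters * bits, converted to GB."""
    return n_params * bits_per_weight / 8 / 1e9

# Assumed figures: ~1.3e9 weights; q8_0 ~ 8.5 bits/weight,
# q4_k ~ 4.5 bits/weight (both approximate).
n = 1.3e9
for name, bpw in [("q8_0", 8.5), ("q4_k", 4.5)]:
    print(f"{name}: ~{approx_gguf_size_gb(n, bpw):.1f} GB")
```

The q4_k file is roughly half the size of q8_0, at some cost in output quality.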
## Usage with DREAMIO (Local Generation)
You can use this model on your own hardware by loading it into DREAMIO.
### Step-by-Step Guide
1. **Download and Place the Model:**
   - Download the appropriate GGUF model file (`doeling-v1.q8_0.gguf` or `doeling-v1.q4_k.gguf`) from the "Files and versions" tab.
   - Place the file in a folder of your choice. Alternatively, you can put it in the default models folder located at `[GAME FOLDER]/Files/Models/Images/Checkpoints/`.
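If you prefer to script the download, direct file URLs follow Hugging Face's standard resolve pattern. A minimal sketch that only builds the URL and target path (the URL layout is the standard huggingface.co pattern; `[GAME FOLDER]` is a placeholder for your install path, as above):

```python
from pathlib import Path

REPO = "OlegSkutte/Doeling-v1-GGUF"

def hf_file_url(repo: str, filename: str, revision: str = "main") -> str:
    """Build the direct-download URL for a file in a Hugging Face repo."""
    return f"https://huggingface.co/{repo}/resolve/{revision}/{filename}"

# Default DREAMIO checkpoints folder; replace [GAME FOLDER] yourself.
dest = Path("[GAME FOLDER]/Files/Models/Images/Checkpoints")
url = hf_file_url(REPO, "doeling-v1.q8_0.gguf")
print(url)
print(dest / "doeling-v1.q8_0.gguf")
```

Feed the printed URL to any downloader (browser, `curl`, etc.) and save the file to the printed path.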
2. **Configure DREAMIO:**
   - Launch DREAMIO and navigate to Image Generation Settings.
   - Set the Provider to `Local`.
   - If you used a custom folder, click Models folder and select that folder. If you used the default folder, DREAMIO will find it automatically.
   - From the Model dropdown menu, choose the Doeling-v1 model you downloaded.
3. **Apply Recommended Settings:**
   - Enter the parameters from the "Recommended Settings" section below to achieve the intended output quality and style.
   - For detailed instructions on this setup, please refer to the official guide: Local Image Generation Setup (Built-in Engine).
## Recommended Settings
To get the best results, apply the following parameters in Image Generation Settings:
- Dimensions: 1024 x 1024
- Sampling steps: 4-8
- Classifier-free guidance scale: 1.00
- Sampler: `LCM`
- Scheduler: `SMOOTHSTEP`
- TAESD: `taesdxl.q8_0.gguf`
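The settings above can be kept as a single snapshot, e.g. for your own notes or scripts. A minimal sketch (the dictionary keys are hypothetical labels for this card's values, not DREAMIO's internal setting names):

```python
# Recommended Doeling-v1 settings from this card; key names are
# illustrative, not DREAMIO's actual configuration schema.
DOELING_V1_SETTINGS = {
    "width": 1024,
    "height": 1024,
    "sampling_steps": 6,   # anywhere in the recommended 4-8 range
    "cfg_scale": 1.0,      # LCM-merged models expect guidance ~1.0
    "sampler": "LCM",
    "scheduler": "SMOOTHSTEP",
    "taesd": "taesdxl.q8_0.gguf",
}

def validate(settings: dict) -> None:
    """Sanity-check values against this card's recommendations."""
    assert 4 <= settings["sampling_steps"] <= 8
    assert settings["cfg_scale"] == 1.0
    assert settings["sampler"] == "LCM"

validate(DOELING_V1_SETTINGS)
```

Note that the guidance scale of 1.00 effectively disables classifier-free guidance, which is expected for LCM-style few-step sampling.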
## Hardware Requirements
This model is optimized for local inference on consumer GPUs.
- GPU (Recommended): For best performance, use a dedicated GPU with 6GB of VRAM or more.
- CPU: If you do not have a compatible GPU, you can run the model on your CPU by selecting the `AVX2` or `NoAVX` backend. Please note that this will be significantly slower.
For users with limited VRAM, enabling the Autoencoder tiling or Offload to CPU options in the Image Generation Settings can help reduce VRAM usage and prevent "out of memory" errors.
## Model Tree for OlegSkutte/Doeling-v1-GGUF
- Base model: segmind/SSD-1B