What do you think of my LLM Chat app so far?
Here are some of the features already included (and more are coming):
- Chat with AI models – Local inference via Ollama
- Reasoning support – View model thinking process (DeepSeek-R1, Qwen-QwQ, etc.)
- Vision models – Analyze images with llava, bakllava, moondream
- Image generation – Local GGUF models with GPU acceleration (CUDA)
- Fullscreen images – Click generated images to view in fullscreen
- Image attachments – File picker or clipboard paste (Ctrl+V)
- DeepSearch – Web search with tool use
- Inference Stats – Token counts, speed, duration (similar to Ollama's --verbose output)
- Regenerate – Re-run any AI response
- Copy – One-click copy AI responses
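For anyone curious how the inference stats work: Ollama's API returns timing fields (in nanoseconds) alongside each response, so tokens/sec is just a division away. A minimal sketch, assuming the field names from Ollama's /api/generate response (the sample numbers are made up):

```python
def tokens_per_second(stats: dict) -> float:
    # Ollama reports eval_duration in nanoseconds and
    # eval_count as the number of generated tokens.
    return stats["eval_count"] / stats["eval_duration"] * 1e9

# Example with made-up numbers: 120 tokens generated in 3 seconds.
stats = {"eval_count": 120, "eval_duration": 3_000_000_000}
print(round(tokens_per_second(stats), 1))  # 40.0
```

The same response also carries prompt_eval_count and total_duration, which is where the token counts and overall duration in the stats panel come from.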