Premiered Aug 21, 2023
This video introduces new work implementing stable diffusion models in plain C++. It builds on ggml, the tensor library already powering popular LLM projects such as llama.cpp.
The video also walks you step by step through installing stable-diffusion.cpp locally and compiling it for your CPU.
It also shows how to convert a regular safetensors or ckpt model into the ggml bin format.
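As a minimal sketch of the install-and-compile step, assuming a standard CMake setup as described in the repo (verify the exact steps against the current README):

```shell
# Clone the repo together with its ggml submodule
# (assumes git, cmake, and a C++ compiler are installed)
git clone --recursive https://github.com/leejet/stable-diffusion.cpp
cd stable-diffusion.cpp

# Configure and build an optimized CPU binary
mkdir build && cd build
cmake ..
cmake --build . --config Release
```

The same CMake invocation works across Linux, macOS, and Windows, which is what makes the project easy to compile for any operating system.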
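The conversion step might look like the sketch below. The script name `convert.py` and the `--out_type` flag are assumptions for illustration; check the repo for the actual conversion script and its options:

```shell
# Convert an SD 1.x checkpoint (.safetensors or .ckpt) into a ggml .bin file
# 'convert.py' and '--out_type' are illustrative names, not confirmed by the source
python convert.py sd-v1-4.ckpt --out_type f16
```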
After this video you will be able to:
1- run stable-diffusion.cpp locally
2- compile it for any operating system
3- convert any SD 1.x model to ggml format
4- run inference with ggml models
5- perform text2image and image2image
You can run these models on a CPU with as little as 2 GB of RAM.
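Putting it together, text2image and image2image runs with the compiled `sd` binary might look like the following. The flag names here are assumptions based on common CLI conventions; run the binary with `--help` to see the real options:

```shell
# text2image: generate an image from a prompt using a converted ggml model
# (flag names are illustrative assumptions)
./sd -m ./model-f16.bin -p "a photo of a cat" -o output.png

# image2image: start from an existing image instead of pure noise
./sd --mode img2img -m ./model-f16.bin -i input.png -p "a painting of a cat" -o output2.png
```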
GitHub repo : https://github.com/leejet/stable-diff...
Civitai realistic model: https://civitai.com/models/4201/reali...
Connect with me:
🔹 Facebook :   / proogramminghub
🔹 Twitter :   / programming_hut
🔹 Github : https://github.com/Pawandeep-prog
🔹 Discord :   / discord
🔹 LinkedIn :   / programminghut
🔹 YouTube :   / programminghutofficial
Tags: stable diffusion, image generation, text to image, image to image, latent noise, deep learning, c++ version, cpp version, stable-diffusion.cpp, stable diffusion cpp, how to install, how to run, locally, 2gb memory, ram, low configuration