Trending

#ControlNet

Latest posts tagged with #ControlNet on Bluesky


Posts tagged #ControlNet

Post image

New online course invites us to think about how bodies perform with and within the latent spaces of technological media.
#comfyui #controlnet #LoRA
***Latent Bodies***
30 Mar – 4 May
Five weeks, Mondays, 7–9 PM CET
For more info and to sign-up www.schoolofma.org/programs/p/e...

1 0 0 0
Video

A brief preview of the creative process behind the Nature Dance music video. I spent dozens of hours designing and learning how to use #stablediffusion and #controlnet to achieve this video effect, called #illusionDiffusion. Full music video: www.youtube.com/watch?v=OdQp...

0 0 0 0
iorek b - Nature Dance #music #electronicmusic #dance #tutorial #stablediffusion #illusiondiffusion

A brief preview of the creative process behind the Nature Dance music video. I spent dozens of hours designing and learning how to use #stablediffusion and #controlnet to achieve this video effect, called #illusionDiffusion.
www.youtube.com/shorts/tGoRK...

0 0 0 0
iorek b - Nature Dance #music #electronicmusic #dance #tutorial #stablediffusion #illusiondiffusion

A brief preview of the creative process behind the Nature Dance music video. I spent dozens of hours designing and learning how to use #stablediffusion and #controlnet to achieve this video effect, called #illusionDiffusion.
www.youtube.com/shorts/bQHZ7...

0 0 0 0
iorek b - Nature Dance #music #electronicmusic #dance #tutorial #stablediffusion #illusiondiffusion

A brief preview of the creative process behind the Nature Dance music video. I spent dozens of hours designing and learning how to use #stablediffusion and #controlnet to achieve this video effect, called #illusionDiffusion.

www.youtube.com/shorts/AWcto...

1 0 0 0
iorek b - Nature Dance (Official Music Video)

I spent dozens of hours designing and learning how to use #stablediffusion and #controlnet to achieve this video clip with the #illusionDiffusion technique!

I really enjoy the result; I hope you'll like it too.

www.youtube.com/watch?v=OdQp...

0 0 0 0
SemanticControl: Training-Free Boost for ControlNet with Loose Cues


SemanticControl boosts ControlNet via a training-free mask extraction that adds only one extra forward pass, keeping cost low; the paper was submitted in September 2025. getnews.me/semanticcontrol-training... #semanticcontrol #controlnet

0 0 0 0
Video

Testing my #renderformer wrapper and #uni3c #controlnet for camera control in #WAN 2.1 video generation 📽️

In this example I am using a rendered 3D animation as a driver for the img2video generation without leaving @ComfyUI.

#ComfyUI #WANVideo #Ai #AIinArchitecture #AIGeneratedVisuals

0 0 0 0
Post image

Taking on character variants with an SDXL model! ✏️✨ Using LoRA, ControlNet, and IP-Adapter, I can now produce high-quality variants with few artifacts while keeping the art style intact! 🙌 But hands are still the hard part… 😭

#ComfyUI #AIアート #キャラデザ #lora #controlnet #ipadapter #AIイラスト #SDXL

7 0 0 0
Post image

Cyber AI Sentinel QR Code #ai #brain #qrcode #qr #qrcodedesign #aiart #controlnet #qrcodeart #cyber #art #deepseek #aitechnology #techart #bsky

3 0 0 0
Post image

AI brain QR code art

#ai #brain #qrcode #qr #qrcodedesign #aiart #controlnet #qrcodeart #cyber #art #bskyart

4 0 0 0
Post image Post image Post image Post image

Exciting session this morning on large language model applications to Earth Observation #lps25 with Solene Debuysere and Madleen Bultez first #DIVA #controlnet

0 0 0 0
image

🎨 Give your AI art some serious style with ControlNet in ComfyUI! Use depth maps, edge detection & more to actually guide your AI like a pro. Less random, more wow. 🧠💥

🔗 Learn the magic: https://tinyurl.com/59x7jhtt

#AIArt #ComfyUI #ControlNet

1 0 0 0
Post image

Want to create stunning visuals with cutting-edge AI? Explore 5 top AI tools that use ControlNet in 2025 for next-level results. Discover more today.

Visit the site: ebookbusinessclub.com/ai-tools-tha...

#AITools #ControlNet #VisualAI

0 0 0 0
Preview
The Yoga of Image Generation – Part 2

In the first part of this series on image generation, we explored how to set up a simple Text-to-Image workflow using Stable Diffusion and ComfyUI, running it locally. We also introduced embeddings to enhance prompts and adjust image styles. However, we found that embeddings alone were not sufficient for our specific use case: generating accurate yoga poses.

## Simple Image-to-Image Workflow

Let's now take it a step further and provide an image alongside the text prompts to serve as a base for the generation process. Here is the workflow in ComfyUI: I use an image of myself and combine it with a prompt specifying that the output image should depict a woman wearing blue yoga pants instead. This image is converted into the latent space and used in the generation process instead of starting from a fully noisy latent image. I apply only 55% denoising.

We can see that the output image resembles the input image. The pose is identical, the subject is now a woman, but the surroundings are similar, and she is not wearing blue pants. Of course, I can tweak the prompt and change the generation seed. I can also adjust the denoising percentage. Here is the result with a 70% value: the image quality is better, and the pants are more blue, but the pose has changed slightly: her head is tilted down, and her left hand is not in the same position. There's a trade-off between pose accuracy and the creative freedom given to the model.

## ControlNets

Rather than injecting the entire input image into the generation process, it's more efficient to transfer only specific characteristics. That's where Control Networks (or ControlNets) come in. ControlNets are additional neural networks that extract features from an image and inject them directly into the latent space and the generation process.
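To make the denoising trade-off above concrete, here is a minimal sketch (the function name and the 20-step schedule are hypothetical illustrations, not the article's code) of how a denoise value typically maps onto a sampler's schedule in an image-to-image workflow:

```python
def img2img_start_step(num_steps: int, denoise: float) -> int:
    """Return the index of the first sampling step that actually runs.

    In image-to-image, the input latent is noised to an intermediate
    point on the schedule and denoised from there: denoise=1.0 starts
    from pure noise (plain text-to-image), while denoise=0.0 runs no
    steps and returns the input image unchanged.
    """
    steps_run = int(round(num_steps * denoise))
    return num_steps - steps_run

# Comparing 55% vs. 70% denoising on a hypothetical 20-step schedule:
print(img2img_start_step(20, 0.55))  # 9  -> 11 steps run, stays close to the input
print(img2img_start_step(20, 0.70))  # 6  -> 14 steps run, more creative freedom
```

More steps run means more of the input's structure is replaced, which is exactly why the 70% result drifted away from the original pose.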
Control methods specialize in detecting different types of image features, such as:

* **Structural**: pose, edge detection, segmentation, depth
* **Texture & Detail**: scribbles/sketches, stylization from edges
* **Content & Layout**: bounding boxes, inpainting masks
* **Abstract & Style**: color maps, textural fields

Most ControlNets work with preprocessors that extract specific features from input images. Here are some examples: Here's our workflow updated to include a Depth ControlNet: I've reverted to an empty latent image so we can focus only on the depth features detected by the preprocessor and injected into the latent space by the ControlNet. The main parameters to tune are the strength of the ControlNet (here, 50%) and when it is applied during the generation (here, throughout the entire process). By tweaking these settings, you can adjust how much the ControlNet influences the final image and, once again, find the best balance between control and creativity. I can still apply an embedding to achieve a specific style, for example a comic style. There is even an OpenPose ControlNet, specifically trained to detect and apply human poses, but unfortunately it is not accurate enough for yoga poses.

## Advanced Image-to-Image Workflow

Now that we're extracting only certain features, we can use more abstract images as inputs, focusing on the pose and letting Stable Diffusion handle the rest. After multiple tests, I decided to combine two ControlNets: one for Edge Detection (Canny Edge, 40% strength) and one for Depth (30% strength). Here's the resulting workflow: Watch this video to see the process in action with two fine-tuned SDXL models:

* Juggernaut XL
* Cheyenne, specialized in comic and graphic novel styles

Neat! I can now control the pose using ControlNets and influence the rest of the image with prompts and embeddings. I just need to change the input image in the workflow to generate an entire series.
Here are a few examples using image-compare mode: This is super convenient since my use case involves generating sequences—or even full yoga classes. But how can I ensure that the woman in each pose remains the same? How do I maintain visual identity and consistency across the sequence of images? We’ll cover that in the final part of this series. So stay tuned—and check out my YouTube tutorials as well.
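As a rough illustration of how two ControlNet strengths interact: each ControlNet contributes additive residuals to the denoiser's intermediate features, and its strength simply scales that contribution before the residuals are summed. A toy numpy sketch (illustrative only; not the ComfyUI implementation, and the array shapes are made up):

```python
import numpy as np

def combine_controlnet_residuals(residuals, strengths):
    """Sum per-ControlNet feature residuals, each scaled by its strength."""
    out = np.zeros_like(residuals[0])
    for res, strength in zip(residuals, strengths):
        out += strength * res
    return out

# Stand-ins for the Canny and Depth ControlNet outputs (unit residuals):
canny = np.ones((4, 8, 8))
depth = np.ones((4, 8, 8))

# Canny at 40% strength, Depth at 30%, as in the workflow above:
mix = combine_controlnet_residuals([canny, depth], [0.40, 0.30])
assert np.allclose(mix, 0.70)  # 0.4 + 0.3 on identical unit residuals
```

This is why lowering one strength while raising the other shifts the balance between edge fidelity and depth fidelity without retraining anything.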
0 0 0 0
Original vehicle render in Blender. (Not my art.)

Wildcard XL TURBO gen using cryptomatte masks to maintain consistency.

Playing around with controlnet maps in order to get cleaner vehicle gens. I had to use cryptomattes in order to maintain color consistency of the wheels. (Descriptions in the ALT text.)

#AIArtistCommunity #stablediffusion #controlnet #racecars

0 0 0 0
Post image

Mediated Bodies Zine is out! And in the nick of time, as our Mediated Bodies five-week online course begins this eve! Have a look, get inspired, and consider joining us! <3 bit.ly/mediatedBodi... #bodies #motiontracking #audiovisual #rokoko #touchdesigner #davinci #controlnet #stablediffusion

0 0 0 0
Post image

Welcome to Mediated Bodies, an upcoming five-week online course beginning soon! For more info and to join, visit: schoolofma.org/programs/p/e...

#bodies #motiontracking #technology #audiovisual #rokoko #touchdesigner #davinci #controlnet #stablediffusion

0 1 0 0
Realtime AI image generation

New setup for my students
#ComfyUI #Flux #ControlNet

2 1 0 0
Enhanced i2i with ControlNets & ComfyUI (YouTube video by Tech at Worldline)

Explore "Mastering ControlNets for i2i in ComfyUI" and boost your image generation skills.
Learn to use input images, modify prompts, and leverage ControlNets for amazing results!
🔗 youtu.be/9QRz5cKQCUg

#StableDiffusion #ComfyUI #ControlNet #TechAtWorldline

0 0 0 0
Post image Post image

So Canny takes the artwork and separates the outlines from everything else into black and white, then uses that as a scaffold (a skeleton) to generate the image from the prompt.
#controlnet
#stablediffusion
#AIart
#AIイラスト

13 2 0 1
Workflow:DetailUpControlNet(Depth + Canny)
ckpt_name:illunextNoobai_v2.safetensors
steps:35 CFG:6.0 Euler/normal

LoRA:ponyv5_noobE11_1_adamW-000013.safetensors 0.8
LoRA:add-detail-xl.safetensors 3.0
LoRA:cutesexyrobutts_style_illustrious_goofy.safetensors 0.3
LoRA:aruhshuraSDXL.safetensors 0.6
LoRA:spo_sdxl_10ep_4k-data_lora_webui.safetensors -1.5
LoRA:anmiXL_il_lokr_V53P1.safetensors 0.8

Workflow:DetailUpControlNet(Depth + Canny)
ckpt_name:illunextNoobai_v2.safetensors
steps:35 CFG:6.0 Euler/normal

LoRA:ponyv5_noobE11_1_adamW-000013.safetensors 0.8
LoRA:add-detail-xl.safetensors 3.0
LoRA:cutesexyrobutts_style_illustrious_goofy.safetensors 0.3
LoRA:aruhshuraSDXL.safetensors 0.6
LoRA:spo_sdxl_10ep_4k-data_lora_webui.safetensors -1.5
LoRA:748cmXL_il_lokr_V5311P.safetensors 1.0

#ComfyUI #AIArt #Illustrious #LoRA #ControlNet

5 1 0 0
Workflow:DetailUpControlNet(Depth + Canny)
ckpt_name:animagineXL40_v40
steps:30 CFG:6.0 Euler/normal
LoRA:None

Workflow:DetailUpControlNet(Depth + Canny)
ckpt_name:4nimaPencilXL_v101
steps:30 CFG:6.0 Euler/normal
LoRA:None

Workflow:DetailUpControlNet(Depth + Canny)
ckpt_name:animagineXLV31_v31
steps:30 CFG:6.0 Euler/normal
LoRA:None

Workflow:DetailUpControlNet(Depth + Canny)
ckpt_name:animaPencilXL_v500
steps:30 CFG:6.0 Euler/normal
LoRA:None

#ComfyUI #AIArt #SDXL #ControlNet
I learned from an article that these had been released, so I gave them a spin. The models, in order:
"Animagine XL 4.0"
"4nima_pencil-XL"
"animagineXLV31_v31"
"animaPencilXL_v500"
Same workflow with a fixed seed, changing only the model.
Upscale: 896×1152 → 1120×1440

4 0 0 0
Preview
(AI Fiction) My Prompt Is Me / A Prompt That Writes Itself - MochiMermaid's AI Art Adventures. The glare of the monitor cuts through the dim room. Watching the progress bar inch forward, I let out a deep breath. "This time, the ideal image…!" My fingers tighten around the mouse. Beyond the Stable Diffusion UI, a sea of noise begins to take shape. The sampler is DPM++ 2M Karras, the resolution 768×768. The prompt reads "a fantastical otherworld city, floating islands, a sky wrapped in magical light" and...

I posted a new AI-themed short story on my Hatena Blog!
(AI Fiction) My Prompt Is Me / A Prompt That Writes Itself - MochiMermaid's AI Art Adventures mochimermaid.hateblo.jp/entry/2025/0...
#はてなブログ #ChatGPT #OpenAI #StableDiffusion #AI小説 #AI画像 #AIイラスト #ControlNet #プロンプト #短編小説 #画像生成 #AIart

4 0 0 0
Post image

🔥~Premium Commission Example # 6~by Fenyx-Fantasy~🔥
#linkinbio #deviantart #ai #aiart #porn #aigirl #aiartist #aiartcommunity #nsfwai #aiporn #nsfw #sexy #woman #girl #woman #girls #lesbians #tits #pussy #strapon #stablediffusion #controlnet

32 1 1 0
Post image Post image Post image

FLUX Controlnet Depth is now available in our easy workflows. Just upload a photo and write a prompt, and it will work just like a stencil. Use cases include redesigning a game character, restyling a room, or putting a character into a pose. Try it today at graydient.ai #controlnet #ai #genai

2 0 0 0
🎵 Au petit matin (iorek b) #music #hardtechno #epic #snowboarding

After several experiments with #stableDiffusion, #controlnet and #deforum, I produced this short, which I'm quite proud of, on a sound from my new album due out in 2025 (🎵 Au petit matin) #aiArt #aiVideo #hardtechno #music #electronicMusic #epic #snowboarding
www.youtube.com/shorts/lC-E-...

2 0 0 0
Post image

Uncanny control.

I posted a detailed analysis on DA of how to get the most out of controlnet canny edge maps.

#AIArtistCommunity #controlnet #AIarttips #Furryaiartcommunity

www.deviantart.com/fossycat/jou...

4 1 0 0