Why can't Merlin AI provide the same output as ChatGPT when creating Ghibli Style art?
Bonkers isn't generating the expected outcome, and why can't DALL-E accept an image as input? This kinda sucks.
I've used this prompt: "Transform the uploaded image into Studio Ghibli-style art. Reimagine the photo's subject as a character inspired by Spirited Away. Use the signature Ghibli soft textures, warm lighting, and intricate details. Preserve the person's recognizable features while giving them an expressive, animated look."
But nothing Bonkers produces comes anywhere close to what ChatGPT was able to provide.