In this guide, we will walk you through using the ControlNet Canny and Image to Canny nodes in Takomo. This process lets you generate new images based on the contours extracted from an input image.
To begin, start a new project in Takomo. In the input section of your new project, add two input fields: a text field and an image field. The text will serve as your prompt, and the image will be your starting image.
You will also need an output field to hold the result of the process, which is an image.
On the left side of the interface, you will find the two nodes this pipeline needs: ControlNet Canny and Image to Canny. Image to Canny extracts an edge map from an image, while ControlNet Canny uses that edge map to guide image generation.
First, draw the edge map. Add an image (for example, an image of a skyline) and connect it to the input of Image to Canny. Run the pipeline to generate the skyline's contours.
If you're not satisfied with the contours, adjust the thresholds to tune the detector's sensitivity. This is particularly useful when your image has very low or very high contrast.
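To build intuition for what the thresholds do, here is a small sketch. This is not Takomo's implementation — just a toy double-threshold pass over gradient magnitude (a real Canny detector also smooths the image, thins edges, and tracks hysteresis), and the image and threshold values below are made up for illustration.

```python
import numpy as np

def toy_edge_map(image, low_threshold, high_threshold):
    """Toy double-threshold edge detector.

    A simplified sketch of Canny's thresholding stage; the real
    detector also does Gaussian smoothing, non-maximum suppression,
    and hysteresis tracking.
    """
    gy, gx = np.gradient(image.astype(float))
    magnitude = np.hypot(gx, gy)
    strong = magnitude >= high_threshold            # definite edges
    weak = (magnitude >= low_threshold) & ~strong   # candidate edges
    return strong, weak

# A synthetic "skyline": two buildings of different brightness.
img = np.zeros((32, 32))
img[10:20, 5:15] = 200.0   # high-contrast building
img[15:25, 18:28] = 120.0  # lower-contrast building

strong, weak = toy_edge_map(img, low_threshold=20, high_threshold=80)
# Raising high_threshold drops the faint building's outline from the
# strong set; lowering the thresholds keeps more (noisier) contours.
```

The threshold controls on the Image to Canny node behave analogously: lower values keep more contours (useful for low-contrast images), higher values keep only the strongest edges (useful for busy, high-contrast images).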
Once you're happy with the edge map, you can use it to generate a new image. Add the next node, ControlNet Canny, and connect the image output of Image to Canny to its Canny Edge map input. This lets the model paint a new image within the contours of the edge map.
Next, you will need a prompt. For example, to transform an everyday-looking skyline into a futuristic city, your prompt could be "a futuristic city". Connect this prompt to the second node's 2D text prompt input.
Now you are ready to run your pipeline. The result should be a new image generated based on the contours of your original image and the prompt you provided.
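Conceptually, the graph you just built is a two-stage composition: an edge extractor feeding a conditioned generator. The sketch below mirrors that dataflow with hypothetical stand-ins (`fake_edges` and `fake_generate` are not real Takomo or ControlNet APIs — they only show how the pieces connect).

```python
def run_pipeline(image, prompt, edge_fn, generate_fn):
    # Mirrors the Takomo graph: image -> edge map -> ControlNet Canny.
    edge_map = edge_fn(image)
    return generate_fn(prompt=prompt, control_image=edge_map)

# Hypothetical stand-ins for the two nodes:
def fake_edges(image):
    # Pretend edge detector: mark bright pixels.
    return [[1 if px > 128 else 0 for px in row] for row in image]

def fake_generate(prompt, control_image):
    # Pretend generator: just record what it was asked to do.
    return {"prompt": prompt, "control_image": control_image}

result = run_pipeline(
    image=[[0, 200], [200, 0]],
    prompt="a futuristic city",
    edge_fn=fake_edges,
    generate_fn=fake_generate,
)
```

Swapping either stage (a different edge detector, a different prompt) changes the output without touching the rest of the graph, which is why the two nodes are kept separate in Takomo.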