AI-powered content generation is now embedded in everyday tools like Adobe and Canva, with a slew of agencies and studios incorporating the technology into their workflows. Image models now deliver photorealistic results consistently, video models can generate long, coherent clips, and both can follow creative direction. Creators are increasingly running these workflows locally on PCs to keep assets under direct control, avoid cloud service costs and eliminate the friction of iteration - making it easier to refine outputs at the pace real creative projects demand.
NVIDIA RTX PCs have long been the system of choice for running creative AI: their high performance reduces iteration time, and users can run models on them for free, removing token anxiety.
With recent RTX optimizations and new open-weight models introduced at CES earlier this month, creatives can work faster, more efficiently and with far greater creative control.
How to Get Started

Getting started with visual generative AI can feel complex. Online AI generators are easy to use but offer limited control.
Open source community tools like ComfyUI simplify setting up advanced creative workflows and are easy to install. They also provide an easy way to download the latest and greatest models, such as FLUX.2 and LTX-2, as well as top community workflows.
Here's how to get started with visual generative AI locally on RTX PCs using ComfyUI and popular models:
Visit comfy.org to download and install ComfyUI for Windows.
Launch ComfyUI.
Create an initial image using the starter template:
Click on the Templates button, then on Getting Started and choose 1.1 Starter Text to Image.
Connect the model Node to the Save Image Node. The nodes work in a pipeline to generate content using AI.
Press the blue Run button and watch the nodes highlight in green as the RTX-powered PC generates its first image.
Change the prompt and run it again to dive deeper into the creative world of visual generative AI.
Read more below on how to dive into additional ComfyUI templates that use more advanced image and video models.
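For creators who later want to automate steps like the ones above, ComfyUI also exposes a local HTTP API (by default at http://127.0.0.1:8188) that accepts the same node graphs the editor runs. A minimal sketch, assuming a running ComfyUI instance and a workflow previously exported from the editor in API format:

```python
import json
import urllib.request

# Minimal sketch: queue a text-to-image job through ComfyUI's local HTTP API.
# Assumes ComfyUI is running on its default port and that "workflow_api.json"
# is a node graph exported from the editor in API format.
COMFY_URL = "http://127.0.0.1:8188"  # ComfyUI's default local address

def build_prompt_payload(workflow: dict) -> bytes:
    """Wrap a node graph in the JSON body ComfyUI's /prompt endpoint expects."""
    return json.dumps({"prompt": workflow}).encode("utf-8")

def queue_prompt(workflow: dict) -> None:
    """POST the graph to ComfyUI, which queues it for generation."""
    req = urllib.request.Request(
        f"{COMFY_URL}/prompt",
        data=build_prompt_payload(workflow),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(resp.read().decode())

if __name__ == "__main__":
    import os
    if os.path.exists("workflow_api.json"):
        with open("workflow_api.json") as f:
            queue_prompt(json.load(f))
```

Exporting a workflow from the editor and replaying it through the API keeps the graphical tool as the design surface while allowing batch generation from scripts.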
Model Sizes and GPUs

As users get more familiar with ComfyUI and the models that support it, they'll need to consider GPU VRAM capacity and whether a model will fit within it. Here are some examples for getting started, depending on GPU VRAM:
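As a rough rule of thumb, a model's weights alone occupy about (number of parameters) × (bytes per parameter) of VRAM, with extra headroom needed for activations and text encoders. A hedged sketch (the parameter count below is illustrative, not a spec for any particular model):

```python
# Rough VRAM sizing rule of thumb (a sketch, not an official sizing guide):
# weights occupy ~params * bytes-per-parameter, plus working overhead for
# activations and encoders that this estimate deliberately ignores.
BYTES_PER_PARAM = {"fp16": 2.0, "fp8": 1.0, "fp4": 0.5}

def estimate_weights_gb(num_params: float, precision: str) -> float:
    """Approximate size of the model weights alone, in gigabytes."""
    return num_params * BYTES_PER_PARAM[precision] / 1e9

# Example: a hypothetical 12-billion-parameter image model.
for prec in ("fp16", "fp8", "fp4"):
    print(f"{prec}: ~{estimate_weights_gb(12e9, prec):.0f} GB")
# fp16: ~24 GB, fp8: ~12 GB, fp4: ~6 GB
```

This is why lower-precision FP8 and FP4 variants of the same model fit on GPUs with half or a quarter of the VRAM.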
*Use FP4 models with NVIDIA GeForce RTX 50 Series GPUs, and FP8 models with RTX 40 Series GPUs for best results. These lower-precision formats let models use less VRAM while delivering more performance.

Generating Images
To explore how to improve image generation quality using FLUX.2-Dev:
From the ComfyUI Templates section, click on All Templates and search for FLUX.2 Dev Text to Image. Select it, and ComfyUI will load the collection of connected nodes, or Workflow.
FLUX.2-Dev has model weights that will need to be downloaded.
Model weights are the knowledge inside an AI model - think of them like the synapses in a brain. When an image generation model like FLUX.2 was trained, it learned patterns from millions of images. Those patterns are stored as billions of numerical values called weights.
ComfyUI doesn't come with these weights built in. Instead, it downloads them on demand from repositories like Hugging Face. These files are large (FLUX.2 can be >30GB depending on the version), which is why systems need enough storage and download time to grab them.
A dialog will appear to guide users through downloading the model weights. The weight files (filename.safetensors) are automatically saved to the correct ComfyUI folder on a user's PC.
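To sanity-check which weights are already on disk, a short script can list the .safetensors files under the ComfyUI models folder and their sizes. A sketch, assuming a default install path (adjust the directory to your own installation):

```python
import os

# Sketch: list downloaded .safetensors weight files and their sizes.
# The models directory below is an assumption; point it at your own
# ComfyUI install's models folder.
def list_weights(models_dir: str) -> list[tuple[str, float]]:
    """Return (filename, size in GB) for every .safetensors file found."""
    found = []
    for root, _dirs, files in os.walk(models_dir):
        for name in files:
            if name.endswith(".safetensors"):
                size_gb = os.path.getsize(os.path.join(root, name)) / 1e9
                found.append((name, size_gb))
    return found

for name, gb in list_weights(os.path.expanduser("~/ComfyUI/models")):
    print(f"{name}: {gb:.1f} GB")
```

Comparing the reported sizes against the expected download size (over 30GB for some FLUX.2 variants) is a quick way to spot a partial or interrupted download.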
Saving Workflows:
Now that the model weights are downloaded, the next step is to save this newly downloaded template as a Workflow.
Users can click on the top-left hamburger menu (three lines) and choose Save. The workflow is now saved in the user's list of Workflows (press W to show or hide the window). Close the tab to exit the workflow without losing any work.
If the download dialog was accidentally closed before the model weights finished downloading:
Press W to quickly open the Workflows window.
Select the Workflow and ComfyUI will load it. This will also prompt for any missing model weights to download.
ComfyUI is now ready to generate an image using FLUX.2-Dev.
Prompt Tips for FLUX.2-Dev:
Start with clear, concrete descriptions of the subject, setting, style and mood - for example: Cinematic closeup of a vintage race car in the rain, neon reflections on wet asphalt, high contrast, 35mm photography. Short to medium length prompts - a single, focused sentence or two - are usually easier to control than long, storylike prompts, especially when getting started.
Add constraints to guide consistency and quality. Specify things like:
Framing (wide shot or portrait)
Detail level (high detail, sharp focus)
Realism (photorealistic or stylized illustration)
If results are too busy, remove adjectives instead of adding more.
Avoid negative prompting - stick to prompting what's desired.
Learn more about FLUX.2 prompting in this guide from Black Forest Labs.
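The prompting tips above can be sketched as a small helper that assembles short, concrete prompts from explicit parts - subject, setting, style and constraints - with no negative prompting. The function and field names are illustrative, not part of any FLUX.2 or ComfyUI API:

```python
# Sketch of the prompting tips above as a tiny prompt builder. Keeping each
# piece short and concrete makes it easy to add or remove one constraint at
# a time when iterating on results.
def build_prompt(subject: str, setting: str = "", style: str = "",
                 constraints: tuple[str, ...] = ()) -> str:
    """Join the non-empty parts into one focused, comma-separated prompt."""
    parts = [subject, setting, style, *constraints]
    return ", ".join(p for p in parts if p)

prompt = build_prompt(
    "Cinematic closeup of a vintage race car in the rain",
    setting="neon reflections on wet asphalt",
    style="35mm photography",
    constraints=("wide shot", "high detail, sharp focus", "photorealistic"),
)
print(prompt)
```

If a result comes back too busy, dropping one entry from `constraints` mirrors the advice above to remove adjectives rather than add more.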
Save Locations on Disk:
Once done refining the image, right-click on the Save Image Node to open the image in a browser or save it to a new location.
ComfyUI's default output folders are typically the following, based on the application type and OS:
Windows (Standalone/Portable Version): The folder is usually found in C:\ComfyUI\output or a similar path within where the program was unzipped.
Windows (Desktop Application): The path is usually located within the AppData directory, like: C:\Users\%username%\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\output
Linux: The default location is ~/.config/ComfyUI.
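The defaults above can be sketched as a small resolver. This is a sketch under assumptions - the paths mirror the list above, and an actual install may differ:

```python
import os
import platform

# Sketch: resolve ComfyUI's likely default output folder. Paths are
# assumptions based on common defaults; your install may differ.
def default_output_dir(install_dir: str = r"C:\ComfyUI") -> str:
    system = platform.system()
    if system == "Windows":
        # Desktop app keeps outputs under AppData; the portable build keeps
        # them next to wherever the program was unzipped.
        appdata = os.environ.get("LOCALAPPDATA", "")
        desktop = os.path.join(appdata, "Programs",
                               "@comfyorgcomfyui-electron",
                               "resources", "ComfyUI", "output")
        if os.path.isdir(desktop):
            return desktop
        return os.path.join(install_dir, "output")
    # Assumed Linux default, under the user's home config directory.
    return os.path.expanduser("~/.config/ComfyUI/output")

print(default_output_dir())
```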
Prompting Videos

Explore h