Auto-Tagging: Searchable AI Assets at Zero Cost | FairStack
Your first 10 generations are easy to keep track of. By generation 50, you’re scrolling. By generation 200, you’ve lost that headshot you made last Tuesday.
Starting today, every generation on FairStack is automatically tagged with 5-7 semantic labels. Subject matter, style, mood, use-case, visual characteristics — all searchable, all free.
How It Works
When a generation completes, we pass the prompt to Gemini 2.5 Flash Lite. It returns a set of normalized tags in under 200ms. The tags land on your asset after the image (or video, or audio) is already delivered — tagging never blocks your generation.
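The non-blocking shape of that pipeline can be sketched in a few lines. This is an illustrative stand-in, not FairStack's actual code — `tag_prompt` fakes the Gemini call with a short sleep, and the asset is a plain dict:

```python
import asyncio

async def tag_prompt(prompt: str) -> list[str]:
    # Stand-in for the Gemini 2.5 Flash Lite call (hypothetical).
    await asyncio.sleep(0.05)
    return ["lo-fi-jazz", "minimalist-album-cover"]

async def attach_tags(asset: dict, prompt: str) -> None:
    asset["tags"] = await tag_prompt(prompt)

async def complete_generation(asset: dict, prompt: str) -> dict:
    # Fire and forget: tagging is scheduled, not awaited, so delivery
    # never waits on it. (Production code would keep a task reference.)
    asyncio.create_task(attach_tags(asset, prompt))
    return asset

async def demo() -> dict:
    asset = {"id": "asset_abc123", "tags": []}
    await complete_generation(asset, "lo-fi jazz album cover")
    tags_at_delivery = list(asset["tags"])  # empty: delivery didn't wait
    await asyncio.sleep(0.1)                # let the background task finish
    return {"at_delivery": tags_at_delivery, "after": asset["tags"]}

result = asyncio.run(demo())
```

The asset comes back with an empty tag list, and the tags land on it moments later — the same "delivered first, tagged after" behavior described above.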
Here’s a real example. This prompt:
A minimalist album cover for a lo-fi jazz playlist, warm amber tones, vinyl record texture
Produces these auto-tags:
lo-fi-jazz, minimalist-album-cover, warm-amber-tones,
photorealistic, coffee-and-headphones, vinyl-texture, soft-bokeh
The model picks up both the explicit descriptors (lo-fi jazz, warm amber tones) and the implied visual characteristics (soft bokeh, photorealistic rendering). Tags are lowercase, hyphenated, and deduplicated across your library.
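The normalization rules (lowercase, hyphenated, deduplicated) are simple enough to sketch. This is an illustrative reimplementation, not FairStack's pipeline code:

```python
import re

def normalize_tag(raw: str) -> str:
    """Lowercase a tag and collapse whitespace/punctuation runs to hyphens."""
    tag = raw.strip().lower()
    tag = re.sub(r"[^a-z0-9]+", "-", tag)
    return tag.strip("-")

def normalize_tags(raw_tags: list[str]) -> list[str]:
    """Normalize every tag and deduplicate while preserving order."""
    seen: list[str] = []
    for raw in raw_tags:
        tag = normalize_tag(raw)
        if tag and tag not in seen:
            seen.append(tag)
    return seen

tags = normalize_tags(["Lo-Fi Jazz", "warm amber tones", "lo-fi jazz", "Soft Bokeh"])
# tags == ["lo-fi-jazz", "warm-amber-tones", "soft-bokeh"]
```

Note how "Lo-Fi Jazz" and "lo-fi jazz" collapse to a single `lo-fi-jazz` entry — the same behavior that keeps your library's tag set deduplicated.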
Search Your Library
Tags are queryable everywhere — dashboard search bar, REST API, and CLI.
API:
# Find all assets tagged with "album-cover"
curl https://fairstack.ai/v1/assets?tag=album-cover \
-H "Authorization: Bearer fs_live_..."
# Combine tags for precise filtering (quote the URL so the shell
# doesn't treat & as a background operator)
curl "https://fairstack.ai/v1/assets?tag=portrait&tag=warm-tones" \
-H "Authorization: Bearer fs_live_..."
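From Python, combined filters map to repeated `tag` query parameters. A standard-library sketch using the endpoint from the examples above (the network call itself is commented out; the token is a placeholder):

```python
from urllib.parse import urlencode
from urllib.request import Request

BASE = "https://fairstack.ai/v1/assets"

# A list of pairs — not a dict — so the tag key can repeat.
params = [("tag", "portrait"), ("tag", "warm-tones")]
url = f"{BASE}?{urlencode(params)}"
# url == "https://fairstack.ai/v1/assets?tag=portrait&tag=warm-tones"

req = Request(url, headers={"Authorization": "Bearer fs_live_..."})
# from urllib.request import urlopen   # uncomment with a real API key
# with urlopen(req) as resp:
#     assets = resp.read()
```

Using a list of tuples (rather than a dict) is what lets `tag` appear twice in the query string.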
CLI:
# Search by tag
fairstack tags search --tag "album-cover"
# List all tags in your library
fairstack tags list
# Add your own tag to an asset
fairstack tags add asset_abc123 --tag "client-project-wave-2"
Auto-tags and your own custom tags live side by side. You can add, remove, or override any tag. The system does the busywork; you keep control.
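One way to picture that precedence — auto-tags and custom tags merge, and your removals win — is a small sketch. This is purely illustrative; the function and its parameters are hypothetical, not part of the FairStack API:

```python
def effective_tags(auto: list[str], custom: list[str], removed: set[str]) -> list[str]:
    """Merge auto-tags with custom tags; user removals override auto-tagging."""
    merged: list[str] = []
    for tag in auto + custom:
        if tag not in removed and tag not in merged:
            merged.append(tag)
    return merged

tags = effective_tags(
    auto=["lo-fi-jazz", "photorealistic"],
    custom=["client-project-wave-2"],
    removed={"photorealistic"},  # the user overrode one auto-tag
)
# tags == ["lo-fi-jazz", "client-project-wave-2"]
```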
What Gets Tagged
Every generation across all modalities:
- Images: subject, art style, color palette, composition type, mood
- Videos: scene type, motion style, visual tone, subject, setting
- Voice: speaking style, emotion, pace, use-case
- Music: genre, mood, tempo category, instrumentation, use-case
Tags are consistent across your library, so searching for warm-tones returns matching images, videos, and music — not just one modality.
What This Unlocks
Today, you get searchable assets. That’s the immediate value: “find that headshot from last week” works now.
But tags also open the door for what comes next. Recommendation engines that suggest models based on your past work. Usage analytics that show which styles you generate most. Gallery curation that groups assets by project or aesthetic without manual sorting. We’re building all of this on the tagging layer.
The Cost
Gemini 2.5 Flash Lite costs approximately $0.00004 per tagging call. That’s four thousandths of a cent. At 10,000 generations, the tagging bill is $0.40.
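The arithmetic, spelled out:

```python
cost_per_call = 0.00004              # dollars per tagging call
cents_per_call = cost_per_call * 100
# 0.004 cents: four thousandths of a cent per call

generations = 10_000
total_dollars = generations * cost_per_call
# total_dollars == 0.40
```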
We absorb this cost entirely. It does not touch your credit balance. There’s no toggle to turn it off because there’s no reason to — it’s free, non-blocking, and makes your library better.
We absorb the cost because organized assets mean happier creators. And happier creators generate more. The math works for everyone.
Auto-tagging is live now for all users. New generations are tagged automatically. Existing assets will be backfilled over the coming weeks.
Check your library at fairstack.ai/app/library, or hit the tags API endpoint to start building on top of it.