COOK content v1.0.0 · Apache-2.0

Vision-Driven SVG Iteration

Iterate on SVG icon/logo geometry using vision-AI feedback in a separate test harness BEFORE touching production. Covers the test-harness-first pattern, vision-inconsistency mitigation (same path can score 9/10 then 4/10 across calls), the "vision sees lobes wrong" calibration step, and the rule that human-eyeball-on-real-browser is the final gate for animated/aesthetic work. Use when designing iconography (chef hats, logos, marks), writing parametric SVG paths, or any visual iteration where you need objective feedback before deploying.


Install in your agent

Tell your agent: "install the recipes skill, then add vision-driven-svg-iteration"
Or via curl: curl -sL https://recipes.wisechef.ai/skill -o ~/.claude/skills/recipes/SKILL.md


Vision-Driven SVG Iteration

When to Use

  • Designing a parametric SVG icon, logo, or hero animation shape
  • The user has rejected an initial visual; you need iteration
  • You're tempted to ship-and-iterate-on-prod (don't)
  • You need objective feedback on geometry/proportions before deploying

Skip this skill for trivial CSS tweaks, copy-only changes, or anything where a reload + screenshot is faster than a vision call.

The Iron Law

NO PRODUCTION DEPLOY UNTIL THE TEST HARNESS PASSES VISION ≥8/10
AND HUMAN-EYEBALL CONFIRMATION ON A REAL BROWSER

Vision can be wrong. Headless browsers throttle rAF. Real browsers run 60fps. Production deploys touch caches, CDNs, build hashes — every variable adds noise that masks whether the shape is right.

The Workflow

1. Build the test harness FIRST

Create /tmp/<feature>-harness.html with the exact SVG path / animation logic in isolation:

<!DOCTYPE html><html><head><style>
body{background:#0a0a0a;color:#fff;font-family:system-ui;margin:0;padding:30px;}
h2{color:#FFD166;margin:30px 0 10px;}
.grid{display:grid;grid-template-columns:repeat(auto-fit,minmax(220px,1fr));gap:30px;}
.card{text-align:center;}
.card span{display:block;margin-top:8px;color:#aaa;font-size:11px;}
svg{background:#1a1a1a;border:1px solid #333;}
</style></head><body>
<h2>Candidates</h2>
<div class="grid">
  <div class="card">
    <svg width="200" height="240" viewBox="0 0 100 120">
      <path d="M ..." fill="none" stroke="#FFE6A8" stroke-width="2" stroke-linejoin="round"/>
    </svg>
    <span>v1 — description</span>
  </div>
  <!-- repeat for v2, v3, v4 -->
</div>
</body></html>

Always render multiple candidates side-by-side — vision compares better than it judges absolutes.

For animations, also build a separate harness with a requestAnimationFrame loop and expose a window.__captureAt(effect, t) hook so vision can see specific frames mid-cycle:

window.__captureAt = (effect, t) => {
  if (effect === 'A') effectA(t);
  if (effect === 'B') effectB(t);
};
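
A minimal sketch of the loop that hook sits on top of; CYCLE_MS and the normalized t are assumptions, and __captureAt should set frozen = true before drawing, or the live loop will clear and redraw over the captured frame (see pitfall #3):

let frozen = false;                          // __captureAt sets this to hold a frame
const start = performance.now();
const CYCLE_MS = 4000;                       // assumed animation cycle length
function frame(now) {
  if (!frozen) {
    const t = ((now - start) % CYCLE_MS) / CYCLE_MS;  // normalized time in [0, 1)
    effectA(t);                              // draw the current live frame
  }
  requestAnimationFrame(frame);
}
requestAnimationFrame(frame);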

2. Navigate + ask vision

browser_navigate("file:///tmp/feature-harness.html")
browser_vision(question="""
Three [icon] candidates labeled v1, v2, v3.
For each:
1. Count specific features (e.g. "How many lobes/puffs on the crown")
2. Rate 1-10 on [specific quality, e.g. "classic chef toque recognition"]
3. Identify which is strongest.
What needs to change in the weakest?
""")

Ask vision to COUNT features, not just rate aesthetics. Counts are objective: "3 lobes" is verifiable; "looks chef-y" is not.

3. The vision-inconsistency calibration step (CRITICAL)

Vision-AI scores the SAME image differently across calls. Validated 2026-04-25 on Recipes hero:

  • v4.2 path rated 8.5/10 with "exactly 3 lobes" (call 1)
  • Same v4.2 path rated 6/10 with "5 lobes visible" (call 2 in different test harness)
  • v5.2 path rated 9/10 with 3 lobes (call 3 in shape-only harness)
  • Same v5.2 path rated 4/10 with "only 2 lobes visible" (call 4 with breath-glow rendered)

The presence of OTHER visual elements (glow, gradient, scale, neighboring icons) changes how vision counts and scores. Fix:

  • Get ≥2 vision passes per candidate before declaring winner
  • Test the SHAPE in isolation first (no glow, no fills, no neighbors), confirm count
  • Then test the SHAPE with effects added, confirm the count is stable (a harness helper for this two-stage check is sketched after this list)
  • If the count changes when effects are added, the count was probably wrong in one of them — render at large scale (width="400") and look yourself
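
A minimal harness helper for that two-stage check: it renders the identical path string twice, bare and with a glow filter, so both vision passes judge the same geometry. PATH_D and the #glow filter id are placeholders:

const PATH_D = 'M ...';                      // the exact candidate path under test
function stage(withGlow) {
  const NS = 'http://www.w3.org/2000/svg';
  const svg = document.createElementNS(NS, 'svg');
  svg.setAttribute('viewBox', '0 0 100 120');
  svg.setAttribute('width', '400');          // large render: more pixels per lobe
  svg.setAttribute('height', '480');
  const path = document.createElementNS(NS, 'path');
  path.setAttribute('d', PATH_D);
  path.setAttribute('fill', 'none');
  path.setAttribute('stroke', '#FFE6A8');
  path.setAttribute('stroke-width', '2');
  if (withGlow) path.setAttribute('filter', 'url(#glow)');  // assumes a <filter id="glow"> in the page
  svg.appendChild(path);
  document.body.appendChild(svg);
}
stage(false);                                // pass 1: shape in isolation
stage(true);                                 // pass 2: same shape + effects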

4. Iterate until specific gates pass

Vision gates that work well:

  • Count gate: "Does the crown show EXACTLY 3 lobes?" (not 2, not 4)
  • Recognition gate: "Rate 1-10 for [classic chef toque / company logo / etc]"
  • Differentiation gate: "Are the 3 candidates clearly distinguishable from each other?"
  • Containment gate: "Is the glow contained inside the shape, or does it bleed outside?"

Don't ship until ALL applicable gates score ≥8/10.
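
The gates compose into a single vision pass using the same call style as step 2; the wording below is illustrative, not canonical:

browser_vision(question="""
For each candidate v1-v3:
1. COUNT gate: how many lobes on the crown? Answer with a number only.
2. RECOGNITION gate: rate 1-10 as a classic chef toque.
3. DIFFERENTIATION gate: are the candidates clearly distinguishable? yes/no.
4. CONTAINMENT gate: is the glow fully inside the outline? yes/no.
""")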

5. Map test-harness coords to production

When the harness uses viewBox="0 0 100 120" and centers the path at (50, 60), the production canvas code MUST use the same:

const HAT_PATH_D = "...";  // EXACTLY the string from harness
const toCanvas = (px, py) => ({
  x: cx + ((px - 50) / 50) * scale,
  y: cy + ((py - 60) / 50) * scale,  // /50, NOT /60 — uniform aspect-preserving scale
});

Common bug: a non-square viewBox (e.g. 100×140) tempts you to normalize y by its own half-height, (py - 70) / 70 * scale, which squashes the path vertically; adding the aspect factor on top of the correct mapping, (py - 70) / 50 * scale * 1.4, stretches it instead. Always use uniform /50 scaling on both axes: the path's tall aspect ratio is preserved naturally because the y-distance from center runs up to 70 path-units.
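
A quick numeric check of the above, assuming viewBox="0 0 100 140" with center (50, 70): the bottom of the path sits 70 path-units below center.

const uniform   = (140 - 70) / 50;           // 1.40: correct, y-extent preserved
const squashed  = (140 - 70) / 70;           // 1.00: normalized by half-height, vertically squashed
const stretched = (140 - 70) / 50 * 1.4;     // 1.96: aspect factor on top of /50, vertically stretched
console.log(uniform, squashed, stretched);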

6. Real-browser confirmation BEFORE claiming done

Headless browsers throttle requestAnimationFrame to ~6fps when backgrounded — that's why vision sees "particles drifting in flow mode" instead of "particles converged on the silhouette." A static screenshot of an animation is unreliable.

Verification steps that actually work:

  • Pixel-density polling: getImageData() on canvas, count amber/teal/non-zero pixels in expected regions over 30+ seconds (see the sketch after this list)
  • Path geometry check: verify the deployed bundle contains the new path string (grep -oE "M 18 116" bundle.js)
  • Then ask the user: "Open https://... in Chrome/Safari, watch ~15 seconds, tell me what you see." Real browser at 60fps is the final gate.
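
A minimal sketch of the pixel-density polling check, assuming the animation draws on a 2D canvas; the selector, amber thresholds, and sample count are illustrative:

const canvas = document.querySelector('canvas');
const ctx = canvas.getContext('2d');         // returns the page's existing 2D context
function countAmber() {
  const { data } = ctx.getImageData(0, 0, canvas.width, canvas.height);
  let n = 0;
  for (let i = 0; i < data.length; i += 4) {
    if (data[i] > 200 && data[i + 1] > 140 && data[i + 2] < 120) n++;  // rough amber match
  }
  return n;
}
const samples = [];
const poll = setInterval(() => {
  samples.push(countAmber());
  if (samples.length >= 30) {                // ~30 samples over 30+ seconds
    clearInterval(poll);
    console.log(samples);                    // convergence shows as a sustained rise, not a flat line
  }
}, 1000);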

When the user reports "5 lobes" or "looks like ears" — believe them, not your vision-test results.

Test Harness Template

scripts/svg-harness.html:

<!DOCTYPE html><html><head><meta charset="utf-8"><title>SVG Harness</title>
<style>
  body { background:#0a0a0a; color:#fff; font-family:system-ui; margin:0; padding:30px; }
  h2 { color:#FFD166; font-weight:500; margin:30px 0 10px; }
  .grid { display:grid; grid-template-columns:repeat(auto-fit,minmax(280px,1fr)); gap:30px; }
  .card { text-align:center; }
  .card span { display:block; margin-top:10px; color:#aaa; font-size:12px; max-width:280px; }
  .stage { background:radial-gradient(ellipse at center, rgba(255,180,70,0.10), #0a0a0a 75%);
           border:1px solid #222; border-radius:8px; aspect-ratio:4/3; position:relative; }
  canvas, svg { width:100%; height:100%; display:block; }
</style></head>
<body>
  <h2>Candidates (shape only, no effects)</h2>
  <div class="grid">
    <!-- one .card per candidate, render as SVG <path fill="none" stroke=...> -->
  </div>
  <h2>Same shape with effects</h2>
  <div class="grid">
    <!-- repeat with effects layered on -->
  </div>
  <h2>Trio composition (small + large + small) — production layout sanity</h2>
  <div class="grid">
    <!-- single SVG with three <g transform> instances at production scales -->
  </div>
</body></html>

Pitfalls

  1. Don't iterate on production. Astro build → rsync → service restart → browser cache bust = 30-60s per iteration. Test harness = 2s reload. Iteration speed compounds; you'll do 5-15 iterations on geometry alone.

  2. Vision-tool inconsistency is the #1 footgun. Vision is great for "is this clearly a chef hat?" but unreliable for "exactly 3 lobes vs 5 lobes" without multiple confirmation passes. Render at high resolution (400px+ wide) to give vision more pixels per feature. Tiny test renders inflate the inconsistency.

  3. Headless browser rAF throttling masks animation issues. When asked to "verify the convergence happens," vision captures whatever frame the headless engine produced — usually mid-flow, not mid-hold. Build debug freeze hooks (?freeze=hold&effect=A) into the production code so you can deterministically capture the desired state. Even those can be unreliable in headless — the live cycle continues to clear/redraw asynchronously.

  4. getPointAtLength behavior differs by path topology. A single-contour outline plus interior fill via rejection sampling produces a different particle distribution than 4 separate sub-paths. Test the particle distribution visually in the harness before shipping — particles can converge into shapes you didn't intend. With a compound path (M ... Z M ... Z), getTotalLength sums both sub-paths and getPointAtLength(t) for t in [0, totalLen] walks them sequentially, so consecutive samples can land on different contours with no visible connection between them (see the sampling sketch after this list).

  5. Vision counts wrong when iconography includes glows or gradients. Vision will count edge bumps in a soft-glowing radial fill as "extra lobes." Test the bare outline first, lock the geometry, THEN add glow effects. If you change the path AND the effects in the same iteration, you can't tell which one made vision unhappy.

  6. Production hash collisions with Astro/Vite are silent. Build → byte count unchanged → assume nothing deployed → look for the new path in the bundle: grep -oE 'NEW_FEATURE_MARKER' dist/_astro/*.js. Vite hashes by content — if the hash didn't change but the source did, you have a build cache problem. Wipe dist/ and .astro/ and rebuild.

  7. Don't trust your own taste for iconography. I (the agent) thought v4.2 was a clean 3-lobe toque. Vision said 5 lobes. Adam said "looks like a cloud, not a hat." All three could be right simultaneously — different observers, different rendering contexts. The user is always the final arbiter for aesthetic + brand iconography.

  8. The "ship while still iterating" trap. If 3 vision rounds haven't produced ≥8/10, STOP. The path geometry is fundamentally wrong. Don't keep tweaking control points hoping vision will warm up. Try a different topology (single closed contour vs compound path vs multi-element rendering).

  9. Vision drift across multi-round iteration. When iterating the SAME path across multiple sessions/rounds, vision can give progressively different scores on identical geometry. Validated 2026-04-25, round 7: the v6.1 path scored 8.5/10 in round 6, then 5/10 in round 7 with no changes. Symptoms: vision starts demanding features that aren't in the reference (e.g. "fan-curving pleats" when the reference has straight pleats), or contradicts a previous round's feedback ("center should be tallest" → "lobes too uniform" on the same image). Fix: when vision flip-flops within 2-3 rounds, stop iterating and ship to production for HUMAN review. Vision's failure mode at small render sizes is "find something to critique." The user's eyes on a real browser at full size are the ground truth — vision is only useful for the FIRST 2-3 iterations to confirm structural correctness; after that, marginal rounds produce noise, not signal.
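
The sampling sketch referenced in pitfall #4, assuming a rendered <path id="hat"> with a compound d="M ... Z M ... Z"; the id and sample count are illustrative:

const path = document.getElementById('hat');
const total = path.getTotalLength();         // sums the lengths of ALL sub-paths
const targets = [];
for (let i = 0; i < 200; i++) {
  const p = path.getPointAtLength((i / 200) * total);
  targets.push({ x: p.x, y: p.y });
}
// Consecutive samples near a sub-path boundary land on different contours,
// so shuffle before assigning particles or array-ordered particles will "teleport".
for (let i = targets.length - 1; i > 0; i--) {
  const j = Math.floor(Math.random() * (i + 1));
  [targets[i], targets[j]] = [targets[j], targets[i]];
}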

Trace From a Reference Image When Iteration Stalls

When vision oscillates between scores (v4.2 = 8.5/10 then 6/10) and parametric tweaking isn't converging, ground the next iteration in an existing brand artifact instead of guessing. Use vision_analyze(image_url=logo_path, ...) to extract structural coordinates from the brand's own logo file:

vision_analyze(
  image_url="/path/to/brand/mark-512.png",
  question="""I need to recreate this silhouette as an SVG path. Describe in detail:
  1. How many distinct lobes/puffs? Are they equal sized or one (center) larger?
  2. What's the proportion of crown height to band height?
  3. Where do the side lobes peak relative to the center lobe (y-coordinates)?
  4. Does the band have features (curves, decoration) or is it plain rectangle?
  5. Approximate the silhouette in coordinates assuming viewBox 0 0 100 120,
     with the band sitting at y=92 to y=116 and the crown above it."""
)

Vision returns coordinates AND structural insight ("crown overhangs band by ~2 units," "center peak y=8-12, side peaks y=18-30"). Build the next path from those numbers, not from your visual intuition. Validated 2026-04-25 on Recipes hero v6.1: vision rated traced path 9/10 vs reference, after 5 rounds of parametric tweaking stalled at 4-7/10.

Pitfall: vision will sometimes give a buggy path string in the SVG snippet. Don't paste the path verbatim — extract the coordinate guidance ("3 lobes, center y=8, sides y=22-24, crown overhangs") and write the path yourself with that as your spec.
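
For example, a hypothetical path written from that kind of spec; every coordinate below is illustrative, not the validated v6.1 geometry:

<svg viewBox="0 0 100 120" width="400">
  <!-- band: plain rectangle at y=92..116; crown overhangs it by ~2 units -->
  <rect x="22" y="92" width="56" height="24" fill="none" stroke="#FFE6A8" stroke-width="2"/>
  <!-- crown: side lobes peaking near y=22, taller center lobe near y=8 -->
  <path d="M 20 92 C 8 88 6 64 18 56 C 10 36 26 20 38 26
           C 40 6 60 6 62 26 C 74 20 90 36 82 56 C 94 64 92 88 80 92 Z"
        fill="none" stroke="#FFE6A8" stroke-width="2" stroke-linejoin="round"/>
</svg>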

Multi-Topology Fallback

When parametric path tweaking stalls, switch topology:

  • Single closed contour. Pros: one <path>, simplest particle sampling. Cons: vision misreads lobe count.
  • Compound path (M ... Z M ... Z). Pros: single element, visible internal divisions. Cons: getPointAtLength walks all sub-paths sequentially, so particle ordering may need re-shuffling.
  • Multi-element (<rect> + <ellipse> × 3). Pros: vision rates it highest (9/10 for chef toque); cleanest semantic structure. Cons: must sample 4 separate paths and concatenate; production rendering needs 4 stroke calls per hat.
  • Hand-drawn raster asset. Pros: no geometry math at all. Cons: bitmaps don't scale; can't feed getPointAtLength for animation; 6× file size.

If vision rates a parametric path ≤6/10 after 3 rounds, jump to multi-element rendering. That's what worked for the Recipes chef hat.
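
What multi-element rendering looks like for a toque; all coordinates are illustrative:

<svg viewBox="0 0 100 120" width="400">
  <ellipse cx="32" cy="46" rx="17" ry="19" fill="none" stroke="#FFE6A8" stroke-width="2"/>
  <ellipse cx="68" cy="46" rx="17" ry="19" fill="none" stroke="#FFE6A8" stroke-width="2"/>
  <ellipse cx="50" cy="34" rx="19" ry="24" fill="none" stroke="#FFE6A8" stroke-width="2"/>
  <rect x="26" y="84" width="48" height="28" fill="none" stroke="#FFE6A8" stroke-width="2"/>
</svg>

For particle work, call getTotalLength() on each element separately and concatenate the sampled targets, as the table notes.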

Cross-Reference

  • verification-before-completion — the meta-discipline. This skill is the visual specialization.
  • hyperframes-composition-rules — for animated frames. Stricter rules apply (no clipping, no opacity stacking).
  • popular-web-designs — when you don't know what "good" looks like, copy from there first.
  • recipes-marketplace-deploy pitfall #27 — vision systematically underrates motion-heavy designs because it audits stills.

Verification Command

After shipping an SVG icon to production:

# Confirm the deployed bundle has your new path
JS=$(curl -sS https://your-site.com/ | grep -oE '_astro/Component[^"]+\.js' | head -1)
curl -sS "https://your-site.com/$JS" | grep -oE 'M XX YYY' | head -1
# Empty = old bundle still cached. Re-deploy or clear CDN.

Then ask the user: "Open the live site, look at [feature], does it match the spec?" If they say yes, only THEN claim done.