How we built it
Content Design Assistant
A Figma plugin that brings Thumbtack content guidelines into your design workflow — powered by GPT-4o and Cloudflare Workers.
Custom GPT
Figma Plugin
Cloudflare Workers
OpenAI GPT-4o
Thumbtack Guidelines
Overview
What the plugin does
A chat assistant that lives inside Figma. Select any layer, pull the text into the panel, and get instant feedback — rewrites, reviews, and content creation — all aligned to Thumbtack standards.
1
Pull text from Figma
Select any layer or frame. The plugin recursively extracts all text nodes and loads them into the chat as context.
2
Review against guidelines
Ask the assistant to review, rewrite, or create content. It knows every Thumbtack voice, style, and email rule by heart.
3
Suggestion chips
One-click prompts for common tasks: review copy, rewrite for Thumbtack voice, check a button label, write an error message.
4
Full chat memory
The conversation maintains history so you can iterate — ask follow-ups, request alternatives, or dig deeper on a specific issue.
Architecture
How the pieces fit together
Three layers: the Figma plugin, a Cloudflare Worker proxy, and the OpenAI API. The worker keeps the API key out of plugin code.
🎨
Figma Plugin
ui.html + code.js
→
☁️
Cloudflare Worker
worker.js · API proxy
→
🤖
OpenAI API
GPT-4o
Why a proxy? Figma plugins run entirely client-side. Putting the OpenAI API key directly in plugin code would expose it to anyone who inspects the bundle. The Cloudflare Worker holds the key as a server-side secret.
Project files
What we built
Four files — two for the Figma plugin (code.js and ui.html), one for the Cloudflare Worker (worker.js), and one plugin configuration file (manifest.json).
content-design-assistant/
manifest.json ← plugin config
code.js ← Figma sandbox
ui.html ← chat UI
worker.js ← Cloudflare
manifest.json
Registers the plugin with Figma. Declares name, entry points, and which external domains the plugin is allowed to call.
code.js
Runs in Figma's privileged sandbox. Accesses the document, walks selected nodes, extracts text.
ui.html
Runs in an iframe. Contains the entire chat UI and the Thumbtack system prompt.
worker.js
Deployed to Cloudflare. Receives requests from the plugin, adds the API key, forwards to OpenAI.
Step 1 of 6
Build the Custom GPT
Before writing a single line of code, build and test the assistant as a Custom GPT on chatgpt.com. This is how you develop and validate the system prompt — no infrastructure needed.
1
Go to chatgpt.com → Explore GPTs → Create
Log in with your OpenAI account. Click your avatar → "My GPTs" → "Create a GPT".
2
Name it and write the instructions
Give it a name ("Content Design Assistant"). Paste your system prompt into the Instructions field — this is where all the Thumbtack guidelines live.
3
Configure capabilities
Disable web browsing, DALL-E, and Code Interpreter — none are needed. Keep it focused on text generation only.
4
Test with real Thumbtack content
Paste in actual copy from Figma files. Ask it to review, rewrite, and check against guidelines. Iterate on the instructions until it behaves consistently.
5
Save and copy the final instructions
Once the prompt is solid, copy it — you'll paste this exact text into ui.html when building the plugin.
Why start here?
The Custom GPT is a fast, zero-infrastructure way to prototype AI behavior. You can test hundreds of prompts and edge cases in minutes — before committing to building a plugin and deploying a Worker.
What you're validating
- Does the model apply sentence case correctly?
- Does it ask the two questions before writing emails?
- Does it give full rewrites, not inline edits?
- Does it identify content type before reviewing?
The system prompt developed here becomes the exact instructions embedded in the Figma plugin. Same behavior, new surface.
Step 1 of 6 — System prompt
What's in the system prompt
The system prompt is what makes this a Thumbtack tool, not a generic AI. It embeds five reference documents that the model consults on every response.
Thumbtack Content Spec
Component constraints, system states (Informational / Guidance / Blocking / Failure…), forbidden patterns, error message rules.
Writing Style Guide
Voice & tone, sentence case rules, capitalization, grammar, preferred terms ("pro" not "contractor", "sign in" not "log in").
Brand Messaging Playbook
Three value prop pillars — Convenience, Reliability, Achievement — mapped to the customer journey stage.
Prompt Engineering Toolkit
How to structure prompts for consistent, high-quality outputs. Few-shot examples, chain-of-thought techniques, guardrails.
Example GTM Comms
Real email examples (subject, preheader, body, CTA) across announcement and reminder scenarios. The model references these when writing email content.
Critical behavior rules
Always provide full rewrites (never inline edits). Always ask two questions before writing emails. Never assume content type when uncertain.
Step 2 of 6
Request an OpenAI key with budget
Once your system prompt is working in the Custom GPT, you need a provisioned OpenAI API key with an approved budget to power the plugin. This is managed by the #ml-infra team.
1
Identify your access pattern
For this plugin, the pattern is IT/DX/3rd Party Integration — key usage outside officially supported infra (not a backend service or Airflow task).
2
Obtain approval
Either via self-service (auto-approval) or through #ai-ml-review. Check whether your use case qualifies for auto-approval before posting.
3
Fill out the LLM expense tracker
Add an entry for your project. For IT/DX/3rd Party Integration, leave the expected dev cost blank.
4
Confirm OpenAI Platform access
You need to be in the Okta group for OpenAI. Try signing in via SSO — if you can't, open an IT ticket to be added to the group.
5
Start an ML Infra Help Request
Once steps 1–4 are done, post in #ml-infra to have your project created and keys provisioned.
Access pattern for this plugin
Inference model backend
Airflow task
Ad-hoc / Databricks
IT/DX/3rd Party Integration
AI Gateway
The Figma plugin is client-side tooling outside of Thumbtack's standard ML infra — it falls under IT/DX/3rd Party.
Where to go
#ml-infra — ML Infra Help Request to get keys provisioned
#ai-ml-review — approval for your use case
IT ticket — to be added to the OpenAI Okta group if SSO fails
The key you receive goes into the Cloudflare Worker as a secret environment variable — it's never in the plugin code. See Step 3.
Step 3 of 6
Set up the Cloudflare Worker
1
Open Claude Code in your project folder
Run claude in your terminal from the folder where you want the plugin files to live.
2
Ask Claude Code to create the Worker
Describe what it should do — proxy OpenAI, handle CORS, read the key from an environment variable. Claude Code writes worker.js.
3
Log into cloudflare.com → Workers & Pages → Create Worker
Paste the generated worker.js code into the editor and click Deploy. Note the .workers.dev URL.
4
Add the API key as a secret
In the Worker: Settings → Variables → Add secret. Name: OPENAI_API_KEY. Paste your provisioned key.
> Create a Cloudflare Worker in worker.js that proxies requests to the OpenAI chat completions API. It should:
- Handle CORS preflight OPTIONS requests
- Accept POST requests and forward them to the OpenAI API
- Read the key from an env variable called OPENAI_API_KEY
- Return responses with CORS headers so the Figma plugin can call it
Claude Code will write the complete worker.js. You only need to paste it into Cloudflare — no coding required.
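The generated file will vary, but a minimal sketch of the kind of worker.js you should expect looks like this. The upstream URL is OpenAI's real chat completions endpoint; the rest is illustrative, not the exact code Claude Code will produce:

```javascript
// Sketch of a Worker that proxies the plugin's chat requests to OpenAI.
const CORS_HEADERS = {
  "Access-Control-Allow-Origin": "*",
  "Access-Control-Allow-Methods": "POST, OPTIONS",
  "Access-Control-Allow-Headers": "Content-Type",
};

const worker = {
  async fetch(request, env) {
    // Answer the CORS preflight the plugin iframe sends before POSTing
    if (request.method === "OPTIONS") {
      return new Response(null, { status: 204, headers: CORS_HEADERS });
    }
    if (request.method !== "POST") {
      return new Response("Method not allowed", { status: 405, headers: CORS_HEADERS });
    }
    // Forward the chat payload to OpenAI, attaching the secret key
    // stored in the Worker environment (never shipped in plugin code)
    const upstream = await fetch("https://api.openai.com/v1/chat/completions", {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        "Authorization": `Bearer ${env.OPENAI_API_KEY}`,
      },
      body: await request.text(),
    });
    // Relay OpenAI's response with CORS headers so the iframe can read it
    return new Response(await upstream.text(), {
      status: upstream.status,
      headers: { ...CORS_HEADERS, "Content-Type": "application/json" },
    });
  },
};

export default worker;
```

The secret never appears in the response path — the Worker only ever adds it to the outbound request to OpenAI.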
Step 4 of 6
Register the plugin in Figma
1
Open the Figma desktop app
Plugin development requires the desktop app — it can't be done in the browser.
2
Create a new plugin
Main menu → Plugins → Development → New plugin. Choose "Figma Design". Point it at your project folder — Figma scaffolds starter files.
3
Ask Claude Code to configure manifest.json
Give it your plugin name and your Worker URL. Claude Code sets up the file paths, network access rules, and document access settings.
networkAccess is required. Figma blocks all outbound HTTP by default. Claude Code knows this and will add the correct allowedDomains entry automatically.
> Update manifest.json for a Figma plugin called "Content Design Assistant". Set it up so that:
- main is code.js
- ui is ui.html
- editorType is figma
- networkAccess allows calls to [your-worker-url]
- documentAccess is dynamic-page so the plugin can read any open page
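Given that prompt, the resulting manifest.json looks roughly like this. The id shown is a placeholder (Figma assigns the real one when it scaffolds the plugin), and the Worker domain stands in for your own .workers.dev URL:

```json
{
  "name": "Content Design Assistant",
  "id": "0000000000000000000",
  "api": "1.0.0",
  "main": "code.js",
  "ui": "ui.html",
  "editorType": ["figma"],
  "documentAccess": "dynamic-page",
  "networkAccess": {
    "allowedDomains": ["https://your-worker.workers.dev"]
  }
}
```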
Step 5 of 6 — Plugin logic
Write code.js with Claude Code
code.js runs in Figma's privileged sandbox — the only layer with direct document access. You don't need to know the Figma Plugin API. Describe the behavior to Claude Code and it handles the rest.
1
Describe the behavior to Claude Code
Explain that this file is the Figma plugin main thread and needs to open a UI panel and extract text from selected layers.
2
Claude Code writes code.js
It knows the Figma Plugin API — figma.showUI(), figma.ui.onmessage, figma.ui.postMessage() — and generates the correct structure.
3
Iterate if needed
If the plugin doesn't pull text correctly, describe the issue to Claude Code. It reads the file and fixes it.
> Write code.js for a Figma plugin. This is the plugin's main thread. It should:
- Open ui.html as the plugin panel (420x600px, titled "Content Design Assistant")
- Listen for a "get-selected-text" message from the UI
- When it receives that message, walk all currently selected Figma nodes and recursively collect text from any TEXT nodes
- Send the collected text back to the UI as a "selected-text" message
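A sketch of the kind of code.js this prompt produces. The typeof guard at the bottom is an addition of ours so the extraction helper can also run outside Figma; inside the sandbox, the runtime provides the figma global and __html__:

```javascript
// Illustrative sketch of code.js; the file Claude Code writes will differ.
function collectText(node, out = []) {
  // Grab the characters of any TEXT node...
  if (node.type === "TEXT") {
    out.push(node.characters);
  }
  // ...and recurse into anything with children (frames, groups, etc.)
  if ("children" in node) {
    for (const child of node.children) {
      collectText(child, out);
    }
  }
  return out;
}

// This wiring runs only inside Figma's sandbox, where the plugin
// runtime provides the `figma` global and `__html__`.
if (typeof figma !== "undefined") {
  figma.showUI(__html__, { width: 420, height: 600, title: "Content Design Assistant" });

  figma.ui.onmessage = (msg) => {
    if (msg.type === "get-selected-text") {
      const texts = [];
      for (const node of figma.currentPage.selection) {
        collectText(node, texts);
      }
      figma.ui.postMessage({ type: "selected-text", text: texts.join("\n") });
    }
  };
}
```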
Step 5 of 6 — Chat interface
Build ui.html with Claude Code
1
Describe the interface to Claude Code
Tell it you need a chat UI that calls the Cloudflare Worker, has a "Pull from Figma" button, suggestion chips, and conversation history.
2
Give it your system prompt
Paste in the instructions you developed in the Custom GPT step. Claude Code embeds them as a constant in the file.
3
Test in Figma and iterate
Open the plugin, try it with real content. Describe anything that's off — wording, behavior, layout — and Claude Code updates the file.
Claude Code will ensure the UI calls the Worker URL (not OpenAI directly), keeping the API key secure and within the allowedDomains constraint.
> Build ui.html for a Figma plugin chat panel. It should:
- Call [your-worker-url] to send messages to GPT-4o and display the responses
- Have a "Pull from Figma" button that requests selected text from code.js and loads it into the input
- Show suggestion chips for common tasks: review text, rewrite for voice, check a button label, write an error message
- Keep full conversation history per session
- Embed this system prompt as a constant: [paste your Custom GPT instructions here]
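The heart of the generated ui.html is a small fetch loop along these lines. WORKER_URL and SYSTEM_PROMPT are placeholders for your own values, and the markup and rendering code around it will differ:

```javascript
// Illustrative sketch of the chat logic in ui.html. WORKER_URL and
// SYSTEM_PROMPT are placeholders, not real values.
const WORKER_URL = "https://your-worker.workers.dev";
const SYSTEM_PROMPT = "[your Custom GPT instructions]";

// Full conversation history, seeded with the system prompt, so
// follow-up questions keep their context
const messages = [{ role: "system", content: SYSTEM_PROMPT }];

function buildRequestBody(history) {
  // GPT-4o at temperature 0.4: consistent, guideline-adherent replies
  return { model: "gpt-4o", temperature: 0.4, messages: history };
}

async function send(userText) {
  messages.push({ role: "user", content: userText });
  const res = await fetch(WORKER_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(buildRequestBody(messages)),
  });
  const data = await res.json();
  const reply = data.choices[0].message.content;
  messages.push({ role: "assistant", content: reply });
  return reply; // caller renders this into the chat transcript
}
```

Note the request goes to the Worker, not to api.openai.com, which is what keeps the key server-side and satisfies the allowedDomains rule in manifest.json.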
Step 6 of 6
Run & test the plugin
1
Open Figma desktop
Go to Main menu → Plugins → Development → Content Design Assistant.
2
Select a layer
Click any frame, component, or text layer on the canvas that contains copy you want to review.
3
Click "Pull from Figma"
The plugin extracts all text from the selection and loads it into the input, ready to send.
4
Use a suggestion chip or type a prompt
Hit enter or click send. The assistant reviews against Thumbtack guidelines and responds with a full rewrite or feedback.
5
Iterate in the chat
The conversation keeps history — ask for alternatives, request a different tone, or dig into a specific guideline.
Suggestion chips built in
Review selected text
Rewrite for Thumbtack voice
Write an error message
Check button label
Development mode tip
During development, changes to ui.html and code.js are picked up immediately — just close and reopen the plugin panel. No build step needed.
Publishing (optional)
To share with the team: submit through Figma's plugin review process, or share as a private plugin within your Figma organization.
Architecture decisions
Key decisions & why we made them
| Decision | What we did | Why |
| --- | --- | --- |
| API key handling | Cloudflare Worker as proxy | Plugins are client-side — a key in the bundle is publicly readable. The Worker holds it as a server-side secret. |
| Network access | Declared in manifest.json | Figma blocks all outbound HTTP by default. Every allowed domain must be whitelisted explicitly. |
| Document access | dynamic-page | Lets the plugin read text from whichever Figma page is currently open, not just the page it was launched from. |
| Guidelines delivery | System prompt in ui.html | Embeds the full Thumbtack spec so the model always has context — no retrieval step, no latency, no missed guidelines. |
| Model & temperature | GPT-4o at 0.4 | GPT-4o for quality; temperature 0.4 keeps responses consistent and guideline-adherent without being robotic. |
| No build step | Vanilla HTML/JS | No bundler, no framework, no compilation. Edits to any file are live immediately — just reopen the plugin panel. |
| Code authoring | Claude Code wrote all four files | Describing behavior in plain language is faster and more reliable than writing Figma plugin API code from scratch. Iteration is a conversation, not a debugging session. |
That's it
Four files.
Six steps.
One content tool.
Step 1
Build the Custom GPT on chatgpt.com — develop and validate the system prompt
Step 2
Request an OpenAI API key via #ml-infra (IT/DX/3rd Party access pattern)
Step 3
Deploy Cloudflare Worker to proxy OpenAI with the secret API key
Step 4
Register the plugin in Figma desktop and configure manifest.json
Step 5
Write code.js (Figma access) and ui.html (chat UI + system prompt)
Step 6
Run from Figma → select layers → pull text → ask anything