A flat, scannable glossary for freelancers who keep running into AI jargon and want a one-line definition that’s actually useful — not a Wikipedia article. Every term here was added the first time it showed up in a real ToolMint review. If a term isn’t in this glossary, we don’t use it in a review.

TL;DR: Bookmark this page. Every term that shows up in a ToolMint review with a tooltip-style asterisk has a one-line definition here, written for solo operators making subscription and workflow decisions — not for ML engineers.
Who this is for: Freelance consultants who don’t need to know how a transformer works, but who do need to understand the difference between “context window” and “fine-tuning” before they pay $20/month for something.
Updated 2026-05-03
A
- Agent — A script or AI that can take multiple steps without you intervening between each. In freelance practice: usually overhyped. Most “agents” you can buy in 2026 are still chat with extra steps.
- API — The paid pipe a developer uses to call a model from their own code instead of through a chat window. You pay per word. Most freelancers don’t need it.
- Anthropic — The company that makes Claude.
B
- Benchmark — A standardized test (MMLU, HumanEval, etc.) used to compare model quality. Useful for narrowing the field; useless as the final word on whether a tool fits your work.
C
- Cache (prompt caching) — A trick where the model “remembers” a long instruction so you only pay for it once. Matters if you’re a developer building on the API; invisible if you’re using Claude/ChatGPT in the chat.
- ChatGPT — OpenAI’s chat product. The default many freelancers start with.
- Claude — Anthropic’s chat product. Stronger than ChatGPT for long-form drafting in our experience; weaker for short brainstorming.
- Context window — How many words a model can hold in its head at once. Bigger window = you can paste a longer document. 200K tokens (Claude) ≈ 150,000 English words.
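If you want to sanity-check window sizes yourself, the 200K-tokens-to-150,000-words figure above is just multiplication by this glossary’s rule-of-thumb ratio. A minimal sketch (the 0.75 ratio is an approximation, not an exact tokenizer count, and real tokenizers vary by language and model):

```python
# Rough token -> word conversion used in this glossary: 1 token ~= 0.75 words.
WORDS_PER_TOKEN = 0.75

def tokens_to_words(tokens: int) -> int:
    """Approximate English word capacity for a given token count."""
    return int(tokens * WORDS_PER_TOKEN)

# Claude's 200K-token context window:
print(tokens_to_words(200_000))  # 150000 words, matching the entry above
```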
D
- DeepSeek — A Chinese AI company. Notable in 2026 for sub-$0.20-per-million-token pricing. Privacy/jurisdiction caveats — see our DeepSeek V4 Field Report.
E
- Embedding — A way of representing text as numbers so a computer can compare meaning. You don’t need to understand it. You need to know it powers “semantic search” features in tools like Perplexity.
F
- Field Report — ToolMint’s term for a post about a tool the author hasn’t personally used yet. Starts with “I haven’t tested this yet, but…” Used for new launches and Phase D weekday news.
- Fine-tuning — Training a model on your specific data. Expensive, slow, almost never the right move for solo operators in 2026 — RAG (see below) does the same job at 1% of the cost.
- Free tier — The version of a paid tool you don’t pay for. Almost always good enough for casual use, almost never good enough for billable client work.
G
- GPT — OpenAI’s family of models (GPT-4, GPT-5, GPT-5.5, etc.). When you say “GPT” most people mean ChatGPT.
H
- Hallucination — When a model invents a fact (a citation, a feature, a number). Still happens in 2026. The defense is sourcing — our writing SOP requires every numerical claim to be cited.
L
- LLM (Large Language Model) — The class of AI behind Claude, ChatGPT, Gemini, etc. “AI tool” usually means an app built on top of an LLM.
M
- Mode A / Mode B — ToolMint’s content modes. Mode A = Tested (the author has used the tool ≥ 2 weeks on real client work). Mode B = Field Report (the author has not used it; analysis is from public sources). Always disclosed in the post.
- MMLU — A benchmark testing general knowledge. ~85% in 2026 means “competent generalist.” Numbers above 90% are state-of-the-art.
N
- Notion AI — Notion’s built-in AI features. Costs $10/month on top of Notion’s $12/month Plus plan. Worth it only if your work already lives in Notion.
P
- Perplexity — An AI search engine that synthesizes across multiple sources and cites them. ToolMint’s default first stop for client research.
- Plus / Pro — Generic name for a tool’s paid tier. Plus = ChatGPT’s mid-tier ($20/month). Pro = Claude’s mid-tier ($20/month). Confusing on purpose.
- Prompt — What you type into the model. The whole industry of “prompt engineering” is mostly overhyped — clear writing is 80% of it.
R
- RAG (Retrieval-Augmented Generation) — A technique where the model looks up your documents before answering. Powers Claude Projects, Notion AI Q&A, Perplexity Spaces. The 2026 default for “ask my data” features.
S
- Sonnet / Opus / Haiku — Anthropic’s model sizes. Opus = biggest, slowest, smartest. Sonnet = mid-tier (the default in Claude Pro). Haiku = fastest, cheapest, dumbest.
- Subscription — What every AI tool wants from you. The ToolMint editorial position is that no single subscription should cost more than 2% of your average monthly client revenue.
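The 2% cap is a one-line calculation. A sketch with a hypothetical revenue figure (the $8,000/month example is ours, not a ToolMint benchmark):

```python
# ToolMint's editorial cap: no single subscription should cost more than
# 2% of your average monthly client revenue.
def subscription_cap(avg_monthly_revenue: float, cap_pct: float = 0.02) -> float:
    """Maximum monthly price for any one tool under the 2% rule."""
    return avg_monthly_revenue * cap_pct

# At a hypothetical $8,000/month in average revenue, the cap is $160/month,
# so a $20/month ChatGPT Plus or Claude Pro plan clears it easily.
print(subscription_cap(8_000))  # 160.0
```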
T
- Token — A word fragment, the unit models actually read and bill by. Roughly 0.75 words ≈ 1 token. API pricing is usually quoted per million tokens.
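To see how per-million-token pricing plays out, here is a back-of-envelope estimate using the 0.75-words-per-token rule of thumb and the sub-$0.20 DeepSeek-style price mentioned earlier in this glossary (the 1,500-word brief is a made-up example):

```python
# Back-of-envelope API cost estimate using this glossary's rule of thumb.
def estimate_tokens(word_count: int) -> int:
    """Approximate token count: 1 token ~= 0.75 words."""
    return round(word_count / 0.75)

def api_cost(word_count: int, price_per_million_tokens: float) -> float:
    """Estimated dollar cost to send this many words at a per-million-token rate."""
    return estimate_tokens(word_count) / 1_000_000 * price_per_million_tokens

# A 1,500-word client brief (~2,000 tokens) at $0.20 per million tokens:
print(f"${api_cost(1_500, 0.20):.4f}")  # $0.0004 -- a fraction of a cent
```

The point of the math: at these rates, API pricing is negligible for occasional use; the cost only matters at developer-scale volume.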
Words we deliberately avoid
These words appear on every AI marketing page. They mean nothing concrete. ToolMint doesn’t use them in reviews:
- Game-changer, revolutionary, mind-blowing, insane, no-brainer, must-have, seamless, powerful, unleash, next-generation
If a tool is genuinely useful, we’d rather show you a real number — minutes saved, dollars per month, files processed. The full list of banned words is in our writing SOP.
Reviewed and published by ToolMint editorial. New terms get added here when they show up in a review for the first time. Suggestions: see the About page for how to send them in.
Where these terms show up in real reviews
- GPT-5.5 vs Claude Code: What the Numbers Say Before I Test — puts benchmark numbers in context before any hands-on testing.
- Cursor for Non-Developers: A Freelancer’s First Month — the practical month-one view from a non-developer.