Token Counter

Count tokens for GPT-4o/4.1, GPT-4/3.5, Claude 3.5/4, Llama 3, and Gemini 1.5/2. All in-browser — no API key, no upload.


How it works: GPT counts use the cl100k_base (GPT-4 / 3.5) or o200k_base (GPT-4o / 4.1) BPE tokenizer, matching OpenAI's exact counts. Claude, Llama, and Gemini counts use per-model approximations calibrated against published tokenizer behavior (typically within 1–3% for English text). For production billing, always verify with the provider's official API.
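The approximation approach can be sketched in a few lines. The characters-per-token ratios below are illustrative assumptions, not this tool's calibrated values; exact counts require each model's real tokenizer.

```python
# Rough token estimates from character length.
# All ratios here are assumptions (~4 chars/token for English prose);
# real counts come from each provider's tokenizer.
CHARS_PER_TOKEN = {
    "gpt": 4.0,      # assumption
    "claude": 3.8,   # assumption
    "llama": 3.9,    # assumption
    "gemini": 4.0,   # assumption
}

def estimate_tokens(text: str, model: str = "gpt") -> int:
    """Approximate token count as len(text) / chars-per-token."""
    if not text:
        return 0
    ratio = CHARS_PER_TOKEN.get(model, 4.0)
    return max(1, round(len(text) / ratio))

print(estimate_tokens("Hello, world!"))  # 3
```

This is the kind of heuristic that stays within a few percent on ordinary English text but drifts badly on code or non-Latin scripts, which is why exact tokenizers are used where available.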

About This Tool

Count the tokens in your text against AI model context windows. Check whether a prompt fits within a model's limits before making expensive API calls, and trim text for cost efficiency.
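A pre-flight fit check is straightforward once you have a token count. The limits and model names below are illustrative assumptions; verify them against current provider documentation.

```python
# Context limits in tokens. Illustrative values only; limits change
# with new releases, so check current model docs.
CONTEXT_LIMITS = {
    "gpt-4-turbo": 128_000,   # assumption
    "claude-3.5": 200_000,    # assumption
}

def fits_context(token_count: int, model: str,
                 reserve_for_output: int = 1024) -> bool:
    """True if the prompt plus reserved output room fits the window."""
    return token_count + reserve_for_output <= CONTEXT_LIMITS[model]

print(fits_context(100_000, "gpt-4-turbo"))  # True
```

Reserving room for the model's output matters because input and output share the same context window on most models.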

How to Use

  1. Paste your text into the input area.
  2. See the token count update instantly.
  3. Compare against model context limits shown in the reference table.

Frequently Asked Questions

How many tokens fit in GPT-4's context window?

GPT-4 Turbo supports a 128K-token context window (roughly 96,000 English words); Claude 3.5 supports 200K tokens. Limits change with new releases, so check the current model documentation.
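The words figure comes from a common rule of thumb of roughly 0.75 English words per token (an assumption; the actual ratio varies with the text and tokenizer). A quick check of the arithmetic:

```python
# Rule-of-thumb conversion: ~0.75 English words per token (assumption).
WORDS_PER_TOKEN = 0.75

def tokens_to_words(tokens: int) -> int:
    return int(tokens * WORDS_PER_TOKEN)

print(tokens_to_words(128_000))  # 96000
print(tokens_to_words(200_000))  # 150000
```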

Why is my token count higher than my word count?

Tokens are subword units: common words map to a single token, but longer or rarer words are split into several. Code, special characters, and non-English text also tend to produce more tokens per word.
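A toy greedy longest-match tokenizer over a tiny hand-made vocabulary illustrates the splitting. This is only a sketch: real BPE tokenizers apply tens of thousands of learned merge rules rather than prefix matching, and the vocabulary below is invented for the example.

```python
# Toy vocabulary (assumption, for illustration only).
VOCAB = {"un", "break", "able", "token", "the", "cat"}

def tokenize(word: str, vocab: set) -> list:
    """Greedy longest-prefix split; unknown characters fall back to
    single-character tokens (real BPE uses learned merges instead)."""
    tokens, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):
            piece = word[i:j]
            if piece in vocab or j == i + 1:
                tokens.append(piece)
                i = j
                break
    return tokens

print(tokenize("unbreakable", VOCAB))  # ['un', 'break', 'able']
```

Here one 11-character word becomes three tokens, which is why token counts routinely exceed word counts.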