37. Is ThinkHere really private?
Yes.
ThinkHere's privacy is architectural. The AI model runs on your device using WebGPU, and your prompts and responses are stored only in your browser's local storage. No network request carries your conversation to any server during inference: not for any tier, not for any model.
You can verify this yourself. ThinkHere is open source under the MIT licence. The codebase is publicly available for inspection. You can also watch your browser's network activity while using ThinkHere; you will see no outbound requests during a conversation.
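The network-activity check above can also be scripted from the DevTools console. A minimal sketch (generic browser code using the standard Performance API, not part of ThinkHere itself; the helper name is ours): count the page's network requests before and after sending a prompt, and confirm the count does not grow.

```javascript
// Hypothetical helper: count the network requests the page has made
// so far, using the standard Performance API ("resource" entries).
// If inference is fully local, the count should not increase between
// sending a prompt and receiving a response (model downloads aside).
function networkRequestCount() {
  return performance.getEntriesByType("resource").length;
}

// Run once before sending a prompt and once after, then compare.
console.log("requests so far:", networkRequestCount());
```

Note that `performance.getEntriesByType` may drop old entries if the page clears its buffer, so for a long session the Network tab remains the most reliable view.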
38. Do I need an account to use ThinkHere?
No.
You can start using ThinkHere immediately without creating an account. Visit the site in a supported browser and SmolLM2 1.7B loads automatically after a one-time model download.
A free account unlocks more models, conversation history, system prompts, file upload, and the knowledge base. See ThinkHere pricing and tiers explained for the full comparison.
39. Does ThinkHere use cloud inference?
No.
ThinkHere does not use cloud inference at any tier. Every response is generated locally on your device using WebGPU. There is no remote AI server processing your prompts, not even in the background.
The only connections ThinkHere makes to external servers are downloading model weights on first use and account authentication if you are signed in. Neither involves your conversation content.
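Whether local inference is even possible can be feature-detected in a line of code: WebGPU's entry point is `navigator.gpu`. A small sketch (the helper name is ours; `navigator.gpu` is the real API), written against a passed-in object so it can be exercised outside a browser:

```javascript
// Returns true if the given navigator-like object exposes WebGPU.
// In a page you would call hasWebGpu(navigator); if it returns false,
// local inference cannot run at all in that browser.
function hasWebGpu(nav) {
  return nav != null && nav.gpu != null;
}
```

A page can use this to show a friendly "browser not supported" message instead of failing during model load.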
40. Why do models need to be downloaded?
Because ThinkHere runs the AI model on your device rather than on a server, your browser needs the model files available locally. Model weights, the data that defines how the model responds, are typically 600 MB–4.5 GB and cannot be generated or approximated on the fly.
The download happens once per model. After that, ThinkHere loads the model from your browser's local cache in seconds. You only need an internet connection for the initial download, not for ongoing use.
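The once-per-model behaviour described above follows the standard browser caching pattern: check a local cache first, and hit the network only on a miss. A sketch under assumptions (the function and argument names are illustrative, not ThinkHere's actual code):

```javascript
// Hypothetical once-per-model download. `cache` is any object with
// async match/put methods (for example a Cache from the browser's
// Cache Storage API) and `fetchFn` performs the actual download.
async function loadModelWeights(url, cache, fetchFn) {
  const cached = await cache.match(url);
  if (cached !== undefined) {
    return cached;                    // fast path: weights already local
  }
  const weights = await fetchFn(url); // one-time network download
  await cache.put(url, weights);      // persist for future visits
  return weights;
}
```

With this shape, the second and every later call for the same URL never touches the network, which matches the "download once, load from cache in seconds" behaviour described above.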
41. Can I use ThinkHere on mobile?
iPhone is not currently supported. iOS memory constraints prevent the browser from loading model weights of the size required to run a language model in ThinkHere. M-series iPads (M1 or later) may work using Safari 18+, though performance varies and load times are longer than on a desktop or laptop.
Android is tentatively supported on newer high-end devices with Chrome 113+. Because of the heavy memory demands, most phones will crash the browser or fail to load models at all.
For the best experience, use ThinkHere on a desktop or laptop with Chrome 113+, Edge 113+, or Safari 18+ and at least 6 GB of free RAM.
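A rough pre-flight check for these requirements can be sketched in code. This is a hint, not a guarantee: the helper name and threshold are ours (the 6 GB figure follows this FAQ), and `navigator.deviceMemory` is a real but Chromium-only API that reports approximate total RAM in GB, capped at 8, rather than free RAM:

```javascript
// Rough capability check. In a page, call canLikelyRunLocally(navigator).
// navigator.deviceMemory reports approximate *total* RAM (capped at
// 8 GB, Chromium only), so treat the result as a hint, not a promise.
function canLikelyRunLocally(nav) {
  const hasGpu = nav != null && nav.gpu != null;     // WebGPU present?
  const totalRamGb = (nav && nav.deviceMemory) || 0; // approximate GB
  return hasGpu && totalRamGb >= 6;
}
```

On Safari, which does not expose `deviceMemory`, this helper would report false even on capable hardware, so a real page would need a separate path for that browser.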