OpenAI-style streaming. Prompts are chosen at random, and rounds keep running until you click Stop. LaTeX in the output is rendered from $$...$$, \[...\], or \(...\) delimiters. Token counts come from the API's usage data when available, otherwise from a length estimate. Transient errors are retried automatically. Reasoning and output are cached locally; a new request does not clear them until its first token arrives, and only "Clear" removes them. Cached content survives a page refresh.
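The "transient errors are retried" behaviour can be sketched as a small exponential-backoff wrapper around the request. This is a minimal illustration, not the app's actual code: the exception types, attempt count, and delay schedule here are all assumptions.

```python
import time

def with_retries(fn, attempts=3, base_delay=0.5):
    """Call fn(), retrying on transient errors with exponential backoff.

    Sketch only: which errors count as transient, how many attempts are
    made, and the delay schedule are illustrative assumptions.
    """
    for attempt in range(attempts):
        try:
            return fn()
        except (ConnectionError, TimeoutError):
            # Last attempt: give up and surface the error to the caller.
            if attempt == attempts - 1:
                raise
            # Back off: 0.5s, 1s, 2s, ... before the next attempt.
            time.sleep(base_delay * 2 ** attempt)
```

In the streaming case, `fn` would open the stream; because cached reasoning and output are only cleared once the first token arrives, a request that fails and is retried never wipes the previous round's content.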
Stream
Environmental Impact
Every token burned leaves a mark on the planet. Here's what yours looks like.
Carbon Footprint: 0.0 g CO₂ (estimated emissions from token generation)
Driving (equiv.): 0 km (equivalent CO₂ from driving an average passenger car)
Trees to Offset: 0 (trees needed to absorb this CO₂)
Smartphones Charged: 0 (equivalent to fully charging this many phones)
Google Searches: 0 (same emissions as this many searches)
Streaming Hours: 0 hrs (equivalent to streaming video for this long)

Estimates based on Patterson et al. (2022), EPA (2024), Carbon Trust, and Arbor Day Foundation data. Actual emissions vary by data center energy source, model size, and hardware efficiency.
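Each equivalence above reduces to dividing the estimated CO₂ by a per-activity emission factor. A sketch of that arithmetic follows; every factor here is an illustrative placeholder (only loosely in the range of the cited sources), and the grams-per-token figure in particular is a pure assumption, since real emissions vary by data center energy source, model size, and hardware.

```python
# Illustrative emission factors in grams of CO2 per unit of activity.
# Placeholder values, NOT authoritative figures from the cited sources.
FACTORS = {
    "driving_km": 250.0,     # g CO2 per km, average passenger car
    "trees": 21000.0,        # g CO2 absorbed per tree per year
    "phone_charges": 8.0,    # g CO2 per full smartphone charge
    "google_searches": 0.2,  # g CO2 per search
    "streaming_hrs": 36.0,   # g CO2 per hour of streamed video
}

G_PER_TOKEN = 0.002  # assumed g CO2 per generated token (placeholder)

def equivalents(tokens: int) -> dict:
    """Convert a token count into the panel's equivalence figures."""
    grams = tokens * G_PER_TOKEN
    out = {"co2_g": round(grams, 1)}
    for name, factor in FACTORS.items():
        out[name] = round(grams / factor, 2)
    return out
```

For example, under these placeholder factors 100,000 tokens would map to roughly 200 g of CO₂, about 0.8 km of driving, or 25 full phone charges.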