I wonder if this "pixels vs. tokens" idea can also be extended to "audio vs. text".
Most languages ever spoken have never had a written form. So much rich information (tone, subtext, prosody) is lost when we flatten spoken words into a transcript.
It could be a fun experiment to compare "literate" LLMs (trained on images of text) with "oral" LLMs (trained on raw audio), and draw a parallel to human societies: one thinks in symbols and space, the other in rhythm and time.

Oct 21, 06:13
I quite like the new DeepSeek-OCR paper. It's a good OCR model (maybe a bit worse than dots), and yes, data collection etc., but anyway that doesn't matter.
The more interesting part for me (esp. as a computer vision person at heart who is temporarily masquerading as a natural language person) is whether pixels are better inputs to LLMs than text. Whether text tokens are wasteful and just terrible at the input.
Maybe it makes more sense that all inputs to LLMs should only ever be images. Even if you happen to have pure text input, maybe you'd prefer to render it and then feed that in (a minimal sketch follows the list below):
- more information compression (see paper) => shorter context windows, more efficiency
- significantly more general information stream => not just text, but e.g. bold text, colored text, arbitrary images.
- the input can now be processed with bidirectional attention easily and by default, rather than autoregressive attention - a lot more powerful.
- delete the tokenizer (at the input)!! I already ranted about how much I dislike the tokenizer. The tokenizer is an ugly, separate, non-end-to-end stage. It "imports" all the ugliness of Unicode and byte encodings, inherits a lot of historical baggage, and adds security/jailbreak risk (e.g. continuation bytes). It makes two characters that look identical to the eye look like two completely different tokens internally in the network (see the little demo below). A smiling emoji looks like a weird token, not an... actual smiling face, pixels and all, and all the transfer learning that brings along. The tokenizer must go.
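
That "two characters that look identical" point is easy to demonstrate directly. The pair below is Latin "A" vs. Cyrillic "А": the same glyph to the eye, but different codepoints, different bytes, and different token ids. Using tiktoken here is my choice for illustration, not anything the post endorses:

```python
# Two visually identical characters, completely different identities
# below the surface. tiktoken is used only as an illustrative tokenizer.
import tiktoken

latin_a, cyrillic_a = "A", "\u0410"  # both render as "A"
print(latin_a.encode("utf-8"))       # b'A'
print(cyrillic_a.encode("utf-8"))    # b'\xd0\x90'

enc = tiktoken.get_encoding("cl100k_base")
print(enc.encode(latin_a))           # one token id
print(enc.encode(cyrillic_a))        # a different token sequence entirely
```

Rendered as pixels, the two inputs would be indistinguishable, which is exactly the transfer-learning win being argued for.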
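
And here is the promised minimal sketch of the "render it and then feed that in" idea, assuming Pillow for rasterization; `vision_encoder` is a hypothetical stand-in for whatever patch encoder would consume the image, not anything from the paper:

```python
# Minimal sketch: rasterize plain text into an image that a vision
# encoder could ingest instead of token ids. Assumes Pillow (PIL).
from PIL import Image, ImageDraw, ImageFont

def render_text(text: str, width: int = 1024, pad: int = 16) -> Image.Image:
    """Render text into a grayscale image (black glyphs on white)."""
    font = ImageFont.load_default()
    chars_per_line = 100  # crude wrap; a real renderer would measure glyphs
    lines = [text[i:i + chars_per_line]
             for i in range(0, len(text), chars_per_line)] or [""]
    line_h = 14
    img = Image.new("L", (width, 2 * pad + line_h * len(lines)), color=255)
    draw = ImageDraw.Draw(img)
    for i, line in enumerate(lines):
        draw.text((pad, pad + i * line_h), line, fill=0, font=font)
    return img

img = render_text("Even if you happen to have pure text input, render it first.")
# patches = vision_encoder(img)  # hypothetical: pixels in, no tokenizer
```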
OCR is just one of many useful vision -> text tasks. And text -> text tasks can be made to be vision -> text tasks. Not vice versa.
So maybe the User message is images, but the decoder (the Assistant response) remains text. It's a lot less obvious how to realistically output pixels... or whether you'd even want to.
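
One natural reading of "images in, text out" is a prefix-LM attention pattern: the image patches attend bidirectionally among themselves, while the decoded text stays causal. A minimal sketch of such a mask, assuming PyTorch; the shapes and names are mine, not from any particular implementation:

```python
# Sketch: prefix-LM attention mask where image patches form a fully
# bidirectional prefix and text tokens are decoded causally.
import torch

def prefix_lm_mask(n_img: int, n_txt: int) -> torch.Tensor:
    """Boolean mask, True = attention allowed, shape (n_img+n_txt, n_img+n_txt)."""
    n = n_img + n_txt
    mask = torch.zeros(n, n, dtype=torch.bool)
    # every position (image or text) sees the whole image prefix
    mask[:, :n_img] = True
    # text positions additionally see earlier text only (causal)
    mask[n_img:, n_img:] = torch.tril(torch.ones(n_txt, n_txt)).bool()
    return mask

print(prefix_lm_mask(3, 4).int())  # 3 image patches, 4 text tokens
```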
Now I have to also fight the urge to side quest an image-input-only version of nanochat...
It would be a contest between a civilization that builds monuments in stone and one that builds monuments in song.
Another thing on my mind is Shelley's Ozymandias: the colossal relics buried in sand, and yet that imagery is delivered not through the poet's own eyes, but through a traveler's oral tale.

