LLM

Good info from Sebastian Raschka about his implementation of the Gemma 3 model in pure Python in this HN thread (he is canyon289). The benchmarks look interesting, with the KV-cache compiled model on a Mac M4 generating more tokens (224) than the one running on an A100 GPU (99). ^fd6a17

On Prompt Structure — Omar Khattab on X: “men will literally concatenate 10 different string blurbs instead of coding a Signature”. I’ve also seen this infographic in this talk on DSPy: Let the LLM Write the Prompts: An Intro to DSPy in Compound AI Pipelines by Drew Breunig.
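The “Signature” jab lands because a declared input/output contract replaces ad-hoc prompt assembly. Here is a toy sketch of that idea in plain Python; this is not DSPy’s actual API, and the class and field names are invented for illustration:

```python
# Hypothetical, DSPy-flavored sketch: declare a task's inputs and outputs
# once, and derive the prompt from the declaration, instead of
# concatenating string blurbs by hand at every call site.

class Signature:
    """Base class: subclasses declare fields as class attributes."""
    @classmethod
    def fields(cls):
        # Non-dunder string attributes are treated as field descriptions.
        return {k: v for k, v in vars(cls).items()
                if not k.startswith("_") and isinstance(v, str)}

class Summarize(Signature):
    # field name -> human-readable description
    document = "the text to summarize"
    summary = "a one-sentence summary"

def render_prompt(sig, **inputs):
    """Build the prompt text from the declared fields, not ad-hoc strings."""
    lines = []
    for name, desc in sig.fields().items():
        if name in inputs:
            lines.append(f"{name} ({desc}): {inputs[name]}")
        else:
            lines.append(f"{name} ({desc}):")  # left blank for the model
    return "\n".join(lines)

print(render_prompt(Summarize, document="LLMs are large..."))
```

The point is the contrast: every call site reuses one declaration, so changing the task’s shape means editing the Signature, not hunting down ten string concatenations.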

Google’s Nano Banana AI image editing appears to be quite “amazing” (e.g., create a photo of someone looking straight ahead from their side-profile pic).

Programming

Someone should take the idea behind Air, a “new Python web framework built on FastAPI,” and do the same for Litestar. I buy Steve Bennett’s argument that Litestar is a sturdier framework for larger applications; Litestar is worth a look.

“Compounding Engineering” turns every pull request, bug fix, and code review into permanent lessons your development tools apply automatically.

This blog post — Why Semantic Layers Matter — and How to Build One with DuckDB - MotherDuck Blog — taught me a new concept: the “semantic layer”. Need to study this more; I feel this would have been a really useful concept to know at Ollyver. Lots of links to follow in that post.
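As I understand it, a semantic layer maps business terms (metrics, dimensions) to the SQL that computes them, so every consumer gets the same definition. A toy sketch of the idea in plain Python; the table, column, and metric names are invented for illustration and are not from the MotherDuck post:

```python
# Toy semantic layer: metrics and dimensions are defined once, and
# queries are compiled from those shared definitions rather than
# hand-written per dashboard. All names here are invented examples.

METRICS = {
    "revenue": "SUM(amount)",
    "order_count": "COUNT(*)",
}
DIMENSIONS = {
    "month": "DATE_TRUNC('month', ordered_at)",
    "country": "country",
}

def compile_query(metric, dimension, table="orders"):
    """Build a GROUP BY query from the shared metric/dimension definitions."""
    m, d = METRICS[metric], DIMENSIONS[dimension]
    return (
        f"SELECT {d} AS {dimension}, {m} AS {metric} "
        f"FROM {table} GROUP BY 1 ORDER BY 1"
    )

print(compile_query("revenue", "month"))
```

The payoff is that “revenue” means the same `SUM(amount)` everywhere; in a DuckDB setup like the post describes, the compiled SQL would simply be executed against DuckDB.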

In Reserve First, Alex Kladov talks about how TigerBeetle does “static memory allocation”:

When TigerBeetle starts, for every “object type” in the system it computes the worst-case upper bound for the number of objects needed, based on CLI arguments. Then TigerBeetle allocates exactly that number of objects and enters the main event loop. After startup, no new objects are created. Therefore no dynamic memory allocation or deallocation is needed.
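The reserve-first pattern can be sketched outside TigerBeetle too: size every pool up front from configuration, then only ever reuse those slots. A minimal Python sketch; the object types and counts are made up, and TigerBeetle’s actual implementation (in Zig) is far more involved:

```python
# Reserve-first sketch: compute worst-case counts at startup, allocate
# fixed pools, and after that only acquire/release from the pools --
# never allocate. The counts below are invented, standing in for values
# derived from CLI arguments.

class Pool:
    def __init__(self, make, count):
        # All allocation for this object type happens here, at startup.
        self._free = [make() for _ in range(count)]

    def acquire(self):
        if not self._free:
            raise RuntimeError("pool exhausted: worst-case bound was wrong")
        return self._free.pop()

    def release(self, obj):
        self._free.append(obj)

# Startup: worst-case upper bounds computed once (stand-in values).
transfers = Pool(dict, count=4)

# Main event loop: objects are reused, never newly created.
t = transfers.acquire()
t["amount"] = 10
transfers.release(t)
```

Exhausting a pool is a hard error rather than a trigger for more allocation, which is what makes the worst-case bound an enforced invariant instead of a hint.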

From around the web

The sound of inevitability

(winning debates)… the only trick in the book, once you boil it all down, is to make sure the conversation is framed in your terms. Once that happens, it’s all over bar the shouting. The post draws on Shoshana Zuboff’s fantastic book The Age of Surveillance Capitalism; a key success of Professor Zuboff’s book is that it has introduced so many new terms to the lexicon. “Inevitabilism” is the belief that certain developments are impossible to avoid.

CARELESS WHISPERS (Japanese Version 1984) sung by Hideki Saijo (1955–2018). We have so much Western- and other world-music-influenced Bollywood, Sandalwood, etc. music that it is interesting when George Michael’s Careless Whisper shows up in your YouTube recommendations. I was looking at this AI generation when this happened: Wonderwall - Oasis (Cover) | Japanese Enka.

My grandfather’s vagabond past | Aeon Essays via avataram.

The McPhee method « the jsomers.net blog — essentially a method driven by the zettelkasten process.

In brief, McPhee’s idea is to never face a blank page. Instead, in stage one he accumulates notes; in stage two he selects them; in stage three he structures them; and in stage four he writes. By the time he is crafting sentences the structure of the piece as a whole, and of each section, even paragraph, and the logic connecting them all, is already determined, thanks to the mechanical work done in the first three stages. McPhee is on rails the whole time he writes his first draft. From there it’s all downhill and the standard thing that everybody does: revision, revision again, then refinement—a sculptor with ax, then knife, then scalpel.

From this thread, I learnt that:

  • The camera sensor only captures ONE color (red, green, or blue) per pixel. The rest are made up.
  • A staggering 2/3 of the colors in the photo are missing from the raw capture, and come from interpolation; or as some might say, “hallucination.” Your camera’s software makes highly educated guesses to fill in the blanks.
  • What the sensor measures isn’t the photo; it’s the raw material for creating one.
  • What you see in a digital photo has never been exactly a perfect reconstruction of reality. While the aesthetics and authenticity of this reconstruction are a legitimate topic for discussion, the debate over photographic “realness” began well before modern AI ever got involved.

and, … “And the result is that the Tesla self driving neural net has a lot fewer pixels to process AND it is faster since you don’t have sensor processing time AND it is far more sensitive in both low and high light situations.”

and, “… Photon count data from sensors isn’t useable by humans. If you saw it mapped to the frame grid it would probably look too dark to see much of anything but for a NN that only sees the world using photon counts, it can see much more than what a human can see from a processed frame image.”
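The “one color per pixel” point is the Bayer mosaic: each photosite sits behind a red, green, or blue filter, and demosaicing interpolates the two missing channels at every pixel. A tiny sketch of nearest-neighbor demosaicing on an RGGB pattern; real camera pipelines use far smarter interpolation than this:

```python
# Bayer RGGB sketch: the sensor records one channel per pixel; the other
# two-thirds of the color data are interpolated ("hallucinated").

def bayer_channel(x, y):
    """Which channel an RGGB sensor actually measures at pixel (x, y)."""
    if y % 2 == 0:
        return "R" if x % 2 == 0 else "G"
    return "G" if x % 2 == 0 else "B"

def demosaic_nearest(raw):
    """Fill each pixel's missing channels from the nearest measured neighbor."""
    h, w = len(raw), len(raw[0])

    def nearest(x, y, ch):
        # Search outward for the closest pixel that measured channel ch.
        for r in range(max(h, w)):
            for dy in range(-r, r + 1):
                for dx in range(-r, r + 1):
                    nx, ny = x + dx, y + dy
                    if 0 <= nx < w and 0 <= ny < h and bayer_channel(nx, ny) == ch:
                        return raw[ny][nx]

    return [[tuple(nearest(x, y, ch) for ch in "RGB") for x in range(w)]
            for y in range(h)]

raw = [[10, 20], [30, 40]]  # one 2x2 RGGB tile: R=10, G=20/30, B=40
print(demosaic_nearest(raw))  # every pixel gets 2 of its 3 channels guessed
```

Even on this 2x2 tile, eight of the twelve output channel values are borrowed from neighboring photosites, which is the “2/3 of the colors are missing” claim made concrete.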


New Pages: LLM Task Specific Pruning, AI Tools

Updated Pages: movies, deno