№ 31 | A Model for Discussing LLM, Scaling Issues with LLM, AI Governance, Wonder & Awe, and Lessons from Disneyland

A lot of my week was spent reading stuff related to ChatGPT (mostly due to zeitgeist, but also…) in preparation for a short talk I gave on Tuesday. Anyway… I can’t share a new card deck every week, right?

A model for LLM

I’ve been trying to make sense of the LLM hype, from the amazing things I’ve seen the technology do, to the “hallucinations” and gross errors, to deeper ethical concerns. Out of this came a model (v0.1) that I’m using to contain or frame these different conversations. I’ll write a post soon, but in the meanwhile, here’s where I’m at; hopefully this makes sense without narration:

Here's the Mural for this if you want more context and like to zoom in and out. 😜

Scaling issues with LLMs

The most fascinating bit of information I picked up this week came from Per Axbom: essentially, the notion that “the more information we feed these machines, the better the results will be” has…complications.

As these models increase in size, they will remain insecure and lead to all sorts of serious problems. On the one hand, they will inevitably contain significant amounts of privacy-sensitive information. On the other hand, they will continue to be vulnerable to poisoning attacks. There is not enough trustworthy content in the world, and to remain safe they must be reduced in size - and yes, performance must drop.

Read more about this, including the research paper he's drawing from.

Principled artificial intelligence

Lots of chatter recently about ethics and governance for AI. Guess what? This isn't new territory. Here's a synthesis (and visualization) from 2020 comparing “the contents of thirty-six prominent AI principles documents side-by-side.” [H/T Christina Wodtke]

And now onto things other than AI!

Wonder and awe

Unrelated (except by topic?), I recently came across this book (highlights) on Wonder and this article on Awe. Parallel paths? Maybe the universe is trying to tell me something? Also, are Wonder and Awe the same thing, and if not, what are the distinctions? Maybe I should go ask ChatGPT… Or not.

Learning from the Magic Kingdom

Let's round things out with this great article from Jorge Arango, “Learning from the Magic Kingdom.”

I recall Jorge giving an early talk on this topic sometime in 2016, when it was called Lands, Hubs, and Wienies (‘Wienies?!’ Just read the paper!). Anyway, it’s great to see these thoughts unpacked in so much detail. There’s a lot in here related to architecture, experiences, imagination, and more. Oh, and I think it was from this talk that I was first introduced to Kevin Lynch's book The Image of the City and the “five elements that define how people experience urban environment.” Both the book and this article are must-reads.

And… if you enjoy this paper, it pairs nicely with Scott Rogers's GDC talk, “Everything I Learned About Level Design, I Learned from Disneyland.”


By Stephen P. Anderson