Notes from Monki Gras 2024


After a 5 year hiatus, Monki Gras is back! This year’s theme is prompting.

Prompting Craft: examining and discussing the art of the prompt in code and cultural creation.

I confess I’m not a big LLM or AI user. I attended this Monki Gras more for the vibe and the beer. In the end, I learned a lot, which gave me a good overview of what’s happening in the world of LLM!

Here are a few notes and reflections from some of the talks.

Learning and living with AI

Alex Chan draws parallels between learning to dance and collaborating with AI. First, start with small steps and increase confidence. Imagine doing a jump or flip with your dance partner. How much do you trust them? Dancing with a partner is all about trust and comfort. AI is a partner, too. For an LLM or AI to be successful, trust and comfort need to be built step by step with a good UX.

Rafe Colburn told us about his experience with encouraging the use of AI at Depop. He created initiatives like arranging access and vetting various tools to create a safe space for employees to start thinking about using AI tools. In my experience in the workplace, some people dislike the idea of using AI, others use some tools secretly, and others are more open about it. It’s important to remove the stigma and create a baseline. It’s okay to automate part of your job. It’s okay to use X or Y tool. We can work together to understand and discover how LLM and AI can transform our jobs.

Dr Cat Hicks from the Developer Success Lab presented their research on developers and the transition to generative AI-assisted software work.

43-45% of developers studied showed evidence of worry, anxiety and fear about whether they could succeed in this era of rapid generative-AI adoption with their current technical skill sets.

The ”AI Skill Threat” is real. Personally, coding and software engineering are part of my identity, and the news that my work can be fully or partially automated leads me to question my next career move.

The main takeaway from the talk is that fostering a learning culture and the feeling of belonging in a team alleviates the worry that comes from the “AI Skill Threat”. If we keep finding happiness in learning we have nothing to fear!

Have a look at their research paper and their Generative-AI Adoption Toolkit. Bonus a comic about IA skill threat.

Kristen Foster-Marks made a connection between second language acquisition, learning to code, and reading code. AI can be a great sidekick for learning because you can ask questions about a particular line or ask it to explain what a block of code does.


Zack Akil taught us about the IVO test he invented at Google. This test allows us to assess the viability or usefulness of an AI-based interface or software. The test is a simple question: Can the user immediately validate the generated output?

Zack followed with some tips for when our ideas or interfaces fail the IVO test:

  • Reframe the end user (e.g. make sure the tool is used by the intended audience who can interpret the results).
  • Add post-grounding to UI to increase confidence (e.g. show the source material).
  • Add pre-grounding (e.g. Retrieval augmented generation or RAG).

Creative tools

Patrick Debois presented the current state-of-the-art world of open-source creative AI. Some links:

  • ComfyUI is a fascinating node-based tool for manipulating LLM and AI models.
  • Segment Anything Model is a model that can “cut out” any object in any image.
  • Prompt travelling with AnimateDiff can generate animations.

I was surprised by the power of ComfyUI; it looks like a promising creative tool. It really excels at letting users connect and play with various models.

Exploiting AI software

Paul Molin gave us an excellent summary of LLMs’ security vulnerabilities and possible attacks. He showed us the concept of prompt injection (similar to SQL injection) and how we can run untrusted code or access resources available to the LLM by crafting some special prompts. Imagine that you created a product that is just using an LLM with a very elaborated prompt that you have crafted. You need to make sure the custom prompt your customers are paying for stays secret! Paul showed us some techniques to hack LLM-based applications, in order to access data used during RAG or find the “secret” prompt. Some examples include using images and steganography to hide a secret prompt that can be used as an input or using special prompts that trick the LLM. More examples here.

Creative projects

Matt Webb showcased his project: POEM/1. POEM/1 is an internet-connected LLM-based “sentient” and poetic clock. Every minute, the clock generates a poem based on the current time. I thought the generated poems are sometimes surprising and sometimes rubbish, but they truly show some personality that makes the clock lovable.

I was curious if anyone had created a comic book using AI. Jim Boulton has! His project is called Unsung Heroes of the Information Age. In each comic, he tells the story of someone who greatly influenced computer science. The comic he kindly gave to the audience is about Lynn Conway, who invented the “design methodology that makes today’s billion-transistor silicon chips possible.”

Other things I learned

  • Retrieval augmented generation or RAG
  • Prompt compression is a technique for reducing the number of tokens inputted in an LLM. This can be done to save money, for example. It reminded me of eigenvectors. You can think of a prompt as a set of vectors and want to keep the characteristic words in the prompt.
  • Emil Eifrem showed us how knowledge graphs can improve LLMs by providing context. Words, tone of voice, and body language account for 7%, 38%, and 55% of personal communication. What does this mean for how we prompt AI? Context matters, and that’s why graphs can help improve results.