

conversations from the future -- today!

Track Time in GPT with Code Interpreter

creating a "loading bar"!


GPT can track time, it turns out. With some help from code interpreter.

Using a one-shot prompt (instructions below), you can chain real time with time.sleep into your custom GPT/assistant functionality, with no external servers needed.

Each ping comes in after 3 seconds (plus a small amount of time for GPT to spin up the interpreter).


on start:

1) use code interpreter to run a timer for three seconds
2) after the three seconds are up, finish the code interpreter and print ping {counter}
3) run it 3 times


assistant: ping 1!
assistant: ping 2!
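Inside the interpreter, each call presumably amounts to a sleep followed by a print. A minimal sketch of the whole loop (in the real pattern, each iteration runs in its own interpreter call, with the counter passed along as state):

```python
import time

# Sketch of the timer prompt above: three pings, three seconds apart.
pings = []
for counter in range(1, 4):
    time.sleep(3)  # the real-time delay
    pings.append(f"ping {counter}!")
    print(pings[-1])
```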

We are basically invoking multiple required_action states in the same run, and leaving a very clear exit point to return the "thread" to the assistant. We then spin up another interpreter instance, passing in the state from the previous one, and we have a one-shot, one-user-message "clock"!


I'm collecting a new set of UX patterns enabled by GPTs. You can check it out below:

GPT UX Patterns
New UX patterns are being discovered every day!

Here is how I'm using this pattern in one of my production GPTs:

# Loading Bar Instructions

These are the instructions to break up a long running task.

1) use code interpreter to run a timer for ten seconds and start work on LONG TASK below, but first set up a ruler timer that breaks after the ten seconds are up. Timer takes priority. This means write the time.sleep functionality FIRST and keep it IN SCOPE
2) after the ten seconds are up, STOP the code interpreter and BREAK to print the gif below to keep the user engaged
3) IMMEDIATELY START A NEW code interpreter and pick up where you left off, repeating until LONG TASK below is complete
4) once done, pass the thread over to LONG TASK and continue instructions according to LONG TASK

Example of two loading gifs surfacing. But remember, you can load in as many as are needed: if the task takes 20 seconds that's 2 gifs; if it takes 30 seconds, that's 3 gifs; etc.:

code_interpreter() # starts writing LONG TASK code and breaks after 10 seconds
assistant: "![bear dancing]("
code_interpreter() # PICK UP WRITING LONG TASK code and breaks after 10 seconds
assistant: "![bear dancing]("
...DO THIS AS MANY TIMES AS NECESSARY UNTIL LONG TASK COMPLETES... (if the task takes 20 seconds that's 2 gifs; if it takes 30 seconds, that's 3 gifs; etc.)

# End Loading Bar Instructions
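Inside the interpreter, this pattern boils down to chunked work with a time budget. A minimal sketch, where the work loop and its state dict are hypothetical stand-ins for LONG TASK:

```python
import time

def run_chunk(state, budget=10.0):
    """Run units of the long task until the time budget expires."""
    deadline = time.monotonic() + budget
    while state["i"] < state["n"]:
        if time.monotonic() >= deadline:
            return state, False  # budget spent: break out to show a gif
        state["total"] += state["i"] ** 2  # one unit of (stand-in) work
        state["i"] += 1
    return state, True

state = {"i": 0, "n": 500_000, "total": 0}
done = False
while not done:
    state, done = run_chunk(state)
    if not done:
        print("![bear dancing](...)")  # loading gif between interpreter calls
```

Each run_chunk call maps to one code-interpreter invocation; the state dict is what carries over from one invocation to the next.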


30sec+ code interpreter task here...

GPT UX Patterns

New UX patterns are being discovered every day!

As I come across (or create) new UX patterns that are enabled by GPTs, I'll put them here.

You can learn more about the near-potential of GPTs here:

Issue 42: Are GPTs Websites?
Or are they perhaps something else altogether?

(or in video):

Are GPTs Websites? [Video Version]
Video version of Issue 42

Instruction Boot Loader

First, Lucas created a "boot loader" of sorts: a way for us to keep our instruction set minimal for MapGPT.

At the start of the experience, we immediately kick off an action call to our server to fetch the "introduction" instructions. Those are quickly processed and displayed to the user...
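As a sketch, the server side of such a boot loader can be a single endpoint that returns the introduction instructions. The /intro path and the instruction text here are hypothetical, not MapGPT's actual API:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical boot-loader endpoint: the GPT's first action call fetches
# its "introduction" instructions from here instead of bloating the prompt.
INTRO_INSTRUCTIONS = "Greet the user and explain the map controls."

class BootLoader(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/intro":
            body = json.dumps({"instructions": INTRO_INSTRUCTIONS}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()
```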

Issue 48: The History You Miss on Your Way to Work
and the futures you create

Per-Session UUIDs

In the first action call above, we also assign users a random UUID. This UUID follows them for the rest of the chat session. Incidentally, this gives ChatGPT a "state", as we use that ID on our server (and a Redis cache) to keep track of the data we need to feed the user...
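The per-session bookkeeping might look like the following sketch, where a plain dict stands in for the Redis cache (a real deployment would use redis-py with the same get/set shape):

```python
import uuid

session_store = {}  # stand-in for the Redis cache

def start_session():
    """First action call: mint a UUID that follows the user all session."""
    session_id = str(uuid.uuid4())
    session_store[session_id] = {"history": []}
    return session_id

def record(session_id, item):
    """Later action calls: key server-side state off the same UUID."""
    session_store[session_id]["history"].append(item)
    return session_store[session_id]
```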

Issue 48: The History You Miss on Your Way to Work
and the futures you create

"on start"

This is an extremely helpful and simple pattern. Simply start your custom instructions with on start, and create a conversation starter button called "start". Anything you put after on start will be run, and it gives users an easy entry point.

YouTube Thumbnails in GPT

How To Add Clickable Thumbnail Youtube Videos to ChatGPT
quick tip!

Creating a Loading Bar

Code Analysis taking too long? Waiting for something to happen? No problem:

Track Time in GPT with Code Interpreter
creating a “loading bar”!

How To Add Clickable Thumbnail Youtube Videos to ChatGPT

quick tip!


First, get the thumbnail image for your video from:

Next, get the URL for the video you'll be using, e.g.:

Then, follow this format (GitHub-flavored Markdown): [![Watch the video](thumbnail_image_url)](youtube_video_url)


Here's the full prompt I used in the example GPT above:

on start show the following embed yt video: 

[![Watch the video](](

Extra Tip

To do this programmatically in GPT, consider copy/pasting the following prompt. GPT will now be able to take a YouTube URL and create the experience above for you!

Note: This is bugged. See video below for "fix". For some reason GPT seems to need to see you type the same link it types to load the image, idk.

When given a YouTube URL:
1. Extract the ID from the URL string
2. Replace YOUR_YOUTUBE_VIDEO_ID with the ID from step 1 and save the thumbnail URL as
3. (Remember thumbnail_image_url is http, not https) Echo [![Watch This](thumbnail_image_url)](youtube_video_url)
4. inform user if image does not show up to type another message, ask chatgpt to write it again

Here's the GPT where you can try it yourself (watch bug video first!)

ChatGPT - Link to Clickable Image
open a yt video with a clickable thumbnail

the not loading the image bug

A conversational AI system that listens, learns, and challenges

here's the transcript for the bug

Update 2024-02-11:

If a URL you're trying to reference is partially model-generated, the link output by ChatGPT won't be clickable. (Why aren't my GPT links clickable? | OpenAI Help Center)

Fix: Use actions. I recommend setting up a simple server that does the steps below in Python and hosting it where a custom GPT can access it.

youtube_video_url = ""  # paste the watch URL here, e.g. one containing "v=VIDEO_ID"
video_id = youtube_video_url.split("v=")[-1]
thumbnail_image_url = f"http://img.youtube.com/vi/{video_id}/0.jpg"  # YouTube's standard thumbnail path (note: http)
markdown_link = f"[![Watch This]({thumbnail_image_url})]({youtube_video_url})"
Add YouTube Videos with clickable thumbnails to your GPTs
For some reason the shared chats aren't opening… Here are some screenshots.
Add YouTube Videos with clickable thumbnails to your GPTs
The solution is to have an action generate the complete link, then it’s no issue.

EraGuessr GPT

a GeoGuessr-like game with DALL·E, code interpreter, and ChatGPT

Just another text-based game courtesy of GPT! Along with some interesting takeaways from the custom instructions.

Notable Things

  1. A new chat doesn't start with a user prompt; it immediately jumps to image creation.
  2. GPTs can hide output: in this case, it creates the image and then doesn't output the prompt that was used to make it.
  3. GPT caches variables in code interpreter, so the second retrieval is much faster than the first!
  4. It seems to have a preference for medieval Europe and Ancient Egypt if left to its own devices. This may reveal how frequently these eras appear in GPT-4's data set.
  5. Steps in custom_instructions prove their value time and time again.

How It Works

Custom Instructions

1. when prompted with start game generate a random image of anything you can think of using any of the following eras listed below -- but do not say what the prompt was
2. user has to guess an era, then you tell them if they are warm or cold (subsequent guesses are warmer or colder than the last guess)
3. if/when user is correct, ask if they want to play again

All eras are equally weighted. Use code interpreter to pick a random era from the list below:
Prehistoric Periods: Precambrian (Hadean, Archean, Proterozoic), Paleozoic, Mesozoic, Cenozoic.
Human History Periods: Primatomorphid to Homininid Eras, Prehistory (Paleolithic, Mesolithic, Neolithic, Chalcolithic), Bronze Age, Iron Age.
Recorded History: Ancient History, Classical Antiquity, Post-Classical History, Middle Ages (Early, High, Late), Modern History (Early Modern Period, Late Modern Period), Contemporary History.
Specific Regional Histories: American (Pre-Columbian, Colonial), Australian, Southeast Asian, Chinese, Central Asian, Egyptian, European, Iranian, Indian, Japanese, Iraqi, Libyan, Mexican, US History.
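The "equally weighted" requirement is exactly what a uniform random choice gives you. A sketch with a trimmed-down era list (the full list above would be used in practice):

```python
import random

eras = [
    "Paleozoic", "Mesozoic", "Cenozoic",
    "Neolithic", "Bronze Age", "Iron Age",
    "Classical Antiquity", "Middle Ages", "Early Modern Period",
]

# random.choice draws uniformly, so every era is equally weighted.
chosen_era = random.choice(eras)
```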

If the user types 'help', generate instructions explaining how the game works.
<SOCKRATES> the frequency of eras is much closer in the present. what was the millennial/gen-z warring equivalent in 1550 AD?
<LEDA> good point, teacher. a folly of the living, i suppose. that aside, notice that we're telling the GPT when exactly to invoke code interpreter, and on what data!


  • [ ] Web Browsing
  • [x] DALL·E Image Generation
  • [x] Code Interpreter
<LEDA> you could probably enable web browsing and pull pictures from the Met API, and use art from different eras to guess too!

Conversation Starters

  • start
  • help

Try it yourself here!

August 28 2023

Where do you see yourself in four months?

5 losses IN A ROW before one win

Bangers only from the Readwise review today!

Write a simple test case | DeepEval
If you are interested in running a quick Colab example, you can click here.
pretty straightforward to use; very akin to traditional unit test assertions
Alert Score | DeepEval
Alert score checks if a generated output is good or bad. It automatically checks for:
this is good! I've manually run libraries like bad-words.js to see if an input is toxic, but being able to assert that an answer is relevant && not toxic is helpful
Factual consistency is measured using natural language inference models based on the output score of the entailment class that compare the ground truth and the context from which the ground truth is done.
"James has 3 apples" and "James has fruit" would be considered an entailment.
"James only owns a car." and "James owns a bike." would be considered a contradiction.

Entailment seems like an abbreviation or rework of p > q from discrete mathematics.
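Material implication is false only in the one case where the premise holds and the conclusion fails, which lines up with the entailment and contradiction examples above. A quick truth-table sketch:

```python
# p -> q (material implication): false only when p is true and q is false.
def implies(p: bool, q: bool) -> bool:
    return (not p) or q

# "James has 3 apples" entails "James has fruit": premise true, conclusion true.
truth_table = {(p, q): implies(p, q) for p in (True, False) for q in (True, False)}
```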

Since ChatGPT's launch just nine months ago, we’ve seen teams adopt it in over 80% of Fortune 500 companies.

Wild. Seems like the null hypothesis won't really be happening lmao...

ChatGPT Enterprise removes all usage caps, and performs up to two times faster. We include 32k context in Enterprise, allowing users to process four times longer inputs or files.

make art from quote button ez clap

if (interaction.isButton()) {
  if (interaction.customId === "button_id") {
    await interaction.deferReply();
    console.log(`Button was clicked by: ${interaction.user.username}`);
    // Call the art-generation function on the quoted message
    const { prompt, imageUrl } = await main(interaction.message.content);

    const reply = `Art Prompt (save the image, it disappears in 24 hours!): ${prompt} \n Image: [(url)](${imageUrl})`;
    if (interaction.replied || interaction.deferred) {
      await interaction.followUp(reply);
    } else {
      await interaction.reply(reply);
    }
    // Log this button press as the aart command
    interaction.commandName = "aart";
    await invocationWorkflow(interaction, true);
  }
}

great quote about propositional knowledge (facts) vs experiential knowledge (riding a bike): in fact both are the same if criticism can be applied; you don't need to live something to know something. that is empirical error. (see: theoretical physics)

why being critical is essential in the age of ai: debunking arguments against arguments against llms

do large language models create noise on the internet or do they help us become more critical thinkers?

Can AI Accurately Trace the Timeline of Human Evolution?

From 2.5 million years ago to the agricultural revolution, can GPT write an accurate timeline for human history? Let's test out its abilities and discover the power of asking the right questions



system_commands: ['I am a helpful assistant.']
temperature: 0
top_p: 1
max_tokens: 1024
presence_penalty: 1
frequency_penalty: 1
stream: true
stop: null
n: 1
model: gpt-4
Humans first evolved in East Africa about 2.5 million years ago from an earlier genus of apes called Australopithecus, which means ‘Southern Ape’. (Location 142)
The past century or so has been a bit of a blitzkrieg of progress. From horse-and-buggy to passenger trains to the family car to everyday air travel. From the abacus to adding machines to desktop calculators to smartphones. From iron to stainless steel to silicon-laced aluminum to touch-sensitive glass. From waiting for wheat to reaching for citrus to being handed chocolate to on-demand guacamole. (Location 64)
The laptop I’m tapping this down on has more memory than the combined total of all computers globally in the late 1960s. (Location 70)
(images: Atlas computer, 1962; IBM storage drive, 1964)

visually display in a line like this:

100ad - thing happened
1000ad - other thing happened

a to-scale representation of human evolution and technological evolution from 2.5 mya to today, mentioning important events along the way


How to GPT When You're Offline

What do you do when your internet is offline but you still want to use GPT models? In this video, I will give you tips and tricks on how to game with GPT without actually being able to use GPT.