chatgpt
conversations from the future -- today!
Track Time in GPT with Code Interpreter
creating a "loading bar"!
GPT can track time, it turns out. With some help from code interpreter.
Using a one-shot prompt (instructions below), you can chain real wall-clock time via time.sleep
into your custom GPT/assistant functionality, with no external servers needed.
Each ping comes in after three seconds (plus a small amount of time for GPT to load the interpreter).
Instructions
on start;
1) use code interpreter to run a timer for three seconds
2) after the three seconds are up, finish the code interpreter and print ping {counter}
3) run it 3 times
ex:
start
code_interpreter()
assistant: ping 1!
code_interpreter()
assistant: ping 2!
...
We are basically invoking multiple required_action
states in the same run, and leaving a very clear exit point to return the "thread" to the assistant. We then spin up another interpreter instance, passing in the state from the previous one, and we have a one-shot, one-user-message "clock"!
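Under the hood, each interpreter call is little more than a sleep plus a counter. Here's a minimal sketch of what one invocation might run; the counter name and its persistence across calls are my assumptions about how the interpreter's session state carries over:

```python
import time

# The interpreter session keeps globals alive between invocations,
# so the counter survives from one ping to the next (assumption)
counter = globals().get("counter", 0)

time.sleep(3)  # the three-second "tick"
counter += 1
print(f"ping {counter}!")
```

Run the same cell three times and you get ping 1, ping 2, ping 3, each roughly three seconds apart.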
Addendum
I'm collecting a new set of UX patterns enabled by GPTs. You can check it out below:
Here is how I'm using this pattern in one of my production GPTs
# Loading Bar Instructions
These are the instructions to break up a long running task.
1) use code interpreter to run a timer for ten seconds and start work on LONG TASK below, but first set up a timer that breaks after the ten seconds are up. Timer takes priority. This means write the time.sleep functionality FIRST and keep it IN SCOPE
2) after the ten seconds are up, STOP the code interpreter and BREAK to print the gif below to keep user engaged
3) IMMEDIATELY START A NEW code interpreter, picking up where you left off, and repeat until LONG TASK below is complete
4) Once done, pass thread over to LONG TASK and continue instructions as according to LONG TASK
ex of two loading gifs surfacing, but remember, you can load in as many as are needed: if the task takes 20 seconds that's 2 gifs, if it takes 30 seconds, that's 3 gifs, etc.:
start
code_interpreter() # starts writing LONG TASK code and breaks after 10 seconds
assistant: "![bear dancing](https://media.giphy.com/media/v1.Y2lkPTc5MGI3NjExZzJwaDNjZHNsNWthNjFhNno4djFsdHVjcngxc2F4MzhudW1jMWoxMSZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9Zw/tsX3YMWYzDPjAARfeg/giphy.gif)"
code_interpreter() # PICK UP WRITING LONG TASK code and breaks after 10 seconds
assistant: "![bear dancing](https://media.giphy.com/media/v1.Y2lkPTc5MGI3NjExZzJwaDNjZHNsNWthNjFhNno4djFsdHVjcngxc2F4MzhudW1jMWoxMSZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9Zw/tsX3YMWYzDPjAARfeg/giphy.gif)"
...DO THIS AS MANY TIMES AS NECESSARY UNTIL LONG TASK COMPLETES... (if the task takes 20 seconds that's 2 gifs, if it takes 30 seconds, that's 3 gifs, etc.)
# End Loading Bar Instructions
# LONG TASK
30sec+ code interpreter task here...
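The slicing idea above can be sketched as the Python the interpreter might run: do units of LONG TASK work until a deadline, then bail out so the assistant can print a gif and re-enter. The state dict and DEADLINE constant are illustrative assumptions, not the exact production code:

```python
import time

DEADLINE = 10  # seconds of work per interpreter call (per the instructions)

def run_slice(state):
    """Work until the deadline, then hand state back for the next call."""
    start = time.time()
    while state["i"] < state["total"]:
        if time.time() - start >= DEADLINE:
            return state, False  # not done: print a gif, start a new interpreter
        state["i"] += 1  # ... one unit of LONG TASK work ...
    return state, True  # done: pass the thread back to LONG TASK

state, done = run_slice({"i": 0, "total": 1000})
```

With trivial work it finishes in the first slice; a real 30-second task would yield two or three times, one gif each.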
GPT UX Patterns
New UX patterns are being discovered every day!
As I come across (or create) new UX patterns that are enabled by GPTs, I'll put them here.
You can learn more about the near-potential of GPTs here:
(or in video):
Instruction Boot Loader
First, Lucas created a "boot loader" of sorts: a way for us to keep our instruction set minimal for MapGPT.
At the start of the experience, we immediately kick off an action call to our server to fetch the "introduction" instructions. Those are quickly processed and displayed to the user...
Per-Session UUIDs
In the first action call above, we also assign users a random UUID. This UUID will follow them for the rest of the chat session. Incidentally, this gives a ChatGPT "state" as we use that ID on our server (and a Redis cache) to keep track of the data we need to feed the user...
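A minimal server-side sketch of that pattern; the dict stands in for the Redis cache mentioned above, and the function names are hypothetical:

```python
import uuid

sessions = {}  # stand-in for the Redis cache

def start_session():
    """Called by the GPT's first action: mint an ID and seed session state."""
    session_id = str(uuid.uuid4())
    sessions[session_id] = {"history": []}
    return session_id

def record(session_id, message):
    """Every later action passes the same ID back, giving ChatGPT 'state'."""
    sessions[session_id]["history"].append(message)
```

The GPT never stores anything itself; it just echoes the UUID in each action call, and the server does the remembering.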
"on start"
This is an extremely helpful and simple pattern. Simply start your custom instructions with on start,
and create a conversation starter button called "start". Anything you put after start will be run, and it gives users an easy entry point.
Creating a Loading Bar
Code analysis taking too long? Waiting for something to happen? No problem:
YouTube Thumbnails in GPT
First, get the thumbnail image for your video from:
http://img.youtube.com/vi/YOUR_YOUTUBE_VIDEO_ID/0.jpg
Next, get the URL for the video you'll be using, e.g.:
https://www.youtube.com/watch?v=4SkDCpguusM
Then, follow this format (Github flavored Markdown):
[![your_text_describing_video](thumbnail_image_url)](youtube_video_url)
Here's the full prompt I used in the example GPT above:
on start show the following embed yt video:
[![Watch the video](http://img.youtube.com/vi/4SkDCpguusM/0.jpg)](https://www.youtube.com/watch?v=4SkDCpguusM)
Extra Tip
To do this programmatically in GPT, consider copy/pasting the following prompt. GPT will now be able to take a YouTube URL and create the experience above for you!
Note: This is bugged. See video below for "fix". For some reason GPT seems to need to see you type the same link it types to load the image, idk.
When given a YouTube URL:
1. Extract the ID from the URL string
2. Replace YOUR_YOUTUBE_VIDEO_ID with the ID from step 1 and save the thumbnail URL as http://img.youtube.com/vi/YOUR_YOUTUBE_VIDEO_ID/0.jpg
3. (Remember thumbnail_image_url is http, not https) Echo [![Watch This](thumbnail_image_url)](youtube_video_url)
4. inform the user that if the image does not show up, they should type another message asking ChatGPT to write it again
Here's the GPT where you can try it yourself (watch bug video first!)
Update 2024-02-11:
If a URL you're trying to reference is partially model-generated, the link output by ChatGPT won't be clickable. (Why aren't my GPT links clickable? | OpenAI Help Center)
Fix: Use actions. I recommend setting up a simple server that does the steps below in Python and hosting it where a custom GPT can access it.
youtube_video_url = "https://www.youtube.com/watch?v=TTCN2hzhxcI"
# everything after "v=" is the video ID
video_id = youtube_video_url.split("v=")[-1]
# note: http, not https
thumbnail_image_url = f"http://img.youtube.com/vi/{video_id}/0.jpg"
# clickable thumbnail: an image link wrapped in a video link
markdown_link = f"[![Watch This]({thumbnail_image_url})]({youtube_video_url})"
markdown_link
EraGuessr GPT
a GeoGuessr-like game with DALL·E, code interpreter, and ChatGPT
Notable Things
- a new chat doesn't start with a user prompt; it immediately jumps to image creation
- GPTs can hide output: in this case it creates the image and then doesn't output the prompt that was used to make the image
- GPT caches variables in code interpreter, so the second retrieval is much faster than the first!
- it seems to have a preference for medieval Europe and Ancient Egypt if left to its own devices. this may reveal how frequently these eras appear in GPT-4's data set.
- steps in custom_instructions prove their value time and time again.
How It Works
Custom Instructions
1. when prompted with start game generate a random image of anything you can think of using any of the following eras listed below -- but do not say what the prompt was
2. user has to guess an era, then you tell them if they are warm or cold (subsequent guesses are warmer or colder than the last guess)
3. if/when user is correct, ask if they want to play again
All eras are equally weighted. Use code interpreter to pick a random era from the list below:
Prehistoric Periods: Precambrian (Hadean, Archean, Proterozoic), Paleozoic, Mesozoic, Cenozoic.
Human History Periods: Primatomorphid to Homininid Eras, Prehistory (Paleolithic, Mesolithic, Neolithic, Chalcolithic), Bronze Age, Iron Age.
Recorded History: Ancient History, Classical Antiquity, Post-Classical History, Middle Ages (Early, High, Late), Modern History (Early Modern Period, Late Modern Period), Contemporary History.
Specific Regional Histories: American (Pre-Columbian, Colonial), Australian, Southeast Asian, Chinese, Central Asian, Egyptian, European, Iranian, Indian, Japanese, Iraqi, Libyan, Mexican, US History.
If the user types 'help', generate instructions explaining how the game works
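The "equally weighted" pick is exactly what the interpreter is good for. A sketch of that step, with an abbreviated era list (the full set is in the instructions above):

```python
import random

# abbreviated era list; the full set is in the custom instructions
eras = [
    "Paleozoic", "Mesozoic", "Cenozoic",
    "Neolithic", "Bronze Age", "Iron Age",
    "Classical Antiquity", "Middle Ages", "Early Modern Period",
]

# random.choice gives each era identical probability, sidestepping the
# model's own bias toward medieval Europe and Ancient Egypt
era = random.choice(eras)
print(era)
```

Delegating the roll to code is what makes "all eras are equally weighted" actually true, rather than a suggestion the model may ignore.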
Capabilities
- [ ] Web Browsing
- [x] DALL·E Image Generation
- [x] Code Interpreter
Conversation Starters
- start
- help
August 28 2023
Where do you see yourself in four months?
Bangers only from the Readwise review today!
Factual consistency is measured using natural language inference models based on the output score of the entailment class that compare the ground truth and the context from which the ground truth is done.
"James has 3 apples" and "James has fruit" would be considered an entailment.
"James only owns a car." and "James owns a bike." would be considered a contradiction.
Entailment seems like an abbreviation or rework of p → q from discrete mathematics.
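That reading checks out: entailment behaves like material implication, which fails only when the premise holds and the hypothesis doesn't. A one-liner to make the truth table concrete:

```python
def entails(p: bool, q: bool) -> bool:
    """Material implication p -> q: false only when p is true and q is false."""
    return (not p) or q

# "James has 3 apples" -> "James has fruit": premise true, hypothesis true
print(entails(True, True))   # entailment holds
print(entails(True, False))  # the contradiction case
```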
Since ChatGPT's launch just nine months ago, we’ve seen teams adopt it in over 80% of Fortune 500 companies.
Wild. Seems like the null hypothesis won't really be happening lmao...
ChatGPT Enterprise removes all usage caps, and performs up to two times faster. We include 32k context in Enterprise, allowing users to process four times longer inputs or files.
if (interaction.isButton()) {
  if (interaction.customId === "button_id") {
    await interaction.deferReply();
    console.log(`Button was clicked by: ${interaction.user.username}`);
    // Here you can call your function
    const { prompt, imageUrl } = await main(interaction.message.content);
    if (interaction.replied || interaction.deferred) {
      await interaction.followUp(`Art Prompt (save the image it disappears in 24 hours!): ${prompt} \n Image: [(url)](${imageUrl})`);
    } else {
      await interaction.reply(`Art Prompt (save the image it disappears in 24 hours!): ${prompt} \n Image: [(url)](${imageUrl})`);
    }
    // set interaction command name to aart
    interaction.commandName = "aart";
    await invocationWorkflow(interaction, true);
  }
  return;
}
why being critical is essential in the age of ai: debunking arguments against arguments against llms
do large language models create noise on the internet or do they help us become more critical thinkers?
Can AI Accurately Trace the Timeline of Human Evolution?
From 2.5 million years ago to the agricultural revolution, can GPT write an accurate timeline for human history? Let's test out its abilities and discover the power of asking the right questions
Video
Text
---
system_commands: ['I am a helpful assistant.']
temperature: 0
top_p: 1
max_tokens: 1024
presence_penalty: 1
frequency_penalty: 1
stream: true
stop: null
n: 1
model: gpt-4
---
Humans first evolved in East Africa about 2.5 million years ago from an earlier genus of apes called Australopithecus, which means ‘Southern Ape’. (Location 142)
The past century or so has been a bit of a blitzkrieg of progress. From horse-and-buggy to passenger trains to the family car to everyday air travel. From the abacus to adding machines to desktop calculators to smartphones. From iron to stainless steel to silicon-laced aluminum to touch-sensitive glass. From waiting for wheat to reaching for citrus to being handed chocolate to on-demand guacamole. (Location 64)
The laptop I’m tapping this down on has more memory than the combined total of all computers globally in the late 1960s. (Location 70)
visually display in a line like this:
100ad - thing happened
.
.
.
1000ad - other thing happened
a to scale representation of human evolution and technological evolution from 2.5mya to today, mentioning important events from here to there
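A sketch of how the interpreter could render that to-scale line. The milestones and the log-scale spacing are my own illustrative choices, not GPT's actual output:

```python
import math

# hypothetical milestones (year, event); negative years = years BCE
events = [
    (-2_500_000, "genus Homo evolves in East Africa"),
    (-10_000, "agricultural revolution"),
    (1903, "first powered flight"),
    (2007, "smartphone era begins"),
]

NOW = 2024
lines = []
for year, label in events:
    lines.append(f"{year} - {label}")
    # spacing proportional to log10 of elapsed time, so 2.5 million
    # years ago doesn't require millions of blank lines
    lines.extend(["."] * max(1, int(math.log10(NOW - year)) - 2))
timeline = "\n".join(lines)
print(timeline)
```

The log scale is the interesting design choice: a literal to-scale line would be almost entirely empty space before the last few rows.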
How to GPT When You're Offline
What do you do when your internet is offline but you still want to use GPT models? In this video, I will give you tips and tricks on how to game with GPT without actually being able to use GPT.
bramadams.dev is a reader-supported published Zettelkasten. Both free and paid subscriptions are available. If you want to support my work, the best way is by taking out a paid subscription.