programming
How GPT Helped Me Write an E2E App From Bed (YT Assets From Transcript)
What challenges can GPT overcome when combined with human laziness?
i set out this morning (from my bed) to create a server that handles a workflow i find tedious (i'm pretty lazy) when uploading to yt: coming up with a title, description, and a thumbnail. i had gpt do the lion's share of the work, prodding it when something didn't work the way i expected.
for the ai art prompt i just throw that shit into dalle then take my fav result and put it in canva lol.
lessons learned:
- the 25MB whisper api limit is ~10 mins in mp3
- exporting a range is muchhh cheaper data wise than an entire video (see pics below)
- that multer is a thing
- you can upload multipart forms in insomnia!
- the titles and descriptions are a leaping off point for creativity and personal style
- the ai art prompt can be copy pasted directly into dalle lol
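The "~10 mins" lesson is bitrate-dependent; here's a quick back-of-the-envelope check (my own sketch, assuming a constant-bitrate mp3 and treating 25MB as 25 * 1024 * 1024 bytes):

```javascript
// Rough max duration that fits under the Whisper API's 25 MB upload cap.
function maxMinutesUnderLimit(bitrateKbps, limitMB = 25) {
  const limitBits = limitMB * 1024 * 1024 * 8; // cap in bits
  const seconds = limitBits / (bitrateKbps * 1000); // kbps -> bits per second
  return seconds / 60;
}

console.log(maxMinutesUnderLimit(320).toFixed(1)); // "10.9" -- the ~10 min figure
console.log(maxMinutesUnderLimit(128).toFixed(1)); // "27.3" -- lower bitrates buy more time
```

So the ~10 minute ceiling lines up with a 320 kbps export; dropping the export bitrate stretches the budget considerably.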
you can find the repo here! GitHub - bramses/youtube-assets-from-video
---
system_commands: ['I am a generalist that is not afraid to make radical associations between distinct topics by using creative leaps.']
temperature: 0.0
top_p: 1
max_tokens: 2000
presence_penalty: 1
frequency_penalty: 1
stream: true
stop: null
n: 1
model: gpt-3.5-turbo
---
write an express js server only in method stubs and js docs that:
- allows user to upload a mp4 or mov file
- calls the openai whisper api with file (see # transcription api for request details)
- gets the transcript from response.text
- uses the chatgpt api with the template to create youtube assets (see # asset template)
Transcription API
const { Configuration, OpenAIApi } = require("openai");
const configuration = new Configuration({
apiKey: process.env.OPENAI_API_KEY,
});
const openai = new OpenAIApi(configuration);
const resp = await openai.createTranscription(
fs.createReadStream("audio.mp3"),
"whisper-1"
);
Asset Template
can you write:
- an eye catching youtube title for this transcript (focus mainly on the code)
- a youtube description (phrased as a question to get people to want to read more)
- a potential ai art prompt from this transcript - thematically related and should be slightly abstract and use a random lesser known art period from history with a color palette that fits the description and title
GPT:
/**
* POST /upload
* Allows user to upload a mp4 or mov file and creates YouTube assets using OpenAI APIs.
*
* @param {Object} req - The request object containing the uploaded file.
* @param {Object} res - The response object with the created YouTube assets.
*/
const uploadFile = async (req, res) => {
// TODO: Implement file upload logic
try {
// Call OpenAI Whisper API for transcription
const fs = require("fs");
const { Configuration, OpenAIApi } = require("openai");
const configuration = new Configuration({
apiKey: process.env.OPENAI_API_KEY,
});
const openai = new OpenAIApi(configuration);
const resp = await openai.createTranscription(
fs.createReadStream("audio.mp3"),
"whisper-1"
);
// Get transcript from response text (the v3 SDK wraps the payload in .data)
const transcript = resp.data.text;
// Call ChatGPT API with template for creating YouTube assets
const chatgptResponse = await openai.createCompletion({
model: "text-davinci-002",
prompt: `Can you write an eye-catching title and description for this code-related video?\n\nTranscript:\n${transcript}\n\nTitle:\nDescription as question:\nPotential AI art prompt from this transcript (using a random lesser-known art period):\n`,
max_tokens: 1000,
n: 1,
temperature: 0.5,
});
// Extract generated title, description, and AI art prompt from response choices array
const [generatedText] = chatgptResponse.data.choices.map((choice) => choice.text.trim());
const [title, descriptionQuestion, aiArtPrompt] =
generatedText.split("\n").map((line) => line.slice(line.indexOf(":") + 1).trim());
// Send back created YouTube assets
res.status(200).json({
title,
description: `${descriptionQuestion} \n\nTranscript:\n${transcript}`,
aiArtPrompt,
});
} catch (error) {
console.error(error);
res.status(500).send("Error creating YouTube assets");
}
};
Bram:
can you write the js that calls this fn and starts the server
GPT:
Sure! Here's an example of how you can call the `uploadFile` function and start the server using Express.js:
const express = require("express");
const app = express();
// Define route for file upload
app.post("/upload", uploadFile);
// Start server on port 3000
app.listen(3000, () => {
console.log("Server started on port 3000");
});
Note that you'll need to import the `uploadFile` function from wherever it is defined in your code.
Bram:
can you install dotenv and include it in the above code
GPT:
Sure! Here's an updated version of the code that includes `dotenv`:
const express = require("express");
const app = express();
require("dotenv").config();
// Define route for file upload
app.post("/upload", uploadFile);
// Start server on port 3000
app.listen(3000, () => {
console.log("Server started on port 3000");
});
Make sure to install `dotenv` by running `npm install dotenv`.
Bram:
can you write the "Implement file upload logic" part
GPT:
Sure! Here's an example implementation of the file upload logic using `multer` middleware:
const multer = require("multer");
// Set up Multer storage engine for uploaded files
const storage = multer.diskStorage({
destination: function (req, file, cb) {
cb(null, "uploads/");
},
filename: function (req, file, cb) {
const uniqueSuffix =
Date.now() + "-" + Math.round(Math.random() * 1e9) + ".mp3";
cb(null, file.fieldname + "-" + uniqueSuffix);
},
});
// Create Multer instance with specified storage engine and accepted MIME types
const upload = multer({
storage,
// multer expects fileSize in bytes (a number), not a string like "25MB"
limits: { fileSize: Number(process.env.MAX_FILE_SIZE) || 25 * 1024 * 1024 },
}).single("file");
/**
* POST /upload
* Allows user to upload a mp4 or mov file and creates YouTube assets using OpenAI APIs.
*
* @param {Object} req - The request object containing the uploaded file.
* @param {Object} res - The response object with the created YouTube assets.
*/
const uploadFile = async (req, res) => {
// Call Multer middleware to handle uploaded file
upload(req,res,function(err){
if(err instanceof multer.MulterError){
return res.status(400).json({message:"File too large"});
}else if(err){
return res.status(500).json({message:"Error uploading file"});
}
console.log(`Uploaded ${req.file.filename}`);
try {
// Call OpenAI Whisper API for transcription
const fs = require("fs");
const { Configuration, OpenAIApi } = require("openai");
const configuration = new Configuration({
apiKey: process.env.OPENAI_API_KEY,
});
const openai = new OpenAIApi(configuration);
const resp = await openai.createTranscription(
fs.createReadStream(req.file.path),
"whisper-1"
);
// Get transcript from response text (the v3 SDK wraps the payload in .data)
const transcript = resp.data.text;
// Call ChatGPT API with template for creating YouTube assets
const chatgptResponse = await openai.createCompletion({
model: "text-davinci-002",
prompt: `Can you write an eye-catching title and description for this code-related video?\n\nTranscript:\n${transcript}\n\nTitle:\nDescription as question:\nPotential AI art prompt from this transcript (using a random lesser-known art period):\n`,
max_tokens: 1000,
n: 1,
temperature: 0.5,
});
// Extract generated title, description, and AI art prompt from response choices array
const [generatedText] = chatgptResponse.data.choices.map((choice) => choice.text.trim());
const [title, descriptionQuestion, aiArtPrompt] =
generatedText.split("\n").map((line) => line.slice(line.indexOf(":") + 1).trim());
// Send back created YouTube assets
res.status(200).json({
title,
description: `${descriptionQuestion} \n\nTranscript:\n${transcript}`,
aiArtPrompt,
});
} catch (error) {
console.error(error);
res.status(500).send("Error creating YouTube assets");
}
})
};
This implementation uses the `multer` middleware to handle file uploads. The uploaded file is stored in the `uploads/` directory using a unique filename generated by combining the original fieldname with a timestamp and random number suffix. The Multer instance is configured to accept only one file at a time with a maximum size of either the value specified in the `.env` file or 25MB by default. The `uploadFile` function calls the Multer middleware to handle the uploaded file, then passes the resulting `req` object to the OpenAI APIs for transcription and YouTube asset creation.
Bram:
how would i make a curl post request to this?
GPT:
You can make a `curl` POST request to this server by running the following command in your terminal:
curl -X POST -F 'file=@/path/to/file.mp3' http://localhost:3000/upload
Replace `/path/to/file.mp3` with the actual path to your audio file.
This `curl` command sends a multipart/form-data POST request to the `/upload` endpoint of your Express.js server, with the uploaded file attached as a form field named "file". The `-F` option specifies that we're sending a form field, and the `@` symbol before the file path indicates that we want to attach a file.
Bram:
getting a `no such file or directory` error from the multer part below:
const uniqueSuffix =
Date.now() + "-" + Math.round(Math.random() * 1e9) + ".mp3";
GPT:
The error message "no such file or directory" usually indicates that the specified directory does not exist. In this case, it's likely that the `uploads/` directory has not been created yet.
To fix this issue, you can create the `uploads/` directory manually in your project root folder by running:
mkdir uploads
Alternatively, you can modify the `destination` property of the Multer storage engine to specify a different path where you want to store uploaded files. For example:
const storage = multer.diskStorage({
destination: function (req, file, cb) {
cb(null, "./public/uploads/");
},
filename: function (req, file, cb) {
const uniqueSuffix =
Date.now() + "-" + Math.round(Math.random() * 1e9) + ".mp3";
cb(null, file.fieldname + "-" + uniqueSuffix);
},
});
This will create an `uploads/` subdirectory inside a new `public/` folder in your project root. Make sure to also update any references to the upload directory elsewhere in your code if you change its location.
bramadams.dev is a reader-supported published Zettelkasten. Both free and paid subscriptions are available. If you want to support my work, the best way is by taking out a paid subscription.
Writing OpenAPI with... OpenAI
How can developers quickly and easily convert TypeScript handler functions into OpenAPI specs?
With the announcement of ChatGPT Plugins using the OpenAPI spec, developers the world over simultaneously thought: "damn, maybe I should've made those Swagger docs after all, lmao".
For NextJS users (at least), I wrote a program that automatically introspects the `/api` folder, finds the endpoints within each handler method, and uses GPT to infer an OpenAPI spec. Meta!!
How does this work?
First, we define two helper functions: `getApiRoutes`, which uses glob to find all TypeScript files in the `pages/api` directory, and `getHandlerFunction`, which extracts the handler function from each file using Babel to parse the code and returns an array of `{name, content}` objects, with the name being the filename and the content being a string slice containing only the handler function code.
Then, we define a function called `inferOpenApiSpec`, which calls both previously defined helper functions to obtain an array of handlers with their respective contents, then loops over each item in this array. For each handler function found, it sends a message to GPT-3.5 along with some preamble text describing how to convert TS functions into JSDoc YAML. Once it receives a response, it writes the resulting contents into `./openapi/{handlerFunction.name}.js`.
/*
1. get the handler function from files
2. use gpt to infer the openapi yaml from the spec
3. return the result as a string
*/
import { glob } from "glob";
import { readFileWrapper, writeFileWrapper } from "./helpers";
import { loadEnvConfig } from "@next/env";
loadEnvConfig(process.cwd());
import { Configuration, OpenAIApi } from "openai";
const configuration = new Configuration({
organization: "…",
apiKey: process.env.OPENAI_API_KEY,
});
const openai = new OpenAIApi(configuration);
const SKIP_FILES = []; // files to skip from api folder
import * as babel from "@babel/core";
// needs to be required instead of imported to work
const parser = require("@babel/parser");
const getApiRoutes = async () => {
const apiRoutes = await glob("pages/api/**/*.ts");
return apiRoutes;
};
const getHandlerFunction = async (apiRoutes: string[]) => {
const handlerFunctions = [];
for (const apiRoute of apiRoutes) {
const file = await readFileWrapper(apiRoute);
const ast = parser.parse(file, {
sourceType: "module",
plugins: ["typescript"],
});
babel.traverse(ast, {
ExportDefaultDeclaration(path) {
const declaration = path.node.declaration;
if (
declaration.type === "FunctionDeclaration" &&
declaration.id.name === "handler"
) {
handlerFunctions.push({ content: file.slice(declaration.start, declaration.end), name: apiRoute.slice(10, -3) });
}
},
});
}
return handlerFunctions.filter((handlerFunction) => !SKIP_FILES.includes(handlerFunction.name));
};
const inferOpenApiSpec = async () => {
const apiRoutes = await getApiRoutes();
const handlerFunctions = await getHandlerFunction(apiRoutes);
for (const handlerFunction of handlerFunctions) {
console.log(`Reading ${handlerFunction.name}`);
const completion = await openai.createChatCompletion({
model: "gpt-3.5-turbo",
messages: [
{
role: "system",
content:
"I am a bot that converts handler functions into OpenAPI specs (the path only). I only return YAML in JSDoc format.",
},
{
role: "user",
content:
`I want to convert this handler function with path /api/${handlerFunction.name} into an OpenAPI spec: ${handlerFunction.content}`
},
],
});
const res = completion.data.choices[0].message.content;
const openApiSpec = res.slice(res.indexOf("```yaml") + 7, res.indexOf("```", res.indexOf("```yaml") + 7));
let extractedJSDoc = openApiSpec.slice(openApiSpec.indexOf("/**") + 3, openApiSpec.indexOf("*/"))
.replace("openapi", "swagger")
extractedJSDoc = "/**\n" + extractedJSDoc + "\n*/\n";
console.log("Full response:");
console.log(res)
console.log(`Writing ${handlerFunction.name}`);
await writeFileWrapper(`./openapi/${handlerFunction.name}.js`, extractedJSDoc);
}
};
inferOpenApiSpec();
A Chat with GPT-4 : A Tangle With Ternary
What did GPT-4 teach me about using the ternary operator in JavaScript?
I had a sneaky bug in my JS logic trying to take advantage of the `undefined || fallback` trick. What I failed to consider was that with `|| true` as the final fallback, the statement evaluates to true whenever the earlier operands are falsy, including when they're explicitly false. Luckily, GPT-4 straightened me out!
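The bug in miniature (a standalone sketch, not the actual plugin code):

```javascript
const settings = { stream: false };

// Buggy: with `||`, an explicit `false` is skipped just like undefined,
// so the chain falls through to the final `true`.
const buggy = undefined || settings.stream || true;

// Fixed: `??` only falls through on null/undefined, so `false` survives.
const fixed = undefined ?? settings.stream ?? true;

console.log(buggy); // true  -- the sneaky bug
console.log(fixed); // false -- respects the explicit setting
```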
---
system_commands: ['I am a generalist that is not afraid to make radical associations between distinct topics by using creative leaps. I write in short, quippy sentences that has a conversational tone.']
temperature: 0.0
top_p: 1
max_tokens: 512
presence_penalty: 1
frequency_penalty: 1
stream: true
stop: null
n: 1
model: gpt-4
---
what happens to stream: metaMatter?.stream || this.settings.stream || true
if settings.stream is false
GPT:
If `settings.stream` is false, then the conditional statement `metaMatter?.stream || this.settings.stream || true` will evaluate to `true`. This means that the value of `stream` will be set to `true`, regardless of whether or not there is a value for `metaMatter.stream`.
Bram:
what if i change the last or to and
GPT:
If you change the last `||` to `&&`, then the value of `stream` will be set to `false`. This is because the conditional statement `metaMatter?.stream || this.settings.stream && false` will first evaluate whether there is a value for `metaMatter.stream`. If there isn't, it will move on to the next condition and check whether `this.settings.stream` is true or false. Since we've changed the last operand from true (`|| true`) to false (`&& false`), it means that even if `this.settings.stream` is true, it won't matter as long as there's no value for `metaMatter?.stream`, and hence, stream would be set to false.
Bram:
if metaMatter?.stream is undefined what happens
GPT:
If `metaMatter?.stream` is undefined, the conditional statement `metaMatter?.stream || this.settings.stream || true` will move on to evaluate the next condition, which is `this.settings.stream`. If `this.settings.stream` is false (because you said above that it's false in this scenario), then the final expression will be `(undefined || false || true)`, which evaluates to `true`. As a result, regardless of whether or not there's a value for `metaMatter?.stream`, stream would still be set to true.
Bram:
can you write a ternary using the following rules (lower number == higher priority):
- stream can be undefined in frontmatter or be in file and be true or false
- if stream is undefined fallback on global settings toggle of stream value
- if toggle is undefined fallback on true
GPT:
Sure, here is a ternary that satisfies the rules you've mentioned:
const stream = metaMatter?.stream !== undefined
? metaMatter.stream // If defined in frontmatter, use its value.
: this.settings.stream !== undefined
? this.settings.stream // If not defined in frontmatter but exists globally, use its value.
: true; // Otherwise fallback on true.
The above ternary checks if `metaMatter?.stream` is not equal to `undefined`. If it's not, it uses the value of `metaMatter.stream`. But if it is undefined, then we check whether there's a value for `this.settings.stream`. If there is one and it's either true or false (not undefined), we'll use that. Finally, if all other conditions fail (i.e., both `metaMatter?.stream` and the global toggle are undefined), then we set stream to true as per rule #3.
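Since the ternary only ever tests for `undefined`, the same priority chain can be written with nullish coalescing (a sketch; one caveat is that `??` also falls through on `null`, which the ternary above technically does not):

```javascript
// Same fallback order as the nested ternary:
// frontmatter value -> global settings toggle -> true.
const resolveStream = (metaMatter, settings) =>
  metaMatter?.stream ?? settings.stream ?? true;

console.log(resolveStream({ stream: false }, { stream: true })); // false (frontmatter wins)
console.log(resolveStream({}, { stream: false }));               // false (global toggle)
console.log(resolveStream({}, {}));                              // true  (final fallback)
```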
Parsing and Removing Frontmatter in Obsidian Plugins
How can Obsidian plugin developers process the body of a note without its metadata?
In this code block, we will learn how to remove frontmatter from a message and parse it using two different functions. These functions are useful for Obsidian plugin developers who need to process the body of a note without its metadata.
The first function, `removeYMLFromMessage`, removes the YAML front matter from a given string by replacing it with an empty string. The second function, `getFrontmatter`, uses the gray-matter library to extract the front matter object from a MarkdownView instance in Obsidian. Let's dive into each of these functions:
Removing Front Matter
removeYMLFromMessage(message: string) {
try {
const YAMLFrontMatter = /---\s*[\s\S]*?\s*---/g;
const newMessage = message.replace(YAMLFrontMatter, "");
return newMessage;
} catch (err) {
throw new Error("Error removing YML from message" + err);
}
}
This function takes in a `message` parameter, which is expected to be a string containing YAML front matter at the beginning. It creates a regular expression pattern that matches any text between two sets of three dashes (`---`), stored in `YAMLFrontMatter`. Finally, it replaces all instances of this pattern with an empty string using JavaScript's built-in replace method.
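Running that regex on a hypothetical note makes the behavior concrete:

```javascript
const YAMLFrontMatter = /---\s*[\s\S]*?\s*---/g;

const note = `---
title: My Note
tags: [chat]
---
The actual body of the note.`;

// The lazy quantifier stops at the first closing ---,
// so only the frontmatter block is removed.
const body = note.replace(YAMLFrontMatter, "").trim();
console.log(body); // "The actual body of the note."
```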
Parsing Front Matter
import matter from "gray-matter";
getFrontmatter(view: MarkdownView): Chat_MD_FrontMatter {
try {
const metaMatter =
app.metadataCache.getFileCache(noteFile)?.frontmatter;
const data = matter(view.getViewData());
const frontmatter = {
title: metaMatter?.title || view.file.basename,
...,
...data.data
};
return frontmatter;
} catch (err) {
throw new Error("Error getting frontmatter");
}
}
This function takes in an instance of `MarkdownView`, which represents the currently active view within Obsidian's editor window. It then retrieves information about the current file being viewed through Obsidian's API and extracts its cached metadata using `.getFileCache()`.
Next, it uses the gray-matter library to extract both the content and the metadata as separate properties on a returned object called `data`. We then create our own custom object called `frontmatter`, where we can add additional properties or override existing ones if needed. Finally, this parsed data is returned as an object containing various fields such as title, tags, etc., which can be used by plugins for further processing.
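For illustration, here's a dependency-free sketch of the `{ data, content }` shape that gray-matter hands back. It only handles flat `key: value` pairs, so it's a toy; real plugins should keep using the gray-matter package:

```javascript
// Toy approximation of gray-matter: split frontmatter from body.
function parseFrontmatter(raw) {
  const match = raw.match(/^---\n([\s\S]*?)\n---\n?/);
  if (!match) return { data: {}, content: raw };
  const data = {};
  for (const line of match[1].split("\n")) {
    const idx = line.indexOf(":");
    if (idx > -1) data[line.slice(0, idx).trim()] = line.slice(idx + 1).trim();
  }
  return { data, content: raw.slice(match[0].length) };
}

const { data, content } = parseFrontmatter("---\ntitle: Demo\n---\nHello world");
console.log(data.title); // "Demo"
console.log(content);    // "Hello world"
```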
Explanation of the `callOpenAIAPI` Function
What is the purpose of the `callOpenAIAPI` function and how does it stream text into the Obsidian editor?
The following code block is a TypeScript function that calls the OpenAI API to generate text based on given parameters.
Motivation
Calling OpenAI's API from Obsidian requires a lot of forethought on what variables to include or exclude.
This particular tutorial also highlights streaming: OpenAI passes the response back as an entire array, and timers are artificially added to show the user the text arriving a character at a time.[1] Finally, these characters are written into the Obsidian editor in real time.
Tutorial
Let's go through each parameter and what it does:
- `editor`: an object representing the editor where the generated text will be displayed.
- `messages`: an array of objects containing a role (e.g. "user" or "assistant") and content (the message itself) for each message in the conversation leading up to generating this response.
- `model`: which language model to use; defaults to `"gpt-3.5-turbo"`.
- `max_tokens`: maximum number of tokens (roughly word pieces) in the generated response; defaults to 250.
- `temperature`, `top_p`, `presence_penalty`, and `frequency_penalty`: parameters used by OpenAI's language models for generating more diverse or focused responses; they all have default values but can be adjusted as needed.
- `stream`: whether or not to stream back partial results as they become available; if true, this function returns `"streaming"` instead of waiting for a full response from OpenAI before returning.
- `stop`: an optional array of strings that will cause generation to stop once one is encountered in the output; useful for ending conversations gracefully.
- The remaining parameters (`n`, `logit_bias`, and `user`) are currently unsupported by ChatGPT MD.
The function uses the request library internally with async/await syntax. If streaming is enabled, it splits the response into individual lines using `\n\n` as the delimiter, removes the `"data: "` prefix at the beginning of each line, parses the JSON on each line, extracts the delta content from the `choices` field of the returned object, and appends that delta to the editor instance passed into the method call, respecting the typing-speed settings set elsewhere in the codebase.
If streaming is disabled, it simply waits until the entire response has been received before parsing the JSON out of the stringified HTTP body returned by the request call. If there's an error during the API call, it throws an error with details logged to the console, along with displaying a notice about the issue calling the OpenAI API.
async callOpenAIAPI(
editor: Editor,
messages: { role: string; content: string }[],
model = "gpt-3.5-turbo",
max_tokens = 250,
temperature = 0.3,
top_p = 1,
presence_penalty = 0.5,
frequency_penalty = 0.5,
stream = true,
stop: string[] | null = null,
n = 1,
logit_bias: any | null = null,
user: string | null = null
) {
try {
console.log("calling openai api");
const response = await request({
url: `https://api.openai.com/v1/chat/completions`,
method: "POST",
headers: {
Authorization: `Bearer ${this.settings.apiKey}`,
"Content-Type": "application/json",
},
contentType: "application/json",
body: JSON.stringify({
model: model,
messages: messages,
max_tokens: max_tokens,
temperature: temperature,
top_p: top_p,
presence_penalty: presence_penalty,
frequency_penalty: frequency_penalty,
stream: stream,
stop: stop,
n: n,
// logit_bias: logit_bias, // not yet supported
// user: user, // not yet supported
}),
});
if (stream) {
// split response by new line
const responseLines = response.split("\n\n");
// remove data: from each line
for (let i = 0; i < responseLines.length; i++) {
responseLines[i] = responseLines[i].split("data: ")[1];
}
const newLine = ``; // hr for assistant
editor.replaceRange(newLine, editor.getCursor());
// move cursor to end of line
const cursor = editor.getCursor();
const newCursor = {
line: cursor.line,
ch: cursor.ch + newLine.length,
};
editor.setCursor(newCursor);
let fullstr = "";
// loop through response lines
for (const responseLine of responseLines) {
// if response line is not [DONE] then parse json and append delta to file
if (responseLine && !responseLine.includes("[DONE]")) {
const responseJSON = JSON.parse(responseLine);
const delta = responseJSON.choices[0].delta.content;
// if delta is not undefined then append delta to file
if (delta) {
const cursor = editor.getCursor();
if (delta === "`") {
editor.replaceRange(delta, cursor);
await new Promise((r) => setTimeout(r, 82)); // what in the actual fuck -- why does this work lol
} else {
editor.replaceRange(delta, cursor);
await new Promise((r) =>
setTimeout(r, this.settings.streamSpeed)
);
}
const newCursor = {
line: cursor.line,
ch: cursor.ch + delta.length,
};
editor.setCursor(newCursor);
fullstr += delta;
}
}
}
console.log(fullstr);
return "streaming";
} else {
const responseJSON = JSON.parse(response);
return responseJSON.choices[0].message.content;
}
} catch (err) {
new Notice(
"issue calling OpenAI API, see console for more details"
);
throw new Error(
"issue calling OpenAI API, see error for more details: " + err
);
}
}
[1] Take note of the frustration I had as a dev with the special case of backticks. They need to run on their own time, for some reason I still do not know!
Explanation of the ChatTemplates Class
What is the ChatTemplates class and how does it help users create new chat templates in Obsidian?
The following code block defines a TypeScript class called `ChatTemplates` that helps users create new chat templates in Obsidian. It extends the built-in `SuggestModal` class, which provides an interface for displaying and selecting suggestions based on user input.
Motivation
To offer insight from an Obsidian plugin developer using the Obsidian API, which is pretty poorly documented/hard to parse.
Constructor Method
The constructor is like the "setup" function when you're building something new. It takes two things: an instance of the main app (`App`) and some settings for chat templates (`ChatGPT_MDSettings`). When we create a new instance of this class, we pass these things into it so that it can use them later on.
getFilesInChatFolder Method
This method gets all files from a specific folder path (which is stored as part of our settings). We only want to show chat templates to users, so we filter out any files that aren't in this folder.
getSuggestions Method
When someone types into the search box while using our plugin, this method will be called with whatever they typed as its argument. This method looks at all available chat template files and returns only those whose names match what was typed by the user (case-insensitive).
renderSuggestion Method
Once we have filtered down our list of possible matches based on what was typed by the user, each remaining suggestion needs to be displayed visually somehow. This method creates HTML elements representing each suggestion item one-by-one and adds them to the UI.
onChooseSuggestion Method
Finally, when someone clicks or selects one of these suggestions from our list, this function will run! First it displays a message saying which template was selected, using Obsidian's built-in Notice API. Then it reads the contents of the selected template file using the Vault API's `.read()`, creates a new markdown file using the Vault API's `.create()`, writes the contents read earlier into the newly created file, and opens the new note using the Workspace API's `.openLinkText()`.
export class ChatTemplates extends SuggestModal<ChatTemplate> {
settings: ChatGPT_MDSettings;
constructor(app: App, settings: ChatGPT_MDSettings) {
super(app);
this.settings = settings;
}
getFilesInChatFolder(): TFile[] {
return this.app.vault
.getFiles()
.filter(
(file) => file.parent.path === this.settings.chatTemplateFolder
);
}
// Returns all available suggestions.
getSuggestions(query: string): ChatTemplate[] {
const chatTemplateFiles = this.getFilesInChatFolder();
if (query == "") {
return chatTemplateFiles.map((file) => {
return {
title: file.basename,
file: file,
};
});
}
return chatTemplateFiles
.filter((file) => {
return file.basename.toLowerCase().includes(query.toLowerCase());
})
.map((file) => {
return {
title: file.basename,
file: file,
};
});
}
// Renders each suggestion item.
renderSuggestion(template: ChatTemplate, el: HTMLElement) {
el.createEl("div", { text: template.title });
}
// Perform action on the selected suggestion.
async onChooseSuggestion(
template: ChatTemplate,
evt: MouseEvent | KeyboardEvent
) {
new Notice(`Selected ${template.title}`);
const templateText = await this.app.vault.read(template.file);
// use template text to create new file in chat folder
const file = await this.app.vault.create(
`${this.settings.chatFolder}/${getDate()}.md`,
templateText
);
// open new file
this.app.workspace.openLinkText(file.basename, "", true);
}
}
ChatGPT ChatML Wrapper
What if there was a converter to help transition to ChatGPT's API with existing prompts?
With the announcement of ChatGPT’s API, many people will be looking to move to the new model with their old prompts. I’ve written two libraries that facilitate the transition for people who want to use turbo but only want to send one prompt along as usual.
Python: GitHub - bramses/gpt-to-chatgpt-py: Convert a regular GPT call into a ChatGPT call
Example:
import openai
import os
from dotenv import load_dotenv
from gpt_to_chatgpt import toChatML, get_message
load_dotenv()
openai.api_key = os.getenv("OPENAI_API_KEY")
res = openai.ChatCompletion.create(
model="gpt-3.5-turbo",
messages=toChatML("this is a test"),
)
print(get_message(res))
# As an AI language model, I don't really take tests, but I'm always ready to respond to your prompts and queries. How can I assist you today?
Typescript/JS: GitHub - bramses/gpt-to-chatgpt-ts: Convert GPT Completion call to a ChatGPT call
Example:
const { Configuration, OpenAIApi } = require("openai");
const { toChatML, get_message } = require("gpt-to-chatgpt")
require("dotenv").config();
const configuration = new Configuration({
apiKey: process.env.OPENAI_API_KEY,
});
const openai = new OpenAIApi(configuration);
openai.createChatCompletion({
model: "gpt-3.5-turbo",
messages: toChatML('this is a test')
}).then((data) => {
console.log((get_message(data.data)));
});
// As an AI language model, I cannot provide a specific answer to the prompt, as it is too broad. However, I confirm that this is a test.
Enjoy!