
Explanation of the `callOpenAIAPI` Function

What is the purpose of the `callOpenAIAPI` function and how does it stream text into the Obsidian editor?

The following code block is a TypeScript function that calls the OpenAI API to generate text based on given parameters.

Motivation

Calling OpenAI's API from Obsidian requires a fair amount of forethought about which request parameters to include or exclude.

This particular tutorial also highlights streaming, which OpenAI passes back as an entire array of chunks; timers are then artificially added to show the user the text coming in a character at a time.[1] Finally, these characters are written into the Obsidian editor in real time.

Tutorial

Let's go through each parameter and what it does (an example call follows the list):

  • editor: an object representing the editor where the generated text will be displayed.
  • messages: an array of objects containing a role (e.g. "user" or "assistant") and content (the message itself) for each message in the conversation leading up to generating this response.
  • model: which language model to use; defaults to "gpt-3.5-turbo".
  • max_tokens: the maximum number of tokens (roughly word fragments, not whole words) in the generated response; defaults to 250.
  • temperature, top_p, presence_penalty, and frequency_penalty are all parameters used by OpenAI's language models for generating more diverse or focused responses; they all have default values but can be adjusted as needed.
  • stream: whether or not to stream back partial results as they become available; if true, then this function returns "streaming" instead of waiting for a full response from OpenAI before returning.
  • stop: an optional array of strings that will cause generation to stop once one is encountered in the output; useful for ending conversations gracefully.
  • The remaining parameters (n, logit_bias, and user) are currently unsupported by ChatGPT MD.
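
To make the signature concrete, here is a hypothetical call site inside the plugin class. Everything other than `callOpenAIAPI` itself (the command id, the messages, and so on) is an illustrative assumption, not code from ChatGPT MD.

```typescript
// Hypothetical call site inside the plugin class (e.g. registered in onload).
// Only callOpenAIAPI is real; the command and messages are made up.
this.addCommand({
	id: "example-call-openai",
	name: "Example: send note context to OpenAI",
	editorCallback: async (editor: Editor) => {
		const messages = [
			{ role: "system", content: "You are a helpful assistant." },
			{ role: "user", content: "Summarize this note in two sentences." },
		];

		// With stream = true (the default), the reply is typed into the editor
		// and the function returns the literal string "streaming"; with
		// stream = false it returns the assistant's full reply instead.
		const result = await this.callOpenAIAPI(editor, messages);
		if (result !== "streaming") {
			editor.replaceRange(result, editor.getCursor());
		}
	},
});
```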

The function uses Obsidian's request library internally with async/await syntax. If streaming is enabled, it splits the response into individual chunks using blank lines (\n\n) as delimiters, strips the "data: " prefix from the start of each chunk, parses the JSON in each one, extracts the delta content from the choices field of the returned object, and appends each delta to the editor instance passed into the call, respecting the typing-speed setting configured elsewhere in the codebase.
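
A rough, self-contained sketch of just that parsing step (the function name and shape here are illustrative, not taken from the plugin):

```typescript
// Sketch: turn the raw SSE-style response string into a list of text deltas.
// `response` is assumed to be the full body returned by Obsidian's request().
function extractDeltas(response: string): string[] {
	const deltas: string[] = [];
	for (const chunk of response.split("\n\n")) {
		// Each chunk looks like `data: {...}`; the final chunk is `data: [DONE]`.
		const payload = chunk.split("data: ")[1];
		if (!payload || payload.includes("[DONE]")) continue;

		const delta = JSON.parse(payload).choices[0].delta.content;
		if (delta) deltas.push(delta);
	}
	return deltas;
}
```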

If streaming is disabled, it simply waits until the entire response has been received, then parses the JSON out of the stringified HTTP body returned by the request call and returns the message content directly.
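
That branch reduces to a single parse; sketching it with the same hypothetical `response` string:

```typescript
// Non-streaming: the body is one JSON document containing the whole message.
const responseJSON = JSON.parse(response);
const reply: string = responseJSON.choices[0].message.content;
```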

If anything goes wrong during the API call, the function displays a notice about the issue calling the OpenAI API and throws an error with the details, which also surface in the console.
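
Putting the streaming pieces back together, the simulated typing amounts to appending each delta at the cursor and sleeping briefly between chunks. A minimal sketch, assuming an Obsidian `Editor` and deltas like those produced by the parsing step above (the helper name is invented):

```typescript
import { Editor } from "obsidian";

// Sketch: write each delta at the cursor with a short pause, so the text
// appears to type itself even though the full response has already arrived.
async function typeDeltas(editor: Editor, deltas: string[], delayMs: number) {
	for (const delta of deltas) {
		const cursor = editor.getCursor();
		editor.replaceRange(delta, cursor);
		// Advance the cursor past the text that was just inserted.
		editor.setCursor({ line: cursor.line, ch: cursor.ch + delta.length });
		await new Promise((resolve) => setTimeout(resolve, delayMs));
	}
}
```

Here is the full function as it appears in the plugin: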

```typescript
async callOpenAIAPI(
		editor: Editor,
		messages: { role: string; content: string }[],
		model = "gpt-3.5-turbo",
		max_tokens = 250,
		temperature = 0.3,
		top_p = 1,
		presence_penalty = 0.5,
		frequency_penalty = 0.5,
		stream = true,
		stop: string[] | null = null,
		n = 1,
		logit_bias: any | null = null,
		user: string | null = null
	) {
		try {
			console.log("calling openai api");
			

			const response = await request({
				url: `https://api.openai.com/v1/chat/completions`,
				method: "POST",
				headers: {
					Authorization: `Bearer ${this.settings.apiKey}`,
					"Content-Type": "application/json",
				},
				contentType: "application/json",
				body: JSON.stringify({
					model: model,
					messages: messages,
					max_tokens: max_tokens,
					temperature: temperature,
					top_p: top_p,
					presence_penalty: presence_penalty,
					frequency_penalty: frequency_penalty,
					stream: stream,
					stop: stop,
					n: n,
					// logit_bias: logit_bias, // not yet supported
					// user: user, // not yet supported
				}),
			});

			if (stream) {
				// split response by new line
				const responseLines = response.split("\n\n");

				// remove data: from each line
				for (let i = 0; i < responseLines.length; i++) {
					responseLines[i] = responseLines[i].split("data: ")[1];
				}

				const newLine = ``; // hr for assistant
				editor.replaceRange(newLine, editor.getCursor());

				// move cursor to end of line
				const cursor = editor.getCursor();
				const newCursor = {
					line: cursor.line,
					ch: cursor.ch + newLine.length,
				};
				editor.setCursor(newCursor);

				let fullstr = "";

				// loop through response lines
				for (const responseLine of responseLines) {
					// if response line is not [DONE] then parse json and append delta to file
					if (responseLine && !responseLine.includes("[DONE]")) {
						const responseJSON = JSON.parse(responseLine);
						const delta = responseJSON.choices[0].delta.content;

						// if delta is not undefined then append delta to file
						if (delta) {
							const cursor = editor.getCursor();
							if (delta === "`") {
								editor.replaceRange(delta, cursor);
								await new Promise((r) => setTimeout(r, 82)); // what in the actual fuck -- why does this work lol
							} else {
								editor.replaceRange(delta, cursor);
								await new Promise((r) =>
									setTimeout(r, this.settings.streamSpeed)
								);
							}

							const newCursor = {
								line: cursor.line,
								ch: cursor.ch + delta.length,
							};
							editor.setCursor(newCursor);

							fullstr += delta;
						}
					}
				}

				console.log(fullstr);

				return "streaming";
			} else {
				const responseJSON = JSON.parse(response);
				return responseJSON.choices[0].message.content;
			}
		} catch (err) {
			new Notice(
				"issue calling OpenAI API, see console for more details"
			);
			throw new Error(
				"issue calling OpenAI API, see error for more details: " + err
			);
		}
	}
```

  1. take note of the frustration I had as a dev for the special case of backticks. They need to run on their own time for some reason I still do not know!
