I’ve been playing around with hooking up ChatGPT/Dall-E to WordPress and WP-CLI. To do this, I whipped up a super simple class to make this easier:
<?php
class OpenAI_API {

	public const API_KEY = 'hunter2'; // Get your own darn key!

	/**
	 * Generates an image based on the provided prompt using the OpenAI API.
	 *
	 * @param string $prompt The text prompt to generate the image from. Default is an empty string.
	 * @return string The response body from the OpenAI API, or a JSON-encoded error message if the request fails.
	 */
	public static function generate_image( string $prompt = '' ): string {
		$data = array(
			'model'   => 'dall-e-3',
			'prompt'  => trim( $prompt ),
			'quality' => 'hd',
			'n'       => 1,
			'size'    => '1024x1024',
		);

		$args = array(
			'body'        => wp_json_encode( $data ),
			'headers'     => array(
				'Content-Type'  => 'application/json',
				'Authorization' => 'Bearer ' . self::API_KEY,
			),
			'method'      => 'POST',
			'data_format' => 'body',
			'timeout'     => 60, // Image generation routinely takes longer than WordPress's default 5-second timeout.
		);

		$response = wp_remote_post( 'https://api.openai.com/v1/images/generations', $args );

		if ( is_wp_error( $response ) ) {
			return wp_json_encode( array( 'error' => $response->get_error_message() ) );
		}

		return wp_remote_retrieve_body( $response );
	}

	/**
	 * Creates a chat completion using the OpenAI GPT-3.5-turbo model.
	 *
	 * @param string $prompt        The user prompt to be sent to the OpenAI API.
	 * @param string $system_prompt Optional. The system prompt to be sent to the OpenAI API. Defaults to a predefined prompt.
	 *
	 * @return string The response body from the OpenAI API, or a JSON-encoded error message if the request fails.
	 */
	public static function create_chat_completion( string $prompt = '', string $system_prompt = '' ): string {
		if ( empty( $system_prompt ) ) {
			$system_prompt = 'You are a virtual assistant designed to provide general support across a wide range of topics. Answer concisely and directly, focusing on essential information only. Maintain a friendly and approachable tone, adjusting response length based on the complexity of the question.';
		}

		// The data to send in the request body.
		$data = array(
			'model'    => 'gpt-3.5-turbo',
			'messages' => array(
				array(
					'role'    => 'system',
					'content' => trim( $system_prompt ),
				),
				array(
					'role'    => 'user',
					'content' => trim( $prompt ),
				),
			),
		);

		$args = array(
			'body'        => wp_json_encode( $data ),
			'headers'     => array(
				'Content-Type'  => 'application/json',
				'Authorization' => 'Bearer ' . self::API_KEY,
			),
			'method'      => 'POST',
			'data_format' => 'body',
			'timeout'     => 15,
		);

		// Perform the POST request.
		$response = wp_remote_post( 'https://api.openai.com/v1/chat/completions', $args );

		// Error handling.
		if ( is_wp_error( $response ) ) {
			return wp_json_encode( array( 'error' => $response->get_error_message() ) );
		}

		if ( wp_remote_retrieve_response_code( $response ) !== 200 ) {
			return wp_json_encode(
				array(
					'error'    => 'API returned non-200 status code',
					'response' => wp_remote_retrieve_body( $response ),
				)
			);
		}

		return wp_remote_retrieve_body( $response );
	}
}
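Since part of the point was wiring this into WP-CLI, here's a rough sketch of a custom command built on the class. The `openai ask` command name is just something made up for illustration, not an existing command:

```php
<?php
// Hypothetical WP-CLI command — a sketch; the "openai ask" name is made up.
if ( defined( 'WP_CLI' ) && WP_CLI ) {
	WP_CLI::add_command(
		'openai ask',
		function ( $args ) {
			$response = OpenAI_API::create_chat_completion( $args[0] ?? '' );
			$body     = json_decode( $response, true );
			// The reply lives at choices[0].message.content in the chat completions response.
			WP_CLI::line( $body['choices'][0]['message']['content'] ?? 'No response.' );
		}
	);
}
```

With that registered (in a plugin or mu-plugin), you'd run something like `wp openai ask "What is WordPress?"` from the shell.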
I can generate images and get back text from the LLM. Here are some examples ChatGPT made to show how you can use these:
Example 1: Generating an Image
This example generates an image of a “cozy cabin in the snowy woods at sunset” using the generate_image method. Since the method returns the raw API response, the example decodes the JSON, pulls the image URL out of data[0], and displays it in an <img> tag.
<?php
$response  = OpenAI_API::generate_image( "A cozy cabin in the snowy woods at sunset" );
$body      = json_decode( $response, true );
$image_url = $body['data'][0]['url'] ?? ''; // The generations endpoint returns the URL inside data[0].
if ( ! empty( $image_url ) ) {
	echo '<img src="' . esc_url( $image_url ) . '" alt="Cozy cabin in winter">';
} else {
	echo 'Image generation failed.';
}
?>
Example 2: Simple Chat Completion
This example sends a question to the create_chat_completion method, decodes the JSON response, and prints the assistant’s reply.
<?php
$response = OpenAI_API::create_chat_completion( "How does photosynthesis work?" );
$body     = json_decode( $response, true );
echo esc_html( $body['choices'][0]['message']['content'] ?? 'No response.' );
?>
Example 3: Chat Completion with Custom System Prompt
This example sets a custom system prompt for a specific tone, here focusing on culinary advice, and asks a relevant question.
<?php
$system_prompt = "You are a culinary expert. Please provide advice on healthy meal planning.";
$response      = OpenAI_API::create_chat_completion( "What are some good meals for weight loss?", $system_prompt );
$body          = json_decode( $response, true );
echo esc_html( $body['choices'][0]['message']['content'] ?? 'No response.' );
?>
Here are some key limitations of this simple API implementation and why these are crucial considerations for production:
- Lack of Robust Error Handling:
  - This implementation only checks whether an error occurred during the request. It doesn’t provide specific error messages for different types of failures (like rate limits, invalid API keys, or network issues).
  - Importance: In production, detailed error handling allows for clearer diagnostics and faster troubleshooting when issues arise.
- No Caching:
  - The current API makes a fresh request for each call, even if the response might be identical to a recent query.
  - Importance: Caching can reduce API usage costs, improve response times, and reduce server load, particularly for commonly repeated queries.
- No API Rate Limiting:
  - This implementation doesn’t limit the number of requests sent within a certain time frame.
  - Importance: Rate limiting prevents hitting API request quotas and helps avoid unexpected costs or blocked access if API limits are exceeded.
- No Logging for Debugging:
  - There’s no logging in place for tracking request errors or failed attempts.
  - Importance: Logs provide an audit trail that helps diagnose issues over time, which is crucial for maintaining a stable application in production.
- Lack of Security for API Key Management:
  - The API key is currently hard-coded into the class.
  - Importance: In production, it’s best to use environment variables or a secure key management system to protect sensitive information and prevent accidental exposure of the API key.
- No Response Parsing or Validation:
  - The code assumes that the API response format is always correct, without validation.
  - Importance: Inconsistent or unexpected responses can cause failures. Validation ensures the app handles different cases gracefully.
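To make a couple of those points concrete, here’s a rough sketch of what fixing the caching and key-management gaps could look like using WordPress transients and an environment variable. The OPENAI_API_KEY variable name, the wrapper class name, and the one-hour TTL are all my assumptions, not part of the class above:

```php
<?php
// Hypothetical hardening sketch — not production code, just illustrating the shape.
class OpenAI_API_Hardened {

	private static function api_key(): string {
		// Prefer an environment variable over a hard-coded constant.
		// OPENAI_API_KEY is an assumed name; set it however your host allows.
		return getenv( 'OPENAI_API_KEY' ) ?: '';
	}

	public static function create_chat_completion( string $prompt ): string {
		$cache_key = 'openai_chat_' . md5( $prompt );
		$cached    = get_transient( $cache_key );
		if ( false !== $cached ) {
			return $cached; // Serve repeated prompts from the transient cache.
		}

		$response = OpenAI_API::create_chat_completion( $prompt );

		// Only cache responses that parsed as valid JSON (basic validation).
		if ( null !== json_decode( $response, true ) ) {
			set_transient( $cache_key, $response, HOUR_IN_SECONDS );
		}

		return $response;
	}
}
```

Transients are a natural fit here because WordPress handles expiry for you, and identical prompts stop costing you API calls for the lifetime of the cache entry.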
Why Not Use in Production?
Due to these limitations, this API should be considered a prototype or learning tool rather than a production-ready solution. Adding robust error handling, caching, rate limiting, and logging would make it more resilient, secure, and efficient for a production environment.
Alright, so listen to the LLM and don’t do anything stupid with this, like I am doing.