Run ChatGPT and GPT Models on Your Website with PHP


Image from Pixabay.

GPT models can improve the user experience of websites and web apps. They can translate, summarize, answer questions, and do many other tasks.

Integrating all these functionalities into your online service is fairly easy with the OpenAI API. Currently, OpenAI only provides official bindings for Python and Node.js.

Many third-party bindings have been developed by the community to facilitate deployment in other programming languages.

In this article, I will show you how to connect your website to OpenAI’s API in PHP. I will also explain how to parse and interpret the results returned by the API.

I will only cover GPT models but you can follow the same process for DALL-E and Whisper models.

GPT models

You don’t need to be familiar with GPT models to understand and implement this article, but I still recommend reading my simple introduction to GPT models:

PHP

You will only need to know the basics of PHP.

I will use a PHP library that we can install with Composer (so you will need Composer) and that requires at least PHP 8.1; the library won’t install with an older version of PHP.

OpenAI account

You will need an OpenAI account. If you don’t have one, here is my guide on how to create and manage an OpenAI account:

You will have to create an API key in your account and have a few cents of credits remaining if you want to run the examples.

We will use the client maintained by OpenAI PHP (MIT license) to communicate with OpenAI API.

Other PHP libraries do the same, but I chose this one for the following reasons:

  • It is listed by OpenAI, which is a reasonable guarantee that this library can be trusted.
  • It has the most stars on GitHub among all the PHP bindings for OpenAI API.
  • It is easy to install and use.
  • It is regularly updated to take into account the changes in the API and new OpenAI models.

To install it, open a terminal, go to your website/app’s root directory, and run Composer as follows:

composer require openai-php/client

If you don’t get any errors, you can start using the OpenAI API with PHP.

You must create an API key in your OpenAI account.

For safety reasons, I recommend creating a new API key for each web app you want to connect to the API.

If one of your products has a security breach, you can then just destroy the key in your OpenAI account without affecting your other apps.

You should not write this key directly in your PHP file but use an OS environment variable to store it. For instance, with Ubuntu/Debian, run:

export MY_OPENAI_KEY={your key}
#Replace {your key} with your OpenAI API key

In your PHP script you can get the value of this environment variable with:

<?php
$yourApiKey = getenv('MY_OPENAI_KEY');
//...remainder of your script
?>

If you don’t have access to your OS environment variables, the simplest alternative is to define a PHP constant in a separate file that you will require in all your PHP scripts using the API.

For instance, create a file “key.php”, preferably not in your website’s main directory, and write:

<?php
define('MY_OPENAI_KEY', '{your key}');
?>

Then write the following at the top of all your files that will use the API:

<?php
require_once("path/to/key.php"); //the path to your key.php file
$yourApiKey = MY_OPENAI_KEY;
//...remainder of your script
?>

The OpenAI PHP client supports all the tasks accessible through the OpenAI API. In this article, I will focus on “completion tasks” using GPT models.

A completion task is a task in which we prompt the model with a text and the API answers by adding text after this prompt.

There are two different types of completion tasks proposed by the API:

  • standard: a GPT-3 or GPT-4 model is prompted and then generates tokens following this prompt
  • chat: Given a list of messages describing a conversation history, the model will return a response. So here, the prompt is a set of messages with information about whether it was written by the model or the user.

I will demonstrate how to use the OpenAI PHP client for these two types of tasks.

Completion task with GPT-3

First, we need an objective. What do we want the GPT model to accomplish?

For this article, let’s say that our goal is to “translate” text into emojis.

One of the most critical steps when using GPT models is to find a good prompt for our task. If your prompt is not good, the model’s answer won’t be great either.

What’s a good prompt?

Prompt engineering is a very active research area. I won’t tackle this topic here but I plan to do it in my next article.

For our task, inspired by previous machine translation work using large language models, I propose the following prompt that gave reasonably good results:

Translate the following text into emoji:

[TXT]

Where [TXT] will be replaced by the text to translate into emojis.
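The substitution of [TXT] can be done with a small helper function. Here is a minimal sketch (the function name `buildPrompt` is my own, not part of any library):

```php
<?php
// Build the final prompt by replacing the [TXT] placeholder
// with the text to translate into emojis.
function buildPrompt(string $template, string $text): string
{
    return str_replace('[TXT]', $text, $template);
}

$template = "Translate the following text into emoji:\n\n[TXT]";
echo buildPrompt($template, "I would like a hamburger without onions.");
```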

This prompt has the advantage of being short, so it won’t cost much to use.

For example, we will try to translate into emojis the following text:

I would like a hamburger without onions.

So our prompt becomes:

Translate the following text into emoji:

I would like a hamburger without onions.

With the OpenAI PHP client, we can do this with the following code:

<?php
//This line is necessary to load the PHP client installed by Composer
require_once('../vendor/autoload.php');

//Change the next line to $yourApiKey = MY_OPENAI_KEY; if you didn't use an environment variable and set your key in a separate file
$yourApiKey = getenv('MY_OPENAI_KEY');

//Create a client object
$client = OpenAI::client($yourApiKey);

//The $prompt variable stores our entire prompt
$prompt = "Translate the following text into emoji:

I would like a hamburger without onions.
";

//We send our prompt along with parameters to the API
//It creates a completion task
$result = $client->completions()->create([
    'model' => 'text-davinci-003',
    'prompt' => $prompt
]);

//After a few seconds the response will be stored in $result
//We can print the text answered by GPT
echo $result['choices'][0]['text'];

?>

In this code, I assume the script sits in your website’s root directory, with Composer’s vendor directory one level above it (hence the path “../vendor/autoload.php”).

It should print a sequence of emojis. I obtained this one:

🍔🚫🧅

You may get a different sequence since GPT models are “non-deterministic”.
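If you want more reproducible outputs, you can lower the sampling temperature, which is a documented parameter of the completions endpoint; a value of 0 makes decoding close to deterministic. A sketch of the request parameters:

```php
<?php
// Request parameters with temperature set to 0 for (almost)
// deterministic sampling. Pass this array to
// $client->completions()->create($params).
$params = [
    'model' => 'text-davinci-003',
    'prompt' => "Translate the following text into emoji:\n\nI would like a hamburger without onions.\n",
    'temperature' => 0, // 0 = near-greedy decoding, higher = more random
];
```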

I used the “text-davinci-003” GPT model which is the most powerful GPT-3 model.

You can use a cheaper GPT model if your task is very simple. For instance, we can try to replace the model “text-davinci-003” with “ada”.

'model' => 'ada',

I got the following answer:

For example, enter This is the text “Looking For a hamburger

Yes, this is quite bad. There aren’t any emojis in this response. Choosing the right model is the most critical choice you will have to make when integrating the OpenAI API into your product.

  • If you choose an old or small model, the result will be of low quality and may not complete the task requested.
  • If you choose a bigger model, you may get the best results but for a higher cost.

You will have to try several models to figure out which is the best option given your objective. As a starting point, OpenAI provides some usage suggestions along with a list of available models.

In addition to the model name and prompt, the completion task can take many more parameters. They are all described in the API documentation.

For instance, we can specify the maximum number of tokens in the response as follows:

$result = $client->completions()->create([
    'model' => 'text-davinci-003',
    'prompt' => $prompt,
    'max_tokens' => 2
]);

This shouldn’t generate anything but a line break. Why?

One emoji consists of 3 tokens for text-davinci-003. So if we set ‘max_tokens’ to 2, the model can’t even generate a single emoji.

How do I know an emoji is made of 3 tokens?

I simply checked it in the playground of my OpenAI user account. For instance, if you put there “🍔🚫🧅”, the model will count 9 tokens.

Moreover, the GPT model generates a line break before the sequence of emojis. It counts as an additional token. In total, GPT answered me with 10 tokens.

Note that the “$result” variable contains all this information. We will have a look at it in the next part below.

But before that, let’s have a look at the chat completion task.

Chat completion task

Chat completion tasks are slightly different from what we did with GPT-3. Chat tasks are powered by gpt-3.5-turbo, which also powers ChatGPT.

With gpt-3.5-turbo, the “prompt” parameter is replaced by “messages”.

Technically, “messages” are associative arrays with two required keys, and one optional, as follows:

  • role (required): Can be either “system”, “assistant”, or “user”. At the time of writing, “system” is largely ignored, according to the OpenAI documentation. That leaves “assistant”, which is the model, and “user”, which is a human.
  • content (required): This is where we put our prompt, or the context of our prompt, for instance, the chat history.
  • name (optional): If you want to give a specific name to the author of the message.

The length and number of messages are limited only by the model’s maximum context size (4,096 tokens for gpt-3.5-turbo). That way, gpt-3.5-turbo can accept a fairly long chat history as input.
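For example, a short multi-turn history could be passed like this (a sketch; the earlier assistant reply is included so the model can resolve the follow-up request):

```php
<?php
// A multi-turn conversation: earlier user/assistant messages give
// the model the context it needs for the follow-up request.
$messages = [
    ['role' => 'user', 'content' => 'Translate into emoji: I would like a hamburger without onions.'],
    ['role' => 'assistant', 'content' => '🍔🚫🧅'],
    ['role' => 'user', 'content' => 'Now the same, but also without cheese.'],
];
// Pass the history to the API with:
// $client->chat()->create(['model' => 'gpt-3.5-turbo', 'messages' => $messages]);
```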

Chat completion can perform tasks similar to standard GPT-3 completion. In the documentation, OpenAI writes:

Because gpt-3.5-turbo performs at a similar capability to text-davinci-003 but at 10% the price per token, we recommend gpt-3.5-turbo for most use cases.

Let’s check it with our task of translating text into emojis.

We only have a few modifications to perform:

<?php
//This line is necessary to load the PHP client installed by Composer
require_once('../vendor/autoload.php');

//Change the next line to $yourApiKey = MY_OPENAI_KEY; if you set your key in a separate file
$yourApiKey = getenv('MY_OPENAI_KEY');

//Create a client object
$client = OpenAI::client($yourApiKey);

//The $prompt variable stores our entire prompt
$prompt = "Translate the following text into emoji:

I would like a hamburger without onions.
";

//We send our prompt along with parameters to the API
//It creates a chat completion task
$result = $client->chat()->create([
    'model' => 'gpt-3.5-turbo',
    'messages' => [
        ['role' => 'user', 'content' => $prompt],
    ],
]);

//After a few seconds the response will be stored in $result
//We can print the text answered by GPT
echo $result['choices'][0]['message']['content'];

?>

I obtained the same answer as with text-davinci-003, “🍔🚫🧅”, but at 10% of text-davinci-003’s price per token.

Now that you know how to communicate with the OpenAI API in PHP, we can take a closer look at what the API returns. As we will see, the response contains useful data that we can use to monitor the API cost, keep track of user activity (e.g., to flag prohibited behavior), etc.

We can make a printable version of the “$result” variable like this:

print_r($result->toArray());

For a chat completion task, it will print this:

Array
(
    [id] => chatcmpl-7AJFw****
    [object] => chat.completion
    [created] => 1682691656
    [model] => gpt-3.5-turbo-0301
    [choices] => Array
        (
            [0] => Array
                (
                    [index] => 0
                    [message] => Array
                        (
                            [role] => assistant
                            [content] => 🍔🚫🧅
                        )

                    [finish_reason] => stop
                )

        )

    [usage] => Array
        (
            [prompt_tokens] => 23
            [completion_tokens] => 9
            [total_tokens] => 32
        )

)

Note: I manually masked part of the “id”.

We have the following entries:

  • id: A unique ID assigned by OpenAI to the response. This information can help to track interactions between the API and your users.
  • object: The type of task performed.
  • created: The timestamp of the creation of the response.
  • model: The model used to generate the response.
  • choices: The generated message(s). By default, you will get only one message for a chat completion task, unless you change the “n” option when calling the API.
      • index: The index, starting at 0, of the message generated.
      • message: Information about the message generated.
          • role: The role of the author of the message.
          • content: The message itself.
      • finish_reason: The reason why the API stopped the generation of the message. By default it will be “stop”, i.e., the model stopped the generation without any constraints. It can change if you indicated a “stop” parameter when calling the API. The model would then stop the generation after generating one of the tokens you mentioned in “stop”.
  • usage: Information about the length in tokens. It can be used to monitor the API cost.
      • prompt_tokens: The number of tokens in your prompt.
      • completion_tokens: The number of tokens in the message generated by the API.
      • total_tokens: The sum of “prompt_tokens” and “completion_tokens”.
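As an illustration of the “n” and “stop” parameters mentioned above, here is a sketch of a chat request asking for two alternative completions and halting generation at the first newline (both parameters are documented in the API reference):

```php
<?php
// Request sketch: ask for 2 alternative completions ('n') and stop
// generation as soon as a newline is produced ('stop').
// Pass this array to $client->chat()->create($params).
$params = [
    'model' => 'gpt-3.5-turbo',
    'messages' => [
        ['role' => 'user', 'content' => 'Translate into emoji: good morning'],
    ],
    'n' => 2,          // number of choices returned in 'choices'
    'stop' => ["\n"],  // generation halts at the first newline
];
// Each returned choice would then have [finish_reason] => stop.
```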

The most important fields are “choices”, since this is what you will have to deliver to your users, and “usage” since this is the only metric that will tell you how much it cost to generate this answer.

To know the exact cost of an API call, you have to multiply the value of “total_tokens” by the cost of the model per token. Note that OpenAI shows pricing for 1,000 tokens so you will have to divide this number by 1,000 to get the price per token.

For instance, if we use a model costing $0.002 per 1,000 tokens, and “total_tokens” is 32, we can compute the total cost as follows:

0.002 / 1000 * 32 = 0.000064

This API call would cost $0.000064.
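The computation above can be wrapped in a small helper (the function name `costPerApiCall` is my own):

```php
<?php
// Compute the cost in dollars of an API call from the 'total_tokens'
// value of the response's 'usage' field and the model's price
// per 1,000 tokens.
function costPerApiCall(int $totalTokens, float $pricePer1kTokens): float
{
    return $pricePer1kTokens / 1000 * $totalTokens;
}

echo costPerApiCall(32, 0.002); // cost in dollars for our example call
```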

The response fields of a standard GPT-3 completion are almost identical to the fields of the chat completion task.

The only notable difference is that a standard completion task can also return the log probabilities of the t most probable tokens. You indicate t with the “logprobs” parameter when you call the API. The maximum value of t is 5. Note: OpenAI’s API reference indicates that you can request a higher limit from OpenAI if your application needs it.

We have learned how to communicate with the OpenAI API in PHP. Your online service can now exploit the full power of GPT models.

The next step would be to implement the front end. You don’t need to do something over-complicated for this. A simple AJAX script, using jQuery for instance, would be enough to asynchronously get the response from the PHP script that made the API call.

It can be as simple as this:

$.ajax({
    type: "POST",
    url: "call.php",
    data: { prompt: my_prompt }, //my_prompt stores the prompt
    success: function(data){
        data = $.parseJSON(data);
        $('#my_GPT_response').html(data["choices"][0]["message"]["content"]);
    }
});

This would print the content of the chat completion inside an HTML object with the id attribute set to “my_GPT_response”.

Your PHP script must receive the “prompt” as a $_POST variable, and the API answer should be encoded into a JSON object, as follows:

<?php
//This line is necessary to load the PHP client installed by Composer
require_once('../vendor/autoload.php');

//At least check that the prompt is sent
//Of course you should also check the content of the variable according to what you want to do with it
if (isset($_POST['prompt'])){
    //Change the next line to $yourApiKey = MY_OPENAI_KEY; if you set your key in a separate file
    $yourApiKey = getenv('MY_OPENAI_KEY');

    //Create a client object
    $client = OpenAI::client($yourApiKey);

    //The $prompt variable stores our entire prompt
    $prompt = "Translate the following text into emoji:

".$_POST['prompt']."
";

    //We send our prompt along with parameters to the API
    //It creates a chat completion task
    $result = $client->chat()->create([
        'model' => 'gpt-3.5-turbo',
        'messages' => [
            ['role' => 'user', 'content' => $prompt],
        ],
    ]);

    //Convert the response object to an array and return it as JSON
    echo json_encode($result->toArray());
}

?>

To conclude this article, I should mention once again that you must always check what you are sending to the API to ensure that you are not violating the policies and terms of use of OpenAI.

You can use the moderation model, free of charge, provided by OpenAI to flag unsafe content before you send it to a GPT model.
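With the same PHP client, a moderation check could look like the sketch below. The `moderations()` endpoint is part of the openai-php/client library, but check its current README for the exact method names; the gating logic here is my own illustration:

```php
<?php
// Sketch: flag a user prompt with the free moderation endpoint
// before forwarding it to a GPT model.
// Pass $params to $client->moderations()->create($params).
$params = [
    'model' => 'text-moderation-latest',
    'input' => $_POST['prompt'] ?? '',
];
// $response = $client->moderations()->create($params);
// if ($response['results'][0]['flagged']) {
//     // reject the prompt instead of sending it to a GPT model
// }
```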

It is also important to check the age of your users. OpenAI’s terms of use prohibit the use of their services by children under 13, while minors under 18 may use the services only with adult supervision.

If you like this article and would be interested to read the next ones, the best way to support my work is to become a Medium member using this link:

If you are already a member and want to support this work, just follow me on Medium.

