
Turn GPT-4 into a Poker Coach. Unleashing Creativity Beyond Chatbot… | by Jacky Kaub | May, 2023



Unleashing Creativity Beyond Chatbot Boundaries

Photo by Michał Parzuchowski on Unsplash

In this article, we will not talk about how LLMs can pass a law exam or replace a developer.

Nor will we look at tips for optimizing prompts to make GPT write motivation letters or marketing content.

Like many people, I think that the emergence of LLMs like GPT-4 is a small revolution from which many new applications will emerge. I also think we should not reduce these models to simple “chatbot assistants”: with the appropriate backend and UX, they can be leveraged into impressive next-level applications.

This is why, in this article, we are going to think a bit outside the box and build a real application around the GPT API, one that could not be reproduced through the chatbot interface alone, and see how proper app design can deliver a better user experience.

Leveraging GPT-4 in businesses

I have played a lot with GPT-4 since its release, and I think there are broadly two main families of use cases for building a business around the model.

The first is using GPT-4 to generate static content. Say you want to write a cookbook with a particular theme (for example, Italian food). You can craft detailed prompts, generate a few recipes with GPT, test them yourself, and include the ones you like in your book. In that case, prompting has a fixed cost: once the recipes are generated, you no longer need GPT. This kind of use case has many variations (marketing content, website content, or even generating datasets for other purposes), but it is less interesting if we want to focus on AI-oriented apps.

The logic of generating the content is outside the application, Author Illustration

The second use case is live prompting through an interface of your own design. Back to cooking: we could imagine a well-designed interface in which a user picks a few ingredients and a specialty, then asks the application to generate a recipe on the spot. Unlike the first case, the generated content is potentially infinite and better suited to the needs of your users.

In this scenario, the user interacts directly with the LLM via a well-designed UX which will generate prompts and content, Author Illustration

The drawback is that the number of calls to the LLM is potentially unbounded and grows with the number of users, whereas before the number of calls was finite and controlled. This means you will have to design your business model carefully and take great care to account for the cost of prompts.

As I write these lines, a GPT-4 prompt costs $0.03 per 1,000 tokens (with both request and answer tokens counted in the pricing). That does not seem like much, but it can escalate quickly if you do not pay attention. To work around this, you could, for example, offer users a subscription tied to their prompt volume, or limit the number of prompts per user (via a login system, etc.). We will talk in more detail about pricing later in this article.
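To make the pricing concrete, here is a back-of-envelope cost estimate for live prompting, using the rate quoted above ($0.03 per 1,000 tokens, request and answer combined). The token counts per hand are illustrative assumptions, not measurements.

```python
# Cost model for one LLM call, under the article's pricing assumption.
PRICE_PER_1K_TOKENS = 0.03  # $, request + answer tokens both billed


def request_cost(prompt_tokens: int, answer_tokens: int) -> float:
    """Dollar cost of a single call, counting tokens in both directions."""
    return (prompt_tokens + answer_tokens) / 1000 * PRICE_PER_1K_TOKENS


# Assume a hand history weighs ~500 tokens and the analysis ~300 tokens:
per_hand = request_cost(500, 300)   # 0.024 $ per analyzed hand
monthly = per_hand * 1000           # a user analyzing 1,000 hands a month
print(f"{per_hand:.3f} $/hand, {monthly:.2f} $/month")
```

Even at these modest assumptions, a heavy user costs real money, which is why a per-prompt quota or usage-based subscription matters.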

Why a use-case around Poker?

I thought for some time about the perfect use case to try with LLMs.

First, poker analysis is theoretically a field in which an LLM should perform well. Every poker hand played can be translated into standardized, simple text describing the evolution of the hand. For example, the hand below describes a sequence in which “player1” wins the pot after raising the bet of “player2” following the “flop”.

Seat 2: player1 (€5.17 in chips)
Seat 3: player3 (€5 in chips)
Seat 4: player2 (€5 in chips)
player1: posts small blind €0.02
player2: posts big blind €0.05
*** HOLE CARDS ***
Dealt to player2 [4s 4c]
player2: raises €0.10 to €0.15
player1: calls €0.13
player3: folds
*** FLOP *** [Th 7h Td]
player1: checks
player2: bets €0.20
player1: raises €0.30 to €0.50
player2: folds
Uncalled bet (€0.30) returned to player1
player1 collected €0.71 from pot

This standardization matters because it simplifies development. We will be able to simulate hands, translate them into this kind of prompt message, and “force” the LLM's answer to continue the sequence.
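A minimal sketch of turning a simulated hand into the standardized hand-history text shown above, so the engine can send it as a prompt for the model to continue. The data structures and the euro formatting are assumptions modeled on the example, not an official format specification.

```python
# Render a simulated hand as standardized hand-history text.
# seats: (seat number, player name, stack); actions: (player, verb, amount or None).
def render_hand(seats, actions):
    lines = [f"Seat {n}: {p} (€{stack:.2f} in chips)" for n, p, stack in seats]
    for player, verb, amount in actions:
        if amount is None:          # actions like "folds" or "checks" carry no amount
            lines.append(f"{player}: {verb}")
        else:
            lines.append(f"{player}: {verb} €{amount:.2f}")
    return "\n".join(lines)


prompt = render_hand(
    seats=[(2, "player1", 5.17), (3, "player3", 5.00)],
    actions=[("player1", "posts small blind", 0.02), ("player3", "folds", None)],
)
print(prompt)
```

The resulting string can be appended to the pre-prompt and cut off at the point where we want the model to predict the next action.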

Second, a lot of theoretical poker content is available in books, online, and elsewhere, making it likely that GPT has “learned” things about the game and about good moves.

Also, much of the added value will come from the app engine and the UX, not only from the LLM itself (for example, we will have to design our own poker engine to simulate games), which will make the application harder to replicate, or to simply reproduce via ChatGPT.

Finally, the use case fits well with the second scenario described above, where the LLM and a good UX can bring a completely new experience to users. We could imagine our application playing hands against a real user, analyzing those hands, and giving ratings and areas for improvement. The price per request should not be a problem, as poker learners are used to paying for this kind of service, so a “pay as you use” model might work in this particular case (unlike the recipe-app concept mentioned earlier).

About the GPT-4 API

I decided to build this article around the GPT-4 API for its accuracy compared to GPT-3.5. OpenAI provides a simple Python wrapper for sending inputs to, and receiving outputs from, the model. For example:

import os

import openai

openai.api_key = os.environ["OPENAI_KEY"]

completion = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": preprompt_message},  # pre-prompt
        {"role": "user", "content": user_message},         # engine-generated input
    ],
)

# The generated answer is in the first choice returned by the API
print(completion.choices[0].message["content"])

The “pre-prompt”, sent with the role “system”, helps the model act the way you want it to (you can typically use it to enforce a response format); the role “user” carries the message from the user. In our case, those messages will be pre-designed by our engine, for example by passing a particular poker hand to complete.
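As an illustration, here is one way the engine could assemble such a message pair for the poker use case. Both the coach persona in the system pre-prompt and the exact wording are hypothetical, chosen only to show the mechanism.

```python
# Hypothetical system pre-prompt: pins down the persona and response format.
preprompt_message = (
    "You are a poker coach. Given a partial hand history, reply only with "
    "the recommended action and a one-sentence justification."
)

# Engine-generated hand history, truncated where we want a recommendation.
hand_history = "*** FLOP *** [Th 7h Td]\nplayer1: checks\nplayer2: bets €0.20"
user_message = f"Here is the hand so far:\n{hand_history}\nWhat should player1 do?"

# The messages list in the shape expected by the chat endpoint.
messages = [
    {"role": "system", "content": preprompt_message},
    {"role": "user", "content": user_message},
]
```

This `messages` list is exactly what gets passed to the `messages` argument of the API call shown above.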

Note that the tokens from “system”, “user”, and the answer are all counted in the pricing scheme, so it is important to optimize those queries as much as you can.
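Since every token is billed, it helps to size a query before sending it. OpenAI's tiktoken library gives exact counts; as a dependency-free approximation, English text averages roughly four characters per token (a heuristic, not an exact rule, and the numbers below are only estimates).

```python
# Rough pre-flight size check on a query, using the ~4 chars/token heuristic.
def approx_tokens(text: str) -> int:
    """Crude token estimate; use tiktoken for exact counts in production."""
    return max(1, len(text) // 4)


preprompt = "You are a poker coach. Reply with the recommended action only."
hand = "player2: bets €0.20\nplayer1: raises €0.30 to €0.50"
total = approx_tokens(preprompt) + approx_tokens(hand)
print(total, "tokens (approx.) before the answer tokens are even counted")
```

Keeping pre-prompts terse and trimming redundant hand-history lines directly reduces the per-request cost.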


