AI Chat in Rock: GPT 3.5, GPT-4
Shared by Brian Davis, ONE&ALL Church · 2 years ago · 12.0 · General · Beginner

This recipe will have you chatting with GPT 3.5 or GPT-4 inside of Rock in 10-15 minutes. Sign up for an OpenAI account, import a workflow, copy/paste in an API key, and you'll be ready to chat.

The Rock Core Team is working on building functionality directly into Rock that will allow you to do what is included in this recipe and far more in the future. If you have any ideas that you would like added to Rock, don't hesitate to add them to the idea board at https://community.rockrms.com/ideas or reach out to the Rock Core Team at info@sparkdevnetwork.org to sponsor a feature.

I've also written a more detailed recipe here that allows you to drop AI chat responses anywhere into Rock.

Step 1: Create an OpenAI Account

ChatGPT is a product of OpenAI, so you'll need to set up an account with OpenAI and create an API key in order to interact with the ChatGPT API. Fortunately, OpenAI has made this process easy.

- Go to https://platform.openai.com/ to create an account.
- Go to https://platform.openai.com/account/billing/overview and click "Set Up Paid Account" to add billing info. If you have recently created an account, you don't technically need to do this immediately, since you will be set up with $5 worth of free tokens that expire after 3 months. That said, it's a good idea to add payment info so that your integration does not stop working when those tokens expire. (More info about free tokens.)

Pricing with OpenAI uses a token model, and the tokens used for this project are VERY cheap for GPT 3.5. GPT-4 is more expensive but provides better responses and allows you to input/output more text. As of this recipe being published, you need to apply to use GPT-4 here.
I applied and was granted access a couple of days later. (More info about tokens in general. More about pricing for tokens based on model.)

- Go to https://platform.openai.com/account/api-keys and click "Create new secret key" to create an API key. Save this key in a safe place. After the key is shown to you in the OpenAI interface it cannot be viewed again. That said, you can always create a new key if you need one.

Step 2: Import the Workflow into Rock

Download the workflow by clicking the "Download File" button at the bottom of this page. Head over to [YourAdminSite]/admin/power-tools/workflow-import and import the downloaded workflow. The workflow can also be accessed and downloaded from GitHub here. You can find more details about importing a workflow into Rock here.

Step 3: Secure Your Workflow

Since the workflow allows use of your API key, I recommend securing it so that it is only accessible to a specific group of people you are comfortable spending tokens on. Navigate to the imported workflow and click the padlock to set security.

Step 4: Add the API Key to the Workflow

Navigate to the imported workflow and click "Edit". Find the action named Query AI and add your API key into the first line, replacing the text PLACEYOURKEYHERE. You will see other settings that you can change, but they are not required to run the workflow. Save the workflow.

Step 5: Run the Workflow

Run the workflow and begin chatting! Much more is possible than simple chatting. You can update the workflow to pass in variables or entities and have that data included in the chat or role. Since this is a fully functional workflow, you can also add other options to the form to add or remove preset chat content from the request. The sky is the limit!

Step 6: Securing Your API Key (Optional)

If you would prefer not to store your API key in the workflow, it is possible to store it in a secured global attribute instead.
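For context on what the Query AI action is doing behind the scenes: the workflow sends an HTTPS POST to OpenAI's chat completions endpoint with a JSON body and your key in an Authorization header. Below is a minimal sketch of that request body in Python. The endpoint, model name, and message fields follow OpenAI's published chat API; the exact fields this recipe's workflow sends may differ slightly.

```python
import json

# Sketch of the JSON body the workflow POSTs to
# https://api.openai.com/v1/chat/completions
# (sent with an "Authorization: Bearer <your API key>" header).
def build_chat_request(user_message: str,
                       system_role: str = "You are a helpful assistant.",
                       model: str = "gpt-3.5-turbo") -> str:
    payload = {
        "model": model,  # swap in a GPT-4 model name if you have access
        "messages": [
            # The "system" message sets the assistant's role/behavior;
            # the "user" message carries the actual chat input.
            {"role": "system", "content": system_role},
            {"role": "user", "content": user_message},
        ],
    }
    return json.dumps(payload)

print(build_chat_request("Who was Jesus Christ?"))
```

Knowing this shape is handy when you extend the workflow: passing in variables or entities ultimately means splicing that data into the "content" of one of these messages.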
I'll detail the method of configuring that attribute and then adding the code to the workflow to access the attribute value. Special thanks to Leah Jennings, Kevin Rutledge, and Michael Allen in the RocketChat #API channel for the insights on how to best implement this.

1. Navigate to [YourInternalApplicationRoot]/admin/general/global-attributes.
2. Click the "+" sign at the top right of the list to add a new attribute.
3. Configure the attribute and click Save. The critical fields are:
   - Field Type: Encrypted Text
   - Password Field: True
4. Secure the attribute by clicking the padlock next to the new attribute and setting the View permissions to Rock Admins only.
5. Click on the attribute and enter your OpenAI key into the password-protected field.
6. Copy the Attribute Id shown on the far left.
7. Return to your workflow and replace the {%- assign openAIKey = 'YOURAPIKEY' %} code with:

```
{%- attributevalue where:'AttributeId == YOURATTRIBUTEID' limit:'1' securityenabled:'false' -%}
  {%- assign openAIKey = attributevalue.Value | Decrypt -%}
{%- endattributevalue -%}
```

8. Ensure that "Rock Entity" is checked in the Enabled Lava Commands setting of the Lava Run action.

The key is now stored in an encrypted attribute that can only be accessed by Rock Admins or those with the ability to run entity calls.

Known Limitations

ChatGPT has some limitations, most of which are intentional. The most prominent limitations I have found are:

- Response times can be slow. Most responses are returned within a second or two, but complex queries can take 20 seconds or more to return.
- It is limited in the number of combined tokens it can use across its input and output. For English text, 1 token is approximately 4 characters or 0.75 words. As of March 24th, 2023, this limitation is 4,096 tokens for the gpt-3.5-turbo model and 8,192 or 32,768 for GPT-4. A full list of the current models and their limits can be found at https://platform.openai.com/docs/models/.
If you are making small requests these token limits will not be an issue, but if you ask the AI to write or summarize large amounts of text you will start to notice them.
- If asked for its opinion on religious or other subjective matters, it will respond with a clarifying statement that it does not have opinions. In most cases it will then follow up with a qualified response. For example, the question "Who was Jesus Christ?" will result in a response such as "As an AI language model, I cannot provide personal opinions or beliefs. However, Jesus Christ is a central figure in Christianity who is believed to be the Son of God, born to the Virgin Mary, and was crucified, died, and resurrected to save humanity from sin. His teachings and life have had a profound impact on countless people throughout history."
- It will moderate its responses to avoid anything that could encourage harm or illegal activity. In one example I asked it to write a "rap battle" and it refused because it perceived that the "battle" could be violent. Rewriting the prompt allowed it to understand more clearly and create a fairly convincing and harmless rap. 🙂 As the model becomes more sophisticated these accidental refusals will likely decrease, but it is good to know that it may moderate your content if you happen to be promoting a rap battle. A full list of these policies is available at https://openai.com/policies/usage-policies.
- Its responses are not perfect. It is trained on data found on the internet, which is imperfect, and it can "misunderstand" and provide factually untrue information.

Share Your Experience

If you have any issues implementing this recipe or would like to share ideas about how it can be implemented, please don't hesitate to reach out to me on Twitter or RocketChat (@bscottdavis) or by emailing me at brian.davis@oneandall.church.

Download File
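The token approximations in the limitations above (1 token ≈ 4 characters ≈ 0.75 words for English) can be turned into a quick sanity check before you send a large prompt. The sketch below is a rough heuristic, not OpenAI's actual tokenizer, and the limits hard-coded in it are the March 2023 figures quoted above; check the models page for current values.

```python
# Rough check of whether text fits a model's combined input/output token
# limit, using the ~4-characters-per-token heuristic for English text.
# Limits are the figures current as of March 2023; see
# https://platform.openai.com/docs/models/ for up-to-date values.
TOKEN_LIMITS = {
    "gpt-3.5-turbo": 4096,
    "gpt-4": 8192,
    "gpt-4-32k": 32768,
}

def approx_tokens(text: str) -> int:
    # ~4 characters per token; OpenAI's real tokenizer will differ somewhat.
    return max(1, round(len(text) / 4))

def fits_in_context(text: str, model: str, reserved_for_reply: int = 500) -> bool:
    """True if the prompt plus a reply budget likely fits the model's limit."""
    return approx_tokens(text) + reserved_for_reply <= TOKEN_LIMITS[model]

print(fits_in_context("Summarize this paragraph.", "gpt-3.5-turbo"))  # a small prompt fits
```

A helper like this could be mirrored in Lava inside the workflow (using a string length divided by 4) to warn users before a request that is likely to exceed the model's limit.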