In this recipe we'll walk through the steps to build a basic integration in Rock RMS that communicates with ChatGPT to ask questions via the ChatGPT API. I'll also provide some examples of how this integration might be used. The recipe might look alarmingly long, but I'll have you chatting with an AI in a couple of minutes and the rest is just examples and refinements. I've included a list of limitations of the API at the bottom of this article, so if you run into any issues be sure to take a glance down there before you start troubleshooting.


If you are chomping at the bit to begin working with ChatGPT in Rock this is a great place to start.

Much more is possible via a deeper integration that involves embeddings and fine-tuning via the ChatGPT API. APIs also exist to transcribe audio, translate text from English into other languages, and even help you write code. I won't cover these areas in this recipe, but they are good to keep in mind as you follow along.


The Rock Core Team is working on building functionality directly into Rock that will allow you to do what is included in this recipe and far more in the future. If you have any ideas that you would like to see added to Rock, don't hesitate to add an idea to the idea board at https://community.rockrms.com/ideas or reach out to the Rock Core Team at info@sparkdevnetwork.org to sponsor a feature.

Step 1: Create OpenAI Account

ChatGPT is a product of OpenAI so you'll need to set up an account with OpenAI and create an API Key in order to interact with the ChatGPT API. Fortunately OpenAI has made this process easy.


  1. Go to https://platform.openai.com/ to create an account.
  2. Go to https://platform.openai.com/account/billing/overview and click "Set Up Paid Account" to add billing info. If you have recently created an account you don't technically need to do this immediately, since you will be set up with $5 worth of free tokens that expire after 3 months. That said, it might be a good idea to add payment info so that your integration does not stop working when those tokens expire. (More info about free tokens.) Pricing with OpenAI uses a token model, and the tokens used for this project are VERY cheap. A typical request will use less than 500 tokens, and with tokens priced at $0.002 / 1K tokens that means you will often pay less than $0.001 for each API call (see the quick math after this list). (More info about tokens in general.)
  3. Go to https://platform.openai.com/account/api-keys and click "Create new secret key" to create an API Key. Save this key in a safe place. After the key is shown to you in the OpenAI interface it cannot be viewed again. That said, you can always create a new key if you need one.
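
A quick bit of math to illustrate just how cheap this is (using the $0.002 / 1K token price mentioned above):

500 tokens x ($0.002 / 1,000 tokens) = $0.001 per request
 20 tokens x ($0.002 / 1,000 tokens) = $0.00004 (the "Hello World" test in Step 2)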


Step 2: Test Integration with OpenAI

Now we get to test this out!


Navigate to your Lava Tester or add an HTML block to a page and run the code below, replacing YOUR API KEY HERE with your recently created API key. You will need to have the Web Request command enabled in your Lava Tester or HTML block.

{% assign openAIKey = 'YOUR API KEY HERE' %}
{% assign content = 'Hello World' %}

{%- capture aiwebrequestbodycapture -%}
    {
      "model": "gpt-3.5-turbo",
      "messages": [{"role": "user", "content": "{{ content | Escape }}"}]
    }
{%- endcapture -%}
    
{%- assign aiwebrequestbodytrimmed = aiwebrequestbodycapture | Trim -%}

{%- webrequest url:'https://api.openai.com/v1/chat/completions' headers:'Authorization^Bearer {{ openAIKey }}' method:'POST' body:'{{ aiwebrequestbodytrimmed }}' requestcontenttype:'application/json' timeout:'60000' -%}
    {% for choice in results.choices %}
        {{- choice.message.content | Trim | Trim:'\n'  | NewlineToBr  -}}
    {% endfor %}
    {{ results | Debug }}
{%- endwebrequest -%}


The kind response shown in the output is generated by the AI; the Lava Debug Info below the response includes all of the data sent back from OpenAI, including details about the tokens used. Feel free to change up the "content" to play around with it a bit, but be careful: that "Hello World" request cost 20 tokens, or about $0.00004. Tokens don't grow on trees. :)
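
For reference, the Debug output is just the deserialized response from the Chat Completions endpoint. An abridged example of what comes back looks something like the JSON below (the id, model and token counts shown are illustrative and will differ for your request); the Lava loop above simply walks the choices array and prints each message's content.

{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1678464000,
  "model": "gpt-3.5-turbo-0301",
  "choices": [
    {
      "index": 0,
      "message": { "role": "assistant", "content": "Hello! How can I assist you today?" },
      "finish_reason": "stop"
    }
  ],
  "usage": { "prompt_tokens": 10, "completion_tokens": 10, "total_tokens": 20 }
}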


The "model" parameter found in this code is the most recent model as of March 10th, 2023, but this model updates regularly. Version 4 will likely be released soon and further updates will likely continue indefinitely with each update bringing significant improvements. Check the OpenAI documentation to find the most recent stable model that meets your needs.


You can ask something simple and get a response similar to what you would expect to see at the top of Google results, or you can ask something very detailed and it will respond with an entire blog article, social media post, sonnet, or rap battle. You can also ask it to mimic a specific writing sample you provide or respond as if speaking to a specific audience.


If you try something like "Write a social media post to promote a church high school camp in the Florida everglades." you'll get something similar to:


Attention all high school students! Join us for an unforgettable summer camp experience in the Florida Everglades! Connect with friends, explore nature, and deepen your faith. Don't miss out on this incredible opportunity to grow in your relationship with God and create memories that will last a lifetime. Register now for our church high school camp! #EvergladesSummerCamp #ChristianYouth #HighSchoolersUnite #FaithAndNature #ComingThisSummer


It might be tempting to start splashing this code all over your site to test different usages, but instead we are going to create a simple shortcode so you can keep all of this AI-related code in one place. We will then go over some common usages of the code and then supercharge the shortcode to allow caching and more control over the API call.


Step 3: Build Simple Shortcode

Rock provides a very convenient way to encapsulate all of this code in a shortcode so that you can access it anywhere that you want to utilize ChatGPT.


To do this:

  1. Head over to [YourInternalSite]/admin/cms/lava-shortcodes/
  2. Click the "+" sign to create a new shortcode
  3. Enter the Information Below
    • Name: This can be whatever you would like.
    • Tag Name: openaichat
    • Tag Type: Block
    • Categories: Whatever you would like.
    • Documentation:
Example of usage:
{[ openaichat ]}
What is the answer to the Ultimate Question of Life, The Universe, and Everything?
{[ endopenaichat ]}
    • Shortcode Markup: (Be Sure to Replace API Key)
{%- if blockContent != empty and blockContent != null -%}
    {%- assign openAIKey = 'YOURAPIKEYHERE'  -%}
    
    {%- capture aiwebrequestbodycapture -%}
        {
          "model": "gpt-3.5-turbo",
          "messages": [{"role": "user", "content": "{{ blockContent | StripNewlines | Escape }}"}]
        }
    {%- endcapture -%}
    
    {%- assign aiwebrequestbodytrimmed = aiwebrequestbodycapture | Trim -%}
    
    {%- webrequest url:'https://api.openai.com/v1/chat/completions' headers:'Authorization^Bearer {{ openAIKey }}' method:'POST' body:'{{ aiwebrequestbodytrimmed }}' requestcontenttype:'application/json' timeout:'60000' -%}
        {% for choice in results.choices %}
            {{- choice.message.content | Trim | Trim:'\n'  | NewlineToBr  -}}
        {% endfor %}
    {%- endwebrequest -%}
{% endif %}
    • Enabled Lava Commands: Web Request



Now head back to your lava tester and run this code:

{[ openaichat ]}
What is the answer to the Ultimate Question of Life, The Universe, and Everything?
{[ endopenaichat ]}


Step 4: Create an Event Idea Lab Powered By ChatGPT

Before we do a little more refinement, let's test out this shortcode on some data in the database.


We have a fair amount of good data in our Calendar system at ONE&ALL so it was a prime candidate for using AI to create some content. You can follow the steps below to create something similar or you can just follow along to get an idea of what is possible.


  1. Navigate to your internal public calendar. For many churches this will be at: [InternalWebsiteRoot]/web/calendars/1/
  2. Click on any event. This will take you to the Event Detail page.
  3. Add a new HTML block to the bottom of the Main section of the page.
  4. Secure the new block so that only you or Rock Admins can view it by clicking on the padlock icon. More about security settings here.
  5. Click the gear icon to edit the settings of the newly created HTML block.
  6. At the very bottom of the block settings, set the Context Entity Type to "Event Item", then click Save. We are using Context Entity Types in this case because the context is already configured on the page, but you could do something similar using an entity call if you would like (a rough sketch follows the pasted code below). Michael Garrison wrote a great recipe here on how Context works on a page.
  7. Click the pencil/paper icon on the newly created block to Edit the HTML of the block.
  8. Paste the following content into the HTML field while in the Code Editor mode.
{% assign eventItem = Context.EventItem %}
{% assign eventItemDescriptionStripped =  eventItem.Description | HtmlDecode | StripHtml %}
{% eventscheduledinstance eventid:'{{eventItem.Id}}' maxoccurrences:'1'  %}
    {% for occurrence in EventItems %}
        {% for item in occurrence %}
            {% capture nextDay %}{{item.DateTime | Date:'MMM' }} {{item.DateTime | Date:'d' | NumberToOrdinal }}{% endcapture %}
        {% endfor %}
    {% endfor %}
{% endeventscheduledinstance %}

{[ panel title:'Alternate Summary Idea (AI Generated)' icon:'fas fa-lightbulb' ]}
    <h3>Current Summary</h3>
    <p>{{ eventItem.Summary }}</p>
    <h3>AI Generated</h3>
    {[ openaichat ]}
       How would you improve this summary text for a {{ eventItem.Name }} event: {{ eventItem.Summary }}
    {[ endopenaichat ]}
{[ endpanel ]}

{[ panel title:'Alternate Description Idea (AI Generated)' icon:'fas fa-lightbulb' ]}
    <h3>Current Description</h3>
    {{ eventItem.Description }}
    <h3>AI Generated</h3>
    {[ openaichat ]}
       How would you improve this promotional text for a {{ eventItem.Name }} event: {{ eventItemDescriptionStripped }}
    {[ endopenaichat ]}
{[ endpanel ]}

{[ panel title:'Example Promotional Text (AI Generated)' icon:'fas fa-lightbulb' ]}
    {[ openaichat ]}
        Create 3 different options of short promotional text to be included in an email to promote a {{ eventItem.Name }} event on {{ nextDay }} at {{ 'Global' | Attribute:'OrganizationName' }}. The event is summarized as {{ eventItem.Summary }} and is being promoted with the text: {{ eventItemDescriptionStripped }}.
    {[ endopenaichat ]}
{[ endpanel ]}
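
As a side note (mentioned in step 6 above), if you would rather not rely on the page's context entity you could pull the event directly with a Rock Entity command. Below is a rough sketch only: the Id of 1 is just a placeholder, the block would need 'Rock Entity' added to its Enabled Lava Commands, and the singular eventitem variable follows the same pattern the attributevalue command uses in Step 6, so check the Rock Lava entity command documentation if your results come back under a different name.

{% comment %} Hypothetical sketch: load a specific Event Item by Id instead of relying on page context {% endcomment %}
{% eventitem where:'Id == 1' limit:'1' %}
    {% assign eventItem = eventitem %}
{% endeventitem %}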


Each use of this shortcode will constitute a new API call to OpenAI, so depending on the complexity of each request it may add some delay to the loading of the page. (Which is why we secured the block.)


Below are some other ideas for panels that you could add. Be warned that adding all of these to a single page will cause the page to load VERY slowly.


{[ panel title:'Example Promotional Text' icon:'fas fa-lightbulb' ]}
    {[ openaichat ]}
        Create 3 different options of short promotional text for {{ 'Global' | Attribute:'OrganizationName' }} to be included in an email to promote a {{ eventItem.Name }} event on {{ nextDay }}. The event is summarized as {{ eventItem.Summary }} and is being promoted with the text: {{ eventItemDescriptionStripped }}.
    {[ endopenaichat ]}
{[ endpanel ]}


{[ panel title:'Example Instagram Posts' icon:'fab fa-instagram' ]}
    {[ openaichat ]}
        Write 3 instagram posts for {{ 'Global' | Attribute:'OrganizationName' }} to promote a {{ eventItem.Name }} event on {{ nextDay }} that can be summarized as {{ eventItem.Summary}} and is being promoted with the text: {{ eventItemDescriptionStripped }}.
    {[ endopenaichat ]}
{[ endpanel ]}

{[ panel title:'Snapchat Post Ideas (AI Generated)' icon:'fab fa-snapchat-ghost' ]}
    {[ openaichat ]}
        Give me 3 ideas for Snapchat posts to promote a {{ eventItem.Name }} event on {{ nextDay }} at {{ 'Global' | Attribute:'OrganizationName' }} that can be summarized as {{ eventItem.Summary }} and is being promoted with the text: {{ eventItemDescriptionStripped }}
    {[ endopenaichat ]}
{[ endpanel ]}

{[ panel title:'TikTok Post Ideas (AI Generated)' icon:'fab fa-tiktok' ]}
    {[ openaichat ]}
        Give me 3 ideas for TikTok posts to promote a {{ eventItem.Name  }} event on {{ nextDay }} at {{ 'Global' | Attribute:'OrganizationName' }} that can be summarized as {{ eventItem.Summary }} and is being promoted with the text: {{ eventItemDescriptionStripped }}.
    {[ endopenaichat ]}
{[ endpanel ]}

{[ panel title:'General Promotional Ideas (AI Generated)' icon:'fas fa-lightbulb' ]}
    {[ openaichat ]}
         Give me 3 ideas to promote a {{ eventItem.Name }} event on {{ nextDay }} at {{ 'Global' | Attribute:'OrganizationName' }} that can be summarized as {{ eventItem.Summary }} and is being promoted with the text: {{ eventItemDescriptionStripped }}.
    {[ endopenaichat ]}
{[ endpanel ]}


Hopefully this gives you some ideas of how this integration could be used with your data. If you have a wealth of data stored in content items related to sermons and sermon series you can ask for ideas to promote those. You can use it to summarize blog posts and extract keywords.
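
For example, assuming a content channel item is available on the page as item (item is just a placeholder name here; it could come from a Content Channel Item block's merge fields, page context, or an entity call), a summarizing prompt might look like this:

{[ openaichat ]}
    Summarize the following blog post in two sentences and then list five keywords for it: {{ item.Content | HtmlDecode | StripHtml }}
{[ endopenaichat ]}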


The strength of this idea lab concept is that not every response has to be golden; it lets you experiment with the AI and integrate the ideas as your team sees fit. By placing this content alongside the admin pages your team uses every day, it also allows them to use the AI as an assistant to help brainstorm and stumble upon inspiration.


Step 5: Upgrade Your Shortcode

There are some changes you can make to your shortcode to help with troubleshooting, improve speed, and help the AI understand a little more about the context of your church.


Church Info

If your church is very large, ChatGPT may already understand the dynamics of your church, but if your church is smaller or has a common name the AI may not be as well tuned to the context of promoting your programs and events. We can solve this by adding a little text before each request that helps provide that context. Feel free to change the logic to fit the needs of your church. Including this information in your request will increase the cost and response time of each request. It may also result in the AI parroting some of the church info back to you when not appropriate, so "less is more" and there is a bit of an art to finding the right balance. In many cases simply tying the church to your public website is enough to allow the AI to understand exactly which church you are talking about. This can be done with something like "ONE&ALL Church is a church with the website of www.oneandall.church."


Caching

Each time the shortcode is run it will make an API call by default. If you would like it to only ask each question once and save the reply, I have included some code below that caches the response by default. You'll need to determine if this usage is appropriate for your church; the Rock documentation warns that overuse of caching can cause issues. If caching is not ideal for you, it is also possible to save the output of the API calls into attributes or persisted datasets.


Verbose Mode

If you would like to view the data sent to and returned from the API, I have included some code that outputs this information when verbose:'true' is included in the shortcode.


API Request Options

There are many optional variables that you can define when making an API call. Most of these variables relate to the logic used by the AI to create the output. More about these variables can be found here. For the purpose of this example I have included options in the shortcode that allow you to edit the model, the timeout duration for the API call (in milliseconds) and the number of choices to generate (n) but you can add others if you would like. It wouldn't be a great shortcode without dozens of variables that are rarely used. 🙂
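
As an illustration, if you also wanted to expose the temperature and max_tokens options that the Chat Completions API accepts (the option names are real API parameters, but the temperature and maxtokens shortcode parameters shown here are hypothetical and would need defaults defined on your shortcode), the request body capture could grow to something like the sketch below. Lower temperature values make the output more predictable, and max_tokens caps the length of the reply.

{%- capture aiwebrequestbodycapture -%}
    {
      "model": "{{ model }}",
      "n": {{ n }},
      "temperature": {{ temperature }},
      "max_tokens": {{ maxtokens }},
      "messages": [{"role": "user", "content": "{{ churchtext | Append: blockContent | StripNewlines | Escape }}"}]
    }
{%- endcapture -%}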


This is an example of the upgraded code:

{%- if blockContent != empty and blockContent != null -%}
    {%- assign openAIKey = 'YOURAPIKEY'  -%}

    {%- if churchinfo == 'medium' -%}
        {% capture churchtext %}ONE&ALL Church is a church in Southern California with a mission to see those who are far from God, come near to God. The primary audience for ONE&ALL Church is 18-35 year olds. The ONE&ALL church website is www.oneandall.church.{% endcapture %}
    {%- elseif churchinfo == 'large' -%}
        {% capture churchtext %}ONE&ALL Church is a church in Southern California with a mission to see those who are far from God, come near to God. ONE&ALL has campuses in San Dimas, Rancho Cucamonga, West Covina and Upland. The primary audience for ONE&ALL Church is 18-35 year olds. The ONE&ALL church website is www.oneandall.church.{% endcapture %}
    {%- elseif churchinfo == 'none' -%}
        {% assign churchtext = '' %}
    {%- else -%}
        {% capture churchtext %}ONE&ALL Church is a church in Southern California. The primary audience for ONE&ALL Church is 18-35 year olds. The ONE&ALL church website is www.oneandall.church.{% endcapture %}
    {%- endif -%}
    
    {%- capture aiwebrequestbodycapture -%}
        {
          "model": "{{ model }}",
          "n": {{ n }},
          "messages": [{"role": "user", "content": "{{ churchtext | Append: blockContent | StripNewlines | Escape }}"}]
        }
    {%- endcapture -%}
    
    {%- assign aiwebrequestbodytrimmed = aiwebrequestbodycapture | Trim -%}
    
    {%- cache key:'aichat-{{ aiwebrequestbodytrimmed | Append: verbose | Md5}}' duration:'{{ cacheduration }}' -%}
        {%- webrequest url:'https://api.openai.com/v1/chat/completions' headers:'Authorization^Bearer {{ openAIKey }}' method:'POST' body:'{{ aiwebrequestbodytrimmed }}' requestcontenttype:'application/json' timeout:'{{ timeout }}' -%}
            {% for choice in results.choices %}
                {{- choice.message.content | Trim | Trim:'\n'  | NewlineToBr  -}}
            {% endfor %}
        {%- endwebrequest -%}
        {% if verbose == 'true' %}
            <br>Verbose Mode:
            <br>Prompt: {{ churchtext | Append: blockContent | StripNewlines | Escape }}
            <br>Json Sent: {{ aiwebrequestbodytrimmed }}
            <br><br>Response{{ results | Debug }}
        {% endif %}
    {%- endcache -%}
{% endif %}


Below is an example of the parameters and lava commands required for this code to function.
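
(The exact values are up to you; the defaults below are just reasonable starting points based on how the markup above uses each parameter.)

  • model: gpt-3.5-turbo
  • n: 1
  • timeout: 60000
  • cacheduration: 3600
  • churchinfo: small
  • verbose: false
  • Enabled Lava Commands: Web Request and Cache (the cache tag is a Lava command that needs to be enabled), plus Rock Entity if you follow Step 6 below

A call that overrides a few of these might look like:

{[ openaichat n:'2' cacheduration:'300' verbose:'true' ]}
    Write two sentences welcoming first-time guests to our church.
{[ endopenaichat ]}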

Step 6: Securing Your API Key

If you would prefer not to store your API Key in the shortcode, it is possible to store the key in a secured Global attribute instead. I'll detail how to configure that attribute and then add the code to the shortcode to access the attribute value. Special thanks to Leah Jennings, Kevin Rutledge and Michael Allen in the RocketChat #API channel for the insights on how to best implement this.


  1. Navigate to [YourInternalApplicationRoot]/admin/general/global-attributes
  2. Click the "+" Sign at the top right of the list to add a new attribute
  3. Configure the attribute as shown below and click Save. Critical fields are listed below.
    • Field Type: Encrypted Text
    • Password Field: True

  4. Secure the attribute by clicking the padlock next to the new attribute and setting the View permissions to Rock Admins only.



  5. Click on the attribute and enter your OpenAI Key into the password-protected field.
  6. Copy the Attribute Id shown on the far left of the attribute list.
  7. Return to your shortcode and replace the {%- assign openAIKey = 'YOURAPIKEY' -%} line with:
{%- attributevalue where:'AttributeId == YOURATTRIBUTEID' limit:'1' securityenabled:'false' -%}
    {%- assign openAIKey = attributevalue.Value | Decrypt  -%}
{%- endattributevalue -%}
  8. Ensure that 'Rock Entity' is checked in the Enabled Lava Commands of your shortcode.


The key is now stored in an encrypted attribute that can only be accessed by Rock Admins or those with the ability to run entity calls.


Known Limitations

ChatGPT has some limitations, most of which are intentional.


The prominent limitations I have found are:

  • Response times can be slow. Most responses are returned within a second or two, but complex queries can take 20 seconds or more to return.
  • It is limited in the number of combined tokens it can use across its input and output. For English text, 1 token is approximately 4 characters or 0.75 words. As of March 10th, 2023 this limit is 4,096 tokens for the gpt-3.5-turbo model, but this will likely change for future models. A full list of the current models and their limitations can be found at https://platform.openai.com/docs/models/. If you are making small requests these token limits will not be an issue, but if you ask the AI to write or summarize large amounts of text you will start to notice issues.
  • If asked for a religious or any other opinion, it will respond with a clarifying statement that it does not have opinions on these matters. In most cases it will then follow up with a qualified response. For example, the question "Who was Jesus Christ?" will result in a response such as "As an AI language model, I cannot provide personal opinions or beliefs. However, Jesus Christ is a central figure in Christianity who is believed to be the Son of God, born to the Virgin Mary, and was crucified, died, and resurrected to save humanity from sin. His teachings and life have had a profound impact on countless people throughout history."
  • It will moderate its responses to avoid anything that could encourage harm or illegal activity. In one example I asked it to write a "rap battle" and it refused to provide a response because it perceived that the "battle" could be violent. Rewriting the prompt allowed it to understand more clearly and create a fairly convincing and harmless rap. 🙂 As the model becomes more sophisticated these accidental limitations will likely decrease, but it is good to know that it may moderate your content if you happen to be promoting a rap battle. A full list of these policies is available at https://openai.com/policies/usage-policies.
  • Its responses are not perfect. It is trained on imperfect data found on the internet, and it can "misunderstand" and provide factually untrue information.


Share Your Experience

If you have any issues implementing this recipe or would like to share ideas about how this can be implemented please don't hesitate to reach out to me on Twitter or RocketChat (@bscottdavis) or by emailing me at brian.davis@oneandall.church.