From time immemorial (well, let’s say late 2022), we’ve all seen AI models confidently answer a question with nonsense or made-up facts. This behavior is known as “hallucinating,” and it happens when a model lacks the knowledge it needs but is incentivized to be helpful anyway. At other times, responses are simply vague and unhelpful. The good news? There’s a fix! This tutorial walks you through how to use Gloo AI’s Grounded Completions API with RAG (Retrieval-Augmented Generation). You’ll see how giving your AI the right context makes all the difference.
Key Problem: Without grounding, AI models may hallucinate answers about your organization, products, or services.
The Solution: Grounded Completions uses RAG to retrieve relevant content from YOUR publisher before generating responses. This tutorial shows how grounding on your own content transforms generic AI into an accurate, source-backed assistant.

Prerequisites

Before starting, ensure you have:
New to Publishers? Check out the Upload Files to Data Engine recipe to learn how to create a Publisher and upload your documents using Gloo AI APIs.

What You’ll Build

A 2-step comparison that shows you how RAG grounding works:
  1. Non-grounded (Step 1): Generic model knowledge, may hallucinate
  2. Grounded on your publisher (Step 2): Your specific content, accurate and source-backed
This comparison shows the dramatic difference grounding on your own content makes.

Example: Bezalel Ministries

For this tutorial, we’ve cooked up a fictional organization called Bezalel Ministries: a faith-based creative group that produces biblically accurate artwork and educational resources. They’re delightful. We’ve created 5 sample documents that cover everything from their hiring process to their educational programs and research methodology. Why use a made-up org? Because it lets us show you exactly how the API works without any real-world baggage. Plus, Bezalel Ministries is more interesting than “Company X” or “Acme Corp”.

What the comparison reveals:
  • Step 1 (Non-grounded): Generic hiring advice with no knowledge of Bezalel
  • Step 2 (Publisher grounded): Accurate details about Bezalel’s 3-phase selection journey from their actual docs

Working Code Sample

Follow along with complete working examples in all 6 languages (JavaScript, TypeScript, Python, PHP, Go, Java), including the 5 Bezalel sample content files. Setup and testing instructions are provided later.

Step 1: Understanding the Hallucination Problem

Without grounding, when you ask about specific organizational information, the model relies on general knowledge and may:
  • Provide generic answers that don’t match your reality
  • Make up plausible-sounding but incorrect information
  • Say “I don’t have information” even though you’ve uploaded relevant content

The Non-Grounded Request

This is a standard Completions V2 call:
import requests

def make_non_grounded_request(query, token):
    """Standard completion WITHOUT RAG - may hallucinate."""

    api_url = "https://platform.ai.gloo.com/ai/v2/chat/completions"
    headers = {
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/json"
    }

    payload = {
        "messages": [{"role": "user", "content": query}],
        "auto_routing": True,
        "max_tokens": 500
    }

    response = requests.post(api_url, headers=headers, json=payload)
    response.raise_for_status()
    return response.json()

# Example query
query = "What is Bezalel Ministries' hiring process?"
result = make_non_grounded_request(query, access_token)
print(result['choices'][0]['message']['content'])
# Result: Generic hiring advice, not specific to Bezalel
What you’ll see: A generic response about hiring processes, or possibly “I don’t have specific information about Bezalel Ministries.” The model doesn’t know about your organization because it wasn’t trained on your content.

Step 2: Grounding on Your Publisher

Now let’s use the /grounded endpoint with the rag_publisher parameter to ground on YOUR specific content.

The Publisher Grounded Request

This request makes two key changes from Step 1: it calls the /grounded endpoint and adds the rag_publisher parameter:
import requests

def make_publisher_grounded_request(query, token, publisher_name, sources_limit=3):
    """Grounded on YOUR publisher - accurate, source-backed."""

    api_url = "https://platform.ai.gloo.com/ai/v2/chat/completions/grounded"
    headers = {
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/json"
    }

    payload = {
        "messages": [{"role": "user", "content": query}],
        "auto_routing": True,
        "rag_publisher": publisher_name,  # KEY: This grounds on YOUR content
        "sources_limit": sources_limit,
        "max_tokens": 500
    }

    response = requests.post(api_url, headers=headers, json=payload)
    response.raise_for_status()
    return response.json()

# Same query, now grounded on YOUR publisher
query = "What is Bezalel Ministries' hiring process?"
result = make_publisher_grounded_request(query, access_token, "Bezalel")
print(result['choices'][0]['message']['content'])
print(f"Sources used: {result.get('sources_returned')}")
# Result: Detailed answer about Bezalel's 3-phase selection journey
# Sources used: True
What you’ll see: A detailed, accurate response about Bezalel’s specific 3-phase selection journey, pulled directly from their uploaded content. The sources_returned: true flag confirms RAG found and used relevant sources.
Why this query works: The Bezalel publisher contains a document titled “bezalel_hiring_process.txt” that describes their selection journey in detail. When you ground on this publisher, the RAG system retrieves this document before generating the response.
For your own use case, design queries that match your uploaded content. If you upload product documentation, ask product questions. If you upload HR policies, ask HR questions.
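To catch a query/content mismatch early, it helps to check the sources_returned flag programmatically. A minimal sketch (check_grounding is our own helper name, operating on the response dict returned by the grounded call):

```python
def check_grounding(response):
    """Return True when a grounded response actually used retrieved sources.

    A false sources_returned flag usually means the query doesn't match the
    uploaded content, the publisher name is wrong, or indexing isn't finished.
    """
    if not response.get("sources_returned", False):
        print("Warning: no sources retrieved; check the publisher name, "
              "indexing status, and whether the query matches your content.")
        return False
    return True

# Example with stub responses standing in for real API output:
print(check_grounding({"sources_returned": True}))   # True
print(check_grounding({"sources_returned": False}))  # warning, then False
```

Calling this after every grounded request gives you an early signal before you start debugging response quality.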

Side-by-Side Comparison

| Step | Endpoint | rag_publisher | sources_returned | Response Quality |
|---|---|---|---|---|
| 1. Non-grounded | /ai/v2/chat/completions | N/A | N/A | Generic, may hallucinate |
| 2. Publisher grounded | /ai/v2/chat/completions/grounded | Your publisher | true | Accurate, specific |

Quick Reference: Publisher & Query Examples

Different content types require different query approaches:
| Content Type | Publisher Example | Good Query Examples | Why It Works |
|---|---|---|---|
| Organizational Docs | “Bezalel” | “What is the hiring process?”; “What benefits do employees get?” | Documents contain policy/process descriptions |
| Bible Study Resources | “FaithLibrary” | “What study guides are available on the Book of Romans?”; “How do I start a small group Bible study?” | Study materials answer how-to and topical questions |
| Pastoral Care Support | “CareMinistry” | “What counseling services are available?”; “How do I request prayer support?” | Ministry docs answer member care questions |
| Educational Content | “CourseCatalog” | “What courses are available on Python?”; “What are the prerequisites for Course 101?” | Course descriptions match query topics |
Key Pattern: Your queries should directly relate to the information in your uploaded documents.
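The pattern above can be exercised in a loop. Here’s a minimal sketch of a batch runner; request_fn stands in for the make_publisher_grounded_request helper from Step 2, injected as a parameter so the sketch can be tried with a stub before spending real API calls:

```python
def batch_grounded_queries(queries, request_fn):
    """Run each query through a grounded-request function and collect
    (query, answer, sources_returned) tuples for review."""
    results = []
    for query in queries:
        response = request_fn(query)
        answer = response["choices"][0]["message"]["content"]
        results.append((query, answer, response.get("sources_returned", False)))
    return results

# Try it with a stub before wiring up the real API call:
def stub_request(query):
    return {
        "choices": [{"message": {"content": f"Stub answer for: {query}"}}],
        "sources_returned": True,
    }

for query, answer, grounded in batch_grounded_queries(
    ["What is the hiring process?", "What benefits do employees get?"],
    stub_request,
):
    print(f"grounded={grounded}  {query}")
```

Swapping the stub for `lambda q: make_publisher_grounded_request(q, access_token, "Bezalel")` runs the same loop against the live endpoint.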

Step 3: Comparing Side-by-Side

The most powerful way to demonstrate RAG’s value is to compare both approaches. Here’s how to build a comparison function:
def compare_responses(query, token, publisher_name):
    """Compare non-grounded vs publisher-grounded responses."""

    print(f"\nQuery: {query}")
    print("=" * 80)

    # Step 1: Non-grounded
    print("\n🔹 STEP 1: NON-GROUNDED Response (Generic Model Knowledge):")
    print("-" * 80)
    non_grounded = make_non_grounded_request(query, token)
    print(non_grounded['choices'][0]['message']['content'])
    print(f"\n📊 Sources used: {non_grounded.get('sources_returned', False)}")

    print("\n" + "=" * 80 + "\n")

    # Step 2: Publisher grounded
    print("🔹 STEP 2: GROUNDED on Your Publisher (Your Specific Content):")
    print("-" * 80)
    publisher_grounded = make_publisher_grounded_request(query, token, publisher_name)
    print(publisher_grounded['choices'][0]['message']['content'])
    print(f"\n📊 Sources used: {publisher_grounded.get('sources_returned', False)}")
    print(f"📊 Model: {publisher_grounded.get('model')}")

# Run comparison
compare_responses(
    "What is Bezalel Ministries' hiring process?",
    access_token,
    "Bezalel"
)

Example Output

Query: What is Bezalel Ministries' hiring process?
================================================================================

🔹 STEP 1: NON-GROUNDED Response (Generic Model Knowledge):
--------------------------------------------------------------------------------
I cannot provide specific details about the internal hiring process for Bezalel
Ministries, as that information is not publicly available...

📊 Sources used: false

================================================================================

🔹 STEP 2: GROUNDED on Your Publisher (Your Specific Content):
--------------------------------------------------------------------------------
Bezalel Ministries views its hiring process as discerning a divine calling,
modeling their approach on God's selection of Bezalel in Exodus. Their process
includes three phases: calling alignment through structured interviews about
spiritual gifts, skills assessment with portfolio reviews, and a discernment
period where candidates spend time with the team...

📊 Sources used: true
📊 Model: gloo-google-gemini-2.5-pro
Notice the difference:
  • Step 1 (Non-grounded): Generic or “I don’t have specific information”
  • Step 2 (Publisher grounded): Detailed, accurate answer from Bezalel’s actual documentation

Testing Your Implementation

This tutorial provides two testing paths:
  1. Quick Start (Recommended): Test the Bezalel example to validate your setup
  2. Your Own Content: Adapt the code to use your publisher and content

Phase 1: Quick Start with Bezalel Example

First, validate your setup works by running the provided Bezalel example.

Setup

  1. Install dependencies for your language (see cookbook READMEs)
  2. Configure .env with your credentials:
    GLOO_CLIENT_ID=your_client_id
    GLOO_CLIENT_SECRET=your_client_secret
    PUBLISHER_NAME=Bezalel
    
  3. Create the Bezalel publisher in Gloo Studio:
    • Create a new publisher named “Bezalel” (exact name, case-sensitive)
    • Upload the 5 sample content files from the cookbook’s content/ directory
    • Wait for all documents to show “Completed” status before proceeding
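If you’d rather not pull in a dependency like python-dotenv, a stdlib-only .env loader is enough for this tutorial (load_env is our own helper, not part of any Gloo SDK):

```python
import os

def load_env(path=".env"):
    """Minimal .env loader: KEY=VALUE lines; blank lines and # comments are
    ignored, and existing environment variables are not overwritten."""
    if not os.path.exists(path):
        return
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())

load_env()
client_id = os.environ.get("GLOO_CLIENT_ID")
client_secret = os.environ.get("GLOO_CLIENT_SECRET")
publisher_name = os.environ.get("PUBLISHER_NAME", "Bezalel")
```

The `setdefault` call means values exported in your shell take precedence over the .env file, which is the convention most dotenv libraries follow.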

Run the Comparison

Each language implementation includes a ready-to-run comparison demo:
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
pip install -r requirements.txt
python main.py

What to Look For

When testing with ANY publisher (Bezalel or your own):

Step 1 (Non-grounded):
  • Generic response not specific to your content
  • Example: “A typical hiring process involves…” (for any hiring query)
Step 2 (Publisher grounded):
  • Specific, accurate information from YOUR uploaded documents
  • sources_returned: true (if this is false, see troubleshooting)
  • Example: For Bezalel → “3-phase selection journey with calling alignment…”
  • Example: For your product docs → “The enterprise plan is $X/month and includes…”
If Step 2 fails or returns generic responses, see the Troubleshooting section.
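The checklist above can be encoded as a quick assertion helper. validate_grounded_result is our own name, and must_mention holds terms you expect to appear based on your documents (e.g. “calling alignment” for Bezalel):

```python
def validate_grounded_result(response, must_mention=()):
    """Return (ok, reason) for a Step 2 response: sources must have been
    used, and the answer should mention the expected terms."""
    content = response["choices"][0]["message"]["content"].lower()
    if not response.get("sources_returned", False):
        return False, "sources_returned is false"
    missing = [term for term in must_mention if term.lower() not in content]
    if missing:
        return False, f"answer missing expected terms: {missing}"
    return True, "ok"

# Stub response illustrating a passing check:
stub = {
    "choices": [{"message": {"content": "A 3-phase journey with calling alignment."}}],
    "sources_returned": True,
}
print(validate_grounded_result(stub, must_mention=["calling alignment"]))  # (True, 'ok')
```

A helper like this makes it easy to turn the comparison demo into an automated smoke test for your own publisher.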

Phase 2: Adapt to Your Own Content

Once the Bezalel example works, adapt it to your own use case.

Step 1: Prepare Your Content

Before modifying code, ensure you have:
  1. Created a Publisher in Gloo Studio
  2. Uploaded relevant documents (PDFs, text files, Word docs, etc.)
    • Upload content that answers the questions your users will ask
    • Example: HR policies for HR questions, product docs for product questions
  3. Note your Publisher name (you’ll need this for the code)
Content Quality Matters: The more relevant and well-structured your content, the better your grounded responses will be. Each document should focus on a specific topic area.

Step 2: Update the Code

Modify two areas in the code.

A. Update Environment Variables

In your .env file:
# Before
PUBLISHER_NAME=Bezalel

# After
PUBLISHER_NAME=MyCompanyDocs
B. Update Queries

Design queries that match your uploaded content:
# Before (Bezalel example)
queries = [
    "What is Bezalel Ministries' hiring process?",
    "What educational resources does Bezalel Ministries provide?",
]

# After (your content - example: e-commerce)
queries = [
    "What is the return policy for damaged items?",
    "How do I track my order?",
]

# After (your content - example: technical docs)
queries = [
    "How do I configure SSL certificates?",
    "What are the system requirements for installation?",
]

Step 3: Run and Validate

Run your modified code using the same commands from Phase 1.

Interpreting Results

✅ Success indicators:
  • Step 2 shows sources_returned: true
  • Step 2 response contains specific information from your documents
  • Step 2 is noticeably more accurate than Step 1
⚠️ If Step 2 is generic or wrong:
  • Check that your publisher name matches exactly (case-sensitive)
  • Verify content is uploaded and indexed in Studio
  • Ensure your queries are relevant to your content
  • Try increasing sources_limit (change 3 to 5 or 10)

Step 4: Refine Your Queries

If results aren’t accurate:
  1. Make queries more specific: “What is the pricing?” → “What is the enterprise plan pricing for annual subscriptions?”
  2. Match content structure: If your document is titled “Installation Guide,” ask installation questions
  3. Test incrementally: Start with one query, verify it works, then add more
  4. Check sources: If available in the API response, inspect which documents were retrieved
Iterative Testing: RAG quality improves with iteration. Test different queries, refine your content, and adjust sources_limit until you get optimal results.
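One concrete way to iterate is to sweep sources_limit and note the smallest value that retrieves sources. A sketch, with request_fn standing in for make_publisher_grounded_request (injected so it can be tried with a stub first):

```python
def sweep_sources_limit(query, request_fn, limits=(3, 5, 10)):
    """Try increasing sources_limit values; return the smallest one for which
    the grounded response actually used retrieved sources, else None."""
    for limit in limits:
        response = request_fn(query, sources_limit=limit)
        found = response.get("sources_returned", False)
        print(f"sources_limit={limit}: sources_returned={found}")
        if found:
            return limit
    return None

# Stub that only "finds" sources once the limit reaches 5:
def stub_request(query, sources_limit=3):
    return {"sources_returned": sources_limit >= 5}

best = sweep_sources_limit("What is the return policy?", stub_request)
print(f"Smallest working sources_limit: {best}")  # 5
```

Against the live API, pass `lambda q, sources_limit: make_publisher_grounded_request(q, access_token, publisher_name, sources_limit)` as request_fn.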

Working Code Sample

View Complete Code

Clone or browse the complete working examples for all 6 languages (JavaScript, TypeScript, Python, PHP, Go, Java) with setup instructions and Bezalel content files.

Next Steps

Now that you understand grounded completions:

Use Your Own Content

See Phase 2: Adapt to Your Own Content above for detailed instructions on:
  • Preparing and uploading your content
  • Updating the code for your publisher
  • Designing effective queries
  • Validating and refining your results

Build a Production Chatbot

Extend this recipe:
  • Add streaming for real-time responses
  • Implement conversation history
  • Extract and display source citations
  • Handle edge cases (no sources found, publisher not found)
  • Add user feedback collection

Explore Advanced Features

  • Adjust source limits: Use sources_limit to control how many documents are retrieved
  • Search without generation: Use the Search API for RAG without completion
  • Multi-turn conversations: Maintain context across multiple grounded queries
  • Hybrid approaches: Mix grounded and non-grounded based on query type

Troubleshooting

Publisher Not Found Error

Error: Publisher 'YourPublisher' not found
Solutions:
  • Verify publisher name is exact match (case-sensitive) to Studio
  • Confirm publisher exists at Gloo Studio
  • Check you’re using the correct account/credentials

No Sources Returned

sources_returned: false  (in grounded request)
Solutions:
  • Verify content is uploaded to the publisher in Studio
  • Check that your query is relevant to uploaded content
  • Increase sources_limit (try 5-10 for complex queries)
  • Ensure content has finished indexing in Studio

Authentication Errors

401 Unauthorized
Solutions:
  • Verify GLOO_CLIENT_ID and GLOO_CLIENT_SECRET are correct
  • Get fresh credentials from API Credentials page
  • Check token expiration—implementation auto-refreshes tokens

Generic Responses in Grounded Mode

If Step 2 (publisher grounded) responses still seem generic:
  • Check sources_returned flag—should be true for Step 2
  • Verify you’re using the rag_publisher parameter
  • Confirm the right content is uploaded to your publisher in Studio
  • Try more specific queries that match your content
  • Review content in Studio to ensure it’s indexed

Step 2 Isn’t Using My Content

If Step 2 returns sources_returned: false or generic responses, work through these checks:

Check Publisher Name:
  • Publisher names are case-sensitive: “MyPublisher” ≠ “mypublisher”
  • Verify exact name in Gloo Studio
  • Check for typos in your code or .env file
Verify Content is Indexed:
  • Log into Gloo Studio
  • Navigate to your Publisher
  • Confirm documents show “Completed” status (not “Pending”)
  • Wait 5-10 minutes after upload before testing
Query-Content Mismatch:
  • Your query might not match your content
  • Example: Asking “What is the pricing?” when you only uploaded a “User Guide”
  • Solution: Ask questions your documents can answer
Increase sources_limit:
  • Default is 3 sources
  • Try 5-10 for complex queries: sources_limit: 10
  • More sources = broader retrieval, better coverage
Content Quality Issues:
  • Scanned PDFs may have poor OCR
  • Very short documents (<100 words) may not be retrieved
  • Highly technical jargon may need more context

Additional Resources