Building a Retrieval-Augmented Generation (RAG) System to Support Your Fitness Goals

Welcome to a step-by-step guide on creating a Retrieval-Augmented Generation (RAG) system using open-source tools like Weaviate, Verba, and Ollama. In this tutorial, I’ll walk you through how I built Fit T. Cent, my personal AI assistant designed to answer fitness, wellness, and nutrition questions based on the evidence-based curriculums of the National Academy of Sports Medicine (NASM). This project serves as a study tool and a companion on my journey to becoming the world's fastest centenarian.

It’s important to note: Fit T. Cent is for ENTERTAINMENT use only!
Please consult your doctor, a medical professional, and a fitness professional before making changes to your lifestyle.

NASM’s content is proprietary, and I do not have permission to deploy or share it publicly. It would be unethical and possibly illegal to do so. This project is for entertainment purposes and complements my studies and consultations with medical and fitness professionals. If you’re building a similar project, ensure you respect the rights of content owners.


Part 1: Understanding Retrieval-Augmented Generation (RAG)

RAG combines two core technologies:

  1. Retrieval: Searches a database for the most relevant information.
  2. Generation: Uses an AI model to craft a natural-language response based on the retrieved data.

This hybrid approach ensures the answers are accurate and conversational. Fit T. Cent, for example, uses NASM’s curriculums for Certified Personal Trainers, Certified Nutrition Coaches, and Corrective Exercise Specialists to generate evidence-based responses.
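To make the retrieve-then-generate flow concrete, here's a minimal runnable sketch. Both halves are stubs of my own invention: real retrieval uses vector similarity in Weaviate (via embeddings from Ollama), and real generation calls an Ollama-hosted model; here retrieval is word overlap and generation is a template, just to show how the pieces hand off.

```python
# Minimal sketch of the RAG flow. The retrieve() and generate() stubs are
# stand-ins so the flow runs on its own; the real system uses Weaviate for
# vector search and an Ollama model for generation.

def retrieve(query, documents):
    """Pick the document with the most word overlap (stand-in for vector search)."""
    query_words = set(query.lower().split())
    def score(doc):
        return len(query_words & set(doc.lower().split()))
    return max(documents, key=score)

def generate(query, context):
    """Stand-in for the language model: template a grounded answer."""
    return f"Based on the source material: {context} (asked: {query})"

documents = [
    "A proper squat keeps the chest up, knees tracking over the toes, and heels on the floor.",
    "Protein supports muscle repair; general guidance is to spread intake across the day.",
]

context = retrieve("What is proper squat form?", documents)
print(generate("What is proper squat form?", context))
```

The key idea is that the generator only ever sees the retrieved context, which is what keeps answers grounded in the source material instead of the model's general training data.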


Part 2: Tools and Technology Overview

The project uses the following tools:

  • Weaviate: An open-source vector database to store and search the NASM curriculum.
  • Verba: A locally hosted UI to interact with the system.
  • Ollama: A local AI language model that generates natural-language responses.

Why these tools?

  • Weaviate organizes and retrieves data efficiently.
  • Verba provides a user-friendly interface.
  • Ollama ensures privacy by running AI locally.

All tools are open-source, making them accessible to anyone with a modern computer.


Part 3: Setting Up Your RAG System

Step 1: Gather Your Data

In 2021, I received access to NASM's curriculum after sharing my centenarian goal with a NASM representative who believed in my mission. If you have similar access to proprietary or personal data, ensure you use it ethically and for personal purposes.

Step 2: Install the Required Software

Tech Requirements:

  • A computer with at least 16GB RAM and a fast processor.
  • 50GB+ of free storage.
  • Docker installed.

Installation Steps:

1. Install Docker:
   Download and install Docker Desktop from Docker's website.

2. Set Up Weaviate:
   Pull the Weaviate Docker image:

     docker pull semitechnologies/weaviate

   Run Weaviate locally:

     docker run -d -p 8080:8080 semitechnologies/weaviate

3. Install Verba (requires Python >= 3.10.0):
   Verba's source is in its GitHub repository; the steps below install it from PyPI.
   1. (Very Important) Initialize a new Python environment:
      `python3 -m venv venv`
      `source venv/bin/activate`
   2. Install Verba:
      `pip install goldenverba`
   3. Launch Verba:
      `verba start`
      You may specify the --port and --host via flags; I chose to use the defaults.
   4. Access Verba:
      Visit localhost:8000
   5. (Optional) Create a .env file and add environment variables.
      I chose to use a .env file. The file I used is below; you may alter the values to fit your system.
      .env file contents:

        OLLAMA_MODEL=llama3.2:latest
        OLLAMA_EMBED_MODEL=snowflake-arctic-embed
        OLLAMA_URL=http://localhost:11434
        WEAVIATE_PORT=8081
        WEAVIATE_URL_VERBA=http://localhost:8080

      .env file explanation, line by line:

      OLLAMA_MODEL=llama3.2:latest
      Specifies the Ollama language model to use. llama3.2 is the model name, and the latest tag ensures the most recent version of that model is used.

      OLLAMA_EMBED_MODEL=snowflake-arctic-embed
      Defines the embedding model used to generate vector representations of your data for retrieval. snowflake-arctic-embed is an embedding model designed for efficient, accurate vectorization.

      OLLAMA_URL=http://localhost:11434
      Sets the URL where the Ollama service is running. http://localhost:11434 means the service is hosted locally (on the same machine) and listens on port 11434, Ollama's default.

      WEAVIATE_PORT=8081
      Specifies the port on which the Weaviate vector database operates. In this case, Weaviate will use port 8081.

      WEAVIATE_URL_VERBA=http://localhost:8080
      Indicates the URL Verba uses to reach Weaviate. http://localhost:8080 means Verba communicates with a Weaviate instance hosted locally on port 8080.

      Summary: this .env file wires together the components of your RAG system. Ollama provides the language model and embedding services, Weaviate handles vector database operations, and Verba is the user interface on top. These environment variables ensure each component is correctly configured and communicates with the others.

4. Set Up Ollama:
   1. Download and install Ollama from Ollama's website.
      Visit ollama.com and click the download button. (These steps are for macOS; they may differ on Linux and Windows.) Open the downloaded file and it will walk you through the installation process. I attached a YouTube video from Network Chuck that I used to learn the process.
   2. Connect Ollama to Verba and Weaviate.
      The .env file created earlier connects all three tools. Ensure the .env file is in the same directory where you ran the Verba and Weaviate commands.
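If you want to sanity-check your .env before launching everything, a few lines of Python can echo back the endpoints the tools will use. parse_env here is a hypothetical helper of my own, not part of Verba; the variable names and default ports match the .env shown above.

```python
# Sketch: read the .env file and print the endpoints the three tools will use.
# parse_env is a hypothetical helper, not a Verba function.
from pathlib import Path

def parse_env(text):
    """Parse KEY=VALUE lines, skipping blanks and # comments."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()
    return env

env = parse_env(Path(".env").read_text()) if Path(".env").exists() else {}
print("Ollama endpoint:  ", env.get("OLLAMA_URL", "http://localhost:11434"))
print("Weaviate endpoint:", env.get("WEAVIATE_URL_VERBA", "http://localhost:8080"))
```

Run it from the same directory as the .env file; if the file is missing it falls back to the defaults shown.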

Step 3: Import Your Data

You may use Weaviate's APIs to import your dataset. I chose to use the Verba UI; it's simpler, and the whole point of a UI is to lower the barrier to entry for software like this. Here's a short video walking through the UI, showing how to import data and tag it to limit Verba and Weaviate's search scope.
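If you do go the API route, documents are usually split into overlapping chunks before they're embedded and stored. Verba handles chunking for you when you upload through the UI; the sketch below just illustrates the idea, and the chunk_size/overlap values are illustrative rather than anything Verba- or NASM-specific.

```python
# Sketch: split a long document into overlapping word-based chunks before
# importing into a vector database. Overlap preserves context across
# chunk boundaries. Parameter values are illustrative.

def chunk_text(text, chunk_size=200, overlap=50):
    """Split text into chunks of chunk_size words, overlapping by `overlap` words."""
    words = text.split()
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(words), step):
        chunk = " ".join(words[start:start + chunk_size])
        if chunk:
            chunks.append(chunk)
        if start + chunk_size >= len(words):
            break
    return chunks
```

Each chunk (plus any tags you add) then becomes one object in the vector database, which is what the retriever searches over.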

Step 4: Test the System

Use Verba's UI to ask questions and verify the responses. For example:

• Input: "What's the proper form for a squat?"
• Response: Retrieves the relevant passages from the NASM material stored in Weaviate and generates a clear answer.
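Under the hood, a query like this becomes a prompt that bundles the retrieved chunks with the question before it reaches the Ollama model. The template below is my own wording for illustration; Verba has its own internal prompt format.

```python
# Sketch of prompt assembly: retrieved chunks + the user's question become
# one grounded prompt for the language model. Template wording is my own,
# not Verba's actual internal format.

def build_prompt(question, chunks):
    """Assemble retrieved context and the question into a grounded prompt."""
    context = "\n\n".join(f"[Source {i + 1}] {c}" for i, c in enumerate(chunks))
    return (
        "Answer the question using only the sources below. "
        "If the sources do not cover it, say so.\n\n"
        f"{context}\n\nQuestion: {question}\nAnswer:"
    )

prompt = build_prompt(
    "What's the proper form for a squat?",
    ["Keep the chest up and the heels on the floor.",
     "Knees should track in line with the toes."],
)
print(prompt)
```

The instruction to answer "using only the sources below" is what pushes the model to stay grounded in the retrieved material rather than improvising.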

Part 4: Ethical Considerations

I was excited when I got this tool to work. So excited that I started posting goofy YouTube videos and social media posts about it, and I gave the tool a puntastic name, Fit T. Cent. Nearly every response asked "Where's it deployed?" or "How do I use it?" That's when I knew I needed to explain that it will never be deployed: Fit T. Cent is only for my use. I won't deploy it because I don't own the data it uses to answer questions, nor do I have permission to deploy it. I think it's a great tool that would provide a lot of benefit to the world, so I plan on reaching out to NASM and other firms to show them what a simple tool like this could offer their communities.

Also, you should consult medical and fitness professionals before acting on the advice of a computer.

1. Respect Copyright: This project remains private because I do not own NASM's content. Similarly, if you use third-party data, ensure you comply with copyright laws.

2. Consult Professionals: Always consult certified professionals for fitness, nutrition, or medical advice. Fit T. Cent's responses are for entertainment and study purposes.


Part 5: Applications of RAG Systems

Building a RAG system on your own data opens up endless possibilities. Here are some use cases:

Personal:

• Use journals or blogs to create a personalized AI assistant.
• Study partner for exams and certifications.

Corporate:

• Provide instant customer support using internal knowledge bases.
• Train employees by answering job-specific questions.

Educational:

• Build AI tools to study and prepare for exams.
• Create curriculum companions for teaching and learning.

Family:

• Document family history and create an AI to answer related questions.

Part 6: Ready to Build Your Own?

If you're interested in creating a RAG system, I'm here to help. Visit my website AwesomeWebStore.com or text "CENT" to 833.752.8102 to get started. You can also follow my journey to becoming the world's fastest centenarian at AwesomeWebStore.com.

Get Fit & Enjoy Trying!
