My AI Backend Projects

RAG-first backend systems with reusable AI microservices and workflow coordination

Notes Memory Core API

A production-grade Go API with CRUD note management, structured logging (zerolog), metrics middleware, and Postgres integration.

View Repo | Test Live API
POST /notes
{
  "title": "Test",
  "content": "Hello world"
}

GET /notes
[
  {
    "id": 1,
    "title": "Test",
    "content": "Hello world",
    "created_at": "..."
  }
]
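
For a sense of how the structured logging fits in, here is a minimal sketch of a zerolog request-logging middleware wrapped around a net/http mux. It illustrates the pattern only; the handler, field names, and port are made up, not the repository's actual code.

package main

import (
    "net/http"
    "os"
    "time"

    "github.com/rs/zerolog"
)

// requestLogger wraps a handler and emits one structured log line per request.
func requestLogger(log zerolog.Logger, next http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        start := time.Now()
        next.ServeHTTP(w, r)
        log.Info().
            Str("method", r.Method).
            Str("path", r.URL.Path).
            Dur("duration", time.Since(start)).
            Msg("request handled")
    })
}

func main() {
    log := zerolog.New(os.Stdout).With().Timestamp().Logger()

    mux := http.NewServeMux()
    mux.HandleFunc("/notes", func(w http.ResponseWriter, r *http.Request) {
        // The real handler reads and writes notes in Postgres.
        w.WriteHeader(http.StatusOK)
    })

    if err := http.ListenAndServe(":8080", requestLogger(log, mux)); err != nil {
        log.Fatal().Err(err).Msg("server stopped")
    }
}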
    

Notes Memory Core – RAG Extension

A full Retrieval-Augmented Generation (RAG) system built with Go, pgvector, and OpenAI, featuring semantic search and a query pipeline that retrieves relevant notes and generates grounded AI responses using GPT-4o-mini and text-embedding-3-small.

View Repo | Test Live RAG API

POST /query
{
  "query": "What does my note say?"
}

RESPONSE:
{
  "query": "What does my note say?",
  "response": "... grounded AI answer ...",
  "results": [...]
}
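
Conceptually the /query pipeline is: embed the question, run a pgvector similarity search, then generate an answer grounded in the retrieved notes. Below is a condensed sketch of that flow; embedText, generateAnswer, and the notes schema are assumptions standing in for the real implementation, not the repository's code.

package rag

import (
    "context"
    "database/sql"
    "strconv"
    "strings"
)

// Hypothetical stand-ins for the OpenAI embedding and chat-completion calls.
func embedText(ctx context.Context, text string) ([]float32, error)                { panic("sketch only") }
func generateAnswer(ctx context.Context, q string, notes []string) (string, error) { panic("sketch only") }

// answerQuery sketches the /query flow: embed, retrieve with pgvector, generate.
func answerQuery(ctx context.Context, db *sql.DB, question string) (string, error) {
    // 1. Embed the question (text-embedding-3-small in the real service).
    vec, err := embedText(ctx, question)
    if err != nil {
        return "", err
    }

    // pgvector accepts a text literal like "[0.1,0.2,...]" cast to vector.
    parts := make([]string, len(vec))
    for i, v := range vec {
        parts[i] = strconv.FormatFloat(float64(v), 'f', -1, 32)
    }
    literal := "[" + strings.Join(parts, ",") + "]"

    // 2. Retrieve the closest notes by cosine distance (the <=> operator).
    rows, err := db.QueryContext(ctx,
        `SELECT content FROM notes ORDER BY embedding <=> $1::vector LIMIT 5`, literal)
    if err != nil {
        return "", err
    }
    defer rows.Close()

    var retrieved []string
    for rows.Next() {
        var content string
        if err := rows.Scan(&content); err != nil {
            return "", err
        }
        retrieved = append(retrieved, content)
    }

    // 3. Generate a grounded answer from the retrieved notes.
    return generateAnswer(ctx, question, retrieved)
}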
        

Generative AI Service

A production-deployed Generative AI microservice built in Go. This service exposes a prompt-driven API for text generation and serves as a foundational LLM capability used by other backend services.

Designed to be reusable and provider-agnostic, this service focuses purely on generation and avoids task-specific or orchestration logic.

View Repo | View Live Service
POST /generate
{
  "prompt": "Explain pgvector simply"
}

RESPONSE:
{
  "output": "...",
  "tokens_used": 279
}
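
"Provider-agnostic" here means the HTTP layer depends only on a small interface, with concrete providers plugged in behind it. A rough sketch of that shape, using illustrative names rather than the service's real types:

package genai

import (
    "context"
    "encoding/json"
    "net/http"
)

// Generator is all the HTTP layer knows about; OpenAI-backed, local, or mock
// implementations can be swapped in without touching the handler.
type Generator interface {
    Generate(ctx context.Context, prompt string) (output string, tokensUsed int, err error)
}

// mockGenerator is a stand-in provider for tests and local development.
type mockGenerator struct{}

func (mockGenerator) Generate(_ context.Context, prompt string) (string, int, error) {
    return "mock output for: " + prompt, 0, nil
}

// generateHandler validates the request and delegates to whichever Generator it was given.
func generateHandler(g Generator) http.HandlerFunc {
    return func(w http.ResponseWriter, r *http.Request) {
        var req struct {
            Prompt string `json:"prompt"`
        }
        if err := json.NewDecoder(r.Body).Decode(&req); err != nil || req.Prompt == "" {
            http.Error(w, "prompt is required", http.StatusBadRequest)
            return
        }
        out, tokens, err := g.Generate(r.Context(), req.Prompt)
        if err != nil {
            http.Error(w, "generation failed", http.StatusInternalServerError)
            return
        }
        w.Header().Set("Content-Type", "application/json")
        json.NewEncoder(w).Encode(map[string]any{"output": out, "tokens_used": tokens})
    }
}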
  

AI Embedding Microservice

An independent, scalable microservice that converts text into embedding vectors using either deterministic mock mode or OpenAI embeddings, with full validation, logging, and metrics.

View Repo | Test Embedding Service

POST /embed
{
  "text": "hello"
}

RESPONSE:
{
  "embedding": [0.123, -0.482, ...]
}
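
Deterministic mock mode means the same input always produces the same vector without calling OpenAI, which keeps tests cheap and repeatable. One way such a mock can be built, shown here as an illustration of the idea rather than the service's actual implementation:

package mockembed

import (
    "crypto/sha256"
    "encoding/binary"
    "math"
)

// mockEmbed derives a fixed-size pseudo-embedding from a SHA-256 hash of the
// input, so identical text always maps to an identical vector.
func mockEmbed(text string, dims int) []float32 {
    vec := make([]float32, dims)
    seed := sha256.Sum256([]byte(text))
    for i := range vec {
        // Re-hash the seed with the index to get enough deterministic bytes.
        h := sha256.Sum256(append(seed[:], byte(i), byte(i>>8)))
        u := binary.BigEndian.Uint32(h[:4])
        // Map the 32-bit value into [-1, 1).
        vec[i] = float32(u)/float32(math.MaxUint32)*2 - 1
    }
    return vec
}

In the real service, a configuration flag presumably selects between a mock path like this and the actual OpenAI embeddings call.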
        

AI Summary Microservice

A standalone microservice that summarizes text using either a mock response or OpenAI's GPT-4o-mini, designed for reusability, scalability, and clean microservice architecture.

View Repo | Test Summary Service

POST /summarize
{
  "text": "long text..."
}

RESPONSE:
{
  "summary": "short summary..."
}
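
When mock mode is off, the real path is a call to OpenAI's chat completions endpoint with gpt-4o-mini. A raw-HTTP sketch of that call follows; the system prompt and error handling are illustrative, and the actual service may structure this differently (for example via a client library).

package summary

import (
    "bytes"
    "context"
    "encoding/json"
    "fmt"
    "net/http"
    "os"
)

// summarize asks gpt-4o-mini for a short summary of the given text.
func summarize(ctx context.Context, text string) (string, error) {
    payload, err := json.Marshal(map[string]any{
        "model": "gpt-4o-mini",
        "messages": []map[string]string{
            {"role": "system", "content": "Summarize the user's text in a few sentences."},
            {"role": "user", "content": text},
        },
    })
    if err != nil {
        return "", err
    }

    req, err := http.NewRequestWithContext(ctx, http.MethodPost,
        "https://api.openai.com/v1/chat/completions", bytes.NewReader(payload))
    if err != nil {
        return "", err
    }
    req.Header.Set("Authorization", "Bearer "+os.Getenv("OPENAI_API_KEY"))
    req.Header.Set("Content-Type", "application/json")

    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        return "", err
    }
    defer resp.Body.Close()

    // Decode only the fields this sketch needs.
    var out struct {
        Choices []struct {
            Message struct {
                Content string `json:"content"`
            } `json:"message"`
        } `json:"choices"`
    }
    if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
        return "", err
    }
    if len(out.Choices) == 0 {
        return "", fmt.Errorf("no choices returned")
    }
    return out.Choices[0].Message.Content, nil
}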
        

Workflow / Orchestration Service

A lightweight Go service that coordinates retrieval and summarization across internal AI microservices. This project demonstrates how RAG pipelines and reusable AI services can be composed into a controlled workflow with evaluation and tracing.

This service supports the RAG system and summary service but is not required for direct API usage.

View Repo | View Live Service
POST /run
{
  "input": "What notes do I have about Docker?"
}

RESPONSE:
{
  "final_answer": "...",
  "evaluation_score": 0.6
}
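
A single run boils down to a couple of internal HTTP hops plus an evaluation step. The sketch below shows that coordination with placeholder service URLs, field names, and scoring; the real service also runs a proper evaluation and records tracing metadata.

package workflow

import (
    "bytes"
    "context"
    "encoding/json"
    "net/http"
)

// postJSON POSTs a JSON body to an internal service and decodes a JSON object back.
func postJSON(ctx context.Context, url string, body any) (map[string]any, error) {
    b, err := json.Marshal(body)
    if err != nil {
        return nil, err
    }
    req, err := http.NewRequestWithContext(ctx, http.MethodPost, url, bytes.NewReader(b))
    if err != nil {
        return nil, err
    }
    req.Header.Set("Content-Type", "application/json")
    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        return nil, err
    }
    defer resp.Body.Close()
    var out map[string]any
    err = json.NewDecoder(resp.Body).Decode(&out)
    return out, err
}

// runWorkflow coordinates one /run request: RAG query, then summarization,
// then a placeholder evaluation score.
func runWorkflow(ctx context.Context, input string) (map[string]any, error) {
    ragResp, err := postJSON(ctx, "http://rag-service/query", map[string]string{"query": input})
    if err != nil {
        return nil, err
    }
    answer, _ := ragResp["response"].(string)

    sumResp, err := postJSON(ctx, "http://summary-service/summarize", map[string]string{"text": answer})
    if err != nil {
        return nil, err
    }
    finalAnswer, _ := sumResp["summary"].(string)

    // Placeholder scoring; the real service computes an evaluation score.
    score := 0.0
    if finalAnswer != "" {
        score = 1.0
    }
    return map[string]any{"final_answer": finalAnswer, "evaluation_score": score}, nil
}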
  

Core Backend & Cloud Projects

Foundational Go and AWS projects demonstrating cloud-native backend design and reliability

Go + AWS Serverless Backend

A small Go backend running on AWS Lambda that persists data to DynamoDB and publishes asynchronous messages to SQS.

Designed to be stateless, failure-aware, and safe to restart, with clean separation between handlers, services, and AWS infrastructure.

View Repo
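
The core of that Lambda is: handle the event, persist the item to DynamoDB, then publish a message to SQS. A trimmed sketch using aws-lambda-go and aws-sdk-go-v2 follows; the table name, queue URL, and item shape are illustrative, and the real repo separates handlers, services, and infrastructure into their own packages.

package main

import (
    "context"
    "os"

    "github.com/aws/aws-lambda-go/lambda"
    "github.com/aws/aws-sdk-go-v2/aws"
    "github.com/aws/aws-sdk-go-v2/config"
    "github.com/aws/aws-sdk-go-v2/service/dynamodb"
    "github.com/aws/aws-sdk-go-v2/service/dynamodb/types"
    "github.com/aws/aws-sdk-go-v2/service/sqs"
)

type Item struct {
    ID      string `json:"id"`
    Payload string `json:"payload"`
}

type app struct {
    db *dynamodb.Client
    q  *sqs.Client
}

// handle persists the item, then publishes an asynchronous notification to SQS.
func (a *app) handle(ctx context.Context, item Item) error {
    _, err := a.db.PutItem(ctx, &dynamodb.PutItemInput{
        TableName: aws.String(os.Getenv("TABLE_NAME")),
        Item: map[string]types.AttributeValue{
            "id":      &types.AttributeValueMemberS{Value: item.ID},
            "payload": &types.AttributeValueMemberS{Value: item.Payload},
        },
    })
    if err != nil {
        return err
    }

    _, err = a.q.SendMessage(ctx, &sqs.SendMessageInput{
        QueueUrl:    aws.String(os.Getenv("QUEUE_URL")),
        MessageBody: aws.String("stored item " + item.ID),
    })
    return err
}

func main() {
    cfg, err := config.LoadDefaultConfig(context.Background())
    if err != nil {
        panic(err)
    }
    a := &app{db: dynamodb.NewFromConfig(cfg), q: sqs.NewFromConfig(cfg)}
    lambda.Start(a.handle)
}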