Smart Assist

AI • RAG Systems • Enterprise UX

A conversational AI assistant integrated into Smarteeva’s Docusaurus documentation ecosystem. It helps users instantly understand complex product concepts, find answers, and navigate technical documentation through natural-language queries. Powering this experience is a hybrid RAG architecture that combines static embeddings, dynamic index updates, and citation-backed responses for trust and accuracy.

Project Overview

Smarteeva’s product documentation spans thousands of pages — complex, dense, and spread across multiple versions. Internal teams and customers struggled to find accurate information quickly.

I designed Smart Assist, an AI-powered documentation search assistant integrated directly into the Docusaurus ecosystem. It enables natural-language queries, retrieves relevant content through a hybrid RAG pipeline, and returns citation-backed answers with source links.

The goal: Reduce time-to-answer, increase documentation adoption, and empower teams with fast, trustworthy self-service.

The Problem

Before Smart Assist

  • Users manually searched large docs, often taking several minutes

  • Keyword search returned inconsistent results

  • Support teams answered repetitive low-complexity questions

  • Users couldn’t find specific API or workflow details

  • Release notes and feature updates were buried

Core Pain Points

  • “I don’t know the exact keyword.”

  • “The answer exists somewhere, but I can’t find it quickly.”

  • “Documentation is too long and technical.”

  • “Docusaurus search doesn’t understand context.”

Business Impact

  • High support load

  • Low documentation adoption

  • Increased onboarding time

  • Internal teams losing productivity searching for answers

Users / Audience

  • Implementation specialists

  • QA engineers

  • Customers using the Smarteeva platform

  • Support teams

  • Developers and internal product stakeholders

User Needs

  • Instant, accurate AI-generated answers

  • Clear citations linked to real documentation

  • Natural-language understanding

  • Readable, scannable formatting for long answers

  • Confidence that answers are grounded in real docs

  • Fast navigation from answer → source page

Goals

  • Reduce documentation search time by ~80%

  • Improve onboarding speed for new customers

  • Provide reliable, citation-based responses

  • Increase documentation adoption across teams

  • Reduce dependence on support

Architecture Overview

Smart Assist is powered by a hybrid RAG architecture designed for documentation that changes frequently.


1. Static Documentation → OpenAI Vector Store

Stable documentation (SOPs, long-form guides, API references) is embedded once and stored in OpenAI’s vector store.
This avoids repeated reindexing and keeps retrieval for stable content fast.
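
A minimal sketch of this one-time ingestion step, assuming the OpenAI Python SDK (vector store endpoints may sit under client.vector_stores or client.beta.vector_stores depending on SDK version); the paths and store name are illustrative, not the production configuration:

```python
from pathlib import Path

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical location of stable pages, already converted from MDX to Markdown.
static_docs = sorted(Path("docs/static").glob("**/*.md"))

# Create the store once, then attach the stable documentation files to it.
vector_store = client.beta.vector_stores.create(name="smart-assist-static")
client.beta.vector_stores.file_batches.upload_and_poll(
    vector_store_id=vector_store.id,
    files=[path.open("rb") for path in static_docs],
)
print(f"Indexed {len(static_docs)} static documents into {vector_store.id}")
```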

2. Dynamic Documentation → Pinecone Vector Store

Content that changes regularly — such as release notes and versioned updates — is processed separately.

3. Automated Update Pipeline (AWS EC2)

When an admin triggers an update:

  • EC2 instance fetches updated MDX pages

  • Content is cleaned, chunked, and embedded

  • Pinecone is updated with only the changed segments

  • Reference metadata (URL, section, version) is stored

This ensures the assistant always reflects the latest documentation.
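
A simplified sketch of that update step, assuming the OpenAI and Pinecone Python SDKs; the index name, embedding model, chunk sizes, and helper names are assumptions rather than the production setup:

```python
import hashlib

from openai import OpenAI
from pinecone import Pinecone

openai_client = OpenAI()
index = Pinecone(api_key="YOUR_PINECONE_KEY").Index("smart-assist-dynamic")

def chunk_text(text: str, size: int = 1200, overlap: int = 200) -> list[str]:
    """Naive fixed-size chunking with overlap; a production pipeline would typically split on MDX structure."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, len(text), step)]

def upsert_changed_page(url: str, section: str, version: str, cleaned_mdx: str) -> None:
    """Embed one updated page and overwrite only its segments in Pinecone."""
    chunks = chunk_text(cleaned_mdx)
    embeddings = openai_client.embeddings.create(
        model="text-embedding-3-small", input=chunks
    )
    vectors = []
    for chunk, item in zip(chunks, embeddings.data):
        vectors.append({
            # Stable per-chunk ID so re-upserting a page replaces its old segments.
            "id": hashlib.sha1(f"{url}#{item.index}".encode()).hexdigest(),
            "values": item.embedding,
            # Reference metadata kept alongside the vector for later citations.
            "metadata": {"url": url, "section": section, "version": version, "text": chunk},
        })
    index.upsert(vectors=vectors)
```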

4. Unified Retrieval

User Query → Query Embedding → Parallel Retrieval

  • OpenAI Vector Store (static)

  • Pinecone Vector Store (dynamic)

Results are merged and deduplicated into a single Top-K context.
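
A hedged sketch of the merge-and-dedupe step (sequential here for brevity, where the production path queries both stores in parallel); search_static_store is a placeholder for whichever call retrieves from the OpenAI-hosted static store:

```python
from openai import OpenAI
from pinecone import Pinecone

openai_client = OpenAI()
dynamic_index = Pinecone(api_key="YOUR_PINECONE_KEY").Index("smart-assist-dynamic")

def search_static_store(query: str, top_k: int) -> list[dict]:
    """Placeholder: return static-store hits shaped like {'url': ..., 'text': ..., 'score': ...}."""
    return []

def retrieve_context(query: str, top_k: int = 8) -> list[dict]:
    """Embed the query, retrieve from both stores, then merge and deduplicate."""
    query_embedding = openai_client.embeddings.create(
        model="text-embedding-3-small", input=[query]
    ).data[0].embedding

    dynamic_hits = dynamic_index.query(
        vector=query_embedding, top_k=top_k, include_metadata=True
    )
    candidates = [
        {"url": m.metadata["url"], "text": m.metadata["text"], "score": m.score}
        for m in dynamic_hits.matches
    ] + search_static_store(query, top_k)

    # Deduplicate near-identical chunks by source URL and leading text,
    # then keep the overall Top-K by similarity score.
    seen, merged = set(), []
    for hit in sorted(candidates, key=lambda h: h["score"], reverse=True):
        key = (hit["url"], hit["text"][:80])
        if key not in seen:
            seen.add(key)
            merged.append(hit)
    return merged[:top_k]
```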

5. LLM Reasoning

Merged chunks are passed to an LLM (GPT-4/4o) with prompt constraints (sketched after this list) ensuring:

  • grounded answers

  • no hallucinations

  • accurate source citations

  • readable formatting
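
A sketch of how these constraints can be enforced through the system prompt; the wording and model choice are illustrative, not the exact production prompt:

```python
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "You are Smart Assist, a documentation assistant. Answer ONLY from the provided "
    "context chunks. If the context does not contain the answer, say so instead of "
    "guessing. Cite the source URL for every claim, and format the answer as "
    "readable Markdown."
)

def answer(query: str, context_chunks: list[dict]) -> str:
    """Generate a grounded, citation-backed answer from the merged Top-K context."""
    context = "\n\n".join(f"[Source: {c['url']}]\n{c['text']}" for c in context_chunks)
    response = client.chat.completions.create(
        model="gpt-4o",
        temperature=0,  # favour deterministic, grounded output
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {query}"},
        ],
    )
    return response.choices[0].message.content
```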

6. Response Delivery

  • Markdown output

  • Citation blocks (a formatting sketch follows this list)

  • Reference links

  • Scroll-optimized UI for long answers
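
As one example of the citation presentation, a small helper that appends a deduplicated reference block to the Markdown answer; the chunk shape and layout here are assumptions, not the production formatting spec:

```python
def append_references(answer_markdown: str, context_chunks: list[dict]) -> str:
    """Append a deduplicated reference block linking back to the source pages."""
    lines, seen = ["", "---", "**References**"], set()
    for chunk in context_chunks:
        if chunk["url"] not in seen:
            seen.add(chunk["url"])
            title = chunk.get("section") or chunk["url"]
            lines.append(f"- [{title}]({chunk['url']})")
    return answer_markdown + "\n" + "\n".join(lines)
```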

UX & Interaction Design

Key UX Decisions

  • Clean, minimal chat interface embedded within Docusaurus

  • Two-panel layout: questions on the left, answers on the right

  • High-clarity formatting for long responses

  • Example prompt suggestions for discoverability

  • Smooth scrolling and spacing for readability

  • Trust-focused citation presentation

UI States Designed

  • Empty state with onboarding prompts

  • Loading skeleton

  • Retrieval failure / fallback state

  • Answer state with citations

  • Long response scroll handling

  • Error-handling patterns for timeouts and empty retrieval (fallback logic sketched after this list)
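
The fallback and error states map onto a small amount of glue logic. This sketch assumes the retrieve_context and answer helpers from the architecture sketches above, passed in as callables; the messages and state mapping are illustrative:

```python
from typing import Callable

FALLBACK_MESSAGE = (
    "I couldn't find this in the documentation. Try rephrasing your question "
    "or browsing the docs directly."
)

def answer_with_fallback(
    query: str,
    retrieve: Callable[[str], list[dict]],        # e.g. retrieve_context from the earlier sketch
    generate: Callable[[str, list[dict]], str],   # e.g. answer from the earlier sketch
) -> str:
    """Map retrieval outcomes onto the UI states: answer, empty-retrieval fallback, or error."""
    try:
        chunks = retrieve(query)
    except Exception:
        # Timeouts and upstream failures surface as the error state.
        return "Something went wrong while searching the docs. Please try again."
    if not chunks:
        return FALLBACK_MESSAGE      # empty-retrieval fallback state
    return generate(query, chunks)   # answer state with citations
```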

The Process

1. Research & Discovery

  • Analysed documentation structure (MDX, versions)

  • Reviewed search logs to understand real queries

  • Shadowed support teams

  • Identified patterns in repeated questions

2. Concepting & Iteration

  • Explored multiple layouts and prompt-suggestion models

  • Designed early dark-mode interface for clarity

  • Built prototype to test chunk readability

  • Iterated on citation placement for trust

3. UX Design

  • Semantic search experience integrated into chat UI

  • Markdown-friendly answer pane

  • Smooth transitions for long responses

  • Mobile-responsive layout

4. Implementation Handoff

  • Provided detailed Markdown response spec

  • Chunk-to-UI formatting rules

  • Error and fallback patterns

  • Citation grouping logic

  • UI integration guidelines for Docusaurus

Impact

Quantitative

  • Search time reduced from minutes to seconds

  • Documentation adoption increased across 4 internal teams

  • Significant reduction in support escalations

  • Faster onboarding for new users

Qualitative

  • “This is the fastest way to find anything in the product.”

  • “Finally answers I can trust because of the citations.”

  • “Much better than the default Docusaurus search.”

  • “Having links back to the source page is extremely useful.”

Challenges

  • Designing for long, technical answer structures

  • Keeping responses grounded when context was missing

  • Managing noise from multiple document versions

  • Balancing minimal UI with powerful features

  • Ensuring readability across long Markdown outputs

  • Handling irregular documentation formatting

Reflection

This project strengthened my ability to design AI-first interfaces that balance accuracy, clarity, and trust. I learned to:

  • Think in retrieval-first design

  • Blend UX craft with AI system architectures

  • Design UI for long-form AI responses

  • Create trust through citations and reference mapping

  • Work across engineering, data, and product teams

Smart Assist became one of the most impactful tools for internal productivity and customer onboarding at Smarteeva.

Flowchart

(Architecture flowchart image)

Screens

(Product interface screenshots)


All rights reserved, ©2026