AI backend for ingesting case communications and policy documents and producing structured, auditable LLM-based analyses over an evidence corpus. Includes a document indexing pipeline (extraction, normalization, token-aware chunking, metadata traceability), embeddings and vector persistence for retrieval-augmented analysis, and an LLM analysis engine with prompt/context composition and JSON-schema-validated outputs. Adds async background processing with retries, optimistic locking, and crash-safe job recovery, plus secure APIs (OAuth2/JWT/JWKS) and production observability via structured logging and request correlation.
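The token-aware chunking with metadata traceability could be sketched as below. This is a minimal illustration, not the project's implementation: a whitespace split stands in for a real tokenizer, and all names and parameters (`Chunk`, `chunk_tokens`, `max_tokens`, `overlap`) are hypothetical. In practice LlamaIndex's node parsers play a similar role.

```python
# Minimal sketch of token-aware chunking with overlap and
# source-traceability metadata. A whitespace split stands in for a
# real tokenizer; names and parameters are illustrative.
from dataclasses import dataclass, field


@dataclass
class Chunk:
    text: str
    metadata: dict = field(default_factory=dict)


def chunk_tokens(text: str, doc_id: str, max_tokens: int = 200,
                 overlap: int = 20) -> list[Chunk]:
    tokens = text.split()  # stand-in tokenizer
    step = max_tokens - overlap
    chunks: list[Chunk] = []
    for i, start in enumerate(range(0, max(len(tokens), 1), step)):
        window = tokens[start:start + max_tokens]
        if not window:
            break
        chunks.append(Chunk(
            text=" ".join(window),
            # metadata keeps every chunk traceable to its source span
            metadata={"doc_id": doc_id, "chunk_index": i,
                      "token_start": start,
                      "token_end": start + len(window)},
        ))
        if start + max_tokens >= len(tokens):
            break
    return chunks
```

The overlap keeps sentences that straddle a chunk boundary retrievable from both sides, and the per-chunk metadata lets any retrieved passage be traced back to its exact source offset during analysis.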
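The optimistic-locking and crash-safe recovery pattern could look roughly like this. A hypothetical in-memory job table illustrates the compare-and-swap idea; in the real system this would presumably be a database row update guarded by a version column (e.g. `UPDATE jobs ... WHERE id = ? AND version = ?`). All names (`Job`, `try_claim`, `recover_stale_jobs`, `STALE_AFTER`) are assumptions for the sketch.

```python
# Minimal sketch of optimistic-locking job claims with crash-safe
# recovery. An in-memory dict stands in for a database table; the
# version field implements the compare-and-swap guard.
import time
from dataclasses import dataclass


@dataclass
class Job:
    id: str
    status: str = "pending"   # pending -> running -> done
    version: int = 0          # bumped on every successful state change
    attempts: int = 0         # retry counter
    heartbeat: float = 0.0    # last liveness signal from a worker


JOBS: dict[str, Job] = {}

STALE_AFTER = 30.0  # seconds without heartbeat => worker presumed dead


def try_claim(job_id: str, expected_version: int) -> bool:
    """Compare-and-swap claim: succeeds only if nobody raced us."""
    job = JOBS[job_id]
    if job.version != expected_version:
        return False  # another worker claimed it first
    job.version += 1
    job.status = "running"
    job.attempts += 1
    job.heartbeat = time.monotonic()
    return True


def recover_stale_jobs(now: float) -> list[str]:
    """Requeue running jobs whose worker stopped heartbeating."""
    recovered = []
    for job in JOBS.values():
        if job.status == "running" and now - job.heartbeat > STALE_AFTER:
            job.status = "pending"
            job.version += 1  # invalidate the dead worker's claim
            recovered.append(job.id)
    return recovered
```

Bumping the version on recovery ensures a crashed-then-resumed worker cannot complete a job it no longer owns: its stale version fails the compare-and-swap.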
Start: Dec. 2025
Tech: Python, LlamaIndex, RAG pipeline, Document Indexing, Vector Store (pgvector), Windsurf, Codex App