textrawl
by Jeff Green
Getting Started

Installation

Detailed installation guide for textrawl

System Requirements

Requirement | Minimum | Recommended
----------- | ------- | -----------
Node.js     | 22.0.0  | 22.x LTS
pnpm        | 9.0.0   | 9.x
RAM         | 512 MB  | 1 GB+
Disk        | 100 MB  | 1 GB+
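Before installing, you can confirm the tool versions above are met. This is a minimal sketch; `meets_min` is a hypothetical helper that compares versions with `sort -V`:

```shell
meets_min() {
  # true when version $1 >= minimum $2 (sort -V orders version strings)
  [ "$(printf '%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# example: a Node.js 22.3.0 install satisfies the 22.0.0 minimum
meets_min "22.3.0" "22.0.0" && echo "node ok"
# in practice: meets_min "$(node --version | tr -d v)" "22.0.0"
```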

Installation Methods

From Source

# Clone the repository
git clone https://github.com/jeffgreendesign/textrawl.git
cd textrawl
 
# Install dependencies
pnpm install
 
# Run interactive setup
pnpm run setup
 
# Start development server
pnpm run dev

Docker

# Clone the repository
git clone https://github.com/jeffgreendesign/textrawl.git
cd textrawl
 
# Create .env file
cp .env.example .env
# Edit .env with your credentials
 
# Build and run
docker compose up -d

Docker (Local Stack)

For a fully self-hosted setup with local PostgreSQL and Ollama:

# Start PostgreSQL + pgvector
docker compose -f docker-compose.local.yml up -d
 
# Optionally start Ollama for local embeddings
docker compose -f docker-compose.local.yml --profile ollama up -d
 
# Optionally start pgAdmin for database management
docker compose -f docker-compose.local.yml --profile admin up -d

Database Setup

textrawl requires PostgreSQL with the pgvector extension. Supabase provides this out of the box.

Option 1: Supabase

  1. Create a project at supabase.com
  2. Go to the SQL Editor
  3. Run the schema setup:
-- For OpenAI embeddings (1536 dimensions)
-- Copy contents of scripts/setup-db.sql
 
-- For Ollama embeddings (1024 dimensions)
-- Copy contents of scripts/setup-db-ollama.sql
  4. Run security hardening:
-- Copy contents of scripts/security-rls.sql

Option 2: Self-Hosted PostgreSQL

# Using docker-compose.local.yml
docker compose -f docker-compose.local.yml up -d
 
# Connect to database
psql postgresql://textrawl:textrawl@localhost:5432/textrawl
 
-- Inside psql: enable pgvector
CREATE EXTENSION IF NOT EXISTS vector;
 
-- Run the setup SQL files
\i scripts/setup-db.sql
\i scripts/security-rls.sql
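Whichever option you use, a quick sanity check confirms pgvector is active. A sketch to run in the SQL Editor or psql:

```sql
-- confirm the extension is installed and accepts vector literals
SELECT extversion FROM pg_extension WHERE extname = 'vector';
SELECT '[1,2,3]'::vector;
```

If the cast fails, the extension is not enabled in the database you are connected to.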

Environment Configuration

Copy the example and configure:

cp .env.example .env

Required Variables

# Supabase connection
SUPABASE_URL=https://your-project.supabase.co
SUPABASE_SERVICE_KEY=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...
 
# OpenAI for embeddings
OPENAI_API_KEY=sk-...

Optional Variables

# Server configuration
PORT=3000
LOG_LEVEL=info
 
# Authentication (required for production)
API_BEARER_TOKEN=your-secure-token-min-32-chars
 
# CORS (comma-separated origins)
ALLOWED_ORIGINS=http://localhost:3000
 
# Alternative: Use Ollama instead of OpenAI
EMBEDDING_PROVIDER=ollama
OLLAMA_BASE_URL=http://localhost:11434
OLLAMA_MODEL=nomic-embed-text
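For `API_BEARER_TOKEN`, any sufficiently long random string works. A sketch that generates a 64-hex-character token, assuming `openssl` is installed:

```shell
# 32 random bytes -> 64 hex characters, comfortably over the 32-char minimum
token=$(openssl rand -hex 32)
echo "token length: ${#token}"
```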

Verify Installation

# Check health endpoint
curl http://localhost:3000/health
 
# Expected response:
# {"status":"ok","timestamp":"2025-01-01T00:00:00.000Z"}
 
# Check readiness (includes database)
curl http://localhost:3000/health/ready
 
# Test MCP Inspector
pnpm run inspector
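In scripts, you can check the health response without extra tooling. This sketch matches on the `"status":"ok"` field; the canned `response` below stands in for a real `curl -s http://localhost:3000/health` call:

```shell
# example payload; in practice: response=$(curl -s http://localhost:3000/health)
response='{"status":"ok","timestamp":"2025-01-01T00:00:00.000Z"}'

case "$response" in
  *'"status":"ok"'*) echo "healthy" ;;
  *) echo "unhealthy" ;;
esac
```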
