Profile & Settings

Account
?
Unknown
No details saved yet

Server (Permanent)

Enter your Ollama server URL and click Pin to lock it permanently.


AI Persona

You can also just tell the AI "I want to call you Bill" in chat and it will update automatically.

You can also say things like "from now on always be concise" in chat and it will be added automatically.

Model Defaults

If set, this model is automatically selected when you connect. Connect first to populate the list.


Basic Information
About You
Memory
Enable memory
The AI extracts and remembers full-sentence facts from your conversations.
Memory Tags

Full-sentence facts the AI always remembers — extracted automatically or added manually.


System Prompt Preview

Prepended to every conversation so the AI knows who you are.
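As a rough illustration of how a persona and memory facts could be combined into the system message that is prepended to each conversation, here is a minimal Python sketch. The function name and the exact formatting are hypothetical, not FDEV Chat's actual implementation:

```python
def build_system_prompt(persona: str, memory_facts: list[str]) -> str:
    """Combine the AI persona with remembered facts into one system message.

    This assembly logic is an assumption for illustration; the app's real
    prompt format may differ.
    """
    lines = [persona, "Things you know about the user:"]
    lines += [f"- {fact}" for fact in memory_facts]
    return "\n".join(lines)

# The assembled prompt would lead the message list sent to the model:
messages = [
    {"role": "system", "content": build_system_prompt(
        "You are Bill, a concise assistant.",
        ["The user's name is Alex.", "The user prefers short answers."])},
    {"role": "user", "content": "Hi!"},
]
print(messages[0]["content"])
```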


Documentation

Quick Start Guide

FDEV Chat is a premium frontend for local AI models and external LLMs. To chat privately on your own machine, you need a local backend running.

  • 1. Install Ollama: Download from ollama.com, or host it yourself with Docker:
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
  • 2. Run a model: Open your terminal and run ollama run llama3.2 to download a model.
  • 3. Connect: In FDEV Chat, make sure the server URL is set (default localhost:11434) and click Connect to interface with your local AI.
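To verify that your server is reachable, you can query Ollama's GET /api/tags endpoint, which lists installed models. A minimal Python sketch (the response shape follows Ollama's public REST API; the helper name is our own):

```python
import json

def list_model_names(tags_response: str) -> list[str]:
    """Extract model names from an Ollama GET /api/tags JSON response."""
    data = json.loads(tags_response)
    return [m["name"] for m in data.get("models", [])]

# A live check against a local server would look like:
#   import urllib.request
#   body = urllib.request.urlopen("http://localhost:11434/api/tags").read()
#   print(list_model_names(body))

sample = '{"models": [{"name": "llama3.2:latest"}, {"name": "mistral:latest"}]}'
print(list_model_names(sample))  # ['llama3.2:latest', 'mistral:latest']
```

If this prints your models, the Connect button in FDEV Chat should succeed against the same URL.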

Data Privacy & Security

Your data privacy is paramount. FDEV Chat operates in two modes:

Local Mode (Private)

If you do not sign in, all conversations, settings, and memory are stored in your browser's Local Storage. Nothing leaves your device except requests to your own local Ollama server.

Cloud Sync (Optional)

Sign in with Google to sync your history and AI memory across devices. Data is stored in Google Firestore, secured by Firebase Security Rules. Only your authenticated account can access your data.
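A per-user access rule of the kind described above is typically expressed like this in Firestore's rules language. This is a generic sketch, not FDEV Chat's actual ruleset; the collection path is an assumption:

```
rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    // Hypothetical layout: each user's data lives under users/{userId}.
    match /users/{userId}/{document=**} {
      // Only the signed-in owner may read or write their own documents.
      allow read, write: if request.auth != null && request.auth.uid == userId;
    }
  }
}
```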

Supported Models

We support all models available via Ollama (Llama 3, Mistral, Gemma, etc.) and are adding external providers. You can pull new local models directly from the interface:

  • Open the model dropdown in the top bar.
  • Type a model name (e.g., mistral, gemma, llava) in the "Pull" field.
  • Click Pull and wait for the download to complete.
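Under the hood, pulling a model streams newline-delimited JSON status events from Ollama's POST /api/pull endpoint. A small Python sketch of how such events can be rendered into progress text (the event fields follow Ollama's public API; the function name is our own):

```python
import json

def pull_progress(line: str) -> str:
    """Render one status line from Ollama's streaming POST /api/pull response."""
    evt = json.loads(line)
    if "total" in evt and "completed" in evt:
        pct = 100 * evt["completed"] // evt["total"]
        return f'{evt["status"]}: {pct}%'
    return evt.get("status", "")

print(pull_progress('{"status": "pulling manifest"}'))
print(pull_progress('{"status": "downloading", "completed": 50, "total": 200}'))
```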

Vision Support: Models like llava or moondream allow you to drag and drop images into the chat for analysis.
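When you drop an image into the chat, it is sent to the model as base64 data. Ollama's POST /api/chat accepts an "images" array on a user message; a minimal Python sketch of building such a request body (the helper name is our own, and the payload shape follows Ollama's public API):

```python
import base64
import json

def vision_payload(model: str, prompt: str, image_bytes: bytes) -> str:
    """Build a JSON body for Ollama's POST /api/chat with an attached image."""
    return json.dumps({
        "model": model,
        "messages": [{
            "role": "user",
            "content": prompt,
            # Ollama expects raw base64 strings, one per attached image.
            "images": [base64.b64encode(image_bytes).decode("ascii")],
        }],
        "stream": False,
    })

body = vision_payload("llava", "What is in this picture?", b"\x89PNG...")
print(json.loads(body)["model"])  # llava
```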

External Providers Coming Soon

We are integrating external API providers to allow you to use powerful cloud-based models alongside your private local ones.

  • OpenAI: Connect your API key to use GPT-4o.
  • Anthropic: Support for Claude 3.5 Sonnet.
  • Google Gemini: Integration with Gemini Pro models.

These features will allow you to switch seamlessly between local privacy and cloud performance.

Server is pinned. Change it in Settings → Server.
SERVER
New conversation
👁 Vision
no model

Ready to chat

Connect to Ollama or external providers. Select a model like Llama 3 or Mistral and start chatting.

Features: Long-term memory, vision analysis, document chat, and secure sync.