Move BizNode from SQLite to PostgreSQL for production scale, concurrent bots, and VPS hosting — without losing a single row of data.
BizNode ships with SQLite because it works out of the box with zero configuration. For most single-operator deployments it is the right choice. Switch to PostgreSQL when you hit these conditions:
| Situation | SQLite | PostgreSQL |
|---|---|---|
| Single bot on one machine | Ideal | Overkill |
| Multiple bot containers simultaneously | Locking errors possible | Required |
| Hosted on a VPS / cloud server | Works | Recommended |
| 10,000+ rows per table | Works but slower queries | Better |
| Managed DB (Supabase, Neon, Railway) | Not compatible | Supported |
| pg_dump backups / replication | Not available | Built-in |
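The concurrency row is the usual trigger. A minimal, self-contained sketch of the failure mode — two writers on one SQLite file, nothing BizNode-specific assumed:

```python
import os
import sqlite3
import tempfile

# Two writers on one SQLite file: the second hits the same
# "database is locked" error that concurrent bot containers would.
path = os.path.join(tempfile.mkdtemp(), "demo.db")

# isolation_level=None puts the connections in autocommit mode so the
# explicit BEGIN IMMEDIATE below controls the transactions directly.
writer1 = sqlite3.connect(path, timeout=0.1, isolation_level=None)
writer2 = sqlite3.connect(path, timeout=0.1, isolation_level=None)

writer1.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, msg TEXT)")
writer1.execute("BEGIN IMMEDIATE")  # writer1 takes the write lock

locked_error = None
try:
    writer2.execute("BEGIN IMMEDIATE")  # cannot acquire the lock in time
except sqlite3.OperationalError as err:
    locked_error = err
finally:
    writer1.execute("COMMIT")

print(locked_error)  # → database is locked
```

PostgreSQL handles this with row-level locking and MVCC, so multiple writers simply queue instead of erroring out.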
Complete every item before starting the migration.
- Install the PostgreSQL driver: `pip install psycopg2-binary`
- Have the dashboard reachable at http://localhost:7777 — you will update settings there in Step 6.
- Copy `memory/biznode.db` to a safe location. The migration script never deletes SQLite data, but have a copy before you start.
- Stop the containers: `docker compose stop bot dashboard`. This prevents writes to SQLite while you migrate.
Connect to your PostgreSQL server and create the database and user. Open a terminal and run:
```shell
# Connect as the postgres superuser
psql -U postgres
```

```sql
-- Inside psql:
CREATE DATABASE biznode;
CREATE USER biznode_user WITH PASSWORD 'choose_a_strong_password';
GRANT ALL PRIVILEGES ON DATABASE biznode TO biznode_user;

-- PostgreSQL 15+ also requires this:
\c biznode
GRANT ALL ON SCHEMA public TO biznode_user;
\q
```
Alternatively, use a GUI client (e.g. pgAdmin): create a `biznode` database and a `biznode_user` user, set a password, then open the `biznode` database → Properties → Security → add `biznode_user` with ALL privileges.

For a managed service (Supabase, Neon, Railway): create a new project, then copy the connection details — host, port, database name, user, password — you will use them in Step 5.
The migration script calls init_db(), which creates all tables on the PostgreSQL database before inserting any data.
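As a hedged sketch of that pattern (illustrative table name, demonstrated with SQLite — the same `CREATE TABLE IF NOT EXISTS` idea applies on PostgreSQL; BizNode's real `init_db()` lives in its codebase):

```python
import sqlite3

# Schema statements are idempotent: CREATE TABLE IF NOT EXISTS makes
# init_db() safe to call at every startup, not just the first one.
SCHEMA = [
    "CREATE TABLE IF NOT EXISTS agent_identity"
    " (id INTEGER PRIMARY KEY, name TEXT)",
]

def init_db(conn):
    for stmt in SCHEMA:
        conn.execute(stmt)

conn = sqlite3.connect(":memory:")
init_db(conn)
init_db(conn)  # calling again is a no-op, not an error

tables = [row[0] for row in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table'")]
print(tables)  # → ['agent_identity']
```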
Always take a full backup before any migration. Two options:
1. **From the dashboard** — open http://localhost:7777 and create a backup; it is written to `backups/`.
2. **Manual copy** (Windows PowerShell, from your BizNode folder):

```powershell
Copy-Item memory\biznode.db memory\biznode.db.pre-migration-backup

# Or copy to an external location
Copy-Item memory\biznode.db "C:\Backups\biznode-$(Get-Date -f yyyyMMdd-HHmmss).db"
```
Run `docker compose stop bot` before backing up to get a clean snapshot.
BizNode includes scripts/migrate_to_postgresql.py — it reads every table from SQLite and inserts the data into PostgreSQL. It is safe to run multiple times (uses ON CONFLICT DO NOTHING).
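The idempotency can be seen in miniature. SQLite 3.24+ shares the `ON CONFLICT DO NOTHING` syntax, so this self-contained sketch (illustrative table name) shows why a second run inserts nothing new:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE agent_identity (id INTEGER PRIMARY KEY, name TEXT)")

rows = [(1, "alpha"), (2, "beta")]
for _ in range(2):  # run the "migration" twice
    conn.executemany(
        "INSERT INTO agent_identity (id, name) VALUES (?, ?) "
        "ON CONFLICT DO NOTHING",  # existing rows are silently skipped
        rows,
    )

count = conn.execute("SELECT COUNT(*) FROM agent_identity").fetchone()[0]
print(count)  # → 2: the second run added nothing
```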
```shell
cd "C:\path\to\your\BizNode"
python scripts/migrate_to_postgresql.py --dry-run
```
Expected output — shows what will be migrated without writing anything:
```shell
python scripts/migrate_to_postgresql.py ^
  --pg-host localhost ^
  --pg-port 5432 ^
  --pg-db biznode ^
  --pg-user biznode_user ^
  --pg-password your_password
```
Expected output:
Alternatively, pass the connection settings as environment variables:

```shell
set DB_TYPE=postgresql
set DB_HOST=localhost
set DB_NAME=biznode
set DB_USER=biznode_user
set DB_PASSWORD=your_password
python scripts/migrate_to_postgresql.py
```
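One plausible way a script resolves those variables — a hypothetical helper for illustration; the real parameter handling lives in `scripts/migrate_to_postgresql.py`:

```python
import os

def pg_params_from_env():
    """Assemble PostgreSQL connection parameters from the environment.

    Hypothetical sketch: variable names mirror the CLI flags above,
    with non-secret values falling back to sensible defaults.
    """
    return {
        "host": os.environ.get("DB_HOST", "localhost"),
        "port": int(os.environ.get("DB_PORT", "5432")),
        "dbname": os.environ.get("DB_NAME", "biznode"),
        "user": os.environ.get("DB_USER", "biznode_user"),
        "password": os.environ["DB_PASSWORD"],  # no safe default for secrets
    }

os.environ["DB_PASSWORD"] = "your_password"
print(sorted(pg_params_from_env()))  # → ['dbname', 'host', 'password', 'port', 'user']
```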
Tell BizNode to connect to PostgreSQL instead of SQLite through the Setup section.
Navigate to http://localhost:7777
In the left navigation, click Setup. Scroll to the Database group.
Update each field:
- `DB_TYPE` = `postgresql`
- `DB_HOST` = `localhost` (or your VPS IP / managed service hostname)
- `DB_PORT` = `5432`
- `DB_NAME` = `biznode`
- `DB_USER` = `biznode_user`
- `DB_PASSWORD` = `your_password`
The settings are written to your .env file immediately.
BizNode reads DB_TYPE at startup. Saving in the dashboard writes to .env — you must restart the containers in Step 7 for the change to take effect.
```shell
cd "C:\path\to\your\BizNode"
docker compose up -d --force-recreate
```
The --force-recreate flag ensures containers re-read the updated .env values.
```shell
docker compose logs --tail=40 dashboard
docker compose logs --tail=40 bot
```
Look for a clean startup. If you see psycopg2.OperationalError the credentials in .env don't match PostgreSQL. Double-check host, port, user, and password.
To test the connection directly from the host, run:

```shell
python -c "
import os
os.environ['DB_TYPE'] = 'postgresql'
os.environ['DB_HOST'] = 'localhost'
os.environ['DB_NAME'] = 'biznode'
os.environ['DB_USER'] = 'biznode_user'
os.environ['DB_PASSWORD'] = 'your_password'
from memory.db_adapter import get_connection
conn = get_connection()
cur = conn.cursor()
cur.execute('SELECT COUNT(*) as n FROM agent_identity')
print('agent_identity rows:', cur.fetchone())
conn.close()
print('PostgreSQL connection: OK')
"
```
Open Telegram and send /start to your bot. A response confirms the bot is connected to PostgreSQL and running correctly.
python scripts/migrate_to_postgresql.py --dry-run
Run the dry-run again — it now reads from PostgreSQL (since DB_TYPE=postgresql is set) and will show current row counts for confirmation.
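A hedged sketch of that cross-check — comparing row counts table-by-table. It uses two SQLite connections so it runs anywhere; swapping `dst` for a psycopg2 connection compares against PostgreSQL instead (table name illustrative):

```python
import sqlite3

def count_rows(conn, table):
    # Uses a DB-API cursor, so it works with both sqlite3 and psycopg2.
    cur = conn.cursor()
    cur.execute(f"SELECT COUNT(*) FROM {table}")
    return cur.fetchone()[0]

# Stand-ins for the source (SQLite) and destination (PostgreSQL) databases.
src = sqlite3.connect(":memory:")
dst = sqlite3.connect(":memory:")
for conn in (src, dst):
    conn.execute("CREATE TABLE agent_identity (id INTEGER PRIMARY KEY)")
    conn.executemany("INSERT INTO agent_identity VALUES (?)",
                     [(1,), (2,), (3,)])

tables_to_check = ["agent_identity"]
mismatches = [t for t in tables_to_check
              if count_rows(src, t) != count_rows(dst, t)]
print(mismatches)  # → []: every table has matching row counts
```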
If you want PostgreSQL to run as a Docker container alongside BizNode instead of a separate installation, add this service to your docker-compose.yml:
```yaml
  postgres:
    image: postgres:16-alpine
    environment:
      POSTGRES_DB: biznode
      POSTGRES_USER: biznode_user
      POSTGRES_PASSWORD: your_password
    volumes:
      - ./memory/postgres:/var/lib/postgresql/data
    ports:
      - "5432:5432"
    restart: unless-stopped
```
Then add postgres to the depends_on list of the bot and dashboard services:
```yaml
  dashboard:
    depends_on:
      - qdrant
      - ollama
      - postgres  # ← add this

  bot:
    depends_on:
      - qdrant
      - ollama
      - postgres  # ← add this
```
When PostgreSQL runs as a compose service, the host is not `localhost`. Set:

`DB_HOST=postgres`

because `localhost` inside a container refers to that container itself, not the host machine or other containers.
Then start everything:
```shell
docker compose up -d
```
PostgreSQL data is persisted in memory/postgres/ which is mounted as a volume — it survives container restarts and upgrades.
If anything goes wrong, reverting to SQLite takes under a minute.
Change DB_TYPE back to sqlite. Click Save Settings.
Or edit `.env` directly: `DB_TYPE=sqlite`. Then:

```shell
docker compose up -d --force-recreate
```
The original memory/biznode.db file was never modified.
All your original data is intact.
- **Is re-running the migration safe?** The script uses `ON CONFLICT DO NOTHING` for every INSERT. Rows that already exist in PostgreSQL are silently skipped, so running it again after a partial failure is safe.
- **What about Qdrant vector data?** It lives in `memory/qdrant/` on disk — completely separate from SQLite — and is unaffected by this migration. If you move to a different machine, copy the `memory/qdrant/` folder separately or re-seed using the seeder scripts.
- **Auto-increment IDs out of sync?** Reset a table's sequence with: `SELECT setval(pg_get_serial_sequence('table_name', 'id'), COALESCE(MAX(id), 1)) FROM table_name;`
- **Managed databases:** set `DB_HOST` to the managed service's hostname and use the provided port, database name, user, and password. The adapter uses standard psycopg2 — compatible with any PostgreSQL 14+ server.
- **Can both backends run at once?** No. `DB_TYPE` is read at module import time by `memory/db_adapter.py`, and all connections use one backend at a time. Pick one and restart to switch.
- **Future schema changes:** `init_db()` runs at every startup using `CREATE TABLE IF NOT EXISTS`, so new tables from new releases are created automatically in PostgreSQL, just as they were in SQLite.
- **MySQL instead of PostgreSQL?** Install `pymysql` instead of `psycopg2-binary`, create the MySQL database, and set `DB_TYPE=mysql`. The migration script works the same way since it uses raw psycopg2 or raw sqlite3 for connections, not the adapter.

Your node is now running on PostgreSQL. No data was lost, and rollback is always available.