Migrating a Moodle database from MySQL to PostgreSQL requires careful sequencing: installing PostgreSQL and pgloader, creating the target database, and handling data type conversions. This guide covers the practical workflow including pitfalls with Ubuntu's pgloader package and PostgreSQL restore procedures.

Migrating from MySQL to PostgreSQL

Install PostgreSQL

Add the following packages:

apt-get install postgresql
apt-get install php8.1-pgsql
apt-get install pgloader

Please check the actual PHP version your Moodle or Totara website is using, and adjust the php*-pgsql package name accordingly.
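If you are unsure which version to target, a sketch like the following derives it from the CLI PHP (this assumes the CLI PHP matches the PHP your web server uses, which is usually but not always the case):

```shell
# Sketch: derive the PHP version and install the matching pgsql extension.
# Assumption: CLI PHP matches the web PHP; verify with phpinfo() if in doubt.
PHP_VER=$(php -r 'echo PHP_MAJOR_VERSION . "." . PHP_MINOR_VERSION;')
sudo apt-get install "php${PHP_VER}-pgsql"
```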

Install pgloader

The version of pgloader that comes with Ubuntu 22.04 (through apt-get) did not work for me: it failed with an error for which I found no solution.

Instead, I compiled pgloader from source as explained here:

sudo apt install sbcl unzip libsqlite3-dev gawk curl make freetds-dev libzip-dev
curl -fsSLO https://github.com/dimitri/pgloader/archive/v3.6.10.tar.gz
tar -xvf v3.6.10.tar.gz
cd pgloader-3.6.10/
make pgloader
sudo mv ./build/bin/pgloader /usr/local/bin/

And finally check whether the installation worked:

root@localhost:/home/sandbox/pgloader-3.6.10# pgloader --version
pgloader version "3.6.7~devel"
compiled with SBCL 2.1.11.debian

Configure PostgreSQL

Connect to the database server:

sudo -u postgres psql

Set a password for the postgres user:

ALTER USER postgres PASSWORD '<password>';

Replace the <password> with the one you want. Then quit with:

\q

(From: Install PostgreSQL Linux)

Create Database

On the command prompt (so, not in the psql client):

sudo -u postgres createdb -E utf8 {dbname}

Make sure that MySQL is using:

default-authentication-plugin=mysql_native_password

(You can specify that in /etc/mysql/mysql.cnf under the [mysqld] section. Simply add that section if it doesn’t exist. Don’t forget to restart the MySQL server: service mysql restart).

Also make sure that the MySQL user is using this authentication method:

ALTER USER 'sandbox'@'localhost' IDENTIFIED WITH mysql_native_password BY 'secretpassword';
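To confirm the change took effect, you can query the plugin column of the user table (a sketch; the user name is an example from above):

```shell
# Check which authentication plugin each MySQL account uses.
mysql -u root -p -e \
  "SELECT user, host, plugin FROM mysql.user WHERE user = 'sandbox';"
```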

Migrate MySQL Database to PostgreSQL

Use this command to do the actual migration (which is really a copy & convert, since the original MySQL database is left intact):

pgloader mysql://sandbox:passwd@127.0.0.1/sample_db pgsql://postgres:passwd@localhost/sampledb
Note that in practice the prefetch option turned out to be necessary (see “Issue: Heap exhausted” below):

pgloader --with "prefetch rows = 1000" mysql://sandbox:passwd@127.0.0.1/sample_db pgsql://postgres:passwd@localhost/sampledb

To see if it worked, use the following command to list all tables:

\dt

Issue: Heap exhausted
2024-03-22T13:02:33.040001Z LOG pgloader version "3.6.7~devel"
2024-03-22T13:02:33.188004Z LOG Migrating from #<MYSQL-CONNECTION mysql://sandbox@127.0.0.1:3306/fabrikamlive {10072891C3}>
2024-03-22T13:02:33.188004Z LOG Migrating into #<PGSQL-CONNECTION pgsql://postgres@localhost:5432/fabrikamsandbox {10074545A3}>
Heap exhausted during garbage collection: 0 bytes available, 32 requested.
Gen  Boxed   Code    Raw  LgBox LgCode  LgRaw  Pin       Alloc     Waste        Trig      WP GCs Mem-age
fatal error encountered in SBCL pid 859879 tid 859888:
GC invariant lost, file "gencgc.c", line 488

Solved this particular issue by adding this:

--with "prefetch rows = 1000"

So:

pgloader --with "prefetch rows = 1000" mysql://sandbox:passwd@127.0.0.1/sample_db pgsql://postgres:passwd@localhost/sampledb

Issue: out of shared memory

I subsequently ran into:

ERROR Database error 53200: out of shared memory
HINT: You might need to increase max_locks_per_transaction.

It turns out that this is a PostgreSQL setting which can be adjusted here:

vim /etc/postgresql/14/main/postgresql.conf

I set it to:

max_locks_per_transaction = 128

Restart the PostgreSQL server:

service postgresql restart
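The conf edit can also be scripted. Here is a sketch that forces the setting whether the line is commented out or already present; it is demonstrated on a temporary file, and for real use you would point CONF at /etc/postgresql/14/main/postgresql.conf:

```shell
# Demo on a temp file; use the real postgresql.conf path in production.
CONF=$(mktemp)
echo '#max_locks_per_transaction = 64    # min 10' > "$CONF"
# Uncomment the line if needed and force the value to 128.
sed -i -E 's/^#?[[:space:]]*max_locks_per_transaction[[:space:]]*=.*/max_locks_per_transaction = 128/' "$CONF"
grep '^max_locks_per_transaction' "$CONF"
```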

This seems to have worked:

fabrikamlive.mdl_workshopform_rubric_config          0          0                    11.144s
                                       fabrikamlive.mdl_workspace          0          0                    11.072s
                        fabrikamlive.mdl_workspace_member_request          0          0                    11.084s
------------------------------------------------------------------  ---------  ---------  ---------  --------------
                                           COPY Threads Completion          0          4                  28m1.160s
                                                    Create Indexes          0       4334               3h59m42.459s
                                            Index Build Completion          0       4334                     0.164s
                                                   Reset Sequences          0        915                     2.876s
                                                      Primary Keys          0        915                     0.792s
                                               Create Foreign Keys          0        148                     0.424s
                                                   Create Triggers          0          0                     0.000s
                                                   Set Search Path          0          1                     0.000s
                                                  Install Comments          0        747                     0.264s
------------------------------------------------------------------  ---------  ---------  ---------  --------------
                                                 Total import time             86727644    20.9 GB    4h27m48.138s

Issue 20240323

This should select columns on fabrikamsandbox, but doesn’t:

SELECT a.attnum, a.attname AS field, t.typname AS type, a.attlen, a.atttypmod, a.attnotnull, a.atthasdef, pg_get_expr(d.adbin, d.adrelid) AS adsrc FROM pg_catalog.pg_class c JOIN pg_catalog.pg_namespace as ns ON ns.oid = c.relnamespace JOIN pg_catalog.pg_attribute a ON a.attrelid = c.oid JOIN pg_catalog.pg_type t ON t.oid = a.atttypid LEFT JOIN pg_catalog.pg_attrdef d ON (d.adrelid = c.oid AND d.adnum = a.attnum) WHERE relkind = 'r' AND c.relname = 'mdl_context' AND c.reltype > 0 AND a.attnum > 0 AND (ns.nspname = current_schema() OR ns.oid = pg_my_temp_schema()) ORDER BY a.attnum;

It looks like pgloader does not populate the pg_catalog tables.

20240415

But in reality, the solution is very simple: in PostgreSQL, each database contains one or more schemas, and by default objects live in the schema named public. If you don’t use the fully qualified name of a database object, such as a table, PostgreSQL resolves the name through the search_path, which implicitly points at the public schema.

Now, pgloader places the imported tables in a new schema named after the source database. And since Totara does not explicitly mention a schema name, none of the Totara queries will work.

The solution is to connect to the database and then to simply rename the schema created by pgloader to ‘public’:

alter schema public rename to original_public;
alter schema fabrikamlive rename to public;

We can do this, because the PostgreSQL documentation states: “There is nothing special about the public schema except that it exists by default.”
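After the rename, it is worth verifying what psql now sees (a sketch; the database name is an example):

```shell
# List the schemas and show the effective search_path after the rename.
sudo -u postgres psql -d fabrikamsandbox -c '\dn' -c 'SHOW search_path;'
```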

Dump PostgreSQL Database

sudo -u postgres pg_dump fabrikamsandbox > fabrikamsandbox.20240329.sql

Here’s the canonical command:

pg_dump -h SRC_HOST -U SRC_USER -d SRC_DB \
  --format=plain \
  --no-owner --no-acl \
  --encoding=UTF8 \
  --blobs \
  | gzip > /path/to/src_db.sql.gz

--no-acl means “don’t include access control lists (privileges/grants) in the dump.” We don’t want owners and privileges/grants in the dump, because they cause problems when restoring if the same owner and privileges are not in place on the target.

Restore PostgreSQL Database

sudo -u postgres createdb -E utf8 fabrikamlive
sudo -u postgres psql fabrikamlive < fabrikamsandbox.20240329.sql

Or, with a dump that has no owners and privileges/grants (same role and commands as in the SOP further down):

CREATE ROLE app_user LOGIN PASSWORD '***';
CREATE DATABASE newdb OWNER app_user;
psql -h DST_HOST -U app_user -d newdb -v ON_ERROR_STOP=1 -f /path/to/src_db.sql
20240408

mysqldump --single-transaction --compatible=ansi --default-character-set=utf8 -u root -p fabrikamlive > fabrikamsandbox.20240408.sql

https://gist.github.com/barseghyanartur/56876ab3acbd3d5d6ab7dcc477c29238: completely out of date, throws python errors

https://github.com/AnatolyUss/NMIG: throws syntax errors (probably wrong version of nodejs / npm)

https://pgdba.org/post/migration/ pg_chameleon

The instructions were largely okay, but I had to replace the following CLI commands with versions that use double dashes (--) instead of a single long dash (–):

chameleon create_replica_schema
chameleon add_source --config migration --source mysql
chameleon show_status --config migration

https://github.com/maxlapshin/mysql2postgres last updates from 4 years ago, uses Ruby gems. Tried to install, but was stranded with gem dependency errors.

https://stackoverflow.com/questions/92043/is-there-a-simple-tool-to-convert-mysql-to-postgresql-syntax

Checking available disk space before the migration:

root@localhost:~# df -h
Filesystem             Size  Used Avail Use% Mounted on
tmpfs                  791M  1.7M  789M   1% /run
/dev/mapper/vg00-lv01  392G  234G  142G  63% /
tmpfs                  3.9G   32K  3.9G   1% /dev/shm
tmpfs                  5.0M     0  5.0M   0% /run/lock
/dev/sda1              488M  251M  201M  56% /boot
tmpfs                  791M  8.0K  791M   1% /run/user/0

More pgloader

20240415

pgloader --with "prefetch rows = 1000" --with "drop schema" mysql://sandbox:passwd@127.0.0.1/sample_db pgsql://postgres:passwd@localhost/sampledb

Current issue: type casting issue arises when you do a query like

select id, username from mdl_user where username = 'onno' and deleted <> 1;

The comparison fails because pgloader’s default casting rules turn MySQL tinyint(1) columns into PostgreSQL boolean, and the <> operator cannot compare a boolean against the integer 1.

So I had Totara create the schema (simply do a Totara install and stop when they ask for the admin user data) and then asked pgloader to only load the data:

pgloader --with "prefetch rows = 1000" --with "data only" mysql://sandbox:passwd@127.0.0.1/fabrikamlive pgsql://postgres:passwd@localhost/fabrikamsandbox



To empty the previously imported tables before re-running a data-only load, with triggers disabled so that foreign key checks don’t interfere:

SET session_replication_role = 'replica';
SELECT tablename FROM pg_tables WHERE schemaname = 'fabrikamlive';
-- Generate and execute the truncate command for each table
DO $$ DECLARE table_name TEXT;
BEGIN
 FOR table_name IN (SELECT tablename FROM pg_catalog.pg_tables WHERE schemaname = 'fabrikamlive') LOOP
   EXECUTE 'TRUNCATE TABLE fabrikamlive.' || quote_ident(table_name) || ' CASCADE;';
 END LOOP;
END $$;
SET session_replication_role = 'origin';
20240416

2024-04-15T16:53:53.414533Z NOTICE COPY fabrikamlive.mdl_user with 41927 rows estimated [1/4]
2024-04-15T16:53:54.394558Z NOTICE COPY fabrikamlive.mdl_totara_sync_log with 41018 rows estimated [1/4]
2024-04-15T16:53:54.394558Z ERROR Database error 22P02: invalid input syntax for type smallint: "t"
CONTEXT: COPY mdl_user, line 1, column confirmed: "t"

Solved this issue by explicitly casting tinyint to smallint:

pgloader --with "prefetch rows = 1000" --with "data only" --cast "type tinyint when (= precision 1) to smallint keep typemod" mysql://sandbox:passwd@127.0.0.1/fabrikamlive pgsql://postgres:passwd@localhost/fabrikamsandbox

In the end, I got the migration working by:

  • Having Totara create the database schema
  • Using pgloader to import the data
  • Using pgloader to import the schema for the dynamically generated mdl_appraisal_quest_data_* tables.

20240424

But then I found out that Totara has dynamically generated tables which are not created by Totara upon installation.

Unfortunately, using pgloader to do the entire migration, not just the data but also the schema, is not really an option as it leads to numerous errors and crashes.

… And it turns out that max_locks_per_transaction had somehow been set from 128 to 12, which explains the memory issues. Let’s quickly fix that…

Alas, the memory issues persist!

Next Steps
  • Find out how config file works for pgloader
  • With the existing database, fabrikamsandbox, import only the missing tables

This works, more or less, if we use a loader file to include only specific tables:

LOAD DATABASE
     FROM mysql://sandbox:passwd@127.0.0.1/fabrikamlive
     INTO pgsql://postgres:passwd@localhost/sampledb01
WITH include drop, create tables, reset sequences, prefetch rows = 1000, disable triggers, create no indexes

SET MySQL PARAMETERS
     net_read_timeout  = '1200',
     net_write_timeout = '1200'

CAST type tinyint to smallint drop typemod

INCLUDING ONLY TABLE NAMES MATCHING ~/^mdl_appraisal_quest_data_[^\s]+/
 ;

[^\s]+ matches one or more characters that are not whitespace characters.
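The same match can be sanity-checked on the shell before running the load (a sketch; grep -E writes the character class as [[:space:]] instead of \s, and the table names are made-up examples):

```shell
# Filter a sample list of table names with the equivalent POSIX regex.
printf '%s\n' mdl_user mdl_appraisal_quest_data_7 mdl_appraisal_quest_data_42 \
  | grep -E '^mdl_appraisal_quest_data_[^[:space:]]+'
# Prints the two mdl_appraisal_quest_data_* names; mdl_user is filtered out.
```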

It might be necessary to free up some memory on the server, before running pgloader, by:

  • turning off the cron jobs for Totara (e.g. through Webmin)
  • rebooting the server
  • turning off Apache: service apache2 stop

Potentially troublesome table: mdl_temp_rb_course_completions_v1

20240720

Preparation

vim /etc/postgresql/14/main/postgresql.conf

I set it to:

max_locks_per_transaction = 128

Restart the PostgreSQL server:

service postgresql restart

Steps

Did Live Fabrikam Foods migration by:

  • Having Totara create the database schema
  • Using pgloader to import the data
  • Using pgloader to import the schema for the dynamically generated mdl_appraisal_quest_data_* tables.

pgloader script:

LOAD DATABASE
     FROM mysql://sandbox:passwd@127.0.0.1/fabrikamlive
     INTO pgsql://postgres:passwd@localhost/sampledb01

WITH prefetch rows = 1000, data only, truncate

SET MySQL PARAMETERS
     net_read_timeout  = '1200',
     net_write_timeout = '1200'

CAST 
     type tinyint when (= precision 1) to smallint keep typemod

 ;

Execute, e.g.:

pgloader myloader.load

Various small issues:

  • The schema must be renamed from public to fabrikamlive: alter schema public rename to fabrikamlive; Note that schemas are per-database, so the rename only applies to the database you are connected to.
  • /etc/mysql/mysql.cnf must have default-authentication-plugin=mysql_native_password (and even then it might be necessary to do: ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY 'your_root_password';)
  • /etc/postgresql/14/main/postgresql.conf must have: max_locks_per_transaction = 128
  • If you have Totara create the database tables, be sure to include the ‘WITH truncate’ option in pgloader – see pgloader script above. Otherwise tables which have records will not be filled (mdl_user and mdl_course for instance).

20240928

I had to run VACUUM ANALYZE; on the fabrikamlive database. The catalog was loading very slowly (1 minute or more) for a specific user. VACUUM ANALYZE; fixed this.

Actually, the real issue was with the online content marketplaces feature, which does an extremely inefficient has_capability_in_any_context check.

Live branch was at commit 1daa62ced.

20241025

After importing a Postgres DB it is usually necessary to:

  1. Change the schema to public
  2. Change the ownership of all tables to your DB user

Ad 1:

DO $$ 
DECLARE 
    tbl RECORD;
BEGIN
    FOR tbl IN 
        SELECT table_name 
        FROM information_schema.tables 
        WHERE table_schema = 'fabrikamlive'
    LOOP
        EXECUTE format('ALTER TABLE fabrikamlive.%I SET SCHEMA public;', tbl.table_name);
    END LOOP;
END $$;

Ad 2:

DO
$$
DECLARE
  r RECORD;
BEGIN
  FOR r IN SELECT table_name FROM information_schema.tables WHERE table_schema = 'public' LOOP
    EXECUTE 'ALTER TABLE public.' || quote_ident(r.table_name) || ' OWNER TO fabrikamsand2;';
  END LOOP;
END
$$;

I also found out I had to do these, on one occasion:

DO $$
DECLARE r record;
BEGIN
  FOR r IN
    SELECT c.oid::regclass AS obj
    FROM pg_class c
    JOIN pg_namespace n ON n.oid = c.relnamespace
    WHERE n.nspname='public' AND c.relkind IN ('r','p') AND c.relname LIKE 'mdl_%'
  LOOP
    EXECUTE format('ALTER TABLE %s OWNER TO fabrikamstaging;', r.obj);
  END LOOP;
END$$;

-- Sequences
DO $$
DECLARE r record;
BEGIN
  FOR r IN
    SELECT c.oid::regclass AS obj
    FROM pg_class c
    JOIN pg_namespace n ON n.oid = c.relnamespace
    WHERE n.nspname='public' AND c.relkind='S' AND c.relname LIKE 'mdl_%'
  LOOP
    EXECUTE format('ALTER SEQUENCE %s OWNER TO fabrikamstaging;', r.obj);
  END LOOP;
END$$;

(Or simply use ChatGPT to produce the code with your specific user and database inserted right away.)

  • Create a separate PostgreSQL user (can have the same name as the database)
  • sudo -u postgres psql
  • CREATE USER moodleuser WITH PASSWORD 'yourpassword';
  • (If user already exists): ALTER USER moodleuser PASSWORD 'new_password';
  • Simply keep all tables in the public schema (or move them into there when importing)
  • Use password authentication (as opposed to peer authentication, which uses the Unix user in the background) to access the database as the user, e.g.:
  • psql -U fabrikamstaging -d fabrikam -h 127.0.0.1

Proper PostgreSQL Restore Procedure (and Why It’s Necessary)

When restoring a PostgreSQL database for Moodle or Totara, it’s common to encounter permission and ownership problems such as:

ERROR:  permission denied for table mdl_config
ERROR:  cannot change owner of sequence ... because it is linked to table ...

These errors occur because PostgreSQL, unlike MySQL, preserves exact ownership metadata from the original dump. Every table, sequence, and schema has an explicit owner. If you restore the dump as the postgres superuser, all objects will be owned by postgres—but Totara will later connect as its own role (e.g. totara) and lack permissions to manage those objects.

MySQL does not enforce this distinction: the importing user automatically “owns” everything in the database, so these issues never appear there.

To prevent these conflicts and ensure smooth operation during upgrades or plugin installations, you must import the database as the application’s own user, not as postgres.

Correct Restore Template

This short five-step process guarantees that all imported tables, sequences, and schemas are owned by the intended role.

# 1. Switch to the postgres system user
sudo -u postgres psql

# 2. Create the database user (role)
CREATE USER totara WITH PASSWORD 'strongpassword';

# 3. Create the database owned by that user
CREATE DATABASE totara OWNER totara ENCODING 'UTF8' TEMPLATE template0;

# 4. Quit psql
\q

# 5. Import the dump *as the totara user*
psql -U totara -d totara -h 127.0.0.1 -f /path/to/your_dump.sql

That’s all you need—no post-import ALTER OWNER scripts, no schema renames, no reassign loops.

Additional Notes

  • If the dump was created with pg_dump -Fc, use pg_restore instead: pg_restore -U totara -d totara /path/to/your_dump.dump

  • Ensure pg_hba.conf allows local password authentication for the totara user.

  • Always verify ownership after import: \dt+ public.mdl_user

  • Owner should be totara.

In config.php, make sure the dbhost variable is not set to ‘localhost’ but to 127.0.0.1, so that the connection uses TCP with password authentication rather than the Unix socket with peer authentication.

Why PostgreSQL Is Stricter

PostgreSQL’s role system enforces fine-grained security:

  • Each object (table, sequence, view, function, etc.) has a single explicit owner.
  • Linked sequences must share ownership with their tables.
  • Privileges do not cascade automatically.

This model increases safety and maintainability in multi-application environments—but requires a bit more care during setup and migrations.

MySQL, by contrast, automatically grants implicit ownership of all imported objects to the current user, masking these details.

Summary

Following the procedure above ensures Totara (or Moodle) can:

  • Run upgrades and install plugins without permission errors
  • Create and modify tables during schema changes
  • Access sequences and perform inserts normally

SOP: Plain-text PostgreSQL dump/restore (sudo -u postgres, no owner/ACL)

0. Summary

  • Dump as plain SQL with --no-owner --no-acl --blobs, run as the postgres OS user.
  • Create one owner role and one database on the target.
  • Restore as that owner. Minimal hassle, version-friendly.

1. Dump on the source (local host)

sudo -u postgres pg_dump -d fabrikamlive \
  --format=plain --no-owner --no-acl --encoding=UTF8 --blobs \
| gzip > /backups/fabrikamlive.YYYYMMDD.sql.gz

Notes:

  • Uses peer auth via the local Unix socket (no password prompts).
  • Keep --blobs unless you are sure there are no large objects.
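Before transferring, it can be worth verifying that the gzip archive is intact. A sketch, demonstrated on a tiny made-up dump so it is self-contained; for real use, run gzip -t against your actual .sql.gz file:

```shell
# Create a tiny demo .sql.gz and verify its integrity, as you would the real dump.
DUMP=$(mktemp --suffix=.sql.gz)
echo 'SELECT 1;' | gzip > "$DUMP"
gzip -t "$DUMP" && echo "dump archive OK"
```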

2. Transfer the dump

scp /backups/fabrikamlive.YYYYMMDD.sql.gz user@DST_HOST:/tmp/

3. Prepare the target (one role, one DB)

Run once on the target (psql as a superuser/dbadmin):

sudo -u postgres psql
CREATE ROLE app_user LOGIN PASSWORD 'REPLACE_ME';
CREATE DATABASE newdb OWNER app_user;

Optional hardening (safe for single-role apps):

REVOKE ALL ON DATABASE newdb FROM PUBLIC;
REVOKE CREATE ON SCHEMA public FROM PUBLIC;
ALTER SCHEMA public OWNER TO app_user;
ALTER ROLE app_user IN DATABASE newdb SET search_path = public;

4. Restore

If the dump is gzipped:

gunzip -c /tmp/fabrikamlive.YYYYMMDD.sql.gz | psql \
  -h DST_HOST -U app_user -d newdb -v ON_ERROR_STOP=1

(You’ll be prompted for the password, but the process will continue smoothly upon entering it.)

If you placed the dump on the DB server itself and want to use the postgres OS user:

gunzip -c /tmp/fabrikamlive.YYYYMMDD.sql.gz | sudo -u postgres psql -d newdb -v ON_ERROR_STOP=1

5. Post-restore sanity checks

Run as app_user on newdb unless noted.

Show search path:

SHOW search_path;

Extensions present:

SELECT extname, extversion FROM pg_extension ORDER BY 1;

Quick row counts (example tables):

SELECT COUNT(id) FROM mdl_user;

6. Common pitfalls and fixes

  • Error about permissions during restore: ensure you are restoring as app_user, and that public is owned by app_user (see hardening block).
  • Role "someone" does not exist: indicates owners/ACLs leaked into the dump. Re-dump exactly as above with --no-owner --no-acl.
  • Version differences: plain SQL is generally fine. If you hit extension issues, precreate allowed extensions on the target, or drop/adjust unsupported ones.

7. Optional: one-liner remote pull (no intermediate file on source)

From a machine that can reach the source and target:

ssh SRC_HOST "sudo -u postgres pg_dump -d fabrikamlive --format=plain --no-owner --no-acl --encoding=UTF8 --blobs | gzip" \
  > /tmp/fabrikamlive.YYYYMMDD.sql.gz

Then proceed with steps 3–4.

That’s it. This variant keeps everything simple, avoids passwords on the dump step, and restores cleanly under a single owner on the target.

Solin specializes in Moodle database migrations and PostgreSQL configuration.

Contact us