Editor’s note: This guide includes field-tested setup notes that started as internal engineering notes. Rough edges have been retained where they still carry useful implementation detail.

Initial Server Setup Notes

  • Virtualmin install script: installs everything on a fresh system (for instance, a completely new VPS). Use the --minimal flag to skip all the email-related packages. But first install MySQL, since Virtualmin will otherwise install MariaDB in a way that is not compatible with Moodle usage.

“Do not pre-install or pre-configure any additional packages on your OS (except in the specific and not recommended case where you want to install MySQL instead of MariaDB, wherein you should install the MySQL packages first). The installer expects a freshly installed, supported, OS, and anything you do differently may cause conflicts or configuration errors. If you will be enabling third-party package repositories, do so after installation of Virtualmin, and only with extreme caution.”

https://www.virtualmin.com/download/
  • Add section on how to install MySQL

    • Ever since MySQL 8 or thereabouts, the default installation no longer allows you to log in with something like mysql -uroot -p

    • Instead, you have to be the system root user (or use sudo). On a VPS this is typically only a problem if you’re not already working as root.

      • For instance: sudo mysql -uroot -p

    • Please note that if you install MySQL through Virtualmin, it will set up the security for you, and ask you for a password. You don’t need to do anything else.

    • MariaDB has innodb_adaptive_hash_index turned off by default. We absolutely need this turned on for the large and complex queries involving the lookup of quiz attempts etc. Change this to ‘ON’ in your version of ‘my.cnf’. (For example-host on the VPS provider, this file is: /etc/mysql/mariadb.conf.d/50-server.cnf.) Please note: I still couldn’t get MariaDB to perform adequately; course loading times were 10 to 50 times what they were on the exact same server using MySQL.
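A minimal sketch of the relevant config section, assuming the Debian/Ubuntu MariaDB layout mentioned above (the exact file path may differ on your distribution):

```
# /etc/mysql/mariadb.conf.d/50-server.cnf
[mysqld]
innodb_adaptive_hash_index = ON
```

Restart MariaDB afterwards (service mariadb restart) and verify with SHOW VARIABLES LIKE 'innodb_adaptive_hash_index';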

    • Make sure the default collation is set to utf8mb4_unicode_ci.
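For example (a sketch using the standard server options; verify the option names against your MySQL/MariaDB version):

```
[mysqld]
character-set-server = utf8mb4
collation-server = utf8mb4_unicode_ci
```

Check the result from the client with SHOW VARIABLES LIKE 'collation_server';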

  • Make sure that the MySQL system variable log_bin is OFF (to prevent enormous files from being generated).

    • If it’s ON, go to /etc/mysql/mysql.conf.d/mysqld.cnf and, under the [mysqld] section, add disable_log_bin (MySQL 8). Lower versions use: skip-log-bin.
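To check the current state from the MySQL client before and after editing the config:

```sql
SHOW VARIABLES LIKE 'log_bin';
-- Value should read OFF once the server has been restarted
```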

  • Automatically apply security updates.

    • Go to Webmin, select System > Software Package Updates.

    • Click on the Scheduled Upgrades tab

    • Set “Check for updates on schedule?” to every day.

    • Set “Action when update needed” to “Install security updates”

    • Click on the Save button.

  • If you can’t get your PHP Apache module (e.g. mod_php7.4) to work, check that your vhost (i.e. your virtual host’s configuration file) does not contain the line:

php_admin_value engine off

If that line is present, PHP is disabled and your PHP code may be served as plain source text when you visit the URL of a PHP file.
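A quick way to find the offending directive is to grep the enabled vhosts. The sketch below runs against a throwaway file so it is self-contained; in practice, point grep at /etc/apache2/sites-enabled/ (Debian-style layout assumed):

```shell
# In practice: grep -Rn "php_admin_value engine" /etc/apache2/sites-enabled/
vhost=$(mktemp)
printf 'php_admin_value engine off\n' > "$vhost"
grep -n "php_admin_value engine" "$vhost"   # prints the matching line with its line number
rm -f "$vhost"
```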

  • Add Dutch locales (for Dutch clients only):

    • locale-gen nl_NL

    • locale-gen nl_NL.UTF8 (if that doesn’t work, do sudo apt-get install language-pack-nl)

    • update-locale (then restart Apache and php-fpm)

  • Secure .git directories by including this in /etc/apache2/apache2.conf (usually after the section that starts with <FilesMatch "^\.ht">):

# Deny public access to any .git directory to enhance security

<DirectoryMatch "^/.*/\.git/">

Require all denied

</DirectoryMatch>

Migrating from MySQL to PostgreSQL

Install PostgreSQL

Add the following packages:

apt-get install postgresql

apt-get install php8.1-pgsql

apt-get install pgloader

Please make sure to check the actual php version you’re using on your Moodle or Totara website.

Install pgloader

The version of pgloader that comes with Ubuntu 22.04 (through apt-get) did not work for me; it gave me an error for which I found no solution.

Instead, I compiled pgloader from source as explained here:

sudo apt install sbcl unzip libsqlite3-dev gawk curl make freetds-dev libzip-dev
curl -fsSLO https://github.com/dimitri/pgloader/archive/v3.6.10.tar.gz
tar -xvf v3.6.10.tar.gz
cd pgloader-3.6.10/

make pgloader

mv ./build/bin/pgloader /usr/local/bin/

And finally check whether the installation worked:

root@localhost:/home/sandbox/pgloader-3.6.10# pgloader --version
pgloader version "3.6.7~devel"

compiled with SBCL 2.1.11.debian

Configure PostgreSQL

Connect to the database server:

sudo -u postgres psql

Set a password for the postgres user:

ALTER USER postgres PASSWORD '<password>';

Replace the <password> with the one you want. Then quit with:

\q

(From: Install PostgreSQL Linux)

Create Database

On the command prompt (so, not in the psql client):

sudo -u postgres createdb -E utf8 {dbname}

Make sure that MySQL is using:

default-authentication-plugin=mysql_native_password

(You can specify that in /etc/mysql/mysql.cnf under the [mysqld] section. Simply add that section if it doesn’t exist. Don’t forget to restart the MySQL server: service mysql restart).

Also make sure that the MySQL user is using this authentication method:

ALTER USER 'sandbox'@'localhost' IDENTIFIED WITH mysql_native_password BY 'secretpassword';
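To verify which authentication plugin an account uses (mysql.user is the standard grant table; the sandbox user is from the example above):

```sql
SELECT user, host, plugin FROM mysql.user WHERE user = 'sandbox';
-- plugin should read mysql_native_password
```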

Migrate MySQL Database to PostgreSQL

Use this command to do the actual migration (which is really a copy & convert, since the original MySQL database is left intact):

pgloader mysql://sandbox:passwd@127.0.0.1/sample_db \
  pgsql://postgres:passwd@localhost/sampledb

Please note: in practice the command needs the prefetch option (see the heap-exhaustion issue below), so it should be the following:

pgloader --with "prefetch rows = 1000" \
  mysql://sandbox:passwd@127.0.0.1/sample_db \
  pgsql://postgres:passwd@localhost/sampledb

To see if it worked, use the following command to list all tables:

\dt

Issue: Heap exhausted

2024-03-22T13:02:33.040001Z LOG pgloader version "3.6.7~devel"
2024-03-22T13:02:33.188004Z LOG Migrating from #<MYSQL-CONNECTION
mysql://sandbox@127.0.0.1:3306/appdb {10072891C3}>
2024-03-22T13:02:33.188004Z LOG Migrating into #<PGSQL-CONNECTION
pgsql://postgres@localhost:5432/appdb {10074545A3}>

Heap exhausted during garbage collection: 0 bytes available, 32 requested.

Gen Boxed Code Raw LgBox LgCode LgRaw Pin Alloc Waste Trig WP GCs Mem-age

fatal error encountered in SBCL pid 859879 tid 859888:

GC invariant lost, file "gencgc.c", line 488

Solved this particular issue by adding this:

--with "prefetch rows = 1000"

So:

pgloader --with "prefetch rows = 1000" \
  mysql://sandbox:passwd@127.0.0.1/sample_db \
  pgsql://postgres:passwd@localhost/sampledb

Issue: out of shared memory

I subsequently ran into:

ERROR Database error 53200: out of shared memory

HINT: You might need to increase max_locks_per_transaction.

It turns out that this is a PostgreSQL setting which can be adjusted here:

vim /etc/postgresql/14/main/postgresql.conf

I set it to:

max_locks_per_transaction = 128
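After the restart, the new value can be confirmed from psql:

```sql
SHOW max_locks_per_transaction;
-- expected: 128
```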

Restart the PostgreSQL server:

service postgresql restart

This seems to have worked:

appdb.mdl_workshopform_rubric_config           0         0        11.144s
appdb.mdl_workspace                            0         0        11.072s
appdb.mdl_workspace_member_request             0         0        11.084s
--------------------------------------  --------  --------  -------------
COPY Threads Completion                        0         4      28m1.160s
Create Indexes                                 0      4334   3h59m42.459s
Index Build Completion                         0      4334         0.164s
Reset Sequences                                0       915         2.876s
Primary Keys                                   0       915         0.792s
Create Foreign Keys                            0       148         0.424s
Create Triggers                                0         0         0.000s
Set Search Path                                0         1         0.000s
Install Comments                               0       747         0.264s
--------------------------------------  --------  --------  -------------
Total import time                       86727644   20.9 GB   4h27m48.138s

Issue 20240323

This should select columns on appdb, but doesn’t:

SELECT a.attnum, a.attname AS field, t.typname AS type, a.attlen,
       a.atttypmod, a.attnotnull, a.atthasdef,
       pg_get_expr(d.adbin, d.adrelid) AS adsrc
FROM pg_catalog.pg_class c
JOIN pg_catalog.pg_namespace AS ns ON ns.oid = c.relnamespace
JOIN pg_catalog.pg_attribute a ON a.attrelid = c.oid
JOIN pg_catalog.pg_type t ON t.oid = a.atttypid
LEFT JOIN pg_catalog.pg_attrdef d ON (d.adrelid = c.oid AND d.adnum = a.attnum)
WHERE c.relkind = 'r' AND c.relname = 'mdl_context' AND c.reltype > 0
  AND a.attnum > 0
  AND (ns.nspname = current_schema() OR ns.oid = pg_my_temp_schema())
ORDER BY a.attnum;

It looks like pgloader does not populate the pg_catalog tables.

20240415

But in reality, the solution is very simple: in PostgreSQL, each database has a schema. By default, this schema is named public. If you don’t use the fully qualified name of a database object, such as a table, in PostgreSQL, then the public schema will be used implicitly.

Now, pgloader names the new schema after the imported database. And since Totara does not explicitly mention the schema name, none of the Totara queries will work.

The solution is to connect to the database and then to simply rename the schema created by pgloader to ‘public’:

alter schema public rename to original_public;

alter schema appdb rename to public;

We can do this, because the PostgreSQL documentation states: “There is nothing special about the public schema except that it exists by default.”
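After the rename, a quick sanity check from psql (\dn lists schemas; current_schema() should now resolve to public):

```sql
SELECT current_schema();
\dn
```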

Dump PostgreSQL Database

sudo -u postgres pg_dump appdb > appdb.20240329.sql

Here’s the canonical command:

pg_dump -h SRC_HOST -U SRC_USER -d SRC_DB \
  --format=plain \
  --no-owner --no-acl \
  --encoding=UTF8 \
  --blobs \
  | gzip > /path/to/src_db.sql.gz

--no-acl means "don't include access control lists (privileges/grants) in the dump."

We don't want "owners" and "privileges/grants" because they cause
problems when restoring the dump if you don't have the same owner and
privileges in place.

Restore PostgreSQL Database

Create the database first, then import the dump:

sudo -u postgres createdb -E utf8 appdb
sudo -u postgres psql appdb < appdb.20240329.sql

Or, with a dump that has no owner and privileges/grant:

CREATE ROLE app_user LOGIN PASSWORD '***';
CREATE DATABASE newdb OWNER app_user;
psql -h DST_HOST -U app_user -d newdb -v ON_ERROR_STOP=1 \
  -f /path/to/src_db.sql

20240408

mysqldump --single-transaction --compatible=ansi --default-character-set=utf8 -u root -p appdb > appdb.20240408.sql

https://gist.github.com/barseghyanartur/56876ab3acbd3d5d6ab7dcc477c29238: completely out of date, throws Python errors.
https://github.com/AnatolyUss/NMIG: throws syntax errors (probably wrong version of Node.js / npm).

lms.example.com/post/migration/ pg_chameleon

The instructions were largely okay, but I had to replace the following CLI commands with versions that have double dashes (--) instead of a single long dash:

chameleon create_replica_schema

chameleon add_source --config migration --source mysql

chameleon show_status --config migration

https://github.com/maxlapshin/mysql2postgres
last updates from 4 years ago, uses Ruby gems. Tried to install, but was
stranded with gem dependency errors.
https://stackoverflow.com/questions/92043/is-there-a-simple-tool-to-convert-mysql-to-postgresql-syntax
root@localhost:~# df -h

Filesystem Size Used Avail Use% Mounted on

tmpfs 791M 1.7M 789M 1% /run

/dev/mapper/vg00-lv01 392G 234G 142G 63% /

tmpfs 3.9G 32K 3.9G 1% /dev/shm

tmpfs 5.0M 0 5.0M 0% /run/lock

/dev/sda1 488M 251M 201M 56% /boot

tmpfs 791M 8.0K 791M 1% /run/user/0

More pgloader

20240415

pgloader --with "prefetch rows = 1000" --with "drop schema" \
  mysql://sandbox:passwd@127.0.0.1/sample_db \
  pgsql://postgres:passwd@localhost/sampledb

Current issue: type casting issue arises when you do a query like

select id, username from mdl_user where username = 'the lead engineer' and deleted <> 1;

The <> operator fails here because pgloader’s default casting imports MySQL tinyint(1) columns as PostgreSQL booleans, and there is no boolean <> integer operator.
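A sketch of the symptom and a per-query workaround, assuming the deleted column was imported as boolean (the structural fix used further on is to let Totara create the schema instead):

```sql
-- Fails after a default pgloader import:
--   ERROR: operator does not exist: boolean <> integer
SELECT id, username FROM mdl_user WHERE deleted <> 1;

-- Workaround: cast the column explicitly
SELECT id, username FROM mdl_user WHERE deleted::int <> 1;
```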

So I had Totara create the schema (simply do a Totara install and stop when they ask for the admin user data) and then asked pgloader to only load the data:

pgloader --with "prefetch rows = 1000" --with "data only" \
  mysql://sandbox:passwd@127.0.0.1/appdb \
  pgsql://postgres:passwd@localhost/appdb
Then, in psql, relax trigger/foreign-key enforcement and list the tables to truncate:

SET session_replication_role = 'replica';
SELECT tablename FROM pg_tables WHERE schemaname = 'appdb';

-- Generate and execute the truncate command for each table
DO $$
DECLARE table_name TEXT;
BEGIN
  FOR table_name IN (SELECT tablename FROM pg_catalog.pg_tables WHERE schemaname = 'appdb') LOOP
    EXECUTE 'TRUNCATE TABLE ' || quote_ident(table_name) || ' CASCADE;';
  END LOOP;
END $$;
SET session_replication_role = 'origin';
COMMIT;

20240416

2024-04-15T16:53:53.414533Z NOTICE COPY appdb.mdl_user with 41927 rows estimated [1/4]

2024-04-15T16:53:54.394558Z NOTICE COPY appdb.mdl_totara_sync_log with 41018 rows estimated [1/4]

2024-04-15T16:53:54.394558Z ERROR Database error 22P02: invalid input
syntax for type smallint: "t"
CONTEXT: COPY mdl_user, line 1, column confirmed: "t"

Solved this issue by explicitly casting tinyint to smallint:

pgloader --with "prefetch rows = 1000" --with "data only" \
  --cast "type tinyint when (= precision 1) to smallint keep typemod" \
  mysql://sandbox:passwd@127.0.0.1/appdb \
  pgsql://postgres:passwd@localhost/appdb

In the end, I got the migration working by:

  1. Having Totara create the database schema

  2. Using pgloader to import the data

  3. Using pgloader to import the schema for the dynamically generated mdl_appraisal_quest_data_* tables.

20240424

But then I found out that Totara has dynamically generated tables which are not created by Totara upon installation.

Unfortunately, using pgloader to do the entire migration, not just the data but also the schema, is not really an option as it leads to numerous errors and crashes.

… And it turns out that max_locks_per_transaction had somehow been set from 128 to 12, which explains the memory issues. Let’s quickly fix that…

Alas, the memory issues persist!

Next Steps

  • Find out how config file works for pgloader

  • With the existing database, appdb, import only the missing tables

This works, more or less, if we use a loader file to include only specific tables:

LOAD DATABASE
FROM mysql://sandbox:passwd@127.0.0.1/appdb
INTO pgsql://postgres:passwd@localhost/onnotest01
WITH include drop, create tables, reset sequences, prefetch rows =
1000, disable triggers, create no indexes
SET MySQL PARAMETERS

net_read_timeout = '1200',
net_write_timeout = '1200'

CAST type tinyint to smallint drop typemod
INCLUDING ONLY TABLE NAMES MATCHING
~/^mdl_appraisal_quest_data_[^\s]+/

;

[^\s]+ matches one or more characters that are not whitespace characters.
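The same pattern can be exercised with grep, using the POSIX class [^[:space:]] in place of \s (the table names below are made up for the demonstration):

```shell
# Only the dynamically generated appraisal tables should match
printf '%s\n' mdl_user mdl_appraisal_quest_data_5 mdl_course \
  | grep -E '^mdl_appraisal_quest_data_[^[:space:]]+$'
```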

It might be necessary to free up some memory on the server, before running pgloader, by:

  • turning off the cron jobs for Totara (e.g. through Webmin)

  • rebooting the server

  • turning off Apache: service apache2 stop

Potentially troublesome table: mdl_temp_rb_course_completions_v1

20240720

Preparation

vim /etc/postgresql/14/main/postgresql.conf

I set it to:

max_locks_per_transaction = 128

Restart the PostgreSQL server:

service postgresql restart

Steps

Did Live appdb migration by:

  1. Having Totara create the database schema

  2. Using pgloader to import the data

  3. Using pgloader to import the schema for the dynamically generated mdl_appraisal_quest_data_* tables.

pgloader script:
LOAD DATABASE
FROM mysql://sandbox:passwd@127.0.0.1/appdb
INTO pgsql://postgres:passwd@localhost/onnotest01
WITH prefetch rows = 1000, data only, truncate
SET MySQL PARAMETERS

net_read_timeout = '1200',
net_write_timeout = '1200'

CAST

type tinyint when (= precision 1) to smallint keep typemod

;

Execute, e.g.:

pgloader myloader.load

Various small issues:

  • Schema must be renamed from public to appdb: alter schema public rename to appdb; Note that schemas belong to a single database, so connect to appdb before issuing the command.

  • /etc/mysql/mysql.cnf must have default-authentication-plugin=mysql_native_password (and even then it might be necessary to do: ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY 'your_root_password';)

  • /etc/postgresql/14/main/postgresql.conf must have: max_locks_per_transaction = 128
  • If you have Totara create the database tables, be sure to include the truncate option in the pgloader WITH clause (see the pgloader script above). Otherwise tables which already contain records (mdl_user and mdl_course, for instance) will not be filled.

  • 20240928

    I had to run VACUUM ANALYZE; on the appdb database. The catalog was loading very slowly (1 minute or more) for a specific user. VACUUM ANALYZE; fixed this.

    Actually, the real issue was with the online content marketplaces feature, which does an extremely inefficient has_capability_in_any_context check.

    Live branch was at commit 1daa62ced.
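To see whether autovacuum has been keeping up before resorting to a manual VACUUM ANALYZE, the last vacuum/analyze timestamps can be inspected per table (standard pg_stat_user_tables view):

```sql
SELECT relname, last_vacuum, last_autovacuum, last_analyze
FROM pg_stat_user_tables
ORDER BY last_analyze NULLS FIRST;
```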

    20241025

    After importing a Postgres DB it is usually necessary to:

    1. Change the schema to public

    2. Change the ownership of all tables to your DB user

    Ad 1:

    DO $$
    DECLARE
      tbl RECORD;
    BEGIN
      FOR tbl IN
        SELECT table_name
        FROM information_schema.tables
        WHERE table_schema = 'appdb'
      LOOP
        EXECUTE format('ALTER TABLE appdb.%I SET SCHEMA public;', tbl.table_name);
      END LOOP;
    END $$;

    Ad 2:

    DO $$
    DECLARE
      r RECORD;
    BEGIN
      FOR r IN SELECT table_name FROM information_schema.tables WHERE table_schema = 'public' LOOP
        EXECUTE 'ALTER TABLE public.' || quote_ident(r.table_name) || ' OWNER TO cranswicksand2;';
      END LOOP;
    END $$;

    I also found out I had to do these, on one occasion:

    DO $$
    DECLARE r record;
    BEGIN
      FOR r IN
        SELECT c.oid::regclass AS obj
        FROM pg_class c
        JOIN pg_namespace n ON n.oid = c.relnamespace
        WHERE n.nspname = 'public' AND c.relkind IN ('r', 'p') AND c.relname LIKE 'mdl_%'
      LOOP
        EXECUTE format('ALTER TABLE %s OWNER TO appdb;', r.obj);
      END LOOP;
    END $$;

    -- Sequences
    DO $$
    DECLARE r record;
    BEGIN
      FOR r IN
        SELECT c.oid::regclass AS obj
        FROM pg_class c
        JOIN pg_namespace n ON n.oid = c.relnamespace
        WHERE n.nspname = 'public' AND c.relkind = 'S' AND c.relname LIKE 'mdl_%'
      LOOP
        EXECUTE format('ALTER SEQUENCE %s OWNER TO appdb;', r.obj);
      END LOOP;
    END $$;

    (Or simply use ChatGPT to produce the code with your specific user and database inserted right away.)

    • Create a separate PostgreSQL user (can have the same name as the database)

      sudo -u postgres psql
      CREATE USER moodleuser WITH PASSWORD 'yourpassword';
    • (If the user already exists): ALTER USER moodleuser PASSWORD 'new_password';

  • Simply keep all tables in the public schema (or move them into there when importing)

  • Use password authentication (as opposed to peer authentication, which uses the Unix user in the background) to access the database as the user, e.g.:

    psql -U appdb -d appdb -h 127.0.0.1
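Password authentication over 127.0.0.1 also requires a matching host rule in pg_hba.conf (path varies by version, e.g. /etc/postgresql/14/main/pg_hba.conf). A sketch, using scram-sha-256 as on PostgreSQL 14 defaults (older setups may use md5):

```
# TYPE  DATABASE  USER   ADDRESS        METHOD
host    appdb     appdb  127.0.0.1/32   scram-sha-256
```

Reload afterwards: service postgresql reload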
  • Proper PostgreSQL Restore Procedure (and Why It’s Necessary)

    When restoring a PostgreSQL database for Moodle or Totara, it’s common to encounter permission and ownership problems such as:

    ERROR: permission denied for table mdl_config

    ERROR: cannot change owner of sequence … because it is linked to table …

    These errors occur because PostgreSQL, unlike MySQL, preserves exact ownership metadata from the original dump. Every table, sequence, and schema has an explicit owner. If you restore the dump as the postgres superuser, all objects will be owned by postgres, but Totara will later connect as its own role (e.g. totara) and lack permissions to manage those objects.

    MySQL does not enforce this distinction: the importing user automatically “owns” everything in the database, so these issues never appear there.

    To prevent these conflicts and ensure smooth operation during upgrades or plugin installations, you must import the database as the application’s own user, not as postgres.

    Correct Restore Template

    This five-step process guarantees that all imported tables, sequences, and schemas are owned by the intended role.

    # 1. Switch to the postgres system user

    sudo -u postgres psql

    # 2. Create the database user (role)

    CREATE USER totara WITH PASSWORD 'strongpassword';

    # 3. Create the database owned by that user

    CREATE DATABASE totara OWNER totara ENCODING 'UTF8' TEMPLATE template0;

    # 4. Quit psql

    \q

    # 5. Import the dump *as the totara user*

    psql -U totara -d totara -h 127.0.0.1 -f /path/to/your_dump.sql

    That’s all you need: no post-import ALTER OWNER scripts, no schema renames, no reassign loops.

    Additional Notes

    If the dump was created with pg_dump -Fc, use pg_restore instead:

    pg_restore -U totara -d totara /path/to/your_dump.dump
    • Ensure pg_hba.conf allows local password authentication for the totara user.

    • Always verify ownership after import:

      \dt+ public.mdl_user

      The Owner column should read totara.

    In config.php, make sure the dbhost variable is not set to ‘localhost’, but this instead:

    $CFG->dbhost = '127.0.0.1';

    Why PostgreSQL Is Stricter

    PostgreSQL’s role system enforces fine-grained security:

    • Each object (table, sequence, view, function, etc.) has a single explicit owner.

    • Linked sequences must share ownership with their tables.

    • Privileges do not cascade automatically.

    This model increases safety and maintainability in multi-application environments, but requires a bit more care during setup and migrations.

    MySQL, by contrast, automatically grants implicit ownership of all imported objects to the current user, masking these details.

    Summary

    Database System   Ownership Model     Typical Import Behavior        Post-Import Fix Required?
    MySQL             Implicit / loose    Everything owned by importer   No
    PostgreSQL        Explicit / strict   Ownership preserved exactly    Yes, unless imported as correct user

    Following the procedure above ensures Totara (or Moodle) can:

    • Run upgrades and install plugins without permission errors

    • Create and modify tables during schema changes

    • Access sequences and perform inserts normally

    SOP: Plain-text PostgreSQL dump/restore (sudo -u postgres, no owner/ACL)

    0. Summary

    • Dump as plain SQL with --no-owner --no-acl --blobs, run as the postgres OS user.

    • Create one owner role and one database on the target.

    • Restore as that owner. Minimal hassle, version-friendly.

    1. Dump on the source (local host)

    sudo -u postgres pg_dump -d appdb \
      --format=plain --no-owner --no-acl --encoding=UTF8 --blobs \
      | gzip > /backups/appdb.YYYYMMDD.sql.gz

    Notes:

    • Uses peer auth via the local Unix socket (no password prompts).

    • Keep --blobs unless you are sure there are no large objects.

    2. Transfer the dump

    scp /backups/appdb.YYYYMMDD.sql.gz user@DST_HOST:/tmp/

    3. Prepare the target (one role, one DB)

    Run once on the target (psql as a superuser/dbadmin):

    sudo -u postgres psql
    CREATE ROLE app_user LOGIN PASSWORD 'REPLACE_ME';
    CREATE DATABASE newdb OWNER app_user;

    Optional hardening (safe for single-role apps):

    REVOKE ALL ON DATABASE newdb FROM PUBLIC;
    REVOKE CREATE ON SCHEMA public FROM PUBLIC;

    ALTER SCHEMA public OWNER TO app_user;

    ALTER ROLE app_user IN DATABASE newdb SET search_path = public;

    4. Restore

    If the dump is gzipped:

    gunzip -c /tmp/appdb.YYYYMMDD.sql.gz | psql \
      -h DST_HOST -U app_user -d newdb -v ON_ERROR_STOP=1

    (You’ll be prompted for the password, but the process will continue smoothly upon entering it.)

    If you placed the dump on the DB server itself and want to use the postgres OS user:

    gunzip -c /tmp/appdb.YYYYMMDD.sql.gz | sudo -u postgres psql \
      -d newdb -v ON_ERROR_STOP=1

    5. Post-restore sanity checks

    Run as app_user on newdb unless noted.

    Show search path:

    SHOW search_path;

    Extensions present:

    SELECT extname, extversion FROM pg_extension ORDER BY 1;

    Quick row counts (example tables):

    SELECT COUNT(id) FROM mdl_user;

    6. Common pitfalls and fixes

    • Error about permissions during restore: ensure you are restoring as app_user, and that public is owned by app_user (see hardening block).

    • Role “someone” does not exist: indicates owners/ACLs leaked into the dump. Re-dump exactly as above with --no-owner --no-acl.

    • Version differences: plain SQL is generally fine. If you hit extension issues, precreate allowed extensions on the target, or drop/adjust unsupported ones.

    7. Optional: one-liner remote pull (no intermediate file on source)

    From a machine that can reach the source and target:

    ssh SRC_HOST "sudo -u postgres pg_dump -d appdb --format=plain --no-owner --no-acl --encoding=UTF8 --blobs | gzip" \

    > /tmp/appdb.YYYYMMDD.sql.gz

    Then proceed with steps 3–4.

    That’s it. This variant keeps everything simple, avoids passwords on the dump step, and restores cleanly under a single owner on the target.

    Solin specializes in Moodle hosting, server hardening, monitoring, and operational support. Need help? Contact us.
