Diagnosing Moodle problems systematically requires checking error logs, database integrity, cron execution, and plugin conflicts. This guide walks through the troubleshooting workflow and specific checks for common failure modes.
List the Symptoms
Get access to Moodle and server
Ask your users for:
- Moodle URL
- Moodle admin username
- Moodle admin password
- Web server URL or IP address
- Web server username
- Web server password
If the Moodle website is hosted on one of our own servers, we should already have this information – don’t bother your users in that case, but ask the lead engineer instead.
Find the Root Cause
Checking and Enabling X-Sendfile for Large File Delivery (Moodle)
Overview
When Moodle serves large files (video, H5P assets, SCORM, large images), the default behaviour is for PHP-FPM to stream the file through pluginfile.php. This can block PHP workers for minutes and cause sitewide slowdown. Apache’s mod_xsendfile allows Apache to serve files directly from moodledata, bypassing PHP entirely. This section describes how to enable X-Sendfile and how to test whether it is working.
Symptoms
- Slow or hanging requests to pluginfile.php.
- PHP-FPM slow logs show stack traces in byteserving_send_file() or readfile_accel().
- Many long-running PHP-FPM workers during video playback.
- Users report Moodle becoming unresponsive while accessing large videos or files.
Step 1: Verify Apache has mod_xsendfile
On the server, try enabling the module (if it is already installed and enabled, the command will say so):
a2enmod xsendfile
If the module is missing, install it:
apt update
apt install libapache2-mod-xsendfile
a2enmod xsendfile
service apache2 restart
Step 2: Add required directives to the vhost
Inside the VirtualHost:
<IfModule mod_xsendfile.c>
XSendFile on
XSendFilePath /path/to/moodledata
XSendFilePath /path/to/moodle/code
</IfModule>
Example for a standard installation:
XSendFilePath /home/USERNAME/moodledata
XSendFilePath /home/USERNAME/public_html
Apache must be allowed to access both moodledata (filedir, temp, cache, etc) and dirroot.
Reload Apache:
service apache2 reload
Step 3: Enable X-Sendfile in config.php
Add these lines above the require_once(__DIR__ . '/lib/setup.php'); line:
$CFG->xsendfile = 'X-Sendfile';
$CFG->xsendfilealiases = array();
- Default for standard Apache Moodle sites: use X-Sendfile and leave $CFG->xsendfilealiases = [];
- If logs or an X-Sendfile probe show aliased paths such as /dataroot/..., add the required alias mapping for that site.
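If an alias is needed, the mapping in config.php follows the pattern shown in Moodle's own config-dist.php; a minimal sketch (the mapping below is illustrative, the URL prefix on the left must match what your probe or logs show):

```php
// Illustrative alias mapping (see config-dist.php for the full pattern).
// Moodle emits the URL prefix on the left; Apache resolves it to the
// filesystem path on the right via XSendFilePath.
$CFG->xsendfilealiases = array(
    '/dataroot/' => $CFG->dataroot,
);
```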
Step 4: Basic Apache test (without Moodle)
Create xsendtest.php in the webroot:
<?php
header("X-Sendfile: /home/USERNAME/moodledata/xsend-test.txt");
header("Content-Type: text/plain");
header("Content-Disposition: inline; filename=\"xsend-test.txt\"");
exit;
Create the file:
echo "hello xsendfile" > /home/USERNAME/moodledata/xsend-test.txt
Test:
curl -i https://yourdomain/xsendtest.php
Expected:
- Response shows Content-Disposition, Content-Type, but no X-Sendfile header (Apache strips it)
- File downloads quickly. If you remove the file or path, Apache should return 404, confirming Apache is handling the file rather than PHP.
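The header check can be scripted against saved response headers; a rough sketch (the sample headers below are inlined so the logic is visible, in practice feed it the real output of curl -i):

```shell
# Sample of what a healthy response's headers look like (inlined for illustration)
headers='HTTP/1.1 200 OK
Content-Type: text/plain
Content-Disposition: inline; filename="xsend-test.txt"'

# A leaked X-Sendfile header means Apache did NOT process the directive
if printf '%s\n' "$headers" | grep -qi '^X-Sendfile:'; then
  echo "X-Sendfile header leaked: module not handling the file"
else
  echo "X-Sendfile header stripped: Apache served the file"
fi
```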
Step 5: Moodle side test (optional but recommended)
This test confirms Moodle’s send_stored_file() triggers X-Sendfile.
Create moodle_xsend_probe.php in public_html:
<?php
require(__DIR__ . '/config.php');
require_once($CFG->libdir . '/filelib.php');
// Create a temporary stored_file so Moodle goes through send_stored_file().
$fs = get_file_storage();
$syscontext = context_system::instance();
$content = "xsend probe test";
$file = $fs->create_file_from_string(
array(
'contextid' => $syscontext->id,
'component' => 'local_test',
'filearea' => 'probe',
'itemid' => 0,
'filepath' => '/',
'filename' => 'xsend_probe.txt'
),
$content
);
// Capture headers before Apache strips them.
register_shutdown_function(function() {
file_put_contents(
$GLOBALS['CFG']->dataroot . '/moodle_xsend_probe.log',
"headers_list() at shutdown:\n\n" . implode("\n", headers_list())
);
});
send_stored_file($file);
Test:
curl -I https://yourdomain/moodle_xsend_probe.php
Then check:
cat /home/USERNAME/moodledata/moodle_xsend_probe.log
Expected inside the log:
X-Sendfile: /home/USERNAME/moodledata/filedir/xx/yy/xxxxxxxx...
This is definitive proof that Moodle is using X-Sendfile.
Step 6: Final check if troubleshooting performance
If you still see slow requests:
- Inspect php-fpm slow logs (/var/log/php*-fpm.slow)
- Look for long traces involving byteserving_send_file, readfile_accel, or pluginfile.php
If these disappear after enabling X-Sendfile, the configuration is correct.
Result
When X-Sendfile is working:
- PHP-FPM workers are not tied up streaming videos.
- pluginfile.php requests become extremely fast.
- Concurrency improves and site sluggishness disappears.
Tweaking php-fpm for Optimal Performance
Summary
Use the spreadsheet Compute php-fpm settings for www.conf to compute the values for /etc/php/X.Y/fpm/pool.d/www.conf.
Max Children Issue
php-fpm may cause trouble if pm.max_children is set too low for a specific vhost (this is configured through a ‘pool’ file). You’ll see this mentioned in the log, e.g. /var/log/php7.4-fpm.log:
[30-Sep-2021 10:08:11] WARNING: [pool www] server reached pm.max_children setting (5), consider raising it
Or:
[26-Oct-2023 11:50:13] WARNING: [pool 1692438784256193] server reached pm.max_children setting (16), consider raising it
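The pool name and the configured limit can be pulled out of such a warning with sed; a small sketch using one of the sample lines above (inlined here so the snippet is self-contained):

```shell
# Extract the pool name and the configured limit from a php-fpm warning line
line='[30-Sep-2021 10:08:11] WARNING: [pool www] server reached pm.max_children setting (5), consider raising it'
pool=$(echo "$line" | sed -n 's/.*\[pool \([^]]*\)\].*/\1/p')
limit=$(echo "$line" | sed -n 's/.*setting (\([0-9]*\)).*/\1/p')
echo "pool=$pool limit=$limit"
```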
On Ubuntu (HostEurope), the config file for the web user (typically www-data) is here:
/etc/php/X.Y/fpm/pool.d/www.conf. The other pool files are in the same directory. For instance:
root@blueyondercoaching:/etc/php/8.1/fpm/pool.d# ls -lah
total 64K
drwxr-xr-x 2 root root 4,0K okt 26 15:06 .
drwxr-xr-x 4 root root 4,0K okt 26 14:49 ..
-rw-r--r-- 1 root root 420 aug 17 12:06 169227396768674.conf
-rw-r--r-- 1 root root 413 aug 17 13:09 169227776679898.conf
-rw-r--r-- 1 root root 412 okt 26 14:54 1692438784256193.conf
-rw-r--r-- 1 root root 438 aug 28 11:28 16932221191307961.conf
-rw-r--r-- 1 root root 445 aug 28 11:32 16932223221310218.conf
-rw-r--r-- 1 root root 438 aug 28 19:54 16932524711390798.conf
-rw-r--r-- 1 root root 438 aug 28 19:57 16932526681392819.conf
-rw-r--r-- 1 root root 473 sep 14 11:34 16946912911243061.conf
-rw-r--r-- 1 root root 21K okt 26 13:37 www.conf
Here, 1692438784256193.conf is the php8.1-fpm configuration file for the user elo. How do we know? Virtualmin creates an entry in the vhost conf file, e.g. /etc/apache2/sites-available/portal.blueyonder-coaching.example.conf:
<FilesMatch \.php$>
SetHandler proxy:unix:/var/php-fpm/1692438784256193.sock|fcgi://127.0.0.1
</FilesMatch>
As you can see, the socket number matches with the number used in the conf file name.
In addition, we find the user elo in the output of the php8.1-fpm configuration command:
php-fpm8.1 -tt
Which yields (for this example):
[1692438784256193]
prefix = undefined
user = elo
group = elo
listen = /var/php-fpm/1692438784256193.sock
Again, the numbers match and here the socket is also explicitly mentioned.
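The lookup from vhost to pool file can be scripted; a sketch assuming the Virtualmin naming convention shown above (the SetHandler line and the PHP version path are inlined examples):

```shell
# Derive the pool file name from the socket path in the SetHandler directive
handler='SetHandler proxy:unix:/var/php-fpm/1692438784256193.sock|fcgi://127.0.0.1'
sock=$(echo "$handler" | sed -n 's/.*unix:\([^|]*\)|.*/\1/p')
# Pool files share the socket's base name (PHP version path assumed)
pool_conf="/etc/php/8.1/fpm/pool.d/$(basename "$sock" .sock).conf"
echo "$pool_conf"
```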
So, in the php-fpm conf file that you have found, change the setting for pm.max_children, e.g. from 5 to 25 (this is really just an example; see the spreadsheet below to compute the actual value), and restart the service for the PHP version in question, e.g.:
service php7.4-fpm restart
Getting The Correct php-fpm Settings
See An Introduction to PHP-FPM Tuning – Tideways. The spreadsheet Compute php-fpm settings for www.conf is based on this article. Please note that the:
free -hl
command outputs free and available memory. Use the available memory in your calculations.
- The difference between free memory vs. available memory in Linux is, free memory is not in use and sits there doing nothing. While available memory is used memory that includes but is not limited to caches and buffers, that can be freed without the performance penalty of using swap space.
- https://haydenjames.io/free-vs-available-memory-in-linux/
Use something like pstree -c -H 19741 -S 19741 to see the current number of php-fpm7.4 processes, where you find the pid by looking for php-fpm: master process (/etc/php/7.4/fpm/php-fpm.conf) in the output of ps -ef. This tells you how many ‘children’ are currently spun up.
Compute php-fpm process size:
python /usr/local/bin/ps_mem.py | grep php-fpm
(Grab the Python script here. On newer systems, use python3 instead of python.)
To convert Gi to Mi: https://www.convertunits.com/from/GiB/to/MiB
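The spreadsheet's core calculation boils down to a few lines of shell; all three input values below are assumptions for illustration, substitute the real numbers from free -hl and ps_mem.py:

```shell
# Rough pm.max_children estimate: memory reserved for php-fpm
# divided by the average php-fpm process size.
available_mb=4096   # "available" column from free -m (assumed value)
reserve_mb=1024     # headroom for MySQL, OS caches, etc. (assumed value)
proc_mb=64          # average php-fpm process size from ps_mem.py (assumed value)
max_children=$(( (available_mb - reserve_mb) / proc_mb ))
echo "pm.max_children = $max_children"
```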
Multiple Sites or Vhosts
A final thing to keep in mind is ‘if these "tuned" values are calculated based on the maximum capacity of your server and you put the same values to every site, you'll consume your resources multiple times. Instead, these values should be distributed between the pools so that the sum from all pools is equivalent with the "tuned" value.’
(..) ‘Whatever you do, keep your sites in separate pools with separate user accounts. Otherwise a compromise on a single site can spread across all your sites.’
https://serverfault.com/questions/952658/php-fpm-conf-per-site-vs-server-php-pool-performance-tunning
The latest versions of Virtualmin (writing this 20231026) seem to automatically create a pool file for each new vhost. These articles describe how to do it manually:
From the latter article: “To complete the process, you should repeat the steps for each of your virtual hosts. When you are entirely sure mod_php is not being used anymore you can disable it through
$ sudo a2dismod php8.1
Until you've done this, Apache will still include a PHP process for every request, meaning the memory usage will stay the same and possibly be even higher.”
Checking php-fpm Configuration
Simply use this command to see the configuration parameters that are currently being used:
php-fpm8.1 -tt
And use the following command to test whether the configuration is correct:
php-fpm8.1 -t (or php-fpm8.1 --test)
As an aside, you can also check the configuration for Apache:
apachectl configtest
Checking the php-fpm status
Go to /etc/php/X.Y/fpm/pool.d/www.conf (e.g. /etc/php/7.4/fpm/pool.d/www.conf) and look for pm.status_path. Uncomment it and add /phpXY-fpm/status, e.g.:
pm.status_path = /php74-fpm/status
(Do NOT insert a dot here)
Then, in your /etc/apache2/apache2.conf file, add:
<LocationMatch "/php74-fpm/status">
ProxyPass "unix:/var/run/php/php7.4-fpm.sock|fcgi://127.0.0.1/php74-fpm/status"
</LocationMatch>
Obviously, the paths must match exactly. Restart Apache and php-fpm, and go to your web server’s location, e.g.:
http://213.165.72.180/php81-fpm/status
This should give you something like:
pool: www
process manager: dynamic
start time: 26/Oct/2023:13:37:45 +0000
start since: 13
accepted conn: 1
listen queue: 0
max listen queue: 0
listen queue len: 0
idle processes: 31
active processes: 1
total processes: 32
max active processes: 1
max children reached: 0
slow requests: 0
Please note: this only shows the status for that specific pool. In the example above, that’s www.
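When checking the status page repeatedly, the most interesting counter is max children reached; a parsing sketch (the sample output is inlined, in practice pipe in the real status page):

```shell
# Sample status output (in practice: fetch the status URL with curl -s)
status='idle processes: 31
active processes: 1
total processes: 32
max children reached: 0'
reached=$(printf '%s\n' "$status" | sed -n 's/^max children reached: *//p')
if [ "$reached" = "0" ]; then
  echo "pool has never hit pm.max_children"
else
  echo "pool hit pm.max_children $reached time(s): consider raising it"
fi
```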
Sluggish Moodle Or Totara Site
- Check the images that themes like Adaptle allow you to upload for the frontpage, header and background. We once had a Moodle site that was loading two essentially identical images of 7 MB each. Needless to say, this made loading the site quite sluggish.
- The scheduled tasks may add up, especially if you have many users and many automatic cohort syncs (audiences, in Totara). Please note that Totara (and Moodle prior to version 3.7) does not have the task_log table, so no serious logging takes place for scheduled tasks.
MySQL Connection Timeout Or Lost Connection
From your-vps server, /etc/mysql/mysql.conf.d/mysqld.cnf:
## TEMP 20230322 the lead engineer - successfully imported a 2.8 GB dump with these settings - main thing is innodb_buffer_pool_size which must be SMALLER
## See https://dba.stackexchange.com/questions/124964/error-2013-hy000-lost-connection-to-mysql-server-during-query-while-load-of-my
#innodb_lock_wait_timeout = 60
#net_read_timeout = 28800
#net_write_timeout = 28800
#connect_timeout = 28800
#wait_timeout = 28800
#delayed_insert_timeout = 28800
#innodb_buffer_pool_size = 4294967296
Timeout Issues
mod_fcgid: read data timeout in 40 seconds
This error may pop up if you’re using fcgi and you’re trying to upload a large scorm file (say 400M) that requires quite some processing time:
mod_fcgid: read data timeout in 40 seconds
(This error may be disguised as a HTTP 500 error, but the Apache log file should contain the actual error message.)
The solution is to increase the value of FcgidIOTimeout in your vhost configuration, e.g.:
FcgidIOTimeout 600
This directive can be put directly inside the VirtualHost part of the Apache configuration file for the website, e.g.:
<VirtualHost 11.22.33.44:443>
FcgidIOTimeout 600
Don’t forget to save the conf file and restart the webserver afterwards, with /etc/init.d/apache2 graceful.
Varnish Timeout
Usually it’s enough to increase max_execution_time in php.ini to solve any timeout issues. However, I recently (20250206) encountered a timeout issue on an AWS EC2 instance that turned out to be using Varnish, a caching tool. Varnish’s timeout was set (in /etc/varnish/default.vcl) to 30s. After setting bereq.first_byte_timeout to 300s (note the trailing ‘s’, by the way: Varnish completely crashes the site if you leave it out), I had to restart Varnish:
sudo systemctl restart varnish
And that solved the issue (together with the max_execution_time increase, of course).
Common But Hard to Spot Issues
Spaces in PHP Files
If you inadvertently introduce a space before the <?php opening tag, especially in config.php, things will go wrong. It pollutes the output buffer whenever output is generated and sent to the browser through PHP, e.g. by pluginfile.php or theme/yui_combo.php; examples are images, CSS, and JavaScript. If any output, such as a space, is sent before the headers, the results are catastrophic, especially for binary files (CSS and JavaScript are often zipped before being sent to the browser, making those resources binary too).
Symptoms: images do not load, no layout (css) is applied and menus (requiring javascript) do not work.
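A quick way to check a suspect file is to look at its first bytes: a healthy PHP file starts with exactly <?php. A sketch that writes a deliberately broken sample file to /tmp to demonstrate the check:

```shell
# Create a sample config.php with a leading space (deliberately broken)
printf ' <?php\n$x = 1;\n' > /tmp/config-sample.php
# A healthy file starts with exactly "<?php" in its first five bytes
if head -c 5 /tmp/config-sample.php | grep -q '^<?php'; then
  echo "clean: file starts with <?php"
else
  echo "broken: output before the <?php tag"
fi
```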
Configuration Issues
Unexpected Name Showing as Sender in Course Welcome Emails
We encountered a case where Moodle’s course welcome emails were being sent with the name of an unrelated user as the sender. This was confusing, since the person whose name appeared had nothing to do with the specific course.
Root cause
By default, the setting Send course welcome message – From in the Manual enrolments plugin (/admin/settings.php?section=enrolsettingsmanual) was configured as From the course contact. Moodle interprets “course contact” as the first user it finds with a role listed in the coursecontact configuration (commonly the editingteacher role). Importantly, this lookup also includes role assignments at the system or category level, not only at the course level. Because a user had the editingteacher role at system level, their name was used as the sender in welcome emails across the site.
Resolution
We changed the configuration under Site administration → Plugins → Enrolments → Manual enrolments.
Setting: Send course welcome message – From: From the no-reply address
This ensures that all course welcome emails are now sent from the neutral noreply address (e.g. noreply@example.com), and no personal user names appear unexpectedly as the sender.
Best practice
Always configure Send course welcome message – From to From the no-reply address unless there is a strong reason to display the course contact’s name. This avoids confusion and ensures consistency across all courses.
Case Study: Extremely Slow Course Creation (ICM 2025)
Symptoms
Creating a new course in Moodle 4.1 took minutes to complete, whereas on the previous host it completed in under 30 seconds. Logging in via SAML2 also appeared slower.
Root Cause
The issue was caused by inefficient MySQL write performance, specifically InnoDB’s default behavior of flushing transaction logs to disk after every single write operation. This configuration is safe but can be extremely slow on systems handling many small transactions (like Moodle’s course creation process).
In this case, the new hosting environment used SSD storage but was still using conservative, HDD-era MySQL defaults:
innodb_flush_log_at_trx_commit = 1
innodb_io_capacity = 200
innodb_io_capacity_max = 2000
This caused excessive fsync operations (waiting for every commit to fully write to disk).
Resolution
The hosting partner (Kaliber) optimized MySQL for modern SSD storage and Moodle’s workload. The following key changes were applied:
innodb_io_capacity = 1000
innodb_io_capacity_max = 2000
innodb_flush_log_at_trx_commit = 2
innodb_read_io_threads = 4
innodb_write_io_threads = 4
Explanation:
- Increasing innodb_io_capacity allows MySQL to perform more I/O operations per second, aligning with SSD capabilities.
- Setting innodb_flush_log_at_trx_commit = 2 reduces log flushes to once per second instead of every transaction, a safe compromise that greatly reduces latency.
- Increasing I/O threads improves concurrency on SSDs with multiple queues.
Testing on the development server confirmed that even the most aggressive options (O_DIRECT_NO_FSYNC) only gave minimal additional gain; the 1 fsync per second configuration was the “golden mean.”
After implementing these settings, the “Create new course” operation returned to expected performance levels (seconds rather than minutes).
Lessons Learned / Best Practice
- Moodle’s “create course” is a write-heavy operation; database write latency dominates performance.
- Always review and tune MySQL’s InnoDB write path after migrations.
- On SSDs, conservative defaults such as innodb_flush_log_at_trx_commit = 1 and low I/O capacity settings can cause severe slowdowns. Start with:
innodb_io_capacity = 1000
innodb_io_capacity_max = 2000
innodb_flush_log_at_trx_commit = 2
- If performance issues persist, consider checking:
- Actual disk type (ensure SSD/NVMe)
- innodb_flush_method (O_DIRECT is optimal for SSD)
- Background flushing and redo log sizes
Reference
This issue occurred on the ICM production system (2025) after migrating Moodle 4.1 to a new host. It was resolved by Bart (Kaliber) following performance analysis and parameter tuning.
Moodle upgrades are straightforward in theory and full of edge cases in practice: a PHP version that needs bumping, a theme that doesn’t survive the transition, a third-party plugin that’s been abandoned, a collation mismatch that only shows up halfway through the upgrade. This guide walks through the full workflow for both major and minor upgrades using Git, with the checks and commands that actually catch problems before they hit production.
Minor vs. major upgrades
A minor upgrade (for example 4.5.3 to 4.5.4) is usually a security or maintenance patch. You can deploy it by overwriting the core codebase. A major upgrade (for example 4.1 to 4.5) typically also requires:
- Updating parts of the LAMP/LEMP stack: PHP, database, sometimes the web server
- Reviewing or rebuilding a customized theme
- Re-validating third-party plugins against the new version
Most of this guide is about major upgrades. The minor upgrade section at the end covers what differs.
Read Moodle’s upgrade documentation first
Start with the official Upgrading page on docs.moodle.org. The core procedure comes down to:
- Back up everything: database, moodledata directory, and the entire codebase (usually under public_html, htdocs, or a moodle/ subdirectory)
- Replace the codebase with the new version
- Copy the old config.php into the new tree
- Reinstall the new versions of any third-party plugins
Everything else in this guide is about making that procedure survive contact with reality.
Check the server requirements
Verify that the server meets the new Moodle version’s minimum requirements. The things that tend to bite:
- Database: MySQL or MariaDB version, plus row format. Upgrades crossing the 3.1 boundary need the Antelope-to-Barracuda conversion and the utf8mb4_unicode_ci collation.
- PHP: version and all required extensions.
- Web server: Moodle itself doesn’t care much, but PHP-FPM socket paths and proxy configuration change when you bump PHP versions.
If the stack doesn’t meet the requirements, do not update the stack yet. An OS or PHP bump on the running site will break the current Moodle before the new code is in place. Plan those changes so they happen during the upgrade maintenance window, with the site in maintenance mode and the new codebase ready to deploy.
Installing a separate PHP version with PHP-FPM on Ubuntu
Running two PHP versions side by side lets you upgrade Moodle without disturbing other sites on the same server. (See also this reference.)
Add Ondřej Surý’s Apache and PHP repositories:
sudo add-apt-repository ppa:ondrej/apache2
sudo apt-get update
sudo apt install software-properties-common libapache2-mod-fcgid
sudo add-apt-repository ppa:ondrej/php && sudo apt update
Install the target PHP version with the extensions Moodle needs:
sudo apt-get install -y php8.3-{bcmath,bz2,curl,gd,intl,mbstring,mysql,soap,xml,zip,common,fpm}
In the vhost configuration, tell Apache to hand PHP requests to this FPM pool. Place the FilesMatch directive before the Directory block:
<FilesMatch \.php$>
SetHandler "proxy:unix:/var/run/php/php8.3-fpm.sock|fcgi://localhost/"
</FilesMatch>
<Directory /home/yoursite/public_html>
...
</Directory>
Enable the proxy modules if they aren’t already, and reload Apache:
sudo a2enmod proxy_fcgi proxy
sudo systemctl reload apache2
Review /etc/php/8.3/fpm/php.ini and carry over the tuning from your old php.ini — at minimum upload_max_filesize, post_max_size, max_execution_time, memory_limit, and max_input_vars. Restart FPM after any change:
sudo systemctl restart php8.3-fpm
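Comparing the two php.ini files can be scripted; a rough sketch (both paths are assumptions, and the grep deliberately anchors on the key name so commented-out lines are skipped):

```shell
old_ini=/etc/php/8.1/fpm/php.ini   # assumed path of the old version's ini
new_ini=/etc/php/8.3/fpm/php.ini   # assumed path of the new version's ini
for key in upload_max_filesize post_max_size max_execution_time memory_limit max_input_vars; do
  printf '%-20s old=%s new=%s\n' "$key" \
    "$(grep -E "^$key[[:space:]]*=" "$old_ini" 2>/dev/null | cut -d= -f2 | tr -d ' ')" \
    "$(grep -E "^$key[[:space:]]*=" "$new_ini" 2>/dev/null | cut -d= -f2 | tr -d ' ')"
done
```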
PHP-FPM also needs its pool settings tuned for your traffic profile (pm, pm.max_children, and friends), which is out of scope for this guide.
Update the cron job
If the PHP version has changed and you’re running multiple PHP versions on the server, update the Moodle cron command accordingly. For example, from:
/usr/bin/php8.1 /home/yoursite/public_html/admin/cli/cron.php >/dev/null
to:
/usr/bin/php8.3 /home/yoursite/public_html/admin/cli/cron.php >/dev/null
Update backup scripts
Any custom backup scripts that invoke PHP directly should be updated to the new PHP version. Also double-check any hard-coded paths: the location of config.php and the default data directory occasionally shift across major Moodle or Totara versions, and a backup script that silently misses them will fail exactly when you need it.
Take inventory of third-party plugins
Visit /admin/plugins.php on the running site and list every plugin marked Additional. These are the third-party plugins you will need to upgrade separately, usually from moodle.org/plugins/.
For very old installations, you may need to chain upgrades (for example 3.1 → 3.9 → 4.1 → 4.5) because Moodle only supports direct upgrades within a bounded range. For each intermediate version, check that the plugin set you depend on has a release covering that step.
The theme deserves special attention
Themes rarely survive a major upgrade unchanged, unless the theme is a stock Moodle theme with only configuration customizations (logo, colors, favicon). A heavily customized child theme almost always needs developer work to run on the new version. Budget for this before you start — discovering it on upgrade day is expensive.
Watch for non-Moodle content inside the Moodle directory
External files — images, PDFs, archived course exports, the occasional forgotten test page — sometimes end up inside the wwwroot even though they don’t belong there. Inventory them before the upgrade and restore them in the same relative paths afterwards, or take the opportunity to move them to a proper asset location outside the Moodle tree.
Copying third-party plugins between code bases
A quick script to copy directories present in a modified code base but missing from a clean upstream tree. Useful when you’re reconstituting the plugin set on top of a clean Moodle install:
#!/bin/bash
clean="/home/you/temp/moodle-clean"
modified="/home/you/temp/moodle-modified"
target="/home/you/php/example-site/public_html"
# Find directories in modified that are missing in clean, excluding hidden directories
find "$modified" -mindepth 1 -type d ! -path "*/.*" | while read dir; do
relative_path="${dir#$modified/}"
if [ ! -d "$clean/$relative_path" ]; then
echo "Copying: $relative_path"
rsync -av "$dir" "$target/${relative_path%/*}/"
fi
done
Check for core hacks
Core hacks on modern Moodle are rare, but verify rather than assume. Diff the running code base against the exact official build for the same version:
diff -qr --exclude='.git' moodle-clean/ moodle-current/
Any file reported as different needs a decision: port the change forward, drop it, or replace it with a supported mechanism (a local plugin, a hook, a subtheme override). Unannotated core hacks are a major upgrade’s favorite way to go sideways.
Finding the exact Moodle build
To compare against the right upstream commit, pin your clone to the same version the running site is on. In a local clone of the official Moodle repository:
1. Switch to and update the stable branch
git checkout MOODLE_405_STABLE
git pull origin MOODLE_405_STABLE
This ensures your local branch contains every version bump and bugfix up to the latest commit.
2. Search for the version bump in version.php
Get the build date from $release in the running site’s version.php (e.g. 20250131), then:
git log -S "20250131" -- version.php
The -S flag looks for commits that add or remove that literal string in version.php. The top result is the commit where Moodle’s maintainers set $release = '4.1.15+ (Build: 20250131)';.
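If you'd rather extract the build date from the running site's version.php automatically, a small sed sketch (the sample $release line is inlined for illustration; in practice pipe in grep '\$release' version.php):

```shell
# Sample $release line as found in version.php (inlined for illustration)
release="\$release = '4.1.15+ (Build: 20250131)';"
build=$(printf '%s\n' "$release" | sed -n 's/.*Build: \([0-9]*\).*/\1/p')
echo "$build"
```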
3. Check out that specific commit
git checkout 59a1f3e4a2b8c5d7e8f9a0123b4c5d6e7f8a9b0c
Or, if you want a named branch at that point:
git checkout -b moodle-4.1.15-build20250131 59a1f3e4a2b8c5d7e8f9a0123b4c5d6e7f8a9b0c
4. Verify
grep -E '\$version|\$release' version.php
You should see exactly:
$version = 2022112815.06;
$release = '4.1.15+ (Build: 20250131)';
Your working copy is now pinned to the exact same Moodle build the running site is supposedly on, and any diff output represents a real modification.
Staging upgrade for stakeholder sign-off
Before touching production, build a staging environment that mirrors the live one as closely as you can: same OS, same PHP version, same database engine, same Moodle version as the starting point. Give it its own domain — for example contoso-staging.example.com.
Create the vhost as you would for a fresh install, but instead of installing a new site, restore a copy of production and upgrade that.
Setting expectations with stakeholders
Make one thing unambiguous to anyone you invite to test: changes made on the staging site will not transfer to the live upgrade. Any content, configuration, or test data entered there is scratch.
Focus testing on the customizations — the custom theme, custom plugins, integrations with external systems (SSO, HR sync, reporting tools). Moodle core and widely-used community plugins have already been tested by thousands of sites on the same version. Your custom stack has not.
Getting the upgraded code into the repository
The cleanest way to deploy the new Moodle version is through a dedicated upgrade branch, based on the official upstream. In your local clone of the site’s repository, add the official Moodle repo as upstream:
git remote add upstream https://github.com/moodle/moodle.git
Fetch the target stable branch (confirm the exact branch name with git remote show upstream):
git fetch upstream MOODLE_405_STABLE
Create a local tracking branch:
git checkout -b moodle405 upstream/MOODLE_405_STABLE
Optionally push to your own origin for safekeeping:
git push -u origin moodle405
Create the upgrade branch based on it:
git checkout -b upgrade moodle405
Then drop in the new versions of your third-party plugins, commit, and push:
git commit -a -m "Bring in Moodle 4.5 code and updated plugins"
git push --set-upstream origin upgrade
Upgrading the staging site
On the staging server:
- Restore the production database
- Restore the production moodledata directory
- Check out the upgrade branch
- Confirm all third-party plugins are in place at the new versions
- Copy config-dist.php to config.php and adjust wwwroot, dataroot, database credentials, and any directory references
- Check whether any language packs need updating for the new version
Then visit /admin/ in a browser. Look carefully at the Current release information page before clicking Continue:
- Are all Server Checks and Other Checks green? HTTPS warnings are acceptable on a throwaway staging domain.
- Any collation issues? Don’t just change the value in config.php; run the CLI converter:
php admin/cli/mysql_collation.php --collation=utf8mb4_unicode_ci
Once the checks are clean, run the actual upgrade from the command line rather than the web UI. The web upgrade will hit PHP or Apache timeouts on any reasonably sized database.
- Confirm your CLI PHP matches the PHP-FPM version serving the site: php -v.
- Start a screen (or tmux) session so the upgrade survives a broken SSH connection.
- Run the upgrade:
php admin/cli/upgrade.php
If multiple PHP versions are installed, specify the right one explicitly, for example php8.3 admin/cli/upgrade.php.
Post-upgrade smoke tests
- Can Moodle send email? Send a test from Site administration → Server → Email → Test outgoing mail configuration.
- Is cron running cleanly? Run admin/cli/cron.php once manually and watch the output.
- Spot-check the custom theme, custom plugins, and any SSO or external integrations.
Handing staging to stakeholders
Share the staging URL and credentials with the people who need to sign off. Use a secure channel for the credentials — a password manager share, an encrypted file, or another out-of-band mechanism — not plain email. Set a clear deadline for feedback so the upgrade doesn’t stall for weeks.
Performing the live upgrade
Once staging is approved, prepare the Git repository. Fast-forward master to the upgrade branch:
git checkout upgrade
git pull
git checkout master
git pull
git reset --hard upgrade
git push --force origin master
If your team uses main, substitute. The force-push is deliberate — a major upgrade is a hard cutover, and a fast-forward would leave the old pre-upgrade history tangled with the new code.
Then, on the live server:
- Confirm all third-party plugins are committed to the branch.
- Put Moodle in maintenance mode: Site administration → Server → Maintenance mode, or visit /admin/settings.php?section=maintenancemode.
- Take a complete backup of the moodledata directory, the database, and the entire code base. If the hosting environment supports it, a full system snapshot is faster and more reliable.
- If the OS or stack needs updating (for example, bumping PHP), do it now. Moodle is in maintenance mode, so the transient breakage is contained. Confirm that no other sites on the server depend on the old PHP version or database software.
- Preserve your php.ini tuning. Before cutting over, note upload_max_filesize, post_max_size, max_execution_time, memory_limit, and max_input_vars, and carry them forward into the new version’s php.ini.
- Replace the source code:
git checkout master
git fetch origin
git reset --hard origin/master
- Restore config.php from the backup of the old code base.
- Visit /admin/ to check for plugin issues before triggering the upgrade.
- Start a screen session and run the upgrade:
php admin/cli/upgrade.php
- Run the same smoke tests you ran on staging: email, cron, custom theme, integrations.
- If staging accumulated backend configuration you want on live (for example, customized theme settings), export it with Site administration → Development → Admin presets from staging, and import it on live via /admin/tool/admin_presets/index.php. Beware: if the preset was exported from a site with maintenance mode disabled, the import will disable maintenance mode on the target too.
- Take the site out of maintenance mode.
- Notify stakeholders that the upgrade is complete.
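The live cutover steps above can be condensed into a sketch like the following. The dirroot path and the maintenance-mode CLI usage are assumptions to adapt to your install; with DRY_RUN=1 (the default here) the script only prints what it would do:

```shell
# Condensed cutover sketch. Paths are assumptions - adjust to your install.
DRY_RUN="${DRY_RUN:-1}"
MOODLE="${MOODLE:-/var/www/html}"   # assumed dirroot

# In dry-run mode, print the command instead of executing it.
run() { if [ "$DRY_RUN" = 1 ]; then echo "would run: $*"; else "$@"; fi; }

run php "$MOODLE/admin/cli/maintenance.php" --enable
run git -C "$MOODLE" fetch origin
run git -C "$MOODLE" reset --hard origin/master
run php "$MOODLE/admin/cli/upgrade.php" --non-interactive
run php "$MOODLE/admin/cli/maintenance.php" --disable
```

Run it once with DRY_RUN=1 to review the plan, then with DRY_RUN=0 inside the screen session. It deliberately omits the backup and smoke-test steps, which deserve human attention.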
Minor upgrades
Minor upgrades typically land in response to a Moodle Security Alert — the email from securityalerts@moodle.org with subject “Moodle Security Alerts.” For any site with real users, these patches should be applied promptly. For sites with contractual SLAs, it’s often non-negotiable.
Doing it manually
The manual minor-upgrade flow mirrors the major one but is much shorter:
- Fetch upstream and merge the latest minor tag into your site branch:
git fetch upstream
git merge v4.5.4
- Push, then pull on the server.
- Enable maintenance mode.
- Run php admin/cli/upgrade.php --non-interactive.
- Disable maintenance mode.
- Smoke-test: email, cron, a logged-in course view.
Third-party plugins stay in place because this is a merge, not a tree replacement. As long as the plugins are committed to the site branch, the merge only touches the core files that changed upstream.
Automating it
Minor upgrades are a good candidate for automation: they arrive on a predictable cadence, follow a deterministic flow, and rarely need human judgement. Solin runs managed client sites through an internal pipeline that:
- Polls upstream Moodle tags nightly and selects the latest eligible patch within the site’s major version
- Merges the tag into the site branch and pushes
- Triggers a deploy job via webhook — the job pulls, enables maintenance mode, runs the non-interactive upgrade, disables maintenance mode, and runs health checks
- On failure, rolls back automatically from the pre-deploy backup and alerts the on-call engineer
If you build something similar in-house, the main pitfalls are:
- Don’t force-rewrite site branches — that’s how custom plugins disappear.
- Make the health check hit a URL that actually exercises the database, not just a cached front page.
- Log enough to reconstruct what happened after an unattended rollback at 3am.
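A minimal health check along these lines passes only if the page returned HTTP 200 and contains a string that is rendered from the database. The URL and sentinel string below are assumptions; pick a page and marker that fit your site:

```shell
# check_health STATUS BODY SENTINEL
# Fails on a non-200 status or when the sentinel string is absent.
check_health() {
  local status="$1" body="$2" sentinel="$3"
  [ "$status" = "200" ] || { echo "FAIL: status $status"; return 1; }
  case "$body" in
    *"$sentinel"*) echo "OK" ;;
    *) echo "FAIL: sentinel '$sentinel' missing"; return 1 ;;
  esac
}

# In the pipeline, feed it from curl, e.g. (URL is an assumption):
#   status=$(curl -s -o /tmp/body -w '%{http_code}' https://example.org/login/index.php)
#   check_health "$status" "$(cat /tmp/body)" 'loginform'
check_health 200 '<form id="loginform">...</form>' 'loginform'
```

A cached front page can return 200 with a dead database behind it; the login form (or any block that reads site config) cannot.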
Troubleshooting
Forced HTTPS redirect on a staging domain
Symptom: a test PHP script runs fine on the staging vhost, but any request to Moodle itself is redirected to the server’s default site or forced to https:// even though there’s no certificate on the staging domain.
Cause: the running site has $CFG->overridetossl set. This was a common pattern on older Moodle installations that only wanted HTTPS on login pages. After an upgrade, and especially when restoring into a staging environment without a certificate, it produces unwanted forced redirects.
Fix: in lib/setuplib.php, inside initialise_fullme(), add unset_config('overridetossl'); at the top of the relevant block:
unset_config('overridetossl');
if (!empty($CFG->overridetossl)) {
if (strpos($CFG->wwwroot, 'http://') === 0) {
$CFG->wwwroot = str_replace('http:', 'https:', $CFG->wwwroot);
} else {
unset_config('overridetossl');
}
}
This is a staging workaround. Don’t commit it to the branch that ships to production.
H5P content type downloads
After an upgrade, H5P content types may not appear until the “Download available H5P content types from h5p.org” scheduled task has had a chance to run. To force it, temporarily set that task’s schedule to every 1 minute under Site administration → Server → Scheduled tasks, let it run once, then reset it to the default schedule.
Need help?
Solin has been doing Moodle and Totara upgrades for over 15 years — from small single-tenant sites to multi-tenant enterprise installations with heavy customization. If you’d like a second pair of eyes on an upgrade plan, or want the whole thing handled end to end, get in touch.
Legacy Totara and Moodle checkouts that require PHP 7.4 run into problems when the host's default PHP is newer. Running the correct PHP and PHPUnit versions for a specific release is a prerequisite for reliable plugin tests. This guide covers setting up a parallel PHP 7.4 environment alongside a newer system PHP and configuring PHPUnit to use it — creating a dedicated PHP launcher directory to work around the PATH reset behavior in Totara's PHPUnit bootstrap.
Use this when a legacy Totara/Moodle checkout needs PHPUnit under PHP 7.4, but the host default php binary is a newer version and Totara's PHPUnit bootstrap keeps resolving nested php calls to the wrong interpreter.
Scope
This applies when all of the following are true:
- the application requires PHP 7.4 for test tooling
- `/usr/bin/php` points to a newer version
- Totara's `test/phpunit/phpunit.php` bootstrap is used
- Composer dependency install is blocked by a newer Composer or audit defaults
Why the normal PATH trick fails
Totara's PHPUnit wrapper resets PATH using dirname(PHP_BINARY). If you start it with /usr/bin/php7.4, it prepends /usr/bin again, and nested php calls still resolve to /usr/bin/php instead of PHP 7.4.
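The launcher-directory trick can be demonstrated with a stand-in binary ('date' here plays the role of /usr/bin/php7.4). Because the launcher holds a copy rather than a symlink or wrapper script, anything that inspects the running binary's path, as dirname(PHP_BINARY) does, sees the launcher directory instead of /usr/bin:

```shell
# Copy a binary into a front-of-PATH launcher dir; resolution now
# points inside the launcher dir, so a PATH rebuilt from the binary's
# own directory keeps resolving there too.
set -e
mkdir -p /tmp/launcher_demo
cp "$(command -v date)" /tmp/launcher_demo/mytool
PATH="/tmp/launcher_demo:$PATH"
command -v mytool
```

With the real interpreter, `/tmp/php74bin/php -r 'echo PHP_BINARY;'` should likewise print a path inside the launcher directory, as shown in the Example section below.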
Recommended approach
- Create a dedicated launcher directory with a copied `php` binary for PHP 7.4.
- Skip Totara's Composer bootstrap if necessary and install PHPUnit dependencies manually with a Composer version that still supports the locked package set.
- Use Totara's lower-level `server/admin/tool/phpunit/cli/util.php` entrypoint directly, not `test/phpunit/phpunit.php init`, if the wrapper still leaks to the system PHP.
Example
Assumptions:
- PHP 7.4 binary: `/usr/bin/php7.4`
- project root: `/path/to/project/totara`
Create the launcher:
mkdir -p /tmp/php74bin
cp /usr/bin/php7.4 /tmp/php74bin/php
/tmp/php74bin/php -r 'echo PHP_BINARY, PHP_EOL;'
Expected output:
/tmp/php74bin/php
If the Totara checkout expects totara/config.php, provide the instance bridge temporarily:
ln -sf /path/to/instance/config.php /path/to/project/totara/config.php
Prepare Composer manually when Totara's init step cannot:
cd /path/to/project/totara
curl -L https://getcomposer.org/download/2.6.6/composer.phar -o composer.phar
/tmp/php74bin/php composer.phar --version
/tmp/php74bin/php composer.phar -d test/phpunit update phpunit/phpunit brianium/paratest -W
Provide a PHPUnit config if the instance config does not define phpunit_* settings:
cat > /tmp/project-phpunit-config.php <<'PHP'
<?php
$CFG = new stdClass();
$CFG->dataroot = '/tmp/project_phpunit_data';
$CFG->dbtype = 'pgsql';
$CFG->dblibrary = 'native';
$CFG->dbhost = 'localhost';
$CFG->dbname = 'project_db';
$CFG->dbuser = 'dbuser';
$CFG->dbpass = 'dbpass';
$CFG->prefix = 'phpu_';
$CFG->dboptions = [
'dbpersist' => false,
'dbsocket' => false,
'dbport' => '',
];
PHP
ln -sf /tmp/project-phpunit-config.php /path/to/project/totara/test/phpunit/config.php
Initialize the isolated PHPUnit site directly:
cd /path/to/project/totara
/tmp/php74bin/php -d max_input_vars=5000 server/admin/tool/phpunit/cli/util.php --install
/tmp/php74bin/php -d max_input_vars=5000 server/admin/tool/phpunit/cli/util.php --buildconfig
Run a targeted test:
cd /path/to/project/totara
/tmp/php74bin/php -d max_input_vars=5000 server/admin/tool/phpunit/cli/util.php --run --filter local_multitenant_delete_capability_testcase server/local/multitenant/tests/delete_capability_test.php
Cleanup:
rm -f /path/to/project/totara/config.php
rm -f /path/to/project/totara/test/phpunit/config.php
Notes
- Prefer a project-local `composer.phar` instead of changing the system Composer.
- If Composer still blocks old locked packages, either use an older Composer release or repair the PHPUnit lockfile under PHP 7.4.
- Do not use a shell wrapper script for `php`; in this environment `PHP_BINARY` still resolved to the original binary path. A copied PHP 7.4 binary worked.
- The `test/phpunit/phpunit.php init` wrapper may still leak to the host default `php`. If that happens, use `server/admin/tool/phpunit/cli/util.php` directly.
Migrating a Moodle database from MySQL to PostgreSQL requires careful sequencing: installing PostgreSQL and pgloader, creating the target database, and handling data type conversions. This guide covers the practical workflow including pitfalls with Ubuntu's pgloader package and PostgreSQL restore procedures.
Migrating from MySQL to PostgreSQL
Install PostgreSQL
Add the following packages:
apt-get install postgresql
apt-get install php8.1-pgsql
apt-get install pgloader
Make sure the php8.1-pgsql package matches the PHP version your Moodle or Totara website actually uses.
Install pgloader
The version of pgloader that ships with Ubuntu 22.04 (through apt-get) did not work for me; it produced an error for which I found no solution.
Instead, I compiled pgloader from source as explained here:
sudo apt install sbcl unzip libsqlite3-dev gawk curl make freetds-dev libzip-dev
curl -fsSLO https://github.com/dimitri/pgloader/archive/v3.6.10.tar.gz
tar -xvf v3.6.10.tar.gz
cd pgloader-3.6.10/
make pgloader
mv ./build/bin/pgloader /usr/local/bin/
And finally check whether the installation worked:
root@localhost:/home/sandbox/pgloader-3.6.10# pgloader --version
pgloader version "3.6.7~devel"
compiled with SBCL 2.1.11.debian
Configure PostgreSQL
Connect to the database server:
sudo -u postgres psql
Set a password for the postgres user:
ALTER USER postgres PASSWORD '<password>';
Replace the <password> with the one you want. Then quit with:
\q
(From: Install PostgreSQL Linux)
Create Database
On the command prompt (so, not in the psql client):
sudo -u postgres createdb -E utf8 {dbname}
Make sure that MySQL is using:
default-authentication-plugin=mysql_native_password
(You can specify that in /etc/mysql/mysql.cnf under the [mysqld] section. Simply add that section if it doesn’t exist. Don’t forget to restart the MySQL server: service mysql restart).
Also make sure that the MySQL user is using this authentication method:
ALTER USER 'sandbox'@'localhost' IDENTIFIED WITH mysql_native_password BY 'secretpassword';
Migrate MySQL Database to PostgreSQL
Use this command to do the actual migration (which is really a copy & convert, since the original MySQL database is left intact):
pgloader mysql://sandbox:passwd@127.0.0.1/sample_db pgsql://postgres:passwd@localhost/sampledb
Note: in practice you will need the prefetch option (see the heap-exhausted issue below):
pgloader --with "prefetch rows = 1000" mysql://sandbox:passwd@127.0.0.1/sample_db pgsql://postgres:passwd@localhost/sampledb
To see if it worked, list all tables from within the psql client:
\dt
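Beyond listing tables, it is worth spot-checking row counts between the two databases. A sketch of such a check, with stubbed counts; in practice feed the numbers from both databases (credentials below are assumptions):

```shell
# Post-migration row-count comparison. In practice:
#   m=$(mysql -N -u sandbox -ppasswd sample_db -e "SELECT COUNT(*) FROM mdl_user")
#   p=$(psql -tA -U postgres -d sampledb -c "SELECT COUNT(*) FROM mdl_user")
#   compare_counts mdl_user "$m" "$p"
compare_counts() {  # usage: compare_counts TABLE MYSQL_COUNT PG_COUNT
  if [ "$2" -eq "$3" ]; then
    echo "OK $1 ($2 rows)"
  else
    echo "MISMATCH $1 mysql=$2 pg=$3"
  fi
}
compare_counts mdl_user 41927 41927
compare_counts mdl_course 310 309
```

Checking a handful of large tables (mdl_user, mdl_course, mdl_logstore_standard_log) catches most partial-import failures early.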
Issue: Heap exhausted
2024-03-22T13:02:33.040001Z LOG pgloader version "3.6.7~devel"
2024-03-22T13:02:33.188004Z LOG Migrating from #<MYSQL-CONNECTION mysql://sandbox@127.0.0.1:3306/fabrikamlive {10072891C3}>
2024-03-22T13:02:33.188004Z LOG Migrating into #<PGSQL-CONNECTION pgsql://postgres@localhost:5432/fabrikamsandbox {10074545A3}>
Heap exhausted during garbage collection: 0 bytes available, 32 requested.
Gen Boxed Code Raw LgBox LgCode LgRaw Pin Alloc Waste Trig WP GCs Mem-age
fatal error encountered in SBCL pid 859879 tid 859888:
GC invariant lost, file "gencgc.c", line 488
Solved this particular issue by adding this:
--with "prefetch rows = 1000"
So:
pgloader --with "prefetch rows = 1000" mysql://sandbox:passwd@127.0.0.1/sample_db pgsql://postgres:passwd@localhost/sampledb
Issue: out of shared memory
I subsequently ran into:
ERROR Database error 53200: out of shared memory
HINT: You might need to increase max_locks_per_transaction.
It turns out that this is a PostgreSQL setting which can be adjusted here:
vim /etc/postgresql/14/main/postgresql.conf
I set it to:
max_locks_per_transaction = 128
Restart the PostgreSQL server:
service postgresql restart
This seems to have worked:
fabrikamlive.mdl_workshopform_rubric_config 0 0 11.144s
fabrikamlive.mdl_workspace 0 0 11.072s
fabrikamlive.mdl_workspace_member_request 0 0 11.084s
------------------------------------------------------------------ --------- --------- --------- --------------
COPY Threads Completion 0 4 28m1.160s
Create Indexes 0 4334 3h59m42.459s
Index Build Completion 0 4334 0.164s
Reset Sequences 0 915 2.876s
Primary Keys 0 915 0.792s
Create Foreign Keys 0 148 0.424s
Create Triggers 0 0 0.000s
Set Search Path 0 1 0.000s
Install Comments 0 747 0.264s
------------------------------------------------------------------ --------- --------- --------- --------------
Total import time 86727644 20.9 GB 4h27m48.138s
Issue 20240323
This should select columns on fabrikamsandbox, but doesn’t:
SELECT a.attnum, a.attname AS field, t.typname AS type, a.attlen, a.atttypmod, a.attnotnull, a.atthasdef, pg_get_expr(d.adbin, d.adrelid) AS adsrc FROM pg_catalog.pg_class c JOIN pg_catalog.pg_namespace as ns ON ns.oid = c.relnamespace JOIN pg_catalog.pg_attribute a ON a.attrelid = c.oid JOIN pg_catalog.pg_type t ON t.oid = a.atttypid LEFT JOIN pg_catalog.pg_attrdef d ON (d.adrelid = c.oid AND d.adnum = a.attnum) WHERE relkind = 'r' AND c.relname = 'mdl_context' AND c.reltype > 0 AND a.attnum > 0 AND (ns.nspname = current_schema() OR ns.oid = pg_my_temp_schema()) ORDER BY a.attnum
It looks like pgloader does not populate the pg_catalog tables.
20240415
But in reality, the solution is very simple: in PostgreSQL, each database has a schema. By default, this schema is named public. If you don’t use the fully qualified name of a database object, such as a table, in PostgreSQL, then the public schema will be used implicitly.
Now, pgloader names the new schema after the imported database. And since Totara does not explicitly mention the schema name, none of the Totara queries will work.
The solution is to connect to the database and then to simply rename the schema created by pgloader to ‘public’:
alter schema public rename to original_public;
alter schema fabrikamlive rename to public;
We can do this, because the PostgreSQL documentation states: “There is nothing special about the public schema except that it exists by default.”
Dump PostgreSQL Database
sudo -u postgres pg_dump fabrikamsandbox > fabrikamsandbox.20240329.sql
Here’s the canonical command:
pg_dump -h SRC_HOST -U SRC_USER -d SRC_DB \
--format=plain \
--no-owner --no-acl \
--encoding=UTF8 \
--blobs \
| gzip > /path/to/src_db.sql.gz
--no-acl means “don’t include access control lists (privileges/grants) in the dump.” We don’t want owners and privileges/grants because they cause problems when restoring the dump if the same owner and privileges are not in place on the target.
Restore PostgreSQL Database
Create the target database first, then restore into it:
sudo -u postgres createdb -E utf8 fabrikamlive
sudo -u postgres psql fabrikamlive < fabrikamsandbox.20240329.sql
Or, with a dump that has no owners and privileges/grants:
CREATE ROLE app_user LOGIN PASSWORD '***';
CREATE DATABASE newdb OWNER app_user;
psql -h DST_HOST -U app_user -d newdb -v ON_ERROR_STOP=1 -f /path/to/src_db.sql
20240408
mysqldump --single-transaction --compatible=ansi --default-character-set=utf8 -u root -p fabrikamlive > fabrikamsandbox.20240408.sql
https://gist.github.com/barseghyanartur/56876ab3acbd3d5d6ab7dcc477c29238: completely out of date, throws python errors
https://github.com/AnatolyUss/NMIG: throws syntax errors (probably wrong version of nodejs / npm)
https://pgdba.org/post/migration/ pg_chameleon
The instructions were largely okay, but I had to replace the following CLI commands with versions that use double dashes (--) instead of a single long dash:
chameleon create_replica_schema
chameleon add_source --config migration --source mysql
chameleon show_status --config migration
https://github.com/maxlapshin/mysql2postgres last updates from 4 years ago, uses Ruby gems. Tried to install, but was stranded with gem dependency errors.
https://stackoverflow.com/questions/92043/is-there-a-simple-tool-to-convert-mysql-to-postgresql-syntax
A quick disk-space check to confirm there was room for the migration:
root@localhost:~# df -h
Filesystem Size Used Avail Use% Mounted on
tmpfs 791M 1.7M 789M 1% /run
/dev/mapper/vg00-lv01 392G 234G 142G 63% /
tmpfs 3.9G 32K 3.9G 1% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
/dev/sda1 488M 251M 201M 56% /boot
tmpfs 791M 8.0K 791M 1% /run/user/0
More pg_loader
20240415
pgloader --with "prefetch rows = 1000" --with "drop schema" mysql://sandbox:passwd@127.0.0.1/sample_db pgsql://postgres:passwd@localhost/sampledb
Current issue: type casting issue arises when you do a query like
select id, username from mdl_user where username = 'onno' and deleted <> 1;
The <> comparison fails because pgloader imported the MySQL tinyint(1) column as a PostgreSQL boolean, so deleted <> 1 compares a boolean with an integer.
So I had Totara create the schema (simply do a Totara install and stop when it asks for the admin user details) and then asked pgloader to load only the data:
pgloader --with "prefetch rows = 1000" --with "data only" mysql://sandbox:passwd@127.0.0.1/fabrikamlive pgsql://postgres:passwd@localhost/fabrikamsandbox
SET session_replication_role = 'replica';
SELECT tablename FROM pg_tables WHERE schemaname = 'fabrikamlive';
-- Generate and execute the truncate command for each table
DO $$ DECLARE table_name TEXT;
BEGIN
FOR table_name IN (SELECT tablename FROM pg_catalog.pg_tables WHERE schemaname = 'fabrikamlive') LOOP
EXECUTE 'TRUNCATE TABLE ' || quote_ident(table_name) || ' CASCADE;';
END LOOP;
END $$;
SET session_replication_role = 'origin';
COMMIT;
20240416
2024-04-15T16:53:53.414533Z NOTICE COPY fabrikamlive.mdl_user with 41927 rows estimated [1/4]
2024-04-15T16:53:54.394558Z NOTICE COPY fabrikamlive.mdl_totara_sync_log with 41018 rows estimated [1/4]
2024-04-15T16:53:54.394558Z ERROR Database error 22P02: invalid input syntax for type smallint: "t"
CONTEXT: COPY mdl_user, line 1, column confirmed: "t"
Solved this issue by explicitly casting tinyint to smallint:
pgloader --with "prefetch rows = 1000" --with "data only" --cast "type tinyint when (= precision 1) to smallint keep typemod" mysql://sandbox:passwd@127.0.0.1/fabrikamlive pgsql://postgres:passwd@localhost/fabrikamsandbox
In the end, I got the migration working by:
- Having Totara create the database schema
- Using pgloader to import the data
- Using pgloader to import the schema for the dynamically generated
mdl_appraisal_quest_data_* tables.
20240424
But then I found out that Totara has dynamically generated tables which are not created by Totara upon installation.
Unfortunately, using pgloader to do the entire migration, not just the data but also the schema, is not really an option as it leads to numerous errors and crashes.
… And it turns out that max_locks_per_transaction had somehow been set from 128 to 12, which explains the memory issues. Let’s quickly fix that…
Alas, the memory issues persist!
Next Steps
- Find out how config file works for pgloader
- With the existing database, fabrikamsandbox, import only the missing tables
This works, more or less, if we use a loader file to include only specific tables:
LOAD DATABASE
FROM mysql://sandbox:passwd@127.0.0.1/fabrikamlive
INTO pgsql://postgres:passwd@localhost/sampledb01
WITH include drop, create tables, reset sequences, prefetch rows = 1000, disable triggers, create no indexes
SET MySQL PARAMETERS
net_read_timeout = '1200',
net_write_timeout = '1200'
CAST type tinyint to smallint drop typemod
INCLUDING ONLY TABLE NAMES MATCHING ~/^mdl_appraisal_quest_data_[^\s]+/
;
[^\s]+ matches one or more characters that are not whitespace characters.
It might be necessary to free up some memory on the server, before running pgloader, by:
- turning off the cron jobs for Totara (e.g. through Webmin)
- rebooting the server
- turning off Apache:
service apache2 stop
Potentially troublesome table: mdl_temp_rb_course_completions_v1
20240720
Preparation
vim /etc/postgresql/14/main/postgresql.conf
I set it to:
max_locks_per_transaction = 128
Restart the PostgreSQL server:
service postgresql restart
Steps
Did Live Fabrikam Foods migration by:
- Having Totara create the database schema
- Using pgloader to import the data
- Using pgloader to import the schema for the dynamically generated
mdl_appraisal_quest_data_* tables.
pgloader script:
LOAD DATABASE
FROM mysql://sandbox:passwd@127.0.0.1/fabrikamlive
INTO pgsql://postgres:passwd@localhost/sampledb01
WITH prefetch rows = 1000, data only, truncate
SET MySQL PARAMETERS
net_read_timeout = '1200',
net_write_timeout = '1200'
CAST
type tinyint when (= precision 1) to smallint keep typemod
;
Execute, e.g.:
pgloader myloader.load
Various small issues:
- Schema must be renamed from public to fabrikamlive:
alter schema public rename to fabrikamlive;
If you issue this command while connected to database fabrikamlive, it apparently only applies to that particular database.
- /etc/mysql/mysql.cnf must have default-authentication-plugin=mysql_native_password (and even then it might be necessary to do: ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY 'your_root_password';)
- /etc/postgresql/14/main/postgresql.conf must have: max_locks_per_transaction = 128
- If you have Totara create the database tables, be sure to include the ‘WITH truncate’ option in pgloader – see the pgloader script above. Otherwise tables which already contain records will not be filled (mdl_user and mdl_course, for instance).
20240928
I had to run VACUUM ANALYZE; on the fabrikamlive database. The catalog was loading very slowly (1 minute or more) for a specific user. VACUUM ANALYZE; fixed this.
Actually, the real issue was with the online content marketplaces feature, which does an extremely inefficient has_capability_in_any_context check.
Live branch was at commit 1daa62ced.
20241025
- After importing a Postgres DB it is usually necessary to:
- Change the schema to public
- Change the ownership of all tables to your DB user
Ad 1:
DO $$
DECLARE
tbl RECORD;
BEGIN
FOR tbl IN
SELECT table_name
FROM information_schema.tables
WHERE table_schema = 'fabrikamlive'
LOOP
EXECUTE format('ALTER TABLE fabrikamlive.%I SET SCHEMA public;', tbl.table_name);
END LOOP;
END $$;
Ad 2:
DO
$$
DECLARE
r RECORD;
BEGIN
FOR r IN SELECT table_name FROM information_schema.tables WHERE table_schema = 'public' LOOP
EXECUTE 'ALTER TABLE public.' || quote_ident(r.table_name) || ' OWNER TO fabrikamsand2;';
END LOOP;
END
$$;
I also found out I had to do these, on one occasion:
DO $$
DECLARE r record;
BEGIN
FOR r IN
SELECT c.oid::regclass AS obj
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE n.nspname='public' AND c.relkind IN ('r','p') AND c.relname LIKE 'mdl_%'
LOOP
EXECUTE format('ALTER TABLE %s OWNER TO fabrikamstaging;', r.obj);
END LOOP;
END$$;
-- Sequences
DO $$
DECLARE r record;
BEGIN
FOR r IN
SELECT c.oid::regclass AS obj
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE n.nspname='public' AND c.relkind='S' AND c.relname LIKE 'mdl_%'
LOOP
EXECUTE format('ALTER SEQUENCE %s OWNER TO fabrikamstaging;', r.obj);
END LOOP;
END$$;
(Or simply use ChatGPT to produce the code with your specific user and database inserted right away.)
Recommended PostgreSQL Database Handling
- Create a separate PostgreSQL user (can have the same name as the database)
sudo -u postgres psql
CREATE USER moodleuser WITH PASSWORD 'yourpassword';
- If the user already exists: ALTER USER moodleuser PASSWORD 'new_password';
- Simply keep all tables in the public schema (or move them into it when importing)
- Use password authentication (as opposed to peer authentication, which uses the Unix user in the background) to access the database as the user, e.g.:
psql -U fabrikamstaging -d fabrikam -h 127.0.0.1
Proper PostgreSQL Restore Procedure (and Why It’s Necessary)
When restoring a PostgreSQL database for Moodle or Totara, it’s common to encounter permission and ownership problems such as:
ERROR: permission denied for table mdl_config
ERROR: cannot change owner of sequence ... because it is linked to table ...
These errors occur because PostgreSQL, unlike MySQL, preserves exact ownership metadata from the original dump. Every table, sequence, and schema has an explicit owner. If you restore the dump as the postgres superuser, all objects will be owned by postgres—but Totara will later connect as its own role (e.g. totara) and lack permissions to manage those objects.
MySQL does not enforce this distinction: the importing user automatically “owns” everything in the database, so these issues never appear there.
To prevent these conflicts and ensure smooth operation during upgrades or plugin installations, you must import the database as the application’s own user, not as postgres.
Correct Restore Template
This 6-line process guarantees that all imported tables, sequences, and schemas are owned by the intended role.
# 1. Switch to the postgres system user
sudo -u postgres psql
# 2. Create the database user (role)
CREATE USER totara WITH PASSWORD 'strongpassword';
# 3. Create the database owned by that user
CREATE DATABASE totara OWNER totara ENCODING 'UTF8' TEMPLATE template0;
# 4. Quit psql
\q
# 5. Import the dump *as the totara user*
psql -U totara -d totara -h 127.0.0.1 -f /path/to/your_dump.sql
That’s all you need—no post-import ALTER OWNER scripts, no schema renames, no reassign loops.
Additional Notes
If the dump was created with pg_dump -Fc, use pg_restore instead:
pg_restore -U totara -d totara /path/to/your_dump.dump
- Ensure pg_hba.conf allows local password authentication for the totara user.
- Always verify ownership after import:
\dt+ public.mdl_user
In config.php, make sure the dbhost variable is set to '127.0.0.1' rather than 'localhost', so the connection goes over TCP with password authentication instead of the Unix socket’s peer authentication.
Why PostgreSQL Is Stricter
PostgreSQL’s role system enforces fine-grained security:
- Each object (table, sequence, view, function, etc.) has a single explicit owner.
- Linked sequences must share ownership with their tables.
- Privileges do not cascade automatically.
This model increases safety and maintainability in multi-application environments—but requires a bit more care during setup and migrations.
MySQL, by contrast, automatically grants implicit ownership of all imported objects to the current user, masking these details.
Summary
Following the procedure above ensures Totara (or Moodle) can:
- Run upgrades and install plugins without permission errors
- Create and modify tables during schema changes
- Access sequences and perform inserts normally
SOP: Plain-text PostgreSQL dump/restore (sudo -u postgres, no owner/ACL)
0. Summary
- Dump as plain SQL with --no-owner --no-acl --blobs, run as the postgres OS user.
- Create one owner role and one database on the target.
- Restore as that owner. Minimal hassle, version-friendly.
1. Dump on the source (local host)
sudo -u postgres pg_dump -d fabrikamlive \
--format=plain --no-owner --no-acl --encoding=UTF8 --blobs \
| gzip > /backups/fabrikamlive.YYYYMMDD.sql.gz
Notes:
- Uses peer auth via the local Unix socket (no password prompts).
- Keep --blobs unless you are sure there are no large objects.
2. Transfer the dump
scp /backups/fabrikamlive.YYYYMMDD.sql.gz user@DST_HOST:/tmp/
3. Prepare the target (one role, one DB)
Run once on the target (psql as a superuser/dbadmin):
sudo -u postgres psql
CREATE ROLE app_user LOGIN PASSWORD 'REPLACE_ME';
CREATE DATABASE newdb OWNER app_user;
Optional hardening (safe for single-role apps):
REVOKE ALL ON DATABASE newdb FROM PUBLIC;
REVOKE CREATE ON SCHEMA public FROM PUBLIC;
ALTER SCHEMA public OWNER TO app_user;
ALTER ROLE app_user IN DATABASE newdb SET search_path = public;
4. Restore
If the dump is gzipped:
gunzip -c /tmp/fabrikamlive.YYYYMMDD.sql.gz | psql \
-h DST_HOST -U app_user -d newdb -v ON_ERROR_STOP=1
(You’ll be prompted for the password once; the restore then runs to completion.)
If you placed the dump on the DB server itself and want to use the postgres OS user:
gunzip -c /tmp/fabrikamlive.YYYYMMDD.sql.gz | sudo -u postgres psql -d newdb -v ON_ERROR_STOP=1
5. Post-restore sanity checks
Run as app_user on newdb unless noted.
Show search path:
SHOW search_path;
Extensions present:
SELECT extname, extversion FROM pg_extension ORDER BY 1;
Quick row counts (example tables):
SELECT COUNT(id) FROM mdl_user;
6. Common pitfalls and fixes
- Error about permissions during restore: ensure you are restoring as app_user, and that public is owned by app_user (see the hardening block).
- Role "someone" does not exist: indicates owners/ACLs leaked into the dump. Re-dump exactly as above with --no-owner --no-acl.
- Version differences: plain SQL is generally fine. If you hit extension issues, precreate allowed extensions on the target, or drop/adjust unsupported ones.
7. Optional: one-liner remote pull (no intermediate file on source)
From a machine that can reach the source and target:
ssh SRC_HOST "sudo -u postgres pg_dump -d fabrikamlive --format=plain --no-owner --no-acl --encoding=UTF8 --blobs | gzip" \
> /tmp/fabrikamlive.YYYYMMDD.sql.gz
Then proceed with steps 3–4.
That’s it. This variant keeps everything simple, avoids passwords on the dump step, and restores cleanly under a single owner on the target.
Building Moodle for high availability on AWS requires separating state from compute: managed services like RDS, EFS, and ElastiCache handle persistent data while Spot instances serve stateless web traffic. This guide covers the full architecture, including failover strategies and critical configuration gotchas.
Glossary of AWS Terminology
- EC2 (Elastic Compute Cloud): Virtual machines in AWS.
- Spot Instance: Unused EC2 capacity sold at a discount. AWS may reclaim them with a 2-minute warning.
- On-Demand Instance: Normal EC2 instance billed at the standard hourly rate with no risk of interruption.
- ASG (Auto Scaling Group): Service that manages the scaling and lifecycle of EC2 instances.
- ALB (Application Load Balancer): Layer 7 load balancer for web traffic.
- RDS: Managed database service for MySQL, MariaDB, PostgreSQL, or Aurora.
- EFS (Elastic File System): Managed NFS file store that can be mounted on multiple EC2 instances.
- ElastiCache (Redis): Managed caching service used for Moodle MUC and optionally session caching.
1. Establish Shared Storage and State (The Stateless Layer)
In a Spot-based architecture, no EC2 instance should contain state that is required for proper operation. All persistent data must live in managed services.
Database (RDS or Aurora)
Purpose
Stores all Moodle application data, including course content, activities, logs, configuration, user accounts, enrolments, and (optionally) sessions.
Steps
- Create a new RDS MySQL or Aurora MySQL instance. Recommended versions: MySQL 8.0 or Aurora MySQL 3.
- Place the DB in private subnets in at least two Availability Zones.
- Enable automated backups and set a retention period that matches business requirements.
- Create a security group allowing inbound MySQL traffic only from:
- The stable instance
- The Spot web servers
- (Optional) a bastion host or VPN
- Set the database parameter group:
- innodb_buffer_pool_size sized to available memory.
- max_allowed_packet to at least 256M for large file uploads.
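As an illustration, the corresponding parameter-group values might look like this. The 3/4-of-memory sizing is a common starting point rather than a rule, and RDS expresses it with its own expression syntax:

```ini
# Illustrative RDS parameter-group values - tune to your workload
innodb_buffer_pool_size = {DBInstanceClassMemory*3/4}
# 256M, expressed in bytes
max_allowed_packet = 268435456
```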
File Storage (EFS)
Purpose
Provides shared storage for moodledata, which must be accessible by all EC2 instances. Local disk storage cannot be used in a stateless architecture.
Steps
- Create an EFS file system in the same region and VPC.
- Enable Mount Targets in at least two Availability Zones in the private subnets.
- Configure security group rules to allow NFS (TCP 2049) from all EC2 instances.
- On each EC2 instance (stable and spot):
- Install the EFS utilities.
- Mount EFS to /var/www/moodledata (or your chosen location).
- Set correct ownership and permissions: typically www-data:www-data, u+rwX, g+rwX.
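For reference, a persistent mount via /etc/fstab using the EFS mount helper might look like this. fs-12345678 is a placeholder for your file system ID, and the entry assumes the amazon-efs-utils package is installed:

```
# /etc/fstab entry for shared moodledata (illustrative)
fs-12345678:/  /var/www/moodledata  efs  _netdev,tls  0  0
```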
Performance Notes
- Use General Purpose mode for most Moodle deployments.
- Use Elastic Throughput unless running a very large site.
Caching (ElastiCache Redis)
Purpose
Provides fast shared cache storage for Moodle Universal Cache (MUC), session data (optional), and request-level caching.
Steps
- Create an ElastiCache Redis cluster (cluster mode disabled is fine for most Moodle installs).
- Place the cluster in private subnets.
- Allow inbound traffic from the EC2 instances.
- Configure Moodle to use Redis for:
- Application cache
- Session cache (optional, but recommended)
Notes
- Do not use local file caching or local sessions on EC2 instances.
- If using Redis for sessions, configure a replication group with automatic failover.
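For Redis-backed sessions, a config.php fragment along these lines is typical. The ElastiCache endpoint below is a placeholder, and MUC application caching is configured separately under Site administration → Plugins → Caching → Configuration:

```php
// Illustrative config.php fragment - the endpoint is a placeholder.
$CFG->session_handler_class = '\core\session\redis';
$CFG->session_redis_host = 'my-redis.xxxxxx.use1.cache.amazonaws.com';
$CFG->session_redis_port = 6379;
// Generous lock timeouts help long-running requests under load.
$CFG->session_redis_acquire_lock_timeout = 120;
$CFG->session_redis_lock_expire = 7200;
```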
2. Configure the Stable Instance (Admin/Cron)
Role of the Stable Instance
This EC2 instance is not part of the Spot fleet. It runs on On-Demand pricing and is never interrupted. It performs all critical background processing:
- Moodle cron
- Scheduled tasks
- Ad-hoc tasks such as course backups, grading operations, and import/export tasks
- CLI maintenance actions
- Long-running administrative operations
It effectively acts as the control plane of the Moodle application.
Configuration Requirements
- Instance Type: Choose an instance with enough CPU and memory to handle all background tasks. Suggested minimum: t3.medium or t3.large.
- Mount EFS: Mount the same EFS file system used by the Spot fleet.
- Networking:
- Instance must be in a private subnet.
- Ensure outbound internet access through a NAT Gateway.
- Security group must allow:
- Outbound access for email and updates.
- Inbound SSH from trusted locations.
- Cron Setup: Add the following to the root or www-data crontab:
  * * * * * /usr/bin/php /var/www/html/admin/cli/cron.php >/dev/null 2>&1
- Web Traffic: It does not need to be in the ALB Target Group unless you want redundancy for admin access. The stable instance should not serve normal web traffic.
3. Configure the Spot Fleet (Web Servers)
These EC2 instances serve all user-facing traffic and can scale horizontally. They can be terminated at any time by AWS, so they must be fully stateless.
Auto Scaling Group (ASG)
Steps
- Create an ASG in at least two Availability Zones.
- Configure Mixed Instances Policy:
- Majority Spot Instances
- One or two On-Demand instances (optional) as capacity floor
- Enable Instance Type Diversification:
- Include 3 to 6 instance types across t, m, and c families.
- Enable Capacity Rebalancing so AWS can replace at-risk Spot instances proactively.
- Set the desired capacity, minimum, and maximum:
- Example: min 2, desired 4, max 8.
Launch Template Configuration
The launch template defines how each web server is set up.
Required items
- AMI (Ubuntu 24.04 or Amazon Linux 2023 recommended)
- User Data script that:
- Installs Apache or Nginx
- Installs PHP and required extensions
- Installs AWS EFS utilities
- Mounts /var/www/moodledata
- Pulls the Moodle code (Git, EFS, or CodeDeploy)
- Configures Redis and DB settings via config.php
- Starts the web server
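A user-data sketch covering the items above might look like the following. Package names, the EFS ID, and the Moodle branch are assumptions; treat this as a starting point, not a tested provisioning script.

```shell
#!/bin/bash
# Illustrative user-data sketch (Ubuntu, Apache). Adjust packages/IDs to your setup.
set -e
apt update
apt install -y apache2 php php-mysql php-xml php-zip php-intl php-gd php-curl php-redis
# Mount shared moodledata (hypothetical EFS ID)
mkdir -p /var/www/moodledata
mount -t efs -o tls fs-0123456789abcdef0:/ /var/www/moodledata
# Pull the Moodle code (Git shown; EFS or CodeDeploy also work)
git clone --branch MOODLE_405_STABLE --depth 1 https://github.com/moodle/moodle.git /var/www/html
# config.php (Redis/DB endpoints) should be deployed from a template or secrets store -- not shown.
systemctl enable --now apache2
# Deliberately NO crontab entry: cron runs only on the stable instance.
```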
Critical Requirement
The web servers must never run the Moodle cron. Ensure the user data does not write any cron entries.
4. Configure Load Balancing and Termination Handling
Application Load Balancer (ALB)
Steps
- Create an ALB in public subnets.
- Attach AWS ACM certificates for HTTPS.
- Create a Target Group using HTTP or HTTPS.
- Register only the ASG (Spot fleet) instances.
- Configure health checks:
- Path: /login/index.php or a custom health check script
- Interval: 30 seconds
- Healthy threshold: 2
Connection Draining (Deregistration Delay)
Connection Draining protects users when AWS terminates a Spot instance.
Steps
- Open the Target Group settings in the AWS Console.
- Set Deregistration Delay to 60 to 120 seconds.
- Enable Connection Termination Protection if available.
How it Works
- AWS gives a 2-minute warning before reclaiming a Spot instance.
- The instance receives the termination notice via instance metadata.
- The ASG marks the instance as terminating.
- The ALB immediately stops routing new requests to that instance.
- Existing user requests have up to 120 seconds to finish.
This prevents unexpected 500 errors during page loads.
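The 2-minute warning described above is exposed through the instance metadata service. A minimal sketch for detecting it from a Spot instance (e.g. from a watcher script run by systemd or cron) might look like this, using IMDSv2; the endpoint returns 404 until a notice exists:

```shell
# Sketch: poll instance metadata (IMDSv2) for a Spot interruption notice.
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
  -H "X-aws-ec2-metadata-token-ttl-seconds: 60")
if curl -s -f -H "X-aws-ec2-metadata-token: $TOKEN" \
  http://169.254.169.254/latest/meta-data/spot/instance-action > /dev/null; then
  # Notice present: stop accepting new work and let in-flight requests drain.
  echo "Spot interruption notice received"
fi
```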
5. Moodle Application Configuration
To complete the architecture, configure Moodle so that all stateful components use shared services.
Critical Settings
- Sessions
- Enable Database or Redis session storage.
- Do not use file based sessions.
- Path: Site administration > Server > Session handling.
- Asynchronous Backups
- Set to Enabled.
- Path: Site administration > Courses > Backups > General backup defaults.
- Reason: Ensures backups triggered by users running on Spot instances are delegated to the Stable instance.
- Universal Cache (MUC)
- Configure all cache stores (Application, Session, Request) to use Redis.
- Path: Site administration > Plugins > Caching > Configuration.
Config.php Adjustments
Typical minimal adjustments:
$CFG->session_handler_class = '\core\session\redis';
$CFG->session_redis_host = 'redis.example.internal';
$CFG->directorypermissions = 02775;
$CFG->preventfilelocking = true;
Risks and Limitations
- Uploads longer than 2 minutes may fail if a Spot instance is reclaimed mid-upload, even with connection draining.
- Cron must remain isolated on the Stable instance. Running cron on Spot instances can cause:
- race conditions
- tasks running twice
- course backups failing
- cron locks becoming stuck
- EFS performance limits may apply to very large Moodle sites. For heavy workloads, consider:
- EFS provisioned throughput
- Local SSD cache combined with rsync or S3 (advanced)
- Redis is a single point of failure unless deployed with replication and automatic failover.
- Spot capacity fluctuations may cause scaling delays during high traffic periods.
Moodle's backup and restore API is powerful but complex. This guide covers the required context setup, database operations, and workflows for programmatic backups and targeted course/section restores, including critical pitfalls around controller cleanup.
This guide aims to help developers programmatically interact with Moodle's robust backup and restore (B&R) system. While powerful, the API can be complex, and certain scenarios, like targeted section restores, require precise handling.
This guide is based on the Moodle 4.1 backup & restore API.
I. General Principles & Best Practices
- Moodle Environment is Key:
- Configuration: Always include Moodle's main config.php at the beginning of your script.
- CLI Context: If the script is intended for command-line execution, define('CLI_SCRIPT', true); before including config.php.
- Required Libraries: Include essential Moodle libraries. Common ones for B&R include:
  - $CFG->libdir . '/clilib.php' (for CLI scripts)
  - $CFG->libdir . '/backup/util/includes/backup_includes.php'
  - $CFG->libdir . '/backup/util/includes/restore_includes.php'
  - $CFG->dirroot . '/course/lib.php' (for course/section operations)
- User Context:
- Most B&R operations require a valid Moodle user context, typically an administrator, to ensure appropriate permissions.
Fetch an admin user using get_admin():
global $USER; // Ensure $USER is global if not already
$USER = get_admin();
- Error Handling & Logging:
- Exception Handling: Moodle's B&R API can throw various exceptions (e.g., moodle_exception, backup_exception, restore_controller_exception). Wrap B&R operations in try...catch blocks to handle potential errors gracefully.
- Detailed Logging: Implement a robust logging mechanism to track the script's progress, key variable states, and any errors encountered. This is invaluable for debugging complex B&R workflows. For CLI scripts, cli_writeln() can be used for console output, supplemented by logging to a file.
- Resource Management – The destroy() Method:
- CRITICAL PITFALL: backup_controller and restore_controller objects, along with their associated backup_plan and restore_plan objects, create complex internal structures. They may hold database connections, temporary file references, and significant amounts of data in memory.
- Best Practice: ALWAYS call the $controller->destroy() method after you are completely finished with the controller object. This is typically done in a finally block to ensure cleanup even if errors occur. The destroy() method on the controller will also handle the destruction of its associated plan.
- Consequences of Neglect: Failure to call destroy() can lead to memory leaks, PHP timeouts, database connection exhaustion, and persistent temporary data (e.g., "existing temptables found" errors when Moodle attempts its own cleanup later).
- Temporary Directories:
- Moodle's Temp Space: Moodle B&R operations use a temporary directory, typically located at $CFG->dataroot . '/temp/backup/'.
- Path Relativity (Restore): When providing a path to extracted backup contents for a restore_controller, this path must be relative to Moodle's main backup temporary directory (i.e., $CFG->dataroot/temp/backup/).
- Creating Temp Dirs: Use Moodle's make_backup_temp_directory() function to create uniquely named subdirectories within this temporary space. This function returns an absolute path.
- Cleanup:
- Moodle's controller destroy() methods are responsible for cleaning up Moodle's internal temporary files and database records related to that specific B&R operation.
- Your script is responsible for cleaning up any additional temporary directories or files it creates (e.g., a directory where you manually copy an MBZ file or extract its contents before passing a relative path to the restore_controller). Use fulldelete() for this.
- Database Context (If Working with Multiple Databases):
- If your script interacts with a separate database for sourcing backup data (e.g., a dedicated backup Moodle instance's database), ensure you manage the global $DB object context correctly.
Switch to the backup database connection before performing operations against it, and restore the original live Moodle database connection immediately afterward.
global $DB;
$original_live_db_connection = $DB;
// Assume $backup_db_conn is a Moodle database object for your backup DB
$DB = $backup_db_conn;
// ... perform operations against the backup database ...
$DB = $original_live_db_connection; // Switch back to live Moodle DB
- Moodle Version Awareness:
- The B&R API, like other Moodle APIs, can evolve between Moodle versions. Code written for one version might require adjustments for another. Always test thoroughly against your target Moodle version.
- Consult the Source Code:
- When API behavior is unclear or documentation is sparse, the Moodle source code (primarily in backup/controller/, backup/moodle2/, and related task/plan files) is the definitive reference. PHPDoc comments within the code can also be very helpful.
- Iterative Development & Debugging:
- Programmatic B&R can be complex. Start with simpler operations and incrementally add complexity. Use systematic debugging techniques and the logging you've implemented.
II. Key Components of the B&R API
- Controllers (backup_controller, restore_controller):
- These are the central orchestrators for backup and restore operations.
- Instantiation Examples:
  - new backup_controller($type, $id, $format, $interactive, $mode, $userid)
  - new restore_controller($tempdir_relative_path, $courseid, $interactive, $mode, $userid, $target)
- Common Parameters:
  - $type (Backup): The scope of the backup, e.g., backup::TYPE_1COURSE, backup::TYPE_1SECTION.
  - $id (Backup): The Moodle ID of the item to be backed up (e.g., course ID, section ID from the source database).
  - $format: The backup format, typically backup::FORMAT_MOODLE.
  - $interactive: For scripts, this should be backup::INTERACTIVE_NO.
  - $mode: The purpose/context of the operation, e.g., backup::MODE_GENERAL.
  - $userid: The Moodle user ID of the user performing the operation.
  - $tempdir_relative_path (Restore): The path to the directory containing the extracted backup contents, relative to $CFG->dataroot/temp/backup/.
  - $courseid (Restore): The Moodle ID of the target course for the restore operation.
  - $target (Restore): Specifies how the restore interacts with the target course, e.g., backup::TARGET_NEW_COURSE, backup::TARGET_CURRENT_ADDING.
- Typical Lifecycle: Instantiate Controller -> Get Plan -> (Optionally Adjust Plan Settings/Tasks) -> (For Restore: Execute Prechecks) -> Execute Plan -> (Optionally Get Results) -> Destroy Controller.
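For a whole-course backup, that lifecycle can be sketched roughly as follows. This is a condensed illustration based on the full examples in section III, not production code: it assumes config.php and backup_includes.php are already loaded, $courseid and $USER are defined, and that a 'users' setting exists on the plan (get_setting() may throw if the name differs).

```php
<?php
// Minimal controller lifecycle sketch (assumptions noted in the lead-in).
$bc = new backup_controller(backup::TYPE_1COURSE, $courseid, backup::FORMAT_MOODLE,
    backup::INTERACTIVE_NO, backup::MODE_GENERAL, $USER->id);
try {
    $plan = $bc->get_plan();                    // inspect/adjust settings here
    $plan->get_setting('users')->set_value(0);  // e.g. exclude user data
    $bc->execute_plan();
    $results = $bc->get_results();              // ['backup_destination' => stored_file]
} finally {
    $bc->destroy();                             // ALWAYS clean up (see pitfalls above)
}
```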
- Plans (backup_plan, restore_plan):
- The detailed blueprint of the B&R operation, generated by its controller.
- Contains:
  - Settings: High-level configuration options for the entire operation (e.g., whether to include users, logs, files). Accessed via $plan->get_setting('setting_name').
  - Tasks: An ordered list of individual operations that will be performed (e.g., restore course structure, restore a specific section's content, restore an activity module). Accessed via $plan->get_tasks().
- The plan is obtained from the controller using $controller->get_plan().
- Tasks (e.g., backup_section_task, restore_section_task, restore_activity_task):
- Represent discrete units of work within a plan. Each task is responsible for a specific part of the B&R process.
- Tasks can have their own settings, specific to that unit of work (e.g., whether to include user data for a particular activity).
- Importance for Targeted Operations: For specific operations like restoring a single section into a particular place, understanding and correctly configuring the relevant task (e.g., restore_section_task) is crucial.
- Settings (backup_setting):
- Objects that represent configurable options within the B&R process.
- They exist at both the overall plan level and within individual tasks.
- Access settings using $plan->get_setting('name') or $task->get_setting('name').
- Set their values using $setting_object->set_value(...).
- Challenge: Discovering the exact name of a setting and determining whether it's a plan-level or task-level setting often requires inspecting Moodle source code (e.g., the define_settings() methods in task classes or how settings are added in plan builder classes).
- Backup Types & Restore Targets (Constants):
- These constants, defined in backup/util/defines.php (usually included via backup_includes.php), dictate the scope and destination of B&R operations.
- Backup Types (Scope Examples):
  - backup::TYPE_1COURSE: Backs up an entire course.
  - backup::TYPE_1SECTION: Backs up a single section from a course.
  - backup::TYPE_1ACTIVITY: Backs up a single activity module.
- Restore Targets (Destination Examples):
  - backup::TARGET_NEW_COURSE: Restores the backup into a brand new Moodle course.
  - backup::TARGET_CURRENT_ADDING: Adds content from the backup into the course specified by $courseid (passed to restore_controller), merging with any existing content.
  - backup::TARGET_CURRENT_DELETING: Deletes existing content from the target $courseid before restoring content from the backup.
  - (Others like TARGET_EXISTING_ADDING, TARGET_EXISTING_DELETING are for restoring into a different existing course, selected during UI workflows or configured programmatically.)
- Key Interaction: The interplay between the backup TYPE_... and the restore TARGET_... significantly influences how settings and tasks behave. For instance, restoring a TYPE_1SECTION backup using TARGET_CURRENT_ADDING requires careful targeting of the restore_section_task to place the content correctly.
III. Core Operations: Step-by-Step with Examples
A. Programmatically Backing Up a Single Section
This example demonstrates backing up a specific section from a Moodle course.
// Assumed variables:
// $source_db_connection: Moodle DB connection object for the source database (if different from live).
// $USER: An admin user object.
// $source_course_id: The ID of the course containing the section.
// $source_section_id: The ID of the section record to back up (from the source DB).
global $DB; // Live Moodle DB
$original_live_db = $DB;
$copied_mbz_path = null;
$backup_controller = null;
$script_controlled_mbz_temp_dir = null;
try {
// Switch DB context if source is different
if (isset($source_db_connection) && $source_db_connection !== $original_live_db) {
$DB = $source_db_connection;
cli_writeln("Switched DB context to source for backup.");
}
cli_writeln("Backing up section ID {$source_section_id} from course {$source_course_id}");
$backup_controller = new backup_controller(
backup::TYPE_1SECTION,
$source_section_id,
backup::FORMAT_MOODLE,
backup::INTERACTIVE_NO,
backup::MODE_GENERAL,
$USER->id
);
$backup_plan = $backup_controller->get_plan();
// Configure general plan settings (e.g., exclude user data)
$general_plan_settings = $backup_plan->get_settings();
if (isset($general_plan_settings['users']) && $general_plan_settings['users'] instanceof backup_setting) {
$general_plan_settings['users']->set_value(0);
}
// Add other general settings as needed (e.g., role_assignments, grade_histories to 0)
// Configure settings for the specific section task (e.g., ensure activities and files are included)
$backup_tasks = $backup_plan->get_tasks();
foreach ($backup_tasks as $task) {
if ($task instanceof backup_section_task && $task->get_sectionid() == $source_section_id) {
// Ensure activities and files within this section are backed up
if (method_exists($task, 'get_setting')) { // Good practice to check
$activities_setting = $task->get_setting('activities');
if ($activities_setting instanceof backup_setting) {
$activities_setting->set_value(1); // 1 for include
}
$files_setting = $task->get_setting('files');
if ($files_setting instanceof backup_setting) {
$files_setting->set_value(1); // 1 for include
}
}
break; // Found the relevant section task
}
}
$backup_controller->execute_plan();
$backup_status = $backup_controller->get_status();
if ($backup_status != backup::STATUS_FINISHED_OK && $backup_status != backup::STATUS_FINISHED_WARN) {
throw new moodle_exception("Backup failed. Controller status: " . $backup_status);
}
$backup_results = $backup_controller->get_results();
$moodle_managed_backup_file = $backup_results['backup_destination'] ?? null;
if (!$moodle_managed_backup_file instanceof stored_file) {
throw new moodle_exception('Backup destination file (stored_file object) not found in backup results.');
}
// Copy the Moodle-managed backup file to a script-controlled temporary location
$script_controlled_mbz_temp_dir = make_backup_temp_directory('my_script_section_backup_' . $source_section_id . '_' . uniqid());
$copied_mbz_path = $script_controlled_mbz_temp_dir . DIRECTORY_SEPARATOR . $moodle_managed_backup_file->get_filename();
if (!$moodle_managed_backup_file->copy_content_to($copied_mbz_path)) {
throw new moodle_exception('Failed to copy MBZ file to script-controlled location: ' . $copied_mbz_path);
}
cli_writeln("Section backup MBZ successfully created at: " . $copied_mbz_path);
} catch (Exception $e) {
cli_writeln("ERROR during backup operation: " . $e->getMessage());
if ($e instanceof moodle_exception && !empty($e->debuginfo)) {
cli_writeln("Moodle Exception Debug Info: " . $e->debuginfo);
}
// Further error handling or re-throwing as appropriate
} finally {
// Restore original DB context if it was changed
if (isset($source_db_connection) && $source_db_connection !== $original_live_db && $DB !== $original_live_db) {
$DB = $original_live_db;
cli_writeln("Restored DB context to live Moodle DB.");
}
// Destroy the backup controller to clean up Moodle's internal temp files and DB records
if ($backup_controller instanceof backup_controller) {
try {
$backup_controller->destroy();
} catch (Exception $e_destroy) {
cli_writeln("Warning: Exception during backup_controller destroy: " . $e_destroy->getMessage());
}
}
// The $copied_mbz_path is now available for the restore step.
// Cleanup of $script_controlled_mbz_temp_dir should happen after $copied_mbz_path is no longer needed.
}
// The $copied_mbz_path variable now holds the path to the generated .mbz file.
// Remember to clean up $script_controlled_mbz_temp_dir later using fulldelete().
B. Programmatically Restoring a Single Section into a Specific Location in an Existing Course
This scenario requires creating an empty placeholder section in the target course first, then instructing the restore process to populate this specific placeholder.
// Assumed variables:
// $target_live_course_id: ID of the live Moodle course to restore into.
// $target_insertion_section_number: The desired section *number* for the new content.
// $USER: An admin user object.
// $path_to_section_mbz: Absolute path to the .mbz file created in the backup step.
// $original_section_backup_details: (Optional) An object/array holding metadata (name, summary, visibility)
// from the original section in the backup, to apply after restore.
global $DB; // Live Moodle DB
$restore_controller = null;
$script_controlled_extraction_dir = null; // For script-managed extraction path (absolute)
$placeholder_section_id_in_live_course = null;
try {
// 1. Create a placeholder section in the live target course
cli_writeln("Creating placeholder section at position {$target_insertion_section_number} in course {$target_live_course_id}...");
$placeholder_section_record = course_create_section($target_live_course_id, $target_insertion_section_number);
if (!$placeholder_section_record || !isset($placeholder_section_record->id)) {
throw new moodle_exception("Failed to create placeholder section in target course.");
}
$placeholder_section_id_in_live_course = $placeholder_section_record->id;
cli_writeln("Placeholder section created in live course: ID {$placeholder_section_id_in_live_course}, Number {$placeholder_section_record->section}");
// 2. Extract the .mbz file to a Moodle temporary directory (for restore_controller)
// The path passed to restore_controller must be relative to $CFG->dataroot/temp/backup/
$extraction_dir_relative_name = 'my_script_restore_extraction_' . uniqid();
$script_controlled_extraction_dir = make_backup_temp_directory($extraction_dir_relative_name); // Gets absolute path
if (!$script_controlled_extraction_dir) {
throw new moodle_exception("Could not create script-controlled extraction directory.");
}
$packer = get_file_packer('application/vnd.moodle.backup'); // Standard Moodle MBZ packer
if (!$packer) {
throw new moodle_exception('Could not get file packer for MBZ.');
}
if (!$packer->extract_to_pathname($path_to_section_mbz, $script_controlled_extraction_dir)) {
$packer_errors = $packer->get_errors();
throw new moodle_exception('Failed to extract .mbz file: ' . implode('; ', $packer_errors));
}
cli_writeln("MBZ successfully extracted to: " . $script_controlled_extraction_dir . " (Relative path for controller: " . $extraction_dir_relative_name . ")");
// Ensure moodle_backup.xml exists in the extraction
if (!file_exists($script_controlled_extraction_dir . DIRECTORY_SEPARATOR . 'moodle_backup.xml')) {
throw new moodle_exception('moodle_backup.xml not found in extraction directory.');
}
// 3. Instantiate the restore_controller
$restore_controller = new restore_controller(
$extraction_dir_relative_name, // Relative path to extracted content
$target_live_course_id, // Target course ID
backup::INTERACTIVE_NO,
backup::MODE_GENERAL,
$USER->id,
backup::TARGET_CURRENT_ADDING // Adding content to the current (target) course
);
// 4. (Optional but recommended) Execute prechecks
if ($restore_controller->get_status() == backup::STATUS_NEED_PRECHECK) {
cli_writeln("Executing restore prechecks...");
if (!$restore_controller->execute_precheck()) {
// Handle precheck failures/warnings from $restore_controller->get_precheck_results()
// For example, log them and decide whether to proceed.
$precheck_results = $restore_controller->get_precheck_results();
cli_writeln("Restore prechecks indicated issues: " . var_export($precheck_results, true));
// Depending on severity, you might throw an exception here.
}
cli_writeln("Restore prechecks completed. Status: " . $restore_controller->get_status());
}
// 5. Get the restore plan
$restore_plan = $restore_controller->get_plan();
if (!$restore_plan) {
throw new moodle_exception("Failed to retrieve restore plan from controller.");
}
// 6. CRITICAL: Configure the restore_section_task to target the placeholder
// This is essential for TYPE_1SECTION restores into a specific slot.
if ($restore_controller->get_type() == backup::TYPE_1SECTION) {
$restore_tasks = $restore_plan->get_tasks();
$section_task_configured_for_target = false;
foreach ($restore_tasks as $task) {
if ($task instanceof restore_section_task) {
// This task will process the single section from the backup.
// We instruct it to use our placeholder section's ID as its target in the live course.
cli_writeln("Configuring restore_section_task (Task Name: {$task->get_name()}) to target live section ID: {$placeholder_section_id_in_live_course}");
$task->set_sectionid($placeholder_section_id_in_live_course);
$section_task_configured_for_target = true;
break; // Assuming only one restore_section_task for a TYPE_1SECTION backup
}
}
if (!$section_task_configured_for_target) {
throw new moodle_exception("Could not find or configure the restore_section_task within the plan for TYPE_1SECTION restore.");
}
} else {
// This script's specific targeting logic is primarily for TYPE_1SECTION.
// Handling other backup types (e.g., TYPE_1COURSE) would require different plan/task configuration.
cli_writeln("WARNING: The backup type is not TYPE_1SECTION (actual: {$restore_controller->get_type()}). The specific section targeting logic used here may not apply or work as expected.");
}
// 7. Adjust other general plan settings (e.g., exclude user data, roles)
$general_restore_plan_settings = $restore_plan->get_settings();
if (is_array($general_restore_plan_settings)) {
foreach ($general_restore_plan_settings as $setting) {
if ($setting instanceof backup_setting) {
if (in_array($setting->get_name(), ['users', 'role_assignments', 'grade_histories', 'course_module_completion'])) {
$setting->set_value(0); // 0 for exclude
}
}
}
}
// 8. Execute the restore plan
if ($restore_controller->get_status() != backup::STATUS_AWAITING) {
throw new moodle_exception("Restore controller is not in AWAITING state before execute_plan. Current status: " . $restore_controller->get_status());
}
cli_writeln("Executing restore plan...");
$restore_controller->execute_plan();
$restore_status = $restore_controller->get_status();
if ($restore_status != backup::STATUS_FINISHED_OK && $restore_status != backup::STATUS_FINISHED_WARN) {
throw new moodle_exception("Restore plan execution failed. Controller status: " . $restore_status);
}
cli_writeln("Restore plan execution finished with status: " . $restore_status);
// 9. (Optional but recommended) Update metadata of the now-populated section
// The restore_section_structure_step populates the section. This step ensures
// metadata like name, summary, visibility exactly matches the source backup object.
$final_live_section_obj = $DB->get_record('course_sections', ['id' => $placeholder_section_id_in_live_course]);
if ($final_live_section_obj && isset($original_section_backup_details)) {
$update_params = new stdClass();
$update_params->id = $final_live_section_obj->id;
$update_params->name = $original_section_backup_details->name;
$update_params->summary = $original_section_backup_details->summary;
$update_params->summaryformat = $original_section_backup_details->summaryformat;
$update_params->visible = $original_section_backup_details->visible;
$update_params->availability = $original_section_backup_details->availability; // If applicable
$DB->update_record('course_sections', $update_params);
cli_writeln("Metadata (name, summary, etc.) updated for restored section ID {$final_live_section_obj->id}.");
} else if (!$final_live_section_obj) {
cli_writeln("WARNING: Could not re-fetch section ID {$placeholder_section_id_in_live_course} after restore to update metadata. It might not have been correctly populated.");
}
cli_writeln("Section content successfully restored into live section ID {$placeholder_section_id_in_live_course}.");
// 10. Rebuild the course cache for the target course
cli_writeln("Rebuilding course cache for course ID {$target_live_course_id}...");
rebuild_course_cache($target_live_course_id, true);
cli_writeln("Course cache rebuilt.");
} catch (Exception $e) {
cli_writeln("ERROR during restore operation: " . $e->getMessage());
if ($e instanceof moodle_exception && !empty($e->debuginfo)) {
cli_writeln("Moodle Exception Debug Info: " . $e->debuginfo);
}
// Further error handling
} finally {
// Destroy the restore controller
if ($restore_controller instanceof restore_controller) {
try {
$restore_controller->destroy();
} catch (Exception $e_destroy) {
cli_writeln("Warning: Exception during restore_controller destroy: " . $e_destroy->getMessage());
}
}
// Clean up the script-controlled extraction directory
if ($script_controlled_extraction_dir && is_dir($script_controlled_extraction_dir)) {
fulldelete($script_controlled_extraction_dir);
cli_writeln("Cleaned up script-controlled extraction directory: " . $script_controlled_extraction_dir);
}
// The original .mbz file ($path_to_section_mbz) might also need cleanup if it was temporary.
}
IV. Stumbling Blocks & Pitfalls (Summary)
- Targeting TYPE_1SECTION Restores into Existing Courses:
- The Challenge: Ensuring the single section's content from the backup is placed into a specific, pre-determined slot (placeholder section) within the target course.
- Solution: Create an empty placeholder section using course_create_section(). Then, during the restore process, find the restore_section_task in the restore_plan and call its set_sectionid() method, passing the ID of your placeholder section. This directs the task to populate that specific section record.
- Distinguishing Plan-Level vs. Task-Level Settings:
- Settings can apply globally to the plan or be specific to individual tasks. Knowing where a setting resides is crucial for configuring it correctly.
- Identifying Correct Setting Names:
- Setting names are not always obvious. Inspecting define_settings() methods in task classes or related Moodle source code is often necessary.
- Essential destroy() Calls:
- Always call $controller->destroy() to prevent resource leaks and cleanup issues.
- Correct Paths for restore_controller:
- The $tempdir parameter for new restore_controller(...) must be the path to the directory containing the extracted backup contents, relative to $CFG->dataroot/temp/backup/. It is not the path to the .mbz file itself.
V. Debugging Tips
- Comprehensive Logging: Log controller statuses, plan details (settings and tasks), key variable values, and any error messages.
- Inspect Objects: Use var_dump() or print_r() on controller, plan, task, and setting objects to understand their structure and current state (e.g., $plan->get_settings(), $task->get_settings()).
- Moodle Developer Debugging: Enable developer-level debugging in Moodle's site administration to get more verbose error messages and stack traces from Moodle itself.
- Server and Moodle Error Logs: Check the web server's error log and Moodle's configured error log file for PHP errors or other Moodle-specific error details.
- Incremental Testing: Start with the simplest possible B&R operation and gradually add complexity to isolate where issues arise.
- backup/util/defines.php: Refer to this file (usually included via backup_includes.php) for the definitions of all backup:: constants.
- Examine moodle_backup.xml: This file, found within an extracted MBZ, describes the structure and contents of the backup. Reviewing it can provide insights into relevant settings and tasks.
LTI 1.3 uses OAuth 2.0-based authentication to securely share courses between Moodle instances or with external platforms. This guide covers tool registration, course publishing, and the complete launch flow — including troubleshooting JWT and cookie issues.
LTI at a Glance
LTI 1.3 (Learning Tools Interoperability) makes it possible to share learning content between different learning environments. A course can be hosted in one system (the Tool) and made accessible in another system (the Platform). This means you can either offer content to other platforms (for example, as an e-learning provider) or consume content that another system has made available.
The key concepts in LTI 1.3 are:
- Platform: your users' system that wants to use the content. Technically, this is the system that initiates the LTI connection. Examples include Moodle, Blackboard, or Canvas on the consumer side.
- Tool: the system that hosts and delivers the content or course via LTI. This is where the learning activity actually runs.
- Registration: the mutual trust relationship between the Tool and Platform, established either dynamically or manually.
- Deployment: a specific activation of the connection, in which the Platform is granted access to certain courses or content.
- Launch: the moment when a user clicks an LTI link in the Platform and is redirected to the Tool.
- Line item and scores: parts of the LTI standard used—for example—to send grades from the Tool back to the Platform.
In this guide, we explain how to register a Platform (client) in the ‘server’, how to publish content as an LTI 1.3 Tool, and how a customer can consume that content in their own Moodle.
1. Enable LTI in Moodle
- Log in as an administrator.
- Go to Site administration > Plugins > Manage enrol plugins (/admin/settings.php?section=manageenrols).
- Locate Publish as LTI tool and click the toggle to Activate.
- (Optional) Configure defaults: Site administration > Plugins > Enrolments > Publish as LTI tool > Settings.
2. Perform Automated (Dynamic) Registration
Use when the Tool Consumer (Moodle B) supports dynamic registration.
2.1 On the Tool Provider (Moodle A)
- Go to Site administration > Plugins > Enrolments > Publish as LTI tool > Tool registration.
- Click Register a platform.
- Enter a Platform name (e.g., "Moodle B") and click 'Continue'.
- Copy the Registration URL displayed under Tool details and share it with your consumer.
2.2 On the Platform (Moodle B, the consumer)
- Go to Site administration > Plugins > Activity modules > External tool > Manage tools.
- In the Add tool field at the top, paste the Registration URL.
- Click Add LTI Advantage. Moodle will import the tool and list it under Tools.
2.3 Configure Custom parameters on the Registered Tool
- Still under Manage tools (Moodle B), locate your newly registered LTI tool.
- Click the Edit (pencil or gear) icon to open the tool settings.
- Scroll down (or expand Show more… if you’re doing this in the activity settings) until you see the Custom parameters field.
- Enter the resource identifier from Moodle A’s Published content > Custom properties, for example:
id=fe3dfbec-bd5b-4532-8bd5-0804a7102631
- Click Save changes.
3. Publish Course as LTI Content (Tool Provider)
- Navigate to the course you want to share (e.g., /course/view.php?id=<courseid>).
- From the More menu, select Publish as LTI tool.
- On the Published content page under the LTI Advantage tab, Moodle will automatically list the Launch URL and Custom properties (including your previously configured deployment_id).
- No additional input is required here—Moodle handles the URL and ID generation.
- Copy the Launch URL and Custom properties values for use when registering the tool on the consumer side.
Note: you also have the option to set the ‘Maximum enrolled users’ for the LTI course.
4. Add External Tool to a Course
Before adding the activity, ensure the LTI tool is visible in the course activity chooser:
- In your course, click the More menu and select LTI External tools.
- On the LTI External tools page, locate your tool in the list and toggle Show in activity chooser to On.
- Close the LTI External tools page by clicking on the Course tab.
Now add the External tool activity:
- In your course, click Add an activity or resource and click on the name of the External tool.
- Under New External tool, enter an Activity name and adjust settings (appearance, grades, privacy) as needed.
- Click Save and return to course.
5. Test the LTI Launch
- As a test user, click the External tool link in Moodle B.
- Confirm you are redirected into Moodle A content.
- Verify navigation, content access, and grade passback if enabled.
6. Troubleshooting
- Invalid launch data: custom claim 'id' missing: Ensure the id=<deployment_id> in the External tool’s Custom parameters matches the Deployment ID published in the Tool Provider’s course settings. See section 2.3: Configure Custom parameters on the Registered Tool.
- Unable to find deployment: Verify the deployment_id claim in the JWT matches the Deployment ID on the Tool Provider under Publish as LTI tool Configure Deployments.
- JWT signature or key errors: Confirm both sites’ JWKS URLs are correct and reachable.
- No grades returned: Make sure Line item services are enabled in both Moodle A’s Platform details and the Platform’s External tool settings.
- Browser blocking LTI cookies in IFRAME: If the tool is embedded in an iframe and cookies are rejected despite allowing third‑party cookies:
- In Moodle A’s web root, add or update your .htaccess (Apache) or server config (NGINX) to mark cookies with SameSite=None; Secure. For Apache:
Header always edit Set-Cookie ^(.*)$ "$1; SameSite=None; Secure"
- For NGINX, add in your server block:
proxy_cookie_path / "/; SameSite=None; Secure";
- Reload/restart your web server.
- Clear your browser cookies for the Moodle A domain and try the launch again.
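Before touching the live config, you can reproduce the header rewrite locally to see exactly what the Apache directive will do to an outgoing cookie. A minimal sketch (the cookie value is made up):

```shell
# Simulate what "Header always edit Set-Cookie ..." does:
# append SameSite attributes to an existing cookie header.
cookie='Set-Cookie: MoodleSession=abc123; path=/; HttpOnly'
echo "$cookie" | sed 's/$/; SameSite=None; Secure/'
# → Set-Cookie: MoodleSession=abc123; path=/; HttpOnly; SameSite=None; Secure
```

After deploying, `curl -sI` against the Moodle A login page and inspecting the Set-Cookie headers confirms the attributes are really being added.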
Totara’s HR Import automates user provisioning from CSV files via SFTP, supporting delta or complete imports of users, organizations, positions, and job assignments. This guide covers setup, custom profile field mapping, and transitioning from manual testing to fully automated imports.
Assumptions
This guide assumes:
- The customer has an external system from which they want to import data into Totara.
- They can provide CSV files through SFTP in an automated way (we do not cover any other data sources for HR Import in this guide yet).
If the integration architecture calls for direct API calls rather than file transfer — or if you’re integrating with a Moodle installation alongside Totara — see Moodle Web Services and REST API Integration: A Practical Guide.
Terminology
- Element: determines which type of data can be imported — Job assignment, Organisation, Position, and User.
- Source: for each element, you can configure a data source. For example, the User element can be fed from a CSV file or an external database.
- SFTP: Secure File Transfer Protocol, used to copy files from one host (e.g., a web server) to another over the internet.
- Public–private key pair: to provide secure, passwordless access to the SFTP server, ask the customer for their public SSH key and add it to the
.ssh/authorized_keys file on the web server. Their private key handles encryption on their end.
Provide Sample CSV Files and Documentation
Most customers ask for an example of the CSV files. Use spreadsheets to list the column headers and provide a row of sample data. Also add a clearly delineated row of documentation (explaining the data types, uniqueness requirements, and so on).
If the site is using any custom profile fields, you must adapt the sample CSV files and documentation accordingly. Here is an example: Kiwa – HR Import Samples Plus Instructions.
The file names for automated HR Import should be:
- Users: user.csv
- Job assignments: jobassignment.csv
- Organisations: org.csv
- Positions: pos.csv
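When preparing the sample files, it helps to generate them from the command line so header spelling stays consistent. A sketch of a minimal user.csv — the column names here are assumptions and must be checked against the fields configured in HR Import (custom profile fields use the customfield_[shortname] convention):

```shell
# Hedged sample: columns are illustrative, not the required set for every site.
cat > user.csv <<'EOF'
idnumber,username,firstname,lastname,email,deleted,timemodified
1001,jdoe,John,Doe,jdoe@example.com,0,0
EOF
head -1 user.csv
```

Setting timemodified to 0, as shown, forces HR Import to process the row on every run (see Delta vs. Complete Import below).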
Ask for a Public SSH Key for SFTP
In this scenario, the customer will set up an automated process to upload their CSV files to the Totara web server via SFTP. To make this easy and safe, ask them to send over their public SSH key and add it to the .ssh/authorized_keys file (or ask the system administrator to do this).
Since we don’t know anything about the customer’s system (and don’t want to), we’re not going to explain how to set up an automated upload process or how to generate a public–private key pair.
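On our side, installing the key is straightforward. A sketch — the demo writes to a temporary directory; on the real server, use the upload account's actual ~/.ssh, and replace the key line with the one the customer sent:

```shell
# Demo directory stands in for the upload account's home; adjust in practice.
SSH_DIR="$(mktemp -d)/.ssh"
mkdir -p "$SSH_DIR" && chmod 700 "$SSH_DIR"
# Placeholder key — paste the customer's real public key here.
echo "ssh-ed25519 AAAAexamplekey customer@example.com" >> "$SSH_DIR/authorized_keys"
chmod 600 "$SSH_DIR/authorized_keys"
```

The restrictive permissions (700 on .ssh, 600 on authorized_keys) matter: sshd silently ignores the file if it is group- or world-writable.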
Configure HR Import
In this section, we’re going to configure HR Import for the customer’s data. One important note: don’t automate the import yet. Test manually first (see Test the Customer’s CSV Files), then switch to automation (see Automate the HR Import).
Delta vs. Complete Import
It is worth understanding what happens when you set “Source contains all records” to “Yes”. This is not spelled out explicitly in the documentation, but combining the various descriptions, the behavior is as follows:
- Users who are present in Totara but missing from the source file will be removed.
- What removed means depends on your “Allowed HR Import actions” > “Delete” setting: keep, suspend, or full delete. A full delete removes all user data, including associated records (grades, appraisals, etc.).
- The recommended setting is suspend rather than full delete.
- If you are only importing a delta, include a deleted column in your source file (0 for retained users, 1 for users to be removed).
The following is documented: for the source user, there is a timemodified field. The documentation states:
If you want records to be updated every time an import is performed and not take the Unix timestamp value into account, you can specify a zero (0) value for this field. This will effectively force HR Import to import the data every time and ignore changes in Unix timestamp values.
Check the Customer's SFTP Upload of the CSV Files
The goal here is twofold:
- Make sure that the CSV files were transferred to the correct directory on the web server (a directory that must also be readable by Totara).
- Check the format of the CSV files.
Format of the CSV Files
Check the uploaded CSV files against the documentation and samples you provided earlier. Keep a sharp lookout for custom profile fields. These must have a column header in the following format: customfield_[shortname] (replace [shortname] with the actual short name, without brackets).
Test the Customer's CSV Files
If the CSV files are present on the web server and their formatting is correct (see the previous section), test the actual import. Use a test version of Totara to do this.
Automate the HR Import
Scheduling uses the \totara_core\task\tool_totara_sync_task scheduled task. HR Import expects the following folder structure on the server:
path-to-totara/csv/ready/
path-to-totara/csv/store/
In the HR Import settings, set the files directory to the root of this structure (i.e., path-to-totara, without the csv/ready suffix). Totara appends csv/ready automatically.
Place CSV files in the ready folder before each scheduled run. After the import completes, Totara moves them to the store folder and appends a Unix timestamp to each filename. The store folder is created automatically on the first run; if Totara lacks permission to create it, create it manually beforehand.
Agree with the customer on a retention policy for files in the store folder.
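The ready/store lifecycle can be simulated locally to make the expected layout concrete. A sketch (the directories and file are created here purely for the demo):

```shell
# Simulate the move Totara performs after a successful import:
# ready/user.csv -> store/user.csv.<unix-timestamp>
base="$(mktemp -d)"
mkdir -p "$base/csv/ready" "$base/csv/store"
touch "$base/csv/ready/user.csv"
mv "$base/csv/ready/user.csv" "$base/csv/store/user.csv.$(date +%s)"
ls "$base/csv/store"
```

A retention policy can then be as simple as a cron job pruning old files, e.g. `find .../csv/store -mtime +90 -delete`.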
Totara's GraphQL API is powerful but poorly documented. Rather than relying on outdated reference materials, this guide shows how to use introspection queries to discover available queries, filters, and pagination options directly from your Totara instance.
Working with Moodle rather than Totara? The REST-based equivalent is covered in Moodle Web Services and REST API Integration: A Practical Guide.
Get access to Totara
Ask your users for:
- Totara url
- Totara admin username
- Totara admin password
If the Totara website is hosted on one of our own servers, we should already have this information – don’t bother your users in that case, but ask the lead engineer instead.
Create a GraphQL User in the Role of API User
- Go to Home > Site administration > Users > Manage users and use the ‘Create user’ button to make a new user, e.g. ‘GraphQL User’.
- Go to Home > Site administration > Permissions > Assign system roles and assign the new user to the API User role.
Add a New API Client
- Go to Home > Site administration > Development > API > API clients > Add client.
- Choose a name, and link the new client to the GraphQL user you just created.
- Copy the client ID and secret to a secure place.
Test the API Client
Using the client ID and client secret, request a new token which you are going to use in a subsequent test call:
curl -X POST 'https://portal.woodgrove-logistics.example/totara/oauth2/token.php' \
  -H 'Content-Type: application/x-www-form-urlencoded' \
  -d 'grant_type=client_credentials&client_id=*******&client_secret=*******'
This should return the token, which you can use in your next test call:
curl 'https://portal.woodgrove-logistics.example/api/graphql.php' \
  -X POST \
  -H 'Authorization: Bearer ******YOUR_TOKEN******' \
  -H 'Content-Type: application/json' \
  -H 'Accept: application/json' \
  --data-binary '{
    "query": "query { totara_webapi_status { status } }",
    "variables": "{}"
  }'
This should return something like:
{"data":{"totara_webapi_status":{"status":"ok"}}}
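When scripting this health check, you can pull the status field out of the response without installing jq. A sketch (the response string below is the expected healthy output):

```shell
# Extract the status field from the webapi status response with plain grep.
response='{"data":{"totara_webapi_status":{"status":"ok"}}}'
echo "$response" | grep -o '"status":"[^"]*"'
# → "status":"ok"
```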
And to get the first 50 suspended users:
curl 'https://portal.woodgrove-logistics.example/api/graphql.php' \
  -X POST \
  -H 'Authorization: Bearer *****' \
  -H 'Content-Type: application/json' \
  -H 'Accept: application/json' \
  --data-binary '{
    "query": "query core_user_users($query: core_user_users_query) {\n  core_user_users(query: $query) {\n    items {\n      id\n      username\n      firstname\n      lastname\n      email\n    }\n    total\n    next_cursor\n  }\n}",
    "variables": {
      "query": {
        "filters": {
          "status": "SUSPENDED"
        },
        "pagination": {
          "limit": 50
        }
      }
    }
  }'
Introspection instead of Documentation
Please note that the Totara documentation for their GraphQL API (schema) is not very good. Instead, you should probably rely on querying the schema.
I asked ChatGPT (o1) to explain the drill-down I did to construct the test query mentioned above (get the first 50 suspended users):
Below is a concise recap of the introspection calls we made (and why):
1. Top-Level Queries
Call
{
__schema {
queryType {
fields {
name
}
}
}
}
What It Does
- Fetches all top-level query names in your Totara site’s GraphQL schema (e.g., core_user_users, core_course_courses, totara_webapi_status, etc.).
- This helps you discover which queries actually exist in your environment (since Totara’s online documentation might not match your custom or specific build).
2. Introspecting a Specific Query Type
Call
{
__type(name: "core_user_users_query") {
name
inputFields {
name
type {
name
kind
ofType {
name
kind
}
}
}
}
}
What It Does
- Looks at the input object for the core_user_users query, which is named core_user_users_query.
- Tells you which fields are available on that input type (e.g., filters, pagination, sort).
- This shows how you must structure your "variables" when calling core_user_users.
3. Introspecting the Filters Object
Call
{
__type(name: "core_user_users_filters") {
name
inputFields {
name
type {
name
kind
ofType {
name
kind
}
}
}
}
}
What It Does
- Zeroes in on the filtering options for the core_user_users_query.
- Reveals the fields you can use to filter users (e.g., status, since_timecreated, since_timemodified).
- Important for building your "filters": { … } object when requesting suspended/active users.
4. Introspecting the Pagination Object
Call
{
__type(name: "core_pagination_input") {
name
inputFields {
name
type {
name
kind
ofType {
name
kind
}
}
}
}
}
What It Does
- Shows the pagination fields for core_user_users_query (or other queries that accept pagination).
- Typically returns fields like limit, page, cursor.
- Tells you how to structure the "pagination" object in your variables (e.g., "pagination": { "limit": 50 }).
5. Checking an ENUM Type (User Status)
Call
{
__type(name: "core_user_user_status") {
name
kind
enumValues {
name
description
}
}
}
What It Does
- Fetches the enum values (e.g., ACTIVE, SUSPENDED, ALL) used in the status filter.
- Tells you the exact strings you can pass in the filters.status field ("SUSPENDED", "ACTIVE", etc.).
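Since graphql.php expects the query as a single JSON string, multi-line introspection queries like the ones above need their quotes escaped and their layout flattened before they can go into a curl body. A small helper sketch (the query shown is the enum introspection from step 5, collapsed to one line):

```shell
# Flatten a GraphQL query into the one-line JSON body graphql.php expects,
# escaping embedded double quotes for JSON.
query='{ __type(name: "core_user_user_status") { enumValues { name } } }'
printf '{"query":"%s","variables":"{}"}\n' "$(printf '%s' "$query" | sed 's/"/\\"/g')"
```

The printed string can be passed directly to `curl --data-binary` with the usual Authorization and Content-Type headers.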
Why Introspection?
Totara’s GraphQL schema can vary by version or customization. Introspection lets you see exactly:
- Which queries your site supports.
- What input objects (filters, pagination, etc.) those queries expect.
- What fields those input objects have.
- Which types (e.g., scalars, enums, etc.) you need to pass in.
By combining that info, you build the correct GraphQL query + variables, ensuring you don’t get errors like “Field X is not defined.”
Summary
- Top-Level Query Introspection (queryType) showed which queries exist in your Totara site.
- Specific Type Introspection (__type(name: "xyz")) revealed the exact input fields for those queries, including filters, pagination, and enums.
- Each step helped us pin down precisely how to structure our "variables" so that Totara 18’s GraphQL endpoint would recognize our requests for suspended users (via status = "SUSPENDED") and handle pagination (limit, page, etc.).
Here’s the basic logic behind why the query structure includes filters and pagination parameters (and sometimes sort) for core_user_users in Totara 18:
1. Strongly Typed Input Objects in GraphQL
- Totara’s GraphQL schema is designed so that each query can accept one or more input objects rather than a bunch of individual arguments.
- In this case, core_user_users expects a single parameter called query, which itself is an object of type core_user_users_query.
- That core_user_users_query object is defined to have three sub-fields:
  - filters: core_user_users_filters
  - pagination: core_pagination_input
  - sort: [ someSortType ] (optional, as a list)
This is a common GraphQL design pattern: the query accepts an input object so that filtering, sorting, and pagination are all grouped in one place instead of being separate arguments.
2. The filters Sub-Object
- The filters sub-object is specifically where you set conditions on which users to return.
- From your introspection, you saw that filters has:
- status: core_user_user_status (an enum with ALL, ACTIVE, SUSPENDED)
- since_timecreated: core_date
- since_timemodified: core_date
In other words, if you want only suspended users, you set:
"filters": {
"status": "SUSPENDED"
}
If you want all users, you could set "status": "ALL" or just leave the filters object empty (depending on how Totara is configured).
3. The pagination Sub-Object
- The pagination sub-object deals with how many items you get back and/or which “page” you’re on:
- limit (the maximum number of results to return)
- page (a page number)
- cursor (an alternative approach for “cursor-based” pagination)
In your working example:
"pagination": {
"limit": 50
}
tells Totara to return up to 50 suspended users in one response. If there are more than 50, you’d need to check next_cursor (or specify page) to retrieve additional sets of results.
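If next_cursor comes back non-empty, you pass it into the pagination object of the follow-up call. A sketch of the variables for the second page (the cursor value below is invented):

```shell
# Build the variables for the next page; in a real script, next_cursor would
# come from the previous response rather than being hard-coded.
next_cursor="eyJsaW1pdCI6NTB9"
printf '{"query":{"filters":{"status":"SUSPENDED"},"pagination":{"limit":50,"cursor":"%s"}}}\n' "$next_cursor"
```

You repeat until next_cursor is empty, which signals the last page.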
4. Bringing It All Together
So the GraphQL call to core_user_users looks like:
Query definition:
query core_user_users($query: core_user_users_query) {
core_user_users(query: $query) {
items {
id
username
firstname
lastname
email
}
total
next_cursor
}
}
Variables (JSON object):
{
"query": {
"filters": {
"status": "SUSPENDED"
},
"pagination": {
"limit": 50
}
}
}
- filters: Which users do we want? In this case, only suspended ones.
- pagination: How many to return per page? Limit 50.
sort (Optional): add a "sort" key to your variables to control result ordering:
"sort": [
{ "field": "firstname", "direction": "ASC" }
]
You can use this to sort by first name ascending, if the schema supports that.
5. Why Totara Uses This Approach
- Consistency: Many Totara GraphQL queries follow the pattern of a single “input object” with sub-fields for filtering, sorting, and pagination.
- Scalability: As more filters or pagination modes are added in future versions, Totara can just extend the input type instead of adding separate arguments.
- Clarity: The naming (e.g., filters, pagination, sort) makes it easy to see what each piece of input does.
In Short
- filters is where you define the “condition” on users: suspended, active, etc.
- pagination tells Totara how many results to return (limit) and which “page” or cursor to start from.
That’s why your GraphQL request must nest these properties inside a single query input object—it’s how Totara 18 structures the request to keep the API consistent and strongly typed.
PsySH provides an interactive REPL for Moodle development, similar to `rails console`. This guide shows how to set it up project-locally with a custom init script, giving you direct access to Moodle’s database and API without polluting the repository.
This is a guide for Moodle developers who want to install and use PsySH on a per-project basis, using an init.php file to load Moodle and helper logic — without polluting the Moodle Git repo.
Installing and Using PsySH REPL for Moodle Development (Project-Scoped)
Overview
This guide helps Moodle developers:
- Set up an interactive REPL (like rails console or irb) using PsySH
- Run it within a Moodle context (with $CFG, $DB, $USER, etc.)
- Keep all files outside the Git-tracked Moodle directory
- Ensure correct PHP version is used per project
1. Requirements
- PHP CLI (matching your project version, e.g. php7.4)
- Composer (any global version)
- Local Moodle install (e.g. in ~/php/icm/public_html/)
2. Install PsySH per project
Create a tools/ directory next to your Moodle root:
cd ~/php/icm
mkdir tools && cd tools
php7.4 /usr/bin/composer require psy/psysh:^0.11
This installs a project-local version of PsySH compatible with PHP 7.4. (Assumption: your webroot is something like ~/php/icm/public_html)
3. Create init.php to bootstrap Moodle and helpers
Inside ~/php/icm/tools/init.php, add:
<?php
// Load Moodle
define('CLI_SCRIPT', true);
require(__DIR__ . '/../public_html/config.php');
// Simulate a logged-in admin user
$USER = get_admin();
// Setup typical Moodle page context
$PAGE = new moodle_page();
$PAGE->set_context(context_system::instance());
$PAGE->set_url('/');
$PAGE->set_pagelayout('admin');
$COURSE = get_site();
$SITE = $COURSE;
// Optional helpers
function uname(int $id): string {
global $DB;
$user = $DB->get_record('user', ['id' => $id], '*', MUST_EXIST);
return fullname($user);
}
echo "[Moodle REPL ready — user={$USER->username}]\n";
This gives you access to the full Moodle environment in the REPL.
4. Create a launch script: moodlerepl
Inside ~/php/icm/tools/, create:
touch moodlerepl
chmod +x moodlerepl
With the contents:
#!/usr/bin/env bash
DIR="$(cd "$(dirname "$0")" && pwd)"
PHPBIN="php7.4" # Change if needed
"$PHPBIN" "$DIR/vendor/bin/psysh" "$DIR/init.php"
Now you can just run:
./moodlerepl
5. Example REPL session
$ ./moodlerepl
[Moodle REPL ready — user=admin]
Psy Shell v0.11.x (PHP 7.4.33 — cli)
>>> $DB->get_record('user', ['id' => 2]);
>>> uname(2);
6. Keep everything out of Git
Add this to your .gitignore if needed:
tools/
Or selectively ignore:
tools/vendor/
tools/init.php
tools/moodlerepl
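Either way, it is worth confirming the rules actually take effect. A quick check (runs in a throwaway repo; requires git):

```shell
# Verify the selective ignore rules exclude the REPL files.
repo="$(mktemp -d)" && cd "$repo"
git init -q
printf 'tools/vendor/\ntools/init.php\ntools/moodlerepl\n' > .gitignore
git check-ignore tools/init.php tools/moodlerepl && echo "ignored"
```

In the real project, run `git check-ignore -v <path>` from the repository root to see which .gitignore line matches.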