
How to Install Plesk in a High Availability Cluster

Have you ever had a Plesk server stop providing service due to a hardware, network, or software problem? What if I told you that it is now possible to switch the load from the broken server to another one in just a few minutes or even less? You would not need to rush to fix the broken server, and you would not receive a flood of calls and messages from customers about their websites being unavailable, because after the switch the websites would be available again!

In this post, we will discuss what is needed to install and start Plesk in a high availability (HA) cluster, how to prepare it for working on different nodes, and how to install the required software for organizing an HA cluster. We will also configure the nodes to simulate a disaster every 30 minutes and see how the cluster behaves and what limitations it has. Ready? Let’s go!


High Availability (HA) Cluster


First, let me explain what type of cluster we are discussing. Also called a failover cluster, it consists of two servers: one acts as the primary/active server, while the second remains in a passive state as long as the first one is running well. If the primary/active server ever stops working, the second server takes over the primary role and starts servicing incoming requests.

Please note that a failover cluster is not intended to split load between two or more servers; it is not a high-load (HL) cluster where nodes process requests simultaneously.

Infrastructure For an HA Cluster

When converting a Plesk server so that it can work on any node of the HA cluster, we need to make sure it has no single point of failure (SPOF). To ensure that, each node in the cluster should be able to access the files, databases, and any other resources required for providing the service, such as the public IP address used for accessing the hosting. Here is an example of resources “shared” between nodes:

  • Plesk files
  • Websites’ files (“vhosts”)
  • Databases
  • Public floating IP-address

…but the final list really depends on the infrastructure where the cluster runs. For example, managing floating IP addresses between servers is different on each infrastructure (public cloud, internal cloud, dedicated servers, virtualization software, etc.) because each provides a different API for it. This means some of the scripts have to be adapted to the infrastructure where you are going to deploy the cluster.

An additional requirement is that the nodes should be configured as similarly as possible. For example, Plesk relies on system users and groups created on the server to manage permissions for subscriptions’ files and to start dedicated PHP-FPM pools, and it is quite important that they match between servers. Otherwise, issues can occur: for instance, a process on the passive node will not be able to access files previously created on the active node because the owners are configured differently.
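As a quick sanity check (a minimal sketch, assuming root SSH access between the nodes, and using the node names that appear later in this post), you can compare the numeric IDs of users and groups on both nodes; any mismatch is a sign of trouble after a failover:

ha-node1# diff <(getent passwd | sort) <(ssh ha-node2 'getent passwd | sort')
ha-node1# diff <(getent group | sort) <(ssh ha-node2 'getent group | sort')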

HA Cluster: Preparation Steps

It is time to start deploying the cluster. At this point, you should already have a server with a database service installed, a Network File System (NFS) server, and two new servers that we will use as the active/passive nodes. For more information on how to configure and deploy a centralized database and NFS for Plesk, I recommend reading our recent blog post here.

Now, I hope your infrastructure looks similar to the one shown below (which I have redeployed from scratch and will use for the next steps):

[Diagram: example infrastructure with the remote database server, the NFS server, and two Plesk nodes]

HA Cluster: Installing the Software

The most important thing here is to have a similar configuration between nodes. You need to install the same software on each node, and I highly recommend installing the software in the same sequence. This is because some packages create their own system users and groups, and the numeric IDs of these users/groups should match between nodes to avoid permission issues.
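As a simple verification (a sketch only; any comparison method works), you can compare the installed package lists after installation and investigate any difference:

ha-node1# dpkg-query -W > /tmp/pkgs-node1.txt
ha-node2# dpkg-query -W > /tmp/pkgs-node2.txt
ha-node1# scp ha-node2:/tmp/pkgs-node2.txt /tmp/ && diff /tmp/pkgs-node1.txt /tmp/pkgs-node2.txt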

Like in the previous blog post, I will use Oracle Cloud Infrastructure. I will deploy the HA cluster on servers with ARM processors and a fresh pre-installed Ubuntu 22.04 LTS.

Plesk

To install Plesk, I use the following commands (“10.0.0.11” is the IP address of the remote DB in my infrastructure):

  • For ha-node1. It will install Plesk using the “ha_*” databases on the remote DB. Let’s imagine that ha-node1 is the active node; in one of the next steps, we will copy this installation to the shared NFS storage.

ha-node1# wget https://autoinstall.plesk.com/plesk-installer
ha-node1# chmod +x plesk-installer
ha-node1# PLESK_DB_DSN_PREFIX=mysql://plesk_db_admin:<plesk_db_admin_password>@10.0.0.11/ha_ ./plesk-installer install plesk --components panel bind l10n mysqlgroup webservers php7.4 php8.2 nginx wp-toolkit sslit letsencrypt ssh-terminal

  • For ha-node2. It will install Plesk using the “ha-node2_*” databases on the remote DB. Let’s imagine that ha-node2 is the passive node. For the passive node, it is important to use a different database prefix to avoid overwriting the database of the active node.

ha-node2# wget https://autoinstall.plesk.com/plesk-installer
ha-node2# chmod +x plesk-installer
ha-node2# PLESK_DB_DSN_PREFIX=mysql://plesk_db_admin:<plesk_db_admin_password>@10.0.0.11/ha-node2_ ./plesk-installer install plesk --components panel bind l10n mysqlgroup webservers php7.4 php8.2 nginx wp-toolkit sslit letsencrypt ssh-terminal

Plesk: panel.ini

To let Plesk know that it works as part of an HA cluster and that the “vhosts” directory is located on the NFS storage, add the following options to the “/usr/local/psa/admin/conf/panel.ini” file. As we have decided to treat ha-node1 as the active node, changing “panel.ini” only needs to be done on that specific node.

[webserver]
enablePleskHA = true
syncModeOnRemoveConfiguration = true
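If you prefer doing it from the shell, a quick way to append these options on ha-node1 (just a convenience sketch; editing the file manually works equally well):

ha-node1# cat >> /usr/local/psa/admin/conf/panel.ini <<'EOF'
[webserver]
enablePleskHA = true
syncModeOnRemoveConfiguration = true
EOF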

HA cluster software

For managing the HA cluster and its shared resources, we will use the open source solutions “corosync” and “pacemaker”. To install them, run the following command on both nodes.

ha-node1 and ha-node2# apt install corosync pacemaker

Floating IP management

To allow the cluster nodes to manage the floating IP address in Oracle Cloud, I have to install the ‘oci’ tool from Oracle Cloud, which allows managing IP addresses from the command-line interface (it will later be required by a resource agent), and set up trust between the ‘oci’ tool and Oracle Cloud. This step depends on your infrastructure, and a different environment will require different steps. It should be done on both nodes.

ha-node1 and ha-node2# bash -c "$(curl -L https://raw.githubusercontent.com/oracle/oci-cli/master/scripts/install/install.sh)"
ha-node1 and ha-node2# oci setup config

For more information about setting up the “oci” tool, I recommend reading the Quickstart documentation on the Oracle Doc Portal here.

For this step to be successful, it should be possible to get the status of the floating IP address and to attach/detach the IP address to any node of the cluster from the CLI, without any additional steps like entering a login and password.
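As a rough sketch of such a check on Oracle Cloud (the exact OCI CLI subcommands and the VNIC OCID below are assumptions for illustration; consult the OCI CLI documentation for your setup):

ha-node1# oci iam region list
ha-node1# oci network private-ip list --vnic-id <vnic_ocid>

If both commands return data without prompting for credentials, the resource agent will be able to use the CLI non-interactively.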

Monitoring

I am also going to monitor the status of each server. For that purpose, I will use the external 360 Monitoring service, because only testing from external sources can provide reliable data about the availability of servers and websites. Plesk will only run on one node at a time, but we have 4 servers in total, so I will install the agent on each of them from the command line (the unique token for the command can be obtained from the web interface):

# wget -q -N monitoring.platform360.io/agent360.sh && bash agent360.sh <unique_token>


HA Cluster: Configuring Software

Hosting-related system services

Since the software responsible for the HA cluster will manage system services like Plesk, nginx, etc., we should disable all of these services so that they are not started automatically with the system. The following command disables auto-starting for these services and should be executed on both nodes.

ha-node1 and ha-node2# for i in \
plesk-ip-remapping \
plesk-php74-fpm.service \
plesk-php82-fpm.service \
plesk-web-socket.service \
plesk-task-manager.service \
plesk-ssh-terminal.service \
plesk-repaird.socket \
sw-engine.service \
sw-cp-server.service \
psa.service \
cron.service \
xinetd.service \
nginx.service \
apache2.service httpd.service \
mariadb.service mysql.service postgresql.service \
named.service bind9.service named-chroot.service \
postfix.service; \
do systemctl disable $i && systemctl stop $i; \
done

In the output, you may see lines like “Failed to disable unit: Unit file bind9.service does not exist”. This is not a critical error: the command contains different names of the same service on different OSes like CentOS and Ubuntu, as well as names of different services that provide similar functionality (like MySQL and MariaDB). If you have installed an additional Plesk component such as “php80”, you also need to disable the service that this component added to the server.

You may run `ps ax` to double-check that there are no more running services related to Plesk or any of its components.
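For example (an illustrative filter only; adjust the pattern to the components you installed), no output here means the services are stopped:

ha-node1 and ha-node2# ps ax | grep -E 'psa|sw-engine|sw-cp-server|nginx|apache2|php-fpm|named|postfix' | grep -v grep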

Plesk Files

In the previous blog post, you can see how to copy the Plesk “vhosts” directory to the NFS storage. For the HA cluster, we need to do the same, plus a few additional steps for the rest of the Plesk directories, to make them available to the nodes of the HA cluster.

On the NFS server, configure the export of “/var/nfs/plesk-ha/plesk_files” in the same way as “/var/nfs/plesk-ha/vhosts”. After configuring, you should see that both directories are available for remote mounting from the internal network.
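A sketch of the corresponding /etc/exports entries on the NFS server (the option set matches the exportfs output shown below; adjust it to the setup from the previous post):

ha-nfs# cat /etc/exports
/var/nfs/plesk-ha/vhosts      10.0.0.0/24(rw,sync,no_subtree_check,no_root_squash)
/var/nfs/plesk-ha/plesk_files 10.0.0.0/24(rw,sync,no_subtree_check,no_root_squash)
ha-nfs# exportfs -ra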

ha-nfs# exportfs -v
/var/nfs/plesk-ha/vhosts
        10.0.0.0/24(sync,wdelay,hide,no_subtree_check,sec=sys,rw,secure,no_root_squash,no_all_squash)
/var/nfs/plesk-ha/plesk_files
        10.0.0.0/24(sync,wdelay,hide,no_subtree_check,sec=sys,rw,secure,no_root_squash,no_all_squash)

When the exports are configured, we need to copy Plesk to the NFS storage. For that, we pre-create a directory on each node for mounting the NFS storage:

ha-node1 and ha-node2# mkdir -p /nfs/plesk_files

Files: vhosts

As we previously decided to treat ha-node1 as the active node, the next commands should be executed on ha-node1. To copy the existing vhosts directory, run the following commands:

ha-node1# mount -t nfs -o "hard,timeo=600,retrans=2,_netdev" 10.0.0.12:/var/nfs/plesk-ha/vhosts /mnt
ha-node1# cp -aRv /var/www/vhosts/* /mnt
ha-node1# umount /mnt

Files: Plesk-related

Again, since ha-node1 is the active node, the next commands should be executed on ha-node1.

ha-node1# mount -t nfs -o "hard,timeo=600,retrans=2,_netdev" 10.0.0.12:/var/nfs/plesk-ha/plesk_files /nfs/plesk_files

ha-node1# mkdir -p /nfs/plesk_files/etc/{apache2,nginx,psa,sw,sw-engine,sw-cp-server,domainkeys,psa-webmail}
ha-node1# cp -a /etc/passwd /nfs/plesk_files/etc/
ha-node1# cp -aR /etc/apache2/. /nfs/plesk_files/etc/apache2
ha-node1# cp -aR /etc/nginx/. /nfs/plesk_files/etc/nginx
ha-node1# cp -aR /etc/psa/. /nfs/plesk_files/etc/psa
ha-node1# cp -aR /etc/sw/. /nfs/plesk_files/etc/sw
ha-node1# cp -aR /etc/sw-cp-server/. /nfs/plesk_files/etc/sw-cp-server
ha-node1# cp -aR /etc/sw-engine/. /nfs/plesk_files/etc/sw-engine
ha-node1# cp -aR /etc/domainkeys/. /nfs/plesk_files/etc/domainkeys
ha-node1# cp -aR /etc/psa-webmail/. /nfs/plesk_files/etc/psa-webmail

ha-node1# mkdir -p /nfs/plesk_files/var/{spool,named}
ha-node1# cp -aR /var/named/. /nfs/plesk_files/var/named
ha-node1# cp -aR /var/spool/. /nfs/plesk_files/var/spool

ha-node1# mkdir -p /nfs/plesk_files/opt/plesk/php/{7.4,8.2}/etc
ha-node1# cp -aR /opt/plesk/php/7.4/etc/. /nfs/plesk_files/opt/plesk/php/7.4/etc
ha-node1# cp -aR /opt/plesk/php/8.2/etc/. /nfs/plesk_files/opt/plesk/php/8.2/etc

ha-node1# mkdir -p /nfs/plesk_files/usr/local/psa/{admin/conf,admin/plib/modules,etc/modules,var/modules,var/certificates}
ha-node1# cp -aR /usr/local/psa/admin/conf/. /nfs/plesk_files/usr/local/psa/admin/conf
ha-node1# cp -aR /usr/local/psa/admin/plib/modules/. /nfs/plesk_files/usr/local/psa/admin/plib/modules
ha-node1# cp -aR /usr/local/psa/etc/modules/. /nfs/plesk_files/usr/local/psa/etc/modules
ha-node1# cp -aR /usr/local/psa/var/modules/. /nfs/plesk_files/usr/local/psa/var/modules
ha-node1# cp -aR /usr/local/psa/var/certificates/. /nfs/plesk_files/usr/local/psa/var/certificates

ha-node1# umount /nfs/plesk_files

Event Handlers to Keep /etc/passwd Up to Date

We need to update the passwd file on the NFS storage every time Plesk updates the system users. For that, we will create several event handlers for scenarios like domain creation, subscription updates, etc. Event handlers are stored in the Plesk database, which means we only need to run the following commands on the active node.

As we previously decided to treat ha-node1 as the active node, the next commands should be executed on ha-node1.

ha-node1# plesk bin event_handler --create -command "/bin/cp /etc/passwd /nfs/plesk_files/etc/passwd" -priority 50 -user root -event phys_hosting_create
ha-node1# plesk bin event_handler --create -command "/bin/cp /etc/passwd /nfs/plesk_files/etc/passwd" -priority 50 -user root -event phys_hosting_update
ha-node1# plesk bin event_handler --create -command "/bin/cp /etc/passwd /nfs/plesk_files/etc/passwd" -priority 50 -user root -event phys_hosting_delete

ha-node1# plesk bin event_handler --create -command "/bin/cp /etc/passwd /nfs/plesk_files/etc/passwd" -priority 50 -user root -event ftpuser_create
ha-node1# plesk bin event_handler --create -command "/bin/cp /etc/passwd /nfs/plesk_files/etc/passwd" -priority 50 -user root -event ftpuser_update
ha-node1# plesk bin event_handler --create -command "/bin/cp /etc/passwd /nfs/plesk_files/etc/passwd" -priority 50 -user root -event ftpuser_delete

ha-node1# plesk bin event_handler --create -command "/bin/cp /etc/passwd /nfs/plesk_files/etc/passwd" -priority 50 -user root -event site_subdomain_create
ha-node1# plesk bin event_handler --create -command "/bin/cp /etc/passwd /nfs/plesk_files/etc/passwd" -priority 50 -user root -event site_subdomain_update
ha-node1# plesk bin event_handler --create -command "/bin/cp /etc/passwd /nfs/plesk_files/etc/passwd" -priority 50 -user root -event site_subdomain_delete
ha-node1# plesk bin event_handler --create -command "/bin/cp /etc/passwd /nfs/plesk_files/etc/passwd" -priority 50 -user root -event site_subdomain_move


Cluster management software

Set up /etc/hosts on both nodes by adding the following lines so that corosync and pacemaker do not depend on DNS for resolving hostnames. Use the IP addresses and hostnames of your own infrastructure.

10.0.0.101 ha-node1.local
10.0.0.102 ha-node2.local

Add the following configuration to /etc/corosync/corosync.conf on both nodes, replacing the existing blocks. Again, use the IP addresses and names from your own infrastructure.

[...]

quorum {
    provider: corosync_votequorum
    two_node: 1
}

nodelist {
    node {
        name: ha-node1
        nodeid: 1
        ring0_addr: 10.0.0.101
    }
    node {
        name: ha-node2
        nodeid: 2
        ring0_addr: 10.0.0.102
    }
}

Cluster management software: resource agents

To manage the shared resources between nodes, resource agents are required. A resource agent is an executable file, typically a shell script, that follows certain specifications. We need several such agents:

  • to manage the floating IP-address,
  • to manage directories from the NFS storages,
  • to manage the Plesk-related services.

For your infrastructure, you need to find or create a resource agent for moving the floating IP address between nodes. You can see the whole list of resource agents in the ClusterLabs GitHub repository.

Nobody has tried to make Plesk part of an HA cluster before, which would mean that the resource agent for Plesk has to be written from scratch. But there is no need to do that, because we have already done it.
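For context, here is a heavily simplified sketch of the structure such a script follows (an OCF-style skeleton for illustration only – not the actual PleskHA agent):

#!/bin/sh
# Minimal OCF-style resource agent skeleton (illustration only)
: ${OCF_ROOT:=/usr/lib/ocf}
. ${OCF_ROOT}/lib/heartbeat/ocf-shellfuncs

case "$1" in
    start)
        # start the managed service here, then report success
        exit $OCF_SUCCESS ;;
    stop)
        # stop the managed service here
        exit $OCF_SUCCESS ;;
    monitor)
        # report $OCF_SUCCESS if the service is running, $OCF_NOT_RUNNING otherwise
        exit $OCF_NOT_RUNNING ;;
    meta-data)
        # a real agent prints its full XML <resource-agent> description (parameters, actions) here
        echo '<?xml version="1.0"?><resource-agent name="Skeleton" version="1.0"/>'
        exit $OCF_SUCCESS ;;
    *)
        exit $OCF_ERR_UNIMPLEMENTED ;;
esac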

So, what you need to do next is save the following files to the “/usr/lib/ocf/resource.d/heartbeat/” directory and make them executable. These steps should be done on both nodes:

ha-node1 and ha-node2# apt install jq
ha-node1 and ha-node2# cd /usr/lib/ocf/resource.d/heartbeat/
ha-node1 and ha-node2# wget -O OracleCloudFloatingIP https://scdn1.plesk.com/wp-content/uploads/2023/03/08181130/OracleCloudFloatingIP.pdf
ha-node1 and ha-node2# wget https://raw.githubusercontent.com/ClusterLabs/resource-agents/main/heartbeat/Filesystem
ha-node1 and ha-node2# wget -O BindFilesystem https://scdn1.plesk.com/wp-content/uploads/2023/03/08181505/BindFilesystem.pdf
ha-node1 and ha-node2# wget -O PleskHA https://scdn1.plesk.com/wp-content/uploads/2023/03/08181504/PleskHA.pdf
ha-node1 and ha-node2# chmod 755 OracleCloudFloatingIP Filesystem BindFilesystem PleskHA

HA Cluster: Starting the Cluster

These are almost all the steps required before we can start the HA cluster software and configure the nodes to use the shared resources. To start corosync and pacemaker, run the following command on both nodes:

ha-node1 and ha-node2# systemctl enable --now corosync pacemaker

Now you should be able to connect to and configure the cluster. To monitor the cluster status, run the `crm_mon` command on any of the nodes:

[Screenshot: crm_mon output]

There are no active resources yet, and we need to configure them now. To enter the configuration mode, execute `crm configure`; the following commands will configure the resources and the cluster. There are also comments on what each part does:

  • Default settings:

property stonith-enabled=false
property no-quorum-policy=ignore
rsc_defaults resource-stickiness=100

  • Configuring a primitive for the floating IP address: “FloatingIP” is the name, “OracleCloudFloatingIP” is the name of the resource agent (RA), and “params ip=130.162.213.97” is the parameter we pass to the RA. The timeouts are increased because the API needs some time to reconfigure the IP address.

primitive FloatingIP OracleCloudFloatingIP \
params ip=130.162.213.97 \
op start timeout=120s interval=0 \
op stop timeout=120s interval=0 \
meta target-role=Started

  • Configuring primitives for NFS: there are two primitives, the first for Plesk files, and the second for the “vhosts” directory.

primitive PrimaryFS Filesystem \
params device="10.0.0.12:/var/nfs/plesk-ha/plesk_files" directory="/nfs/plesk_files" fstype=nfs options="hard,timeo=600,retrans=2,_netdev"
primitive WebFS Filesystem \
params device="10.0.0.12:/var/nfs/plesk-ha/vhosts" directory="/var/www/vhosts" fstype=nfs options="hard,timeo=600,retrans=2,_netdev"

  • Configuring a set of bind-mounts that put the configuration files in the required places on the file system for each service used to provide the hosting service.

primitive ApacheConfigFS BindFilesystem \
params source="/nfs/plesk_files/etc/apache2" target="/etc/apache2"
primitive BindConfigFS BindFilesystem \
params source="/nfs/plesk_files/var/named" target="/var/named"
primitive MailDKIMFS BindFilesystem \
params source="/nfs/plesk_files/etc/domainkeys" target="/etc/domainkeys"
primitive NginxConfigFS BindFilesystem \
params source="/nfs/plesk_files/etc/nginx" target="/etc/nginx"
primitive PleskCertificateStoreFS BindFilesystem \
params source="/nfs/plesk_files/usr/local/psa/var/certificates" target="/usr/local/psa/var/certificates"
primitive PleskConfigFS BindFilesystem \
params source="/nfs/plesk_files/usr/local/psa/admin/conf" target="/usr/local/psa/admin/conf"
primitive PleskModuleConfigFS BindFilesystem \
params source="/nfs/plesk_files/usr/local/psa/etc/modules" target="/usr/local/psa/etc/modules"
primitive PleskModuleFS BindFilesystem \
params source="/nfs/plesk_files/usr/local/psa/admin/plib/modules" target="/usr/local/psa/admin/plib/modules"
primitive PleskModuleVarConfigFS BindFilesystem \
params source="/nfs/plesk_files/usr/local/psa/var/modules" target="/usr/local/psa/var/modules"
primitive PleskPHP74FPMConfigFS BindFilesystem \
params source="/nfs/plesk_files/opt/plesk/php/7.4/etc" target="/opt/plesk/php/7.4/etc" allow_missing=1
primitive PleskPHP82FPMConfigFS BindFilesystem \
params source="/nfs/plesk_files/opt/plesk/php/8.2/etc" target="/opt/plesk/php/8.2/etc" allow_missing=1
primitive PsaConfigFS BindFilesystem \
params source="/nfs/plesk_files/etc/psa" target="/etc/psa"
primitive SpoolFS BindFilesystem \
params source="/nfs/plesk_files/var/spool" target="/var/spool"
primitive SwConfigFS BindFilesystem \
params source="/nfs/plesk_files/etc/sw" target="/etc/sw"
primitive SwCpServerConfigFS BindFilesystem \
params source="/nfs/plesk_files/etc/sw-cp-server" target="/etc/sw-cp-server"
primitive SwEngineConfigFS BindFilesystem \
params source="/nfs/plesk_files/etc/sw-engine" target="/etc/sw-engine"

  • Configuring a group of local-mount primitives to simplify further configuration:

group MountConfigGroup ApacheConfigFS NginxConfigFS PsaConfigFS SwConfigFS SwCpServerConfigFS SwEngineConfigFS PleskConfigFS PleskCertificateStoreFS MailDKIMFS PleskModuleFS PleskModuleConfigFS PleskModuleVarConfigFS PleskPHP74FPMConfigFS PleskPHP82FPMConfigFS BindConfigFS SpoolFS

  • Configuring Plesk service as a resource primitive as well:

primitive Service-Plesk PleskHA \
op start interval=0 timeout=600 \
op stop interval=0 timeout=100 \
op monitor interval=30 timeout=10 \
meta target-role=Started

  • Configuring the order in which the shared resources should be started on a node (the IP address first, then the NFS mounts, then the local bind-mounts, and finally the Plesk service):

order NFS-BindFilesystem-Order Mandatory: FloatingIP PrimaryFS WebFS MountConfigGroup
order Configs-Then-Plesk-Order Mandatory: MountConfigGroup Service-Plesk

  • Configuring a colocation constraint so that all resources must run on the same node:

colocation Plesk-All-Colo inf: FloatingIP PrimaryFS WebFS MountConfigGroup Service-Plesk

  • Commit configuration changes and quit:

commit
quit

Once you have done all the above, you can check the cluster status again with the `crm_mon` command:

[Screenshot: crm_mon output with the configured resources]

The following image shows what we should have now:

[Diagram: the resulting HA cluster setup]

HA Cluster: Moving Plesk Between Nodes

Planned moving

Sometimes it might be necessary to move Plesk from one node to another due to maintenance procedures. In that case, you can start such a migration manually with the command `node standby ha-node2`. All resources will then stop on ha-node2 and start on ha-node1. To return ha-node2 to the online state, use `node online ha-node2`.
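For example, using the same crm shell we used for the configuration (the commands can also be run from the interactive crm prompt without the leading “crm”):

ha-node1# crm node standby ha-node2
ha-node1# crm node online ha-node2

The first command stops the resources on ha-node2 and starts them on ha-node1; the second makes ha-node2 available as a failover target again.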


Unplanned moving

In case of a disaster on the active node, the HA cluster will automatically restart the resources (including the Plesk service) on the passive node, which then becomes the new active node. To emulate a disaster, I created a task on each node of the HA cluster to reboot the server every hour, with a 30-minute shift between them. For the HA cluster, that means a disaster every 30 minutes. I also created two subscriptions: one with the ‘Plesk Default Page’, and another one with a default WordPress site.
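For illustration, cron entries roughly like these produce that failure pattern (an assumption about the schedule, not a copy of my exact test setup):

# /etc/crontab on ha-node1: reboot at the top of every hour
0 * * * *  root  /sbin/shutdown -r now
# /etc/crontab on ha-node2: reboot at half past every hour
30 * * * * root  /sbin/shutdown -r now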

As I mentioned at the beginning of the article, I also installed the “agent360” agent from 360 Monitoring on each server (including the database and NFS servers) to monitor externally how they work. It’s time to show the graphs for 24 hours.

I configured a cron task to reboot each server every hour, and because of that we can see the uptime resetting at about 3.6k seconds (1 hour):

[Graph: server uptime over 24 hours, resetting every hour]

Clustering does not require many resources. On the graphs, the peaks coincide with the reboots; the rest of the time, CPU idle stays at ~98-99%:

[Graph: CPU usage over 24 hours]

The graph below shows the number of processes, and how Plesk and hosting-related processes are started on a new node every time there is a disaster:

[Graph: number of processes on each node over 24 hours]

You may be wondering whether the websites on Plesk were working all this time. The answer is yes – they keep working, with small downtimes while the role of the active server migrates from one node to the other. Sometimes the external monitoring was not even able to detect the downtime because it performs checks every 60 seconds by default:

[Graph: website availability as reported by 360 Monitoring]

What Else is Important to Know?

For Plesk, it is important to have the same set of system users on each HA cluster node. This means that if you make such changes on one node, the same changes should be made on the second one.

If you are going to use extensions that install additional software, users, or system services, additional steps on the nodes will be required:

  • As an example, if you install a new service on a node (e.g. Grafana), it will not become an HA-cluster-aware service automatically; you will need to perform additional steps on the nodes for that.
  • Another example is mail. The step-by-step instructions provided here do not configure mail services for working in an HA cluster.

The PleskHA resource agent does not work correctly with dedicated PHP-FPM pools, because Plesk creates a new ‘systemd’ service for each pool. This means the services will not be the same on the HA nodes and will not be started and stopped correctly.

Summary

First of all, in this blog post we explained how it is now possible to run Plesk as a service in an HA cluster. The deployment steps depend on the infrastructure used. For some of the resources, the required scripts are already available (for Plesk and the NFS storage). For other resources (mostly the floating IP address), a script might need to be prepared individually, depending on the API of the infrastructure or hardware solution in use.

In conclusion, there are many possible optimizations and automation steps that could make the procedure easier. For now, though, I would recommend trying Plesk HA in a testing environment only.

So, if you have made it this far, thank you for reading the article. I would love to hear your thoughts, so please feel free to comment and discuss the idea in the comments section below or on the Plesk Forum – we have a dedicated topic for Plesk HA.
