Customized Use#

You should be able to make most changes by overriding default variable settings in your local_configure.yml file. We've made a serious effort to make sure that all those settings are documented in Plone's Ansible Playbook documentation.

For example, if you want to change the time at which backup occurs, you can check the docs and discover that we have a plone_backup_at setting.

The default setting is:

  plone_backup_at:
    minute: 30
    hour: 2
    weekday: "*"

That's 02:30 every morning.

To make it 03:57 instead, use:

  plone_backup_at:
    minute: 57
    hour: 3
    weekday: "*"

in your local_configure.yml file.

Common Customization Points#

Let's review the settings that are commonly changed.

Plone Setup#

Eggs And Versions#

You're likely to want to add Python packages to your Plone installation to enable add-on functionality.

Let's say you want to add collective.easyform and webcouturier.dropdownmenu.

Add to your local_configure.yml:

    plone_additional_eggs:
      - collective.easyform
      - webcouturier.dropdownmenu

If you add eggs, you should nearly always specify their versions:

  plone_additional_versions:
    - "collective.easyform = 2.0.0b2"
    - "webcouturier.dropdownmenu = 3.0.1"

That takes care of packages that are available on the Python Package Index. What if you're developing packages via git?

  plone_sources:
    - "some.other.package = git git:// rev=1.1.5"

There's more that you can do with the plone_sources setting. See the docs!
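The repository URL in the entry above is truncated in these docs. As a hedged sketch only, a complete plone_sources entry might look like the following; the package name and repository URL here are placeholders, not real projects:

```yaml
# Hypothetical example: package name and repository URL are placeholders.
plone_sources:
  - "some.other.package = git https://github.com/example/some.other.package.git rev=1.1.5"
```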

Buildout From Git Repository#

It's entirely possible that the buildout created by the playbook won't be adequate to your needs.

If that's the case, you may check out your whole buildout directory via git by setting plone_buildout_git_repo to your repository URL and picking the branch or tag to check out:

plone_buildout_git_version: master

Make sure you check the documentation on this setting.

Even if you use your own buildout, you'll need to make sure that some of the playbook settings reflect your configuration.
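For instance, if your custom buildout changes the client count or ports, the corresponding playbook settings should mirror it. A minimal sketch, with assumed values; match them to what your buildout actually builds:

```yaml
# Assumed values for illustration only.
plone_instance_name: custom      # must match your buildout's instance naming
plone_client_count: 3            # number of ZEO clients your buildout creates
plone_client_base_port: 8081     # port of client1
```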

Running Buildout And Restarting Clients#

By default, the playbook tries to figure out whether buildout needs to be run. If you add an egg, for example, the playbook will run buildout to update the buildout-controlled portions of the installation.

If you don't want that behavior, change it:

plone_autorun_buildout: no

If autorun is turned off, you'll need to log in and run buildout yourself whenever settings change. (When you first run the playbook on a new server, buildout will always run.)

If automatically running buildout bothers you, automatically restarting Plone after running buildout will seem foolish. You may turn it off:

plone_restart_after_buildout: no

That gives you the option to log in and run the client restart script yourself. If you're conservative, you'll first try stopping and starting the reserved client to confirm that the new configuration works.


By the way, if buildout fails, your playbook run will halt. You don't need to worry that an automated restart might occur after a failed buildout.

Web Hosting Options#

It's likely that you're going to need to make some changes in nginx configuration. Most of those changes are made via the webserver_virtualhosts setting.

webserver_virtualhosts should contain a list of the hostnames you wish to support. For each one of those hostnames, you may make a variety of setup changes.

The playbook automatically creates a separate host file for each host you configure.

Here's the default setting:

  webserver_virtualhosts:
    - hostname: "{{ inventory_hostname }}"
      default_server: yes
      zodb_path: /Plone

This connects your server's inventory hostname to the site at /Plone in the ZODB.

A more realistic setting might look something like:

  webserver_virtualhosts:
    - hostname:
      default_server: yes
      zodb_path: /Plone
      port: 80
      protocol: http
      client_max_body_size: 4M
    - hostname:
      zodb_path: /Plone
      port: 443
      protocol: https
      certificate_file: /thiscomputer/path/mycert.crt
      key_file: /thiscomputer/path/mycert.key

Here we're setting up two separate hosts, one for http and one for https. Both point to the same ZODB path, though they don't have to.

The https host item also refers to a key/certificate file pair on the Ansible host machine. They'll be copied to the remote server.

Alternatively, you could specify use of certificates already on the server:

  - hostname: ...
    certificate:
      key: /etc/ssl/private/ssl-cert-snakeoil.key
      crt: /etc/ssl/certs/ssl-cert-snakeoil.pem


One hazard of the current playbook's web server support is that it does not delete old host files. If you previously set up a host and then deleted that item from the playbook's host list, the nginx host file would remain. Log in and delete it if needed. Yes, this is an exception to the "don't log in to change configuration" rule.

See also

For an example of using free Let's Encrypt certificates with certbot and auto-renewal, see Let's Encrypt Certificates and certbot.

Extra tricks#

There are a couple of extra settings that allow you to do additional customization if you know nginx directives. For example:

  - hostname:
    protocol: http
    extra: return 301 https://$server_name$request_uri;

This is a redirect to https. It takes advantage of the fact that if you do not specify a zodb_path, the playbook will not automatically create a location stanza with rewrite and proxy_pass directives.

Mail Relay#

Some cloud server companies do not allow servers to directly send mail to standard mail ports. Instead, they require that you use a mail relay.

This is a typical setup:

mailserver_relayhost: smtp.sendgrid.net
mailserver_relayport: 587
mailserver_relayuser: yoursendgriduser
mailserver_relaypassword: yoursendgridpassword

Bypassing Components#

Remember our stack diagram? The only part of the stack that you're stuck with is Plone.

All the other components may be replaced. To replace one, first prevent the playbook from installing the default component. Then use a playbook of your own to install the alternative.

For example, to install an alternative to the Postfix mail agent, add:

install_mailserver: no


If you choose not to install HAProxy, Varnish, or nginx, you take on some extra responsibilities. In particular, you'll need to make sure that your port addresses match up. If, for example, you replace HAProxy, you will need to point Varnish to the new load balancer's frontend, and point the new load balancer to the ZEO clients.
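As a sketch of the port-matching involved, assume you replace HAProxy with your own load balancer. The install_mailserver switch appears earlier in this document; the install_loadbalancer name below is an assumption in the same pattern, so verify it against the playbook docs:

```yaml
# A sketch, assuming you replace HAProxy with your own load balancer.
install_loadbalancer: no   # assumed switch name; check the playbook docs
loadbalancer_port: 8080    # where Varnish expects your replacement to listen
```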

Multiple Plones Per Host#

We've covered the simple case of having one Plone server installed on your server. In fact, you may install additional Plones.

To do so, you create a list variable playbook_plones containing the settings that are specific to each of your Plone instances.

Nearly all the plone_* variables, and a few others like loadbalancer_port and webserver_virtualhosts may be set in playbook_plones. Here's a simple example:

  playbook_plones:
    - plone_instance_name: primary
      plone_zeo_port: 8100
      plone_client_base_port: 8081
      loadbalancer_port: 8080
      webserver_virtualhosts:
        - hostname: "{{ inventory_hostname }}"
          aliases:
            - default
          zodb_path: /Plone
    - plone_instance_name: secondary
      plone_zeo_port: 7100
      plone_client_base_port: 7081
      loadbalancer_port: 7080
      webserver_virtualhosts:
        - hostname:
          zodb_path: /Plone

Note that you're going to have to specify a minimum of an instance name, a ZEO port, and a client base port (the port of client1 for this Plone instance).

You may specify up to four items in your playbook_plones list. If you need more, see the docs as you'll need to make a minor change in the main playbook.

The Plone Role -- Using It Independently#

For big changes, you may find that the full playbook is of little or no use. In that case, you may still wish to use Plone's Ansible Role independently, in your own playbooks.

The Plone server role is maintained separately, and may become a role in your playbooks if it works for you.