
Fully containerised hosting with embedded code

Introduction

In fully containerised hosting, application code is embedded in the container image when the deployment pipeline is run. This is the recommended strategy for production websites.

Advantages

The primary advantages of full containerisation are scalability and resilience. Because the container image includes all code required to run the application, containers can be scheduled on any node in a cluster:

  • containers can be scheduled across redundant availability zones, maintaining application availability during hardware failure or even a complete outage of an availability zone

  • an application can scale to enterprise-grade dimensions, because containers can be scheduled across multiple workers. Kubernetes clusters scale automatically to meet pod scheduling requirements, allowing almost unlimited scaling.

Requirements and Assumptions

There are aspects to fully containerised hosting that can initially feel unfamiliar when coming from a more traditional deployment and hosting model. While the advantages justify the workflow changes required, the following limitations need to be taken into account when working with a fully containerised deployment system.

The production environment is essentially an appliance with immutable code

While application interactions with the database, caching systems, media storage and so on continue to function as usual, it makes more sense to think of the hosting environment as an appliance that code is deployed into, rather than as a server where running code is updated on the fly.

Read-only file system

Because the webstack is composed of dynamically scaling microservices, each with their own copy of the application code, it’s inadvisable to try to make changes to application code once deployed. For this reason we set permissions in the deployment pipeline to prevent writes to code that shouldn’t be changed. This prevents containers from running different versions of the application at the same time and helps prevent code injection attacks.
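As an illustration, a pipeline might lock the codebase down with something along these lines (a sketch only; the web user, paths and permission bits used by M.D.G. IT pipelines may differ):

CODE
# Sketch: make the deployed codebase read-only for the web user (user/group and paths are assumptions)
chown -R root:www-data /var/www/html/codebase
find /var/www/html/codebase -type d -exec chmod 555 {} +
find /var/www/html/codebase -type f -exec chmod 444 {} +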

Persistent storage

A shared EFS volume is mounted to all containers that require access to persistent, shared storage. While slower than local storage, EFS / NFS and similar shared file systems can be mounted to multiple worker nodes at once, so containers that need access to this storage are not limited to running on a single instance. The files served from EFS are generally cached by OPcache or a CDN downstream.

The files and directories that need to be shared between containers (and persist across deployments) are stored in /var/www/html/shared, and symlinks to these are created within /var/www/html/codebase. If you are using M.D.G. IT pipelines from GitLab, Bitbucket, etc., you can see which paths are linked in the SYMLINKS variable of the repository from which the site is deployed, as well as by inspecting the contents of the shared directory.
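As a quick check from an SSH session, you can confirm how a path is linked out to shared storage (pub/media is used purely as an example here; the paths actually linked are those defined in the SYMLINKS variable):

CODE
# List what lives on the shared volume
ls -la /var/www/html/shared
# Show the symlink itself (example path only)
ls -ld /var/www/html/codebase/pub/media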

Running Magento with embedded code

Because the application runs across multiple containers, each with its own file system, Magento commands that generate code or static files (such as compilation or static content deployment) would write their output to the file system of that specific container, leading to inconsistent states between containers. The solution is not to run commands that change application code (in fact, as described above, permissions prevent this) and instead to make these changes in the repository source, so that they are applied in the deployment pipeline.
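As an illustration, the build stage of a pipeline might run something along these lines before the image is built and pushed (a sketch only; the exact commands and flags used by M.D.G. IT pipelines may differ):

CODE
# Generate code and static assets once, at build time, so they are embedded in the image
composer install --no-dev --prefer-dist
php -d memory_limit=-1 bin/magento setup:di:compile
php -d memory_limit=-1 bin/magento setup:static-content:deploy -f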

Common Magento commands in containerised setups:

✅ bin/magento indexer:reindex

✅ bin/magento setup:upgrade, which must be run with the --keep-generated flag, e.g.:

CODE
php -d memory_limit=-1 /var/www/html/codebase/bin/magento setup:upgrade --keep-generated

❌ bin/magento setup:di:compile (compilation is performed in the deployment pipeline)

❌ bin/magento setup:static-content:deploy (static content is generated in the deployment pipeline)

❌ Enabling or disabling extensions — this requires setup:static-content:deploy to be run afterwards

To summarise the above, it is useful to think of SSH on production containerised environments as a tool for running commands against databases, caching components and shared storage, rather than for making changes to application code. The copy of the application code in the SSH container is discrete from that in every other container in the environment.

Running Magento in Production mode

In production mode, Magento doesn't dynamically generate or compile code on the fly, as it does in Default / Developer modes. Since all static-content generation and class compilation is performed ahead of time in the build pipeline, Magento should be in production mode in containerised environments unless there is a compelling reason not to.
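You can verify the current mode from an SSH session; the mode itself is normally set at build time, where --skip-compilation avoids repeating work the pipeline performs as a separate step (a sketch, not the exact pipeline commands):

CODE
# Check the current mode from an SSH container
php /var/www/html/codebase/bin/magento deploy:mode:show
# In the build pipeline, production mode would typically be set like this
php bin/magento deploy:mode:set production --skip-compilation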

Enabling and disabling Magento maintenance mode

Because maintenance mode needs to be set in all containers in the webstack, .maintenance.flag is symlinked out to shared storage. This means maintenance can be enabled as normal using bin/magento maintenance:enable; however, to turn maintenance off, please run

CODE
rm /var/www/html/shared/var/.maintenance.flag

Please also note that Magento generally doesn't use the IP provided in X-Forwarded-For headers. This means that Magento needs additional configuration to exclude IP addresses from maintenance mode when running on Kubernetes.

Add the following to app/etc/addheaders/di.xml

XML
<?xml version="1.0"?>
<config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:noNamespaceSchemaLocation="urn:magento:framework:ObjectManager/etc/config.xsd">

    <type name="Magento\Framework\HTTP\PhpEnvironment\RemoteAddress">
        <arguments>
            <argument name="alternativeHeaders" xsi:type="array">
                <item name="x-forwarded-for" xsi:type="string">HTTP_X_FORWARDED_FOR</item>
            </argument>
        </arguments>
    </type>
</config>
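With this configuration in place, Magento resolves the client IP from X-Forwarded-For, so the usual allow-list command works as expected. For example (203.0.113.10 is a placeholder address; assuming var is one of the shared paths, the resulting .maintenance.ip file is visible to all containers):

CODE
php /var/www/html/codebase/bin/magento maintenance:allow-ips 203.0.113.10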


