AWS Deployment

Amazon Web Services (AWS) is well suited to Enterprise customers who need a robust platform with high availability. It is used and trusted by some of the largest companies in the world to serve as their cloud infrastructure. Below is a diagram showing the architecture of an AWS deployment.


This guide assumes you already have an account with Amazon AWS. Once you create an account, you will be ready to create your Enterprise deployment. The steps below walk through setting up DocumentDB, configuring an S3 bucket for the PDF server, and deploying to Elastic Beanstalk.


Before we start our deployment, we first need to create a database to store all of the Forms and Submissions within our deployment. We will use Amazon DocumentDB as our central database.
To get this set up, follow the instructions below.
  • Within your AWS Console, type DocumentDB and click on the link that says Amazon DocumentDB
  • On the next page, you will click on the button that says Launch Amazon DocumentDB.
  • In the next section, we will choose our Instance configuration. For this, you will provide the following.
    • Cluster Identifier: Keep default or type your own name.
    • Engine Version: 4.x and below.
    • Instance Class:
      • Production Environment: At least db.r5.large
      • Development Environment: At least db.t3.medium
    • Number of Instances:
      • Production Environment: At least 2
      • Development Environment: At least 1
  • The following diagram illustrates a typical configuration for a Development Environment.
  • In the next section, provide a Master username and a secure password.
  • Finally, press the Create Cluster button.
  • Now that our DocumentDB cluster is created, click on the cluster link, and then copy the application connection string. It should look like the following.
  • Next, we will want to change this connection string into a standard connection string. We will do this by first removing everything after the “:27017/”, and then adding our database name to the end of :27017/. We can pick any name here, but for this example, let's use formio.
  • Keep in mind that we've removed the readPreference=secondaryPreferred flag entirely, as Form.io does not currently support this preference.
  • Now, we will want to replace <insertYourPassword> with the password you chose above when you created the cluster.
    mongodb://formio:yourPassword@<your-cluster-endpoint>:27017/formio
  • Finally, we will want to add ?ssl=true&retryWrites=false to the end of the URL to indicate that the database connection uses SSL, and to disable retryWrites since DocumentDB does not support it.
    mongodb://formio:yourPassword@<your-cluster-endpoint>:27017/formio?ssl=true&retryWrites=false
  • Make sure to copy this connection string for use later.
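The string manipulation above can be summarized in a small shell sketch. All of the values (user, password, cluster endpoint) are placeholders; substitute your own:

```shell
# Assemble the final DocumentDB connection string from its parts.
# Every value below is a placeholder -- substitute your own.
DB_USER="formio"
DB_PASS="yourPassword"
DB_HOST="docdb-cluster.cluster-abc123xyz.us-west-2.docdb.amazonaws.com"
DB_NAME="formio"

# ssl=true enables TLS; retryWrites=false is required because
# DocumentDB does not support retryable writes.
MONGO="mongodb://${DB_USER}:${DB_PASS}@${DB_HOST}:27017/${DB_NAME}?ssl=true&retryWrites=false"
echo "$MONGO"
```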


Before we set up our API and PDF servers, we first need to set up an S3 bucket which will contain the uploaded PDFs used by the PDF server.
  • To set up an S3 bucket, navigate to the S3 section of Amazon AWS by clicking on the home page, and then typing S3 in the Search bar. Click on S3 in the Search results.
  • On the next page, click on the Create Bucket button.
  • On the next page, under General Settings, give it a Bucket Name of your choice.
  • Now skip over all the other sections (leaving the default configurations), and then at the bottom of the page, press Create Bucket to create your new S3 bucket.
  • Now that we have an S3 bucket created, we now need to create an IAM role with admin rights to this S3 bucket. We will do this by navigating back to the AWS homepage and then typing IAM in the Search bar.
  • Within the IAM page, we will create a new user by clicking on Users and then clicking on Add Users.
  • On the next page, we will call our User pdf-server, and we will want to make sure to check Programmatic Access under the Access Type section. Then click the Next button.
  • In the Permissions page, we will want to Attach existing policies directly, and then we will want to Create Policy.
  • This will open up the “Create Policy” page. In this page, we will first want to click on Choose a service. Select S3.
  • In the Actions section, you will then select All S3 actions.
  • In the next Resources section, we will want to click on the Add ARN link next to the bucket section. Here we will provide the bucket name we chose in the previous section, and then click Add.
  • In the object resource section, click Add ARN, provide our bucket name again, and then check the Any checkbox for the object name. Then click the Add button.
  • For the other Resources, click on them and then click on the Any settings for all of them. When you are done, it should look like the following.
  • Click the Next buttons to skip ahead until you get to the Review Policy page.
  • For the Name, just provide pdf-server-s3, then click on the Create policy button.
  • Now that the policy has been created, we can attach it to our new user by going back to the Add user tab, clicking the Refresh icon, and then searching for pdf-server-s3 in the search bar.
  • Now, click on the Next button in the IAM wizard. Skip ahead until you get to the Review page, and then click Create user
  • On the next page, make sure to copy the Access Key ID as well as the Secret access key, and save this for later.
  • We are now ready to move onto setting up the Elastic Beanstalk deployment!
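For reference, the policy assembled through the console clicks above corresponds roughly to the following JSON document. The bucket name my-formio-pdfs is a placeholder, and this minimal sketch covers only the bucket and object ARNs rather than the "Any" selections on the remaining resource types:

```shell
# Write out a minimal version of the pdf-server-s3 policy.
# "my-formio-pdfs" is a placeholder bucket name; replace it with yours.
cat > pdf-server-s3-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PdfServerS3Access",
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::my-formio-pdfs",
        "arn:aws:s3:::my-formio-pdfs/*"
      ]
    }
  ]
}
EOF
```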

Elastic Beanstalk

Now that we have our database and S3 configured, we will be using Elastic Beanstalk to manage our docker deployments.
  • Within the AWS Home page, type Elastic Beanstalk into the search and click on the link provided.
  • Once you are within the Elastic Beanstalk main page, you will want to click on the link that says Create Application
  • NOTE: You may see a different page that has a link that says Create New Environment. If so, then click on that link, and then select the Web server environment on the next page and press Select.
  • In the next screen, provide an Application Name
  • Scroll down and then select Docker as the environment, and then ensure you have Docker running on 64bit Amazon Linux 2 selected.
  • For this next section, click on Upload your code. Then follow the instructions below.
IMPORTANT NOTE: This file assumes you are using DocumentDB. If you are using any other external database provider, or an internal Community Edition database, then you will need to extract this ZIP file, open up the docker-compose.yml file and then remove the lines that reference the MONGO_CA environment variable. Otherwise the database connection will try to use the AWS certificate to your external database provider and the connection will fail.
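If you are not using DocumentDB, the edit described in the note above can also be scripted. This sketch creates a tiny stand-in docker-compose.yml just to demonstrate (the real file comes from the deployment ZIP) and strips every line that references MONGO_CA:

```shell
# Create a tiny stand-in docker-compose.yml to demonstrate the edit;
# the real file comes from the extracted deployment ZIP.
cat > docker-compose.yml <<'EOF'
services:
  api-server:
    environment:
      MONGO: "mongodb://example"
      MONGO_CA: "/path/to/ca-bundle.pem"
EOF

# Delete every line that references MONGO_CA (keeps a .bak backup).
sed -i.bak '/MONGO_CA/d' docker-compose.yml
```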
  • Next click on the button that says Configure More Options
  • On the next page, we will first want to ensure that our deployment is configured to auto-scale, which we can do by clicking High Availability under Presets.
  • Next, we will click on Edit link within the Software section.
  • We will now need to provide the following environment variables within the Environment properties section.
    • MONGO: The MongoDB connection string to connect to your remote database. This is the value we copied before.
      mongodb://formio:yourPassword@<your-cluster-endpoint>:27017/formio?ssl=true&retryWrites=false
    • LICENSE_KEY: The license key for your deployment. You will get this when you upgrade a project to Enterprise.
    • PORTAL_ENABLED: Used to enable the On-Premise portal.
    • ADMIN_EMAIL: An admin account you would like to use as the first Admin user.
    • ADMIN_PASS: A password for the first Admin user. This can be changed after the deployment is finished.
    • DB_SECRET: A secure secret that you will pick, used to encrypt the project settings.
    • PORTAL_SECRET: If PORTAL_ENABLED is not set (as in an API Environment), then this secret is used to connect another portal to this environment.
    • JWT_SECRET: A secure secret that you will pick, used to establish secure JWT tokens.
    • FORMIO_S3_BUCKET: The name of the bucket we created in the previous section.
    • FORMIO_S3_REGION: The region in which the S3 bucket was created.
    • FORMIO_S3_KEY: The Access Key ID we saved in the previous step.
    • FORMIO_S3_SECRET: The Secret access key that we saved in the previous step.
  • NOTE: If you wish to secure your Environment Variables from visibility, we recommend looking into the AWS Key Management Service (KMS).
  • These settings will look like the following.
  • Now press the Save Button to save your environment settings.
  • Next we will configure the Instances settings
  • Within the instance settings, we need to ensure that the Security Groups enabled are the same as those we established for DocumentDB. By default, this is going to be the default security group. Select it and press the Save button.
  • Next we will configure the Capacity settings.
  • Within this section, we will make sure to select an instance size that is suitable for our Environment. Form.io recommends the following configurations:
    • Development Environments: at least t3.medium
    • Production Environments: at least t3.large
    For this example, we will just select t3.medium.
  • Now press the Save button at the bottom of the page.
  • Next, we will edit the Network settings
  • We need to ensure that our instances are in the same VPC as our DocumentDB database as well as the same subnets.
  • We also need to do the same for our Load Balancer settings. For production Environments, you will want to create "Private" and "Public" subnets, where the public subnets are attached to an Internet Gateway. The private subnets are then assigned to the DocumentDB database, whereas the public subnets are assigned to the Load Balancer. See the AWS documentation on the recommended configuration for private/public subnets with load balancers and databases.
  • Now press the Save button to save the Network settings.
  • You can now press the Create App button at the bottom of the page to build your environment.
  • This will now create a new Environment within AWS for your deployment. Once it is ready, click on the Application URL and you should see your portal.
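Taken together, the environment properties from the Software configuration step above can be sketched as shell exports. Every value below is a placeholder, and the secret-variable names (MONGO, LICENSE_KEY, DB_SECRET, PORTAL_SECRET, JWT_SECRET) follow Form.io's deployment conventions; confirm them against your own deployment documentation:

```shell
# Example environment properties for the Elastic Beanstalk Software
# configuration. All values are placeholders.
export MONGO="mongodb://formio:<password>@<cluster-endpoint>:27017/formio?ssl=true&retryWrites=false"
export LICENSE_KEY="<your-license-key>"
export PORTAL_ENABLED="true"
export ADMIN_EMAIL="admin@example.com"
export ADMIN_PASS="<initial-admin-password>"
export DB_SECRET="<random-settings-encryption-secret>"
export PORTAL_SECRET="<random-portal-secret>"
export JWT_SECRET="<random-jwt-secret>"
export FORMIO_S3_BUCKET="my-formio-pdfs"
export FORMIO_S3_REGION="us-west-2"
export FORMIO_S3_KEY="<access-key-id>"
export FORMIO_S3_SECRET="<secret-access-key>"
```

These pairs are entered one at a time into the Environment properties form rather than run as a script; the exports are only a compact way to list them.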
We are now ready to create a new Project!

Project

  • Now that we have our deployment up and running, the first step is to login to our new deployment. On the first page, we will now use the ADMIN_EMAIL and ADMIN_PASS values (which we added to the Environment Variables in a previous step) to authenticate into the developer portal.
  • Once you are logged into the Developer Portal, we will now create a new Project.
  • In the popup modal, give your project a title and then click Create Project

Domain Routing (Route 53)

Setting up Route 53 Domain Routing and validating an SSL Certificate using AWS Certificate Manager
Now that your Environment is up and running, the next task is to attach a Domain to the Elastic Beanstalk deployment. If you configured the Elastic Beanstalk deployment to use High Availability, then it will have created some Elastic Load Balancers in front of the deployment which you can link the DNS records against.
  • To get started, navigate to the homepage and then search for Route 53
  • Next, you will need to create a Hosted Zone
  • You will now provide your domain name and then press Create hosted zone
  • Next, you will create a new Record Set and then provide the following record.
    • Name - * (a wildcard record)
    • Type - A Record
    • Alias - Yes
    • Then select the Elastic Load Balancer as the target.
    • Now press Create
  • When you are done, your routes should look something like the following.
  • Now, set your domain's Nameservers to point to the ones provided by Route 53. Once the DNS changes propagate, you should be able to see the deployed API within that domain.
  • Next, go back into your Portal, and then update the PDF Server URL that we configured in a previous step with the new DNS name.

Configure SSL certificate for Application Load Balancer

Create basic records in Route 53. The following example shows a created record for the root path and for the "www" path.
Here's the route record after the update.
Next, create an SSL certificate by clicking the "Request" button in the AWS Certificate Manager console.
Add the SSL certificate to the Route 53 domain and validate the cert. Choose the recommended DNS validation option; this creates a "CNAME" record in your Route 53 hosted zone that validates the certificate requested in the previous step.
If you refresh your certificate screen it should look something like this when the certificate has been issued.
If the certificate is not issued, delete and reissue the certificate.
Next, add a listener to the load balancer for port 443 (HTTPS). Navigate to the EC2 Dashboard, open the Load Balancers page under Load Balancing, click the Listeners tab in the main screen, and then click the "Add Listener" button.
Set the listener to "HTTPS" and port to 443, then choose a Security Policy. Finally, click the "Select a Certificate" button.
Navigate back to the Load Balancer Listeners tab. If you see a small orange icon next to the Port 443 it means you need to add an Inbound rule for port 443 to the security group. Hover over the icon for details on what security group needs to be updated.
Navigate to the Security Groups by going to EC2 Dashboard > Security Groups. Expand the column to figure out which security group it is then click "Edit Inbound Rules".
Select HTTPS and choose a source then click "Save Rules". This should remove the orange caution mark on the listener's tab shown in the previous steps.
Navigate to your domain to see if the SSL Certificate has been configured.


Troubleshooting

There are many reasons why your docker containers may fail to start. When this happens, you will need to troubleshoot by observing the logs from those containers. This can be done by downloading the logs from Elastic Beanstalk, but on some rare occasions, such as when a License has been disabled, the containers will not create any logs. When this occurs, the best course of action is to SSH into the EC2 instances to diagnose the problem.
First, you will need to create a Key Pair so that you can perform the SSH. You can do this by navigating to the EC2 section of AWS, clicking on Key Pairs, and then clicking Create key pair.
Next you will follow the instructions to create a new key pair. When you are done, you will download the private key onto your local machine. You will need to ensure that this downloaded key has the correct permissions by doing the following.
chmod 0400 my-key.pem
Next, you will navigate to the Elastic Beanstalk deployment, and then edit the Configurations for your deployment. Click Edit on the Security Section.
Next, you will select your new key pair and then click Save.
Once your deployment is finished making these updates, you will now go to the EC2 section of AWS, click on Instances, and find the instance that is associated with your deployment. You will then copy the Public DNS Name of that instance.
You can now ssh into your instance by performing the following command.
ssh -i ./my-key.pem ec2-user@<your-instance-public-dns>
Once you have SSH'd into your instance, you can perform the following to see the docker containers.
sudo su
docker ps -a
This should show you the failed container as you see here.
Once you see these containers, you will then need to copy the Container ID of one of the failed containers. You can then commit that container to a debug image and open a shell inside it by running the following commands.
# Snapshot the failed container's filesystem into a debug image
docker commit [CONTAINER_ID] formio-debug

# Launch an interactive shell in that image, passing the same
# environment variables you used for the deployment
docker run -it \
-e "PORTAL_ENABLED=true" \
-e "DEBUG=true" \
-e "FORMIO_S3_KEY=--- S3 KEY ---" \
-e "FORMIO_S3_REGION=us-west-2" \
-e "FORMIO_S3_SECRET=---- S3 SECRET ---" \
--rm --entrypoint sh formio-debug
Of course, you will replace these values with the ones you used to deploy to Elastic Beanstalk. Once inside the container, you can try running the software manually as follows.

For pdf-server:

node pdf.js

For formio-enterprise:

node formio.js
This will then output the reason why it cannot start. For example, here is the output from a PDF server that could not start.
This tells us that the License for this server has been disabled.