Custom configuration

The docker container comes with a number of configurable parameters. In its current form the configurability is quite limited, and we plan to add a lot more options soon.

Basic configuration

When running the container, the following environment variables can be provided:

  1. APP_AUTHORITY_NAME - The name used to issue JWT tokens to users. On the cloud instance it is mitter-io.

  2. API_ENDPOINT - The publicly resolvable address for the API server.

  3. DASHBOARD_ENDPOINT - The publicly resolvable address for the dashboard. This does not need to be modified unless you are extending the docker image and need some services to use the dashboard externally.
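
For example, a container with all three variables set might be started as follows (the authority name and endpoint values here are placeholders for your own deployment):

```shell
docker run \
  -e APP_AUTHORITY_NAME=my-company \
  -e API_ENDPOINT=https://api.example.com \
  -e DASHBOARD_ENDPOINT=https://dashboard.example.com \
  -p11901:11901 -p11902:11902 \
  mitterio/mitter:sa
```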


To further configure the running services, you can also supply an application configuration containing key-value pairs for various properties. The two environment variables used are:

  1. CENTRAL_APPLICATION_CONFIG - This passes an application configuration object to the API server.

  2. DASHBOARD_APPLICATION_CONFIG - This passes an application configuration object to the dashboard.

On the command line you would supply them using:

docker run -e CENTRAL_APPLICATION_CONFIG='{
".. property path ..": "value"
}' mitterio/mitter:sa

Since this can get quite lengthy to type and manage on the CLI (especially with the newlines), it is recommended to store the configuration in files, say mitter.api.config and mitter.dashboard.config, with the specific JSON, and then supply them using:

docker run -e CENTRAL_APPLICATION_CONFIG="$(cat mitter.api.config)" \
-e DASHBOARD_APPLICATION_CONFIG="$(cat mitter.dashboard.config)" \
-p11901:11901 -p11902:11902 mitterio/mitter:sa

Do note that you cannot nest JSON properties; the entire property path must be specified as a single string in the key. You can, however, use integer/boolean values where applicable.
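
For instance, rather than nesting objects, flat keys spelling out the full property path are used, as in this illustrative fragment (the property names here are hypothetical, not actual mitter properties):

```json
{
  "mitter.example.feature.enabled": true,
  "mitter.example.feature.timeout": 1000,
  "mitter.example.feature.name": "value"
}
```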

The supported properties are covered in the following sub-sections.

Configuring the signing key for JWTs

By default, a random string is used for signing the JWTs used for user authorization. This might not always be desired (although it will always work as long as the JWTs are verified by nothing other than your running container) and can be overridden, along with other JWT parameters, by setting the following in CENTRAL_APPLICATION_CONFIG:

"": ".. your issuer name ...",
"": 1000, // Milliseconds
"": ".. your key, base64 encoded .."

Configuring an external broker

To push messages via websockets from multiple docker containers behind a load balancer, you will need to run a RabbitMQ server that can act as a central broker between the containers.

NOTE This feature is currently experimental and not officially supported.

A test container can be run simply using:

docker run rabbitmq

Refer to the RabbitMQ documentation to find out the following information:

  1. The port the RabbitMQ server is running on

  2. The username and the password.
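
With the official rabbitmq image, the broker listens on port 5672 and the default credentials are guest/guest; the image's RABBITMQ_DEFAULT_USER and RABBITMQ_DEFAULT_PASS variables let you override them. A slightly fuller test invocation might look like (the credentials here are placeholders):

```shell
docker run -d \
  -e RABBITMQ_DEFAULT_USER=mitter \
  -e RABBITMQ_DEFAULT_PASS=changeme \
  -p5672:5672 \
  rabbitmq
```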

Then enable the broker by adding the following in CENTRAL_APPLICATION_CONFIG:

"": true,
"": ".. host ..",
"": ".. rabbitmq username ..",
"": ".. rabbitmq password .."

Storing media in S3

If you do not wish to store media from messages (file messages, image messages, etc.) locally, where your container is running, but on AWS S3 instead, you can configure the media storage to do so.

NOTE When images are stored in S3, there is no access protection mechanism active. You must designate a publicly accessible bucket that can be accessed by anyone. This is not recommended if you are handling sensitive data.

NOTE This feature is currently experimental and not officially supported.

To configure the API server to store media in S3, set the following keys in your CENTRAL_APPLICATION_CONFIG:

"": false,
"": true,
"": ".. S3 bucket to store media in ..",
"": ".. aws access key ..",
"": ".. aws secret key ..",
"": ".. aws region ..",

Using an event bus

You can also have the docker container send installation-wide events to an AWS SNS topic. The events sent on the event bus are extremely granular and can result in a huge influx of messages to your message queue. Most use cases can be served by the simpler, more minimal Webhooks interface.

NOTE This feature is currently experimental and not officially supported.

To configure an AWS SNS topic to receive all events on your installation, set the following keys in CENTRAL_APPLICATION_CONFIG:

"mitter.eventing.entbus.enabled": true,
"mitter.eventing.entbus.sns.topic-arn": ".. aws sns topic ..",
"": ".. aws access key ..",
"": ".. aws secret key ..",
"": ".. aws region .."