Custom configuration
The Mitter.io docker container ships with a number of configurable parameters. In its current form the configurability is quite limited, and we plan to add many more options soon.
Basic configuration
When running the container the following environment variables can be provided:
APP_AUTHORITY_NAME - The name used to issue JWT tokens to users. On the cloud instance it is mitter-io.
API_ENDPOINT - The publicly resolvable address for the API server.
DASHBOARD_ENDPOINT - The publicly resolvable address for the dashboard. This does not need to be modified unless you are extending the docker image and need some services to reach the dashboard externally.
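A minimal sketch of passing these when starting the container (the image name and endpoint values below are placeholders, not taken from this page):

```shell
# Illustrative only: mitterio/single-node and the endpoint values
# are placeholders; substitute your actual image and addresses.
docker run -d \
  -e APP_AUTHORITY_NAME=my-authority \
  -e API_ENDPOINT=https://api.example.com \
  -e DASHBOARD_ENDPOINT=https://dashboard.example.com \
  mitterio/single-node
```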
Specifying an APPLICATION_CONFIG
To further configure the running services, you can also supply an APPLICATION_CONFIG containing key-value pairs for various properties. The two environment variables used are:
CENTRAL_APPLICATION_CONFIG - Passes an application configuration object to the API server.
DASHBOARD_APPLICATION_CONFIG - Passes an application configuration object to the dashboard.
On the command line you would supply them using:
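As a hedged sketch, assuming the configuration is a flat JSON object of dotted property keys (the property names and the image name below are placeholders):

```shell
# Illustrative only: property keys and image name are placeholders.
docker run -d \
  -e CENTRAL_APPLICATION_CONFIG='{
    "some.property.path": "value"
  }' \
  -e DASHBOARD_APPLICATION_CONFIG='{
    "another.property.path": "value"
  }' \
  mitterio/single-node
```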
Since this can get quite lengthy to type and manage on the CLI (especially with the newlines), it is recommended to store the configuration in files, say mitter.api.config and mitter.dashboard.config, containing the relevant JSON, and then supply them using:
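A sketch of the file-based approach, expanding each file's contents into the corresponding environment variable (the image name is a placeholder):

```shell
# Illustrative only: mitterio/single-node is a placeholder image name.
# Each file contains the flat JSON configuration object.
docker run -d \
  -e CENTRAL_APPLICATION_CONFIG="$(cat mitter.api.config)" \
  -e DASHBOARD_APPLICATION_CONFIG="$(cat mitter.dashboard.config)" \
  mitterio/single-node
```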
Do note that you cannot nest JSON properties; the entire property path must be specified as a single string key. You can, however, use integer/boolean values where applicable.
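For example, with a hypothetical property path a.b.c (the names here are illustrative, not real properties), this flat form is accepted:

```json
{
  "a.b.c": "value",
  "a.b.enabled": true
}
```

whereas this nested form is not:

```json
{
  "a": { "b": { "c": "value" } }
}
```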
The supported properties are covered in the following sub-sections.
Configuring the signing key for JWTs
By default, a random string is used for signing the JWTs used for user authorization. This might not always be desired (although it will always work as long as the JWTs are verified by no one other than your running container) and can be overridden, along with other JWT parameters, by setting the following in CENTRAL_APPLICATION_CONFIG:
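The exact property key is not reproduced on this page, so the name below is purely an assumption sketching the shape of the configuration:

```json
{
  "mitter.security.jwt.signing-key": "<a-long-random-secret>"
}
```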
Configuring an external broker
To push messages over websockets from multiple docker containers behind a load balancer, you will need a RabbitMQ server that acts as a central broker between the containers.
NOTE This feature is currently experimental and not officially supported.
A test container can be run simply using:
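For example, the official RabbitMQ image can be started with the standard AMQP port exposed and its default guest/guest credentials:

```shell
# Starts a throwaway RabbitMQ 3.x broker for testing.
# AMQP listens on 5672; the default guest/guest account
# only accepts connections from localhost.
docker run -d --hostname mitter-rabbit --name mitter-rabbit \
  -p 5672:5672 rabbitmq:3
```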
Refer to the RabbitMQ documentation to find the following information:
The port the RabbitMQ server is running on
The username and password
Then enable the broker by adding the following to CENTRAL_APPLICATION_CONFIG:
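The property names below are assumptions sketching only the shape of the configuration, filled in with the host, port, and credentials gathered in the previous step:

```json
{
  "mitter.broker.enabled": true,
  "mitter.broker.host": "rabbitmq.internal",
  "mitter.broker.port": 5672,
  "mitter.broker.username": "guest",
  "mitter.broker.password": "guest"
}
```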
Storing media in S3
If you do not wish to store media from messages (file messages, image messages, etc.) locally on the host where your container is running, but on AWS S3 instead, you can configure the media storage to do so.
NOTE When images are stored in S3, there is no access protection mechanism in place. You must designate a publicly accessible bucket that can be accessed by anyone. This is not recommended if you are handling sensitive data.
NOTE This feature is currently experimental and not officially supported.
To configure the API server to store media in S3, set the following keys in your CENTRAL_APPLICATION_CONFIG:
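The property names below are assumptions, illustrating only the kind of values (bucket, region, credentials) such a configuration would carry:

```json
{
  "mitter.media.storage": "s3",
  "mitter.media.s3.bucket-name": "my-public-media-bucket",
  "mitter.media.s3.region": "us-east-1",
  "mitter.media.s3.access-key": "<your-access-key>",
  "mitter.media.s3.secret-key": "<your-secret-key>"
}
```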
Using an event bus
You can also have the docker container send installation-wide events to an AWS SNS topic. The events sent on the event bus are extremely granular and can result in a huge flux of messages to your message queue; most use cases are better served by the simpler, minimal webhooks interface.
NOTE This feature is currently experimental and not officially supported.
To configure an AWS SNS topic to receive all events on your installation, set the following keys in CENTRAL_APPLICATION_CONFIG:
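The property names below are assumptions sketching the shape of the configuration; the topic ARN follows the standard AWS SNS format:

```json
{
  "mitter.event-bus.enabled": true,
  "mitter.event-bus.sns.topic-arn": "arn:aws:sns:us-east-1:123456789012:mitter-events",
  "mitter.event-bus.sns.access-key": "<your-access-key>",
  "mitter.event-bus.sns.secret-key": "<your-secret-key>"
}
```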