Mitter is also distributed as a docker container, publicly available on Docker Hub. It contains all the features of the public sandbox hosted at mitter.io, with the following exceptions:
You cannot register for an account or have multiple accounts for your services; Mitter runs in single-tenant mode. You can, however, still create multiple applications and use them with the same guarantees on data isolation.
Certain features, such as temporary media storage, the distributed message broker and the event bus, are either not available or not supported.
If you are running multiple docker containers behind a load balancer, websocket deliveries will work only intermittently, because messages and connections are not exchanged between the serving nodes. To serve traffic from multiple nodes, refer to the section 'Configuring an external broker'.
NOTE The Mitter.io docker container is currently in public beta and is intended for development and testing. If you wish to run it in production, reach out to us at firstname.lastname@example.org.
Getting the docker container is simple:
docker pull mitterio/mitter:sa
The mitter docker containers do not support the latest tag, as mitter is expected to be delivered in the following formats:
sa - (AVAILABLE) The standalone build, which bundles the platform, the dashboard and all dependent services.
sa-headless - (COMING SOON) The standalone build, which bundles the platform and all dependent services, but not the dashboard.
headless - (COMING SOON) The headless build, which bundles only the platform. It comes with neither the dashboard nor any dependent services.
To run the container, simply execute:
docker run -p 11901:11901 -p 11902:11902 mitterio/mitter:sa
The two ports that are exposed make the platform and the dashboard available on the host. If you map them to host ports other than the ones mentioned, you will have to tell the running container how to reach the API server externally. This is required for certain features of the dashboard to work (specifically the users and channels dev panel). To do so:
docker run -e API_ENDPOINT='http://localhost:1234' -p 1234:11901 -p 80:11902 mitterio/mitter:sa
Do note that if you do not wish to use the dashboard, or do not intend to use the users and channels dev panel, the step above is optional.
Once the container is running, you can access the dashboard on whichever host port you mapped to 11902 on the container. This might take a while.
NOTE Sometimes, even when the dashboard is available, you might see an error page for a while. This happens when the API server is not yet ready, as it takes longer than the dashboard to start up. If that is the case, refresh the page after some time.
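If you want to wait for the API server from a script rather than refreshing the page by hand, a simple poll loop works. This is a sketch only: it assumes the default port mapping shown above, and it merely checks that the server answers HTTP at all, not that any particular endpoint is healthy:

```shell
# Poll until the API server accepts HTTP requests, then report readiness.
# The URL assumes the default mapping (-p 11901:11901); substitute
# whichever host port you mapped to the container's port 11901.
until curl --silent --output /dev/null http://localhost:11901/; do
  echo "API server not ready yet, retrying in 5s..."
  sleep 5
done
echo "API server is up"
```

Note that curl without --fail treats any HTTP response (even a 404) as success here, which is what we want: the loop exits as soon as the server is accepting connections.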
On the dashboard you can continue to create applications, users, channels etc. as usual.
The same SDKs and APIs work with your docker container as with the cloud solution. All SDKs provide a way to override the API URL, which you must now point to the endpoint serving your docker container (i.e., the host port mapped to port 11901 on the container).
For specific information on the above, consult the individual SDK documentation.
The provided docker image contains the required VOLUME directives for data to be persisted across container runs. For instance, if you were to run a container:
docker run -p .... mitterio/mitter:sa
Assume the container id was 9ba5901f6b80; create users, send messages etc., and then stop the container and restart it:
docker stop 9ba5901f6b80
docker start 9ba5901f6b80
then all the data would be persisted between the two runs. However, the following data will no longer be available:
All user authorizations will no longer be available.
Any pending message deliveries on websockets will not be persisted and those delivery attempts will be marked as permanent failures on subsequent runs.
We are working on remedies for both of these issues. However, user authorizations can never be completely guaranteed to persist between container runs, as they are only periodically flushed to disk.
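To see where the anonymous volumes created by the image's VOLUME directives actually live, you can inspect the container's mounts. The container id below is the example id used above:

```shell
# List the volumes backing the example container, showing each
# volume's name and the path it is mounted at inside the container.
docker inspect \
  --format '{{ range .Mounts }}{{ .Name }} -> {{ .Destination }}{{ "\n" }}{{ end }}' \
  9ba5901f6b80
```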
If you wish to keep all data at a convenient location on the host rather than in docker's volume storage, map the following mountpoints:
/data/supernova - This contains all the data from PostgreSQL and Minio.
/logs/supernova - This contains all the logs generated by the API server, dashboard and the dependent services (Redis, PostgreSQL, Minio).
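Putting this together, a run that keeps data and logs under host directories might look like the following. The host paths /srv/mitter/data and /srv/mitter/logs are examples; choose any location you like:

```shell
# Bind-mount the container's data and log mountpoints to host
# directories, so PostgreSQL/Minio data and all service logs
# survive container removal and live at a known location.
docker run \
  -p 11901:11901 -p 11902:11902 \
  -v /srv/mitter/data:/data/supernova \
  -v /srv/mitter/logs:/logs/supernova \
  mitterio/mitter:sa
```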