Migrating from cloud to on-prem
To migrate from a cloud mitter.io account to an on-prem deployment, you will broadly need to follow these steps:
    1. Host an instance of mitter on-prem using your deployment stack (this walkthrough uses Docker Compose)
    2. Make your instances reachable from the outside world
    3. Migrate all the data from your cloud account to the on-prem account
    4. Replace all endpoints that your services/apps were using with the new endpoints you have configured
To get a reachable endpoint, you'll most probably need a domain that points to wherever you have hosted mitter. A general overview of how mitter.io on docker works is provided on the mitter.io on docker page. Do note that the image used on that page is not suitable for production deployments, as it bundles all services in a single container, making replication and horizontal scaling that much more difficult.
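Pointing a domain at your host typically comes down to a couple of DNS records. As a sketch, a zone-file fragment might look like the following; the hostnames and the documentation IP 203.0.113.10 are placeholders, not values prescribed by this guide:

```
; Hypothetical A records pointing the chosen hostnames at the machine
; (or load balancer) running your mitter services
mitter.your-host.com.            IN  A  203.0.113.10
mitter-dashboard.your-host.com.  IN  A  203.0.113.10
mitter-weaver.your-host.com.     IN  A  203.0.113.10
```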

Using docker-compose to host your mitter instance

To get started, install Docker Compose by following the instructions at https://docs.docker.com/compose/
The following is a sample configuration showing how you can host all mitter services with Docker Compose:
```yaml
version: '3.4'

x-pgsql-common-variables: &pgsql-common-variables
  POSTGRES_USER: <pg_user>
  POSTGRES_PASSWORD: <pg_password>
  POSTGRES_DB: mitter-io-central
  PGSQL_HOST: <pg_host>
  PGSQL_PORT: 5432

x-rabbitmq-common-variables: &rabbitmq-common-variables
  RABBITMQ_DEFAULT_USER: <rabbitmq_user>
  RABBITMQ_DEFAULT_PASS: <rabbitmq_password>
  RABBITMQ_HOST: rabbit-mq
  RABBITMQ_PORT: 5672

x-minio-common-variables: &minio-common-variables
  MINIO_ACCESS_KEY: <minio_user>
  MINIO_SECRET_KEY: <minio_secret_key>
  MINIO_MEDIA_STORE_BUCKET: equities-media

# The services below reference this logging anchor; a minimal json-file
# logging policy is assumed here so the file parses. Adjust to taste.
x-mitter-logging-config: &mitter-logging-config
  driver: json-file
  options:
    max-size: "10m"
    max-file: "5"

services:
  rabbit-mq:
    image: rabbitmq:3.7.17-management
    restart: always
    logging:
      <<: *mitter-logging-config
    environment:
      <<: *rabbitmq-common-variables
      RABBITMQ_VM_MEMORY_HIGH_WATERMARK: 0.85
    ports:
      - "15672"
    networks:
      - mitter-subnet

  redis:
    image: 'redis:4-alpine'
    command: 'redis-server --appendonly yes --appendfsync everysec --save "60 0"'
    restart: always
    ports:
      - "6379"
    volumes:
      - ./.data/redis-data:/data
    networks:
      - mitter-subnet

  minio:
    image: 'minio/minio:RELEASE.2019-08-07T01-59-21Z'
    command: 'server /data'
    restart: always
    ports:
      - "9000"
    environment: *minio-common-variables
    volumes:
      - ./.data/minio-data:/data
    networks:
      - mitter-subnet

  weaver:
    image: mitterio/weaver:dc-rc1-JU1120
    restart: always
    labels:
      autoheal: "true"
    healthcheck:
      test: ["CMD", "wget", "-O", "-", "http://localhost:7180/health"]
      start_period: 40s
      timeout: 5s
      interval: 20s
      retries: 3
    environment:
      <<: [*pgsql-common-variables, *rabbitmq-common-variables]
      WEAVER_ARGS: --config-file=/config/weaver.application.conf
      INIT_WAIT_TIME: 10
    depends_on:
      - rabbit-mq
    ports:
      - "7180:7180"
      - "11951:11951"
    volumes:
      - ./config:/config
      - ./javaagent:/javaagent
    networks:
      - mitter-subnet

  mitter:
    image: mitterio/platform:dc-rc1-JU1120
    restart: always
    logging:
      <<: *mitter-logging-config
    labels:
      autoheal: "true"
    healthcheck:
      test: ["CMD", "wget", "-O", "-", "http://localhost:11901/health"]
      start_period: 1m10s
      timeout: 5s
      interval: 20s
      retries: 3
    depends_on:
      - weaver
      - redis
      - minio
    environment:
      <<: [*pgsql-common-variables, *minio-common-variables]
      CENTRAL_APPLICATION_CONFIG: |
        {
          "mitter.security.jwt.signing-key": "<your-signing-key>",
          "mitter.security.jwt.issuer": "<issuer-name>",
          "mitter.central.cache.application-resolution.expireAfterCreate": "1",
          "mitter.central.cache.application-resolution.expireAfterUpdate": "1",
          "mitter.central.cache.application-resolution.expireAfterRead": "400",
          "mitter.central.cache.channel-resolution.expireAfterCreate": "1",
          "mitter.central.cache.channel-resolution.expireAfterUpdate": "1",
          "mitter.central.cache.channel-resolution.expireAfterRead": "400",
          "mitter.central.cache.user-resolution.expireAfterCreate": "1",
          "mitter.central.cache.user-resolution.expireAfterUpdate": "1",
          "mitter.central.cache.user-resolution.expireAfterRead": "400",
          "mitter.central.cache.counts.expireAfterCreate": "1",
          "mitter.central.cache.counts.expireAfterUpdate": "1",
          "mitter.central.cache.counts.expireAfterRead": "1",
          "mitter.services.internal.weaver.enabled": "true",
          "mitter.services.internal.weaver.internal-uri": "http://weaver:7181",
          "mitter.central.database-url": "postgres://${POSTGRES_USER}:${POSTGRES_PASSWORD}@${PGSQL_HOST}:${PGSQL_PORT}/${POSTGRES_DB}",
          "mitter.plugins.directory": "/plugins",
          "mitter.media.store.minio.uri": "http://minio:9000",
          "mitter.security.token-issuance.redis-uri": "redis:6379:0",
          "mitter.security.contexts.user-jwt-cookie.enabled": "true",
          "spring.servlet.multipart.max-file-size": "6MB",
          "spring.servlet.multipart.max-request-size": "7000KB",
          "logging.level.io.mitter.commons.spring.RequestBeanManagement": "ERROR",
          "logging.level.io.mitter.security.manager.PrincipalManager": "ERROR",
          "logging.level.acl-execution": "WARN",
          "logging.level.io.mitter.auth.context": "WARN",
          "logging.level.io.mitter.auth.resolvers": "WARN",
          "logging.level.io.mitter.security.support.WebPrincipalResolutionState": "WARN",
          "logging.level.io.mitter.auth.filters.JwtProcessingFilter": "WARN",
          "mitter.central.skip-request-logging": "^OPTIONS:.*$,^GET:.*/presence.*$,^GET:.*/counts/.*$,^GET:.*/users/me$",
          "mitter.central.cors-allowed-origins": "*",
          "server.tomcat.max-threads": "400",
          "spring.datasource.hikari.maximumPoolSize": "10",
          "mitter.central.outflow": "false",
          "mitter.security.token-issuance.users.expiry-time": "2112912000",
          "mitter.security.token-issuance.users.maximum-tokens": "3000"
        }
      DASHBOARD_JVM_ARGS: |
        -Dmitter.security.auth.basic-auth.enabled=true
        -Dmitter.security.auth.basic-auth.username=<dashboard-user>
        -Dmitter.security.auth.basic-auth.password=<dashboard-password>
        -Dmitter.sui.base-uri=https://mitter-sui-nyc1-a0.equities.chat
    volumes:
      - ./plugins:/plugins
      - ./javaagent:/javaagent
    ports:
      - "11901:11901"
      - "11902:11902"
      - "11950:11950"
    networks:
      - mitter-subnet

networks:
  mitter-subnet:
    ipam:
      driver: default
      config:
        - subnet: 172.24.0.0/22
```
A few things to note in the configuration above:
    1. The configuration is not expected to work by copy-pasting the file as-is. It is there as a reference for your own implementation.
    2. The example above assumes that you are hosting your postgres database separately. For servicing real-world loads, we would recommend using Cloud SQL, AWS RDS or a similar managed postgres service. If you want to run your own db, a docker container similar to the following will work:
       docker run --name mitter-postgres-db -e POSTGRES_PASSWORD=<password> -e POSTGRES_USER=<user> -e POSTGRES_DB=mitter-io-central -p 35332:5432 -d postgres:9.6.6-alpine
    3. Make sure you edit the username and password values for all running services in the first three sections of the file. They are then referenced in the CENTRAL_APPLICATION_CONFIG section of the configuration.
    4. If you wish to keep running the mitter container with the dashboard enabled, you should ideally not expose the dashboard port (11902 in this example) to the outside world or the public internet. We would strongly recommend using an ssh tunnel or a similar proxy mechanism to access the dashboard. In supernova installs, the dashboard runs without authentication (as you would have expected on your cloud hosted account). If you do wish to access it over a public address, you can enable basic auth with a username/password specified via the -Dmitter.security.auth.basic-auth.username and -Dmitter.security.auth.basic-auth.password properties under DASHBOARD_JVM_ARGS in the snippet above.
    5. Please do review the mount point of the /data volume in this docker compose installation. All media/images etc. get stored at this location - you will need to ensure adequate capacity as per your needs.
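To see how the anchors in the first three sections feed CENTRAL_APPLICATION_CONFIG, here is a small sketch of the substitution that happens for mitter.central.database-url once the environment variables are injected into the container. The values mitter, secret and db.internal are made-up stand-ins for the <pg_user>, <pg_password> and <pg_host> placeholders:

```shell
# Stand-in values for the placeholders in the x-pgsql-common-variables anchor
POSTGRES_USER=mitter
POSTGRES_PASSWORD=secret
PGSQL_HOST=db.internal
PGSQL_PORT=5432
POSTGRES_DB=mitter-io-central

# The URL that mitter.central.database-url resolves to inside the container
echo "postgres://${POSTGRES_USER}:${POSTGRES_PASSWORD}@${PGSQL_HOST}:${PGSQL_PORT}/${POSTGRES_DB}"
```

If the expanded URL does not match your actual database credentials and host, the mitter container will fail its health check on startup, so this is a useful first thing to verify.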

Making your instance reachable

While this will differ greatly from installation to installation, here is a sample of how a machine running the docker compose file shown above can use nginx to make these services accessible:
```nginx
server {
    client_max_body_size 10M;
    server_name mitter.your-host.com;

    location / {
        proxy_pass http://localhost:11901;
    }

    listen [::]:443 ssl;
    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/<ssl-cert-directory>/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/<ssl-cert-directory>/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}

server {
    server_name mitter-dashboard.your-host.com;

    location / {
        proxy_pass http://localhost:11902;
    }

    listen [::]:443 ssl;
    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/<ssl-cert-directory>/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/<ssl-cert-directory>/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}

server {
    server_name mitter-weaver.your-host.com;

    location / {
        proxy_pass http://localhost:7180;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_read_timeout 86400;
    }

    listen [::]:443 ssl;
    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/<ssl-cert-directory>/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/<ssl-cert-directory>/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}

server {
    if ($host = mitter.your-host.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    if ($host = mitter-dashboard.your-host.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    if ($host = mitter-weaver.your-host.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    listen 80 default_server;
    listen [::]:80 default_server;

    server_name mitter-weaver.your-host.com mitter.your-host.com mitter-dashboard.your-host.com;
    return 404; # managed by Certbot
}
```

Migrating your data

By now, you should have received your entire data export from your cloud account over email. If you haven't, please reach out to [email protected]. To get started on the data migration, you will need three things:
    1. An installation of psql on a machine that can reach your postgres instance. The psql cli must be compatible with Postgres 9.6.
    2. The data backup from your cloud account
    3. The pre- and post-import scripts, available here: https://gist.github.com/rohanprabhu-jm/bddeed8cdcc7cebb8fbea7fffcb866c6
Once you have these three things ready, start by applying the pre-import script:

```shell
psql -U <user> -h <host> -p <port> -d mitter-io-central -f mitter-pre-import.sql
```

Then, extract your data archive and import it:

```shell
tar -xvzf <your-email>.sql.tar.gz
psql -U <user> -h <host> -p <port> -d mitter-io-central -f <your-email>.sql
```

Then finally, apply the post-import script:

```shell
psql -U <user> -h <host> -p <port> -d mitter-io-central -f mitter-post-import.sql
```
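The three invocations above must run in exactly this order: pre-import script, data dump, post-import script. A minimal dry-run sketch of that sequencing, where the hypothetical run_psql function echoes the command line instead of executing it (swap the echo for the real psql invocation, with your connection details, once you are ready):

```shell
# Hypothetical wrapper; replace the echo with:
#   psql -U <user> -h <host> -p <port> -d mitter-io-central -f "$1"
run_psql() {
  echo "psql -d mitter-io-central -f $1"
}

run_psql mitter-pre-import.sql    # 1. prepare the schema for import
run_psql your-email.sql           # 2. the extracted dump from your archive
run_psql mitter-post-import.sql   # 3. finalize constraints/sequences
```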
Your data has already been transformed to run with mitter supernova builds, and all of your application ids, application keys, settings, user ids and message ids are preserved. All of your existing apps and APIs will continue to work once the API endpoints are updated. However, user login information is not persisted, and all of your users will have to fetch new authentication tokens. This is because user tokens are signed with mitter's private key at issuance, and that data can no longer function without us distributing mitter's private key itself. (This is the 'signing-key' part of the configuration specified in the docker-compose file.)

Migrating your apps

To migrate your apps, all you will need to do is:
    1. Modify the endpoint to which your API calls are made (i.e. change it from https://api.mitter.io to https://mitter.your-host.com or whichever domain name you have chosen to host these services on)
    2. Do note that you will have to use some form of web server or load balancer to expose your docker containers to the public internet via a domain name. Directly exposing your docker ports from your machines is not recommended.
    3. For apps, all of your users will need to re-login and fetch new tokens. Any FCM delivery endpoints they registered earlier will continue to work without any changes from your side.
    4. Depending on which SDK you are using, please refer to that SDK's individual page for instructions on how to specify a different endpoint.
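As a sketch of the endpoint change in step 1: if an app keeps its base URI in an env-style config file (the file name app.env and the variable MITTER_BASE_URL are hypothetical, not part of any SDK), the swap is a one-line substitution:

```shell
# Hypothetical app config still pointing at the cloud endpoint
echo 'MITTER_BASE_URL=https://api.mitter.io' > app.env

# Point the app at the self-hosted deployment instead
sed -i 's|https://api.mitter.io|https://mitter.your-host.com|' app.env

cat app.env   # MITTER_BASE_URL=https://mitter.your-host.com
```

For SDK-based apps the same change is usually a constructor parameter rather than a config file; see the SDK pages referenced above.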

Troubleshooting

If any of the containers refuse to start, or are not working as expected, you can check their logs to troubleshoot. To get a list of containers, run:

```shell
docker ps -a
```

If you used a docker compose file similar to the one above, you should see container names that begin with mitter_. Against these containers you will also be able to see whether each one is in a healthy or an unhealthy state. To check the logs, simply use:

```shell
docker logs --follow <container-name>
```