Migrating from cloud to on-prem

To migrate from a cloud mitter.io account to an on-prem solution, you will need to follow roughly these steps:

  1. Host an instance of mitter on-prem using your deployment stack (in this walkthrough, we will use docker compose)

  2. Make your instances reachable from the outside world

  3. Migrate all the data from your cloud account to the on-prem account

  4. Replace the endpoints that your services/apps were using with the new endpoints you have configured

To get a reachable endpoint, you will most probably need a domain that points to wherever you have hosted mitter. A general overview of how mitter.io on docker works is provided on the mitter.io on docker page. Do note that the image used on that page is not suitable for production deployments, as it bundles all services in a single container, making replication and horizontal scaling that much more difficult.
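
For example, once you have created a DNS record for the domain you intend to use, you can confirm that it resolves to your server before pointing any clients at it (dig ships with the dnsutils/bind-utils package on most distributions; the hostname here is a placeholder):

dig +short mitter.your-host.com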

Using docker-compose to host your mitter instance

To get started, install docker compose from https://docs.docker.com/compose/
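
Once installed, you can confirm that docker compose is available; depending on whether you installed the standalone binary or the CLI plugin, one of the following should print a version:

docker-compose --version
docker compose version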

The following is a sample configuration showing how you can host all mitter services with docker compose:

version: '3.4'

x-pgsql-common-variables: &pgsql-common-variables
  POSTGRES_USER: <pg_user>
  POSTGRES_PASSWORD: <pg_password>
  POSTGRES_DB: mitter-io-central
  PGSQL_HOST: <pg_host>
  PGSQL_PORT: 5432

x-rabbitmq-common-variables: &rabbitmq-common-variables
  RABBITMQ_DEFAULT_USER: <rabbitmq_user>
  RABBITMQ_DEFAULT_PASS: <rabbitmq_password>
  RABBITMQ_HOST: rabbit-mq
  RABBITMQ_PORT: 5672

x-minio-common-variables: &minio-common-variables
  MINIO_ACCESS_KEY: <minio_user>
  MINIO_SECRET_KEY: <minio_secret_key>
  MINIO_MEDIA_STORE_BUCKET: <media-store-bucket>

# Sample logging configuration referenced by the services below;
# adjust the driver and rotation options to suit your environment.
x-mitter-logging-config: &mitter-logging-config
  driver: json-file
  options:
    max-size: "50m"
    max-file: "10"

services:
  rabbit-mq:
    image: rabbitmq:3.7.17-management
    restart: always
    logging: 
      <<: *mitter-logging-config
    environment:
      <<: *rabbitmq-common-variables
      RABBITMQ_VM_MEMORY_HIGH_WATERMARK: 0.85
    ports:
      - "15672"
    networks:
      - mitter-subnet

  redis:
    image: 'redis:4-alpine'
    command: 'redis-server --appendonly yes --appendfsync everysec --save "60 0"'
    restart: always
    ports:
      - "6379"
    volumes:
      - ./.data/redis-data:/data
    networks:
      - mitter-subnet

  minio:
    image: 'minio/minio:RELEASE.2019-08-07T01-59-21Z'
    command: 'server /data'
    restart: always
    ports:
      - "9000"
    environment: *minio-common-variables
    volumes:
      - ./.data/minio-data:/data
    networks:
      - mitter-subnet

  weaver:
    image: mitterio/weaver:dc-rc1-JU1120
    restart: always
    labels:
      autoheal: "true"
    healthcheck:
      test: ["CMD", "wget", "-O", "-", "http://localhost:7180/health"]
      start_period: 40s
      timeout: 5s
      interval: 20s
      retries: 3
    environment:
      <<: [*pgsql-common-variables, *rabbitmq-common-variables]
      WEAVER_ARGS: --config-file=/config/weaver.application.conf
      INIT_WAIT_TIME: 10
    depends_on:
      - rabbit-mq
    ports:
      - "7180:7180"
      - "11951:11951"
    volumes:
      - ./config:/config
      - ./javaagent:/javaagent
    networks:
      - mitter-subnet

  mitter:
    image: mitterio/platform:dc-rc1-JU1120
    restart: always
    logging: 
      <<: *mitter-logging-config
    labels:
      autoheal: "true"
    healthcheck:
      test: ["CMD", "wget", "-O", "-", "http://localhost:11901/health"]
      start_period: 1m10s
      timeout: 5s
      interval: 20s
      retries: 3
    depends_on:
      - weaver
      - redis
      - minio
    environment:
      <<: [*pgsql-common-variables, *minio-common-variables]
      CENTRAL_APPLICATION_CONFIG: |
        {
          "mitter.security.jwt.signing-key": "<your-signing-key>",
          "mitter.security.jwt.issuer": "<issuer-name">",
          "mitter.central.cache.application-resolution.expireAfterCreate": "1",
          "mitter.central.cache.application-resolution.expireAfterUpdate": "1",
          "mitter.central.cache.application-resolution.expireAfterRead": "400",
          "mitter.central.cache.channel-resolution.expireAfterCreate": "1",
          "mitter.central.cache.channel-resolution.expireAfterUpdate": "1",
          "mitter.central.cache.channel-resolution.expireAfterRead": "400",
          "mitter.central.cache.user-resolution.expireAfterCreate": "1",
          "mitter.central.cache.user-resolution.expireAfterUpdate": "1",
          "mitter.central.cache.user-resolution.expireAfterRead": "400",
          "mitter.central.cache.counts.expireAfterCreate": "1",
          "mitter.central.cache.counts.expireAfterUpdate": "1",
          "mitter.central.cache.counts.expireAfterRead": "1",
          "mitter.services.internal.weaver.enabled": "true",
          "mitter.services.internal.weaver.internal-uri": "http://weaver:7181",
          "mitter.central.database-url": "postgres://$${POSTGRES_USER}:$${POSTGRES_PASSWORD}@$${PGSQL_HOST}:$${PGSQL_PORT}/$${POSTGRES_DB}",
          "mitter.plugins.directory": "/plugins",
          "mitter.media.store.minio.uri": "http://minio:9000",
          "mitter.security.token-issuance.redis-uri": "redis:6379:0",
          "mitter.security.contexts.user-jwt-cookie.enabled": "true",
          "spring.servlet.multipart.max-file-size": "6MB",
          "spring.servlet.multipart.max-request-size": "7000KB",
          "logging.level.io.mitter.commons.spring.RequestBeanManagement": "ERROR",
          "logging.level.io.mitter.security.manager.PrincipalManager": "ERROR",
          "logging.level.acl-execution": "WARN",
          "logging.level.io.mitter.auth.context": "WARN",
          "logging.level.io.mitter.auth.resolvers": "WARN",
          "logging.level.io.mitter.security.support.WebPrincipalResolutionState": "WARN",
          "logging.level.io.mitter.auth.filters.JwtProcessingFilter": "WARN",
          "mitter.central.skip-request-logging": "^OPTIONS:.*$$,^GET:.*/presence.*$$,^GET:.*/counts/.*$$,^GET:.*/users/me$$",
          "mitter.central.cors-allowed-origins": "*",
          "server.tomcat.max-threads": "400",
          "spring.datasource.hikari.maximumPoolSize": "10",
          "mitter.central.outflow": "false",
          "mitter.security.token-issuance.users.expiry-time": "2112912000",
          "mitter.security.token-issuance.users.maximum-tokens": "3000"
        }
      DASHBOARD_JVM_ARGS: |
        -Dmitter.security.auth.basic-auth.enabled=true
        -Dmitter.security.auth.basic-auth.username=<dashboard-user>
        -Dmitter.security.auth.basic-auth.password=<dashboard-password>
        -Dmitter.sui.base-uri=https://mitter-sui.your-host.com
    volumes:
      - ./plugins:/plugins
      - ./javaagent:/javaagent
    ports:
      - "11901:11901"
      - "11902:11902"
      - "11950:11950"
    networks:
      - mitter-subnet

networks:
  mitter-subnet:
    ipam:
      driver: default
      config:
        - subnet: 172.24.0.0/22
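
Assuming you save the configuration above as docker-compose.yml (with the placeholder values filled in and adapted to your environment), a typical way to bring the stack up and confirm that the services are running looks like this:

docker-compose up -d
docker-compose ps

The /health endpoints used by the container healthchecks can also be queried directly from the host:

curl -s http://localhost:11901/health
curl -s http://localhost:7180/health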

A few things to note in the configuration above:

  1. The configuration is not expected to work by just copy-pasting the file as-is; it is provided as a reference for your own implementation.

  2. In the example above, it is assumed that you are hosting your postgres database separately. For servicing real-world loads, we would recommend a managed postgres service such as Google Cloud SQL or AWS RDS. If you want to run your own database, a docker container similar to the following will work:

    1. docker run --name mitter-postgres-db -e POSTGRES_PASSWORD=<password> -e POSTGRES_USER=<user> -e POSTGRES_DB=mitter-io-central -p 35332:5432 -d postgres:9.6.6-alpine

  3. Make sure you edit the username and password values for all running services in the x-*-common-variables blocks at the top of the file. They are then referenced in the CENTRAL_APPLICATION_CONFIG section of the configuration.

  4. If you wish to continue running the mitter container with the dashboard enabled, then you should ideally not expose the dashboard port (in this example 11902) to the outside world or the public internet. We would strongly recommend using an SSH tunnel or a similar proxy mechanism to access the dashboard (see the example after this list). In supernova installs, the dashboard runs without authentication (as you would have experienced on your cloud-hosted account). If you do wish to access it via a public address, you can enable basic auth with a username/password specified via the DASHBOARD_JVM_ARGS properties (mitter.security.auth.basic-auth.username / mitter.security.auth.basic-auth.password) in the snippet above.

  5. Please do review the mount point of the /data volume (./.data/minio-data in this docker compose installation). All media, images, etc. are stored at this location, so you will need to ensure adequate capacity for your needs.
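
As an example of the tunnelling approach mentioned in point 4 above, you could forward the dashboard port from the machine running docker compose to your workstation over SSH and then open http://localhost:11902 locally, without ever exposing the port publicly (the user and host below are placeholders for your own server):

ssh -N -L 11902:localhost:11902 <user>@<your-docker-host>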

Making your instance reachable

While this will differ greatly from installation to installation, the following is a sample of how a machine running the docker compose setup shown above can use nginx to make these services accessible:

server {
    client_max_body_size 10M;
    server_name mitter.your-host.com;

    location / {
        proxy_pass http://localhost:11901;
    }

    listen [::]:443 ssl;
    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/<ssl-cert-directory>/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/<ssl-cert-directory>/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot

}

server {
    server_name mitter-dashboard.your-host.com;

    location / {
        proxy_pass http://localhost:11902;
    }

    listen [::]:443 ssl;
    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/<ssl-cert-directory>/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/<ssl-cert-directory>/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot

}

server {
    server_name mitter-weaver.your-host.com;

    location / {
        proxy_pass http://localhost:7180;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_read_timeout 86400;
    }

    listen [::]:443 ssl;
    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/<ssl-cert-directory>/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/<ssl-cert-directory>/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot

}

server {
    if ($host = mitter.your-host.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot


    if ($host = mitter-dashboard.your-host.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    if ($host = mitter-weaver.your-host.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot


    listen 80 default_server;
    listen [::]:80 default_server;

    server_name mitter-weaver.your-host.com mitter.your-host.com mitter-dashboard.your-host.com;
    return 404; # managed by Certbot
}
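
The certificate and redirect lines marked "managed by Certbot" above are what Certbot writes when it manages these server blocks. If you are provisioning the certificates yourself with Certbot's nginx plugin, a command along the following lines (adjusted to your actual domains) would obtain them. You can then verify that the proxied services respond over HTTPS; the /health path below is the same endpoint used by the container healthchecks:

sudo certbot --nginx -d mitter.your-host.com -d mitter-dashboard.your-host.com -d mitter-weaver.your-host.com
curl -s https://mitter.your-host.com/health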

Migrating your data

By now, you should have received your entire data export from your cloud account via email. If you haven't, please reach out to support@mitter.io. To get started on the data migration, you will need three things:

  1. An installation of psql on some machine that can reach your postgres instance. This psql CLI must be compatible with Postgres 9.6.

  2. The data backup from your cloud account

  3. The mitter-pre-import.sql and mitter-post-import.sql scripts used in the commands below

Once you have these three things ready, start by applying the pre-import script:

psql -U <user> -h <host> -p <port> -d mitter-io-central -f mitter-pre-import.sql
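
Optionally, you can snapshot the target database at this point (or before running the pre-import script) so that a failed import can be retried against a known state; a minimal sketch using pg_dump with the same connection parameters:

pg_dump -U <user> -h <host> -p <port> -d mitter-io-central -F c -f pre-migration-backup.dump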

Then, extract your data archive and import that:

tar -xvzf <your-email>.sql.tar.gz
psql -U <user> -h <host> -p <port> -d mitter-io-central -f <your-email>.sql

Then finally, apply the post-import script:

psql -U <user> -h <host> -p <port> -d mitter-io-central -f mitter-post-import.sql
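
Once the post-import script has run, a quick sanity check is to connect to the database and list the imported tables (the exact table names will depend on the mitter schema, so this is only a spot check):

psql -U <user> -h <host> -p <port> -d mitter-io-central -c '\dt'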

Your data has already been transformed to run with mitter supernova builds, and all of your application ids, application keys, settings, user ids and message ids are preserved. All of your existing apps and APIs will continue to work as long as the API endpoints are modified. However, user login information will not be persisted, and all of your users will have to fetch new authentication tokens. This is because user tokens are signed with mitter's private key at issue time, and that data can no longer continue to function without us distributing mitter's private key itself. (This is the 'signing-key' part of the configuration specified in the docker-compose file.)

Migrating your apps

To migrate your apps, all you will need to do is:

  1. Modify the endpoint to which your API calls are made (i.e. you will have to change it from https://api.mitter.io to https://mitter.your-host.com or a similar domain name you have chosen to host these services on). A simple illustration follows this list.

  2. Do note that you will have to use some form of web server or load balancer to expose your docker containers to the public internet via a domain name. It is not recommended to directly expose your docker ports from your machines.

  3. For apps, you will require all of your users to re-login and fetch new tokens. If they have registered any prior FCM delivery endpoints, they will continue to work without any changes from your side.

  4. Depending on which SDK you are using, please refer to the SDK's individual page for instructions on how to specify a different endpoint.
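
As a trivial illustration of the endpoint change from point 1 above: if your app reads its mitter base URL from configuration, the only value that changes is the host. The variable name below is purely hypothetical; use whatever configuration mechanism your app and SDK actually provide.

# Before (cloud):   https://api.mitter.io
# After (on-prem):  the domain you configured in nginx above
export MITTER_API_BASE_URL=https://mitter.your-host.com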

Troubleshooting

If any of the containers refuse to start, or are not working as expected, you can check their logs to troubleshoot. To get a list of running containers, you can run:

docker ps -a

If you used a docker compose file similar to the one above, you should see container names that begin with mitter_. Next to each container you should also be able to see whether it is in a healthy or unhealthy state. To check the logs, you can simply use:

docker logs --follow <container-name>
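
If a container is reported as unhealthy, you can also inspect the output of its most recent healthchecks directly; the format string below simply extracts the health section of the container state:

docker inspect --format '{{json .State.Health}}' <container-name>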
