Child pages
  • Migrating nightly from nightly2 (IU) to AWS

DNS management

  • Change DNS name servers from University of Michigan (Neal) 11/14/2014

Initial setup (Mostly Neal, ideally completed by 11/03/2014)

  • Set up the AWS account (Neal)
  • Set up IAM (Identity and Access Management) to create the users and permissions for whoever will perform the other steps (Neal)
    • Identify users who will need an account
    • Could also make someone else an admin and have them assist with IAM management
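A rough sketch of the IAM step with the AWS CLI; the user name and the broad managed policy here are placeholders, not decisions:

```shell
# Sketch only: create an IAM user and give them API credentials.
# "nightly-admin" is a placeholder user name.
aws iam create-user --user-name nightly-admin

# Access key for scripted setup steps
aws iam create-access-key --user-name nightly-admin

# AdministratorAccess is the broad AWS-managed policy; a narrower
# custom policy would be better once the needed permissions are known.
aws iam attach-user-policy \
  --user-name nightly-admin \
  --policy-arn arn:aws:iam::aws:policy/AdministratorAccess
```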

Virtual Server setup (Someone from ANI?)

  • Start up an r3.large reserved instance (15.25 GB RAM) to host the servers and MySQL on.
  • Start up a db.m1.small with Oracle Database. I don't entirely know how this is launched, but it should be possible to figure out; Amazon offers license-included Oracle instances.
    • We can debate which OS; I prefer Ubuntu LTS, but whichever is probably fine
  • Set up the security group for this instance
  • Set up whatever networking is needed so these instances can communicate internally
  • Collect private keys and set up accounts as needed for future steps
  • Set up a public IP
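The server-setup steps above might look roughly like this with the AWS CLI; the AMI ID, key pair name, group name, and open ports are all placeholders:

```shell
# Sketch only: security group for the nightly servers.
aws ec2 create-security-group \
  --group-name nightly-servers \
  --description "Sakai nightly build servers"

# SSH plus a placeholder range of Tomcat ports for the instances
aws ec2 authorize-security-group-ingress \
  --group-name nightly-servers --protocol tcp --port 22 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress \
  --group-name nightly-servers --protocol tcp --port 8080-8085 --cidr 0.0.0.0/0

# Launch the app/MySQL host (ami-xxxxxxxx stands in for an Ubuntu LTS AMI)
aws ec2 run-instances \
  --image-id ami-xxxxxxxx \
  --instance-type r3.large \
  --key-name nightly-key \
  --security-groups nightly-servers

# Public IP: allocate an Elastic IP, then associate it once the
# instance ID is known
aws ec2 allocate-address
```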

Database Setup (Oracle) - (Not sure probably Matt)

  • Oracle should be installed automatically on the db.m1.small; make sure it's started up
  • We need to set up 1-2 accounts for use on the instances, each with its own tablespace
  • We need a script that can wipe out the tablespaces and start from scratch
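A sketch of what one account plus the wipe script could look like; the user, tablespace, and password are placeholders, and the exact syntax may need adjusting for RDS Oracle:

```sql
-- Sketch only: one nightly account with its own tablespace
-- (RDS Oracle uses Oracle Managed Files, so no datafile paths).
CREATE TABLESPACE nightly1_data DATAFILE SIZE 2G AUTOEXTEND ON;
CREATE USER nightly1 IDENTIFIED BY changeme
  DEFAULT TABLESPACE nightly1_data
  QUOTA UNLIMITED ON nightly1_data;
GRANT CONNECT, RESOURCE TO nightly1;

-- Wipe script: CASCADE drops all of the user's objects; rerun the
-- statements above afterwards to start from scratch.
DROP USER nightly1 CASCADE;
DROP TABLESPACE nightly1_data INCLUDING CONTENTS AND DATAFILES;
```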

Database Setup (Mysql/Maria) (Someone from ANI?)

  • Install MySQL/MariaDB on the other instance. I'd prefer MariaDB.
  • Set up 3-4 accounts for use on the instances.
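Each account would be something like the following (database name, user, and password are placeholders); repeat for each of the 3-4 accounts:

```sql
-- Sketch only: one database + account per nightly instance.
CREATE DATABASE trunk_nightly DEFAULT CHARACTER SET utf8;
CREATE USER 'trunk_nightly'@'localhost' IDENTIFIED BY 'changeme';
GRANT ALL PRIVILEGES ON trunk_nightly.* TO 'trunk_nightly'@'localhost';
FLUSH PRIVILEGES;
```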

Sakai Instance Setup (Longsight?)

  • Set up whatever is needed to get the nightly build scripts working. Around 2-2.5 GB of memory for each instance should be enough, which would allow us around 5 instances.
  • We'll at least want
    • trunk.nightly running on Oracle
    • trunk.nightly running on MySQL
    • trunk-experimental running on MySQL
    • maint-rel running on MySQL
    • rc-release running on MySQL
  • Find a way so we can still set the configuration remotely. The idea was to set up a GitHub repository and have the instances use that.
  • Set up some way to access the logs: either keep them on the local machines or copy them to S3
  • Set up Nexus as a repository backup
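If the logs go to S3, the copy step could be as simple as a cron job like this; the bucket name and log path are placeholders, and it assumes the instance has an IAM role (or stored credentials) allowing writes to the bucket:

```shell
# Sketch only: periodic copy of Tomcat logs to S3, one prefix per host.
# e.g. run hourly from /etc/cron.d
aws s3 sync /opt/tomcat/logs/ \
  "s3://sakai-nightly-logs/$(hostname)/" \
  --exclude "*" --include "catalina.*"
```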

These instances may be changed in the future. We may also want to run the Oracle instance on the Oracle box and increase it from an m1.small to an m1.medium. We'll have to see how the performance is.

Other setup (Neal to switch the domains, someone else to set up the load balancer)

  • Get the domain name pointed over to the primary load balancer for this cluster
  • Get a wildcard name (*) pointed over to the primary load balancer for this cluster
  • Set up the domain names going to the local machines on their specific ports. Earle said this can be done through the AWS interface.
  • Wildcard SSL certificate for the domain
  • Redirect HTTP to HTTPS
  • "Splash page" that lists information about the domains (hosted on S3?)
  • Switch the instances over to reserved to save on cost
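Pointing the wildcard name at the load balancer through the AWS interface could also be scripted with Route 53; the zone ID, domain, and ELB DNS name below are all placeholders:

```shell
# Sketch only: wildcard CNAME to the primary load balancer.
aws route53 change-resource-record-sets \
  --hosted-zone-id ZXXXXXXXXXXXXX \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "*.nightly.example.org.",
        "Type": "CNAME",
        "TTL": 300,
        "ResourceRecords": [{"Value": "my-elb-123.us-east-1.elb.amazonaws.com"}]
      }
    }]
  }'
```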





  1. As we're going to need the machines long term, is it worth getting reserved instances to save on cost?

  2. Yes, we are planning on getting a 1-year reserved instance to start.