MailRoute Hardware and Software Architecture
MailRoute employs the same proprietary network configuration that Microsoft purchased from us in 2005, and we continue to develop and upgrade our filtering, customer interface, and granular account controls. Contact email@example.com for more information.
MailRoute currently operates three datacenters: two in Los Angeles, CA, and one in Las Vegas, NV.
The overall architecture of each location is the same. Each runs the same hardware and software and can function fully in stand-alone mode. We balance traffic between the centers. MailRoute will be adding its first international datacenter in 2017, replicating our proven architecture there.
The datacenters are secure facilities with 24x7 monitoring and staff. Biometric checks are required for entry. Within the datacenters, all of our equipment is further segregated in locked racks and cages and inaccessible to any other tenant of the datacenter.
Global Load Balancing
The datacenters are globally load-balanced. Incoming traffic is directed to whichever datacenter is best able to handle it, based on the sender's location and each datacenter's performance, network health, and current load.
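The selection logic described above can be sketched as a composite score over per-datacenter metrics. This is a minimal illustration only; the metric names, weights, and site labels are assumptions, not MailRoute's actual algorithm.

```python
# Sketch of global load balancing: score each datacenter on health,
# current load, and latency to the client, then pick the best.
# All weights and field names below are illustrative assumptions.
def pick_datacenter(datacenters):
    """Return the healthy datacenter with the best composite score."""
    def score(dc):
        # Higher health is better; higher load and latency are worse.
        return dc["health"] - 0.5 * dc["load"] - 0.1 * dc["latency_ms"]
    healthy = [dc for dc in datacenters if dc["health"] > 0]
    return max(healthy, key=score)

# Hypothetical sites: two Los Angeles datacenters and one in Las Vegas.
sites = [
    {"name": "LAX1", "health": 1.0, "load": 0.4, "latency_ms": 12},
    {"name": "LAX2", "health": 1.0, "load": 0.9, "latency_ms": 12},
    {"name": "LAS1", "health": 1.0, "load": 0.2, "latency_ms": 25},
]
```

In this toy example the lightly loaded, nearby LAX1 site wins; a real global load balancer would also fail sites out entirely on network-health signals.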
Connectivity Within the Datacenter
Our cages and racks in datacenters have Internet feeds coming from two different providers, and each provider has a minimum of two fiber drops over two disparate paths. There are two primary routers, and each feed from each provider connects to each router for additional redundancy. Each router connects to multiple switches, and every switch is connected to both routers.
There are multiple load balancers in each rack or cage, and each is connected to a minimum of two switches. These load balancers direct incoming traffic to whichever server cluster has the best health and performance at the time, and then to the optimal server within that cluster.
We use a "Redundant Array of Independent Servers" model, as popularized (and taken to the extreme) by Google. Each datacenter has racks of servers. Each server has two network connections to two different switches, and each of the servers can maintain connectivity in the event that a router, a switch, a data path, or any other aspect of the network fails.
The servers are small but fast workhorses with 16 GB of RAM or more and SSD drives for buffering mail. All configuration settings are stored in a fully redundant database cluster. Every email received is immediately replicated to two servers, so that no mail is lost if a server dies while processing it.
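The replicate-before-acknowledge pattern described above can be sketched as follows. The storage interface here is a hypothetical stand-in, not MailRoute's actual storage layer; the point is that the sending server is only acknowledged once two copies exist.

```python
# Sketch of write-before-ack replication: an incoming message is not
# acknowledged until at least two replicas hold a copy, so a single
# server dying mid-delivery loses nothing.
class MailStore:
    """Toy replica: holds messages in memory; may be down."""
    def __init__(self, up=True):
        self.up = up
        self.messages = []

    def write(self, message):
        if not self.up:
            raise IOError("replica unavailable")
        self.messages.append(message)

def store_message(message, replicas, min_copies=2):
    """Write to replicas in turn; True once min_copies copies exist."""
    copies = 0
    for replica in replicas:
        try:
            replica.write(message)
            copies += 1
        except IOError:
            continue            # skip a dead replica; the mail is not lost
        if copies >= min_copies:
            return True         # safe to acknowledge the SMTP transaction
    return False                # too few copies: tempfail so the sender retries
```

If fewer than two copies can be written, the delivery is temp-failed rather than accepted, and the sending server's own retry logic preserves the message.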
We use a multi-layered approach to email filtering. Incoming connections are checked and denial-of-service attacks are blocked. IP addresses with very bad reputations (all spam, no ham) are logged and dropped. We use blacklists (internal lists, lists shared with other filtering companies, and some from outside providers) to help determine whether a connection is likely spam or malicious. Anything that passes goes on to "greylisting". Greylisting forces a sending server to prove it is legitimate: the first time it tries to connect and transfer an email to one of our users, it is asked to resend. If the sending server does so, it is marked as legitimate and not challenged again. This blocks email from bots, zombies, and certain types of spam-sending client software.
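The greylisting step above follows a well-known pattern: temp-fail the first delivery attempt for an unknown (IP, sender, recipient) triplet, then accept once the sender retries after a delay. The sketch below is a minimal in-memory version; the delay value, state storage, and function names are illustrative assumptions (real deployments share this state across servers).

```python
import time

# Hypothetical in-memory greylist keyed by the (IP, from, to) triplet.
GREYLIST_DELAY = 300   # seconds a new sender must wait before retrying
seen = {}              # triplet -> timestamp of first delivery attempt
verified = set()       # triplets that retried correctly and are now trusted

def greylist_check(ip, mail_from, rcpt_to, now=None):
    """Return 'accept' or 'tempfail' for an incoming delivery attempt."""
    now = time.time() if now is None else now
    triplet = (ip, mail_from, rcpt_to)
    if triplet in verified:
        return "accept"              # already proven legitimate
    first = seen.get(triplet)
    if first is None:
        seen[triplet] = now          # first attempt: ask the sender to retry
        return "tempfail"            # legitimate MTAs queue and resend
    if now - first >= GREYLIST_DELAY:
        verified.add(triplet)        # retried after the delay: legitimate
        return "accept"
    return "tempfail"                # retried too soon; keep waiting
```

Bots and zombies typically fire one attempt and never retry, so they never clear the temp-fail; real mail servers retry automatically and pass on the second attempt.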
Email that makes it past the IP checks and greylisting goes into the content filters. Attachments are parsed and recursively decoded, and all parts are run through a minimum of two anti-virus engines. Email content is scored, URLs and links in messages are checked, headers are verified, and the email ends up with an overall "SpamScore". Depending on that score and the mailbox's settings, the message is delivered, tagged and delivered, quarantined, or dropped.
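The score-to-action step can be illustrated as a simple threshold lookup against per-mailbox settings. The threshold names and values below are invented for illustration and are not MailRoute's actual defaults.

```python
# Sketch of mapping an overall SpamScore to a delivery action using
# per-mailbox settings. All setting names and values are hypothetical.
def route_message(spam_score, settings):
    """Return the action for a message given its SpamScore."""
    if spam_score >= settings["drop_threshold"]:
        return "drop"                # scored as certain spam
    if spam_score >= settings["quarantine_threshold"]:
        return "quarantine"          # held for user review
    if spam_score >= settings["tag_threshold"]:
        return "tag_and_deliver"     # delivered with a spam tag
    return "deliver"                 # clean mail

# Example per-mailbox settings (illustrative values only).
mailbox = {"tag_threshold": 5.0,
           "quarantine_threshold": 10.0,
           "drop_threshold": 20.0}
```

Because the thresholds live in mailbox settings, the same SpamScore can produce different outcomes for different users, matching the granular account controls described above.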
Email that is quarantined is stored on our local database clusters for a minimum of two weeks.