What we learned when we rebuilt our own data centre

When we rebuilt the Compugen data centre at our new headquarters, we had a chance to put our own solutions into practice.

We have seen many advances in data centre technology in recent years – network convergence, virtualization, blade infrastructure, multi-core processing, SANs, IP telephony, Fibre Channel over Ethernet, cloud computing and more.

With so many exciting new choices, however, there is always a risk of getting caught up in the latest and greatest and the quest to be on the leading edge. All too often, that quest makes it harder to maintain and upgrade your data centre infrastructure in line with a best-practices architecture – for example, preserving enough redundancy to support automatic failover and ensure smooth, uninterrupted operations.

Such was the case at Compugen, where we had outgrown the power and cooling redundancies originally built into our data centre, leaving us exposed to downtime from equipment failures – mainly because the room no longer had the space to add the necessary protection. Notably, many in the industry consider weaknesses in a data centre's power and cooling infrastructure to be a leading root cause of downtime.

When Compugen built its new head office in Richmond Hill, however, we had the opportunity to design and build a new data centre from scratch, with all the redundancies needed to mitigate problems arising from facility equipment failure – virtually eliminating the risk of service delivery outages. Examples include:

Redundant Cisco Nexus 7000 Series switches provide fast, automated failover for the internal LAN supporting the roughly 600 users occupying the new building. We also have redundant links – one hardwired fibre and one fixed wireless – to the Allstream MPLS cloud that serves as the WAN connecting Compugen’s 15 locations across Canada, plus a second pair of redundant links to Allstream, again one wired and one fixed wireless, for Internet connectivity.
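The wired-plus-wireless pairing above follows a simple active/backup pattern: prefer the fibre path, and fall back to fixed wireless when it fails. A minimal sketch of that selection logic – the link names and health probe here are illustrative, not Compugen's actual monitoring:

```python
# Sketch of active/backup link selection: prefer the wired fibre path,
# fail over to the fixed wireless path when the fibre link is down.

def select_link(links, link_is_up):
    """Return the first healthy link in priority order, or None if all are down."""
    for link in links:
        if link_is_up(link):
            return link
    return None

# Hypothetical link names; priority order puts fibre first.
wan_links = ["fibre-mpls", "wireless-mpls"]

# Simulated health states for illustration: the fibre link has failed.
status = {"fibre-mpls": False, "wireless-mpls": True}
active = select_link(wan_links, lambda link: status[link])  # -> "wireless-mpls"
```

In practice this decision is made by routing protocols on the redundant switches rather than a script, but the priority-ordered fallback is the same idea.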

Two redundant servers support each critical component of our unified communications environment, including Cisco Call Manager, Unity Server and Contact Centre, as well as Microsoft Exchange.

We previously had a central UPS plus redundant additional UPSs installed in certain critical racks. If the central UPS failed, however, the remaining UPSs were sometimes unable to handle the power load. Now, every cabinet/rack in the data centre is powered by two separate UPS units – nothing is powered directly from the wall – and primary power is backed up by a diesel generator, for which we have guaranteed service and fuel supply contracts.
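The dual-feed arrangement only works if either UPS path can carry a rack's full load on its own (a 2N design) – otherwise a single UPS failure recreates the old overload problem. A quick illustrative check, with made-up load and capacity figures:

```python
# Sketch of the 2N power rule described above: every rack draws from two
# independent UPS feeds, and either feed alone must carry the full rack load.

def dual_feed_ok(rack_load_kw, feed_capacity_kw):
    """True if one UPS feed can power the whole rack by itself (2N)."""
    return feed_capacity_kw >= rack_load_kw

# Hypothetical per-rack loads (kW), each fed by two 5 kW UPS paths.
rack_loads = [3.2, 4.5, 2.8]
all_racks_ok = all(dual_feed_ok(load, 5.0) for load in rack_loads)  # True
```

The same headroom reasoning applies one level up: the diesel generator must be sized for the total facility load, not just the IT load.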

In spite of the built-in redundancy of two cooling units, our IT equipment had grown to the point where a single unit could no longer handle the load if the other failed. As a result, the data centre would overheat, causing periodic downtime. In the new facility, we installed three separate AC units so that if one unit fails, the remaining two carry the full load.
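The difference between the old and new cooling designs is the N+1 rule: the system must survive the loss of any one unit with the remaining units still covering the full heat load. A small sketch with assumed capacities and load (the kW figures are illustrative, not our actual numbers):

```python
# N+1 cooling check: after losing the largest single unit, the remaining
# units must still cover the total heat load of the room.

def n_plus_1_ok(unit_capacities_kw, heat_load_kw):
    """True if losing any one unit still leaves enough cooling capacity."""
    return sum(unit_capacities_kw) - max(unit_capacities_kw) >= heat_load_kw

# Old design: two units; losing one leaves a single unit short of the load.
old_ok = n_plus_1_ok([30, 30], 40)       # False: 30 kW remaining < 40 kW load
# New design: three units; losing any one still leaves enough capacity.
new_ok = n_plus_1_ok([25, 25, 25], 40)   # True: 50 kW remaining >= 40 kW load
```

Removing the largest unit is the conservative test: if the check passes for the worst single failure, it passes for any single failure.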

These enhancements ensure uptime and reliable operations and provide a stable platform to support our continued growth. Further to this, we are looking at establishing a secondary, disaster recovery data centre site, either at an existing Compugen branch location or outsourced to a third-party provider – the jury is still out.
