Colocation is the renting of space in a data center facility for the purpose of housing computing equipment, often referred to as servers.
A longer explanation: many of the applications you use on your computer, tablet or smartphone are made possible by computers around the world that respond to your queries on demand. As we’ve become more dependent on these applications, downtime has become increasingly unacceptable. Housing these machines in an office building, home or warehouse doesn’t really work, as those locations are prone to power and internet outages, burglary, fire and tampering. For this reason, colocating equipment in a data center is essential for applications that are considered mission critical.
What a Data Center / Colocation Offers Clients
A facility that is designed for housing computing infrastructure. It will have locking cabinets that are bolted to the floor. In areas prone to earthquakes, bracing ensures the cabinets don’t move or fall over.
Power is guaranteed through access to one or more utility substations, large uninterruptible power supplies (UPSs) and diesel generators that can run almost indefinitely.
These facilities are staffed 24 hours a day and have video surveillance along with tightly controlled access to the building and parking lot. The construction is usually concrete with no windows, and there is no way to enter without identification.
Having this many servers next to each other means powerful and redundant cooling is required; the ability to cool the facility is a major component of the data center offering.
Colocation services are offered by many firms around the globe. In addition to the cabinet, power and cooling provided by these colocation data centers, you will find additional services being sold as well, including:
- Internet bandwidth (an internet connection)
- Remote hands service (technical personnel available on call, since the equipment is not housed at the client’s place of business)
- Data backup services
- Managed security services (to prevent data breaches)
Server Rooms – The History of Colocation
Colocation began as the outsourcing of what were called server rooms. Before colocation became a staple service offered to businesses, many firms housed their computer gear in the same building they occupied with their employees.
Many businesses had employees occupying the ground and upper levels of the building while in the basement, a data center with raised floors, ample cooling and network connectivity to the upper floors allowed employees to reach the applications and systems they needed to perform their jobs. Think of this as colocation on site if you will.
Server rooms not only connected employees on the floors above the equipment; expensive private lines also connected one branch of a business to others so that application access could be made company-wide. Where this wasn’t possible, transactions and processing from other locations were sent to the server room in batches through various means.
Advantages of on-premise server rooms
- easily and immediately accessible to IT staff
- all aspects of the facility could be controlled
- application access was contained within a single building
- costs were predictable
- easier audits, as all aspects were under the firm’s control
Disadvantages of having an on-premise server room
- capital cost intensive
- forced firms to spend resources in an area of business that may not have been their focus
- costly maintenance, repairs and modernization
- limited flexibility in leasing options when looking for a building
- made moving out of a building nearly impossible once the server room was built
- increased costs for security, cooling, infrastructure and network connectivity
- unless the firm owned two facilities, the server room could be a single point of failure
Advantages of using Colocation from an outside 3rd party
You can’t really speak to the history of colocation without understanding the advantages of colocation:
- leveraging someone else’s heavy capital expenditure at a fraction of the cost
- easier redundancy with interconnected data centers (you can purchase colocation in two places and get very high-speed connections between them to copy data from one site to the next)
- capital expenditures reduced to the hardware only; the building and environment capital expenditures go away
- a better fit for corporate budgeting, shifting spending from capital expenditure (CapEx) to operational expenditure (OpEx)
- a colocation facility will have much higher standards for security, power and network redundancy, and cooling, as these are its sole focus and specialty
Why Are You Considering Colocation?
We find these are the types of firms looking for colocation services:
- You’ve outgrown your existing in-house server room
- Your current provider is not performing as they should
- You are launching an application that supports internal or external users that cannot go down.
In all of these cases you need a mission critical facility that has no issues with keeping network, power and cooling flowing to your hardware at all times. Once a service provider has shown chronic issues with any of these essential environmental components it’s time to go.
No one wants the herculean task of moving from one facility to another. This is especially true when you have no swing hardware to keep the environment up and running while you move hardware to the new facility.
There comes a point where you’re on the fence. Here are some things to look for which may be signs that you really should move out of a facility:
1. If you hear “…Yes, we have redundancy…BUT our switching gear failed to switch to the backup device”
Data centers are built much better than they were 14 years ago, but there are still data centers out there with design flaws. Today, few data centers have a single generator or UPS, but having redundancy is useless if power cannot be switched over seamlessly. This reminds me of an office I recently worked in where the primary internet connection was supposed to fail over to a 3G network. The failover never worked without someone logging into the router and manually switching to 3G, which made the system useless, since a loss of internet not only disrupted work but also dropped VoIP phone calls. Having redundancy in power, cooling and network is important, but having a mechanism to switch over seamlessly is just as important. Lacking this capability is a sign of a design flaw, and the facility should be avoided.
2. If you hear “…to add additional power you will need to purchase the colocation cabinet next to you and leave it empty…”
This means you are using high-density servers in a facility that is low density. Each time you upgrade, you’ll be forced to purchase more empty space to offset “hot spots” in the data center. The financials here will not work in your favor, as true high-density space will cost less overall. We cover why it’s important to purchase high-density colocation in a separate section of this buyers guide.
3. You are refused access to additional space unless you purchase more managed services.
The goal of most colocation service providers is to maximize the revenue generated per square foot, which is normal for a firm that wants to be profitable. Here’s the issue: you’ve purchased straight colocation services based on the fact that you already carry the overhead of IT staff. This means the colocation footprint you lease should have been sold to you at a margin that allows the facility to make a profit, even if the margins are slimmer than managed services. Forcing the purchase of managed services in order to expand your colocation footprint is wrong if not disclosed before the initial purchase. If the colocation services provider decides not to aggressively discount, that’s fine, so long as you aren’t prevented from the needed expansion.
4. Frequent outages of all sorts
As mentioned, most failures that data centers experience aren’t due to a lack of redundancy but usually a lack of diversity or a failure in some switching technology. For power this may be a transfer switch; for networking, a firmware bug on a router. Here are some real-world examples of failures:
Router or switch upgrades. Although the network is redundant, a bad firmware upgrade can ruin your day, since all the redundancy and diversity on the planet will not tell the internet how to reach your servers if routing isn’t working. A poor rollback plan means the problematic upgrade will be worked on until resolved, even if the time allocated for the maintenance window is exceeded. It’s better to have a facility disciplined enough to swallow its pride and roll back to the previous firmware and configuration for the sake of its clients.
Lack of diversity is another issue in some data centers. While having redundant paths for electricity or connectivity is excellent, if the architecture is not diverse, the energy or bandwidth supplied to those devices can be compromised by a failed component upstream. This is witnessed occasionally with power distribution units in a facility.
Sensor failure – There are many sensors in a data center. Failure to detect extreme heat means fires can be missed; failure to detect a phasing issue in the incoming power means the motors powering AC compressors and cooling fans can be fried, causing the data center to overheat; failure to detect water leaks may mean servers are exposed to water or extreme humidity.
5. You are forced to consolidate your equipment into a different colocation services facility owned by the same company.
I’ve witnessed this myself a few times, and it’s never pleasant. Here’s why you may be forced to move your server hardware by a colocation provider. Let’s say the company owns two data centers, each at 30% occupancy. Chances are that at 30% occupancy these facilities are not profitable and are costing the colocation provider millions of dollars each month. If the two facilities are in the same region, the provider may choose to force the tenants of one facility to move into the other, bringing the occupancy rate up to 60%, which translates into instant profitability for the newly consolidated data center. While the move and all the expenses of the migration are typically paid for by the colocation facility, it still means downtime and aggravation for you.
6. Noticeable reductions in service quality or staff
If tickets opened for eyes-and-hands service are normally dealt with in 30 minutes or less and you begin to notice these requests now taking hours, or you see staff levels being reduced, you may be seeing aggressive cost cutting that will lead to diminished service quality. Physical support is essential if your hardware isn’t within walking distance of you.
If you find overall account support reduced, it may also mean that the colocation service provider is out finding new clients and is less interested in supporting existing ones until there’s an opportunity to upsell or increase services.
7. Draconian Contracting Policies
If you notice that your original 3-year contract auto-renewed for another 3 years, locking in that bloated “dot com”-era bandwidth pricing, you’ll start to wonder whether this colocation service provider is the best choice for you. While there is healthy demand for data center space at present, there are many colocation service providers to choose from, so there’s no need to lock folks into contracts with restrictive or draconian clauses.
Again, this article isn’t meant to cause issues between you and your colocation service provider; it is only a guide to protect your IT infrastructure from issues that can cause downtime or restrictions that prevent your business from growing. If you’d like to understand your colocation options, we are here to help.
In regards to power, mismatching the power density of your hardware with that of the data center you’d like to colocate in can cost your firm thousands of additional dollars each month.
Frequently we’ve witnessed clients upgrade standard 1U and 2U server hardware to modular chassis with blade servers, expecting efficiency and ease of use to increase. Higher-density computing is attractive and makes lots of sense, provided the facility housing the gear can accommodate it. Without an adequate environment, the space and power can end up costing much more, and in some cases the newly proposed hardware will not be permitted into the traditional facility at all. Firms in this situation often become frustrated as expansion is slowed or stopped entirely.
In some cases a colocation facility that houses standard server hardware will have no high-density space at all. If the same provider has compatible space in a separate location, we’ve now introduced unplanned migration costs and downtime of critical IT systems in order to move the hardware to the new spot. If the facility has no high-density space anywhere, your current contract may make it impossible to leave without a breach. So the new investment in hardware may end up being spread one chassis per rack, or it may not be admitted to the data center at all. This means that potentially 32U (56″) of space will be left empty in each rack, with no possibility of adding hardware in its place unless the blade chassis is removed.
Let’s assume a firm purchases 10 Dell blade chassis, where each chassis holds 10 Dell 1955 blade servers, for a total of 2,186 watts per chassis.
Ten of these blade chassis will draw 21,860 watts.
Average usable watts per cabinet can be calculated like this:
208 volts × (30 amps × 0.8) = 4,992 watts (in this example we use 80% of the total amperage to avoid brownouts; some prefer 75%)
Since each of our Dell chassis draws 2,186 watts, we can fit 2 chassis per rack for a total of 4,372 watts. This leaves a bit of headroom for a low-wattage server or some switching/network hardware.
This means we need 5 cabinets total. We’ll assume costs are calculated per amp regardless of overall consumption, so we won’t include actual power costs at this time.
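The cabinet math above can be sketched in a few lines of Python. The voltage, amperage, derating factor and chassis wattage are the figures from this example scenario, not universal constants:

```python
import math

# Figures from this guide's example scenario (assumptions, not universal)
volts = 208           # cabinet feed voltage
amps = 30             # breaker rating
derate = 0.8          # use 80% of rated amperage; some prefer 75%
chassis_watts = 2186  # one fully loaded Dell 1955 blade chassis
total_chassis = 10

usable_watts = volts * amps * derate                 # usable power per cabinet
chassis_per_cabinet = int(usable_watts // chassis_watts)
cabinets_needed = math.ceil(total_chassis / chassis_per_cabinet)

print(usable_watts)         # 4992.0 watts usable per cabinet
print(chassis_per_cabinet)  # 2 chassis fit per cabinet
print(cabinets_needed)      # 5 cabinets for all 10 chassis
```

Swapping in your own breaker rating and measured chassis draw gives a quick first pass before asking a facility for a quote.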
Low Density Proposed Costs:
5 cabinets at $1,100 average = $5,500 (not including increased build-out costs, cabling, patch panels and other one-time costs)
Let’s assume our high-density cabinets can handle 15,000 watts each; then we’d have:
High Density Proposed Costs:
2 cabinets at $2,500 average = $5,000
This doesn’t sound like a great amount of savings, but the key is that the 5 low-density cabinets would be almost fully consumed, with no room for additional hardware. In some facilities, racks may only come in certain cage configurations, requiring the leasing of additional cage space that may house no hardware.
In the high-density scenario we would need 2 cabinets:
Cabinet #1: 15,000 watts capacity; we insert 6 blade chassis drawing 13,116 watts, leaving 1,884 watts available for tape libraries, switches or routers needed for the environment.
Cabinet #2: 15,000 watts capacity; we insert 4 blade chassis drawing 8,744 watts, leaving 6,256 watts available for additional hardware in this rack.
So the math works in favor of high density for blade environments. Not only is the overall cost less in the high-density scenario; the savings increase as more cabinets are added to the environment. Note that some high-density facilities will accommodate 10,000–20,000 watts per cabinet, so it’s important to do the math up front, since not all facilities will offer the 15,000 watts used in our example.
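The cost comparison above can be checked with a short Python sketch. The $1,100 and $2,500 monthly cabinet prices are the averages assumed in this example and will vary by market:

```python
chassis_watts = 2186   # per fully loaded chassis, from this example

# Low density: 2 chassis fit per cabinet, so 10 chassis need 5 cabinets
low_density_cost = 5 * 1100      # assumed $1,100/cabinet/month

# High density: 15 kW cabinets hold 6 and 4 chassis respectively
high_density_cost = 2 * 2500     # assumed $2,500/cabinet/month

cab1_headroom = 15000 - 6 * chassis_watts  # watts left in cabinet #1
cab2_headroom = 15000 - 4 * chassis_watts  # watts left in cabinet #2

print(low_density_cost, high_density_cost)  # 5500 5000
print(cab1_headroom, cab2_headroom)         # 1884 6256
```

Beyond the $500/month difference, the headroom figures show why high density scales better: the remaining watts in each cabinet can absorb future hardware without leasing more space.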
[This concept is a bit tricky to understand in written form, check out our video online]
Since the colocation facility houses many others with high-density computing environments (in a regular-density facility), it must remain vigilant about not overselling power in its space, to avoid hot spots that can affect neighboring servers. With this vigilance comes the auditing of each and every cabinet whenever a new hardware device is added, which slows the ability to quickly deploy hardware and can delay upgrades or even prevent additional capacity needed during the peak holiday season.
- Plan your hardware purchases to coincide with contract end dates if possible, and line up alternate vendors as well.
- Never settle for a low-density space for high-density servers, or vice versa; it never works out well.
- Use a hardware planning tool; most major hardware providers give free access to them on their websites. For the purposes of this article, Dell provided a very nice tool which helped us compute watts per rack and total U space required, and allowed complete visualization of potential layouts and even the weight per item, should you want to house the gear in your own server room.
Most clients in the market for a data center have an understanding of the different uptime tiers of service a data center may provide. Tier I has no redundancies and a minimum target uptime of 99.671%, or a little over one day of downtime per year. Tier IV systems boast independent, dual-powered HVAC with multiple redundancies in a fault-tolerant facility, and a minimum target uptime of 99.995%, or about 26 minutes of downtime per year.
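The downtime figures follow directly from the uptime percentages; a quick Python check (using a 365-day year) reproduces them:

```python
HOURS_PER_YEAR = 365 * 24  # 8,760 hours

def downtime_hours(uptime_pct):
    """Annual downtime in hours for a given uptime percentage."""
    return (1 - uptime_pct / 100) * HOURS_PER_YEAR

tier1 = downtime_hours(99.671)  # Tier I minimum target
tier4 = downtime_hours(99.995)  # Tier IV minimum target

print(round(tier1, 1))       # 28.8 hours, a little over one day
print(round(tier4 * 60, 1))  # 26.3 minutes
```

Running the same function against any quoted SLA percentage is a quick way to translate marketing numbers into hours of real-world downtime.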
But Uptime Tier designations are not the whole story. In fact, relying merely on the title may net customers a data center that upholds the letter, but not the spirit, of the agreement. That’s why the most tech-savvy of customers don’t pay too much attention to the tier numbers and instead dig into other specifications to avoid any hype or bluster.
Demand Better Certifications
There is a blizzard of possible certifications out there, and it’s easy to be buffaloed if you’re not aware, for instance, that the Uptime Institute offers certifications for both a facility’s design and the completed site. Without both certifications, you can’t be certain that the facility was built to the specifications agreed to in the design.
Like buying a used car, it’s important to be aware of the most common tactics used to snow customers. For instance, anyone can say that they have Tier II certification on their website or marketing materials. But you have to check on their claims. Luckily, the Uptime Institute, an independent research and consulting firm, maintains a detailed listing of the data centers that they’ve certified, and you can easily verify whether the certification is for the constructed facility or just the design documents.
Uptime Institute credentials expire every 1-3 years, depending on the certification. Customers would be wise to include penalties in their contracts with the data center if they allow their accreditation to lapse. This can help avoid a tiresome transfer to another facility and give you leverage if you need it.
You can also move away from tier certifications altogether, or at least weight them less heavily, in favor of a Management and Operations Stamp of Approval. They’re hard to get: so far the Uptime Institute has awarded only 16, but more are in the pipeline.
This stamp of approval is less a snapshot of a single point in time and more a guarantee that the data center continuously operates to its service level agreement. And that’s what customers really want, isn’t it?
It’s important to know that you are getting competitive rates and that you aren’t signing a contract with strange clauses that could force an auto-renewal because you forgot to give notice 2 months before the end of the contract. The finer details of the contract, and small issues many wouldn’t think about, are listed in the 14 things every colocation buyer should know. Plan on rolling your hardware into the facility in preconfigured racks? Make sure the elevator (if there is one) will allow the rack to be wheeled in without tipping it to a dangerous angle. Can you ship hardware directly to the data center? Is it free? There are lots of questions to be asked, so definitely check out the guide and let us know.
We’ve long wanted to publish a colocation buyers guide for those out there in the market. There are simple but important issues that can be overlooked, causing you to spend more money unnecessarily. We’ve listed the top 14 things every colocation buyer should know below, in no particular order.
Check with your legal and finance teams to determine if your facility needs to be in a certain locale and if you require any 3rd party certifications of the facility you plan to colocate at. SSAE 16 and PCI are examples of the types of certifications a datacenter may have. If you are a service provider, your clients may have legal restrictions on their data being stored outside the country. If you are mainly hosting internal applications your requirements may not be as stringent as someone who is providing Software as a Service (SaaS). Finding a facility with renewable energy and other green initiatives may lead to tax benefits or play a role in corporate responsibility or a commitment to clients.
Some newer carrier-neutral colocation sites may not have all their network partners connected. Since network carriers incur a cost bringing services to each new data center that is built, many will wait until they win a client in that specific facility before building out their infrastructure. You’ll want to double-check that other clients in the facility are actually using bandwidth from your chosen network provider.
Naturally, datacenters want to make money, which can be easier when a colocation provider consolidates 2 low occupancy datacenters into a single moderately occupied datacenter. In other words, if you have 2 facilities that each have 30% utilization, forcing clients to move from the 2nd datacenter into the 1st will increase the percentage of occupancy and revenue while cutting the expense of the 2nd datacenter. Check your contract before signing to understand your rights regarding forced migration due to datacenter closure.
Network synergy can make a big difference in your IT deployment. You may have an on-net fiber provider that serves both your corporate office building and the data center you choose. In this case, you could back up from the data center to a small server room in your corporate office, or vice versa. Additionally, if you offer an online service to consumers who will use it mainly from home, you may find certain providers have better reach to consumer networks as opposed to business networks. A good example of this would be gaming companies.
Ensure you are not only getting “redundancy”
As important as redundancy is, diversity is just as critical. Having 2 sets of power feeds coming in from the utility is great but if they are on the same side of a building which happens to be damaged in a fire outbreak, you’ll most likely lose both. Having feeds routed diversely into the facility and throughout the facility will increase overall uptime and reliability.
There are areas in the US where power is sold at a lower cost. If you aren’t bound to a certain geography, seek out areas with cheaper power, especially for large colocation needs. Although not an exhaustive list, Virginia, Texas and Las Vegas tend to have competitively priced power due to access to cheaper power production methods and proximity. Note that this cheaper power is not necessarily renewable or green; it’s just cheaper.
Some colocation firms will have an auto-renew clause in their contract which renews the contract automatically for a term equal to the first term you signed up for. Signing a 3-year term only to have it auto-renew for 3 more years is rarely beneficial to you unless you negotiated amazing pricing on a service that is perpetually getting more expensive. For now, colocation is not steadily getting more expensive, which means it’s best to have the freedom to explore your options at the expiration of your contract. While there’s no glut in data center space right now, there is a decent amount of surplus, which means you have plenty of options.
Ensure you have adequate space around you when you sign up for colocation services. Usually reserving space isn’t an option unless you are willing to pay for it, but most facilities can give you first right of refusal on adjacent spaces, or at least check with you unofficially before selling the space right next to your cabinet or cage.
Get more than 1 or 2 colocation quotes if you can spare the time. While the old adage “you get what you pay for” definitely applies with colocation you will find that some facilities offer competitive pricing without sacrificing quality. A facility that retrofitted an existing building or purchased a site out of bankruptcy may have lower overhead which means they pay less, so these savings can be passed on to their datacenter clients. These synergies may not be known publicly but it may show up in pricing.
While typical real estate leases include gradual price increases, colocation services typically do not. Many colocation providers do include a clause that allows them to increase power costs in the event that their own power cost rises; this is normal, provided they give you a decent amount of notice beforehand. Ensure no other unit costs can be increased contractually without your approval.
Most facilities offer 24-hour access to their colocation, but only after opening a ticket and getting approval. Usually this approval means that their on-call staff may be off site and may need to drive in to let you access the facility. Getting approval at 3 a.m. during a major systems outage isn’t ideal, so find a facility you can walk into anytime, provided you are authorized to do so and have the proper identification and access information with you.
Having to push equipment into a small, non-freight elevator can be exhausting during initial setup. Having nowhere to park your car can make access to the colocation facility hampered and stress-inducing. Always tour the facility you plan to colocate in before signing contracts. During the tour, imagine the hardware you use and what it will take to quickly get it in and out of your rack or cage. Can you store items there at no cost, and can you borrow simple tools and a dolly without hassle? If you ship something in, will it be safely stored for you at no charge? Make sure doorways allow you to dolly items in without having to tilt heavy equipment due to lack of height.
If you do not match your hardware to the cooling density of the facility you will overpay in your space costs and hamper your ability to grow your footprint.
If you haven’t purchased hardware yet, please keep a few things in mind as you purchase:
• Having high-efficiency hardware means you can put more hardware into a smaller footprint, which will save you money so long as the power density of the colocation provider matches your hardware.
• Some storage hardware will come with an integrated enclosure which will (in most cases) force you into caged colocation space. While caged colocation is fine, it tends to have more upfront costs for build out and it comes with a minimum square footage that must be purchased.
• A base blade server chassis will not draw much power empty, but after fully loading it with 12 blade servers and 3 redundant power supplies, the power draw will increase dramatically. Have your VAR calculate the actual draw of a fully populated blade chassis. Also, high-RPM traditional drives create more heat and draw more power than SSDs. Check the math and see whether the power savings are worth moving to more power-efficient devices.
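As a rough illustration of why an empty chassis is misleading, here is a back-of-the-envelope estimate. The base-chassis and per-blade wattages below are hypothetical placeholders, so use your VAR’s real figures for planning:

```python
# Hypothetical figures for illustration only; get real numbers from your VAR
base_chassis_watts = 400  # assumed draw of an empty chassis
watts_per_blade = 250     # assumed draw per fully loaded blade server
blades = 12

fully_loaded = base_chassis_watts + blades * watts_per_blade

print(fully_loaded)                        # 3400 W fully populated
print(fully_loaded / base_chassis_watts)   # 8.5x the empty-chassis draw
```

The multiplier is the point: budgeting a cabinet around the empty-chassis number understates the real load many times over.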