What is a hyper-converged infrastructure (HCI)?
Traditionally, data centers have relied on separate stacks of hardware – a layer for compute, a layer for storage, and a layer for networking – in order to function. With that comes the experts necessary to set up, maintain, and troubleshoot these complex infrastructures. Deep technical knowledge is necessary to run these technology silos. These traditional setups are still widely used, but hyper-converged systems are gaining momentum as IT decision makers see the tremendous benefit these systems can bring to their departments.
The issue often comes down to scale
Traditional IT infrastructures are difficult to scale. Once the stacks reach the end of their expected life cycle or the storage layer fills to capacity, it's often necessary to perform a forklift upgrade – a complete rip-out-and-replace is often the only way forward. This is costly, risky, and time-consuming.
As companies grow, so does the infrastructure needed to support that growth. Data storage inevitably increases, as does the need for constant availability. With the need for increased availability comes a decrease in tolerance for downtime. Depending on the business, the cost of downtime can be monumental. Growth also means more hardware, more technical silos, and an increase in the energy needed to power – and then keep cool – those rows of cabinets.
Enter hyper-converged infrastructures
Hyper-convergence combines the layers of the traditional infrastructure into a single “box”. This has obvious benefits from a dedicated hardware perspective, but it also allows IT departments to utilize their resources much more efficiently. The reduction in hardware footprint means a much lower cost of ownership and much lower energy usage.
Hyper-converged infrastructures have a growing presence in data centers. The cost and resource savings are impressive – not to mention that they can be set up in a short amount of time and then managed from a single pane of glass.
What is IOPS?
Input/Output Operations Per Second, or IOPS (pronounced eye-ops), is the most common way to benchmark a storage system’s performance – whether hard disk drives, solid state drives, or storage area networks.
The most common performance characteristics are as follows:
|Measurement|Description|
|---|---|
|Total IOPS|Total number of I/O operations per second (when performing a mix of read and write tests)|
|Random Read IOPS|Average number of random read I/O operations per second|
|Random Write IOPS|Average number of random write I/O operations per second|
|Sequential Read IOPS|Average number of sequential read I/O operations per second|
|Sequential Write IOPS|Average number of sequential write I/O operations per second|
In short, each of these measures the number of read/write operations a device can complete in one second.
There are a lot of IOPS performance claims published by vendors and manufacturers. Most of these are measured under the most favorable conditions and, as such, shouldn’t be relied upon too heavily – they rarely match the actual workloads companies run on a daily basis. Many performance claims are based on a 4 KB block size, while real-world workloads often use much larger blocks.
Calculating IOPS depends on a few factors, and latency is one of them. Latency is a measure of the time delay for an input/output (I/O) operation. For spinning disks, two factors dominate that delay:

Spindle speed (RPM) – Enterprise-level drives most commonly rotate at 10,000 or 15,000 RPM, which determines rotational latency.

Seek time – How long it takes the read/write head to move to the needed track on the platter.
Newer SSD drives have significantly better IOPS performance than their traditional hard disk drive counterparts – a topic that could fill an entirely separate post. A lack of moving parts, among many other things, drastically improves their performance. As with most things, an increase in performance usually results in an increase in price.
Here’s a basic formula to estimate IOPS: divide 1 by the sum of the average latency and the average seek time, both expressed in seconds. If your numbers are in milliseconds, convert first – so IOPS ≈ 1000 / (average latency in ms + average seek time in ms).
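The formula above can be sketched in a few lines of Python. The drive figures below are assumptions for illustration: a 15,000 RPM drive’s average rotational latency is half a revolution (about 2 ms), and 4 ms is a ballpark average seek time.

```python
def single_disk_iops(avg_latency_ms: float, avg_seek_ms: float) -> float:
    """Estimate IOPS = 1 / (latency + seek time), times converted to seconds."""
    return 1 / ((avg_latency_ms + avg_seek_ms) / 1000)

# 15,000 RPM -> one revolution every 4 ms -> ~2 ms average rotational latency
rotational_latency_ms = 60_000 / 15_000 / 2

print(round(single_disk_iops(rotational_latency_ms, 4.0)))  # roughly 167
```

Run the same function with 10,000 RPM numbers (≈3 ms rotational latency) and a slightly longer seek, and the estimate drops accordingly – which is why spindle speed matters so much for spinning disks.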
The basic formula above applies to a single disk. When using multiple disks in an array, the calculation changes – and it changes again under a RAID configuration, where write penalties come into play.
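As a sketch of how the single-disk figure scales, here is the common rule-of-thumb model for RAID arrays: reads are spread across all spindles, but each logical write costs multiple physical I/Os (a write penalty of roughly 2 for RAID 1/10, 4 for RAID 5, 6 for RAID 6). The disk count and workload mix below are hypothetical examples, not recommendations.

```python
def raid_effective_iops(disk_iops: float, disks: int,
                        read_fraction: float, write_penalty: int) -> float:
    """Effective IOPS under the common RAID write-penalty model."""
    raw = disk_iops * disks                 # aggregate raw IOPS of the array
    write_fraction = 1 - read_fraction
    return raw / (read_fraction + write_fraction * write_penalty)

# Hypothetical: eight disks at ~170 IOPS each, RAID 5, 70/30 read/write mix
print(round(raid_effective_iops(170, 8, 0.70, 4)))
```

Note how much the write penalty erodes the raw aggregate: the same eight disks in RAID 0 (penalty 1) would deliver the full 1,360 IOPS for this mix. This is one reason your IOPS needs can’t be read straight off a spec sheet.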
Your IOPS needs will depend on a myriad of factors.
If you’ve recently made changes to your Alpha Anywhere application, it’s a good idea to restart the application. We often run into issues where changes are not appearing, and in most cases a restart will pick up the most recent changes.
ZebraHost is proud to partner with Carbonite to offer Disaster Recovery Solutions. Thank you, Carbonite, for the great service, support, and the great write-up!
From Carbonite’s Partner page:
“ZebraHost is a global application-hosting firm that truly goes the extra mile when it comes to understanding each customer’s unique business and technological requirements.
The company takes a great deal of time to learn everything it can about customers’ industries, business objectives and plans for the future. That’s precisely how ZebraHost and its disaster recovery arm, ZebraDR, have earned a stellar reputation for providing excellent, custom-fit products and services.
“We serve many niche markets and we get to know the markets very, very well,” said Clive Swanepoel, ZebraHost’s founder and CEO. “For instance, one of our major markets consists of developers that use Alpha Software to create their applications. We participate in the forums, we go to conferences and we know most of the key developers personally. We get heavily involved and we’re very customer-service focused.”
ZebraHost has been hosting applications and providing data and IT services since 2000. But the firm recently found that it had room in its portfolio for a new line of powerful disaster preparedness solutions that are purpose-built for small and midsize businesses (SMBs), medical practices and law firms. That’s when ZebraHost and ZebraDR turned to Carbonite.
As a Carbonite Partner, ZebraHost now offers the full lineup of Carbonite backup solutions for SMBs and consumers. That includes Carbonite Personal plans, Carbonite Pro plans for workstations, Carbonite Server Backup and the Carbonite Backup Appliance, an all-in-one disaster recovery solution for SMBs.
“Carbonite fills the niche we had for that small business market,” said Bryan Manning, Sales and Business Development Manager at ZebraHost. “But it also gives us the ability to serve that somewhat larger customer that may need server backup or an appliance on site. Carbonite is opening a lot of doors for us.”
Some of ZebraHost’s favorite things about being a Carbonite Partner include:
The trusted Carbonite brand name
When customers hear the name Carbonite they know they’re going to get a highly respected, reliable backup and recovery solution. That makes Carbonite a great fit for ZebraHost and ZebraDR.
“Carbonite has a very good brand presence,” Manning said. “The reputation is awesome.”
Powerful yet simple backup
Carbonite is a robust and dependable backup solution that is easy to install and use. Partners can easily monitor and manage their clients’ backups through Carbonite’s Partner Portal and Web-based management dashboard.
Excellent marketing and technical support
ZebraHost prides itself on providing top-notch customer support. If a customer sends in a support email, they will quickly get a phone call in return. ZebraHost expects the same level of support from its technology partners. Carbonite, which offers market development funds to qualified partners and assigns each partner a dedicated account manager, fit the bill perfectly.
“ZebraHost is based on support. We’ve had many different software companies approach us about becoming a partner but they weren’t very support-focused,” Manning said. “Carbonite has given us more of a personal relationship from day one and that matches our culture.”