
IO.Cloud is a cloud infrastructure built on the OpenStack platform. IO aims to compete with larger providers for public computing workloads by offering its enterprise-grade cloud suite from public data centers around the globe. IO also sells IO.Cloud for private data centers in pre-bundled racks, which can be set up on site so that IT departments can stand up their own private cloud without taking on the full configuration overhead.

Each bundle includes 26 compute nodes and 4 storage nodes. Because the racks are built on the Open Compute Winterfell and Knox architectures, administrators get 960 terabytes of storage and 3.75 terabytes of RAM per rack. The compact, modular design means a data center with 18 racks holds around 17 petabytes of storage and 67.5 terabytes of RAM. Compared with traditional compute and storage infrastructure, IO gives engineers more bang for their buck while conserving floor space.
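Those aggregate figures follow directly from the per-rack numbers; a rough back-of-the-envelope sketch (Python used only as a calculator, with the per-rack capacities and node counts taken from the figures quoted above) might look like this:

```python
# Back-of-the-envelope check of the capacity figures quoted above.
# Per-rack numbers come from the article; the scaling itself is just arithmetic.

COMPUTE_NODES_PER_RACK = 26   # Open Compute Winterfell compute nodes
STORAGE_NODES_PER_RACK = 4    # Open Compute Knox storage nodes
STORAGE_TB_PER_RACK = 960     # terabytes of storage per rack
RAM_TB_PER_RACK = 3.75        # terabytes of RAM per rack
RACKS = 18                    # example data center size from the article

total_storage_pb = STORAGE_TB_PER_RACK * RACKS / 1000   # ~17.3 PB
total_ram_tb = RAM_TB_PER_RACK * RACKS                  # 67.5 TB

print(f"{RACKS} racks -> ~{total_storage_pb:.1f} PB storage, {total_ram_tb:.1f} TB RAM")
```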
In a statement on its website, IO said, “IO.Cloud is built on Open Compute because it provides our engineers with the flexibility to configure and optimize the hardware specifically for scale cloud deployments … IO.Cloud uses OpenStack Cloud components that are interoperable and designed to support standardized hardware implementations.”
IO is taking on cloud industry giants such as Microsoft, Salesforce, and Amazon. It hopes to build a competitive advantage with corporate clientele, since many corporations prefer a single vendor for all of their cloud needs: it keeps IT groups off unsupported hardware configurations and makes life much easier from an administration standpoint. Because IO lets customers use public resources alongside its private enterprise cloud offering, IO.Cloud becomes an enticing option for companies looking to maximize the space in their data center facilities, and it leaves corporations the flexibility to extend into a hybrid computing solution should the need arise.