Dissecting Cloud Services with Artificial Intelligence

Cloud computing is a term thrown around a lot in recent years. The terminology originally comes from network diagrams, where the internet was drawn as a cloud: the hardware behind it was unknown, but the functionality was there. In the modern world, this decoupling of hardware from functionality has led to many private and public cloud offerings. These offerings have in turn put additional fuel on the fire of artificial intelligence development. Here is how a number of cloud service models have helped in the maturation of artificial intelligence.

Software as a Service (SaaS)

In this cloud model the end user simply consumes a finished service. They do not control any of the underlying hardware or network. This is the largest sector across all data centers. One example of SaaS is Google's search service, which is backed by a number of advanced artificial intelligence algorithms, including neural networks used to rank and optimize results. More about this can be read on Google's official blog.

Platform as a Service (PaaS)

In this cloud model the end user is given a "cookie cutter" network environment. This environment allows artificial intelligence researchers to do a number of things. They can use it as a test environment to run potential customer simulations. Researchers can also use this model to scale capacity on demand. For example, if their environment needs two specific servers for X amount of users, and they then double those users, the PaaS can add two more servers with the required configuration. This automatic expansion means AI researchers no longer need to worry about right-sizing their environment.
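The scaling rule described above (two servers per X users, doubling servers when users double) can be sketched in a few lines of Python. This is a minimal illustration, not any specific PaaS provider's API; the function name and parameters are assumptions chosen for clarity.

```python
import math

def servers_needed(users, users_per_group, servers_per_group=2):
    """Hypothetical PaaS sizing rule: each group of `users_per_group`
    users requires `servers_per_group` identically configured servers."""
    groups = max(1, math.ceil(users / users_per_group))
    return servers_per_group * groups

# With X = 100 users per two-server group:
# 100 users -> 2 servers; 200 users -> 4 servers, matching the example.
```

A real PaaS would apply a rule like this continuously, adding or removing preconfigured servers as the user count crosses each threshold.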

Infrastructure as a Service (IaaS)

In this final cloud model an artificial intelligence researcher has full control. They can add servers, either automatically or manually, as they see fit. The main draw is that you can rely on the data center's redundant network backbone and power generators without paying for that investment yourself. The downside is that the people in charge of the environment need proper technical training. A configuration mistake on a set of servers can greatly slow down the entire environment, and therefore any experiments or services that need to be deployed.
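The "automatically or manually add servers" decision often comes down to a simple load check against the fleet. The sketch below is a hypothetical illustration of that logic, assuming CPU utilization readings are available per server; the threshold and function names are not from any particular IaaS provider.

```python
def should_add_server(cpu_utilizations, threshold=0.75):
    """Hypothetical auto-scaling check: return True when the average
    CPU utilization across the fleet exceeds the given threshold,
    signaling that another server should be provisioned."""
    if not cpu_utilizations:
        return False
    average = sum(cpu_utilizations) / len(cpu_utilizations)
    return average > threshold

# A lightly loaded fleet stays put; a saturated one triggers a new server.
```

In a manual IaaS setup a trained operator makes this same judgment by watching dashboards, which is exactly where the training requirement mentioned above comes in.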

AI researchers and providers have more choices than ever for building their latest creations. Furthermore, scaling compute capacity up or down automatically is far easier today than it once was. The trade-off is that additional training may be needed to verify the environment is built properly.