The Future of Computing Lies In The Cloud

Cloud computing, often referred to as simply “the cloud,” is the delivery of on-demand computing resources — everything from applications to data centers — over the internet on a pay-for-use basis. Services such as storage, processing, and software run on remote servers that clients access over the internet.

There are many ways of implementing cloud computing. The three major service models are:

  1. Software as a Service (SaaS): The required software is installed on a server and used by the client over the internet. This ensures that the same version of the software is accessible almost everywhere. Examples include Google Docs, Microsoft Office 365, and Agile CRM.

  2. Platform as a Service (PaaS): Developers are provided a development and deployment platform in the cloud where they can upload their code and let the cloud decide how to manage the resources needed to execute it. This enables developers to create applications without worrying about server management, which is especially useful for small startups or new developers. Examples include AWS Elastic Beanstalk, Microsoft Azure, Google App Engine, and SAP Cloud Platform.
  3. Infrastructure as a Service (IaaS): IaaS involves using a remote server for storage and computation over the Internet. Its main limitation is data bandwidth: it can be used effectively only when the network bandwidth is comparable to that of the local system. For instance, the average internet download speed in the USA is about 70 Mbps, or roughly 8.75 MB/s, which is of the same order as the roughly 20 MB/s of a USB 2.0 external hard disk drive, so storing data in the cloud has become far more feasible. For computation, services like Google Compute Engine are employed to analyze huge datasets in a batch-processing fashion; they can double as primary servers or as additional capacity in times of high demand. A few examples include Google Drive, Amazon Web Services, and Google Compute Engine (a minimal storage sketch follows this list).
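
To make the IaaS storage case concrete, here is a minimal sketch of uploading a local file to a cloud object store using the Python client for Google Cloud Storage. The project, bucket, and file names are placeholders, and the sketch assumes the google-cloud-storage library is installed and credentials are already configured.

    from google.cloud import storage

    # Placeholder project and bucket names; real values come from your cloud account.
    client = storage.Client(project="example-project")
    bucket = client.bucket("example-backups")

    # Upload a local file to the remote object store, i.e. "storage over the Internet".
    blob = bucket.blob("reports/usage.csv")
    blob.upload_from_filename("usage.csv")

    print("Stored", blob.name, "in bucket", bucket.name)

The same pattern applies to other providers; only the client library and the naming change.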

What is the future?

Due to the scalability and low maintenance costs associated with the cloud, most businesses will shift their workloads to it. Since developers no longer need to manage the servers themselves, this shift will lead to a “serverless architecture”.
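
As an illustration of what “serverless” means in practice, here is a hedged sketch of a function written in the style of an AWS Lambda Python handler: the developer writes only this function, and the platform allocates and scales the servers that run it. The event shape and the greeting logic are assumptions made for the example.

    import json

    def lambda_handler(event, context):
        # 'event' carries the request payload; its exact shape depends on the trigger.
        name = (event.get("queryStringParameters") or {}).get("name", "world")
        # The platform handles provisioning, scaling, and billing per invocation.
        return {
            "statusCode": 200,
            "headers": {"Content-Type": "application/json"},
            "body": json.dumps({"message": f"Hello, {name}!"}),
        }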

I believe that in the future the cloud will handle a large portion of devices’ computation, and these devices will essentially become “thin clients”, i.e. they will only need enough computing power to manage their I/O devices and basic networking, while most computation tasks are outsourced to the cloud.
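
A thin client of that kind might look like the following hedged sketch: the device only gathers input, sends it over the network, and displays the result, while the heavy computation runs behind a cloud endpoint. The URL and the JSON response format here are hypothetical.

    import json
    import urllib.request

    CLOUD_ENDPOINT = "https://cloud.example.com/api/analyze"  # hypothetical endpoint

    def offload(raw_input: bytes) -> str:
        # The client's job is just I/O and networking; the analysis happens remotely.
        request = urllib.request.Request(
            CLOUD_ENDPOINT,
            data=raw_input,
            headers={"Content-Type": "application/octet-stream"},
            method="POST",
        )
        with urllib.request.urlopen(request) as response:
            return json.loads(response.read())["result"]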

The five main factors that will shape the future of cloud computing are:

  1. Scalability: The cloud can easily allocate resources to one task and reuse the unallocated resources for another, making it more efficient than local computing. In a traditional computing architecture, the system sits idle when there is no work, and whenever there is a surge in demand it hangs or crashes because it cannot obtain more resources for the extra tasks. You have probably experienced this when booking tickets or checking results online. The cloud solves this problem by adjusting resources dynamically, so that during a surge in demand it can provide more resources and keep the system from hanging.
  2. 5G: The biggest hurdle for a centralized computing platform is network bandwidth. 5G will break this hurdle by providing speeds of up to 10 Gbps, which will make communication with the central server fast enough for it to be used for processing.
  3. Quantum Computing: Once quantum computers are commercialized and start being used as servers in the cloud, they will be able to provide enormous computing power to small devices that connect to them over high-speed Internet.
  4. Internet of Things: Computing will become ubiquitous, and devices will keep becoming smaller and cheaper, so including full-fledged processors in them will not be feasible. As a consequence, they will need to send the data they collect to a cloud server, which will analyze it and send back instructions.

  5. Machine Learning: Google announced at the GCP NEXT 2016 conference that it is going to make the process of data ingestion, storage, and training machine learning models as simple as calling an API. This will allow developers to focus on creating incredible new applications without having to understand complex concepts like neural networks. As machine learning becomes more advanced, local general-purpose servers will no longer be efficient enough to run the neural networks. AI, Machine Learning, and Deep Learning computations will instead be carried out by Google’s application-specific integrated circuits called Tensor Processing Units (TPUs), which are about 10 to 20 times more efficient than traditional GPGPUs in terms of cost-performance. Moreover, most Deep Learning tasks are executed in a manner similar to batch processing, so they can easily be moved to the cloud, where the cloud servers use the TPUs. Cloud TPUs are easy to program via TensorFlow, the most popular open-source machine learning framework (a hedged sketch follows this list). It is interesting to note that even NVIDIA, one of the largest GPU manufacturers, intends to compete with Google with its Tesla V100 GPGPU.
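
Since Cloud TPUs are programmed through TensorFlow, here is a hedged sketch of how a small Keras model might be placed on a TPU using TensorFlow 2’s distribution strategy API. The empty TPU address relies on automatic detection on Google Cloud, the model is a toy, and exact API names vary slightly between TensorFlow versions.

    import tensorflow as tf

    # Locate and initialize the TPU; on Google Cloud the resolver can usually find it automatically.
    resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="")
    tf.config.experimental_connect_to_cluster(resolver)
    tf.tpu.experimental.initialize_tpu_system(resolver)
    strategy = tf.distribute.TPUStrategy(resolver)

    # Anything built inside the strategy scope is replicated across the TPU cores.
    with strategy.scope():
        model = tf.keras.Sequential([
            tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
            tf.keras.layers.Dense(10, activation="softmax"),
        ])
        model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

    # model.fit(...) would then run the training steps on the TPU.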

Aditya Singh
