Advanced Server Techniques – Energy Efficiency
Student Number: 1314793

Table of Contents
1. Introduction
2. Advanced Server Techniques
2.1 Virtualisation
2.2 Content Delivery Network (CDN)
2.3 Load Balancing
3. Energy Efficiency in Server Techniques
4. University of Connecticut – School of Business
5. Server Techniques – Businesses

1. Introduction
Looking back at the history of the World Wide Web, which was initially set up to interconnect a small number of government research laboratories, and comparing it with today's extensive use of its services and the almost daily reliance on it by millions of people around the world, shows that while the internet and the purpose of its applications have changed over the years, one thing has remained consistent: the interest in its usability and the exponential increase in the number of its users, which in 2008/2009 alone grew by 18%. As the growth of web-based applications continues at a steady pace, improved server handling capability has become an important aspect of a reliable and successful web service. Over time, new server architectures and techniques, including virtualisation, load balancing and content delivery networks, have been developed to handle the increase in network traffic and the demand for web-based services. The same techniques are also used by large companies and organisations to provide services on their private networks. Traditionally, the development of server techniques has focused on performance improvements driven by continuous user demand for web applications; however, ever-increasing energy consumption has begun to limit performance growth because of the cost of electricity bills and the carbon footprint.

Therefore the focus of research on server system development is gradually shifting towards power and energy efficiency. Part of this paper aims to identify new technologies that would make server systems more energy efficient without compromising performance, but rather boosting it.

2. Advanced Server Techniques
Companies and organisations that use the internet as a communication and transaction tool between themselves and their customers, users or other organisations have been forced to develop more efficient and sophisticated server systems due to the increase in the number of users and in the amount of data transferred. Below is a review of some of these server techniques.

2.1 Virtualisation
As the ontological meaning of the word virtual (as opposed to real) indicates, virtualisation is achieved by using a software application that allows the creation of one or more virtual "instances" of a machine (guests) living within the same physical machine (host). For example, the result of virtualisation applied to servers is a number of independent servers sharing the same physical resources. The three most popular implementations of virtualisation are:
* Virtual machine: the guests run on an imitation of the host hardware, so guests can run different operating systems.

* Paravirtual machine: similar in many respects to a virtual machine, but it does not require full emulation of the host's hardware; instead it uses an API (Application Programming Interface) to interact with the host platform.
* Virtualisation at the OS layer: the guests are limited to the same operating system as the host, although distinct distributions of it are allowed.

Figure 1. Distributed Power Management with VMware DRS

VMware is one of the companies that provide virtualisation services. Among the range of techniques it uses, Distributed Power Management (DPM) claims to reduce energy consumption and optimise efficiency by dynamically managing a series of virtual servers. DPM takes advantage of the fact that virtual servers can be migrated without disruption, allowing the server system to shut down and bring up resources (hosts) according to application demand without reducing service levels.
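As a rough illustration of the consolidation logic behind this kind of power management, the following Python sketch powers hosts on or off according to average cluster utilisation. The host structure, thresholds and the migrate_vms helper are hypothetical assumptions for demonstration only; this is not VMware's DPM implementation.

```python
# Hypothetical sketch of DPM-style host consolidation: if cluster demand is low,
# migrate VMs off the least-loaded host and power it down; if demand is high,
# power a standby host back on. Names and thresholds are illustrative only.

POWER_ON_THRESHOLD = 0.80   # average utilisation above which a standby host is woken
POWER_OFF_THRESHOLD = 0.40  # average utilisation below which a host may be evacuated

def average_utilisation(hosts):
    active = [h for h in hosts if h["powered_on"]]
    return sum(h["load"] for h in active) / len(active)

def migrate_vms(source, targets):
    # Placeholder for live migration: move each VM to the current least-loaded target.
    for vm in list(source["vms"]):
        target = min(targets, key=lambda h: h["load"])
        target["vms"].append(vm)
        target["load"] += vm["load"]
        source["vms"].remove(vm)
        source["load"] -= vm["load"]

def rebalance(hosts):
    avg = average_utilisation(hosts)
    if avg > POWER_ON_THRESHOLD:
        # Demand is high: bring a powered-off host back into the cluster.
        for host in hosts:
            if not host["powered_on"]:
                host["powered_on"] = True
                break
    elif avg < POWER_OFF_THRESHOLD:
        # Demand is low: evacuate the least-loaded host and power it down.
        active = [h for h in hosts if h["powered_on"]]
        if len(active) > 1:
            idle = min(active, key=lambda h: h["load"])
            migrate_vms(idle, [h for h in active if h is not idle])
            idle["powered_on"] = False

hosts = [
    {"powered_on": True, "load": 0.2, "vms": [{"load": 0.2}]},
    {"powered_on": True, "load": 0.1, "vms": [{"load": 0.1}]},
]
rebalance(hosts)  # with average load 0.15, one host is evacuated and powered down
```

The essential point is the one the DPM description above makes: because virtual servers can be moved between hosts without interrupting the service, idle physical capacity can simply be switched off.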

2.2 Content Delivery Network (CDN)
A CDN can be described as a series of network elements arranged to deliver content to end users more efficiently. In other words, a CDN consists of a series of server systems placed in different locations, each containing copies of the data (content). When a user requests this data, the response comes from the server system closest to the user. This delegation of requests reduces the chance of a bottleneck on any one server.
Figure 2 illustrates how a simple HTTP request works when using a CDN. First the user makes a request to the DNS server (1). Once the request is resolved by the DNS server (2), it is sent to the application server (3), which then requests the content from the content provider server (4).

Finally, the response is completed by the content provider server and sent by the application server to the user.
The main feature that a CDN brings to web-based applications and systems is scalability, which is nevertheless an important factor to consider when designing a web service. Web service providers would otherwise need to invest money and time in additional network connections and infrastructure in order to scale their systems; outsourcing these resources to a CDN removes much of that costly investment and relieves most of the traffic from the customer's own servers.
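The nearest-replica decision at the heart of a CDN can be illustrated with a brief sketch. This is a simplified model assuming hypothetical edge locations and a plain geographic-distance rule; real CDNs usually make this choice during DNS resolution and also weigh server load and network conditions.

```python
import math

# Hypothetical edge locations: name -> (latitude, longitude). Illustrative only.
EDGE_SERVERS = {
    "london": (51.5, -0.1),
    "new_york": (40.7, -74.0),
    "singapore": (1.35, 103.8),
}

def distance(a, b):
    # Rough great-circle distance in kilometres between two (lat, lon) points.
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

def nearest_edge(user_location):
    # Pick the replica with the smallest geographic distance to the user.
    return min(EDGE_SERVERS, key=lambda name: distance(EDGE_SERVERS[name], user_location))

print(nearest_edge((48.8, 2.3)))  # a user near Paris is served from "london"
```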

2.3 Load Balancing
Using only one server to provide a web service can be enough in some cases, where the network traffic does not exceed the server's handling capacity. If demand for the service increases, one solution is to replace the server with a more powerful one. However, this replacement strategy only works until something goes wrong within the server, which is not unusual; the service then becomes inaccessible until the problem is fixed, causing a decline in service quality and in users' trust. This is where load balancing comes in: a server technique that offers a solution to the problem described above, as well as to many other concerns in web application hosting. Load balancing refers to the technique of taking the network traffic directed at a specific application and distributing the requests across a number of different servers (a server farm) that all serve the same application. Should one server go down, another server handles its portion of the network traffic, so a server can fail while the service remains up and running. Ensuring this continuous accessibility of a service is called high availability.
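The failover behaviour just described can be shown with a minimal sketch. The following Python model of a round-robin load balancer with health-check failover is illustrative only; the class, server names and health-check hooks are assumptions for demonstration, not any particular product's API.

```python
import itertools

class LoadBalancer:
    """Minimal round-robin load balancer with failover (illustrative sketch only)."""

    def __init__(self, servers):
        self.servers = list(servers)
        self._cycle = itertools.cycle(self.servers)
        self.healthy = set(self.servers)

    def mark_down(self, server):
        # A health check would call this when a server stops responding.
        self.healthy.discard(server)

    def mark_up(self, server):
        # Called when a failed server comes back; it simply rejoins the pool.
        self.healthy.add(server)

    def next_server(self):
        # Skip unhealthy servers so a single failure never takes the service down.
        for _ in range(len(self.servers)):
            candidate = next(self._cycle)
            if candidate in self.healthy:
                return candidate
        raise RuntimeError("no healthy servers available")

lb = LoadBalancer(["app1.example.local", "app2.example.local", "app3.example.local"])
lb.mark_down("app2.example.local")           # simulate a failed server
print([lb.next_server() for _ in range(4)])  # requests keep flowing to app1 and app3
```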

Figure 3 shows a simple implementation of load balancing.

Figure 3. Simple Load Balancing Implementation

Scalability is another main purpose of load balancing, and applying it is straightforward. Since the requests of a failed server are sent to the other servers and handed back when it comes up again, the load balancer will use whatever resources (servers) are available; therefore any desired number of servers can be added without users ever noticing and without compromising service availability at any level.

3. Energy Efficiency in Server Techniques
Taking advantage of new server techniques to deliver a quality web service very often means using a relatively large server estate, which in turn means more energy is consumed to keep a larger number of servers up and running, not forgetting the energy used by the cooling system in the server room.

Enterprises that rely heavily on large server systems as their main revenue channel, including eBay, Google and Amazon, have started to look for ways to keep their energy costs down as their server infrastructure constantly grows. Some server techniques and architectures can help businesses to reduce their energy consumption, for example virtualisation, as detailed in section 2.

There are also initiatives for energy efficiency such as Climate Savers Computing, a non-profit organisation of eco-conscious consumers, businesses and conservation organisations started by Google and Intel in 2007. Its goal is to promote the adoption of smart technologies that can improve the efficiency of computing power and reduce energy consumption when a computer is in an inactive state.

"Today, the average desktop PC wastes nearly half of its power, and the average server wastes one-third of its power."
Although initiatives of this kind are likely to lead the IT industry for the next few years, some future technologies point to a lower-level change in computing architecture. Quantum computing, that is, computing based on quantum theory, is one of these technologies: it is currently at an early stage of development, but it promises to increase exponentially the number of calculations a processor can perform and to remove the ever-present overheating problems of silicon-based computing.

4. University of Connecticut – School of Business
The School of Business at the University of Connecticut (Storrs, Connecticut) is in 62nd position in the top 100 World Rankings of Business Schools based on research contribution 2005–2009, which also places it within the top 5% of this ranking.

To maintain this position, the school is often pushed along by the current of emerging technologies in its pursuit of a higher-quality teaching service. To achieve this, the School of Business needs to maintain uninterrupted access to its courses and services for over 3,500 students and 150 staff members, regardless of whether they are located inside the country or overseas.

Since October 2010, the School of Business has provided worldwide, highly available online course access for students, on an infrastructure capable of hosting the most recent cutting-edge services, without increasing its energy consumption but rather reducing it considerably. This was achieved through a series of technologies, including the migration of its data centre to virtualisation-optimised Dell servers (running VMware vSphere software) and the building of its storage solution using Dell PowerEdge servers. The hardware currently used by the school is:
* 10 Dell PowerEdge™ R710 servers with Intel® Xeon® X5570 series processors
* 144 GB of RAM
* Dell™ EqualLogic™ PS6000E and PS5000E iSCSI SAN arrays
* Dell PowerVault™ MD3000i highly available modular disk storage arrays
* Avaya Ethernet Routing Switch 8800
* 20 Avaya Ethernet Routing Switches 5698TFD-PWR

* Dual 10GbE MLT
* 4 Avaya Ethernet Routing Switches 5650TD
* Edge to core: dual 10GbE distributed multilink trunks

By using virtualisation the school was able to reduce the number of physical servers from 65 to 10, which resulted in a 75 percent reduction in the effort of managing the physical server infrastructure and allowed staff to spend more time on new projects.

Savings on server management were not the only benefit of having fewer servers to maintain: rack space was reduced by 85 percent, and the cost of the power dedicated to cooling the server room dropped by 20 percent. Computing power was also affected by the use of virtualisation, because a number of virtual servers can coexist within the same physical machine, allowing service-server individualisation, in other words the possibility of hosting each service on its own exclusive server. The improvement observed by the school after applying virtualisation across 10 physical servers was 100 percent more computing power than with the previous 65 physical servers.
VMware was the software the school used for virtualisation, and it offers a feature called VMotion, which allows the virtual servers to be load balanced across physical servers according to service demand or physical server maintenance. The school used this mobility of virtual servers to dramatically minimise server downtime, which dropped by 97 percent.
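As an illustration of demand-driven placement of this kind, the sketch below migrates virtual machines off an overloaded host to the least-loaded one. The host names, loads and threshold are hypothetical assumptions; this is not VMware's DRS/VMotion algorithm, only the general idea it relies on.

```python
# Hypothetical sketch of demand-driven VM placement: when a host becomes
# overloaded, move its smallest VM to the least-loaded host in the cluster.
# Host names, loads and the threshold are illustrative assumptions only.

OVERLOAD_THRESHOLD = 0.85

def rebalance_cluster(hosts):
    """hosts: dict of host name -> list of VM loads (fractions of one host)."""
    moves = []
    for name, vms in hosts.items():
        while sum(vms) > OVERLOAD_THRESHOLD and vms:
            vm = min(vms)                                   # smallest VM is cheapest to move
            target = min(hosts, key=lambda h: sum(hosts[h]))
            if target == name:
                break                                       # nowhere better to put it
            vms.remove(vm)
            hosts[target].append(vm)
            moves.append((vm, name, target))
    return moves

cluster = {"esx1": [0.5, 0.4, 0.2], "esx2": [0.3], "esx3": [0.2]}
print(rebalance_cluster(cluster))  # VMs are migrated off the overloaded esx1 host
```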

The school's main goal is to provide high-performance remote desktop sessions for its students and staff, accessible from any workstation located anywhere in the world; the school therefore needs a solid and reliable connection to the core and to the network. The school is building its own cloud for this purpose, as it regards complete mobility as an important part of its quality service package in order to keep attracting the best potential students.
"With redundant fibre connections, redundant power supplies, redundant power switches, and dual CPUs for high-availability mode, our level of reliability for business continuity is absolutely huge," commented Michael Vertefeuille, Chief Operating and Technology Officer, University of Connecticut School of Business.
In order to achieve high performance, improve scalability on a highly virtualised network and support its extensive SAN (Storage Area Network) virtualisation clusters, the school is hoping to deploy a 10 Gigabit Ethernet and iSCSI (Internet Small Computer System Interface) SAN infrastructure. The SAN infrastructure is being built on the Avaya Ethernet Routing Switch 8800, which is designed expressly for highly virtualised systems and to improve scalability.

5. Server Techniques – Businesses
Just by looking at the number of internet users throughout the world (a few hundred million in 2000 vs. almost 2 billion in 2010), one can get an idea of how important the internet has become to our lives over the last decade. The internet has turned into an indispensable communication tool for a number of areas and activities, which nowadays are accomplished in a fraction of the time they used to take before the internet revolution.

As the internet gained more and more attention from users, some businesses identified this as an opportunity to use it as a selling channel in the pursuit of new customers. The traditional approach for businesses using the internet to generate revenue consisted of a single server hosting a single service, which was typically enough for a certain number of simultaneous users, depending on the computing capacity and specification of the server. There are a number of issues with this traditional approach, associated with maintenance, updates, scalability and failure of that single server. As far as a business is concerned, a long period of downtime on its server translates into both lost money and lost trust from its users and customers.

In January 2008, Amazon.com's availability dropped from 100 percent to less than 10 percent, recovering to 70 percent availability only after more than 2 hours. The calculated loss for Amazon was about $4.13 million globally, roughly $31,000 per minute on average (just over two hours at $31,000 per minute comes to around $4.1 million). No figure for the number of unhappy customers was reported, but one can imagine it. Scalability is another major concern for businesses, as it avoids having to restructure the whole server system in the case of rapid and unexpected growth. Twitter is the most recent and most famous example of a poorly scalable server system: its success was much bigger and quicker than expected, and its architecture was not designed for scalability, so it is now forced to go through the painful process of scaling that architecture, resulting in continuous

…..

…This is one of the reasons why it is not surprising that businesses are keen to invest more resources in the short term in their server systems, in order to keep customers happy and keep a money-making service uninterrupted.

Part 1a
The brief says "Review and critically evaluate current techniques for implementing scalable and highly available servers". Breaking that down:
· Review – research several big websites to discover what scalable and available server techniques they use. Present it in your own words.
· Current techniques – make sure your review is current, and say so.

It may be worth mentioning (but not much more) how they got to where they are now, if that is interesting.
· Scalable – over what timescale? Days, months, years? Why? What are the consequences of not being scalable?
· Critically evaluate – this is the big one. This is your chance to make your report more than the sum of its parts – more than just an aggregation of research results.

How? One approach is to look through your research for trends, exceptions (instances that stand out from the usual), successes and failures, and then try to explain and amplify them in your own words.

Part 1b
First choose the system you are going to review. Just one! It has to be an actual system, in real use. Be prepared, and allow time, to abandon your first choice if, when you start to delve, you can't find enough detail to support the task.

Examples of systems to look at could include, but are not limited to, high-availability, scalable, clustered, virtualised, distributed or cloud servers. Remember that it is not just a description that is required but also a critical evaluation – your evaluation.

Part 1c
This part asks why businesses would bother to spend the time, money and share of mind on the techniques being covered here. You really need to try to reach your own conclusion.

You need to spend time thinking about the difference between, on the one hand, your own conclusions, backed up by research results, and on the other hand, any conclusions in the research material, which are, of course, not yours.

References
How VMware Virtualization Right-sizes IT Infrastructure to Reduce Power Consumption, VMware Inc., 2008.
Globally Distributed Content Delivery, John Dilley, Bruce Maggs, Jay Parikh, Harald Prokop, Ramesh Sitaraman and Bill Weihl, 2003.

Scalability: a system's capability to expand in order to handle increases in data transfer without compromising performance.
http://www.javaworld.com/javaworld/jw-10-2008/jw-10-load-balancing-1.html
http://web.mit.edu/newsoffice/2007/energy-computers-0614.html
http://www.readwriteweb.com/archives/ibm_launches_5_year_quantum_computer_project.php
http://somweb.utdallas.edu/top100Ranking/searchRanking.php?t=w
http://www.allaboutmarketresearch.com/ [accessed online 28 November 2010]
