HTTP Purposes in Networking - Assignment Example

Summary
The assignment "HTTP Purposes in Networking" focuses on a critical analysis of the purposes of HTTP in networking. HTTP, along with other application layer protocols, has been designed within the framework of the internet protocol suite…

HTTP is an application layer protocol. Explain, and give relevant details in order to understand this protocol.

HTTP, along with other application layer protocols, has been designed within the framework of the internet protocol suite. Application layer protocols such as HTTP rely on the underlying layers of this suite in order to execute specifically designed functions. In the case of HTTP, the protocol relies on the internet protocol suite to display web pages, to connect information through hyperlinks and to relay hypermedia in various forms. As a protocol, HTTP is defined on the assumption that a reliable transport layer protocol underlies it. Typically, the Transmission Control Protocol (TCP) is utilised for HTTP transport purposes (W3, 2004a).

What is meant by an HTTP request and response, giving an example of each? How are these transmitted? By which underlying TCP/IP protocol?

Essentially, HTTP is a request and response protocol. The request designates the instructions sent by the client to the server. It names a request method and a Uniform Resource Identifier (URI), and typically also carries the protocol version, request modifiers, information about the client and any body content. The server reacts to the request by producing a response. The response begins with a status line which carries the protocol version of the message along with an indication of success or failure. This is followed by headers that provide information about the server and any metadata, and by any attached body content. A basic example of the HTTP request-response exchange is the interaction between web browsers and websites. A web browser acts as the client, while the application serving a website functions as the server. The client, in this case a web browser, sends an HTTP request to the server. In turn, the server provides the client with the required resources such as HTML files and other hypermedia. The response from the server's end contains the required content as well as the completion status of the client's request. A number of different internet protocol suite protocols are available for carrying HTTP traffic. However, in most cases the Transmission Control Protocol (TCP) is used in preference to alternatives such as the User Datagram Protocol (UDP) because of reliability concerns.
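As an illustrative sketch of this exchange, the following Java snippet opens the underlying TCP connection itself, writes a minimal HTTP/1.1 request by hand and prints the status line, headers and body of the response. The host example.com, port 80 and the class name are assumptions made purely for the example.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class RawHttpGet {
    public static void main(String[] args) throws Exception {
        // Open the underlying TCP connection that HTTP relies on (port 80 assumed).
        try (Socket socket = new Socket("example.com", 80);
             PrintWriter out = new PrintWriter(socket.getOutputStream());
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(socket.getInputStream(), StandardCharsets.US_ASCII))) {

            // The request: method, request-URI and protocol version, followed by headers.
            out.print("GET / HTTP/1.1\r\n");
            out.print("Host: example.com\r\n");
            out.print("Connection: close\r\n");
            out.print("\r\n");
            out.flush();

            // The response: status line (version + status code), headers, then the body.
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}
```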
HTTP connections are of two primary sorts. One of these involves the idea of pipelining requests. Explain this concept and describe why and how HTTP can benefit from one of these connection types relative to the other.

HTTP relies in large part on two primary connection types, namely non-persistent and persistent connections, with HTTP pipelining building on the latter. Older versions of HTTP, including 0.9 and 1.0, used a connection for a single request-response pair, after which the connection was closed. This meant that the TCP connection had to be renegotiated for every single request-response pair. Persistent connections were introduced in HTTP 1.1 to keep the connection alive after a request-response action had been executed. The TCP connection therefore did not have to be re-established for every exchange, and connection speeds improved since the TCP setup time was not incurred each time. However, there was still one serious shortcoming: the client had to wait for a response before it could initiate a new request, so request-response actions could only be carried out one at a time. HTTP pipelining was introduced in version 1.1 to allow the client to send multiple requests without waiting for the corresponding responses. Pipelining reduces lag time since the client can send several requests at once and then wait for their responses (W3, 2004b). This technique can markedly improve effective connection speed since HTML pages and other content tend to load faster. With pipelining, a single TCP packet can carry multiple HTTP requests, so fewer TCP packets need to be sent on the network to perform the same tasks, which reduces overall network load and congestion.

What is meant by a cache? Web cache? What is the relevance of the Principle of Locality in this context? There are two sorts of web cache hit. Describe and explain these.

In computing, the term cache refers to a store of data that is used time and again. Storing the data in a cache allows speedier retrieval since the data does not have to be fetched from its original source every time it is required. In a similar manner, a web cache is a temporary store of data downloaded from the internet. A typical web cache contains web pages as well as other forms of hypermedia. The intention in utilising a web cache is to reduce overall bandwidth requirements, to reduce the total server load and to reduce the lag between sending requests and receiving responses. Data stored in a web cache can be retrieved far more quickly since it is held on the local machine that is sending the requests. In contrast, data that is not stored in the web cache has to be retrieved over the internet, which takes more time and places greater load on the server (Huston, 2009). According to the principle of locality, an internet user will repeatedly retrieve the same information from the same source, so information retrieved once is likely to be required again. Under this principle, it makes sense to store such frequently used data in a web cache, since doing so reduces retrieval time, network load and server load at the same time. Therefore, utilising a web cache under the principle of locality yields significant benefits for the user and for the overall network.

A web cache hit occurs when a web browser requests a page and finds it in the local web cache. The web browser first looks into the web cache before going online to retrieve the information. The unique identifier for each resource on the internet and in web caches is the Uniform Resource Locator (URL). The web browser matches the typed-in URL against those available in the cache; if a match is found, it is considered a web cache hit. However, two different situations can arise in the case of a web cache hit: the page held in the cache may be fresh or it may have expired, as determined by comparing its expiry information against the time and date at which it was cached. If the web page is still fresh (according to predefined rules), it is retrieved from the cache and displayed in the web browser. If the web page has expired, it is refreshed from the corresponding server and marked as fresh before it is displayed in the web browser (Huston, 2009).
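The cache-hit logic described above can be sketched as a small in-memory store keyed by URL, where an entry counts as a fresh hit only while it is younger than an assumed lifetime and an expired hit must be refreshed from the origin server. The class below and its ten-minute lifetime are illustrative assumptions rather than details taken from the cited source.

```java
import java.time.Duration;
import java.time.Instant;
import java.util.HashMap;
import java.util.Map;

// A toy in-memory web cache keyed by URL, with a simple freshness check.
public class ToyWebCache {

    private static class Entry {
        final String body;        // cached page content
        final Instant fetchedAt;  // when the page was stored

        Entry(String body, Instant fetchedAt) {
            this.body = body;
            this.fetchedAt = fetchedAt;
        }
    }

    private final Map<String, Entry> store = new HashMap<>();
    private final Duration maxAge = Duration.ofMinutes(10); // assumed freshness lifetime

    // Returns the cached body on a fresh hit, or null on a miss or an expired hit.
    public String lookup(String url) {
        Entry entry = store.get(url);
        if (entry == null) {
            return null;                      // cache miss: fetch from the origin server
        }
        boolean fresh = Duration.between(entry.fetchedAt, Instant.now()).compareTo(maxAge) < 0;
        return fresh ? entry.body : null;     // expired hit: refresh before use
    }

    public void storePage(String url, String body) {
        store.put(url, new Entry(body, Instant.now()));
    }

    public static void main(String[] args) {
        ToyWebCache cache = new ToyWebCache();
        cache.storePage("http://example.com/", "<html>cached copy</html>");
        System.out.println(cache.lookup("http://example.com/"));        // fresh hit
        System.out.println(cache.lookup("http://example.com/missing")); // miss -> null
    }
}
```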
If UDP is unreliable, why is it ever used?

UDP is a core protocol within the internet protocol suite even though it is considered unreliable for general internet data transmission. UDP was designed for simple, one-way data transfer. It is still used because it is transaction oriented, which suits retrieving small pieces of data such as domain names or network time. Moreover, being a datagram protocol, it provides datagrams on which other protocols can be built. UDP is efficient for bootstrapping purposes since it does not require a complete protocol stack to support it. The stateless nature of UDP also makes it suitable for applications that serve a very large number of clients. UDP is likewise used for real-time applications since, by design, it has no retransmission delays. Its one-way, connectionless nature makes it well suited to broadcasting information over a network (Kurose & Ross, 2010).

How can a UDP server be written in Java? You should be able to show an understanding of all the critical lines needed to send a given UDP packet.

A UDP server is intended to create and exchange datagrams with online applications. A datagram contains two essential things: the address of transmission and the transmitted content. To create a UDP server in Java, the first step is to create a datagram socket at a specific port. Next, the byte size of the buffers for received and sent data should be defined (typically 1,024 bytes). This is followed by creating buffer space for the incoming datagram, after which the datagram is received on the server end. The IP address and port number of the client are then taken from the received datagram so that the transmission address of the reply can be specified. Once the address is settled, the content is inserted into the reply datagram to complete it. The completed datagram is written out to the socket for transmission, after which the server loops back to handle the next datagram (Rose India, 2007).
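A minimal sketch of such a UDP server, following the steps above, is shown below: it opens a DatagramSocket on a port, receives a datagram into a 1,024-byte buffer, reads the client's IP address and port from the received packet, builds a reply datagram and writes it back to the socket before looping. The port number 9876 and the echo-style reply content are assumptions made for the example.

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

public class SimpleUdpServer {
    public static void main(String[] args) throws Exception {
        // Create a datagram socket bound to a specific port (9876 is assumed here).
        try (DatagramSocket socket = new DatagramSocket(9876)) {
            byte[] receiveBuffer = new byte[1024];   // buffer size for received data

            while (true) {
                // Create buffer space for the incoming datagram and receive it.
                DatagramPacket request = new DatagramPacket(receiveBuffer, receiveBuffer.length);
                socket.receive(request);

                // Take the client's IP address and port so the reply can be addressed.
                InetAddress clientAddress = request.getAddress();
                int clientPort = request.getPort();

                // Insert the content into the reply datagram (here, the received bytes echoed back).
                byte[] sendBuffer = new byte[request.getLength()];
                System.arraycopy(request.getData(), 0, sendBuffer, 0, request.getLength());
                DatagramPacket reply =
                        new DatagramPacket(sendBuffer, sendBuffer.length, clientAddress, clientPort);

                // Write the datagram out to the socket, then loop to serve the next datagram.
                socket.send(reply);
            }
        }
    }
}
```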
Explain the essential ideas underlying Go Back N and Selective Repeat. How can a lost ACK be dealt with by a Stop and Wait protocol?

Both Go Back N and Selective Repeat are instances of the automatic repeat request (ARQ) approach. In both methods, the sending process keeps transmitting the frames allowed by a window size without first waiting for acknowledgement packets (ACKs) from the receiver's end. In the case of Go Back N, the transmit window size is N while the receive window size is fixed at 1. In the case of Selective Repeat, both the transmit window size and the receive window size are greater than 1. Moreover, in Selective Repeat the receiving process continues to accept and acknowledge correctly received frames even if an earlier frame was in error (Tanenbaum, 2003). The Stop and Wait variant of ARQ sends a frame to the receiving device and waits until it receives a positive acknowledgement before the next frame is sent. If an ACK is lost, the Stop and Wait sender receives no positive acknowledgement and, after a timeout, retransmits the same frame until a positive acknowledgement does arrive. In this way a lost ACK is compensated for by retransmission before any new frames are sent (Tanenbaum, 2003).

Explain the use of the Sliding Window Protocol. What is the point of the latter? How does it operate?

The Sliding Window Protocol is an essential component of packet data transmission protocols that require consistent delivery without packet loss. The portions of a transmission are assigned sequence numbers that the receiver uses to put packets back in order; without some bound, however, there would be no upper limit on the sequence numbers required. The Sliding Window Protocol bounds the number of packets that can be outstanding at any one time, so an unlimited stream of packets can be transmitted using a fixed-size sequence number space. Under the Sliding Window Protocol, the transmitter keeps track of the unacknowledged packets it has sent to the receiver; the set of unacknowledged packets forms the window. With every acknowledgement, the receiver tells the transmitter the current window boundary. Each time the transmitter receives an ACK, the window slides forward by one packet so that a new packet can be transmitted; similarly, on the receiving end the window slides forward as packets are received in order and acknowledged (Peterson & Davie, 2000).

Explain what is meant by a half-close connection in TCP. Given this 4-way handshake, how are the flags fields set, and when are ACKs sent between server and client?

TCP connections have the capability to close one end of the connection while the other end remains active. Typically, one end of the TCP connection terminates its output while continuing to receive data from the other end. To half-close a TCP connection, the client sends a segment with the FIN flag set; from that point the client sends no more data but can still receive data from the other end. As soon as the server receives the FIN, it enters a passive close state and responds with an ACK, followed later by its own FIN. Finally, the client acknowledges the server's FIN with an ACK, and once the server receives this ACK the connection is closed completely (Microsoft, 2012).
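In Java, this half-close can be requested with Socket.shutdownOutput(), which causes the FIN to be sent for the client's sending direction while the socket can still receive data. The sketch below is illustrative only; the host, port and request text are placeholders.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class HalfCloseClient {
    public static void main(String[] args) throws Exception {
        // Host and port are placeholders for some TCP service that replies after reading a request.
        try (Socket socket = new Socket("localhost", 7000)) {
            OutputStream out = socket.getOutputStream();
            out.write("request data".getBytes(StandardCharsets.UTF_8));
            out.flush();

            // Half-close: send a FIN on our sending side. No more data can be written,
            // but the connection stays open in the other direction.
            socket.shutdownOutput();

            // We can still receive data until the server closes its side with its own FIN.
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(socket.getInputStream(), StandardCharsets.UTF_8));
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}
```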
TCP Reno and TCP Tahoe interpret the arrival of three duplicate ACKs differently. Why is this so?

TCP Tahoe detects a packet loss only after a complete timeout and, in practice, the lag is even longer because of coarse-grained timers. In contrast, Reno uses the arrival of three duplicate ACKs as an early signal of loss, so it detects a lost packet sooner than Tahoe and does not empty the pipeline when a single packet is lost. These inherent architectural differences between Tahoe and Reno are why their interpretations of three duplicate ACKs differ (Stoica, 2012).

TCP uses a congestion handling mechanism (AIMD) that results in a saw-tooth pattern in the size of the sender's congestion window. Explain the latter term and go on to explain why there is this variation in the size of the congestion window.

TCP uses Additive Increase / Multiplicative Decrease (AIMD) to handle network congestion in a stable manner. The result is a saw-tooth pattern in which the congestion window grows, drops sharply, and then grows again; this cycle repeats and traces out the saw-tooth shape. Under AIMD, the TCP source for each connection uses the variable "cwnd" to reflect the congestion level. Additive increase is carried out in response to perceived available capacity: "cwnd" is increased by a small amount at a time, so it grows linearly. When a timeout occurs, "cwnd" is halved, which sharply reduces the sending rate. This is followed by another phase of additive increase, then another multiplicative decrease, and so on; the resulting congestion window size accordingly assumes a saw-tooth shape (Peterson & Davie, 2000).

References

Huston, G., 2009. Web Caching. The Internet Protocol Journal, 2(3).

Kurose, J.F. & Ross, K.W., 2010. Computer Networking: A Top-Down Approach. 5th ed. Boston, MA: Pearson Education.

Microsoft, 2012. TCP Connection States and Netstat Output. [Online] Available at: http://support.microsoft.com/kb/137984 [Accessed 29 August 2012].

Peterson, L.L. & Davie, B.S., 2000. Computer Networks: A Systems Approach. Morgan Kaufmann.

Rose India, 2007. UDP Server in Java. [Online] Available at: http://www.roseindia.net/java/example/java/net/udp/udp-server.shtml [Accessed 28 August 2012].

Stoica, I., 2012. A comparative analysis of TCP Tahoe, Reno, New-Reno, SACK and Vegas. [Online] Available at: http://inst.eecs.berkeley.edu/~ee122/fa05/projects/Project2/SACKRENEVEGAS.pdf [Accessed 29 August 2012].

Tanenbaum, A.S., 2003. Computer Networks. Upper Saddle River, NJ: Prentice Hall.

W3, 2004a. HTTP/1.1: Introduction. [Online] Available at: http://www.w3.org/Protocols/rfc2616/rfc2616-sec1.html#sec1.4 [Accessed 28 August 2012].

W3, 2004b. HTTP/1.1: Connections. [Online] Available at: http://www.w3.org/Protocols/rfc2616/rfc2616-sec8.html [Accessed 28 August 2012].