..:: PCSX2 Forums ::..

Anyone familiar with computer networking, internet, and stuff?
I am stuck on a homework problem for computer networks. The question explains that the Round Trip Time (RTT) to transmit a packet over the internet is the time it takes for a server to transmit a packet of data to the client, plus the time for the client to send an ack back to the server saying the packet was correctly received. It includes all queuing delays, propagation delays, and the like involved with transmitting the bits, or at least that is how I understand it. The round trip time given in the problem is 2 msec, which to me seems awfully short.

The problem is simplified by assuming both are running a protocol where a new packet will not be transmitted until the ack for the previous packet is received.

So one question asks how long it will take to transmit a 1 Mbit file if it is divided up into 10 equal packets of 100 kbits each. I just assume it will be 10 * the round trip time.

Then the next question asks how long it will take if the file is instead divided up into 2 equal packets of 500 kbits each.

If I try doing it by multiplying the number of packets by the round trip time, then it comes out much faster to transfer the file as 2 packets of 500 kbits than as 10 packets of 100 kbits, which doesn't make sense to me. I always thought it was better to transfer files in smaller packets than larger ones: less chance of buffer overflows at routers, and if a packet is lost you don't have to retransmit a large amount of data all over again.


Of course, the problem assumes the client and server are the only 2 systems in the world connected to the internet, so there is no other traffic and no chance of packet loss or queuing delays or anything.

Pain in the ass, but it is actually a pretty fun class.
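For what it's worth, here is a minimal sketch of that interpretation (just packet count times the 2 ms RTT, assuming the transmission and ack times are all folded into the RTT as the problem states; the function name is made up for illustration):

Code:
RTT_MS = 2  # round trip time given in the problem, in milliseconds

def transfer_time_ms(file_kbits, packet_kbits, rtt_ms=RTT_MS):
    # Stop-and-wait: each packet must be acked before the next is sent,
    # so the total time is simply (number of packets) * RTT.
    num_packets = file_kbits // packet_kbits
    return num_packets * rtt_ms

print(transfer_time_ms(1000, 100))  # 10 packets of 100 kbit -> 20 ms
print(transfer_time_ms(1000, 500))  # 2 packets of 500 kbit  -> 4 ms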
I think you're just overthinking the question... By the sound of it, they are just asking simple "what if" questions and not really drawing on real-world experience. In which case, you'd be correct on all the points you've made so far.

Unless there is more to the problem than you're saying... but from what I understand of what you're saying, you should be right in your assumptions.
(09-24-2009 05:12 AM)Dadaluma83 Wrote: [ -> ]I am stuck on a homework problem for computer networks. [...]

Well, it depends on the size of the packet that is transmitted. For instance, if the size is 500 b per packet trip, you would just convert the 100 kbit packets to bits and then divide by your 2 ms trip time, and that would give you roughly how long it would take. Assuming they are the only computers transferring, it shouldn't make a difference because it is a direct connection. But saying that the 2 500-kbit packets are faster makes some sense, because the 10 100-kbit packets have to go through multiple rounds: each packet send has to close before the next one starts, so there is a very small delay between each send/receive.

If that makes any sense to you... just using simple math and what I remember from my networking class, that should be the answer, but I'm not giving any guarantees.
Pretty much what Jareth said, but I'll make it a bit clearer =P
It's not exactly the file size or transfer rate that's being questioned here; the point is that 10 packets means more "I'm going to send this to you" / "OK, go ahead" exchanges. So if there are 10 packets, there will be 10 of those conversations, while 2 packets will only have 2. Basically, 10 packets = 20 ms of conversation, while 2 packets = 4 ms of conversation.

Actually, I think I just repeated Jareth ;>_>
Larfin you did repeat me, but it made more sense the way you said it.
Well, at least to me.