comp.lang.ada
From: Olivier Henley <olivier.henley@gmail.com>
Subject: Re: Roundtrip latency problem using Gnoga, on Linux, when testing at localhost address
Date: Thu, 31 Mar 2016 10:02:40 -0700 (PDT)
Message-ID: <a2c83ba3-0fb8-44ea-8499-6d598354d38c@googlegroups.com>
In-Reply-To: <ndik6b$91e$1@gioia.aioe.org>

On Thursday, March 31, 2016 at 3:39:10 AM UTC-4, Dmitry A. Kazakov wrote:

> Still it could be indication of a problem or a bug. Usual suspects are:
> 
> 1. The measurement procedure.
> 
> 1.1. Artifacts of process switching.
> 1.2. Wrong clock used. System clocks may be very coarse. In many OSes 
> they are driven by the timer interrupts or from an inferior clock 
> source. Real-time clock at the processor's or the front bus' tact must 
> be used in round-trip measurements.

Debian 8 x64, using AdaCore GNAT 2015; the round trip is timed with Ada.Real_Time.
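
For what it is worth, here is a minimal sketch of how one can check the resolution of that clock on this box (plain Ada.Real_Time, nothing Gnoga-specific assumed):

with Ada.Real_Time; use Ada.Real_Time;
with Ada.Text_IO;   use Ada.Text_IO;

procedure Clock_Resolution is
begin
   --  Tick is the smallest Time_Span the implementation can measure;
   --  if it were coarse, short round trips could not be timed reliably.
   Put_Line ("Tick =" & Duration'Image (To_Duration (Tick)) & " s");
end Clock_Resolution;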
 
> 2. HTTP. It is a very poor protocol. Much depends on the 
> connection/session settings. E.g. if a new session is created for each 
> measurement it is to expect a quite poor performance. Creating and 
> connecting sockets is expensive.

I am using Gnoga, so once the handshake is done I expect everything to go through the WebSocket rather than plain HTTP. I do not suspect my use of the framework, nor the framework itself, of creating a new connection every time I communicate with a particular client.

> 3. Socket settings
> 
> 3.1. TCP_NO_DELAY and sending packets in full is essential to short 
> latencies. The frame coalescing algorithm timeout depends on the system 
> settings. Usually it is close to 100ms. Which means that you could have 
> sporadic latency spikes depending on the state of the TCP buffer when 
> the traffic is low.

I need to investigate this one.
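
If I can get at the underlying socket, my understanding is that the option would be set roughly like this with GNAT.Sockets (just a sketch; I have not checked whether Simple Components / Gnoga already sets it, or exposes the socket at all):

with GNAT.Sockets; use GNAT.Sockets;

procedure Disable_Nagle (Sock : Socket_Type) is
begin
   --  Disable Nagle's coalescing so small frames go out immediately
   Set_Socket_Option
     (Socket => Sock,
      Level  => IP_Protocol_For_TCP_Level,
      Option => (Name => No_Delay, Enabled => True));
end Disable_Nagle;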


> 4. OS, TCP stack system settings
> 
> 4.1. Poor Ethernet controller, especially bad are integrated ones. The 
> worst are ones connected over USB/COM. Many boards have ones sitting on 
> top of an UART.

ping 127.0.0.1 gives me about 0.02 ms, so with a ~80 ms discrepancy I would not suspect my hardware.

> Then, it could be a software bug. The Simple Components server gets a 
> notification when there is free space in the output buffer, ready to 
> send more data out. When the traffic is low, the output buffer is never 
> full and this means that, in effect, the buffer-ready signal is always 
> set. Thus dealing with the socket output becomes in effect busy waiting. 
> That in turn leads 100% load of the CPU's core running the server. In 
> order to prevent that the server stops waiting for the buffer-ready 
> signal when there was nothing to send. When a new portion of output data 
> appears the socket waiting is resumed. There is a certain timeout to 
> look after the socket regardless. If there is a bug somewhere that 
> prevents the resuming, it may expose itself as sporadically increased 
> latencies during sending data out. The latency must then be close to the 
> timeout value. [This is similar to the effect of TCP_NO_DELAY not set]

Good. In a sense that reassures me, because I am effectively sending almost nothing, a couple of bytes every half second. I should stress test and see whether the latency stays the same or perhaps even goes down.
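
For that stress test I am thinking of something along these lines, where Send_Ping and Await_Pong are hypothetical placeholders for whatever my Gnoga code actually does (not Gnoga API):

with Ada.Real_Time; use Ada.Real_Time;
with Ada.Text_IO;   use Ada.Text_IO;

procedure Stress_Roundtrip is

   --  Hypothetical stubs: replace with the real calls that push a few
   --  bytes to the client and block until the echo comes back.
   procedure Send_Ping  is null;
   procedure Await_Pong is null;

   Iterations  : constant := 1_000;
   Start, Stop : Time;
   Worst       : Time_Span := Time_Span_Zero;
begin
   for I in 1 .. Iterations loop
      Start := Clock;
      Send_Ping;
      Await_Pong;
      Stop := Clock;
      if Stop - Start > Worst then
         Worst := Stop - Start;
      end if;
   end loop;
   Put_Line ("Worst of" & Integer'Image (Iterations) & " round trips:"
             & Duration'Image (To_Duration (Worst)) & " s");
end Stress_Roundtrip;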

OK, but would that explanation hold if I told you that with Firefox I get around 120 ms instead?

What is my best plan to investigate further? Profiling my executable, analysing the network traffic, etc.?

Thx 
