ICA session freeze over TCP transport streaming; ADM shows high L7 server-side latency; only solution is to reconnect

Article ID: CTX318330


Description

ADM shows high latency (several seconds) on the L7 server side (between the ADC SNIP and the AWS external LB VIP).
The TCP RTT between the ADC SNIP and AWS is healthy (~50 ms).

End users are facing latency during desktop/app streaming over TCP transport.

Note: When ICA traffic is streamed over UDP (EDT/HDX over DTLS), there are no issues; sessions maintain a stable flow without freezes. After switching from TCP to UDP, no more session freezes were observed. However, the customer wants to keep TCP transport streaming available as a backup.

ICA sessions tend to freeze after a few minutes or hours. End users lose control of the session, the mouse and keyboard stop responding, and the only solution is to close the session and reconnect.

[Image: rtt backend.jpg (RTT between ADC SNIP and AWS backend)]

However, ADM shows Layer 7 server-side latency as high as ~2 seconds.

[Image: layer 7 server side latency.jpg (ADM Layer 7 server-side latency)]

CDF traces collected from the VDA show the TCP outbufs on the server filling up while waiting for TCP ACKs from the Workspace app client. This indicates the VDA lost communication with the Workspace app client; a way to corroborate this from a packet capture is shown after the trace excerpt below.

11:13:46:60813,3536,3532,CtxGfx.exe,1,wdica,buffer.cpp,2339,GetOutBuf,1,TC_OUTBUF,"GetOutBuf: No more available outbufs.",""
11:16:05:66069,3536,3532,CtxGfx.exe,1,wdica,buffer.cpp,4328,IcaBufferAppendVirtualData,1,TC_OUT,"IcaBufferAppendVirtualData:Timed out while waiting for OutBufs to be freed",""
11:16:05:66069,3536,3532,CtxGfx.exe,1,wdica,buffer.cpp,2339,GetOutBuf,1,TC_OUTBUF,"GetOutBuf: No more available outbufs.",""
11:16:05:66069,3536,3532,CtxGfx.exe,1,wdica,buffer.cpp,3733,SendSomeData,1,TC_OUT,"SendSomeData: No outbufs are available to send data",""
11:16:05:66069,3536,3532,CtxGfx.exe,1,wdica,buffer.cpp,3799,SendSomeData,9,Information,"TOTAL WIRE_TRAFFIC SENT SO FAR = 20756229 BYTES",""
11:16:05:66670,3536,3532,CtxGfx.exe,1,wdica,buffer.cpp,3799,SendSomeData,9,Information,"TOTAL WIRE_TRAFFIC SENT SO FAR = 20756229 BYTES",""
(The VDA could not send any data; the total sent is still 20756229 bytes.)
11:18:05:45380,3536,3532,CtxGfx.exe,1,wdica,buffer.cpp,3799,SendSomeData,9,Information,"TOTAL WIRE_TRAFFIC SENT SO FAR = 20795046 BYTES",""
(Only 38817 bytes of data were sent in 2 minutes.)
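
To corroborate the stall from a packet capture taken on the VDA or the ADC SNIP, retransmissions and zero-window events on the ICA connection can be listed with tshark. This is a minimal sketch; the capture file name vda_trace.pcap is a placeholder:

> tshark -r vda_trace.pcap -Y "tcp.analysis.retransmission || tcp.analysis.zero_window" -T fields -e frame.time -e ip.src -e ip.dst

A burst of server-side retransmissions with no matching ACKs from the client during the freeze window matches the outbuf exhaustion seen in the CDF trace.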

 

Resolution

Adjust the MTU on the AWS side to 1500 bytes, matching the ADC SNIP MTU of 1500 bytes.
Command used on the AWS Windows instance (netsh, run from PowerShell or an elevated command prompt) ::
> netsh interface ipv4 set subinterface "Ethernet" mtu=1500 store=persistent
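
To verify the change, the interface MTU can be listed and a non-fragmenting ping of a full 1500-byte frame can be sent toward the ADC SNIP (the address below is a placeholder). A 1472-byte payload plus 28 bytes of ICMP/IP headers equals exactly 1500 bytes, so the ping succeeds only if the 1500-byte path is clean:

> netsh interface ipv4 show subinterfaces
> ping -f -l 1472 <ADC-SNIP-IP>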

Problem Cause

MTU size mismatch between the AWS inbound LB VIP (jumbo frames, 9000 bytes) and the ADC SNIP (standard MTU, 1500 bytes).

ADC SNIP MSS: 1460 bytes
AWS LB VIP MSS: 8460 bytes
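
Each side advertises an MSS derived from its own interface MTU minus 40 bytes of IPv4 and TCP headers:

ADC SNIP: 1500 - 40 = 1460 bytes
AWS LB VIP: 8460 bytes (8460 + 40 = 8500, the effective jumbo MTU in use)

Because the MSS is only advertised, not reduced along the path, the AWS side can emit segments far larger than the 1500-byte path MTU toward the ADC. When such oversized packets are dropped instead of fragmented, the TCP connection stalls in a path-MTU black hole, which matches the session freezes and the outbuf exhaustion above.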

ADC SNIP offered TCP options during the TCP 3-way handshake ::
===============================
Options: (12 bytes), Maximum segment size, No-Operation (NOP), Window scale, No-Operation (NOP), No-Operation (NOP), SACK permitted
    TCP Option - Maximum segment size: 1460 bytes
    TCP Option - No-Operation (NOP)
    TCP Option - Window scale: 8 (multiply by 256)
    TCP Option - No-Operation (NOP)
    TCP Option - No-Operation (NOP)
    TCP Option - SACK permitted

AWS response TCP options during the TCP 3-way handshake ::
================================
Options: (12 bytes), Maximum segment size, No-Operation (NOP), Window scale, No-Operation (NOP), No-Operation (NOP), SACK permitted
    TCP Option - Maximum segment size: 8460 bytes
    TCP Option - No-Operation (NOP)
    TCP Option - Window scale: 8 (multiply by 256)
    TCP Option - No-Operation (NOP)
    TCP Option - No-Operation (NOP)
    TCP Option - SACK permitted
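
To spot this kind of MSS mismatch directly in a capture, the MSS value of each SYN and SYN-ACK can be extracted with tshark; a minimal sketch, with handshake.pcap as a placeholder file name:

> tshark -r handshake.pcap -Y "tcp.flags.syn == 1" -T fields -e ip.src -e ip.dst -e tcp.options.mss_val

Any SYN/SYN-ACK pair showing 1460 on one side and 8460 on the other reproduces the mismatch documented above.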