dk_brad has contributed to 12 posts out of 21185 total posts
(0.06%) in 3,639 days (0.00 posts per day).
20 Most recent posts:
Hi Tim,
Fields 10, 11 and 12 of regional messages are indicated to be type String on the following page:
http://www.iqfeed.net/dev/api/docs/RegionalMessageFormat.cfm
As far as I can tell, they are always integer fields. Is the documentation wrong? Or could these potentially not contain price/mkt index codes?
For reference, here's a log of the same code running when the issue doesn't occur.
Example screenshot showing partial (4 connections) hang.
I have come across a critical problem with the IQConnect software when establishing multiple connections in quick succession.
Steps to reproduce:
1 - launch client, login
2 - for i = 1 to 15 { create new socket on port 9100; send set protocol command; sleep x }
3 - wait
4 - close all sockets
For values of x < ~15ms, the threads for each socket in the IQConnect process seem to enter some sort of race condition and ramp up to 100% CPU (core) usage. This typically happens to all 15 connections, but is sometimes limited to fewer (e.g. 3 IQConnect threads running at 100% on 3 cores).
During failures:
- IQConnect threads are running at 100% on all 8 cores of my machine (although sometimes fewer, as mentioned above).
- There is no response on the socket to any commands sent, including the initial set protocol.
- There are no entries in the IQConnectLog after "LOOKUP SOCKET ACCEPTED i - " for each connection.
Log files are attached.
Configuration:
This occurs on versions 5.1.1.3 and 5.2.1.0 (I didn't test others). I am connecting from Java using the standard java.net / java.io libraries.
OS tested:
- Debian 8 64-bit (kernel 4.5.0 x86_64)
- Debian 7 32-bit (kernel 4.5.0 x86)
JVM tested:
- 1.7.0_80-b15 (32-bit)
- 1.8.0_77 (32- and 64-bit)
Wine tested:
- 1.6.2
- 1.9.6
Interestingly, I couldn't reproduce the problem using Wine 1.4.1, but I suspect that's because 1.4.1 is slower to create and connect the sockets, rather than the newer Wine versions being the problem. This issue only became evident because I am staging newer Wine versions.
I couldn't reproduce the issue with the same code on Windows 7.
I would guess that the most likely explanation is a subtle concurrency bug in the IQConnect.exe connection code that is only evident when multiple connections are made almost simultaneously on a machine with many cores, but it is surprising that it hasn't been seen before.
I can provide test code (Java) if required.
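In the meantime, the reproduction steps above look roughly like this in Java. This is a simplified sketch, not my exact test harness: the class and method names are illustrative, and the set protocol command string ("S,SET PROTOCOL,<version>") should be checked against the current API docs.

```java
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.Socket;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

public class RapidConnectRepro {
    // Builds the set-protocol command sent on each new socket
    // (format assumed from the IQFeed docs; verify against current version).
    public static String protocolCommand(String version) {
        return "S,SET PROTOCOL," + version + "\r\n";
    }

    // Step 2: open `count` lookup sockets in quick succession,
    // sending the set-protocol command on each, sleeping delayMillis between.
    public static List<Socket> connectRapidly(String host, int port, int count,
                                              long delayMillis)
            throws IOException, InterruptedException {
        List<Socket> sockets = new ArrayList<>();
        for (int i = 0; i < count; i++) {
            Socket s = new Socket();
            s.connect(new InetSocketAddress(host, port), 5000);
            OutputStream out = s.getOutputStream();
            out.write(protocolCommand("5.2").getBytes(StandardCharsets.US_ASCII));
            out.flush();
            sockets.add(s);
            Thread.sleep(delayMillis); // x < ~15ms is what triggers the hang here
        }
        return sockets;
    }

    public static void main(String[] args) throws Exception {
        if (args.length < 1) {
            System.out.println("usage: RapidConnectRepro <host> [delayMillis]");
            return;
        }
        long delay = args.length > 1 ? Long.parseLong(args[1]) : 5;
        // Steps 1-2 assume IQConnect is already launched and logged in.
        List<Socket> sockets = connectRapidly(args[0], 9100, 15, delay);
        Thread.sleep(10_000);          // step 3: wait
        for (Socket s : sockets) {     // step 4: close all sockets
            s.close();
        }
    }
}
```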
No problem
Please also note that while you have removed the ordinal column, the descriptions still reference fields by number, which is likely to be confusing for developers who haven't seen the 5.1 docs.
Hi,
I noticed "Open Interest" is missing from the 5.2 update/summary message docs at:
http://www.iqfeed.net/dev/api/docs/Level1UpdateSummaryMessage.cfm
but I didn't see it mentioned in the upgrade guide. Is this field part of the 5.2 protocol?
Great, thanks.
I'd also be very keen to see possible error returns added to the documentation. It's a bit of a trial-and-error process at the moment, where new undocumented error returns are discovered and added to the list of possible received messages (e.g. this one, "valid protocol already set", "invalid # of params", etc.).
I still need some additional clarification here.
As I understand it, each IQConnect connection buffers requests locally and only sends a single request to the server at a time (per connection). Since each connection processes requests serially, is it safe to assume that there will only ever be one outstanding request per connection under this definition? (I am assuming the restriction is on the server side, not the client side.)
Looking at my prior example, sending 20 requests each to 5 connections doesn't trigger an error. I presume that IQConnect is only effectively processing 5 requests simultaneously.
If this is the case, then limiting the code to a maximum of 15 connections processing history requests at any one time would seem to be the solution.
EDIT: This post doesn't make much sense after your edit. I see you've removed the one-request-per-connection text, so I'm guessing that it's just 15 connections, and IQConnect will take care of buffering multiple requests per connection.
Edited by dk_brad on May 16, 2014 at 08:20 AM
Edited by dk_brad on May 16, 2014 at 08:21 AM
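If the 15-connection ceiling is indeed the rule, a counting semaphore on the client side enforces it neatly. A minimal sketch under that assumption (illustrative names; the network round trip is simulated with a sleep, and the limit of 15 is taken from this thread, not from any documentation):

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.atomic.AtomicInteger;

public class HistoryRequestLimiter {
    static final int MAX_CONCURRENT = 15;  // assumed server-side limit
    static final Semaphore permits = new Semaphore(MAX_CONCURRENT);
    static final AtomicInteger inFlight = new AtomicInteger();
    static final AtomicInteger peak = new AtomicInteger();

    // Stand-in for sending one history request and waiting for its response.
    // A permit must be held for the full request/response lifetime.
    static void sendHistoryRequest() {
        permits.acquireUninterruptibly();
        try {
            int now = inFlight.incrementAndGet();
            peak.accumulateAndGet(now, Math::max);   // track peak concurrency
            try { Thread.sleep(5); } catch (InterruptedException ignored) { }
        } finally {
            inFlight.decrementAndGet();
            permits.release();
        }
    }

    // Fires off totalRequests from worker threads; returns the peak
    // number of requests that were ever in flight simultaneously.
    public static int run(int totalRequests) {
        Thread[] workers = new Thread[totalRequests];
        for (int i = 0; i < totalRequests; i++) {
            workers[i] = new Thread(HistoryRequestLimiter::sendHistoryRequest);
            workers[i].start();
        }
        for (Thread w : workers) {
            try { w.join(); } catch (InterruptedException ignored) { }
        }
        return peak.get();
    }

    public static void main(String[] args) {
        System.out.println("peak in-flight requests: " + run(100));
    }
}
```

With this in place, callers can queue as many history requests as they like; the semaphore guarantees that no more than 15 are ever outstanding at once.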
Hi Tim,
That doesn't concur with the results I was seeing. Sending 100 requests over 20 connections in round-robin fashion (i.e. 5 requests per connection) was triggering this error response.
Conversely, sending the same 100 requests over 5 connections (i.e. 20 requests per connection) does not trigger the error.
In both cases, the requests are all sent upfront before the first results come in.
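To be concrete about the distribution scheme being compared, here is the round-robin bookkeeping described above as a sketch (pure accounting, no sockets; class and method names are illustrative):

```java
import java.util.Arrays;

public class RoundRobinDistribution {
    // Deals totalRequests out round-robin over nConnections and
    // returns how many requests each connection ends up with.
    public static int[] distribute(int totalRequests, int nConnections) {
        int[] perConnection = new int[nConnections];
        for (int i = 0; i < totalRequests; i++) {
            perConnection[i % nConnections]++;
        }
        return perConnection;
    }

    public static void main(String[] args) {
        // 100 requests over 20 connections: 5 per connection (errors seen)
        System.out.println(Arrays.toString(distribute(100, 20)));
        // 100 requests over 5 connections: 20 per connection (no errors)
        System.out.println(Arrays.toString(distribute(100, 5)));
    }
}
```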
I am unable to check again at the moment as the feed is currently in use, but I shall try to replicate over the weekend and report back. FWIW, I have a log of the 5-connection test, so I am certain of that result.
Cheers.
Edited by dk_brad on May 16, 2014 at 07:38 AM
Given the latency of requesting historical data on a per-port basis, it is necessary to spread requests over a number of ports and aggregate the results as highlighted in the post:
http://forums.iqfeed.net/index.cfm?page=topic&topicID=3193
When doing so, I am encountering an error response of "Too many simultaneous history requests.". The post referenced above indicated that there were no specific limits in place at that time. Clearly this has since changed, but I am unable to find where this has been documented.
It's not clear from testing what specifically is triggering the response (e.g. rate limit on request submission, limit on outstanding number of requests, etc.) but it seems only loosely related to the number of connections. For example, it is triggered (sporadically) by simply requesting 100 day bars for 100 symbols using 10 connections.
Please provide details of the exact restrictions in place so that we may manage our requests to ensure they work.
Thanks for the quick response. I agree that without additional data, there's little else to conclude. I've put in place monitoring and logging of the route, so if the issue occurs again, we shall have more information to assess the likely cause.
My server is located in a NJ data center and is only a few milliseconds (and a few hops) from the NY-based exchanges and my broker. I have never had a network issue (> 1 year), but my previous data provider was also NY-based. I've only been running IQFeed as a live data source for four days.
For reference, this is the route to your solar2 server:
 1  x.x.x.x (x.x.x.x)  0.586 ms  0.645 ms  0.771 ms
 2  207.99.53.41 (207.99.53.41)  0.408 ms  0.423 ms  0.421 ms
 3  0.e1-1.tbr1.tl9.nac.net (209.123.10.102)  1.411 ms  1.405 ms
    0.e1-3.tbr2.mmu.nac.net (209.123.10.26)  0.476 ms
 4  0.e1-1.tbr2.tl9.nac.net (209.123.10.78)  1.367 ms  1.459 ms  1.439 ms
 5  xe-11-1-3.edge8.NewYork1.Level3.net (4.31.30.37)  1.632 ms  1.626 ms  1.597 ms
 6  vlan70.csw2.NewYork1.Level3.net (4.69.155.126)  47.160 ms  47.200 ms
    vlan60.csw1.NewYork1.Level3.net (4.69.155.62)  47.182 ms
 7  ae-92-92.ebr2.NewYork1.Level3.net (4.69.148.45)  46.057 ms
    ae-72-72.ebr2.NewYork1.Level3.net (4.69.148.37)  45.837 ms
    ae-82-82.ebr2.NewYork1.Level3.net (4.69.148.41)  45.967 ms
 8  ae-48-48.ebr2.NewYork2.Level3.net (4.69.201.38)  46.794 ms  46.769 ms
    ae-45-45.ebr2.NewYork2.Level3.net (4.69.141.22)  46.835 ms
 9  ae-2-2.ebr1.Chicago1.Level3.net (4.69.132.65)  47.130 ms  46.793 ms  48.258 ms
10  ae-6-6.ebr1.Chicago2.Level3.net (4.69.140.190)  46.733 ms  47.003 ms  46.965 ms
11  ae-3-3.ebr2.Denver1.Level3.net (4.69.132.61)  47.258 ms  46.994 ms  46.978 ms
12  ae-22-52.car2.Denver1.Level3.net (4.69.147.100)  222.894 ms  222.742 ms  222.645 ms
13  DTN.car2.Denver1.Level3.net (4.53.2.94)  56.465 ms  57.481 ms  57.469 ms
14  66.112.152.58 (66.112.152.58)  59.367 ms  59.374 ms  59.263 ms
15  solar2.interquote.com (66.112.156.222)  56.198 ms !X  56.680 ms !X  56.858 ms !X
I just ran a 5 minute test on this route with the following results:
Host                                    Loss%   Snt   Last    Avg   Best   Wrst  StDev
 1. x.x.x.x                              0.0%   377    1.0    0.9    0.5   17.8    0.9
 2. 207.99.53.41                         0.0%   377    0.6    1.6    0.4   33.7    2.9
 3. 0.e1-3.tbr2.mmu.nac.net              0.0%   377    0.7    2.6    0.5   28.8    3.4
 4. 0.e1-1.tbr2.tl9.nac.net              0.0%   377    1.5    2.1    1.4   24.5    1.9
 5. xe-11-1-3.edge8.NewYork1.Level3.net  0.0%   376    1.8    2.9    1.6   59.4    5.8
 6. vlan70.csw2.NewYork1.Level3.net      0.0%   376   47.4   47.6   46.9   61.1    1.8
 7. ae-72-72.ebr2.NewYork1.Level3.net    0.0%   376   46.4   46.6   46.0   60.5    1.5
 8. ae-47-47.ebr2.NewYork2.Level3.net    0.0%   376   46.9   47.4   46.9   59.2    1.3
    ae-48-48.ebr2.NewYork2.Level3.net
    ae-46-46.ebr2.NewYork2.Level3.net
    ae-45-45.ebr2.NewYork2.Level3.net
 9. ae-2-2.ebr1.Chicago1.Level3.net      0.0%   376   47.0   47.3   46.6   58.7    1.6
10. ae-6-6.ebr1.Chicago2.Level3.net      0.0%   376   48.3   47.4   46.6   72.6    2.3
11. ae-3-3.ebr2.Denver1.Level3.net       0.0%   376   48.4   47.4   46.6   71.0    1.8
12. ae-22-52.car2.Denver1.Level3.net     0.0%   376  165.6   68.9   46.6  278.2   47.0
13. DTN.car2.Denver1.Level3.net          0.0%   376   58.2   67.5   57.2  899.8   56.1
14. 66.112.152.58                        0.0%   376   58.0   58.3   57.2  200.8    7.4
15. solar2.interquote.com                0.0%   376   57.0   57.2   56.8   69.0    0.8
Even at this time (1:30AM EST), although all packets are getting through, there seem to be some issues around the node DTN.car2.Denver1.Level3.net, with latency of up to 900ms and significant jitter. I presume this is where your data center connects to Level3. This may just be ICMP rate limiting, though, since the final node shows no such jitter.
Edited by dk_brad on May 13, 2014 at 01:13 AM
Per the subject, I experienced repeated disconnections last Friday.
IQConnect started and logged in at ~09:53:38Z (UTC). Over the next few seconds, the following local connections were established:
- 11 connections to port 9100 (lookup)
- 2 connections to port 5009 (L1)
The L1 connections both reported the remote IP as 66.112.156.223/60002.
At approximately 14:18Z, the following sequence of disconnection/reconnection occurred. C1/C2 refers to the two L1 port connections.
14:18:49.933Z: C1 [SERVER DISCONNECTED]
14:18:49.933Z: C2 [SERVER DISCONNECTED]
14:18:50.164Z: C1 [SERVER DISCONNECTED]
14:18:50.164Z: C2 [SERVER DISCONNECTED]
14:21:18.834Z: C1 [SERVER DISCONNECTED]
14:21:18.834Z: C2 [SERVER DISCONNECTED]

14:22:22.886Z: C1 [KEY]
14:22:22.886Z: C1 [SERVER CONNECTED]
14:22:22.886Z: C2 [KEY]
14:22:22.886Z: C2 [SERVER CONNECTED]
14:22:22.886Z: C2 [IP]
14:22:22.886Z: C1 [IP]
14:22:22.886Z: C2 [CUST (ip/port=66.112.156.222/60003)]
14:22:22.886Z: C1 [CUST (ip/port=66.112.156.222/60003)]

14:23:28.933Z: C1 [SERVER DISCONNECTED]
14:23:28.933Z: C2 [SERVER DISCONNECTED]
14:23:54.840Z: C1 [SERVER DISCONNECTED]
14:23:54.840Z: C2 [SERVER DISCONNECTED]
14:24:07.016Z: C1 [SERVER DISCONNECTED]
14:24:07.016Z: C2 [SERVER DISCONNECTED]
14:24:57.669Z: C1 [SERVER DISCONNECTED]
14:24:57.669Z: C2 [SERVER DISCONNECTED]

14:26:10.663Z: C1 [KEY]
14:26:10.663Z: C1 [SERVER CONNECTED]
14:26:10.663Z: C2 [KEY]
14:26:10.663Z: C1 [IP]
14:26:10.663Z: C2 [SERVER CONNECTED]
14:26:10.663Z: C1 [CUST (ip/port=66.112.156.223/60002)]
14:26:10.663Z: C2 [IP]
14:26:10.663Z: C2 [CUST (ip/port=66.112.156.223/60002)]

14:26:33.933Z: C1 [SERVER DISCONNECTED]
14:26:33.933Z: C2 [SERVER DISCONNECTED]
14:26:45.778Z: C1 [SERVER DISCONNECTED]
14:26:45.778Z: C2 [SERVER DISCONNECTED]

14:26:45.986Z: C1 [KEY]
14:26:45.986Z: C1 [SERVER CONNECTED]
14:26:45.986Z: C2 [KEY]
14:26:45.986Z: C2 [SERVER CONNECTED]
14:26:45.986Z: C1 [IP]
14:26:45.986Z: C2 [IP]
14:26:45.986Z: C1 [CUST (ip/port=66.112.156.229/60003)]
14:26:45.986Z: C2 [CUST (ip/port=66.112.156.229/60003)]

14:28:37.934Z: C1 [SERVER DISCONNECTED]
14:28:37.934Z: C2 [SERVER DISCONNECTED]
14:28:49.937Z: C1 [SERVER DISCONNECTED]
14:28:49.937Z: C2 [SERVER DISCONNECTED]
14:29:06.235Z: C1 [SERVER DISCONNECTED]
14:29:06.235Z: C2 [SERVER DISCONNECTED]

14:29:27.410Z: C1 [KEY]
14:29:27.410Z: C1 [SERVER CONNECTED]
14:29:27.410Z: C2 [KEY]
14:29:27.410Z: C2 [SERVER CONNECTED]
14:29:27.410Z: C1 [IP]
14:29:27.410Z: C1 [CUST (ip/port=66.112.148.114/60002)]
14:29:27.410Z: C2 [IP]
14:29:27.410Z: C2 [CUST (ip/port=66.112.148.114/60002)]

14:30:19.933Z: C1 [SERVER DISCONNECTED]
14:30:19.933Z: C2 [SERVER DISCONNECTED]
14:30:31.531Z: C1 [SERVER DISCONNECTED]
14:30:31.531Z: C2 [SERVER DISCONNECTED]

14:30:44.175Z: C1 [KEY]
14:30:44.175Z: C1 [SERVER CONNECTED]
14:30:44.175Z: C1 [IP]
14:30:44.175Z: C2 [KEY]
14:30:44.175Z: C2 [SERVER CONNECTED]
14:30:44.175Z: C1 [CUST (ip/port=66.112.156.115/60003)]
14:30:44.175Z: C2 [IP]
14:30:44.175Z: C2 [CUST (ip/port=66.112.156.115/60003)]

14:30:53.935Z: C1 [SERVER DISCONNECTED]
14:30:53.935Z: C2 [SERVER DISCONNECTED]
14:31:08.784Z: C1 [SERVER DISCONNECTED]
14:31:08.784Z: C2 [SERVER DISCONNECTED]
14:31:14.947Z: C1 [SERVER DISCONNECTED]
14:31:14.949Z: C2 [SERVER DISCONNECTED]

14:31:25.383Z: C1 [KEY]
14:31:25.383Z: C1 [SERVER CONNECTED]
14:31:25.383Z: C2 [KEY]
14:31:25.383Z: C2 [SERVER CONNECTED]
14:31:25.384Z: C1 [IP]
14:31:25.384Z: C1 [CUST (ip/port=66.112.156.224/60003)]
14:31:25.384Z: C2 [IP]
14:31:25.384Z: C2 [CUST (ip/port=66.112.156.224/60003)]
This continued for some time.
Was there an issue with your service during this time? If not, are you aware of what may cause this? (FYI, Level3 is the only network between my host and your servers). I have confirmed with my hosting provider that they had no connectivity issues at that time.