Member since: May 7, 2004 01:04 PM
Last post: Sep 13, 2022 01:41 PM
Last visit: Sep 22, 2022 11:14 AM
taa_dtn has contributed 143 posts out of 20682 total posts (0.69%) in 6,716 days (0.02 posts per day).
20 Most recent posts:
False alarm. It turned out to be due to an update elsewhere in the system that caused network connectivity issues.
Using Wine 7.12 under Fedora 36. No updates to Wine or to my software in the past several weeks, and no problems until this morning. Here's what's in the log. Any ideas?
=== IQConnect Log File Opened On Tue Sep 13 10:43:38 ===
Current Log Levels,Connectivity,Information,Admin
Current IQFeed Version,126.96.36.199
STATUS Connectivity 464 0 2022-09-13 10:43:38 Initializing the login thread
STATUS Connectivity 456 0 2022-09-13 10:43:38 Creating trader account verification thread. Status idle
STATUS Information 460 0 2022-09-13 10:43:38 Unable to start Authentication Server. Error (107): Connection refused
STATUS Connectivity 452 0 2022-09-13 10:43:42 Waiting for authentication threads. Result:102 Code:183.
Thanks! It looks like that will work nicely.
Hi, folks. The lists of symbol additions/changes/deletions at http://iqfeed.net/symbolguide are no longer updating. Have those moved, or is this a glitch?
Yes, I experimented with this a few years ago. Your questions are relevant and insightful, but I don't have much useful information to offer in reply.
I haven't seen many published papers on the subject in recent years. Take that with a grain of salt, though, because I'm not looking very actively. It's also possible that where the technique has been applied successfully, it hasn't been discussed in public, for the obvious reasons. Hopefully someone else will reply with better information.
In general, I hit the same roadblocks you did. It's hard to choose the right network architecture (financial data isn't statistically stationary, so I wasn't able to design either recurrent or convolutional networks that were consistently successful). Raw tick-by-tick data has so much variability along so many dimensions that I suspect feature engineering is necessary, but that's a major research project in its own right. Techniques currently being used for natural language processing are probably where I'd start if I were to look at this again today.
Possibly the most fundamental problem I ran into is that it doesn't seem workable to use a scalar value to measure outcomes, so anything based on simple gradient descent is problematic. I think a practical outcome measurement must be at least three-dimensional -- it needs to include return, risk, and capital management. Arguably more, but the need for those three is easy to understand.
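To make "at least three-dimensional" concrete, here's the general shape I have in mind. This is an illustrative sketch rather than code I actually ran, and the particular risk and capital measures are just examples:

// Vector-valued outcome: return, risk (as max drawdown), and capital
// usage measured together rather than collapsed into one scalar.
#include <algorithm>
#include <vector>

struct Outcome {
    double total_return;   // net gain relative to starting equity
    double max_drawdown;   // worst peak-to-trough decline (risk)
    double peak_capital;   // largest capital committed (capital management)
};

Outcome evaluate(const std::vector<double>& equity_curve,
                 const std::vector<double>& capital_in_use)
{
    Outcome o{0.0, 0.0, 0.0};
    if (equity_curve.empty())
        return o;
    o.total_return = equity_curve.back() / equity_curve.front() - 1.0;
    double peak = equity_curve.front();
    for (double e : equity_curve) {
        peak = std::max(peak, e);
        o.max_drawdown = std::max(o.max_drawdown, (peak - e) / peak);
    }
    for (double c : capital_in_use)
        o.peak_capital = std::max(o.peak_capital, c);
    return o;
}

Once the outcome is a vector like this, there's no single gradient to descend; you either need a principled way to scalarize it (which is where I got stuck) or a genuinely multi-objective optimization method.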
Hmm. I haven't run into this problem on my system (Wine 6.21 on Fedora 34). However, I'm still having issues with iqconnect failing to shut down after the last client closes, so I'm killing it explicitly and restarting early each day. Do you have any unexpected iqconnect, explorer, etc. processes lying around?
Thanks for the reply, Gary.
I've written a toy app that exhibits all the correct (expected) behavior, so I'll try to nail down what's different between that and my real apps. I don't have much time available this week, so my next update here might take a while longer.
Now that I have an example working, it's clear that the problem is unrelated to the protocol and client software upgrades.
For reference, I start iqconnect.exe using fork() and execl(), passing a path to the executable and all the appropriate arguments. I wait a few seconds for it to initialize and then I connect. This has worked well for years. CrossOver has never been necessary.
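In outline, the launch looks like the sketch below. The Wine and IQFeed paths and the login arguments are placeholders, not my actual values:

// Sketch of launching iqconnect.exe under Wine via fork()/execl().
// All paths, the product ID, and the credentials are placeholders.
#include <sys/types.h>
#include <unistd.h>
#include <cstdlib>

pid_t launch_iqconnect()
{
    pid_t pid = fork();
    if (pid == 0) {
        // Child process: run iqconnect.exe under Wine.
        execl("/usr/bin/wine", "wine",
              "/home/me/.wine/drive_c/Program Files (x86)/DTN/IQFeed/iqconnect.exe",
              "-product", "MY_PRODUCT_ID",
              "-version", "1.0",
              "-login", "MY_LOGIN",
              "-password", "MY_PASSWORD",
              "-autoconnect",
              (char*)nullptr);
        _exit(EXIT_FAILURE);  // Reached only if execl() fails.
    }
    return pid;  // Parent: sleep a few seconds, then connect to the ports.
}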
My concern is that iqconnect is still running, sending data to a socket that it thinks is still connected to a client even though the client process is long gone. This is very likely to cause problems somewhere down the line. At the very least, iqconnect's CPU demands are going to get progressively larger over time as clients connect and disconnect. At the worst, it could block and be unable to accept new connections. Plus, the widget tray fills up with multiple connection manager icons, some of which don't respond to mouse clicks, which is ugly and confusing.
I still have no solution to this, though a few things I've observed suggest the problem might be related to the way I start iqconnect. I've tried several different methods, and they all misbehave, but some in different ways than others. The common behavior for all of them is that iqconnect thinks it's still connected to a client even after the client (a) closes the only socket it has open to iqconnect, or (b) exits entirely.
FWIW, this problem doesn't exist when using protocol 4.9, but does with 6.2.
So, what is the recommended way to start iqconnect under Wine?
I'm in the middle of a long-overdue protocol upgrade (to 6.2) and am seeing some odd behavior; I wonder whether anyone else has encountered it.
The problem is that long after my app has exited, iqconnect.exe is still running. Diagnostics.exe client stats claims the app is still connected, and still receiving data on the Level 1 port. However, I've confirmed that my app is definitely not running.
ShutdownDelayLastClient is set to a reasonable value (10 seconds), but I doubt it applies, since iqconnect.exe thinks it's still talking to a client. I see that start.exe, conhost.exe, explorer.exe, and iqconnect.exe are all still running, though. Manually stopping the feed closes everything down as expected.
All this is on Fedora 34 Linux using Wine 6.18, so that might be a factor.
Seconding BottomFeeder's request: Is there any estimate for when the historical data will be updated?
Just FYI, today I was seeing the response '!ERROR! Unknown Server Error code 0.' during history fetch from servers in the IP range 188.8.131.52/24. This occurred repeatably. I worked around it by adding a small delay between history requests.
Follow up: Fedora recently upgraded to Wine 6.0, and IQConnect seems to be working again. I'll let you know if the problem reappears.
Back to normal performance levels today.
Hi, Gary. Thanks for the update!
Just to clarify, is the rate limit per-connection, or per-IP-address (or some other key)? Since I have multiple threads issuing requests on separate connections, there are conditions under which more than 50 requests could be issued over the course of a second for all the connections in aggregate, though not for any individual connection.
I implemented a request-rate limiter last week, just in case. Maybe the observed slowdown was unrelated to the changes at your end. We'll see what happens today.
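In case it's useful to anyone else, the limiter is just a shared gate that enforces a minimum spacing between requests across all threads, roughly like this (the 50/sec figure comes from the announcement; the margin below it is my own choice):

// Shared limiter: a mutex-guarded minimum spacing between requests
// across all threads, kept a bit under the announced 50/sec cap.
#include <chrono>
#include <mutex>
#include <thread>

class RequestRateLimiter {
public:
    explicit RequestRateLimiter(double max_per_sec)
        : interval_(1.0 / max_per_sec) {}

    // Block until this thread is allowed to issue its next request.
    void acquire()
    {
        std::unique_lock<std::mutex> lock(mutex_);
        auto now = Clock::now();
        if (next_slot_ < now)
            next_slot_ = now;
        auto my_slot = next_slot_;
        next_slot_ += std::chrono::duration_cast<Clock::duration>(interval_);
        lock.unlock();
        std::this_thread::sleep_until(my_slot);
    }

private:
    using Clock = std::chrono::steady_clock;
    std::mutex mutex_;
    Clock::time_point next_slot_{};
    std::chrono::duration<double> interval_;
};

// All worker threads share one instance, e.g.:
//   static RequestRateLimiter limiter(40.0);
//   limiter.acquire();  // immediately before each history request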
Following up with some data from my logs. Minor correction: there are 7 active threads, not 8; the code says I chose that to stay under half of IQFeed's suggested limit of 15.
On 2020-12-22, my history fetch showed mean latency (time from sending request to arrival of first data record) of 0.22 seconds. Min was about 0.1, median about 0.24, max about 0.6. Mean data transfer time (from arrival of first data record to arrival of last data record) was 0.09 seconds. Min was 0, median about 0.04, max about 10.2.
So, some equities generate a lot of data. For data transfer alone, the best average completion rate I could ever achieve would be about 80 requests per second (7 threads / 0.09 s mean transfer time ≈ 78). On the other hand, some equities trade very thinly, so their transfer time is essentially zero, and the peak request rate could be extremely high. This is what I mentioned in my earlier postings.
On 2020-12-23, the mean latency was 1.22 seconds (!); min 0.4, median 1.0, max 3.5. Mean transfer time was 1.9; min 0, median 0.04, max 118 (!!!).
Things got really slow on 2020-12-23 for both latency and throughput. Maybe it was just an odd coincidence that this happened the same day we received notice of the new rate limiting algorithm. I'm curious to see what happens tomorrow.
Hi, Mathieu. I don't object to showing you the code (other than embarrassment because some of it is old and crufty), but it's straightforward. This is a C++ socket-based app that has existed for more than a decade, running on a capable modern desktop machine with an AT&T gigabit fiber connection, and there have been no recent changes to the code or to its environment. An overnight factor-of-six slowdown therefore suggests to me that the new rate-limiting on the feed might be having more effect than intended.
Until the new rate-limiting started, I was averaging 12 completed requests per second, which seems pretty reasonable to me. Some loss of performance is caused by round-trip latency (for simplicity and error recovery, as well as compliance with previous IQFeed policy, each thread keeps only one request in flight at a time). But I suspect the main reason for the low average rate is that for some equities, an entire day's worth of ticks takes a significant amount of time to prepare, transmit, and process. It's not just a matter of the maximum rate at which the app is allowed to issue history requests.
Hi, Gary. I'm confused about the purpose and implementation of the new limit.
Each day after the main trading session (at 7PM Eastern) I fetch tick history for all the symbols I'm screening. Currently there are 4090 of them.
My first reaction on hearing about the new rate limit was "That's fine; I'm OK with a lower limit, though I doubt I'm hitting 50 requests per second as it is." I run 8 threads, each of which performs history requests sequentially, so there are at most 8 requests in flight at any given time, and it takes a while for each one to complete and a new one to be issued. (The data has to be received, reformatted, compressed, and written to disk before a new request is issued.)
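Structurally, the fetch is nothing exotic: a fixed pool of workers draining a shared symbol queue, each with one request in flight. A sketch (names are illustrative, and fetch_tick_history() stands in for the real request/receive/reformat/compress/write cycle):

// Fixed pool of workers draining a shared symbol queue; each worker
// keeps exactly one history request in flight at a time.
#include <mutex>
#include <queue>
#include <string>
#include <thread>
#include <vector>

std::mutex queue_mutex;
std::queue<std::string> symbols;  // the ~4090 screened symbols

// Stand-in for the real work: issue the request, receive the ticks,
// reformat, compress, and write to disk before returning.
void fetch_tick_history(const std::string& symbol) { /* ... */ }

void worker()
{
    for (;;) {
        std::string symbol;
        {
            std::lock_guard<std::mutex> lock(queue_mutex);
            if (symbols.empty())
                return;
            symbol = symbols.front();
            symbols.pop();
        }
        fetch_tick_history(symbol);  // completes before the next request
    }
}

int main()
{
    // ... load the symbol list into 'symbols' ...
    std::vector<std::thread> pool;
    for (int i = 0; i < 8; ++i)  // the 8 threads mentioned above
        pool.emplace_back(worker);
    for (auto& t : pool)
        t.join();
}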
Yesterday, for example, the history fetch took 339 seconds, for an average of about 12 completed requests per second.
Today the history fetch took 2031 seconds, for an average of 2 completed requests per second. Wow; that's quite a change. It's slower than the days when I had a 6Mbps DSL line.
My best guess is that the majority of history requests finish quickly, so they're hitting the rate limit and are being delayed by as much as a factor of 6. However, a fair number of requests take a long time to finish, which drags down the average way below the nominal rate limit. High peak rate, low average rate.
So I'm curious as to whether your intent is to limit the rate at which requests are initiated, or the bandwidth demand on the servers. If the latter, the current approach may be too drastic; my connection is idle more than 80% of the time.
Thanks! If they need more information, I'll be happy to help debug.
FYI, I've downgraded Wine to version 5.5 (the most recent older version available in the Fedora 32 repositories) and everything works correctly. This will keep me going for a while, but eventually I'll need a more solid fix.