Pages: 1 [2] :: one page |
Kazuno Ozuwara
Science and Trade Institute Caldari State
1
|
Posted - 2015.11.22 19:39:11 -
[31] - Quote
Iam Widdershins wrote:And let me tell you, libraries that support pipelining are few, far between, and hard to use. For any language..
It's as hard as calling one function. Which language/environment has no reliable lib with HTTP pipelining?
!!SCIENCE!!
|
Iam Widdershins
Puppies and Christmas
896
|
Posted - 2015.11.23 09:22:00 -
[32] - Quote
Kazuno Ozuwara wrote:Iam Widdershins wrote:And let me tell you, libraries that support pipelining are few, far between, and hard to use. For any language.. It's hard like calling one function. Which language\environment has no reliable lib with HTTP pipelining?
Python? C#? Java? Why, what are you using?
Lobbying for your right to delete your signature
|
Kazuno Ozuwara
Science and Trade Institute Caldari State
1
|
Posted - 2015.11.23 15:13:49 -
[33] - Quote
Iam Widdershins wrote:Kazuno Ozuwara wrote:Iam Widdershins wrote:And let me tell you, libraries that support pipelining are few, far between, and hard to use. For any language.. It's hard like calling one function. Which language\environment has no reliable lib with HTTP pipelining? Python? C#? Java? Why, what are you using?
.NET supports pipelining by default (I'm using it right now). 5 min of googling for Java and Py: http://www.innovation.ch/java/HTTPClient/advanced_info.html#pipelining, https://gist.github.com/coady/f9e1be438ba8551dabad.
!!SCIENCE!!
|
Pete Butcher
KarmaFleet Goonswarm Federation
303
|
Posted - 2015.11.23 18:31:26 -
[34] - Quote
In C++ you have a crapton of HTTP libs with transparent pipelining support. Nothing hard there.
http://evernus.com - the ultimate multiplatform EVE trade tool + nullsec Alliance Market tool + Trade Advisor
|
Iam Widdershins
Puppies and Christmas
896
|
Posted - 2015.11.25 08:10:41 -
[35] - Quote
That's good, I'm glad to hear .NET has it.
The Python recipe you linked, however, is pretty close to the only thing available short of learning how to use cURL, and it seems pretty hacky; still, it looks like what I'll end up using. It might not be obvious, but setting _HTTPConnection__state to idle is a sketchy, un-pythonic hijack of the connection object's private internal state that tricks it into sending additional requests without waiting, counter to its designed behavior.
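The hack described above can be condensed into a short sketch. This is a port of the idea to Python 3's http.client (the linked gist targets Python 2's httplib); the function name is mine, and the _CS_* constants and the _HTTPConnection__state attribute are stdlib internals that can change between versions, which is exactly why the recipe is sketchy.

```python
# Hedged sketch: pipeline several GETs over one connection by hijacking
# http.client's private state machine, as the recipe above does.
import http.client

def pipelined_get(host, paths, port=80):
    conn = http.client.HTTPConnection(host, port)
    # Send every request up front; forcing the private state back to
    # "Idle" stops request() from refusing to send before the previous
    # response has been read.
    for path in paths:
        conn.request('GET', path)
        conn._HTTPConnection__state = http.client._CS_IDLE
    # Read the responses back in order, faking the "request sent" state
    # that getresponse() insists on.
    bodies = []
    for _ in paths:
        conn._HTTPConnection__state = http.client._CS_REQ_SENT
        resp = conn.getresponse()
        bodies.append(resp.read())
    conn.close()
    return bodies
```

Responses come back strictly in request order, and one slow or failed response stalls everything queued behind it, which is the usual trade-off with pipelining.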
Pete Butcher wrote:In C++ you have a crapton of HTTP libs with transparent pipelining support. Nothing hard there.
Haha, aside from C++, yeah. I should really learn that language sometime.
Lobbying for your right to delete your signature
|
Silvia Sotken
HC - Redemption
0
|
Posted - 2015.11.26 23:34:40 -
[36] - Quote
Lol, it just clicked. I get a lot of ConnectionErrors; now I know why, and there's not much I can do about it. I'd never thought of doing a tracert. I'm not as good at programming as you guys, but I'm learning what I can. It was a small problem under the authed endpoint, but under the public one it is a lot more time consuming to run my data feeds.
Tracing route to public-crest.eveonline.com [87.237.38.221] over a maximum of 30 hops:
  1     7 ms     1 ms     1 ms  mygateway1.ar7 [10.1.1.1]
  2     *        *        *     Request timed out.
  3    10 ms    10 ms    12 ms  cpcak3 [101.98.0.1]
  4     9 ms     8 ms    10 ms  pts [101.98.5.20]
  5    11 ms     8 ms     8 ms  pts [101.98.5.21]
  6    10 ms    12 ms     9 ms  bundle-100.bdr02.akl05.akl.vocus.net.nz [175.45.102.65]
  7   134 ms   134 ms   132 ms  bundle-10.cor01.akl05.akl.VOCUS.net.nz [114.31.202.100]
  8   134 ms   136 ms   133 ms  bundle-200.cor02.lax01.ca.VOCUS.net [114.31.202.47]
  9   135 ms   134 ms   134 ms  bundle-101.bdr02.lax01.ca.VOCUS.net [114.31.199.51]
 10   134 ms   177 ms   168 ms  v301.core1.lax2.he.net [64.62.151.125]
 11   131 ms   137 ms   154 ms  100ge2-1.core1.lax1.he.net [72.52.92.121]
 12   194 ms   198 ms   199 ms  100ge15-2.core1.ash1.he.net [184.105.80.201]
 13   203 ms   201 ms   199 ms  100ge5-1.core1.nyc4.he.net [184.105.223.166]
 14   272 ms   266 ms   274 ms  100ge7-2.core1.lon2.he.net [72.52.92.165]
|
Zifrian
Licentia Ex Vereor Phoebe Freeport Republic
1692
|
Posted - 2015.12.28 17:39:59 -
[37] - Quote
I've started looking at my market history calls, and I'm noticing anywhere from 300 to 3000 milliseconds to download the JSON file. My processing is pretty fast, so this is the main limitation. I'm using 4 connections/threads, because when I went to 8 I started getting 'forcibly disconnected' responses or the file wouldn't download completely. The second part I can (and should) handle, but as I go higher in thread count I'm noticing more issues, and the download time wasn't much better either. I'll try 20 tonight to see what happens and log failures, download times, etc.
The most frustrating part is that it takes about 7-8 minutes on my machine to download history for about 1600 items in The Forge. That's not very efficient for my app. If anyone has any tips, I'd appreciate it. I'm using vb.net so any .net info would be great. If I could get to 2-3 minutes I'd be fine with that.
As mentioned above, it would be awesome to just download the entire market's data in one file per region. Individually is OK, but I think most of us want to get it all and just cache it for use throughout the day. Certainly fewer server calls.
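The bounded-worker approach described above can be sketched in Python for illustration (the post itself uses VB.NET, but the shape is the same; fetch() here is a placeholder for the real CREST call, not an actual client): cap the thread count, collect failures, and retry them in a second pass instead of giving up.

```python
# Hedged sketch: download many endpoints with a capped worker count,
# logging failures for retry rather than raising mid-run.
from concurrent.futures import ThreadPoolExecutor, as_completed

def fetch(url):
    # Placeholder: a real version would issue the HTTP GET here.
    return ('ok', url)

def download_all(urls, workers=4, retries=2):
    results, failed = {}, []
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = {pool.submit(fetch, u): u for u in urls}
        for fut in as_completed(futures):
            url = futures[fut]
            try:
                results[url] = fut.result()
            except Exception:
                failed.append(url)   # collect for a retry pass
    for _ in range(retries):         # sequential retry passes
        if not failed:
            break
        failed, retry = [], failed
        for url in retry:
            try:
                results[url] = fetch(url)
            except Exception:
                failed.append(url)
    return results, failed
```

Logging per-URL timings inside fetch() would also give the failure/download-time data the post talks about collecting.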
"Any fool can criticize, condemn, and complain - and most fools do." - Dale Carnegie
Maximize your Industry Potential! - Download EVE Isk per Hour!
|
Soltys
60
|
Posted - 2016.02.04 14:26:37 -
[38] - Quote
From an Evernus user's perspective - CREST pulls for market analysis are now hours-long, frustrating repetitions of connection-closed errors. I mean hours literally, as it can take that much time to get any data from, say, Jita without errors.
It's near unusable now - and it worked just fine before the switch to the public endpoint, so I don't really suspect Evernus is at fault here: it makes only 6 connections on its own, and I've further limited it in the options to 20 requests/s (and it still constantly fails).
Jita Flipping Inc.: Solmp / Kovl
|
Cornbread Muffin
The Chosen - Holy Warriors of Bob the Unforgiving
0
|
Posted - 2016.02.26 23:47:46 -
[39] - Quote
I just started using CREST to write apps, so I figured I would chime in here as well. Bulk requests to CREST are glacial and error prone. It starts great and ends in tens of thousands of errors. =/
FWIW my company had exactly the same issue with API access for one of our products, especially once we started adding users on other continents and mobile usage increased (mobile = crap latency). We switched from many small requests to a few large requests, and it fixed the performance issues immediately. We tossed pagination as well, because our users wanted all of the pages anyway. As a bonus, it's far easier to troubleshoot. Connection flooding is a hassle every step of the way; it's just not handled well at any layer along the pipe. Pushing bulk data has only a couple of failure points, and you catch them once instead of needing to handle millions of errors for little chunks of data.
Highly compartmentalized APIs are best if the use case is a few, essentially random nodes from a large selection. If the use case means systems are going to visit all of the endpoints anyway to get the work done, you won't regret consolidating.
Pete's proposal or something similar is definitely the way to go. |
Dagobert I Duck
THE HANG0VER
2
|
Posted - 2016.02.27 15:00:02 -
[40] - Quote
- General Rate Limit: 150 requests per second
- Concurrent Connections: 20
Keeping these numbers in mind, I can make 130-140 req/s without any errors. Increasing the concurrent connections to 21 leads to a lot of errors...
I fetch the market within 2 hours. |
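The two limits quoted above can be enforced client-side. A sketch (the 150 req/s and 20-connection figures come from the post; the throttle class itself is an assumption of mine, not anything CCP ships): acquire() blocks until a request can start without breaching either cap.

```python
# Hedged sketch: stay under both a requests-per-second cap and a
# concurrent-connection cap, per the limits quoted above.
import threading
import time

class CrestThrottle:
    def __init__(self, max_rps=140, max_connections=20):
        self.interval = 1.0 / max_rps        # minimum gap between sends
        self.conn_slots = threading.BoundedSemaphore(max_connections)
        self.lock = threading.Lock()
        self.next_send = 0.0

    def acquire(self):
        self.conn_slots.acquire()            # cap concurrent connections
        with self.lock:                      # space requests out in time
            now = time.monotonic()
            wait = self.next_send - now
            self.next_send = max(now, self.next_send) + self.interval
        if wait > 0:
            time.sleep(wait)

    def release(self):
        self.conn_slots.release()
```

Defaulting max_rps slightly below the stated 150/s ceiling mirrors the post's observation that 130-140 req/s runs error-free while pushing the limits does not.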
Cornbread Muffin
The Chosen - Holy Warriors of Bob the Unforgiving
3
|
Posted - 2016.03.07 02:28:24 -
[41] - Quote
I wish that was my experience. I run 18 connections and get nowhere near 130-140 req/s. My servers are in the eastern US. And even if I could, 2 hours to move the amount of data contained in the market is bad. |
Iam Widdershins
Puppies and Christmas
904
|
Posted - 2016.04.10 11:35:25 -
[42] - Quote
Cornbread Muffin wrote:I wish that was my experience. I run 18 connections and get nowhere approaching 130-140rps. My servers are in the eastern US. Even if it was, 2 hours to move the amount of data contained in the market is bad.
The answer, I believe, lies in pipelining requests. Your net libraries may not support this, but HTTP/1.1 supports request/response pipelining natively. Individual requests take some time to get through the pipeline on the server side, and that is exacerbated by the extra ping from being on another continent; waiting on the round trip alone will limit you to fewer than 7.5 request/response cycles per second per connection. By sending multiple requests bundled together and reading their respective responses as they return, you can reduce or eliminate this restriction.
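The arithmetic behind that 7.5 figure is easy to check: without pipelining, each connection completes at most one request/response cycle per round trip, so throughput is bounded by 1/RTT per connection. A sketch using the ~133 ms transatlantic latency visible in the traceroute earlier in the thread (an assumption for anyone else's link):

```python
# Latency-bound throughput model: no pipelining means one in-flight
# request per connection, so each connection tops out at 1/RTT cycles/s.
rtt_s = 0.133      # ~133 ms round trip (from the traceroute above)
connections = 18   # the connection count reported in the quoted post

rps_per_conn = 1 / rtt_s
total_rps = rps_per_conn * connections

print(round(rps_per_conn, 1))  # 7.5 cycles/s per connection
print(round(total_rps))        # 135 req/s across all 18 connections
```

So even 18 perfectly healthy connections sit right at a ~135 req/s ceiling from latency alone, which is why keeping several requests in flight per connection is the only way past it.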
Lobbying for your right to delete your signature
|