Pages: [1] 2 :: one page |
|
Thread Statistics | Show CCP posts - 4 post(s) |
|
CCP Logibro
C C P C C P Alliance
1440
|
Posted - 2016.05.23 14:36:19 -
[1] - Quote
To help developers working with market data, we're adding a new Bulk Market Order endpoint to CREST. Find all of the details here.
CCP Logibro // EVE Universe Community Team // Distributor of Nanites // Patron Saint of Logistics
@CCP_Logibro
|
|
DoToo Foo
Sons Of Alexander AL3XAND3R.
66
|
Posted - 2016.05.23 14:58:54 -
[2] - Quote
Thank you
http://foo-eve.blogspot.com.au/
|
Querns
GoonWaffe Goonswarm Federation
2402
|
Posted - 2016.05.23 15:05:48 -
[3] - Quote
This is excellent news, and a great idea.
Unsolicited advice: a thing that stuck out to me was that the size of the returned response was still very large. Perhaps you might look into alternate transport formats for this data? Something like protocol buffers work very well for serializing data in a very small way. It may or may not be practical; just throwing it out there as a way to reduce the payload's size further.
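To give a rough feel for the savings Querns is pointing at, here is a toy comparison of one market order serialized as JSON versus a fixed-width binary record. The stdlib `struct` module stands in for protocol buffers here (protobuf's varint encoding is typically even smaller), and the field set is illustrative, not CREST's actual schema:

```python
import json
import struct

# One hypothetical market order, serialized both ways.
order = {"type": 34, "price": 5.12, "volume": 1000000, "buy": True}

as_json = json.dumps(order).encode()
# "<IdQ?" = little-endian uint32 type, double price, uint64 volume, bool buy
as_binary = struct.pack("<IdQ?", order["type"], order["price"],
                        order["volume"], order["buy"])

print(len(as_json), len(as_binary))  # the binary record is 21 bytes
```

Even before gzip, dropping the repeated JSON key names per order cuts the payload to a fraction of its text size.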
This post was crafted by the wormhole expert of the Goonswarm Economic Warfare Cabal, the foremost authority on Eve: Online economics and gameplay.
|
Steve Ronuken
Fuzzwork Enterprises Vote Steve Ronuken for CSM
6000
|
Posted - 2016.05.23 15:16:58 -
[4] - Quote
Foxfour is a lovely lovely man.
Thanks for doing this. And so quickly after I asked.
Woo! CSM XI!
Fuzzwork Enterprises
Twitter: @fuzzysteve on Twitter
|
|
CCP FoxFour
C C P C C P Alliance
4311
|
Posted - 2016.05.23 16:12:17 -
[5] - Quote
Querns wrote:This is excellent news, and a great idea. Unsolicited advice: a thing that stuck out to me was that the size of the returned response was still very large. Perhaps you might look into alternate transport formats for this data? Something like protocol buffers work very well for serializing data in a very small way. It may or may not be practical; just throwing it out there as a way to reduce the payload's size further.
I would SOOOOO love to do this!
@CCP_FoxFour // Technical Designer // Team Tech Co
Third-party developer? Check out the official developers site for dev blogs, resources, and more.
|
|
Querns
GoonWaffe Goonswarm Federation
2402
|
Posted - 2016.05.23 17:47:09 -
[6] - Quote
CCP FoxFour wrote:Querns wrote:This is excellent news, and a great idea. Unsolicited advice: a thing that stuck out to me was that the size of the returned response was still very large. Perhaps you might look into alternate transport formats for this data? Something like protocol buffers work very well for serializing data in a very small way. It may or may not be practical; just throwing it out there as a way to reduce the payload's size further. I would SOOOOO love to do this!
Well here's hoping that the serialization layer of CREST is decoupled enough to make it easy. :)
This post was crafted by the wormhole expert of the Goonswarm Economic Warfare Cabal, the foremost authority on Eve: Online economics and gameplay.
|
POS Trader
Merchants of Lore
11
|
Posted - 2016.05.23 21:36:20 -
[7] - Quote
It would be awesome if it were possible to access the bulk market, but with some sort of sieve list. For example, instead of specifying individual type IDs or fetching them all, why not allow multiple types:
https://crest../market/xyz/orders/multiple?type=12345&type=234&type=345...
Maybe with the maximum number of type IDs capped at 100, so it doesn't overflow the 10k-entry page response?
That way my application could be reduced to one query per region to sync all data, instead of 60+.
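As a sketch of the proposed sieve list, the client side could chunk its type IDs into groups of at most 100 and build one request URL per chunk. Note the `/orders/multiple` path and `type` parameter are the suggestion above, not an existing CREST resource; the base URL reuses the real host and The Forge's region ID from elsewhere in the thread:

```python
from urllib.parse import urlencode

# Hypothetical endpoint from the proposal above -- not a real CREST resource.
BASE = "https://crest-tq.eveonline.com/market/10000002/orders/multiple"

def chunk(seq, size=100):
    """Yield successive chunks of at most `size` items."""
    for i in range(0, len(seq), size):
        yield seq[i:i + size]

def build_urls(type_ids, base=BASE, max_per_request=100):
    """One URL per chunk, e.g. ...?type=34&type=35&..."""
    return [
        base + "?" + urlencode([("type", t) for t in ids])
        for ids in chunk(list(type_ids), max_per_request)
    ]

urls = build_urls(range(34, 284))  # 250 type IDs -> 3 requests
```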
|
Steve Ronuken
Fuzzwork Enterprises Vote Steve Ronuken for CSM
6000
|
Posted - 2016.05.23 21:57:25 -
[8] - Quote
POS Trader wrote:It would be awesome if it was possible to access bulk market, but with some sort of a sieve list. For example, instead of specifying individual type ids or fetching them all, why not allow for multiple types, https://crest../market/xyz/orders/multiple?type=12345&type=234&type=345...
With maybe maximum number of type ids set to 100 so it doesn't overflow the 10k entries page response? That way my application could be reduced to one query per region to sync all data, instead of 60+
A large chunk of this is to do with caching. Caching 'random' requests like this isn't hugely viable, because you're likely the only person asking for that specific set.
Woo! CSM XI!
Fuzzwork Enterprises
Twitter: @fuzzysteve on Twitter
|
|
CCP FoxFour
C C P C C P Alliance
4311
|
Posted - 2016.05.23 22:52:37 -
[9] - Quote
POS Trader wrote:It would be awesome if it was possible to access bulk market, but with some sort of a sieve list. For example, instead of specifying individual type ids or fetching them all, why not allow for multiple types, https://crest../market/xyz/orders/multiple?type=12345&type=234&type=345...
With maybe maximum number of type ids set to 100 so it doesn't overflow the 10k entries page response? That way my application could be reduced to one query per region to sync all data, instead of 60+
With this new resource it should be about 30 requests at the high end for regions like The Forge and next week if I am able to get it up to 30,000 orders per page then it is only 10 requests at the most per region. The bigger issue is just the sheer size of it all. Whelp.
@CCP_FoxFour // Technical Designer // Team Tech Co
Third-party developer? Check out the official developers site for dev blogs, resources, and more.
|
|
Vic Vorlon
Aideron Robotics
52
|
Posted - 2016.05.24 14:00:12 -
[10] - Quote
Steve Ronuken wrote:A large chunk of this is to do with caching. Caching 'random' requests like this isn't hugely viable. Because you're likely the only person who's asking for that specific set.
That's really interesting to know! So it's less strain on the server to deliver the entire data set, because it's static (same data to everyone who asks), instead of a specified group of type ids. That makes sense now.
How often is that static data set updated? Hourly? Every few minutes? |
|
Jai Blaze
Honor Forge Joint Operation Involving Nobodys
0
|
Posted - 2016.05.27 14:45:27 -
[11] - Quote
Steve Ronuken wrote:A large chunk of this is to do with caching. Caching 'random' requests like this isn't hugely viable. Because you're likely the only person who's asking for that specific set.
But if each typeID has been recently cached, couldn't a request including multiple IDs simply collect the info on each item and send it in bulk, rather than triggering a cache for the specific request?
I send requests to eve-central all the time that include 28 itemIDs (I found that once I get into the 5-digit IDs, adding any more breaks the URL). This is an example of the first request I send eve-central for my spreadsheet.
http://api.eve-central.com/api/marketstat?typeid=34,35,36,37,38,39,40,377,380,393,394,399,400,434,438,439,440,442,443...etc
Couldn't CREST handle it similarly?
Edit: I may not understand caching at all, but I do think the data transfer from CREST could be reduced a lot by letting us choose a station or something to filter the response. Is it a problem of processing vs bandwidth, and they just prefer to heap it on bandwidth? |
Steve Ronuken
Fuzzwork Enterprises Vote Steve Ronuken for CSM
6000
|
Posted - 2016.05.27 17:49:58 -
[12] - Quote
Jai Blaze wrote:Steve Ronuken wrote:A large chunk of this is to do with caching. Caching 'random' requests like this isn't hugely viable. Because you're likely the only person who's asking for that specific set. but if each typeID has been recently cached, couldn't the request including multiple IDs simply collect info on each item and send in bulk rather than triggering a cache for the specific request? I send requests to eve-central all the time that include 28 itemIDs (I found that once I get into the 5 digit IDs, any more requests break the URL). this is an example of the first request I send eve-central for my spreadsheet. http://api.eve-central.com/api/marketstat?typeid=34,35,36,37,38,39,40,377,380,393,394,399,400,434,438,439,440,442,443...etc Couldn't crest handle it similarly? Edit: I may not understand caching at all, I do think the data transfer from crest could be reduced a lot by letting us choose a station or something to filter the response. Is it a problem of processing vs bandwidth and they just prefer to heap it on bandwidth?
CREST caches at the request/response level, not the DB query level.
Woo! CSM XI!
Fuzzwork Enterprises
Twitter: @fuzzysteve on Twitter
|
Pete Butcher
KarmaFleet Goonswarm Federation
335
|
Posted - 2016.05.30 05:41:15 -
[13] - Quote
There should be some filtering available. I've done some testing and making ~750 requests for individual items is almost 3x faster than making whole market requests for 12 regions (those include The Forge and Sinq Laison).
http://evernus.com - the ultimate multiplatform EVE trade tool + nullsec Alliance Market tool + Trade Advisor
|
Pete Butcher
KarmaFleet Goonswarm Federation
335
|
Posted - 2016.05.30 06:41:58 -
[14] - Quote
The main thing killing the performance is the serial nature of the endpoint - we cannot make concurrent requests for different pages; we need to fetch page by page. With Eve having ~60 regions, it means we can make at most ~60 requests per few seconds it takes to get a reply and parse it.
I propose another endpoint which will return the page count upfront, so we can use our rate limit in full and fetch the pages concurrently.
http://evernus.com - the ultimate multiplatform EVE trade tool + nullsec Alliance Market tool + Trade Advisor
|
Iron Dezi
Ministry of War Amarr Empire
0
|
Posted - 2016.05.31 00:15:25 -
[15] - Quote
How are the responses sorted? They don't seem to be sorted by time issued, time remaining, or type ID. |
|
CCP FoxFour
C C P C C P Alliance
4311
|
Posted - 2016.05.31 20:18:34 -
[16] - Quote
Pete Butcher wrote:There should be some filtering available. I've done some testing and making ~750 requests for individual items is almost 3x faster than making whole market requests for 12 regions (those include The Forge and Sinq Laison).
Can you do that testing again after today's deployment?
@CCP_FoxFour // Technical Designer // Team Tech Co
Third-party developer? Check out the official developers site for dev blogs, resources, and more.
|
|
Nuke Beta
Tacere Servitium
0
|
Posted - 2016.06.01 04:10:16 -
[17] - Quote
Pete Butcher wrote:The main thing killing the performance is the serial nature of the endpoint - we cannot make concurrent requests for different pages; we need to fetch page by page. With Eve having ~60 regions, it means we can make at most ~60 requests per few seconds it takes to get a reply and parse it.
I propose another endpoint which will return the page count upfront, so we can use our rate limit in full and fetch the pages concurrently.
After I get the first reply I check the page count and queue up each of the remaining pages so I'm typically asynchronously downloading some number of them in no particular order (Python asyncio/aiohttp). Isn't this = "making concurrent requests for different pages"? I don't do Python professionally so I could certainly have missed the big picture...
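A minimal, transport-agnostic sketch of that queue-up-the-remaining-pages approach: fetch page 1, read the page count, then pull the rest concurrently under a semaphore. The `pageCount` key and `?page=N` URL scheme are assumptions about the paged-collection shape, and `fetch` stands in for an aiohttp-based coroutine:

```python
import asyncio

def page_urls(base, page_count):
    """URLs for pages 2..page_count, assuming a ?page=N query parameter."""
    return [f"{base}?page={n}" for n in range(2, page_count + 1)]

async def fetch_all(fetch, base, max_concurrent=20):
    """Fetch page 1, then download the remaining pages concurrently.

    `fetch` is any coroutine url -> parsed JSON (e.g. one wrapping
    aiohttp.ClientSession.get); it is injected so the sketch stays
    transport-agnostic and respects whatever rate limit you enforce.
    """
    sem = asyncio.Semaphore(max_concurrent)  # cap in-flight requests

    async def bounded(url):
        async with sem:
            return await fetch(url)

    first = await bounded(base)
    rest = await asyncio.gather(
        *(bounded(u) for u in page_urls(base, first.get("pageCount", 1)))
    )
    return [first, *rest]
```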
My OCD is struggling with not being able to be sure that the data set that comes back is coherent, I think the cache can update in the middle of the download? It seems like it would be handy to have an indication of which cache "version" the data came from so you could redo the ones from the previous cache. However, I have a vague memory that there may be two cache servers that aren't synced so there may be no way to guarantee that you can get a coherent set? |
Steve Ronuken
Fuzzwork Enterprises Vote Steve Ronuken for CSM
6003
|
Posted - 2016.06.01 11:35:38 -
[18] - Quote
Nuke Beta wrote:Pete Butcher wrote:The main thing killing the performance is the serial nature of the endpoint - we cannot make concurrent requests for different pages; we need to fetch page by page. With Eve having ~60 regions, it means we can make at most ~60 requests per few seconds it takes to get a reply and parse it.
I propose another endpoint which will return the page count upfront, so we can use our rate limit in full and fetch the pages concurrently. After I get the first reply I check the page count and queue up each of the remaining pages so I'm typically asynchronously downloading some number of them in no particular order (Python asyncio/aiohttp). Isn't this = "making concurrent requests for different pages"? I don't do Python professionally so I could certainly have missed the big picture... My OCD is struggling with not being able to be sure that the data set that comes back is coherent, I think the cache can update in the middle of the download? It seems like it would be handy to have an indication of which cache "version" the data came from so you could redo the ones from the previous cache. However, I have a vague memory that there may be two cache servers that aren't synced so there may be no way to guarantee that you can get a coherent set?
_Strictly_, you shouldn't be asking for pages you haven't been given a link for.
Woo! CSM XI!
Fuzzwork Enterprises
Twitter: @fuzzysteve on Twitter
|
Pete Butcher
KarmaFleet Goonswarm Federation
336
|
Posted - 2016.06.01 15:26:11 -
[19] - Quote
Nuke Beta wrote:Pete Butcher wrote:The main thing killing the performance is the serial nature of the endpoint - we cannot make concurrent requests for different pages; we need to fetch page by page. With Eve having ~60 regions, it means we can make at most ~60 requests per few seconds it takes to get a reply and parse it.
I propose another endpoint which will return the page count upfront, so we can use our rate limit in full and fetch the pages concurrently. After I get the first reply I check the page count and queue up each of the remaining pages so I'm typically asynchronously downloading some number of them in no particular order (Python asyncio/aiohttp). Isn't this = "making concurrent requests for different pages"? I don't do Python professionally so I could certainly have missed the big picture... My OCD is struggling with not being able to be sure that the data set that comes back is coherent, I think the cache can update in the middle of the download? It seems like it would be handy to have an indication of which cache "version" the data came from so you could redo the ones from the previous cache. However, I have a vague memory that there may be two cache servers that aren't synced so there may be no way to guarantee that you can get a coherent set?
The problem is, you don't know the URL structure to make requests based only on page count. If you assume one, your app will stop working on any change. That's why an endpoint with links to each page would solve all these issues.
http://evernus.com - the ultimate multiplatform EVE trade tool + nullsec Alliance Market tool
|
Dread Griffin
Blueprint Haus Blades of Grass
0
|
Posted - 2016.06.01 17:01:29 -
[20] - Quote
I understand I can get these bulk orders for The Forge, for example, by using https://crest-tq.eveonline.com/market/10000002/orders/all/ but how do I walk to that path from the root instead of hard-coding it?
I've tried walking down these paths found in the root, but they don't seem to provide a way to the bulk path: https://crest-tq.eveonline.com/market/prices/ https://crest-tq.eveonline.com/market/groups/ https://crest-tq.eveonline.com/market/types/ |
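For illustration, href-walking rather than hard-coding might look like the sketch below: root, then the regions collection, then one region's representation. The key names (`regions`, `items`, `name`, `marketOrdersAll`) are illustrative guesses, so inspect the actual JSON to find the real ones:

```python
def find_orders_href(get_json, root_url, region_name):
    """Walk hrefs from the CREST root to a region's bulk-orders resource.

    `get_json` is any callable url -> parsed JSON (e.g. lambda u:
    requests.get(u).json()); the key names used here are assumptions.
    """
    root = get_json(root_url)
    # Follow the regions collection link from the root document.
    regions = get_json(root["regions"]["href"])
    # Pick the region by name, then read its bulk-orders href.
    region = next(r for r in regions["items"] if r["name"] == region_name)
    return get_json(region["href"])["marketOrdersAll"]["href"]
```

Only the root URL stays hard-coded; everything else is discovered at run time, which is the point of the CREST walking convention.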
|
Pete Butcher
KarmaFleet Goonswarm Federation
336
|
Posted - 2016.06.01 17:06:25 -
[21] - Quote
It's in the individual region endpoints.
http://evernus.com - the ultimate multiplatform EVE trade tool + nullsec Alliance Market tool
|
Dread Griffin
Blueprint Haus Blades of Grass
0
|
Posted - 2016.06.01 17:40:20 -
[22] - Quote
Pete Butcher wrote:It's in the individual region endpoints.
Thanks, I guess that does make sense. I'll try it out. |
Nuke Beta
Tacere Servitium
0
|
Posted - 2016.06.01 18:14:19 -
[23] - Quote
Pete Butcher wrote:Nuke Beta wrote:Pete Butcher wrote:The main thing killing the performance is the serial nature of the endpoint - we cannot make concurrent requests for different pages; we need to fetch page by page. With Eve having ~60 regions, it means we can make at most ~60 requests per few seconds it takes to get a reply and parse it.
I propose another endpoint which will return the page count upfront, so we can use our rate limit in full and fetch the pages concurrently. After I get the first reply I check the page count and queue up each of the remaining pages so I'm typically asynchronously downloading some number of them in no particular order (Python asyncio/aiohttp). Isn't this = "making concurrent requests for different pages"? I don't do Python professionally so I could certainly have missed the big picture... My OCD is struggling with not being able to be sure that the data set that comes back is coherent, I think the cache can update in the middle of the download? It seems like it would be handy to have an indication of which cache "version" the data came from so you could redo the ones from the previous cache. However, I have a vague memory that there may be two cache servers that aren't synced so there may be no way to guarantee that you can get a coherent set? The problem is - you don't know the url structure to make requests basing only on page count. If you assert one, your app will stop working on any change. That's why an endpoint with links to each page should solve all issues.
Doesn't an endpoint with links to each page have the same problem with the cache as my concerns with getting a coherent set of data? You query to get links to each page of data, cache updates, page count changes +/- 1, your query for the last page may fail or you may not get all the data? |
Pete Butcher
KarmaFleet Goonswarm Federation
336
|
Posted - 2016.06.01 18:23:44 -
[24] - Quote
Nuke Beta wrote:Pete Butcher wrote:Nuke Beta wrote:Pete Butcher wrote:The main thing killing the performance is the serial nature of the endpoint - we cannot make concurrent requests for different pages; we need to fetch page by page. With Eve having ~60 regions, it means we can make at most ~60 requests per few seconds it takes to get a reply and parse it.
I propose another endpoint which will return the page count upfront, so we can use our rate limit in full and fetch the pages concurrently. After I get the first reply I check the page count and queue up each of the remaining pages so I'm typically asynchronously downloading some number of them in no particular order (Python asyncio/aiohttp). Isn't this = "making concurrent requests for different pages"? I don't do Python professionally so I could certainly have missed the big picture... My OCD is struggling with not being able to be sure that the data set that comes back is coherent, I think the cache can update in the middle of the download? It seems like it would be handy to have an indication of which cache "version" the data came from so you could redo the ones from the previous cache. However, I have a vague memory that there may be two cache servers that aren't synced so there may be no way to guarantee that you can get a coherent set? The problem is - you don't know the url structure to make requests basing only on page count. If you assert one, your app will stop working on any change. That's why an endpoint with links to each page should solve all issues. Doesn't an endpoint with links to each page have the same problem with the cache as my concerns with getting a coherent set of data? You query to get links to each page of data, cache updates, page count changes +/- 1, your query for the last page may fail or you may not get all the data?
Yes, it does have the same problem with the cache, but you can fetch each page concurrently rather than in series.
http://evernus.com - the ultimate multiplatform EVE trade tool + nullsec Alliance Market tool
|
Nuke Beta
Tacere Servitium
0
|
Posted - 2016.06.01 20:34:12 -
[25] - Quote
Pete Butcher wrote:Nuke Beta wrote:Pete Butcher wrote: The problem is - you don't know the url structure to make requests basing only on page count. If you assert one, your app will stop working on any change. That's why an endpoint with links to each page should solve all issues.
Doesn't an endpoint with links to each page have the same problem with the cache as my concerns with getting a coherent set of data? You query to get links to each page of data, cache updates, page count changes +/- 1, your query for the last page may fail or you may not get all the data? Yes, it does have the same problem with cache, but you can fetch each page concurrently, rather in series.
At the risk of beating a dead horse, doesn't the Jita region have more than 20 pages of data (you may also have another region already queued up)? So you can't get them all concurrently and there will always be a risk of the cache updating before you finish resulting in a different page count. Just another thing to handle in code I guess... Is it not possible to return some indication of the cache age (or ??) so you have a direct indication of the problem? |
Steve Ronuken
Fuzzwork Enterprises Vote Steve Ronuken for CSM
6004
|
Posted - 2016.06.01 20:39:27 -
[26] - Quote
Jita currently has 9 pages (up to 30k records per page).
Yes, there are potential cache issues (I've run into them sometimes), though you can mitigate them by fetching the types at the edges separately.
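One way to read that mitigation: after a paged download, collect the type IDs that sit at page boundaries (where a mid-download cache refresh could split or duplicate data) and re-fetch just those via the per-type orders endpoint. A sketch of the boundary-collection step, assuming each order entry carries a `type` field as in the bulk responses:

```python
def boundary_types(pages):
    """Type IDs appearing as the first or last order of any downloaded page.

    `pages` is a list of pages, each a list of order dicts; the "type" key
    is an assumption about the order entry shape.
    """
    edges = set()
    for items in pages:
        if items:  # skip empty pages
            edges.add(items[0]["type"])
            edges.add(items[-1]["type"])
    return edges
```

The returned set is what you would re-fetch through the per-type endpoint to patch over a cache rollover.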
Woo! CSM XI!
Fuzzwork Enterprises
Twitter: @fuzzysteve on Twitter
|
Pete Butcher
KarmaFleet Goonswarm Federation
336
|
Posted - 2016.06.02 18:59:52 -
[27] - Quote
CCP FoxFour wrote:Pete Butcher wrote:There should be some filtering available. I've done some testing and making ~750 requests for individual items is almost 3x faster than making whole market requests for 12 regions (those include The Forge and Sinq Laison). Can you do that testing again after todays deployment.
Whole market fetching is about 30% slower than individual types, because of lack of concurrency. That endpoint with links to all pages would be a really good idea, especially since all you have to do is make a list of strings in a loop.
http://evernus.com - the ultimate multiplatform EVE trade tool + nullsec Alliance Market tool
|
Shiri Xuri
Hedion University Amarr Empire
0
|
Posted - 2016.06.09 15:35:56 -
[28] - Quote
It seems like this path returns active market orders as well as market orders that are no longer valid (as in they are no longer available on the market). Is this intentional? I was hoping to be able to get all the market orders that are currently active on the market. |
foxjazz
Froosh INC. SpaceMonkey's Alliance
0
|
Posted - 2016.06.18 19:51:40 -
[29] - Quote
I would like to suggest some technology that may be helpful in servicing subscribed requests.
First, I don't know what underlying database or file store the system uses to hold or transact orders, and it only really matters here if it's SQL Server.
If it is SQL Server, then using Service Broker in conjunction with SignalR would be a nice way of handling push notifications to clients.
Either way, I would suggest using SignalR to let your clients subscribe to push notifications (plain WebSockets would also work, but SignalR is more robust and very fast).
With SignalR, the system could just post a notification whenever a subscribed item changes, and clients would receive a simple JSON message describing the change: itemid, stationid, price, buy/sell, etc.
I'm sure there would be constant traffic within the server farm, but the traffic to subscribed users would vary based on the data they requested.
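As a toy illustration of that subscription model, here is the fan-out logic only, using the message shape described above; a real deployment would push these messages over SignalR or raw WebSockets rather than in-process callbacks:

```python
from collections import defaultdict

class OrderChangeHub:
    """In-process stand-in for a push hub: clients register interest in
    type IDs and receive only the small change messages for those types."""

    def __init__(self):
        self._subs = defaultdict(list)  # type ID -> list of callbacks

    def subscribe(self, type_id, callback):
        self._subs[type_id].append(callback)

    def publish(self, change):
        """`change` follows the shape suggested above, e.g.
        {"itemid": 34, "stationid": 1, "price": 5.0, "buy": True}."""
        for callback in self._subs.get(change["itemid"], []):
            callback(change)
```

The appeal over polling is exactly what the post argues: the server only sends deltas for items a client asked about, instead of the full order book on every cache cycle.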
foxjazz |
Aineko Macx
365
|
Posted - 2016.06.19 05:54:32 -
[30] - Quote
Pete Butcher wrote:Whole market fetching is about 30% slower than individual types, because of lack of concurrency. That endpoint with links to all pages would be a really good idea, especially since all you have to do is make a list of strings in a loop.
I find your performance findings strange, because even without concurrency, pulling the 9 pages for The Forge is still much faster than querying 11k items @ 100 reqs/s. But yes, I asked for collection page indexes a long time ago. Due to the lack of them, this is one of the situations where I disregard the CREST convention and construct the URLs to fetch collection pages in parallel.
iveeCore 3.0: The PHP engine for industrial activities and CREST library
|