Retrieve an item's price history on the Steam market - JSON

Regarding items from the Steam market, I was wondering if there is a way to retrieve the price history of an item over a period of time.
I know that Steam provides a special API for developers who want to integrate market-specific data into their own sites, but I haven't been able to find anything about retrieving the price history for an item as JSON.
Have any of you already done this?

I've done some more research and found the way you can retrieve the price history for an item.
As an example for those who are curious, the price history for this random item "Specialized Killstreak Brass Beast" can be retrieved in this way:
http://steamcommunity.com/market/pricehistory/?country=DE&currency=3&appid=440&market_hash_name=Specialized%20Killstreak%20Brass%20Beast

If calling from code:
url = "http://steamcommunity.com/market/pricehistory/"
and the query string payload is:
{
    "country": "US",            # two-letter ISO country code
    "currency": 1,              # 1 is USD, 3 is EUR; not sure what the others are
    "appid": 753,               # the application id; 753 is for Steam cards
    "market_hash_name": "322330-Shadows and Hexes"   # the name of the item on the Steam market
}
The country code is the two-letter ISO country code.
You can find the app id for a game in the URL of its store page. For example, the app id of CS:GO is 730.
You can find the market hash name in the URL of an item's market listing. For example, the hash name of the CS:GO key item is "Glove Case Key".
Its price history can be fetched with the same URL pattern (appid=730, market_hash_name=Glove%20Case%20Key).
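Putting the above together in code, here is a minimal Python sketch (requests library assumed; Steam may require a logged-in session cookie for this endpoint, so treat it as illustrative rather than guaranteed to work anonymously):
import requests

url = "http://steamcommunity.com/market/pricehistory/"
params = {
    "country": "US",            # two-letter ISO country code
    "currency": 1,              # 1 = USD, 3 = EUR
    "appid": 440,               # Team Fortress 2, as in the example URL above
    "market_hash_name": "Specialized Killstreak Brass Beast",
}

resp = requests.get(url, params=params)
resp.raise_for_status()
history = resp.json()
# Each entry in history["prices"] is typically [date, median price, volume].
print(history["prices"][:5])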


Is it possible to restrict a template to be created only once per day

Is it possible to define a template Daily, which can only be created once per day in the sense that if Alice creates one, Bob no longer can, and if Bob creates one, Alice no longer can?
When asking about constraints like "one per day" in DAML, one has to think about the scope of that constraint and who guarantees it.
The simplest possible template in DAML is
template Daily
  with
    holder : Party
  where
    signatory holder
An instance of this template is only known to holder. There is no party, or set of parties that could ensure that there is only one such contract instance between Alice and Bob. In certain ledger topologies, Alice and Bob may not even know about each other, nor is there any party that knows about both.
A set of parties that guarantees the uniqueness is needed:
template Daily
  with
    holder : Party
    uniquenessGuarantors : [Party]
  ...
The uniqueness guarantors need to be able to enable or block the creation of a Daily. In other words, they need to be signatories.
template Daily
  with
    holder : Party
    uniquenessGuarantors : [Party]
  where
    signatory holder, uniquenessGuarantors
Now the easiest way to guarantee any sort of uniqueness in DAML is by using contract keys. Since we want one per day, we need a Date field.
template Daily
  with
    holder : Party
    uniquenessGuarantors : [Party]
    date : Date
  where
    signatory holder, uniquenessGuarantors
    key (uniquenessGuarantors, date) : ([Party], Date)
    maintainer key._1
What this says is that there is a unique copy of Daily for each key, and the guarantors in key._1 are responsible for making it so.
Finally you need a mechanism for actually creating these things, a sort of DailyFactory provided by the guarantors. That factory can also take care of making sure that date is always set to the current date on the ledger.
template DailyFactory
  with
    uniquenessGuarantors : [Party]
    holder : Party
  where
    signatory uniquenessGuarantors
    controller holder can
      nonconsuming FabricateDaily
        : ContractId Daily
        do
          now <- getTime
          let date = toDateUTC now
          create Daily with ..
A simple test shows how it works, with uniqueness being guaranteed by a single party Charlie:
test_daily = scenario do
  [alice, bob, charlie] <- mapA getParty ["Alice", "Bob", "Charlie"]
  fAlice <- submit charlie do
    create DailyFactory with
      holder = alice
      uniquenessGuarantors = [charlie]
  fBob <- submit charlie do
    create DailyFactory with
      holder = bob
      uniquenessGuarantors = [charlie]
  -- Alice can get hold of a `Daily`
  submit alice do
    exercise fAlice FabricateDaily
  -- Neither can create a second
  submitMustFail alice do
    exercise fAlice FabricateDaily
  submitMustFail bob do
    exercise fBob FabricateDaily
  -- The next day Bob can create one
  pass (days 1)
  submit bob do
    exercise fBob FabricateDaily
  -- But neither can create a second
  submitMustFail alice do
    exercise fAlice FabricateDaily
  submitMustFail bob do
    exercise fBob FabricateDaily
Note that in terms of privacy, Alice and Bob don't know about each other or the other's Daily or DailyFactory, but the uniquenessGuarantors know all parties for which uniqueness is maintained, and know of all Daily instances for which they guarantee uniqueness. They have to!
To run the above snippets, you need to import DA.Time and DA.Date.
Beware that getTime returns UTC, so the code guarantees uniqueness of one Daily per day according to UTC, but not, for example, according to a local calendar (which could be, say, Auckland, NZ).

How to interpret Record Type in Socrata New York City Real Property Legals database?

Can you look at https://data.cityofnewyork.us/City-Government/ERROR-in-record-type/dq2e-3a6q
This shows a record_type value that appears to be incorrect:
P:10,item":"Bloomfield"},{"count":9,item":"New Britain"},{"count":8,item":"West Htfd"},{"count":7,item":"Torrington"},{"count":6,item":"Meriden"},{"count":5,item":"Whfd"},{"count":4,item":"Manchester
If you select count(*) and group by record_type you see:
curl 'https://data.cityofnewyork.us/resource/636b-3b5g.json?$select=count(*),record_type&$group=record_type'
[ {
  "count" : "1",
  "record_type" : "P:10,item\":\"Bloomfield\"},{\"count\":9,item\":\"New Britain\"},{\"count\":8,item\":\"West Htfd\"},{\"count\":7,item\":\"Torrington\"},{\"count\":6,item\":\"Meriden\"},{\"count\":5,item\":\"Whfd\"},{\"count\":4,item\":\"Manchester"
}
, {
  "count" : "36631085",
  "record_type" : "P"
} ]
This means there are about 36 million records whose record_type is "P", and one very odd one.
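If you want to check this from code rather than curl, here is a minimal Python sketch of the same SoQL aggregation (requests library assumed; the dataset id 636b-3b5g and field names come from the query above, and the single-character check is just a heuristic for spotting the odd value):
import requests

url = "https://data.cityofnewyork.us/resource/636b-3b5g.json"
params = {"$select": "count(*),record_type", "$group": "record_type"}

rows = requests.get(url, params=params).json()
for row in rows:
    record_type = row["record_type"]
    if len(record_type) != 1:
        print("Suspicious record_type:", repr(record_type), "count:", row["count"])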
One suggestion for the New York City Open Data Law:
We must modify the Open Data Law (http://www1.nyc.gov/site/doitt/initiatives/open-data-law.page) to require New York City government agencies not only to open up data but to actually use the open data portal for their own public sites.
If we allow agencies to simply dump data into a portal, then we have no quality testing, and agencies can trumpet how many datasets are open while no one actually uses the data.
This simple change, "an agency must use its own data" (aka dogfooding), will encourage quality. If you read http://www1.nyc.gov/site/doitt/initiatives/open-data-law.page, it mentions quality only once and says nothing about usage of the data. A portal is not a thing to brag about; it is an important way to join technology and government.
Thanks!

Is it possible to extract game's specific data using steam API?

I am new to JSON and APIs, but for a study project I am trying to figure out how to get game-specific data from the Steam API.
I went through the sign-up process and got an API key from Steam.
At first I thought I could extract all game data with my API key, but I ran into a problem:
most of the JSON queries require my API key, yet they return only a single result (typically for one particular ID).
I want to know which games are best selling or most played, like SteamSpy does: http://steamspy.com/
But the Steam API seems to offer only per-user stats and so on.
At this point, let me highlight again that I am new to JSON.
So I wonder: is it possible to extract game-specific data using the Steam API? Not a single user's data, but the list of all games.
Thank you for reading.
I have done a little research, and I know you can collect the game names and their IDs from Steam by using
http://api.steampowered.com/ISteamApps/GetAppList/v0001/
I believe you can then filter the results by different criteria.
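As a rough illustration, here is a minimal Python sketch that fetches that list (requests library assumed; the exact JSON layout differs between API versions, so the key handling below is a best guess):
import requests

resp = requests.get("http://api.steampowered.com/ISteamApps/GetAppList/v0001/")
resp.raise_for_status()
data = resp.json()

# In the v0001 response the apps appear to be nested under applist -> apps -> app
# (in v0002 "apps" is a plain list); handle both just in case.
apps = data.get("applist", {}).get("apps", {})
if isinstance(apps, dict):
    apps = apps.get("app", [])

for app in apps[:10]:
    print(app.get("appid"), app.get("name"))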
Regarding SteamSpy: this may not answer the full question, but why don't you just use SteamSpy's API?
You can collect the most played games (a small sketch follows the list below):
http://steamspy.com/api.php
### genre ###
Returns games in this particular genre. Requires *genre* parameter and works like this:
* steamspy.com/api.php?request=genre&genre=Early+Access
### top100in2weeks ###
Returns Top 100 games by players in the last two weeks.
### top100forever ###
Returns Top 100 games by players since March 2009.
### top100owned ###
Returns Top 100 games by owners.
### all ###
Returns all games with owners data sorted by owners.
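As promised above, here is a minimal Python sketch for the top100in2weeks request (requests library assumed; the field names inside each entry, such as "name", come from SteamSpy's published format and may change, so treat them as assumptions):
import requests

resp = requests.get("http://steamspy.com/api.php", params={"request": "top100in2weeks"})
resp.raise_for_status()
games = resp.json()   # a dict keyed by appid

for appid, info in list(games.items())[:10]:
    print(appid, info.get("name"))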
I hope this is of some help.
Best regards,
Xsi

Designing REST - save big set of related entities

In my system, I have an entity (sales) that can serve people who have certain ZIP codes.
So each sales entity can have thousands of ZIP codes bound to its account.
I need to develop a REST API that allows loading and editing a sales entity's list of ZIP codes.
Basically I have two options:
1) Create two resources, Sales and SalesZip: submit the Sales data, and then submit SalesZip records for each supported ZIP code.
2) Create the Sales entity, and load the list of supported ZIP codes like this:
{
  id : 1,
  name : "John",
  zip : [
    "90231",
    "12341",
    ...
  ]
}
And submit zip codes like an array:
zip[]=90231,12341
Both ways have some disadvantages.
If I use the first option, I may need to submit too many separate HTTP requests.
If I use the second option, I may need to send a quite big PUT/POST request.
Question
Which option should I use?
What is the best practice for designing such functionality?
What exactly is "quite big"?
As a rough estimate, if each character is 2 bytes and your ZIP codes have 5 characters, each code is 10 bytes. Given that the US has 41,741 ZIP codes, the US worst-case scenario, a salesman who sells across the whole country, would need a payload of around 417,410 bytes, or about 407.6 KB.
On average, how many ZIP codes does a salesman have? How are they distributed? How often do you get these requests? You may discover that it is not that bad after all.
There is not enough data to make a decision, but it seems the second option is not bad.
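To get a feel for the numbers, here is a minimal Python sketch that builds the option-2 payload for the US worst case and measures it (the endpoint in the comment is hypothetical; note the estimate above assumes 2 bytes per character, while a UTF-8 JSON body comes out somewhat smaller):
import json

zips = [f"{i:05d}" for i in range(41741)]        # stand-in for ~41,741 US ZIP codes
body = {"id": 1, "name": "John", "zip": zips}

payload = json.dumps(body).encode("utf-8")
print(f"{len(payload) / 1024:.1f} KiB")          # roughly a few hundred KiB

# Sending it as a single request (requests library assumed, URL hypothetical):
# requests.put("https://api.example.com/sales/1", json=body)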

How should I populate city/state fields based on the zip?

I'm aware there are databases for ZIP codes, but how would I get the city/state fields based on the ZIP? Do these databases contain the cities/states, or do I have to do some sort of lookup against a web service?
\begin{been-there-done-that}
Important realization: There is not a one-to-one mapping between cities/counties and ZIP codes. A ZIP code is not based on a political area but instead a distribution area as defined for the USPS's internal use. It doesn't make sense to look up a city based on a ZIP code unless you have the +4 or the entire street address to match a record in the USPS address database; otherwise, you won't know if it's RICHMOND or HENRICO, DALLAS or FORT WORTH, there's just not enough information to tell.
This is why, for example, many e-commerce vendors find dealing with New York state sales tax frustrating, since that tax scheme is based on county, e-commerce systems typically don't ask for the county, and ZIP codes (the only information they provide instead) in New York can span county lines.
The USPS updates its address database every month, and access to it costs real money, so pretty much any list that you find freely available on the Internet is going to be out of date, especially with the USPS closing post offices to save money.
One ZIP code may span multiple place names, and one city often uses several (but not necessarily whole) ZIP codes. Finally, the city name listed in the ZIP code file may not actually be representative of the place in which the addressee actually lives; instead, it represents the location of their post office. Our office mail is addressed to ASHLAND, but we work about 7 miles from the town's actual political limits. ASHLAND just happens to be where our carrier's route originates from.
For guesstimating someone's location, such as for a search of nearby points of interest, these sources and City/State/ZIP sets are probably fine, they don't need to be exact. But for address validation in a data entry scenario? Absolutely not--validate the whole address or don't bother at all.
Just a friendly reminder to take a step back and remember the data source's intended use!
\end{been-there-done-that}
Modern ZIP code databases contain City and State columns.
http://sourceforge.net/projects/zips/
http://www.populardata.com/
Using the Ziptastic HTTP/JSON API
This is a pretty new service, but according to their documentation, it looks like all you need to do is send a GET request to http://ziptasticapi.com, like so:
GET http://ziptasticapi.com/48867
And they will return a JSON object along the lines of:
{"country": "US", "state": "MI", "city": "OWOSSO"}
Indeed, it works. You can test this from a command line by doing something like:
curl http://ziptasticapi.com/48867
Using the US Postal Service HTTP/XML API
According to the page on the US Postal Service website that documents their XML-based web API, specifically Section 4.0 (page 22) of the PDF document, they have a URL where you can send an XML request containing a 5-digit ZIP code, and they will respond with an XML document containing the corresponding City and State.
According to their documentation, here's what you would send:
http://SERVERNAME/ShippingAPITest.dll?API=CityStateLookup&XML=<CityStateLookupRequest%20USERID="xxxxxxx"><ZipCode ID= "0"><Zip5>90210</Zip5></ZipCode></CityStateLookupRequest>
And here's what you would receive back:
<?xml version="1.0"?>
<CityStateLookupResponse>
  <ZipCode ID="0">
    <Zip5>90210</Zip5>
    <City>BEVERLY HILLS</City>
    <State>CA</State>
  </ZipCode>
</CityStateLookupResponse>
USPS does require that you register with them before you can use the API, but, as far as I could tell, there is no charge for access. By the way, their API has some other features: you can do Address Standardization and Zip Code Lookup, as well as the whole suite of tracking, shipping, labels, etc.
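For completeness, here is a minimal Python sketch of the CityStateLookup call documented above (requests library assumed; SERVERNAME and USERID are the placeholders from the USPS documentation, so substitute the values you receive when you register):
import requests
import xml.etree.ElementTree as ET

# Build the request XML exactly as in the documented example above.
request_xml = (
    '<CityStateLookupRequest USERID="xxxxxxx">'
    '<ZipCode ID="0"><Zip5>90210</Zip5></ZipCode>'
    '</CityStateLookupRequest>'
)
resp = requests.get(
    "http://SERVERNAME/ShippingAPITest.dll",   # SERVERNAME is a placeholder
    params={"API": "CityStateLookup", "XML": request_xml},
)
resp.raise_for_status()

root = ET.fromstring(resp.text)                # <CityStateLookupResponse>
zip_node = root.find("ZipCode")
print(zip_node.findtext("City"), zip_node.findtext("State"))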
I'll try to answer the question "HOW should I populate...", and not "SHOULD I populate..."
Assuming you are going to do this more than once, you would want to build your own database. This could be nothing more than a text file you downloaded from any of the many sources (see the reply above that lists a couple of them). When you need a city name, you search for the ZIP and extract the city/state text. To speed things up, you would sort the table in numeric order by ZIP, build an index of lines, and use a binary search (a sketch follows at the end of this answer).
If your ZIP database looked like this (from the SourceForge project above):
"zip code", "state abbreviation", "latitude", "longitude", "city", "state"
"35004", "AL", " 33.606379", " -86.50249", "Moody", "Alabama"
"35005", "AL", " 33.592585", " -86.95969", "Adamsville", "Alabama"
"35006", "AL", " 33.451714", " -87.23957", "Adger", "Alabama"
The most simple-minded extraction from the text would go something like
$zipLine = lookup($ZIP);                 // e.g. a binary search over the sorted file
if ($zipLine) {
    $fields = explode(", ", $zipLine);
    $city   = trim($fields[4], '"');         // strip the surrounding quotes
    $state  = trim($fields[5], "\" \r\n");   // also drop a possible trailing newline
} else {
    die("$ZIP not found");                   // die() needs parentheses in PHP
}
If you are just playing with text in PHP, that's all you need. But if you have a database application, you would do everything in SQL. Further details on your application may elicit more detailed responses.
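As mentioned above, here is a minimal sketch of the sort-and-binary-search lookup, written in Python rather than PHP purely as an illustration, and assuming the SourceForge column layout shown earlier (the file name zips.csv is made up):
import bisect
import csv

def load_zip_table(path):
    # Read the CSV once, drop the header line, and sort rows by ZIP code.
    with open(path, newline="") as f:
        rows = list(csv.reader(f, skipinitialspace=True))[1:]
    rows.sort(key=lambda r: r[0])
    return rows

def lookup(rows, zip_code):
    # Binary-search the sorted rows; return (city, state) or None.
    i = bisect.bisect_left(rows, [zip_code])
    if i < len(rows) and rows[i][0] == zip_code:
        return rows[i][4], rows[i][5]
    return None

rows = load_zip_table("zips.csv")
print(lookup(rows, "35005"))   # ('Adamsville', 'Alabama') with the sample rows above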