I have to get reviews from the Google Maps API; the details are on this page:
https://developers.google.com/places/documentation/details#PlaceDetailsResults
The details will be fetched from this URL:
https://maps.googleapis.com/maps/api/place/details/json?reference=CmRYAAAAciqGsTRX1mXRvuXSH2ErwW-jCINE1aLiwP64MCWDN5vkXvXoQGPKldMfmdGyqWSpm7BEYCgDm-iv7Kc2PF7QA7brMAwBbAcqMr5i1f4PwTpaovIZjysCEZTry8Ez30wpEhCNCXpynextCld2EBsDkRKsGhSLayuRyFsex6JA6NPh9dyupoTH3g&sensor=true&key=AddYourOwnKeyHere
My problem is that I can't find out what the reference parameter in the request is, or how to find its value for my Google+ page.
A more recent way to do this:
https://maps.googleapis.com/maps/api/place/details/json?placeid={place_id}&key={api_key}
place_id: https://developers.google.com/places/place-id
api_key: https://developers.google.com/places/web-service/get-api-key
Response:
{
  "html_attributions": [],
  "result": {
    ...
    "rating": 4.6,
    "reviews": [
      {
        "author_name": "John Smith",
        "author_url": "https://www.google.com/maps/contrib/106615704148318066456/reviews",
        "language": "en",
        "profile_photo_url": "https://lh4.googleusercontent.com/-2t1b0vo3t-Y/AAAAAAAAAAI/AAAAAAAAAHA/0TUB0z30s-U/s150-c0x00000000-cc-rp-mo/photo.jpg",
        "rating": 5,
        "relative_time_description": "in the last week",
        "text": "Great time! 5 stars!",
        "time": 1508340655
      }
    ]
  }
}
Reviews are limited to the 5 latest.
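As an illustration, here is a minimal Python sketch of this request (assuming the requests library; the place ID and key below are placeholders you must replace with your own values):
import requests

PLACE_ID = "your_place_id_here"  # placeholder: find it via the place-id link above
API_KEY = "your_api_key_here"    # placeholder: get it via the get-api-key link above

url = "https://maps.googleapis.com/maps/api/place/details/json"
resp = requests.get(url, params={"placeid": PLACE_ID, "key": API_KEY}).json()

# The reviews (at most 5) live under result.reviews.
for review in resp.get("result", {}).get("reviews", []):
    print(review["author_name"], review["rating"], review["text"])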
To fetch a Google review you need the reference ID for the place. To get this reference key you can use a Google Places search API request:
https://maps.googleapis.com/maps/api/place/textsearch/xml?query=restaurants+in+bangalore&sensor=true&key=AddYourOwnKeyHere
Its response will contain a reference ID which you can use in your request.
<PlaceSearchResponse>
  <status>OK</status>
  <result>
    <name>Koshy's Restaurant</name>
    <type>bar</type>
    <type>restaurant</type>
    <type>food</type>
    <type>establishment</type>
    <formatted_address>
      39, St Marks Road, Shivajinagar, Bangalore, Karnataka, 560001, India
    </formatted_address>
    <geometry>...</geometry>
    <rating>3.7</rating>
    <icon>
      http://maps.gstatic.com/mapfiles/place_api/icons/bar-71.png
    </icon>
    <reference>
      CnRwAAAA1z8aCeII_F2wIVcCnDVPQHQi5zdd-3FsDl6Xhb_16OGrILvvvI4X4M8bFk2U8YvuDCKcFBn_a2rjvYDtvUZJrHykDAntE48L5UX9hUy71Z4n80cO7ve_JXww6zUkoisfFnu6jEHcnKeeTUE42PCA4BIQGhGz0VrXWbADarhKwCQnKhoUOR-Xa9R6Skl0TZmOI4seqt8rO8I
    </reference>
    <id>2730db556ca6707ef517e5c165adda05d2395b90</id>
    <opening_hours>
      <open_now>true</open_now>
    </opening_hours>
    <photo>
      <html_attribution>Ujaval Gandhi</html_attribution>
    </photo>
  </result>
</PlaceSearchResponse>
// $locationName and $websitePhone are your own values.
$reqUri  = 'https://maps.googleapis.com/maps/api/place/nearbysearch/json?key=' . YOURSERVERKEY;
$reqUri .= '&sensor=false&radius=500';
$reqUri .= '&location=38.908310,-104.784035&name=' . urlencode($locationName);
$reqUri .= '&keyword=' . urlencode($websitePhone);
I did this in PHP. Call a URL like this and you get the original result with the reference key.
Then parse it like:
$data = cURL($reqUri); // cURL() here is your own HTTP-request helper
$data = json_decode($data);
echo $data->results[0]->reference;
Hope it helps.
Note: the location=38.908310,-104.784035 value is not determined automatically; you must supply it yourself.
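For comparison, the same Nearby Search request as a minimal Python sketch (an illustration only, assuming the requests library; the key, location, and name are placeholders):
import requests

params = {
    "key": "YOUR_SERVER_KEY",              # placeholder
    "sensor": "false",
    "radius": 500,
    "location": "38.908310,-104.784035",   # you must supply your own coordinates
    "name": "LOCATION NAME",               # placeholder
}
resp = requests.get("https://maps.googleapis.com/maps/api/place/nearbysearch/json",
                    params=params).json()
# The reference key is returned on each result.
print(resp["results"][0]["reference"])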
Using Python and Google Places APIs, you can retrieve business details and reviews (up to 5 reviews) as follows:
# GooglePlaces is a small wrapper class defined in the tutorial linked below.
api = GooglePlaces("Your API key")
places = api.search_places_by_coordinate("40.819057,-73.914048", "100", "restaurant")
# Fields to request from the Details endpoint; adjust the list to what you need.
fields = ['name', 'formatted_address', 'international_phone_number', 'website', 'rating', 'review']
for place in places:
    details = api.get_place_details(place['place_id'], fields)
    try:
        website = details['result']['website']
    except KeyError:
        website = ""
    try:
        name = details['result']['name']
    except KeyError:
        name = ""
    try:
        address = details['result']['formatted_address']
    except KeyError:
        address = ""
    try:
        phone_number = details['result']['international_phone_number']
    except KeyError:
        phone_number = ""
    try:
        reviews = details['result']['reviews']
    except KeyError:
        reviews = []
    print("===================PLACE===================")
    print("Name:", name)
    print("Website:", website)
    print("Address:", address)
    print("Phone Number:", phone_number)
    print("==================REVIEWS==================")
    for review in reviews:
        author_name = review['author_name']
        rating = review['rating']
        text = review['text']
        time = review['relative_time_description']
        profile_photo = review['profile_photo_url']
        print("Author Name:", author_name)
        print("Rating:", rating)
        print("Text:", text)
        print("Time:", time)
        print("Profile photo:", profile_photo)
        print("-----------------------------------------")
For more details about this code and the Google Places API (including the GooglePlaces wrapper class used above), you can check this tutorial: https://python.gotrained.com/google-places-api-extracting-location-data-reviews/
I had problems making it work on localhost. I added my example.com.test domain, but of course I could not verify it, because it cannot be reached from outside (except via something like ngrok).
I found an amazing dirty hack on GitHub: gaffling/PHP-Grab-Google-Reviews.
It worked great for me, except that I had to change the /* CHECK SORT */ line to if (isset($option['sort_by_reating_best_1']) and $option['sort_by_reating_best_1'] == true), and I also limited the foreach to only 5 reviews via the optional second function parameter.
No API key is required at all.
I am new to the Power BI environment. I got this source code from some sources to create a search parameter and show the latest tweets about several keywords. The problem is that the data shown in Power BI only covers the latest 7 days of tweets. How do I get data from the last month or year? Thanks.
Here is the code
/*
This M script gets a bearer token and performs a tweet search from the Twitter REST API
https://dev.twitter.com/oauth/application-only
Requires establishing a Twitter application in order to obtain a Consumer Key & Consumer Secret
https://apps.twitter.com/
IMPORTANT - The Consumer Key and Consumer secret should be treated as passwords and not distributed
*/
let
    // Concatenates the Consumer Key & Consumer Secret and converts to base64
    authKey = "Basic " & Binary.ToText(Text.ToBinary("XXXAPITOKENXXX"), 0),
    url = "https://api.twitter.com/oauth2/token",
    // Uses the Twitter POST oauth2/token method to obtain a bearer token
    GetJson = Web.Contents(url,
        [
            Headers = [#"Authorization" = authKey,
                       #"Content-Type" = "application/x-www-form-urlencoded;charset=UTF-8"],
            Content = Text.ToBinary("grant_type=client_credentials")
        ]
    ),
    FormatAsJson = Json.Document(GetJson),
    // Gets token from the Json response
    AccessToken = FormatAsJson[access_token],
    AccessTokenHeader = "bearer " & AccessToken,
    // Uses the Twitter GET search/tweets method using the bearer token from the previous POST oauth2/token method
    GetJsonQuery = Web.Contents("https://api.twitter.com/1.1/search/tweets.json?q=" & SearchParameters & "&count=100",
        [
            Headers = [#"Authorization" = AccessTokenHeader]
        ]
    ),
    FormatAsJsonQuery = Json.Document(GetJsonQuery),
    NavigateToStatuses = FormatAsJsonQuery[statuses],
    TableFromList = Table.FromList(NavigateToStatuses, Splitter.SplitByNothing(), null, null, ExtraValues.Error),
ExpandColumn = Table.ExpandRecordColumn(TableFromList, "Column1", {"metadata", "created_at", "id", "id_str", "text", "source", "truncated", "in_reply_to_status_id", "in_reply_to_status_id_str", "in_reply_to_user_id", "in_reply_to_user_id_str", "in_reply_to_screen_name", "user", "geo", "coordinates", "place", "contributors", "is_quote_status", "retweet_count", "favorite_count", "entities", "favorited", "retweeted", "lang", "possibly_sensitive", "quoted_status_id", "quoted_status_id_str", "quoted_status"}, {"Column1.metadata", "Column1.created_at", "Column1.id", "Column1.id_str", "Column1.text", "Column1.source", "Column1.truncated", "Column1.in_reply_to_status_id", "Column1.in_reply_to_status_id_str", "Column1.in_reply_to_user_id", "Column1.in_reply_to_user_id_str", "Column1.in_reply_to_screen_name", "Column1.user", "Column1.geo", "Column1.coordinates", "Column1.place", "Column1.contributors", "Column1.is_quote_status", "Column1.retweet_count", "Column1.favorite_count", "Column1.entities", "Column1.favorited", "Column1.retweeted", "Column1.lang", "Column1.possibly_sensitive", "Column1.quoted_status_id", "Column1.quoted_status_id_str", "Column1.quoted_status"})
in
ExpandColumn
The v1.1 search API only provides access to the past 7 days of Tweets. You will need to use a different API to retrieve data from further back: either the premium 30-day or full-archive search (these are commercial but have a free tier for limited access), or the v2 full-archive search (which requires an account on the Academic Research product track).
First you need premium search API access; then, by passing the fromDate, toDate and maxResults parameters, you can get your desired data with pagination.
Doc for reference: the Twitter Search API documentation.
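As a rough illustration, a minimal Python sketch of a premium 30-day search request (assuming the requests library; the token, environment label, dates and query below are all placeholders for your own values):
import requests

BEARER_TOKEN = "your_bearer_token"  # placeholder: obtained via the oauth2/token flow
ENV_LABEL = "dev"                   # placeholder: your premium search environment label

url = "https://api.twitter.com/1.1/tweets/search/30day/" + ENV_LABEL + ".json"
params = {
    "query": "your search keywords",  # placeholder
    "fromDate": "201801010000",       # YYYYMMDDhhmm
    "toDate": "201801310000",
    "maxResults": 100,
}
resp = requests.get(url, params=params,
                    headers={"Authorization": "Bearer " + BEARER_TOKEN})
for tweet in resp.json().get("results", []):
    print(tweet["created_at"], tweet["text"])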
I am a relative newbie to JSON in general, but I have experience handling JSON with Linux-based command-line tools and Python, usually in a crude plain-text manner.
I am attempting to implement some functionality in Google Apps Script. The method contacts the Quandl API and receives back some data, and from that data I would like to return one specific value (the "Close" price in this instance).
function CLOSE_PRICE(ticker, date) {
  var options = {
    'muteHttpExceptions': true,
    "headers": {"Accept": "application/json"}
  };
  var api_key = "some_api_key";
  ticker = "HD";
  date = "2017-12-28";
  var url = "https://www.quandl.com/api/v3/datasets/EOD/" + ticker + ".json?start_date=" + date + "&end_date=" + date + "&api_key=" + api_key;
  var response = UrlFetchApp.fetch(url, options);
  var json = response.getContentText();
  var datum = JSON.parse(json);
  var end_of_day_prices = datum.dataset.data;
  Logger.log(end_of_day_prices);
  var close = end_of_day_prices[4];
  Logger.log(close);
  return close;
}
This is the JSON data I receive back, in a pretty-printed format.
"dataset": {
"id": 42635437,
"dataset_code": "HD",
"database_code": "EOD",
"name": "Home Depot Inc. (The) (HD) Stock Prices, Dividends and Splits",
"description": "<p><b>Ticker</b>: HD</p>\n<p><b>Exchange</b>: NYSE</p>\n<p>Prices, dividends, splits for Home Depot Inc. (The) (HD).\n\n</p><p>Columns:</p>\n<p>Open, High, Low, Close, Volume are <b>unadjusted</b>.</p>\n<p>Dividend shows the <b>unadjusted</b> dividend on any ex-dividend date else 0.0.</p>\n<p>Split shows any split that occurred on a the given DATE else 1.0</p>\n<p>Adjusted values are adjusted for dividends and splits using the CRSP methodology.</p>\n<p>Updates of this dataset occur at 5pm ET. Subsequent corrections from the exchange are applied at 9pm ET.</p>\n<p>Data is sourced from NASDAQ, NYSE and AMEX via Quotemedia.</p>\n\n",
"refreshed_at": "2019-11-08 04:01:00 UTC",
"newest_available_date": "2017-12-28",
"oldest_available_date": "2013-09-01",
"column_names": [
"Date",
"Open",
"High",
"Low",
"Close",
"Volume",
"Dividend",
"Split",
"Adj_Open",
"Adj_High",
"Adj_Low",
"Adj_Close",
"Adj_Volume"
],
"frequency": "daily",
"type": "Time Series",
"premium": true,
"limit": null,
"transform": null,
"column_index": null,
"start_date": "2017-12-28",
"end_date": "2017-12-28",
"data": [
[
"2017-12-28",
190.91,
190.98,
189.64,
189.78,
3175631.0,
0.0,
1.0,
182.95836799845628,
183.02545241393943,
181.74126503183305,
181.8754338627994,
3175631.0
]
],
"collapse": null,
"order": null,
"database_id": 12910
}
}
I cannot access individual elements of the dataset.data array, and I cannot understand why.
I'm pretty sure I'm not understanding some aspect of either Google Apps Script or of the JSON data model. Thank you for your assistance.
The first comment on my question led to the answer:
var close = end_of_day_prices[0][4];
is the correct way to access the data I am seeking.
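To see why: dataset.data is an array of rows (one per trading day), and each row is itself an array, so you first pick the row and then the column. A small Python sketch of the same indexing against a trimmed copy of the response above:
import json

raw = '''{"dataset": {"column_names": ["Date", "Open", "High", "Low", "Close"],
                      "data": [["2017-12-28", 190.91, 190.98, 189.64, 189.78]]}}'''

dataset = json.loads(raw)["dataset"]
row = dataset["data"][0]                             # first (and only) row
close = row[dataset["column_names"].index("Close")]  # look the column up by name
print(close)  # 189.78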
I am learning how to get JSON data from a particular API using Python 3. I am using input() to select a word for inclusion in the URL search string. I generally get the JSON results I expect, including a count of records returned. However, for some search words the JSON is returned as expected but the except clause is invoked and the records are not counted. I think the problem has to do with the try and except. Input words that work as expected include "pool" and "Pool", and the results include both "pool" and "Pool"; but "captain" or "Captain" does not give a final count even though there are records with "Captain" and the JSON is returned.
My code is below, along with a sample of the JSON. I'd be grateful for any assistance with this:
import urllib.request, urllib.parse
import json

what = input('What?: ')
what = what.strip()
url = 'http://collections.anmm.gov.au/advancedsearch/objects/' + 'title:' + what + '/' + 'json'
#print('Retrieving: ', url)
connection = urllib.request.urlopen(url)
data = connection.read().decode()
try:
    results = json.loads(data)
    #print(results)
    print('Retrieving: ', url)
    count = 0
    if results['objects']:
        for item in results['objects']:
            objectNumber = item['invno']['value']
            title = item['title']['value']
            date = item['displayDate']['value']
            count = count + 1
            print(objectNumber, '', title, '', date)
    print(count, 'records with ', what, ' returned')
except:
    print('No search results returned')
And my JSON sample:
{
  "objects": [
    {
      "displayDate": {
        "label": "Date",
        "value": 1930,
        "order": 3
      },
      "invno": {
        "label": "Object No",
        "value": "ANMS0427[063]",
        "order": 9
      },
      "id": {
        "label": "Id",
        "value": 88089,
        "order": 0
      },
      "title": {
        "label": "Title",
        "value": "Article by Donald McLean titled 'Lure of buried treasure; from Carolina to Cocos' about pirates: Captain blackbeard, Captain Kidd and Captain Edward Davis",
        "order": 2
      }
    }
  ]
}
If you debug your code, you'll see that the problem is with this instruction:
date = item['displayDate']['value']
Most likely some records do not contain a displayDate field, so this lookup raises a KeyError, which your bare except then reports as "No search results returned".
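One way to make the loop robust, as a minimal sketch (using the same results and what variables as in your code): use dict.get with defaults so a record lacking a field doesn't raise a KeyError that the bare except swallows:
count = 0
for item in results['objects']:
    # .get() returns a default instead of raising KeyError when the key is absent
    objectNumber = item.get('invno', {}).get('value', '')
    title = item.get('title', {}).get('value', '')
    date = item.get('displayDate', {}).get('value', '')
    count = count + 1
    print(objectNumber, '', title, '', date)
print(count, 'records with ', what, ' returned')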
I am making a React app that searches for a book by title and returns the results.
It's mostly working fine, but for some titles searched (such as "hello") it can't use the results because some parameters are missing.
Specifically, the "amount" value is missing, and it can return e-books that are not for sale even if I add the filter=paid-ebooks param when fetching from the API. Using projection=full doesn't help either.
For example, when I call the API with
https://www.googleapis.com/books/v1/volumes?printType=books&filter=paid-ebooks&key=${APIKEY}
and use the fetched data inside the books array in React:
this.props.books.map((book, index) => {
  return (
    <CardItem
      key={index}
      title={book.volumeInfo.title}
      authors={book.volumeInfo.authors ?
        book.volumeInfo.authors.join(', ') :
        "Not provided"}
      price={book.saleInfo.listPrice.amount}
      publisher={book.volumeInfo.publisher}
      addToCart={() =>
        this.props.addItem(this.props.books[index])}
    />
  )
})
One of the results it gets is like this:
"saleInfo": {
"country": "TR",
"saleability": "NOT_FOR_SALE",
"isEbook": false
}
while what's expected is:
"saleInfo": {
"country": "TR",
"saleability": "FOR_SALE",
"isEbook": true,
"listPrice": {
"amount": 17.23,
"currencyCode": "TRY"
}
Trying to render this API response throws the error:
TypeError: Cannot read property 'amount' of undefined
price={book.saleInfo.listPrice.amount}
As you can see with authors in the React code, this issue comes up with the authors parameter too, which I've worked around as shown. But I cannot do the same with amount. Is this a known error in the Google Books API, or is there a way to prevent this? I don't understand why it still returns e-books that are not for sale even with the filter=paid-ebooks param.
I have not dug into the API documentation. An ideal solution would be a query param that only sends back books with a list price (like you tried with filter=paid-ebooks). Because that's not working, a simple fix would be to filter your results once you get them.
Assuming the response contains an array of book objects, it would look something like this:
const paidBooks = apiResponse.data.filter(book => book.saleInfo && book.saleInfo.listPrice)
This code takes the response from the API and filters out all books that do not contain a truthy value for saleInfo.listPrice.
That's right. Actually, I have never used React, but by the same logic you can try wrapping the access in try { } catch (error) { } to handle that missing data.
I am trying to make a localized version of this app: SMS Broadcast Ruby App.
I have been able to get the JSON data from a local file and sanitize the number, as well as parse the JSON data. However, I have been unable to extract the values and pair them as a scrubbed hash. Here's what I have so far.
def data_from_spreadsheet
  file = open(spreadsheet_url).read
  JSON.parse(file)
end

def contacts_from_spreadsheet
  contacts = {}
  data_from_spreadsheet.each do |entry|
    puts entry['name']['number']
    contacts[sanitize(number)] = name
  end
  contacts
end
Here's the JSON data sample I'm working with.
[
  {
    "name": "Michael",
    "number": 9045555555
  },
  {
    "name": "Natalie",
    "number": 7865555555
  }
]
Here's how I would like the data to be expressed after the contacts_from_spreadsheet method:
{
  '19045555555' => 'Michael',
  '17865555555' => 'Natalie'
}
Any help would be much appreciated.
You could create an array of pairs (hashes) using map and then call reduce to get a single hash.
data = [
  {
    "name": "Michael",
    "number": 9045555555
  },
  {
    "name": "Natalie",
    "number": 7865555555
  }
]

data.map { |e| { e[:number] => e[:name] } }.reduce Hash.new, :merge
Result: {9045555555=>"Michael", 7865555555=>"Natalie"}
You don't seem to have number or name extracted anywhere. I think you'll first need to update your code to get those details.
I.e., if entry is a JSON object (or rather was, before parsing), you can do the following:
def contacts_from_spreadsheet
  contacts = {}
  data_from_spreadsheet.each do |entry|
    contacts[sanitize(entry['number'])] = entry['name']
  end
  contacts
end
This doesn't really keep the function within JSON, but I have solved the problem. Here's what I used. (YAML.load works on the file because JSON is essentially a subset of YAML.)
def data_from_spreadsheet
  file = open(spreadsheet_url).read
  YAML.load(file)
end

def contacts_from_spreadsheet
  contacts = {}
  data_from_spreadsheet.each do |entry|
    name = entry['name']
    number = entry['phone_number'].to_s
    contacts[sanitize(number)] = name
  end
  contacts
end
This returned a clean hash:
{"+19045555555"=>"Michael", "+17865555555"=>"Natalie"}
Thanks everyone who added input!