I am trying to obtain the list of places the user has saved on Google Maps. I know there isn't an API for this (for whatever reason), but I saw here:
"My Places" Google Maps API
that there apparently used to be a way to obtain the data via a URL, but it does not seem to work with my list of places.
E.g.
https://www.google.com/maps/#46.889424,0.1194148,6z/data=!4m3!11m2!2s1KbZtik1IdXyNhwfXEb3P9vaZvzU!3e3
Does not seem to work if I append &output=kml or &output=json
I created this list on Google Maps, then hit share and obtained that link.
I even tried parsing the resulting HTML, but it seems everything is handled by some JavaScript engine and I can't find any reference to Google IDs there. I don't even know how they handle clicks!
Any help? There must be a way to retrieve this information programmatically!
EDIT:
I managed to get something working by visiting the shared link, processing the HTML and extracting the window.APP_INITIALIZATION_STATE variable. I then convert it to a JavaScript array and loop over it. Deep inside the nested array/map structure I managed to dig out the Google name and Google place ID. That mostly works, but for lists with more than 20 items Google only returns the first 20 and waits for the user to scroll down before fetching the next 20. Scrolling triggers another call, which looks a bit like:
https://www.google.com/search?tbm=map&fp=1&authuser=0&hl=en&gl=nl&pb=!4m8!1m3!1d54065472.4384380........
I can see the original feature ID included at the end of the URL, but I have no idea how to construct this URL in full to get the next 20 items. Any ideas?
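Roughly what that first step looks like (a minimal sketch only; the markup around APP_INITIALIZATION_STATE is undocumented, so the regex and the JSON.parse call below are assumptions that may need adjusting):
// Minimal sketch of the APP_INITIALIZATION_STATE approach (Node 18+, global fetch).
const SHARED_LINK = 'https://www.google.com/maps/...'; // paste the shared list link here

(async () => {
  const html = await (await fetch(SHARED_LINK)).text();
  const match = html.match(/window\.APP_INITIALIZATION_STATE\s*=\s*(\[[\s\S]*?\]);window\./);
  if (!match) throw new Error('APP_INITIALIZATION_STATE not found');
  // If JSON.parse chokes, the blob may need some cleanup before it parses.
  const state = JSON.parse(match[1]);
  // The names and place IDs are buried deep inside this nested array;
  // dump it and search for your list entries by hand first.
  console.dir(state, { depth: 6 });
})();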
Your saved places list does have what you call a feature ID attribute. This isn't a common practice and Google frowns upon the technique, but take a look at this URL:
https://www.google.com/maps/preview/entity?authuser=0&hl=en&gl=us&pb=!1m10!1s0x0%3A0x3743ae09a161976b!3m8!1m3!1d14318.72623152007!2d-98.2296425!3d26.2070353!3m2!1i1024!2i768!4f13.1!12m3!2m2!1i392!2i106!13m57!2m2!1i203!2i100!3m2!2i4!5b1!6m6!1m2!1i86!2i86!1m2!1i408!2i200!7m42!1m3!1e1!2b0!3e3!1m3!1e2!2b1!3e2!1m3!1e2!2b0!3e3!1m3!1e3!2b0!3e3!1m3!1e8!2b0!3e3!1m3!1e3!2b1!3e2!1m3!1e9!2b1!3e2!1m3!1e10!2b0!3e3!1m3!1e10!2b1!3e2!1m3!1e10!2b0!3e4!2b1!4b1!9b0!14m3!1snyc5W-WeHY3r5gLwkoRI!7e81!15i10112!15m19!2b1!5m4!2b1!3b1!5b1!6b1!10m1!8e3!14m1!3b1!17b1!24b1!25b1!26b1!30m1!2b1!36b1!52b1!53b1!21m28!1m6!1m2!1i0!2i0!2m2!1i458!2i768!1m6!1m2!1i974!2i0!2m2!1i1024!2i768!1m6!1m2!1i0!2i0!2m2!1i1024!2i20!1m6!1m2!1i0!2i748!2m2!1i1024!2i768!22m1!1e81!29m0!30m1!3b1
Contained in that URL is the feature ID from the link you posted:
https://www.google.com/maps/#46.889424,0.1194148,6z/data=!4m3!11m2!2s1KbZtik1IdXyNhwfXEb3P9vaZvzU!3e3
Along with other Maps parameters. When you hit that link, you're actually manually triggering the same callback that Google's own Maps scripts use to fetch the data that feeds the Maps UI. If you look at array item 2, or {c:..}, you'll find a stringified array with the contents of your list; depending on the language you're using, all it takes is a little tweaking (find/replace, looping through, linting and trimming, etc.) of this array to pull out your results. The cool thing is that if you add or remove a place, the next time you hit that endpoint the response is updated in real time.
Some people may call it a "hack", but it gets the job done. :)
Hope this points you in a direction in case you haven't found a solution; give it a shot.
Note that the URL has to be pasted in its entirety (SO truncated the hyperlink); copy and paste the whole thing in one shot and Google will return a text file containing the arrays. In my case I curl the URLs I need and parse the returned strings as needed to pull data from Google where their API has limitations. Just a tip. :)
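For what it's worth, the curl-and-parse step can look roughly like this in Node (a sketch, not the exact code used above; stripping the prefix before JSON.parse is an assumption, since these responses carry a few junk characters before the array):
// Sketch only (Node 18+): fetch the /maps/preview/entity URL in its entirety and parse it.
const ENTITY_URL = 'https://www.google.com/maps/preview/entity?authuser=0&hl=en&gl=us&pb=...'; // paste the full URL

(async () => {
  const text = await (await fetch(ENTITY_URL)).text();
  // Strip everything before the first '[' to drop the short anti-JSON prefix.
  const json = JSON.parse(text.slice(text.indexOf('[')));
  // As described above, one of the entries holds a stringified array with the list contents;
  // dump the structure and dig in from there.
  console.dir(json, { depth: 3 });
})();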
Also check Joel's answer; he did some research and refined some of the information that follows.
Pagination
You can use this tool to decode the pb parameter. pb stands for protocol buffer (protobuf), and Google uses its own flavour of it for Maps. You can find various decoders for it by googling.
In my case, the pagination was done via one parameter (8iX0). It always seems to come together with another, similar parameter (7i20), but I don't know what that one does. I can't yet confirm that this is always the case, but from my experience you're basically looking for two integers that are 20/40/60 etc. apart.
Here's what this looks like for me:
page 2 (7i20, 8i20)
page 3 (7i20, 8i40)
page 4 (7i20, 8i60)
From this information I tried 7i20 8i00 for page 1, which seemed to work. For lists with >100 items it just continues like that (8i120, 8i140, etc.).
Here's a code snippet in Python (quick & dirty). Make sure to add (long) delays if your list has many pages, as you will eventually get rate-limited by captchas if you don't. Notice the 8i%s0 in the URL; make sure to put the %s back in when you paste your own pb-block.
url = "https://www.google.com:443/search?tbm=map&pb=!7i20!8i%s0!..."
headers = {"Referer": "https://www.google.com/"}
def fetch_stops_from_maps():
new_results = -1
page = 0
results = []
while new_results != 0:
new_results = 0
x = requests.get(url % page, headers=headers)
txt = html.unescape(x.text)
txt = txt.split("\n")[1]
results = re.findall(r"\[null,null,[0-9]{1,2}\.[0-9]{4,15},[0-9]{1,2}\.[0-9]{4,15}]", txt)
print(len(results))
for cord in results:
# curr = the description you can manually type in when saving
curr = txt.split(cord)[1].split("\"]]")[0]
curr = curr[curr.rindex(",\"") + 2:]
cords = str(cord).split(",")
lat = cords[2]
lon = cords[3][:-1]
results.append(s)
new_results += 1
page += 2
Actually getting the correct URL
Getting the correct URL currently seems to be the hardest part of this, and I have not fully figured it out either. However, for my use case this is not really important, so I extracted the correct pb-block once and called it a day.
As explained in the other answers, the ID of the list is visible in the basic URL (below, the 2sXX...) when you navigate to the list in your browser. It seems to usually be 24-32 (?) characters long.
.../maps/<coords>/data=!4m3!11m2!2sXXXX...XXXX!3e3
If you have this id, you can put it into an existing protobuf-block and it may work (I only tested this with 3 different lists, which were all created by the same account, so this theory is far from proven).
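To be concrete, the substitution itself is trivial once you have a working block (a sketch; PB_TEMPLATE stands for a pb-block you captured yourself, with the original list ID swapped for a placeholder):
// Sketch only: drop another list's ID into a pb-block you captured yourself.
const PB_TEMPLATE = '!7i20!8i0!...2s__LIST_ID__!...'; // your captured block, old ID replaced by __LIST_ID__
const LIST_ID = '1KbZtik1IdXyNhwfXEb3P9vaZvzU';       // the 2s... ID from the share link

const url = 'https://www.google.com/search?tbm=map&pb=' +
            PB_TEMPLATE.replace('__LIST_ID__', LIST_ID);
console.log(url);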
Now, how do you get the block? I would just share the one I have, but because I only understand parts of what it does, I fear it may contain some personal info. Instead, I will share my process for getting it. For this I use Burp Suite, a program mainly used for web-security testing that has a free community edition. For our use case it is the perfect tool, because it lets you easily tinker with a request, change small parts of it, send it again and immediately see whether your changes affected the response. For just extracting the pb-block, though, any program that can intercept browser traffic should do.
Here's the basic rundown with Burp:
From GMaps, share a list that has >20 items (this is important) and copy the public link
In Burp, go to the tab "Proxy", make sure "Intercept" is off and click "Open browser" to open the integrated chromium browser
There, paste the link and wait until Maps has loaded completely
In Burp, turn "Intercept" on, then in Google Maps scroll down in the list until it starts loading new results (always in blocks of 20)
Burp has now intercepted all requests the browser made since you turned intercepting on. Click "Forward" and step through the requests until you see one in the format
GET /search?tbm=map&authuser=0&hl=de&gl=de&pb=!7i20....
This is what you're looking for.
Optionally, you can now right-click the request text and click "Send to Repeater", then switch to the Repeater tab. There you can edit the request, send it again and see the response immediately. For example, the request still works flawlessly after removing the authuser, hl, gl, q, ech and psi URL parameters. If you remove the tch=1 parameter, the response comes back in a more human-readable format.
In the request text you should now be able to search for the list ID you got from the link earlier and replace it with the ID of another list (the search bar is at the bottom in Burp). As I said, this worked for me, but it is possible that the pb-block contains additional metadata that makes lists from different Google accounts, or different types of lists, incompatible with a specific pb-block. Just a theory though. Let me know how it goes!
Further automating
I have theorised that one could automate getting the pb-block using requests-html, since it can fully render HTML pages, but that project doesn't get updated anymore. Another option (probably the better one) is Selenium Wire, as you should be able to load the page and intercept the requests, just like we did in Burp. Seems like a whole lot of work though :D
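If you would rather stay in Node than Python, a comparable route (not what is described above, just a named alternative) is to intercept the requests with Puppeteer; a rough sketch:
// Rough sketch: capture the paginated search?tbm=map request with Puppeteer.
// (Alternative to Selenium Wire; requires `npm install puppeteer`. You would still
// have to scroll the list panel programmatically to trigger the next page of 20.)
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  page.on('request', (req) => {
    if (req.url().includes('/search?tbm=map')) {
      console.log('pb-block request:', req.url());
    }
  });
  await page.goto('https://www.google.com/maps/...', { waitUntil: 'networkidle2' }); // your shared list link
  await browser.close();
})();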
The only API I was able to find was this:
https://www.google.com/bookmarks/?output=xml
Used in a browser, you would have to first log in through Google's OAuth; it would then return your saved places. I'm not sure at the moment how you would embed the authentication to do this programmatically, but this might send you in the right direction.
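One (unofficial) way to try it programmatically is to reuse the cookies from a browser session where you are already logged in; a hedged Node sketch, where the Cookie header is just whatever your browser sends for google.com:
// Sketch only: replay the bookmarks request with cookies copied from a logged-in browser.
// This is not a supported authentication method and may stop working at any time.
const COOKIE_HEADER = 'SID=...; HSID=...; SSID=...'; // copy from devtools (placeholder values)

(async () => {
  const res = await fetch('https://www.google.com/bookmarks/?output=xml', {
    headers: { Cookie: COOKIE_HEADER }, // if your fetch refuses to send Cookie, use curl -H instead
  });
  console.log(await res.text()); // XML listing of saved places, if the cookies are valid
})();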
I was able to extract the data I needed from my Google Maps list. Below are some comments that expand on some of the other answers here, along with a script that extracts all of the relevant data points from the network response.
Obtaining the underlying URL
You can easily find this URL by just opening the devtools on your browser, going to the network tab, and refreshing the webpage or scrolling down on the list until it loads new results (the list must be larger than 20 results). You should be able to find the network request that starts with https://www.google.com/search?tbm=map&pb... and go from there.
Increase the results size
I was able to increase the number of results returned from the request by changing the value of the 7i20 parameter. From what I can tell, the 7iXX parameter is the page size and the 8iXX parameter is the starting offset. I haven't tested how large you can make the page size, but I tried 100 and it seemed to work fine. This should make dealing with larger lists much easier.
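For example, tweaking the captured URL is just a string edit (a trivial sketch; capturedUrl stands for the full search?tbm=map&pb=... URL copied from devtools):
// Bump the page size from 20 to 100 and start at the first item (sketch only).
const capturedUrl = 'https://www.google.com/search?tbm=map&pb=!7i20!8i20!...';
const biggerPage = capturedUrl.replace('!7i20!', '!7i100!').replace('!8i20!', '!8i0!');
console.log(biggerPage);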
Parsing out the data
Instead of using regex to parse out the relevant data from the response, I found that the response is basically just a massive JSON object and I was able to identify the indexes for specific types of data, such as the name of the place, location, notes, etc. See the script below.
If you look at the buildResults function in the script below, you can see the exact indexes used to extract specific pieces of information. This of course may change over time if the network response changes format at all, so use these as a starting point in the case where the specific values aren't at those indexes anymore. Hopefully they would be close to those locations
Script to parse the data (JavaScript / Node.js)
// Insert the raw text content from the network response from the
// https://www.google.com/search?tbm=map&pb... url below.
const rawInput = null
function prepare(input) {
// There are 5 random characters before the JSON object we need to remove
// Also I found that the newlines were messing up the JSON parsing,
// so I removed those and it worked.
const preparedForParsing = input.substring(5).replace(/\n/g, '')
const json = JSON.parse(preparedForParsing)
const results = json[0][1].map(array => array[14])
return results
}
function prepareLookup(data) {
// this function takes a list of indexes as arguments
// constructs them into a line of code and then
// execs the retrieval in a try/catch to handle data not being present
return function lookup(...indexes) {
const indexesWithBrackets = indexes.reduce((acc, cur) => `${acc}[${cur}]`, '')
const cmd = `data${indexesWithBrackets}`
try {
const result = eval(cmd)
return result
} catch(e) {
return null
}
}
}
function buildResults(preparedData) {
const results = []
for (const place of preparedData) {
const lookup = prepareLookup(place)
// Use the indexes below to extract certain pieces of data
// or as a starting point of exploring the data response.
const result = {
address: {
street_address: lookup(183, 1, 2),
city: lookup(183, 1, 3),
zip: lookup(183, 1, 4),
state: lookup(183, 1, 5),
country_code: lookup(183, 1, 6),
},
name: lookup(11),
tags: lookup(13),
notes: lookup(25,15,0,2),
placeId: lookup(78),
phone: lookup(178,0,0),
coordinates: {
long: lookup(208,0,2),
lat: lookup(208,0,3)
}
}
results.push(result)
}
return results
}
const preparedData = prepare(rawInput)
const listResults = buildResults(preparedData)
console.log(listResults)
I logged in today to make another logic app, and I noticed that the return output for (in this example) an HTTP call has changed.
Before, I remember the whole object showing in the output of an action in the workflow. Now I just see this (picture below):
The output body is only a string in some kind of encoding...
Does the Workflow Definition Language expression I use to reach one specific value in the JSON body still work? Or was this update a major overhaul?
I'm lost here.
Does the Workflow Definition Language expression I use to reach one specific value in the JSON body still work?
Yes, we can still do it. It looks like you are at the designer overview of a run; if you want to see the full inputs and outputs, click the "Run details" tab at the top of that screen.
The content is base64 encoded. If you want to decode it within the logic app for use in another action's input, you can use the base64 functions, which are explained here (https://learn.microsoft.com/en-us/azure/logic-apps/logic-apps-workflow-definition-language#functions).
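If you just want to peek at the payload outside of the logic app, the string is plain base64; a quick Node sketch (the $content / $content-type shape shown below is an assumption about how Logic Apps wraps non-JSON bodies, so adjust it to whatever your run output actually shows):
// Decode the base64 body seen in the run output (Node).
const output = { '$content-type': 'application/octet-stream', '$content': 'eyJoZWxsbyI6IndvcmxkIn0=' };
const decoded = Buffer.from(output['$content'], 'base64').toString('utf8');
console.log(decoded); // {"hello":"world"}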
I am currently reading Async Javascript by Trevor Burnham. This has been a great book so far.
He talks about this snippet and console.log being 'async' in the Safari and Chrome console. Unfortunately I can't replicate this. Here is the code:
var obj = {};
console.log(obj);
obj.foo = 'bar';
// my outcome: Object{}; 'bar';
// The book outcome: {foo:bar};
If this were async, I would anticipate the outcome to be the book's outcome: console.log() would be put in the event queue until all code is executed, then run, and by that time the object would have the bar property.
It appears, though, that it is running synchronously.
Am I running this code wrong? Is console.log actually async?
console.log is not standardized, so the behavior is rather undefined, and can be changed easily from release to release of the developer tools. Your book is likely to be outdated, as might my answer soon.
To our code, it does not make any difference whether console.log is async or not; it does not provide any kind of callback, and the values you pass are always referenced and computed at the time you call the function.
We don't really know what happens then (OK, we could, since Firebug, Chrome Devtools and Opera Dragonfly are all open source). The console will need to store the logged values somewhere, and it will display them on the screen. The rendering will happen asynchronously for sure (being throttled to rate-limit updates), as will future interactions with the logged objects in the console (like expanding object properties).
So the console might either clone (serialize) the mutable objects that you did log, or it will store references to them. The first one doesn't work well with deep/large objects. Also, at least the initial rendering in the console will probably show the "current" state of the object, i.e. the one when it got logged - in your example you see Object {}.
However, when you expand the object to inspect its properties further, it is likely that the console will have only stored a reference to your object and its properties, and displaying them now will then show their current (already mutated) state. If you click on the +, you should be able to see the bar property in your example.
(A screenshot that was posted in the bug report to explain their "fix" originally appeared here.)
So, some values might be referenced long after they have been logged, and the evaluation of these is rather lazy ("when needed"). The most famous example of this discrepancy is handled in the question Is Chrome's JavaScript console lazy about evaluating arrays?
A workaround is to make sure you always log serialized snapshots of your objects, e.g. by doing console.log(JSON.stringify(obj)). This will only work for non-circular and rather small objects, though. See also How can I change the default behavior of console.log in Safari?.
The better solution is to use breakpoints for debugging, where the execution completely stops and you can inspect the current values at each point. Use logging only with serialisable and immutable data.
This isn't really an answer to the question, but it might be handy to someone who stumbled on this post, and it was too long to put in a comment:
window.console.logSync = (...args) => {
try {
args = args.map((arg) => JSON.parse(JSON.stringify(arg)));
console.log(...args);
} catch (error) {
console.log('Error trying to console.logSync()', ...args);
}
};
This creates a pseudo-synchronous version of console.log, but with the same caveats as mentioned in the accepted answer.
Since it seems like, at the moment, most browsers' console.log implementations are asynchronous in some manner, you may want to use a function like this in certain scenarios.
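For example, applied to the snippet from the question (hypothetical usage):
var obj = {};
console.logSync(obj); // logs a deep-copied snapshot: {}
obj.foo = 'bar';      // this later mutation no longer affects what was logged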
When using console.log:
a = {}; a.a=1;console.log(a);a.b=function(){};
// without b
a = {}; a.a=1;a.a1=1;a.a2=1;a.a3=1;a.a4=1;a.a5=1;a.a6=1;a.a7=1;a.a8=1;console.log(a);a.b=function(){};
// with b, maybe
a = {}; a.a=function(){};console.log(a);a.b=function(){};
// with b
In the first situation the object is simple enough, so the console can 'stringify' it and then present it to you; but in the other situations, a is too 'complicated' to 'stringify', so the console shows you the in-memory object instead, and yes, by the time you look at it, b has already been attached to a.
I am doing an HTML5 app. Everything works well. The client suddenly requested that he be able to change error messages and text labels as he wishes after the code is complete, but without touching the HTML. So two solutions came to my mind.
1. Use JavaScript variables
// Error Messages
var msg_authentication_failed = "The username or password is invalid. Please try again";
and use these variables wherever I need them.
2. Use an XML file (or JSON)
<?xml version="1.0" encoding="ISO-8859-1"?>
<ErrorMessages>
<AuthenticationFail>The username or password is invalid. Please try again</AuthenticationFail>
</ErrorMessages>
Load the XML file and set values using its tag names.
However, I feel that the 2nd solution is easier for the client to maintain, but performance-wise it's not as good.
Is there any other possible way to meet this kind of requirement? I appreciate your suggestions.
I'm running a jQuery Mobile project which supports around 6 (European) languages, and what I did was simply maintain the labels in a JSON string.
Using the pagebeforeshow event, I update the form elements' labels when a page is shown. For error messages, I wrote a custom function that looks up the string based on a label ID.
JSON is far easier to handle than XML; I would suggest you go ahead with it.
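A minimal sketch of that idea (labels.json, getLabel and the element IDs below are made-up names for illustration, not from any library):
// labels.json, maintained by the client:
// { "authentication_failed": "The username or password is invalid. Please try again",
//   "login_title": "Sign in" }

var labels = {};

// Load the strings once at startup; only labels.json ever needs editing.
$.getJSON('labels.json', function (data) {
  labels = data;
});

function getLabel(labelID) {
  return labels[labelID] || labelID; // fall back to the ID if a string is missing
}

// Refresh visible labels whenever a page is about to be shown (jQuery Mobile).
$(document).on('pagebeforeshow', '#loginPage', function () {
  $('#loginTitle').text(getLabel('login_title'));
});

// And for an error message:
// showError(getLabel('authentication_failed'));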
Hi, I have been having some problems using JSON data in Flash Builder lately, and I was hoping someone could help me out here.
I have spent the past month working solidly on this issue, so I HAVE been looking around and HAVE been trying out everything I can find or think of. I am simply stuck.
I have been working on a Flex mobile application for the BlackBerry PlayBook tablet with Adobe Flash Builder 4.6. It is a Reddit app, designed to give users the main Reddit feed, subreddits, a search function, hopefully login functionality, etc. Of course I need the Reddit API to access this information; the documentation can be found here: https://github.com/reddit/reddit/wiki/API/ The API returns either XML- or JSON-formatted data.
Now onto my problem. As mentioned above, I want to display the Reddit feed inside the app, and I want to use an item renderer to customize the data shown within each entry of the list.
One entry would consist of:
1) a thumbnail of the image in the post
2) The title of the post
3) a 'like/dislike' button, but this is unimportant at the moment.
Of course, to start out, I placed a Spark List component onto the design view. Then I configured a new HTTP data service using the Data/Services panel. I specified http://www.reddit.com/r/all.json for the URL, configured the return type, and then did a Test Operation. Everything connected just fine, and all the data came through as normal. I will give you an idea of what the data comes back as so that you may understand my issue later on.
Test Operation Results (json data structure):
NoName1
  data
    after
    before
    children
      [0]
        data
          media_embed
          score
          id
          title
          thumbnail
          url
          (etc etc...)
        kind
      [1]
        data
          media_embed
          score
          title
          thumbnail
          (etc etc...)
        kind
      [2] (array continues)
    modhash
  kind
As you can see, to get to the thumbnail, for example, you would have to go through data.children[].data.thumbnail. When I tried to bind this data to the Spark List component, I specified the data service to be the one above. Then I set the Data provider option to children[], as this value is typically set to the array. Now this is where the trouble begins. The final option, Label Field, only gave me one value to choose from: 'kind'. So, as you can tell, it wasn't expecting the data to be nested any further; it stops at the two values just within each array item, which would be data and kind, though it only offers me kind. I need to go one level deeper to access title and thumbnail. This is my problem.
Now, I have analyzed the code for the binding, and I have tried altering it to accommodate the further-nested value. No success whatsoever. The following is the code that the binding generates:
<s:List
id="myList" width="100%" height="100%" change="myList_changeHandler(event)"
creationComplete="myList_creationCompleteHandler(event)" labelField="kind">
<s:AsyncListView list="{TypeUtility.convertToCollection(redditFeedJSONResult.lastResult.data.children)}"/>
</s:List>
Obviously I would want to have something along the lines of:
TypeUtility.convertToCollection(redditFeedJSONResult.lastResult.data.children.data) and then set labelField="title" or "thumbnail".
I certainly hope someone can help me with this; I am at a loss as to what to do. If you need any further clarification I would be happy to provide it. I tried to explain the situation above as clearly as possible. Thank you so much.
Ted
I have this situation often: I get XML or JSON data from the server, then try to use it as a dataProvider for a spark.components.List or an mx.controls.Menu, and they just won't display the data the way I want, because something in the data is different from what they expect. Then they display the wrong XML children or [Object,Object,etc.].
In such cases I just create an empty ArrayCollection(), which serves as the dataProvider instead (and can be sorted and/or filtered too). When data comes from the server, I add new Objects to it:
[Bindable]
private var _data:ArrayCollection = new ArrayCollection();

public function update(xlist:XMLList):void {
    _data.removeAll();
    for each (var xml:XML in xlist)
        _data.addItem({label: xml, event: xml.@event});
}
This always works. And if you run into the next problem, flickering of the List, that is solvable by merging the data as well.
Good luck with your PlayBook development; it's a cool piece of hardware :-)