How to add more than 10 locations for pass.json

I want to show many locations on the lock screen. In my testing, only 10 locations are shown. Is there any way to show more than 10 locations?

Rachel is correct: Passbook will only recognise the first 10 locations included in pass.json. Any locations beyond the first 10 are ignored.
The workaround that you link to proposes the following:
You create a location enabled app
Whenever your app detects a significant location change, it signals your server, providing the pass serial number and the new location
Your server then selects the 10 closest locations, compiles a new pass and pushes it to the device (a sketch of this selection step follows the sample JS below)
Depending on how sophisticated you want to get in determining the most appropriate locations, it could be a bit of work. It also doesn't make for a great user experience, since location monitoring will drain the battery and constantly pushing pass updates will consume data.
Three alternative approaches are:
Letting the user select the 10 most appropriate locations for them, or
Updating the locations whenever the pass is used. If the pass is scanned, then you can use the location of the scanning device to determine the 10 closest locations and push an updated pass, or
Adding a unique link on the back of the pass to an HTML5 page that grabs their current location with JavaScript (see below), then initiates a push. E.g. To update your pass with the 10 nearest locations, click the link below: http://www.yourservice.com/?passSerial=xxxx
Sample location JS:
<script>
  if (navigator.geolocation) {
    navigator.geolocation.getCurrentPosition(success, fail);
  }

  function success(a) {
    // focus() is required to force an update of the field value in WebKit browsers
    $("#long").val(a.coords.longitude).focus();
    $("#lat").val(a.coords.latitude).focus();
    // initiate an AJAX callback to push the new pass and alert the user that it is on the way
  }

  function fail() {
    alert("You must give permission to provide your location, please refresh this page and try again");
  }
</script>
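For the server side of that workaround, here is a minimal sketch (Node.js) of the "select the 10 closest locations" step, assuming a plain array of location records; the names (locations, haversineKm, tenNearest) are illustrative and not part of any Passbook API:

const EARTH_RADIUS_KM = 6371;

// Great-circle distance between two coordinates, in kilometres
function haversineKm(lat1, lon1, lat2, lon2) {
  const toRad = deg => deg * Math.PI / 180;
  const dLat = toRad(lat2 - lat1);
  const dLon = toRad(lon2 - lon1);
  const a = Math.sin(dLat / 2) ** 2 +
            Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) ** 2;
  return 2 * EARTH_RADIUS_KM * Math.asin(Math.sqrt(a));
}

// locations: [{ latitude, longitude, relevantText }, ...]
function tenNearest(locations, userLat, userLon) {
  return [...locations]
    .sort((a, b) =>
      haversineKm(userLat, userLon, a.latitude, a.longitude) -
      haversineKm(userLat, userLon, b.latitude, b.longitude))
    .slice(0, 10); // Passbook only honours the first 10 entries
}

The resulting array can then be dropped into the locations key of pass.json before the pass is re-signed and pushed.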

Related

AnyLogic GIS: programmatically search for schools in a given location

Without using the search option in the GIS map in AnyLogic, I want AnyLogic to take a user input, which is the name of a location, and then place an agent in that location. Then, as the model runs, I want it to search for schools near that agent/location and make the schools found into a collection. This is important for me to compute some aspects further. How do I do this using Java code?
I am able to achieve all this using individual searches in the search window of the GIS map in AnyLogic. But I want it to happen automatically: once a user types a location as an input in the simulation window (using a parameter or so), the map should place the agent there, search for schools, place agents at those schools, and turn these agents into a collection to be used in another function for computing. I want to automate this with code. Please help. Thanks in advance.
You will need to set up a population of your agents and specify their initial location based on a parameter that you create inside your custom agent.
For this simple example, I created a variable location of type String where a user can input the location where they want to search for schools.
Then inside the create Agents button I added this code:
// Find the location we are searching for as a GPS point
GISPoint point = map.searchFirst(location);

// Center the visible map on this location
map.setCenterLatitude(point.getLatitude());
map.setCenterLongitude(point.getLongitude());
map.setMapScale(1/1000000.0);

// Set the search bounds to be within a range around the location we found
map.setSearchBounds(point.getLatitude()-5, point.getLongitude()-5, point.getLatitude()+5, point.getLongitude()+5);

// Search for points within the map's searchable area and create a new agent for each
List<GISPoint> schools = map.search("School");
for (GISPoint gisPoint : schools) {
    add_myAgent(gisPoint);
}
It works in testing; however, the results for "School" in the searchable area around New York were very sparse. But this is the case even when searching manually.

Obtain list of My Places from Google Maps

I am trying to obtain the list of places the user has saved on Google Maps. Now I know there isn't an API for this (for whatever reason), but I saw here:
"My Places" Google Maps API
That apparently there used to be a way to obtain the URL, but it does not seem to work with my list of places.
E.g.
https://www.google.com/maps/#46.889424,0.1194148,6z/data=!4m3!11m2!2s1KbZtik1IdXyNhwfXEb3P9vaZvzU!3e3
Does not seem to work if I append &output=kml or &output=json
I created this list on Google Maps, then hit share and obtained that link.
I even tried parsing the resulting HTML, but it seems everything is handled by some JavaScript engine and I can't find any reference to Google IDs there; I don't even know how they handle clicks!
Any help? There must be a way to retrieve this information programmatically!
EDIT:
I managed to get something working by visiting the shared link, then processing the HTML and storing the window.APP_INITIALIZATION_STATE variable. I then convert it to a JavaScript array and loop over it. Deep inside the array/map structure, I managed to get the Google name and Google place ID out of that array. That works to a degree, but for lists over 20 items long, Google only returns the first 20 and waits for the user to 'scroll down' to get the next 20. Scrolling triggers another call to fetch the next 20 results, which looks a bit like:
https://www.google.com/search?tbm=map&fp=1&authuser=0&hl=en&gl=nl&pb=!4m8!1m3!1d54065472.4384380........
I can see the original feature ID included at the end of the URL, but I have no idea how to construct this URL in full to get the next 20 items. Any ideas?
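A rough sketch of the extraction step described in this edit (Node 18+); the regex and the ";window." terminator are assumptions about the current page structure and will break whenever Google changes it:

async function fetchListState(sharedListUrl) {
  // Fetch the shared-list page and pull out window.APP_INITIALIZATION_STATE
  const page = await (await fetch(sharedListUrl)).text();
  const match = page.match(/window\.APP_INITIALIZATION_STATE\s*=\s*(\[[\s\S]*?\]);window\./);
  if (!match) throw new Error("APP_INITIALIZATION_STATE not found");
  // The blob is close enough to JSON that JSON.parse usually works;
  // walk the nested arrays to find names and feature IDs as described above.
  return JSON.parse(match[1]);
}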
Your saved places list actually has what you could call a feature ID attribute. Relying on it isn't common practice and Google frowns upon this technique, but take a look at this URL:
https://www.google.com/maps/preview/entity?authuser=0&hl=en&gl=us&pb=!1m10!1s0x0%3A0x3743ae09a161976b!3m8!1m3!1d14318.72623152007!2d-98.2296425!3d26.2070353!3m2!1i1024!2i768!4f13.1!12m3!2m2!1i392!2i106!13m57!2m2!1i203!2i100!3m2!2i4!5b1!6m6!1m2!1i86!2i86!1m2!1i408!2i200!7m42!1m3!1e1!2b0!3e3!1m3!1e2!2b1!3e2!1m3!1e2!2b0!3e3!1m3!1e3!2b0!3e3!1m3!1e8!2b0!3e3!1m3!1e3!2b1!3e2!1m3!1e9!2b1!3e2!1m3!1e10!2b0!3e3!1m3!1e10!2b1!3e2!1m3!1e10!2b0!3e4!2b1!4b1!9b0!14m3!1snyc5W-WeHY3r5gLwkoRI!7e81!15i10112!15m19!2b1!5m4!2b1!3b1!5b1!6b1!10m1!8e3!14m1!3b1!17b1!24b1!25b1!26b1!30m1!2b1!36b1!52b1!53b1!21m28!1m6!1m2!1i0!2i0!2m2!1i458!2i768!1m6!1m2!1i974!2i0!2m2!1i1024!2i768!1m6!1m2!1i0!2i0!2m2!1i1024!2i20!1m6!1m2!1i0!2i748!2m2!1i1024!2i768!22m1!1e81!29m0!30m1!3b1
Buried in there is the feature ID from the link you posted:
https://www.google.com/maps/#46.889424,0.1194148,6z/data=!4m3!11m2!2s1KbZtik1IdXyNhwfXEb3P9vaZvzU!3e3
Along with other Maps parameters. When you hit that link, you're actually manually triggering the same callback that Google's own scripts in Maps use to parse the data fed back to the Maps UI. If you look at array item 2, or {c:..}, you'll find a stringified array with the contents of your list. Depending on the programming language you're using, all it takes is a little tweaking (find/replace, loop through, lint and trim, etc.) of this array and you can pull your results. The cool thing is that if you add or remove a place, the next time you hit that endpoint the result is updated in real time.
Some people may call it a "hack", but it gets the job done. :)
Hope I pointed you in a direction, in the event you haven't found a solution; give this a shot.
Note that the URL has to be pasted in its entirety (SO truncated the hyperlink); copy and paste the whole thing in one shot and Google will return a text file with the arrays. In my case I curl the URLs I need and parse the returned strings as needed to pull data from Google where their API has limitations. Just a tip. :)
Also check Joel's answer; he did some research and refined some of the following information.
Pagination
You can use this tool to decode the pb parameter. PB stands for protocol buffer (protobuf) and Google uses its own flavour of it for Maps. You can find different decoders for this by googling.
In my case, the pagination was done via one parameter (8iX0). It always seems to come together with another, similar parameter (7i20), but I don't know what that one does. I can't yet confirm that this is always the case, but from my experience you're basically looking for two integers that are 20/40/60 etc. apart.
Here's what this looks like for me:
page 2 (7i20, 8i20)
page 3 (7i20, 8i40)
page 4 (7i20, 8i60)
From this information, I tried 7i20 8i00 for page 1, and that seemed to work. For lists with >100 items, it just continues like that (8i120, 8i140, etc.).
Here's a quick-and-dirty code snippet in Python. Make sure to add (long) delays if your list has many pages, as you will eventually get rate-limited by captchas if you don't. Note the 8i%s0 in the URL; make sure to put the %s back when you paste your pb block.
import html
import re
import requests

# Paste your full pb block into the URL below, keeping the 8i%s0 placeholder.
url = "https://www.google.com:443/search?tbm=map&pb=!7i20!8i%s0!..."
headers = {"Referer": "https://www.google.com/"}

def fetch_stops_from_maps():
    new_results = -1
    page = 0
    results = []
    while new_results != 0:
        new_results = 0
        x = requests.get(url % page, headers=headers)
        txt = html.unescape(x.text)
        txt = txt.split("\n")[1]
        # Each saved place appears as [null,null,<lat>,<lon>] in the response
        coords = re.findall(r"\[null,null,[0-9]{1,2}\.[0-9]{4,15},[0-9]{1,2}\.[0-9]{4,15}]", txt)
        print(len(coords))
        for cord in coords:
            # curr = the description you can manually type in when saving
            curr = txt.split(cord)[1].split("\"]]")[0]
            curr = curr[curr.rindex(",\"") + 2:]
            parts = cord.split(",")
            lat = parts[2]
            lon = parts[3][:-1]
            results.append((lat, lon, curr))
            new_results += 1
        page += 2  # 8i%s0: page 0 -> 8i00, page 2 -> 8i20, ...
    return results
Actually getting the correct URL
Getting the correct URL currently seems to be the hardest part of doing this, and I have not fully figured it out either. However, for my use case this is not really important, so I extracted the correct pb block once and called it a day.
As explained in the other answers, the ID of the list is visible in the basic URL (here, the 2sXX...) when you navigate to the list in your browser. It seems to usually be 24-32 (?) characters long.
.../maps/<coords>/data=!4m3!11m2!2sXXXX...XXXX!3e3
If you have this ID, you can put it into an existing protobuf block and it may work (I have only tested this with 3 different lists, all created by the same account, so this theory is far from proven).
Now, how do you get the block? I would just share the one I have, but because I only understand parts of what it does, I fear it may contain some personal info. Instead, I will share my process of getting it. For this I use Burp Suite, a program mainly used for web-security testing that has a free community edition. For our use case it is the perfect tool, because it lets you easily tinker with requests: change small parts, send the request again, and immediately see whether your change affected the response. That said, for extracting the pb block, any program that can intercept browser traffic should work.
Here's the basic rundown with Burp:
From GMaps, share a list that has >20 items (this is important) and copy the public link
In Burp, go to the tab "Proxy", make sure "Intercept" is off and click "Open browser" to open the integrated chromium browser
There, paste the link and wait until Maps has loaded completely
In Burp, turn "Intercept" on, then in Google Maps, scroll down in the list until it starts loading new results (always in blocks of 20)
Burp has now intercepted all requests the browser made since you turned intercepting on. Click "Forward" and go through the requests until you see one in the format
GET /search?tbm=map&authuser=0&hl=de&gl=de&pb=!7i20....
This is what you're looking for.
Optionally, you can now right-click the request text, click "Send to Repeater", and switch to the Repeater tab. There you can edit the request, send it again, and see the response immediately. For example, after removing the authuser, hl, gl, q, ech, and psi URL parameters, the request still works flawlessly. If you remove the tch=1 parameter, the response you get will be in a more human-readable format.
In the request text you should now be able to search for the list ID you got from the link previously (the search bar is at the bottom in Burp) and replace it with the ID of another list. As I said, this worked for me, but it is possible that the pb block contains some additional metadata that makes lists from different Google accounts, or different types of lists, incompatible with specific pb blocks. Just a theory though. Let me know how it goes!
Further automation
I have theorised that one could automate getting the pb block using requests-html, since it can fully render HTML pages, but that project doesn't get updated anymore. Another option (probably the better one) is Selenium Wire, as you should be able to load the page and intercept the requests like we did in Burp. Seems like a whole lot of work though :D
The only API I was able to find was this:
https://www.google.com/bookmarks/?output=xml
Used in a browser, you would first have to log in through Google's OAuth. It then returns your saved places. I'm not sure at the moment how you would embed the authentication to do this programmatically, but this might send you in the right direction.
I was able to extract the data I needed from my Google Maps list. Below are some comments that expand on some of the other comments here, along with a script that extracts all of the relevant data points from the network response.
Obtaining the underlying URL
You can easily find this URL by just opening the devtools on your browser, going to the network tab, and refreshing the webpage or scrolling down on the list until it loads new results (the list must be larger than 20 results). You should be able to find the network request that starts with https://www.google.com/search?tbm=map&pb... and go from there.
Increase the results size
I was able to increase the number of results returned from the request by changing the value of the 7i20 parameter. From what I can tell, the 7iXX parameter is the size of the page, and the 8iXX parameter is the starting point. I haven't tested how large you can make the page limit, but I tested 100 and it seemed to work fine. This should make dealing with larger lists much easier.
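As a rough illustration, once you have captured a working pb block from devtools, adjusting the page size and offset can be simple string substitution (a sketch; capturedPb is a placeholder for your captured block, and the parameter meanings are the reverse-engineered guesses above):

// Rewrite the captured pb block so one request returns a bigger page
function pagedUrl(pbBlock, pageSize, offset) {
  const pb = pbBlock
    .replace(/!7i\d+/, `!7i${pageSize}`) // 7iXX: page size
    .replace(/!8i\d+/, `!8i${offset}`);  // 8iXX: starting offset
  return `https://www.google.com/search?tbm=map&pb=${pb}`;
}

// e.g. one 100-item page instead of five 20-item pages:
const url = pagedUrl(capturedPb, 100, 0);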
Parsing out the data
Instead of using regex to parse out the relevant data from the response, I found that the response is basically just a massive JSON object and I was able to identify the indexes for specific types of data, such as the name of the place, location, notes, etc. See the script below.
If you look at the buildResults function in the script below, you can see the exact indexes used to extract specific pieces of information. These may of course change over time if the network response changes format at all, so use them as a starting point in case the specific values aren't at those indexes anymore; hopefully they will still be close to those locations.
Script to parse the data (JavaScript / Node.js)
// Insert the raw text content from the network response from the
// https://www.google.com/search?tbm=map&pb... url below.
const rawInput = null

function prepare(input) {
  // There are 5 random characters before the JSON object we need to remove.
  // Also I found that the newlines were messing up the JSON parsing,
  // so I removed those and it worked.
  const preparedForParsing = input.substring(5).replace(/\n/g, '')
  const json = JSON.parse(preparedForParsing)
  const results = json[0][1].map(array => array[14])
  return results
}

function prepareLookup(data) {
  // This function takes a list of indexes as arguments,
  // constructs them into a line of code and then
  // execs the retrieval in a try/catch to handle data not being present.
  return function lookup(...indexes) {
    const indexesWithBrackets = indexes.reduce((acc, cur) => `${acc}[${cur}]`, '')
    const cmd = `data${indexesWithBrackets}`
    try {
      const result = eval(cmd)
      return result
    } catch (e) {
      return null
    }
  }
}

function buildResults(preparedData) {
  const results = []
  for (const place of preparedData) {
    const lookup = prepareLookup(place)
    // Use the indexes below to extract certain pieces of data
    // or as a starting point of exploring the data response.
    const result = {
      address: {
        street_address: lookup(183, 1, 2),
        city: lookup(183, 1, 3),
        zip: lookup(183, 1, 4),
        state: lookup(183, 1, 5),
        country_code: lookup(183, 1, 6),
      },
      name: lookup(11),
      tags: lookup(13),
      notes: lookup(25, 15, 0, 2),
      placeId: lookup(78),
      phone: lookup(178, 0, 0),
      coordinates: {
        long: lookup(208, 0, 2),
        lat: lookup(208, 0, 3)
      }
    }
    results.push(result)
  }
  return results
}

const preparedData = prepare(rawInput)
const listResults = buildResults(preparedData)
console.log(listResults)

Permanent links to thumbnails in Google Drive API

I'm using Google Drive API (PHP) to upload some photos to my Drive. When a file is uploaded, a Google_DriveFile object is returned in the response to confirm the successful transfer. It includes a field called thumbnailLink, accessible through the getThumbnailLink getter. Its content may look like this:
https://lh4.googleusercontent.com/dqVdU195R4_0ZtWxsJlhW1Fr2K30xa2hH3V1KV4UrTBl9QkhOSR0ZqN9HoB-TjEQv8SIJw=s220
Until today, I was sure that the link doesn't change by itself over time. However, when I tried to display a thumbnail of a photo I have on my Drive, using a cached address I keep in my local database, I got a 403 error - you can see it under the mentioned link. I asked the API for the current link to the thumbnail and it's now completely different.
It happened to me only once but for multiple files, i.e. all the files I had on my Drive suddenly got new thumbnail links.
Is there a way to quickly retrieve a thumbnail of a document (preferably, a photo) by some constant value or to be sure that it won't change? The perfect solution would be to access the thumbnail under a link that includes the document's id instead of some hash that may change.
Try this:
https://drive.google.com/thumbnail?authuser=0&sz=w320&id=[fileid]
Where:
sz is a size; you can specify it as w (width, e.g. w320) or h (height)
fileid is the file's ID. You can find it in the "share" menu, by right-clicking the file in the Google Drive UI.
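In code this is just string building; a trivial sketch, using the endpoint and parameters exactly as above:

// Build a thumbnail URL from the stable file ID instead of caching the
// short-lived thumbnailLink; "w320" = 320px wide, use hNNN for height.
function driveThumbnailUrl(fileId, size = "w320") {
  return `https://drive.google.com/thumbnail?authuser=0&sz=${size}&id=${fileId}`;
}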
I have gone through the API documentation, which states:
Important: Thumbnails are invalidated each time the content of the file changes. When supplying thumbnails, it is important to upload new thumbnails each time the content is modified.
According to this, a new thumbnail is only generated when the contents of the file are modified. In your case it is a really weird situation: the contents are not changed but the thumbnails are. The documentation offers no batch process for this, but there is another way around it: a web hook.
According to the documentation there is a web hook available, the Files:watch process, through which one can track the changes made to a file. Every time the contents change, the hook fires and you can refresh your cached image thumbnail.
An HTTP request can be sent to start watching a file for changes:
POST https://www.googleapis.com/drive/v2/files/fileId/watch
Here fileId is the ID of the file you want to watch.
In the request body, supply data with the following structure:
id ==> string (A UUID or similar unique string that identifies this channel.)
token ==> string (An arbitrary string delivered to the target address with each notification delivered over this channel. Optional.)
expiration ==> long (Date and time of notification channel expiration, expressed as a Unix timestamp, in milliseconds. Optional.)
type ==> string (The type of delivery mechanism used for this channel. The only option is web_hook.)
address ==> string (The address where notifications are delivered for this channel.)
If the contents change, a new thumbnail is generated and the hook will notify your address; you can then fetch the new thumbnail information.
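A minimal sketch of that watch request in JavaScript; the endpoint and body fields are the documented ones quoted above, while the access token, channel ID and address are placeholders you must supply:

// Start watching a file for changes (Drive API v2)
async function watchFile(accessToken, fileId) {
  const res = await fetch(`https://www.googleapis.com/drive/v2/files/${fileId}/watch`, {
    method: "POST",
    headers: {
      "Authorization": `Bearer ${accessToken}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      id: "a-unique-channel-id",                                // UUID or similar
      type: "web_hook",                                         // the only option
      address: "https://yourserver.example/drive-notifications" // your endpoint
      // token and expiration are optional
    }),
  });
  return res.json();
}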
Here is another solution. Let's say we store only the Google Drive ID of the images or PDFs (Google generates thumbnails for many file types).
We can send a request to Drive to get a valid thumbnail each time, since it looks like thumbnails expire even when there are no changes to the file.
In this case each thumbnail lives inside an Angular component. If you use something else, you can create an array of links and iterate through it to build proper thumbnail links.
Here is the code:
const thumb = () => {
  if (this.item.DriveId) {
    this.getThumb(this.item.DriveId, this.authToken)
      .then(response => {
        console.log(`response from service ${response}`);
        // Set thumbnail width to 300px, or any other width if needed
        this.item.externalThumbnailId = response.slice(0, -3) + 300;
      })
      // Here we can handle the case where the API limit of 10 requests/sec is exceeded
      .catch(e => {
        if (e.data.error.message == 'User Rate Limit Exceeded') {
          console.log('Failed to load thumb. trying one more time');
          setTimeout(thumb, 1000);
        } else {
          console.log(e);
        }
      });
  }
};

// Call this function on component load.
thumb();
Another solution would be to write a backend script that updates the thumbnails in your DB records.

AngularJS form wizard save progress

I have a service in AngularJS that generates all the steps needed, the current state of each step (done, current, show, etc.) and an associated directive that implements the service and displays its data. But there are 2 steps that are divided into 4 and 3 sub-steps respectively:
Step one
Discounts
Activities
Duration
Payment Length
Step two
Identification
Personal data
Payment
How can I "save" the state of my form in case the person leaves the site and comes back later? Is it safe to use localStorage? I'm no providing support for IE6 or 7. I thought of using cookies, but that can end up being weak (or not)
Either local storage or cookies should be fine. I doubt this will be an issue, but keep in mind that both have a size limit. Also, it goes without saying that the form state will only be restored if the user returns on the same browser, and without having deleted cookies / local storage.
Another option could be to save the information server side. If the user is signed in, you can make periodic AJAX calls with the data and store the state on the server. When the user finishes all steps, you can make an AJAX call telling the server to delete any saved data it might have. This allows you to restore state even if the user returns on a different browser, as long as he is signed in.
Regardless of what direction you go with this, you can use jQuery's serialize method to serialize the form into a string and save it using your choice of storage.
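For instance, a minimal localStorage version might look like this (a sketch; #wizardForm and the storage key are placeholders, and the naive restore below only handles text-like inputs):

const STORAGE_KEY = "wizardFormState"; // placeholder key

function saveProgress() {
  // serialize() produces "field1=a&field2=b&..."
  localStorage.setItem(STORAGE_KEY, $("#wizardForm").serialize());
}

function restoreProgress() {
  const saved = localStorage.getItem(STORAGE_KEY);
  if (!saved) return;
  for (const pair of saved.split("&")) {
    const [name, value] = pair.split("=").map(s => decodeURIComponent(s.replace(/\+/g, " ")));
    // Checkboxes and radios need extra handling; this covers text-like fields
    $(`#wizardForm [name='${name}']`).val(value);
  }
}

// Save on every change; call restoreProgress() on page load,
// and localStorage.removeItem(STORAGE_KEY) when the wizard completes.
$("#wizardForm").on("change", saveProgress);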

Stream Position Returned By Box API Cannot Be Used To Track Events

Thanks for your reply for my question: Is this a bug of Box API v2 when getting events
This is a new, related problem: I cannot reliably use the next_stream_position I got from previous calls to track events.
Consider the following two GET HTTP queries:
1. GET https://api.box.com/2.0/events?stream_position=1336039062458
This one returns JSON containing one file entry for myfile.pdf and the next stream position = 1336039062934
2. GET https://api.box.com/2.0/events?stream_position=1336039062934
This call uses the stream position I got from the first call. However, it returns JSON containing exactly the same file entry for myfile.pdf as the first call.
I think if the first call gives a stream position, it should act as a marker for that exact time (say, Time A). If I use that stream position in subsequent queries, no events before Time A should be returned.
Is this a bug? Or did I use the API in the wrong way?
Many thanks.
Box’s /events endpoint is focused on delivering to you a highly reliable list of all the events relevant to your Box account. Events are registered against a time-sequenced list we call the stream_position. When you hit the /events API and pass in a stream_position we respond to you with the events that happened slightly before that stream position, up to the current stream_position, or the chunk_size, whichever is lesser. Due to timing lag and our preference to make sure you don’t miss some event, you may receive duplicate events when you call the /events API. You may also receive events that look like they are ‘before’ events that you’ve already received. Our philosophy is that it is better for you to know what has happened, than to be in the dark and miss something important.
Box events currently give you a window roughly 5 seconds into the past, so that you don't miss some event.
We have considered just delaying the events we send you by about 5 seconds and de-duplicating the events on our side, but at this point we've turned the dial more towards real-time. Let us know if you'd prefer a fully de-duped stream that was slower.
For now (in beta), the best approach is to write your client to check for duplicate events and discard them. We are about to add an event_id to the payload so you can de-duplicate on that. Until then, you'll have to look at a bunch of fields, depending on the event type... it's probably more challenging than it is worth.
To help you figure out whether an event is a duplicate, we have now added a unique event_id to each event. It is our intention that the event_id will allow you to de-duplicate the responses you receive from subsequent GET /events calls.
You can see this reflected in the updated documentation here, including example payloads.
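In client code, the de-duplication described above can be as small as a Set keyed on event_id (a sketch; boxGet stands in for whatever authenticated HTTP helper you use, and handleEvent for your own processing):

const seenEventIds = new Set();

async function pollEvents(streamPosition) {
  // boxGet is a placeholder for your authenticated GET helper
  const body = await boxGet(`https://api.box.com/2.0/events?stream_position=${streamPosition}`);
  for (const event of body.entries) {
    if (seenEventIds.has(event.event_id)) continue; // duplicate from the overlap window
    seenEventIds.add(event.event_id);
    handleEvent(event); // your own processing
  }
  return body.next_stream_position; // feed this into the next poll
}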