Partial GET request for Google Calendar HTML download

Hi, I'm working on an Arduino project, and I'd like to display the next event from a Google calendar on a small display. I want to know if there's a way to limit the size of an HTML request to Google. Right now, when I make the request, I get my full calendar's data, which significantly slows down fetching the event. I tried a GET request with a Range: bytes=1000-3000 header, but this doesn't seem to work. Does anyone know any workarounds that don't require going through OAuth?

You want the "maxResults" parameter, and you may also want to limit the fields returned using the "fields" parameter. Check the events.list docs for details.
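For illustration, here is what that request can look like against the Calendar API v3 events.list endpoint, sketched in Python for clarity; the same query string works from an Arduino HTTP client. CALENDAR_ID and API_KEY are placeholders, an API key is only sufficient for public calendars, and the timeMin value below is a stand-in for "now":

import json
import urllib.parse
import urllib.request

# maxResults caps the response at one event; fields trims each event down
# to its summary and start time, so the payload stays small.
params = {
    "maxResults": "1",
    "singleEvents": "true",              # required for orderBy=startTime
    "orderBy": "startTime",
    "timeMin": "2024-01-01T00:00:00Z",   # placeholder: the current time, RFC 3339
    "fields": "items(summary,start)",
    "key": "API_KEY",
}
url = ("https://www.googleapis.com/calendar/v3/calendars/"
       + urllib.parse.quote("CALENDAR_ID")
       + "/events?" + urllib.parse.urlencode(params))
print(json.loads(urllib.request.urlopen(url).read()))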

Related

Pulling Data For A List Of URLs Using The Google Analytics Sheets Add-On

I've been looking into the Google Analytics Spreadsheet Add-on; however, it currently pulls the data for the entire account. I want to pull the data only for specific URLs (new blog posts, which will be updated once a month).
I understand I could use something like "ga:pagePath=~^/blog/"; however, this would only show the total number of sessions for the "/blog/" section and not for individual posts. Ideally, I want the format to be like this:
[screenshot of the desired per-post report layout]
Is this possible?
You may need to work around this manually to aggregate the correct data.
When it comes to separate posts/pages, use this:
"ga:pagePath=@YourURL" (the =@ operator means "contains substring").
If blog posts follow a naming convention, you can filter the different levels of page paths using =@ until you find the correct URL.
I always build these queries out first in the Google UA Query Explorer:
https://ga-dev-tools.web.app/query-explorer/
Here's what I tested to find the results:
[screenshot of the tested dimensions and metrics]
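For reference, the same query can be run against the Core Reporting API (v3), which is what the Query Explorer above builds URLs for. A minimal Python sketch, assuming a valid OAuth token; VIEW_ID and ACCESS_TOKEN are placeholders:

import json
import urllib.parse
import urllib.request

# The =@ ("contains substring") filter keeps only page paths under /blog/,
# and the ga:pagePath dimension splits the session counts out per post.
params = {
    "ids": "ga:VIEW_ID",
    "start-date": "30daysAgo",
    "end-date": "today",
    "metrics": "ga:sessions",
    "dimensions": "ga:pagePath",
    "filters": "ga:pagePath=@/blog/",
}
url = ("https://www.googleapis.com/analytics/v3/data/ga?"
       + urllib.parse.urlencode(params))
req = urllib.request.Request(url, headers={"Authorization": "Bearer ACCESS_TOKEN"})
report = json.loads(urllib.request.urlopen(req).read())
for path, sessions in report.get("rows", []):
    print(path, sessions)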

Google Apps Script to reply to Slack with 200 and run the actual code

I have used Google Apps Script to make an interactive app in Slack. Most of the time it works as I want it to, but in one of the steps the app has to retrieve all the issues from Jira and attach them to the payload, and also contact an external API to create a progress chart, etc. In short, it is a time-intensive process, and it sometimes fails because Slack does not get a reply within 3 seconds.
I tried using the trigger builder, but unfortunately it only has about ±15-minute accuracy despite taking a millisecond input... I need the interactions to be fairly instant.
If anyone knows another way to make this happen and could share it, it would really help.
Thank you
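For context, the pattern Slack expects here is to acknowledge the interaction within 3 seconds and deliver the slow result later through the payload's response_url. A minimal sketch of that pattern, in Python rather than Apps Script (Apps Script web apps run synchronously, which is exactly why this is hard there):

import json
import threading
import urllib.request

def handle_interaction(payload):
    # Acknowledge immediately: an empty 200 body is enough for Slack.
    response_url = payload["response_url"]  # included in Slack interaction payloads
    threading.Thread(target=do_slow_work, args=(response_url,)).start()
    return ""

def do_slow_work(response_url):
    # ...fetch the Jira issues, build the progress chart, etc. (the slow part)...
    message = {"text": "Here is your progress chart."}
    req = urllib.request.Request(
        response_url,
        data=json.dumps(message).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)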

Would it be possible to scrape data from Airbnb directly into a Google Sheet?

I'm trying to build a super simple Google Sheets dashboard comparing, in real time, the D+7 and D+30 prices of specific listings/rooms that are on both Airbnb and Booking.com.
On the Booking.com side, it was super easy: I just created a formula concatenating the URL with the check-in/check-out dates, number of guests, and trip duration as parameters, and using the =IMPORTXML function and the proper class, I was able to automatically retrieve the price.
It is more difficult on Airbnb, as the price is dynamic (see here: https://www.airbnb.com/rooms/25961741). When I use what I think is the proper class, I get an "N/A Error, Imported content is empty" in Google Sheets.
I also tried using the Airbnb API with REGEX functions to extract the price, but the price set in the listing info is a default price and does not reflect reality:
"price":1160,"price_formatted":"$1160"
https://api.airbnb.com/v2/listings/25961741?client_id=d306zoyjsyarp7ifhu67rjxn52tv0t20&_format=v1_legacy_for_p3&number_of_guests=1
Do you know if there is any other possible way to access this dynamic price and have it automatically parsed into a spreadsheet? It seems that the data I'm looking for is within meta tags in the HTML code, and I don't know if it's possible to scrape it into Google Sheets using =IMPORT functions.
Maybe with a script?
Thanks a lot!
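For reference, a quick way to check whether the price even exists in the server-rendered HTML, sketched in Python; if Airbnb renders the price client-side with JavaScript (which would explain the "Imported content is empty" error), this will print nothing:

import re
import urllib.request

url = "https://www.airbnb.com/rooms/25961741"
req = urllib.request.Request(url, headers={"User-Agent": "Mozilla/5.0"})
html = urllib.request.urlopen(req).read().decode("utf-8", errors="replace")

# List every <meta> tag that mentions a price, to see what the raw HTML exposes.
for tag in re.findall(r"<meta[^>]+>", html):
    if "price" in tag.lower():
        print(tag)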
Since you were unable to pull directly from the Airbnb API, what if you tried pulling directly from the site's search service? Have a look at this URL:
https://www.airbnb.com/api/v2/explore_tabs?version=1.3.9&satori_version=1.0.7&_format=for_explore_search_web&experiences_per_grid=20&items_per_grid=18&guidebooks_per_grid=20&auto_ib=false&fetch_filters=true&has_zero_guest_treatment=false&is_guided_search=true&is_new_cards_experiment=true&luxury_pre_launch=false&query_understanding_enabled=true&show_groupings=true&supports_for_you_v3=true&timezone_offset=-240&client_session_id=8e7179a2-44ab-4cf3-8fb8-5cfcece2145d&metadata_only=false&is_standard_search=true&refinement_paths%5B%5D=%2Fhomes&selected_tab_id=home_tab&checkin=2018-09-15&checkout=2018-09-27&adults=1&children=0&infants=0&click_referer=t%3ASEE_ALL%7Csid%3A61218f59-cb20-41c0-80a1-55c51dc4f521%7Cst%3ALANDING_PAGE_MARQUEE&allow_override%5B%5D=&price_min=16&federated_search_session_id=5a07b98f-78b2-4cf9-a671-cd229548aab3&screen_size=medium&query=Paris%2C%20France&_intents=p1&key=d306zoyjsyarp7ifhu67rjxn52tv0t20&currency=USD&locale=en
This is a GET request to Airbnb's live page search. I don't know much about Airbnb, but I can see from the listings portion of the JSON feed that it has a few pricing factors that differ from the API results you provided. I'm not sure exactly what you need to pull, but this may lead you in the right direction: check the 'listings' array and see if there's something you can use.
Keep in mind that if you are looking to automate scraping this data, you would want to generate new search sessions; but first, see whether this is the type of data you're looking for.
Another option is Google CSE's API; I've pulled data from the page headers of sites as they appear in Google, based on their Schema.org tags. But that may be delayed data, and it appears you need real-time, so the best route would be to research the above example or try to make use of Airbnb's natural API (they provide its functionality for a reason, right? There must be a way to get what you need).
Hope my answer helped lead you in the right direction!
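To make the suggestion above concrete, here is a hedged Python sketch that hits the (unofficial) explore_tabs endpoint with a reduced set of the query parameters and digs out per-listing pricing. The JSON structure navigated below is an assumption based on responses seen at the time; Airbnb may change or block this endpoint at any point:

import json
import urllib.parse
import urllib.request

params = {
    "_format": "for_explore_search_web",
    "query": "Paris, France",
    "checkin": "2018-09-15",
    "checkout": "2018-09-27",
    "adults": "1",
    "currency": "USD",
    "locale": "en",
    "key": "d306zoyjsyarp7ifhu67rjxn52tv0t20",  # public web-client key from the URL above
}
url = "https://www.airbnb.com/api/v2/explore_tabs?" + urllib.parse.urlencode(params)
req = urllib.request.Request(url, headers={"User-Agent": "Mozilla/5.0"})
data = json.loads(urllib.request.urlopen(req).read())

# Walk the assumed structure defensively, since the field names may differ.
for tab in data.get("explore_tabs", []):
    for section in tab.get("sections", []):
        for item in section.get("listings", []):
            listing = item.get("listing", {})
            quote = item.get("pricing_quote", {})
            print(listing.get("id"), quote.get("rate", {}).get("amount"))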

Facebook Graph API v2.10 page likes

Disclosure: I am not a programmer.
For the past year or so, I have been utilizing the Facebook Graph API to pull Facebook page "likes" into a spreadsheet so that I can track how many likes my page gets vs. other pages of similar businesses. It is kind of rudimentary, but it became cumbersome to have to visit every page each week to get the total page "likes", so this was my solution.
I was utilizing this formula...
=importjson(concatenate("https://graph.facebook.com/",APINames!$B11,"?access_token=",$B$1),"/likes","noHeaders")
I reference this post...
Get Facebook page like count for OpenGraph v2.10
which states that a different URL should be used to retrieve page likes:
https://graph.facebook.com/v2.10/<page-id>?access_token=<access-token>&fields=fan_count
When I input the new URL into my formula, I still receive a reference error:
=importjson(concatenate("https://graph.facebook.com/v2.10/",APINames!$B3,"?access_token=",$B$1),"&fields=fan_count","noHeaders")
If anyone could point me in the right direction, I would be very grateful. I have spent over an hour scouring the web for information, as well as reading the new changelog for v2.10. I fear going back to the manual process!
I'm not 100% sure, but you should try this:
=importjson(concatenate("https://graph.facebook.com/v2.10/",APINames!$B3,"?access_token=",$B$1,"&fields=fan_count"),"/fan_count", "noHeaders")
I think the parentheses are in the wrong place: you need to concatenate the URL, the page ID (from APINames), the access_token query parameter, the access token from cell B1, and the fields query parameter.
At least with this change I get a fan_count. Interesting use case, BTW.
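For reference, the URL the corrected formula builds can also be tested outside Sheets; a minimal Python sketch, with PAGE_ID and ACCESS_TOKEN as placeholders:

import json
import urllib.parse
import urllib.request

query = urllib.parse.urlencode({"access_token": "ACCESS_TOKEN", "fields": "fan_count"})
url = "https://graph.facebook.com/v2.10/PAGE_ID?" + query
data = json.loads(urllib.request.urlopen(url).read())
# The response looks like {"fan_count": 12345, "id": "PAGE_ID"}.
print(data["fan_count"])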

Google Documents API pagination

I am using the Google Documents API, and I need to implement pagination.
I used it as below.
First page:
qry.setMaxResults(10);
qry.setStartIndex(1);
Second page:
qry.setMaxResults(10);
qry.setStartIndex(11);
I am getting the same 10 results for the second page as well.
This question has a link as its answer, but I am unable to find any answer at that link.
Can anyone help me?
Thanks in advance.
The Documents List API uses "next" tokens for pagination and not the start index:
https://developers.google.com/google-apps/documents-list/#getting_all_pages_of_documents_and_files
To retrieve all pages of documents, you start from the beginning and then follow the link with rel=next to get more pages.
The max results parameter is used to limit the number of elements in each page.
Remember you can (and should) also use the newer Drive API to perform the same task:
https://developers.google.com/drive/v2/reference/files/list
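The token-based loop looks roughly like this; a minimal Python sketch against Drive API v2 files.list, with ACCESS_TOKEN as a placeholder (the rel=next link in the Documents List API gives you the same fetch-then-follow loop):

import json
import urllib.parse
import urllib.request

page_token = None
while True:
    params = {"maxResults": "10"}        # page size, as with setMaxResults(10)
    if page_token:
        params["pageToken"] = page_token
    url = "https://www.googleapis.com/drive/v2/files?" + urllib.parse.urlencode(params)
    req = urllib.request.Request(url, headers={"Authorization": "Bearer ACCESS_TOKEN"})
    page = json.loads(urllib.request.urlopen(req).read())
    for item in page.get("items", []):
        print(item.get("title"))
    page_token = page.get("nextPageToken")  # absent on the last page
    if not page_token:
        break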