How to use ReceivedDateTime filter using Graph client in C# - microsoft-graph-sdks

Try using the Filter method. For example, to filter messages received in March 2022:
API call:
GET https://graph.microsoft.com/v1.0/me/messages?$filter=receivedDateTime ge 2022-03-01 and receivedDateTime lt 2022-04-01
C#:
var filter = "receivedDateTime ge 2022-03-01 and receivedDateTime lt 2022-04-01";
var messages = await graphClient.Me.Messages
    .Request()
    .Filter(filter)
    .GetAsync();
Filter by an exact date and time:
var dateTimeStr = new DateTime(2022, 3, 7, 9, 27, 50).ToString("yyyy-MM-ddTHH:mm:ssZ");
var filter = $"receivedDateTime eq {dateTimeStr}";
var messages = await graphClient.Me.Messages
    .Request()
    .Filter(filter)
    .GetAsync();
Filter messages older than 30 days:
var days = 30;
var dateTimeStr = DateTime.Now.Subtract(TimeSpan.FromDays(days)).ToString("yyyy-MM-dd");
var filter = $"receivedDateTime lt {dateTimeStr}";
var messages = await graphClient.Me.Messages
    .Request()
    .Filter(filter)
    .GetAsync();
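The same $filter expression works from any HTTP client, not just the C# SDK. As a minimal sketch in JavaScript, here is how the request URL could be assembled (the helper name is mine, and acquiring a token and sending the request are deliberately left out):

```javascript
// Build a Microsoft Graph /me/messages URL with a receivedDateTime filter.
// Only URL construction is shown; authentication is out of scope here.
function buildMessagesFilterUrl(fromDate, toDate) {
  var filter = "receivedDateTime ge " + fromDate +
               " and receivedDateTime lt " + toDate;
  return "https://graph.microsoft.com/v1.0/me/messages?$filter=" +
         encodeURIComponent(filter);
}
```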
Resources:
filter query operator

Related

How to extract specific data from website and store it into the sheet

I'm planning to build a list of the clan, something like this:
Player PlayerID Wins AverageCards
Player 1 #3G32DB4 343 5464
Player 2 #PP4V6SS 234 4512
...
Every clan can have up to 50 members. I'm currently trying to get all player IDs and names, so I can run my code for every member and collect data like this:
var response = UrlFetchApp.fetch("https://royaleapi.com/data/player/cw_history/?player_tag=XXXXXXXXXXXXXXXXXXXX");
I found these sources for clan IDs:
https://www.deckshop.pro/spy/clan/89UJ8J2J
https://royaleapi.com/clan/89UJ8J2J
The official game developer API; I tried that, but it requires authorization. (https://developer.clashroyale.com/#/documentation)
What would be the easiest way to get a list of all current clan members and run the code for each member?
This is my current code for ONE SPECIFIC member; the plan is to run it for every member and build sums/averages. If you want to try it out:
function royaleapi() {
  var response = UrlFetchApp.fetch("https://royaleapi.com/data/player/cw_history/?player_tag=29YY9L98C");
  var sheet = SpreadsheetApp.getActiveSheet();
  // Parse the JSON reply
  var json = response.getContentText();
  var data = JSON.parse(json);
  // Write the header row
  var headers = ["Rank", "collection_day_battles_played", "cards_earned", "battles_lost",
                 "number_of_battles", "wins", "battles_won", "timestamp"];
  for (var h = 0; h < headers.length; h++) {
    sheet.getRange(1, h + 1).setValue(headers[h]);
  }
  // Loop over however many battles the response actually contains
  // (a hard-coded bound would throw once the index runs past the array)
  for (var i = 0; i < data["battles"].length; i++) {
    var battle = data["battles"][i];
    sheet.getRange(i + 2, 1).setValue(battle["rank"]);
    sheet.getRange(i + 2, 2).setValue(battle["collection_day_battles_played"]);
    sheet.getRange(i + 2, 3).setValue(battle["cards_earned"]);
    sheet.getRange(i + 2, 4).setValue(battle["battles_lost"]);
    sheet.getRange(i + 2, 5).setValue(battle["number_of_battles"]);
    sheet.getRange(i + 2, 6).setValue(battle["wins"]);
    sheet.getRange(i + 2, 7).setValue(battle["battles_won"]);
    sheet.getRange(i + 2, 8).setValue(battle["timestamp"]);
  }
}
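Once the cw_history JSON has been fetched for each member (with UrlFetchApp, as above), the sums and averages can be computed with a small pure function. This is only a sketch; the function name is mine, and it assumes the wins and cards_earned fields used in the code above:

```javascript
// Aggregate one member's battle list into total wins and average cards.
// Running this over every member's parsed JSON gives the clan-wide table.
function summarizeBattles(battles) {
  var wins = 0, cards = 0;
  battles.forEach(function (b) {
    wins += b.wins;
    cards += b.cards_earned;
  });
  return {
    totalWins: wins,
    averageCards: battles.length ? cards / battles.length : 0
  };
}
```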

Decode/Unescape Unicode Google App Scripts

I am new to decoding, so I am unable to decode the response below when using UrlFetchApp.
Below is the function I am using to fetch the data, but the values come back Unicode-escaped and need decoding. Is there a function I can use to decode this?
function scrape() {
  var url = UrlFetchApp.fetch("https://immi.homeaffairs.gov.au/visas/working-in-australia/skillselect/invitation-rounds");
  var elements = /id="ctl00_PlaceHolderMain_PageSchemaHiddenField_Input" (.*?)\/>/gim;
  var x = url.getContentText().match(elements);
  Logger.log(x);
}
Although I'm not sure whether this is the best way, how about this modification?
Modified script:
function scrape() {
  var url = UrlFetchApp.fetch("https://immi.homeaffairs.gov.au/visas/working-in-australia/skillselect/invitation-rounds");
  var elements = /id="ctl00_PlaceHolderMain_PageSchemaHiddenField_Input" (.*?)\/>/gim;
  var x = url.getContentText().match(elements);
  var res = unescape(x[0].replace(/\\u/g, "%u")); // Added
  Logger.log(res);
}
Result:
When the modified script above is used, the sample values are converted as follows.
From:
\u003cp\u003eThe table below shows the number of invitations issued in the SkillSelect invitation round on 11 September 2018.\u003c/p\u003e\n\n\u003ch3\u003eInvitations issued on 11 September 2018\u003c/h3\u003e\n\n
To:
<p>The table below shows the number of invitations issued in the SkillSelect invitation round on 11 September 2018.</p>\n\n<h3>Invitations issued on 11 September 2018</h3>\n\n
References:
unescape()
replace()
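As a sketch of an alternative that avoids the deprecated unescape(), the \uXXXX escape sequences can also be decoded directly with String.fromCharCode (the function name is mine):

```javascript
// Decode \uXXXX escape sequences without the deprecated unescape().
function decodeUnicodeEscapes(s) {
  return s.replace(/\\u([0-9a-fA-F]{4})/g, function (match, hex) {
    return String.fromCharCode(parseInt(hex, 16));
  });
}
```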
If I misunderstand your question, I'm sorry.

Sign Ethereum transactions offline using Node-RED

I am trying to create a flow in Node-RED to sign Ethereum transactions (in this first case, just messages) offline and am struggling.
I have Python code that works fine (that is, it does the offline signing correctly) and posts via an API, but Node-RED is not doing what I want: it returns a different signature than the one I get with Web3.py, and so the EVM reverts the transaction.
Can anyone help? I tried several approaches, including the ones commented out, but none of them matches the signature generated by the Python code...
PS - the Python code implements it with signed_hash = self.web3.eth.account.signHash(hash, self._fetch_signer_key())
Thanks!
var Web3 = global.get('web3');
var web3 = new Web3();
var account = "0x9EfEe29e8fDf2cXXXXXXXde5502ABDf3f13ADac9a";
var privateKey = "0xbde16f62dadb43dad898e182e4e798baXXXXae46458d4939361d2494e11ce9e4";
var password = "12XXX8";
var message = "SMX1 10 1527596375";
var _hash = web3.utils.sha3(message);
//var signedObject = web3.eth.accounts.sign(_hash, privateKey);
// NOTE: accounts.sign() expects a private key as its second argument,
// not an address, so passing `account` here is a bug.
var signedObject = web3.eth.accounts.sign(message, account);
//var signedObject = web3.eth.personal.sign(message, account, password);
//const signature = web3.eth.sign(message,account);
var msg = {};
msg.payload = {
//"message": message,
"message": signedObject.message,
"hash": _hash,
//"hash": signedObject.messageHash,
"signature": web3.utils.toHex(signedObject.signature),
//"signature": web3.eth.personal.sign(message, account, password),
"identity": identity, // NOTE: `identity` is never defined in this snippet
};
/*
//msg.payload = signedObject.message;
msg.payload.message = signedObject.message;
msg.payload.hash = signedObject.messageHash;
msg.payload.signature = signedObject.signature;
msg.payload.identity = identity;
*/
//return signedObject.messageHash;
//return msg;
return signedObject;
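A likely reason the signatures differ: web3.js's accounts.sign() applies the EIP-191 personal-message prefix before hashing and signing, whereas web3.py's signHash() signs the supplied hash as-is. As a sketch, the prefixed preimage that web3.js signs looks like this (the function name is mine; EIP-191 actually uses the UTF-8 byte length, which equals .length for an ASCII message like the one above):

```javascript
// The string that web3.js actually hashes and signs for a personal message:
// the EIP-191 prefix, then the message length, then the message itself.
// web3.py's signHash() skips this prefixing entirely, hence the mismatch.
function personalMessagePreimage(message) {
  return "\x19Ethereum Signed Message:\n" + message.length + message;
}
```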

Not Able to Scrape table in Google Sheets

With the help of this SO question, I am trying to scrape the following website. I would like the two teams and the time. For example, the first entry would be Chicago | Miami | 12:30 PM, and the last entry would be Colorado | Arizona | 10:10 PM. My code is as follows:
function espn_schedule() {
  var url = "http://www.espn.com/mlb/schedule/_/date/20180329";
  var content = UrlFetchApp.fetch(url).getContentText();
  var scraped = Parser.data(content).from('class="schedule has-team-logos align-left"').to('</tbody>').iterate();
  var res = [];
  var temp = [];
  var away_ticker = "";
  scraped.forEach(function(e) {
    var away_team = Parser.data(e).from('href="mlb/team/_/name/').to('"').build();
    var time = Parser.data(e).from('a data-dateformat="time1"').to('</a>').build();
    if (away_ticker == "") away_ticker = away_team;
    if (away_team != away_ticker) {
      temp.splice(1, 0, away_ticker);
      res.push(temp);
      temp = [];
      away_ticker = away_team;
      temp.push(time);
    }
  });
  var ss = SpreadsheetApp.getActiveSpreadsheet().getSheetByName("Schedule");
  ss.getRange(ss.getLastRow() + 1, 1, res.length, res[0].length).setValues(res);
}
I get the following error:
TypeError: Cannot read property "length" from undefined. (line 42, file "Code")
Here is a modified solution that works:
function espn_schedule() {
  var url = "http://www.espn.com/mlb/schedule/_/date/20180329";
  var content = UrlFetchApp.fetch(url).getContentText();
  var e = Parser.data(content).from('class="schedule has-team-logos align-left"').to('</tbody>').build();
  var res = [];
  var teams = Parser.data(e).from('<abbr title="').to('">').iterate();
  Logger.log(teams);
  var time = Parser.data(e).from('data-date="').to('">').iterate();
  Logger.log(time);
  for (var i = 0; i < teams.length; i = i + 2) {
    res[i / 2] = [];
    res[i / 2][0] = teams[i];
    res[i / 2][1] = teams[i + 1];
    res[i / 2][2] = new Date(time[i / 2]).toLocaleTimeString('en-US');
  }
  Logger.log(res);
  var ss = SpreadsheetApp.getActiveSpreadsheet().getSheetByName("Schedule");
  ss.getRange(ss.getLastRow() + 1, 1, res.length, res[0].length).setValues(res);
}
Modifications explained:
1) Since you access only the first table, you don't need to iterate during parsing; just build the first match. And since you get just the first table, you don't need forEach to loop through elements.
var e = Parser.data(content)
.from('class="schedule has-team-logos align-left"')
.to('</tbody>')
.build(); //Use build instead of iterate
2) Instead of parsing the HTML link to get the team name, you can use the <abbr title=" element to scrape the names. Furthermore, you can iterate over all the team names in the table to get an array of them.
var teams = Parser.data(e).from('<abbr title="').to('">').iterate();
3) Similarly, you can get the time by using the data-date attribute. This gives you a date string that can be read by the Date() class. Again, we iterate over the table to get all the times.
var time = Parser.data(e).from('data-date="').to('">').iterate()
4) Finally, we use a for loop to rearrange the teams and times into the array called res. This allows the data to be inserted into the sheet directly.
for( var i = 0; i<teams.length ; i = i+2) //each loop adds 2 to the counter
{
res[i/2] = []
res[i/2][0] = teams[i] //even team (starts at zero)
res[i/2][1] = teams[i+1] //vs odd teams
res[i/2][2] = new Date(time[i/2]).toLocaleTimeString('en-US')
}
Reference:
Date(), Date.toLocaleTimeString()
Edit:
Reason for the error: in the code below
Parser.data(e).from('href="mlb/team/_/name/').to('"').build()
you are looking for the string 'href="mlb/team/_/name/', but it should be 'href="/mlb/team/_/name/'. Note the difference: mlb vs /mlb.
Secondly, in the following code
Parser.data(e).from('a data-dateformat="time1"').to('</a>').build();
The string should be a data-dateFormat. When you inspect the website, it is shown as dateformat; however, when you fetch the page with UrlFetchApp and log the text, it is shown as dateFormat.
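The pairing step from the modified solution can be sketched as a standalone function, which makes the even/odd indexing easier to test (the function name is mine; teams is the flat [away, home, away, home, ...] list the parser returns, and times holds one entry per game):

```javascript
// Pair up a flat [away, home, away, home, ...] team list with one time
// per game, producing [away, home, time] rows ready for setValues().
function pairSchedule(teams, times) {
  var res = [];
  for (var i = 0; i < teams.length; i += 2) {
    res.push([teams[i], teams[i + 1], times[i / 2]]);
  }
  return res;
}
```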

Error in Google Sheets Script when parsing XML

I have this function running in a Google Sheets script that pulls HTML from subreddits and returns it to a spreadsheet. It works for me most of the time, but other times I get the error "Could not parse text. (line 13)", which is the line with var doc = Xml.parse(page, true);. Any idea why this is happening, or is this just a bug with Google Apps Script? Here's the code that works...sometimes.
function getRedditHTML() {
  var entries_array = [];
  var subreddit_array = ['https://www.reddit.com/r/news/', 'https://www.reddit.com/r/funny/', 'https://www.reddit.com/r/science/'];
  for (var s = 0; s < subreddit_array.length; s++) {
    var page = UrlFetchApp.fetch(subreddit_array[s]);
    // this is line 13 that is breaking
    var doc = Xml.parse(page, true);
    var bodyHtml = doc.html.body.toXmlString();
    doc = XmlService.parse(bodyHtml);
    var root = doc.getRootElement();
    var entries = getElementsByClassName(root, 'thing');
    for (var i = 0; i < entries.length; i++) {
      var title = getElementsByClassName(entries[i], 'title');
      title = XmlService.getRawFormat().format(title[1]).replace(/<[^>]*>/g, "");
      var link = getElementsByClassName(entries[i], 'comments');
      link = link[0].getAttribute('href').getValue();
      var rank = getElementsByClassName(entries[i], 'rank');
      rank = rank[0].getValue();
      var likes = getElementsByClassName(entries[i], 'likes');
      likes = likes[0].getValue();
      entries_array.push([rank, likes, title, link]);
    }
  }
  return entries_array.sort(function (a, b) {
    return b[1] - a[1];
  });
}
Here is what I found when playing with importXML (my usual way of doing this): for some reason I cannot narrow down, it does appear to randomly stall and return null for a few minutes. So I'm guessing the issue is not your code, but that the site or Google temporarily blocks the request or won't return the data.
However, I found the JSON endpoint for the piece you want, and I noticed that when the XML went down, the JSON didn't.
You can take that and adapt it to push your own array of topics/URLs; I left it at one link for now to show how the URL breaks down and where it should be modified.
The URL is https://www.reddit.com/r/news/hot.json?raw_json=1&subredditName=news&sort=top&t=day&feature=link_preview&sr_detail=true&app=mweb-client
"news" appears in two places, so modify all your URLs to follow that pattern; you can easily load that JSON in a browser to see all the fields available.
The hot.json portion is where you choose whether you want the ranked list (called hot), or new, top, promoted, etc.; just change that keyword.
The score field is the same as the upvotes/likes.
function getSubReddit() {
  var ss = SpreadsheetApp.getActiveSpreadsheet();
  var sheet = ss.getActiveSheet(); // get active sheet
  var subject = 'news';
  var url = 'https://www.reddit.com/r/' + subject + '/hot.json?raw_json=1&subredditName=' + subject + '&sort=top&t=day&feature=link_preview&sr_detail=true&app=mweb-client'; // json endpoint for data
  var response = UrlFetchApp.fetch(url); // call the api endpoint
  var json = response.getContentText(); // get the response content as text
  var redditData = JSON.parse(json); // parse text into json
  Logger.log(redditData); // log data to check
  var statsRows = []; // empty array to hold data points
  var date = new Date(); // timestamp
  // push the parsed json into the stats array, one row per post
  // (a single loop is enough; nesting two 25-iteration loops would
  // rebuild the same rows 25 times over)
  for (var i = 0; i < 25; i++) {
    var post = redditData.data.children[i].data;
    var stats = [];
    stats.push(date); // timestamp
    stats.push(i + 1); // rank
    stats.push(post.score); // score
    stats.push(post.title); // title
    stats.push(post.url); // article url
    // stats.push('http://www.reddit.com' + post.permalink); // reddit permalink
    statsRows.push(stats);
  }
  // append all rows to the active sheet in one call
  sheet.getRange(sheet.getLastRow() + 1, 1, statsRows.length, statsRows[0].length).setValues(statsRows);
}
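The row-building part can also be sketched as a pure function, separate from the fetch and the sheet write (the function name is mine; the data.children[i].data.{score, title, url} shape matches the listing JSON used above):

```javascript
// Turn a parsed reddit listing into sheet rows: [timestamp, rank, score,
// title, url] per post, ready for a single setValues() call.
function listingToRows(redditData, timestamp) {
  return redditData.data.children.map(function (child, i) {
    var d = child.data;
    return [timestamp, i + 1, d.score, d.title, d.url];
  });
}
```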