Get number of the current step - github-actions

For troubleshooting purposes, I would like to obtain the URL to the current step of GitHub Actions logs.
The URL seems fairly easy to calculate:
url="https://github.com/$GITHUB_REPOSITORY/runs/$GITHUB_RUN_ID?check_suite_focus=true#step:$step_number:1"
What's missing is getting the number of the current step - I don't see it listed on https://docs.github.com/en/actions/learn-github-actions/contexts or https://docs.github.com/en/actions/learn-github-actions/environment-variables. Hard-coding the number is not ideal as adding/removing steps before this one will result in wrong/misleading URLs.
Is there perhaps some way to get the current step number that I've overlooked?
Alternatively, the step can have an id. However, it doesn't seem like there's a way to link to a step's log section by its id, is there?

Is there perhaps some way to get the current step number that I've overlooked?
Here is one (very) ugly way:
Give all steps an id. This causes them to be added to the steps object.
Obtain the length of the steps object with jq:
step_nr=$(echo '${{ toJson(steps) }}' | jq length)
Add 2 to get the 1-based step number: +1 to convert the 0-based count of steps so far to the 1-based numbering used by the URL hash parser, and +1 for the "Set up job" step that runs automatically.
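Putting that together, a rough sketch of a step that prints a link to its own log section (the ids and step names here are made up, and every earlier step needs an id so it shows up in the steps context):
steps:
  - id: checkout
    uses: actions/checkout@v2
  - id: build
    run: make build
  - name: Print a link to this step's log section
    run: |
      # 2 previous steps with ids -> this step is #4 ("Set up job" is #1)
      step_nr=$(( $(echo '${{ toJson(steps) }}' | jq length) + 2 ))
      echo "https://github.com/$GITHUB_REPOSITORY/runs/$GITHUB_RUN_ID?check_suite_focus=true#step:$step_nr:1"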
Alternatively, the step can have an id. However, it doesn't seem like there's a way to link to a step's log section by its id, is there?
Looking at the JS code which handles the hash part of the URL, there is:
const e = window.location.hash.match(/^#step:(\d+):(\d+)$/) || [];
So, "no" apparently, at least not via the same mechanism as for indicating the step ID by number.

Related

How to add back comments/whitespaces in translator using the Antlr4's visitor model

I'm currently writing a TSQL (Sybase/Microsoft SQL) to MySQL translator using the ANTLR4 visitor approach.
I'm able to push comments and whitespaces to different channels so that I can use that information later.
What's not super clear is:
how do I get the data back?
and more importantly how do I plug the comments and whitespaces back into my translated MySQL code?
Re: #1, this seems to work to get the list of all tokens including the comments/whitespaces:
public static List<Token> getHiddenTokensFromString(String sqlIn, int hiddenChannel) {
    CharStream charStream = CharStreams.fromString(sqlIn);
    CaseChangingCharStream upper = new CaseChangingCharStream(charStream, true);
    TSqlLexer lexer = new TSqlLexer(upper);
    CommonTokenStream commonTokenStream = new CommonTokenStream(lexer, hiddenChannel);
    commonTokenStream.fill();
    List<Token> hiddenTokens = commonTokenStream.getTokens();
    return hiddenTokens;
}
Re #2, what makes it particularly challenging is that as part of the translation, lines of SQL have to be moved around, some lines removed and some lines added.
Any help will be greatly appreciated.
Thanks.
The ANTLR4 lexer creates a number of tokens, each with an index (a running number). Provided you didn't just skip a token, all tokens are available for later inspection, once the parsing step is done, regardless of their channels (the channel is actually just a number property on a token).
So, given a token you want to translate, get its index and then ask the token stream for the tokens with the next lower or next higher index. These are usually the hidden whitespace tokens.
Once you have the whitespace token use its start and stop index to get the original text from the char stream. And since you know where you are in the translation process when you do that, it should be easy to know where to insert the original text.
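As a rough sketch of that idea (not code from your project; tokens is assumed to be the CommonTokenStream used for parsing, HIDDEN_CHANNEL the channel your whitespace/comments were routed to, and Interval comes from org.antlr.v4.runtime.misc):
int i = token.getTokenIndex();
List<Token> before = tokens.getHiddenTokensToLeft(i, HIDDEN_CHANNEL);   // may be null
List<Token> after  = tokens.getHiddenTokensToRight(i, HIDDEN_CHANNEL);  // may be null
if (before != null) {
    for (Token hidden : before) {
        // recover the original text from the char stream via the token's start/stop indexes
        String original = hidden.getInputStream().getText(
                Interval.of(hidden.getStartIndex(), hidden.getStopIndex()));
        // ... emit `original` in front of the translated token's output
    }
}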

Save function results when a script is executed

Pretty new to Python. I have a machine learning script, and every time the script is run I would like to save the results. What I don't understand is: if all the code is in one script, how do I save the results without overwriting them? For example:
auc_score = cross_val_score(logreg_model, X_RFECV, y_vars, cv=kf, scoring='roc_auc').mean()
auc_scores = []
def auc_log():
    auc_scores.append(auc_score)
    return auc_scores
auc_log()
Every time I run this .py file, the auc_scores list starts out empty, and it only grows while the function is called within that run; if you run the whole script again, the code above obviously executes again and resets the saved list to empty. I feel this is fairly simple, I'm just not thinking about it properly from a continuous deployment perspective. Thanks!
One option is to pass each previous result list (or an initial empty list) into the auc_log function as arguments, so the function can keep accumulating all results.
For example,
auc_score = cross_val_score(logreg_model, X_RFECV, y_vars, cv=kf, scoring='roc_auc').mean()
# if auc_score is an 'int' or 'float', you must convert it to a list first
auc_score_ = []
auc_score_.append(auc_score)
auc_score_zero = []
def auc_log(auc_score_1, auc_score_2):
    auc_scores = auc_score_1 + auc_score_2
    return auc_scores
initial_log = auc_log(auc_score_zero, auc_score_)
#print(initial_log)
second_log = auc_log(initial_log, auc_score_)
#print(second_log)
If you want to save each auc_log list to disk after returning the result at each step, the pickle module is convenient for that.
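For example, a minimal sketch of that idea (the file name results.pkl is just an assumption):
import os
import pickle

def save_auc_score(auc_score, path='results.pkl'):
    # load previously saved scores (if any), append the new one, and save again
    scores = []
    if os.path.exists(path):
        with open(path, 'rb') as f:
            scores = pickle.load(f)
    scores.append(auc_score)
    with open(path, 'wb') as f:
        pickle.dump(scores, f)
    return scores
Called once per run, this keeps appending to the same list across script executions instead of starting from empty each time.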
I'm not sure this is exactly what you want, but I hope my answer helps solve your question.

Jmeter: set property for each loop

I'm trying to create a test that will loop depending on the number of files stored in one folder and then output results based on their filenames. I'm thinking of using each filename as the name of its result, so I created something like this in a BeanShell PreProcessor:
props.setProperty("filename", vars.get("current_tc"));
Then use it for the name of the result:
C:\\TEST\\Results\\${__property(filename)}
"current_tc" is the output variable name of a ForEach controller. It returns different value on each loop. e.g loop1 = test1.csv, loop2 = test2.csv ...
I'm expecting that the result name will be test1.csv, test2.csv .... but the actual result is just test1.csv and the result of the other file is also in there. I'm new to Jmeter. Please tell me if I'm doing an obvious mistake.
Test Plan Image
The way of setting the property seems okay-ish; the question is where and how you are trying to use this C:\\TEST\\Results\\${__property(filename)} line, so a snapshot of your test plan would be very useful.
In the meantime I would recommend the following:
Check the jmeter.log file for any suspicious entries; if something goes wrong, most probably you will be able to figure out the reason using this file. Normally it is located in JMeter's "bin" folder.
Use the Debug Sampler and View Results Tree listener combination to check your ${current_tc} variable value; maybe the variable is simply not being incremented. See the How to Debug your Apache JMeter Script article to learn more about troubleshooting techniques, or use the quick logging sketch below.
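As a quick illustration of that check (just a sketch, not a required step), a one-line JSR223 PostProcessor using Groovy will write the value to jmeter.log on every iteration:
log.info('current_tc = ' + vars.get('current_tc'))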

Counting the number of passes through a CSV file in JMeter

Am I missing an easy way to do this?
I have a CSV file with a number of params in it, and in my test I want to be able to make some of the fields unique across CSV repetitions with a suffix determined by the number of times I've looped through the file.
So suppose my CSV (simplified) had:
abc
def
ghi
I want to generate in the test
abc_1
def_1
ghi_1 <hit EOF>
abc_2
def_2
ghi_2 <hit EOF>
abc_3
def_3
ghi_3
I thought I could set up a counter to run parallel to my CSV loop, but that won't work unless I increment it by 1/n each iteration, where n is the number of lines in my CSV file. Which you can't do because counters are integers.
I'm going to go flail around and see if I can come up with a solution, but in case I'm not successful, has anyone got any suggestions?
I've used an EOF marker row (an index column with something like "EOF" or "END", etc.) and an If Controller with either a non-resetting counter or user variables incremented via JavaScript in a BSF element (a BSF Assertion or whatever, just a mechanism to run the script).
Unfortunately it's the best solution I've come up with without putting too much effort into it.
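As an illustration of that approach (a sketch only, swapping the BSF/JavaScript element for a JSR223/Groovy one; the variable names csv_value and csv_pass are placeholders), the script inside an If Controller whose condition is "${csv_value}" == "EOF" could simply be:
// bump the pass counter each time the EOF marker row comes around
int passes = (vars.get('csv_pass') ?: '0') as int
vars.put('csv_pass', String.valueOf(passes + 1))
The other fields can then be suffixed with ${csv_pass}; initialise csv_pass to 1 in User Defined Variables if the first pass should already get the _1 suffix.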

Drive API files.list query with 'not' parameter returns empty pages

I'm using the Drive API to list files from a collection which do not contain a certain string in their title.
My query looks something like this:
files().list(q="'xxxxx' in parents and not title contains 'toto'")
In my drive collection, I have 100 files, all contain the string "toto" in their title except for let's say 10 files.
I'm using pagination to retrieve the results 20 by 20, so I'm expecting to get only one page with the 10 files corresponding to my request. Surprisingly, the API returns 5 pages: the first 4 have no results but do carry a next page token, and the files that match my request only come on the fifth page.
I'm still trying some use cases here, but it seems to have something to do with the "not" operator. It's as if the request were made without it, therefore returning 5 pages, with the results that don't match the request simply removed from the response. This is very disturbing for me as I'm looking for the best performance here, and obviously having to make 5 requests to Drive instead of a single one is not good. I'm also noticing that the results don't always come in the last page. I made the test with another collection: the results show up in the second page, but I still get 3 empty pages after that.
Am I missing something here? Is this kind of behaviour "normal"? I mean, imagine if I had 1000 documents in my collection; having to make 50 requests to find only a few is not what I expect.
I had a similar problem with the files.list API. I tried to retrieve all three folders under the root folder and only received a result on the 342nd page. After several hours of research I found some regularity in this strange behavior.
As far as I understand, the Drive API works this way:
It detects something like an index that best matches your query.
It selects the first 20 records using the index from step 1.
It applies your filter: records that do not match your query are removed.
The rest is returned to you (possibly empty) together with a next page token.
The nextPageToken looks like just an OFFSET to the first record of the next page in the chosen index; maybe it also contains some information about the query or index.
After base64-decoding this token I found the record number of the next result at the 121st position in the decoded token.
(Previously I had built an index of tokens using maxResults=1.)
This is crazy, but I have no other explanation for the observable behavior.
It is very useful for the server, because the server does very little work per search. On the other hand, this algorithm forces the client to make a lot of requests to paginate the whole list, but the per-second request limit keeps that in check.
All you can do is paginate and skip the empty results. Do not forget about the limit on the number of requests.
Do not try to find errors on your side. This is how the Google Drive API works.
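If it helps, here is a minimal pagination sketch in Python (assuming an authenticated Drive v2 service object built with google-api-python-client; folder_id is a placeholder) that simply skips the empty pages:
def list_non_toto_files(service, folder_id):
    files, page_token = [], None
    while True:
        resp = service.files().list(
            q="'%s' in parents and not title contains 'toto'" % folder_id,
            maxResults=20,
            pageToken=page_token,
        ).execute()
        # a page can be empty but still carry a nextPageToken,
        # so keep looping until the token disappears
        files.extend(resp.get('items', []))
        page_token = resp.get('nextPageToken')
        if not page_token:
            break
    return files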
The contains operator works as a prefix matcher at the moment. title contains 'toto' will match "totolong" and "toto", but not "blahtoto".