monaco-editor: using addAction/addCommand, how to stopPropagation?

Here's an action that uses the Ctrl/Cmd + Enter keybinding to execute its run function:
editor.addAction({
    id: 'runStep',
    label: 'run',
    keybindings: [monaco.KeyMod.CtrlCmd | monaco.KeyCode.Enter],
    contextMenuGroupId: 'navigation',
    contextMenuOrder: 1.5,
    run: function (editor) { /* do something */ }
});
But how can I stopPropagation?
The parameter of the run function is the editor, not a KeyboardEvent object.
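One possible workaround (a sketch only; addAction's run callback really does receive just the editor): register a raw key listener with editor.onKeyDown, whose IKeyboardEvent argument does expose preventDefault() and stopPropagation():
// Sketch: handle the keybinding manually so the keyboard event is available.
// runStep() is a hypothetical stand-in for the action's run logic.
editor.onKeyDown(function (e) {
    if (e.keyCode === monaco.KeyCode.Enter && (e.ctrlKey || e.metaKey)) {
        e.preventDefault();   // stop the editor's default handling
        e.stopPropagation();  // keep the event from bubbling further
        runStep();
    }
});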

Related

Calling Google Cloud function with arguments

I have a function which fetches a SQL file from Cloud Storage. This function accepts project_id, bucket_id & sql_file:
from google.cloud import storage

def read_sql(request):
    request_json = request.get_json(silent=True)
    project_id = request_json['project_id']
    bucket_id = request_json['bucket_id']
    sql_file = request_json['sql_file']
    gcs_client = storage.Client(project_id)
    bucket = gcs_client.get_bucket(bucket_id)
    blob = bucket.get_blob(sql_file)
    contents = blob.download_as_string()
    return contents.decode('utf-8')
It works fine when I test it with these parameters:
{"project_id":"my_project","bucket_id":"my_bucket","sql_file":"daily_load.sql"}
I am trying to call this function from Google Workflows but don't know how to set up the arguments in the http.get call:
main:
  params: []
  steps:
    - init:
        assign:
          - project: ${sys.get_env("GOOGLE_CLOUD_PROJECT_ID")}
          - location: "us-central1"
          - name: "workflow_test_return_sql1"
          - service_account: "sa@appspot.gserviceaccount.com" # Use App Engine default SA.
    - get_function:
        call: googleapis.cloudfunctions.v1.projects.locations.functions.get
        args:
          name: ${"projects/" + project + "/locations/" + location + "/functions/" + name}
        result: function
    - grant_permission_to_all:
        call: googleapis.cloudfunctions.v1.projects.locations.functions.setIamPolicy
        args:
          resource: ${"projects/" + project + "/locations/" + location + "/functions/" + name}
          body:
            policy:
              bindings:
                - members: ["allUsers"]
                  role: "roles/cloudfunctions.invoker"
    - call_function:
        call: http.get
        args:
          url: ${function.httpsTrigger.url}
        result: resp
The workflow code is written in YAML.
Any idea how to build the function call URL with arguments?
I have tried the two approaches below, but they didn't work:
- call_function:
    call: http.post
    args:
      url: ${function.httpsTrigger.url}
      body:
        input: {"project_id":"my_project","bucket_id":"my_bucket","sql_file":"daily_load.sql"}
    result: resp
and
- call_function:
    call: http.get
    args:
      url: ${function.httpsTrigger.url}?project_id=my_project?bucket_id=my_bucket?sql_file=daily_load.sql
You are close!! Try this:
- call_function:
    call: http.get
    args:
      url: ${function.httpsTrigger.url + "?project_id=" + "my_project" + "&bucket_id=" + "my_bucket" + "&sql_file=" + "daily_load.sql"}
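Note that with this GET call the arguments arrive as query parameters, not as a JSON body, so read_sql would need to read request.args instead of request.get_json() (a sketch, assuming the Flask-style request object that Cloud Functions passes in):
from google.cloud import storage

def read_sql(request):
    # Query-string parameters of a GET request live in request.args;
    # request.get_json() only works for a JSON POST body.
    project_id = request.args['project_id']
    bucket_id = request.args['bucket_id']
    sql_file = request.args['sql_file']
    gcs_client = storage.Client(project_id)
    blob = gcs_client.get_bucket(bucket_id).get_blob(sql_file)
    return blob.download_as_string().decode('utf-8')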
EDIT 1
Here is the POST option with a JSON body, which matches the request.get_json() call already in your function (note that YAML and JSON are similar; this is how to write your JSON in YAML):
- call_function:
    call: http.post
    args:
      url: ${function.httpsTrigger.url}
      body:
        project_id: "my_project"
        bucket_id: "my_bucket"
        sql_file: "daily_load.sql"
    result: resp

How to store the variable keys and values extracted from JSON into another variable with the same format in an Azure pipeline?

I have a variable template, var1.yml:
variables:
  - name: TEST_DB_HOSTNAME
    value: 10.123.56.222
  - name: TEST_DB_PORTNUMBER
    value: 1521
  - name: TEST_USERNAME
    value: TEST
  - name: TEST_PASSWORD
    value: TEST
  - name: TEST_SCHEMANAME
    value: SCHEMA
  - name: TEST_ACTIVEMQNAME
    value: 10.123.56.223
  - name: TEST_ACTIVEMQPORT
    value: 8161
When I run the pipeline below:
resources:
  repositories:
    - repository: templates
      type: git
      name: pipeline_templates
      ref: refs/heads/master

trigger:
  - none

variables:
  - template: templates/var1.yml#templates

pool:
  name: PoolA

steps:
  - pwsh: |
      Write-Host "${{ convertToJson(variables) }}"
I get the output
{
  build.sourceBranchName: master,
  build.reason: Manual,
  system.pullRequest.isFork: False,
  system.jobParallelismTag: Public,
  system.enableAccessToken: SecretVariable,
  TEST_DB_HOSTNAME: 10.123.56.222,
  TEST_DB_PORTNUMBER: 1521,
  TEST_USERNAME: TEST,
  TEST_PASSWORD: TEST,
  TEST_SCHEMANAME: SCHEMA,
  TEST_ACTIVEMQNAME: 10.123.56.223,
  TEST_ACTIVEMQPORT: 8161
}
How can I modify the pipeline to extract only the key/value pairs whose keys start with "TEST_", and store them into another variable in the same format, so that they can be used in other tasks in the same pipeline?
OR iterate through the objects that have "TEST_" keys and get the values for the same?
The output you have shown is invalid JSON and cannot be transformed with jq as-is. Assuming it were valid JSON:
{
  "build.sourceBranchName": "master",
  "build.reason": "Manual",
  "system.pullRequest.isFork": "False",
  "system.jobParallelismTag": "Public",
  "system.enableAccessToken": "SecretVariable",
  "TEST_DB_HOSTNAME": "10.123.56.222",
  "TEST_DB_PORTNUMBER": 1521,
  "TEST_USERNAME": "TEST",
  "TEST_PASSWORD": "TEST",
  "TEST_SCHEMANAME": "SCHEMA",
  "TEST_ACTIVEMQNAME": "10.123.56.223",
  "TEST_ACTIVEMQPORT": 8161
}
then you can use the to_entries or with_entries filters of jq to get an object containing only those keys which start with "TEST_":
with_entries(select(.key|startswith("TEST_")))
This will give you a new object as output:
{
  "TEST_DB_HOSTNAME": "10.123.56.222",
  "TEST_DB_PORTNUMBER": 1521,
  "TEST_USERNAME": "TEST",
  "TEST_PASSWORD": "TEST",
  "TEST_SCHEMANAME": "SCHEMA",
  "TEST_ACTIVEMQNAME": "10.123.56.223",
  "TEST_ACTIVEMQPORT": 8161
}
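Assuming the valid JSON above were saved to a file (variables.json is a hypothetical name), the full jq invocation would be:
jq 'with_entries(select(.key | startswith("TEST_")))' variables.json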
The convertToJson() function is a bit messy, as the "JSON" it creates is not, in fact, valid JSON.
There are several possible approaches I can think of:
Use convertToJson() to pass the non-valid JSON to a script step, convert it to valid JSON and then extract the relevant values. I have done this before and it typically works if you have control over the data in the variables. The downside is that there is a risk that the conversion to valid JSON can fail; a sketch of this approach is shown right below.
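A minimal sketch of that script-step approach, assuming the variable values themselves contain no colons or commas (the testVariables output variable name is made up):
- pwsh: |
    # convertToJson(variables) is expanded at template time; its output is
    # not valid JSON (keys and values are unquoted), so parse it line by line.
    $raw = @'
    ${{ convertToJson(variables) }}
    '@
    $result = [ordered]@{}
    foreach ($line in $raw -split "`n") {
      # Keep only lines like "  TEST_DB_HOSTNAME: 10.123.56.222,"
      if ($line -match '^\s*(TEST_[^:]+):\s*(.+?),?\s*$') {
        $result[$Matches[1]] = $Matches[2]
      }
    }
    $json = $result | ConvertTo-Json -Compress
    Write-Host $json
    # Make the filtered set available to later tasks in the same job.
    Write-Host "##vso[task.setvariable variable=testVariables]$json"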
Create a YAML loop that iterates the variables and extracts the ones that begin with Test_. You can find examples of how to write a loop here, but basically, it would look like this:
- stage:
  variables:
    firstVar: 1
    secondVar: 2
    Test_thirdVar: 3
    Test_forthVar: 4
  jobs:
    - job: loopVars
      steps:
        - ${{ each var in variables }}:
            - script: |
                echo ${{ var.key }}
                echo ${{ var.value }}
              displayName: handling ${{ var.key }}
If applicable to your use case, you can create complex parameters (instead of variables) for only the Test_ variables. Using this, you could use the relevant values directly and would not need to extract a subset from your variable list. Note, however, that parameters are inputs to a pipeline and can be adjusted before execution. Example:
parameters:
  - name: non-test-variables
    type: object
    default:
      firstVar: 1
      secondVar: 2
  - name: test-variables
    type: object
    default:
      Test_thirdVar: 3
      Test_forthVar: 4
You can use these by referencing ${{ parameters['test-variables'].Test_thirdVar }} in the pipeline.
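If you need to loop over them, an object parameter can be iterated the same way as the variables above; a sketch (the step itself is only illustrative):
steps:
  - ${{ each pair in parameters['test-variables'] }}:
      - script: echo ${{ pair.key }}=${{ pair.value }}
        displayName: handling ${{ pair.key }}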

Forge error if using SendStringToExecute (Command ^C^C)

Actually, the error isn't raised if I just run (Command ^C^C) from a LISP script.
The case is, my app is a .NET app, and I call SendStringToExecute to run some LISP code.
To be sure to end the LISP routine, I put this at the end:
doc.SendStringToExecute("(Command ^C^C)", True, False, True)
The result from Forge Design Automation is: failedinstruction
Though I can easily find another way to work around this, it cost me more than a day to figure out that it was (Command ^C^C) causing the failedinstruction, while everything else was working fine.
I hope this bug gets fixed, and that anything similar won't show up again somewhere else.
I isolated the case like this:
Make a .NET bundle, or just reuse any of your existing ones in debug mode.
Add the following LISP function definition (or it can be a custom command, whatever):
<LispFunction("l+SendStringToExecute")>
Public Shared Function lsp_SendStringToExecute(args As ResultBuffer) As Object
    Dim script$ = Nothing
    For Each arg As TypedValue In args.AsArray
        script = arg.Value
        Exit For
    Next
    script = script.Trim()
    If script <> "" Then
        Dim doc As Document = AcadApplication.DocumentManager.MdiActiveDocument
        doc.SendStringToExecute(script + vbCr, True, False, True)
    End If
    Return New TypedValue(LispDataType.T_atom)
End Function
Upload the bundle to Forge, create a dump activity and just run the custom LISP by itself:
(l+SendStringToExecute "(Command ^C^C)")
The resulting log looks like this:
...
[02/01/2021 17:23:26] Command: (l+SendStringToExecute "(Command ^C^C)")
[02/01/2021 17:23:26] T
[02/01/2021 17:23:26] Command: (Command ^C^C)
[02/01/2021 17:23:26] *Cancel*
[02/01/2021 17:23:26] Command: nil
[02/01/2021 17:23:27] End AutoCAD Core Engine standard output dump.
[02/01/2021 17:23:27] Error: AutoCAD Core Console failed to finish the script - an unexpected input is encountered.
[02/01/2021 17:23:27] End script phase.
[02/01/2021 17:23:27] Error: An unexpected error happened during phase CoreEngineExecution of job.
[02/01/2021 17:23:27] Job finished with result FailedExecution
[02/01/2021 17:23:27] Job Status:
{
"status": "failedInstructions", ...
Thanks for reporting; I'm not sure if we allow command expressions in accoreconsole.
The suggestion is to use the following approach:
[CommandMethod("CANCELOUTSTANDING")]
public void TESTCANCEL()
{
    var doc = Application.DocumentManager.MdiActiveDocument;
    // Two ASCII 03 (ETX / Ctrl+C) characters cancel any outstanding command.
    string cmd = string.Format("{0}", new string((char)03, 2));
    doc.SendStringToExecute(cmd, true, false, true);
}
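Since the question's bundle is VB.NET, the same suggestion would translate roughly like this (a direct port of the C# snippet above, not a separately verified implementation):
<CommandMethod("CANCELOUTSTANDING")>
Public Sub TESTCANCEL()
    Dim doc = Application.DocumentManager.MdiActiveDocument
    ' Two ASCII 03 (ETX / Ctrl+C) characters cancel any outstanding command,
    ' avoiding the (Command ^C^C) LISP expression entirely.
    Dim cmd As String = New String(ChrW(3), 2)
    doc.SendStringToExecute(cmd, True, False, True)
End Sub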

How to provide options to json-diff when running from a Node.js program

I am working on a simple JSON comparison script that compares two JSON files.
I found the json-diff npm module, which does exactly what I want. I want to use it with the -k option (compare only keys). Here is the documentation: https://www.npmjs.com/package/json-diff
I can do it directly from the command line:
json-diff a.json b.json -k
But I'm not able to figure out how to provide the "options" when writing Node.js code.
This is what I have tried, but it did not work:
var jsonDiff = require('json-diff')
console.log(jsonDiff.diffString({ foo: 'bar' }, { foo: 'baz' }, '-k'));
You need to pass the options as the last parameter (for .diff there are 3 parameters)
var jsonDiff = require('json-diff')
console.log(jsonDiff.diff({ foo: 'bar' }, { foo: 'baz' }, {keysOnly: true}));
For diffString there are four params (the 3rd being the colorize options, and the 4th the options)
var jsonDiff = require('json-diff')
console.log(jsonDiff.diffString({ foo: 'bar' }, { foo: 'baz' }, undefined, {keysOnly: true}));

How to print an html file as a report after a successful build, in Jenkins?

My current Jenkins Version: Jenkins 2.204.4
I have a Python program generating an HTML report (it only contains a table).
I need to print this as the build report after a successful Jenkins pipeline build.
I tried using the dashboard plugin (iframe portlet) and the htmlpublisher plugin, but I cannot get them to print it as a build report.
Also, I want to keep only one file and not have multiple files doing multiple things. Is that possible?
This is the last stage of the pipeline:
stage("publish HTML Table") {
steps {
script {
def outputhtml = sh returnStdout: true, script: 'ls -atrl ./output |tail -1|cut -d" " -f11'
println outputhtml
def htmlfolder = "output/".concat(outputhtml)
publishHTML([allowMissing: false, alwaysLinkToLastBuild: true, escapeUnderscores: false, keepAll: false, reportDir: htmlfolder, reportFiles: 'final_result.html', reportName: 'Vulnerability Test Report', reportTitles:''])
createSummary(icon:"star-gold.png",text: "${outputhtml}")
}
}
}
edit:
createSummary(icon: "notepad.png", text: readFile("./${html_folder.trim()}/${final_html.trim()}"))
This works; there was a dependency issue. https://plugins.jenkins.io/badge/ is the plugin that is needed.