Specflow Living Doc - TestExecution.json file is not updating when using "dotnet test" or "vstest.console" - selenium-chromedriver

So I've been trying to implement a script that runs the tests in my suite and then runs the LivingDoc generate command. This all works fine, except that when I run the tests via "dotnet test" or "vstest.console", TestExecution.json is not updated with the results; it is only updated when I run them via Test Explorer in VS.
Am I missing a step somewhere? I couldn't find anything in the docs. Here is the script I am running:
> $scriptpath = $MyInvocation.MyCommand.Path
> $dir = Split-Path $scriptpath
>
> dotnet test --filter TestCategory=Smoke
>
> Set-Location "$dir\bin\Debug\net6.0"
> livingdoc test-assembly Specflow.Actions.Framework.dll -t TestExecution.json
> Set-Location $dir
The rest of my suite is pretty basic; I am using NUnit with SpecFlow and SpecFlow.Actions.Selenium for my browser interactions.
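One thing worth checking (an assumption on my part, not something stated above): generation of TestExecution.json is controlled by the livingDocGenerator section of specflow.json, and the file is written next to the test assembly. A minimal specflow.json sketch:

```json
{
  "livingDocGenerator": {
    "enabled": true,
    "filePath": "TestExecution.json"
  }
}
```

If specflow.json isn't marked "Copy to Output Directory" in the test project, a run started from "dotnet test" may not pick it up even though Test Explorer does.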

'gcloud functions deploy' deploys code that cannot listen to Firestore events

When I try to use the gcloud CLI to deploy a small Python script that listens to Firestore events, the deployed function fails to receive the Firestore events. If I use the inline web UI or the web zip upload, the script actually listens to Firestore events. The command line doesn't show any errors.
Deploy script
gcloud beta functions deploy print_name \
--runtime python37 \
--service-account <myprojectid>@appspot.gserviceaccount.com \
--verbosity debug \
--trigger-event providers/cloud.firestore/eventTypes/document.create \
--trigger-resource projects/<myprojectid>/databases/default/documents/Test/{account}
main.py
def print_name(event, context):
    value = event["value"]["fields"]["name"]["stringValue"]
    print("New name: " + str(value))
gcloud --version
Google Cloud SDK 243.0.0
beta 2019.02.22
bq 2.0.43
core 2019.04.19
gsutil 4.38
The document is pretty basic (has a name string field).
Any ideas? I'm curious if the gcloud CLI has a bug.
The inline web UI and zip uploader work great. I've tried multiple variations of this (e.g. removing 'beta', adding and removing different deploy args).
I'd expect the script to actually listen to Firestore events.
The "default" in trigger-resource needs parentheses around it.
gcloud beta functions deploy print_name \
--runtime python37 \
--service-account <myprojectid>@appspot.gserviceaccount.com \
--verbosity debug \
--trigger-event providers/cloud.firestore/eventTypes/document.create \
--trigger-resource "projects/<myprojectid>/databases/(default)/documents/Test/{account}"

Export .MWB to working .SQL file using command line

We recently installed a server dedicated to unit tests, which deploys updates automatically via Jenkins when commits are made, and sends mails when a regression is noticed.
> This requires our database to always be up-to-date
Since the database schema reference is our MWB, we added some scripts to the deploy step which export the .mwb to a .sql (using Python). This worked fine... but still has some issues.
Our main concern is that the functions attached to the schema are not exported at all, which makes the DB unusable.
We'd like to hack into the Python code to make it export the scripts... but didn't find enough information about it.
Here is the only piece of documentation we found. It's not very clear to us, and we didn't find any information about exporting scripts.
All we found is that a db_Script class exists. We don't know where we can find its instances in our execution context, nor whether they can be exported easily. Did we miss something?
For reference, here is the script we currently use for the .mwb to .sql conversion (mwb2sql.sh).
It calls MySQL Workbench from the command line (we use a dummy X server to flush the graphical output).
What we need to complete is the Python part passed in our command-line call of Workbench.
# generate sql from mwb
# usage: sh mwb2sql.sh {mwb file} {output file}
# prepare: set env MYSQL_WORKBENCH
set -e
if [ "$MYSQL_WORKBENCH" = "" ]; then
  export MYSQL_WORKBENCH="/usr/bin/mysql-workbench"
fi
# resolve both arguments to absolute paths
export INPUT=$(cd "$(dirname "$1")"; pwd)/$(basename "$1")
export OUTPUT=$(cd "$(dirname "$2")"; pwd)/$(basename "$2")
"$MYSQL_WORKBENCH" \
  --open "$INPUT" \
  --run-python "
import os
import grt
from grt.modules import DbMySQLFE as fe
c = grt.root.wb.doc.physicalModels[0].catalog
fe.generateSQLCreateStatements(c, c.version, {})
fe.createScriptForCatalogObjects(os.getenv('OUTPUT'), c, {})" \
  --quit-when-done

Using startup script from .net api

I'm trying to launch an instance with a startup script via the Compute Engine .NET API.
Here's the code I'm using:
var start = new Google.Apis.Compute.v1.Data.Metadata.ItemsData();
start.Key = "startup-script";
start.Value = "C:\\Users\\User\\Desktop\\script.sh";
newinst.Metadata = new Google.Apis.Compute.v1.Data.Metadata();
newinst.Metadata.Items = new List<Google.Apis.Compute.v1.Data.Metadata.ItemsData>();
newinst.Metadata.Items.Add(start);
and this is my script:
#! /bin/sh
gsutil cp gs://bucket/file dir
dir is an existing directory in the image. The instance launches but there's no trace of that command being run.
Further info: from looking at the logs, it looks like a script is found in the metadata and the instance thinks it's running it, but no commands are executed.
For anyone interested, what I needed here was to add:
newinst.Metadata.Kind = "compute#metadata";
before executing the InsertRequest or it won't use the script.
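As a side note, the REST body that the .NET client ultimately sends has the shape sketched below. Worth noting (my reading of the metadata mechanism, not part of the answer above): the value of the startup-script key must be the script's contents, not a path on the client machine, so a local path like the one in the question will never be executed; a script stored in Cloud Storage would instead go under a startup-script-url key.

```python
# Sketch of the metadata section of a Compute Engine instance insert body.
# The startup-script value is the script text itself, not a local file path.
script_text = "#! /bin/sh\ngsutil cp gs://bucket/file dir\n"

metadata = {
    "kind": "compute#metadata",  # the field the fix above adds
    "items": [
        {"key": "startup-script", "value": script_text},
    ],
}

print(metadata["kind"])  # prints "compute#metadata"
```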

Run Net Share command in powershell script

I'm trying to share a folder in PowerShell with the net share command. I can't use a group or user name for the share permissions because this script will be used on systems with different OS languages, so I use the group/user SID to set up the share permissions instead.
Here is my script. My function works great on its own, but it doesn't work inside the "NET SHARE" command:
function Get-GroupName {
    param ($SID)
    $objSID = New-Object System.Security.Principal.SecurityIdentifier($SID)
    $objUser = $objSID.Translate([System.Security.Principal.NTAccount])
    $objUser.Value
}
# Share Folder + Set Share Permission SID Based
cmd /c net share MSI=C:\MSI /GRANT:(Get-GroupName -SID 'S-1-1-0'),READ
This will work:
cmd /c $( "net share MSI=C:\MSI /GRANT:""$(Get-GroupName -SID 'S-1-1-0')"",READ" )
But if you're on Windows 8 or newer, or Windows Server 2012 or newer, you can use the Set-SmbShare and Grant-SmbShareAccess cmdlets instead:
http://technet.microsoft.com/en-us/library/jj635727
http://technet.microsoft.com/en-us/library/jj635705
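For example, a sketch of the SmbShare approach, assuming the same Get-GroupName helper defined above (untested, shown only to illustrate the cmdlets):

```powershell
# Create the share, then grant read access to the account resolved from the SID
New-SmbShare -Name 'MSI' -Path 'C:\MSI'
Grant-SmbShareAccess -Name 'MSI' -AccountName (Get-GroupName -SID 'S-1-1-0') -AccessRight Read -Force
```

This sidesteps the quoting problems entirely, since the SID-to-name translation happens in PowerShell rather than inside a cmd.exe command line.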

export shell function to su as a user with ksh default shell

I have a situation where only root can mailx, and only ops can restart the process. I want to make an automated script that both restarts the process and sends an email about doing so.
When I try this using a function the function is "not found".
I had something like:
#!/usr/bin/bash
function restartprocess {
/usr/bin/processcontrol.sh start
}
export -f restartprocess
su - ops -c "restartprocess"
mailx -s "process restarted" myemail@mydomain.com < emailmessage.txt
exit 0
It told me that the function was not found. After some troubleshooting, it turned out that the ops user's default shell is ksh.
I tried changing the script to run in ksh, and changing "export -f" to "typeset -xf", and still the function was not found. Like:
ksh: exportfunction not found
I finally gave up and just called the script (that was in the function directly) and that worked. It was like:
su - ops -c "/usr/bin/processcontrol.sh start"
(This is all of course a simplification of the real script).
Given that the ops user's default shell is ksh and I can't change that or modify sudoers, is there a way to export a function such that I can su as ops (and I need ops's profile to be run) and execute that function?
I made sure ops user had permission to the directory of the script I wanted it to execute, and permission to run that script.
Any education about this would be appreciated!
There are many restrictions on exporting functions, especially combined with su - ... using different accounts and different shells.
Instead, turn your script inside out and put all of the commands to be run inside a function in the calling shell.
Something like this (works in both bash and ksh):
#!/usr/bin/bash
function restartprocess {
    /bin/su - ops -c "/usr/bin/processcontrol.sh start"
}
if restartprocess; then
    mailx -s "process restarted" \
        myemail@mydomain.com < emailmessage.txt
fi
exit 0
This will hide all of the /bin/su processing inside the restartprocess function, and can be expanded at will.