Is there a way to sort test run results by time in TestRail?

We use Testrail for test case management. I want to be able to sort the test run results by the execution time of the test cases. Is this possible?
The only way I can see the execution time of the test case is if I open each test case instance individually.

On the test run results page, click the column icon on the right-hand side and add the "Elapsed" column to your table. You can then sort by that column.

Related

Foundry Scenarios edited data materialized as dataset

Is it possible to materialize the edits made as part of a scenario into a dataset in Foundry?
For each scenario, I want to write out the primary keys of the objects edited as part of that scenario.
The motivation is that I need to run multiple processes to compute metrics as part of the changed values for each scenario, at a scale and runtime that is not possible to do with Functions.
Edit with details:
The thing is that I am not making actual edits to the objects of the object type; I don't want to apply the scenario.
I tested out the "Action Log" and it does not seem to pick up "uncommitted" actions, meaning actions that are just run as part of a scenario. Also, there does not seem to be a link back to the scenario an action was part of, even when the changes were committed.
The workflow is that I have Object Type A, and I define multiple scenarios S on a subset of the objects in A.
Each scenario might make something like 50k edits to a subset of A, through multiple Actions backed by a Function.
I save some of the scenarios. Now I am able to load these scenarios and "apply" them on A again in Workshop.
However I need to be able to get all the primary keys, and the edited values of A materialized into a dataset (for each scenario), as I need to run some transformation logic to compute a metric for the change as part of each scenario (at a scale and execution time not possible in Functions).
The Action Log did not seem to help a lot for this. How do I get the "edits" as part of a saved scenario into a dataset?
The only logic you can run BEFORE applying will be functions.
I'm not sure about your exact logic, but Functions' Custom Aggregations can be very powerful: Docs here
This might not directly let you calculate the diff, but you could use the scenario compare widgets in Workshop to compare your aggregation between multiple scenarios.
e.g. you have a function that computes sum(total profit).
Your Workshop could show:
Current Data:
$10k
Scenario A:
$5k
Scenario B:
$13k
instead of showing just the diffs, like:
Scenario A:
-$5k
Scenario B:
+$3k
Afaik there's no first class way of doing this (yet).
"Applying" a scenario basically means you're submitting the actions queued on the scenario to the ontology. So neither the actions nor the ontology are aware that they came from a scenario.
What I've been doing to achieve what you're working on is using the "Action Log". It's still in Beta, so you might need to ask for it to be enabled. It allows you, for each action, to define a "log" object to be created that can track the PKs of your edited objects per action.
How I do the action log is:
My Action log has the "timestamp" of the action when they were run.
My Scenarios have the "timestamp" of when it was applied.
Since "applying a scenario" means actually running all of its actions on the Ontology (the underlying data), sorting everything by timestamp gives me this structure:
Action 1
Action 2
Scenario A applied
Action 3
Action 4
Scenario B applied
This allows you to do a mapping later on: Actions 1 and 2 must come from Scenario A, and Actions 3 and 4 from Scenario B.
EDIT: Apparently you might be able to use the Scenario RID directly in the Action Logs too (a recent addition I haven't adopted yet).
This still won't allow you to compute anything (in transforms...) BEFORE applying a scenario, though.
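The timestamp-based mapping described above can be sketched in plain Python. The action names, scenario names, and timestamps here are made up for illustration; in practice they would come from your Action Log objects and your scenario metadata:

```python
from datetime import datetime

# Hypothetical action-log timestamps (when each action ran) and
# scenario-apply timestamps (when each scenario was applied).
actions = [
    ("action-1", datetime(2023, 5, 1, 10, 0)),
    ("action-2", datetime(2023, 5, 1, 10, 1)),
    ("action-3", datetime(2023, 5, 1, 11, 0)),
    ("action-4", datetime(2023, 5, 1, 11, 2)),
]
scenario_applies = [
    ("Scenario A", datetime(2023, 5, 1, 10, 5)),
    ("Scenario B", datetime(2023, 5, 1, 11, 10)),
]

def map_actions_to_scenarios(actions, applies):
    """Attribute each action to the earliest scenario applied at or after it."""
    applies = sorted(applies, key=lambda s: s[1])
    mapping = {}
    for action_id, ts in actions:
        mapping[action_id] = next(
            (name for name, applied in applies if applied >= ts), None
        )
    return mapping

print(map_actions_to_scenarios(actions, scenario_applies))
```

Note this attribution only works if no unrelated actions run between a scenario's actions and its apply timestamp, which is the assumption behind the sorted structure above.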

DWH Reload data

In a DWH loaded in monthly increments, I have a task to create a process that can reload an arbitrary month in the DWH.
Let's say we reload the data for February 2021 in the existing DWH.
If I reload the data for February 2021 on 2021/08/15, my SCD2 dimension Customer will end up like this:
I could have wrong dimension attributes until the next load, and the dates in DateFrom/DateTo will be messed up.
Questions:
Is it a good approach to reload a single month?
If yes, any advice on how to deal with it?
In this case I would prefer a full reload of the DWH. Is this a good idea?
I am working on SQL Server using the SSIS ETL tool.
Thanks
If you are just running your existing process, then in order to reload data you would need to roll back your DWH to the point prior to the incorrect data, apply the updated dataset again, and then reapply all the subsequent datasets.
Obviously, this is a significant piece of work so not a good idea unless you have no other choice and definitely not something you’d want to run regularly.
If you do want to be able to re-apply a single dataset from the past, then you'd need to write a process to do this, e.g.:
Identify the existing records that correspond to your updated dataset and delete them
Insert your updated dataset taking into account previous and subsequent records
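A minimal sketch of those two steps against an SCD2 Customer dimension. The table layout and data are hypothetical, and SQLite stands in for SQL Server purely for illustration; the point is that the corrected versions must still chain cleanly with the surrounding DateFrom/DateTo ranges:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customer_dim (
    customer_id INTEGER,
    city        TEXT,
    date_from   TEXT,
    date_to     TEXT    -- '9999-12-31' marks the open/current version
);
-- The incorrect February 2021 load created a wrong 'Prague' version:
INSERT INTO customer_dim VALUES
    (1, 'Berlin', '2021-01-05', '2021-02-09'),
    (1, 'Prague', '2021-02-10', '9999-12-31');
""")

# Step 1: delete the versions created by the bad February load.
conn.execute("""
DELETE FROM customer_dim
WHERE date_from BETWEEN '2021-02-01' AND '2021-02-28'
""")

# Step 2: insert the corrected version, taking the previous record into
# account: its date range must start where 'Berlin' closes (2021-02-09).
conn.execute(
    "INSERT INTO customer_dim VALUES (1, 'Vienna', '2021-02-10', '9999-12-31')"
)

rows = conn.execute(
    "SELECT city, date_from, date_to FROM customer_dim ORDER BY date_from"
).fetchall()
print(rows)
```

If the reload also changes which version is current, the DateTo of the preceding version would need a corresponding UPDATE so the chain stays gapless.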

Deleting/Copying the data from Database with specific condition in mySQL

I am looking for an idea for handling data directly in the DB.
Here is my use case:
I have a table "EVENT_REGISTERED" with columns (ID, Event_name, Event_DateTime).
I want to display in the front end only the EVENT_REGISTERED rows whose date and time have not yet passed, i.e. only upcoming events, not historical events.
Of course this can be handled with JS code before displaying.
But what I want is some sort of trigger which will delete the instance from the "EVENT_REGISTERED" table and copy it to another table "HISTORICAL_EVENT".
I cannot create a MySQL EVENT to do this, as that is like a batch job, and I cannot run it every 5 minutes since there can be more than 10,000 rows in the table.
I see the trigger option as well, but I am not sure how to use it, as the docs say a trigger is activated after a specific action is executed. Here the specific action would be CURRENT_DATETIME == EVENT_DATETIME.
Can anybody give me a direction or any sort of alternative way to achieve this?
*I am not an expert in MySQL.*
Thank you
Regards
Prat
Don't start moving rows between tables. Simply run a query:
select er.*
from event_registered er
where er.event_datetime > now();
With an index on (event_datetime), performance should be fine.
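The same idea in runnable form, assuming the schema from the question. SQLite stands in for MySQL here, so a bound "now" parameter replaces NOW(); no rows are moved or deleted, the WHERE clause simply filters out history at query time:

```python
import sqlite3
from datetime import datetime, timedelta

conn = sqlite3.connect(":memory:")
conn.execute("""
CREATE TABLE event_registered (
    id             INTEGER PRIMARY KEY,
    event_name     TEXT,
    event_datetime TEXT
)""")

now = datetime(2024, 1, 15, 12, 0)
conn.executemany(
    "INSERT INTO event_registered (event_name, event_datetime) VALUES (?, ?)",
    [("past meetup", (now - timedelta(days=1)).isoformat(sep=" ")),
     ("next meetup", (now + timedelta(days=1)).isoformat(sep=" "))],
)

# Only upcoming events come back; historical rows stay in place but are
# never shown.
upcoming = conn.execute(
    "SELECT event_name FROM event_registered WHERE event_datetime > ?",
    (now.isoformat(sep=" "),),
).fetchall()
print(upcoming)  # [('next meetup',)]
```

If the historical rows must eventually be archived for size reasons, that can be a low-frequency batch job; correctness of the display never depends on it.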

Populating multiple records from access form

My company keeps records of job codes & purchase order numbers as machines are used, etc. I am trying to create a form where an individual can populate a job number and then input the time a machine was used (machine name and hours) for that job along with a few other fields. My primary question is if there is a way to populate one job code for multiple machines/hours. This would eventually be used for employee time keeping also.
You can do this by creating an update query.
See Create and run an update query or Update Queries in Microsoft Access
Not sure how much of a newbie you are, but here's what I would do from start to finish.
1) Create an update query by going to Create -> Query Design.
2) Add your table to the query
3) In the upper-left part of the screen, "select" will be selected. Click on "update" to create an update query
4) In your query criteria, add whatever criteria you need to make it apply to multiple machines or hours. For example, for machines that have worked between January 1, 2017 and June 1, 2017 you would add (you need #'s around dates in Access queries):
>=#1/1/2017# And <=#6/1/2017#
to your date field. Or let's say you have three machines named A1, B2, and C3, and you only want to apply the job code to A1 and B2. Under the field that denotes your machine name, your criteria would be:
"A1" or "B2"
Whether you use the date or machine field, you'll need to use the update query to input the job code for all applicable cases. Let's say you want to make the job code "Code124" for all records that meet the criteria you specified. Under the "Update To" line in your query, type in "Code124" and hit "Run" (the icon in the upper left corner with an exclamation point). That should do it.
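Under the hood, the query designer generates an UPDATE statement with those criteria. The sketch below shows the equivalent logic with made-up table and column names, run against SQLite for illustration (Access would write the dates as #1/1/2017#):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE machine_log (
    machine_name TEXT,
    work_date    TEXT,
    job_code     TEXT
);
INSERT INTO machine_log (machine_name, work_date) VALUES
    ('A1', '2017-02-10'),
    ('B2', '2017-05-20'),
    ('C3', '2017-03-01'),
    ('A1', '2017-09-01');
""")

# Set one job code for every row matching both criteria: the machine
# name is A1 or B2, and the work date falls in the chosen range.
conn.execute("""
UPDATE machine_log
SET job_code = 'Code124'
WHERE machine_name IN ('A1', 'B2')
  AND work_date BETWEEN '2017-01-01' AND '2017-06-01'
""")

updated = conn.execute(
    "SELECT machine_name FROM machine_log "
    "WHERE job_code = 'Code124' ORDER BY machine_name"
).fetchall()
print(updated)  # [('A1',), ('B2',)]
```

The C3 row and the out-of-range A1 row are left untouched, which is exactly what the criteria rows in the designer express.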

MySQL Trigger After Any Changes

I am looking for a way to create a trigger that fires after any change occurs in a table, on any row or field.
I want my web app to automatically refresh if there have been any changes to the data since it was last loaded. For this I need a "modified_on" attribute that applies to the whole table, not just a row.
Not sure what database triggers have to do with this problem, as they are not going to be able to trigger any behavior at the web application level. You will need to build logic in your web application to inspect the data looking for a change. Most likely, this would take the form of some client-triggered refresh process (i.e. AJAX), which would need to call an application script that takes information from the client on when it last checked for an update and compares it to the most recently updated row(s) in the table. As long as you have a timestamp/datetime field on the table and update it in each row when the row is updated, you can retrieve all updated rows via a simple query such as
SELECT {fields} FROM {table}
WHERE {timestamp field} > '{last time checked}'
If you want, you could use this to update only those rows in the application view which need updating rather than re-rendering the whole table (this would minimize response bandwidth/download time, rendering time, etc.). If you simply want to check whether the table has been updated since some point in time, but don't care about individual rows, you can just check that the above query returns 1 or more rows.
If you don't want the client application view to have to check at regular intervals (as would likely be done with AJAX), you might also consider websockets or similar to enable bi-directional client-server communication, but this still wouldn't change the fact that your server-side application would need to query the database to look for changed records.
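The server-side check behind that AJAX endpoint can be sketched as follows. The table, column names, and timestamps are made up for illustration; the essential piece is comparing the client's last-checked timestamp against a `modified_on` column:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE items (
    id          INTEGER PRIMARY KEY,
    payload     TEXT,
    modified_on TEXT
);
INSERT INTO items (payload, modified_on) VALUES
    ('old row', '2024-01-01 09:00:00'),
    ('new row', '2024-01-01 10:30:00');
""")

def changed_since(conn, last_checked):
    """Return rows updated after the client's last poll; an empty list
    means the client's view is still current."""
    return conn.execute(
        "SELECT id, payload FROM items WHERE modified_on > ?",
        (last_checked,),
    ).fetchall()

print(changed_since(conn, "2024-01-01 10:00:00"))  # [(2, 'new row')]
```

The client would call this via AJAX at an interval (or over a websocket), passing its last poll time and re-rendering only the returned rows.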