What's really going on with shared objects? - actionscript-3

I want to test for the existence of a .sol file, which may or may not have been written by my program.
In a test program, let's say that my shared object is defined like this:
var mySharedObject:SharedObject = SharedObject.getLocal('foldername' + '/StoredItem');
and I write the .sol like this:
mySharedObject.data.storedItem = 'storeddatum';
mySharedObject.flush();
Once I've written the sol, when I try to test if it exists with:
if(mySharedObject.data.storedItem)
I get a 'yes' whether or not the file is actually there! That is, if I delete the sol file manually before I run the test, the test still comes up positive! If I run
mySharedObject.clear()
before I do the test I get a 'no', which is what you'd expect. But then, even if I manually replace an exact copy of the sol back into the appropriate folder, I still get a 'no'!
It's as if AS3 won't actually read the file but rather reads the shared object as a static variable. This is a pain because otherwise I have to test for the existence of the sol using the File class.
Any ideas about what 'if(mySharedObject.data.storedItem)' is actually doing?

Related

Psychopy: how to avoid to store variables in the csv file?

When I run my PsychoPy experiment, PsychoPy saves a CSV file that contains my trials and the values of my variables.
Among these, there are some variables I would like NOT to be included. There are some variables I decided to include in the CSV, but many others ended up in it automatically.
Is there a way to manually force (from the code block) the exclusion of some variables from the CSV?
Is there a way to decide the order of the saved columns/variables in the CSV?
It is not really important, and I know I could just create my own output file instead of using the one PsychoPy produces, or easily clean it up afterwards, but I was just curious.
PsychoPy spits out all the variables it thinks you could need. If you want to drop some of them, that is a task for the analysis stage, and is easily done in any processing pipeline. Unless you are analysing data in a spreadsheet (which you really shouldn't), the number of columns in the output file shouldn't really be an issue. The philosophy is that you shouldn't back yourself into a corner by discarding data at the recording stage - what about the reviewer who asks about the influence of a variable that you didn't think was important?
If you are using the Builder interface, the saving of onset & offset times for each component is optional, and is controlled in the "data" tab of each component dialog.
The order of variables is also not under direct control of the user, but again, can be easily manipulated at the analysis stage.
As you note, you can of course write code to save custom output files of your own design.
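For instance, trimming and reordering columns at the analysis stage might look like this (a minimal sketch using pandas, not part of PsychoPy; the file and column names are hypothetical):
import pandas as pd

df = pd.read_csv('my_experiment.csv')  # the CSV PsychoPy saved

# keep only the columns you care about, in the order you want
wanted = ['participant', 'trial_n', 'response', 'rt']
df = df[[c for c in wanted if c in df.columns]]

df.to_csv('my_experiment_trimmed.csv', index=False)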
There is a special block called session_variable_order: [var1, var2, var3] in the experiment_config.yaml file, which you probably should be using; also, consider these methods:
from psychopy import data
data.ExperimentHandler.saveAsWideText(fileName='exp_handler.csv', delim='\t', sortColumns=False, encoding='utf-8')
data.TrialHandler.saveAsText(fileName='trial_handler.txt', delim=',', encoding='utf-8', dataOut=('n', 'all_mean', 'all_raw'), summarised=False)
Notice the sortColumns and dataOut parameters.

How to run a Julia function on a specific processor using remotecall(), when the function itself has no return value

I tried to use remotecall() in Julia to distribute work to a specific processor. The function I'd like to run has no return value, but it writes output to a file. I can't make it work: there is no output file after running the code.
This is the test code I created:
using DelimitedFiles
addprocs(4) # add 4 processors
@everywhere function test(x) # define the function
    print("hi")
    writedlm(string("test", string(x), ".csv"), [x], ',')
end
remotecall(test, 2, 2) # to run the function on process 2
remotecall(test, 3, 3) # to run the function on process 3
This is the output I am getting:
Future(3, 1, 67, nothing)
And there is no output file (CSV), and no "hi" is shown.
I wonder if anyone can help me with this or tell me what I did wrong. I am fairly new to Julia and have never used parallel processing.
The background is that I need to run a big simulation (a big function with a bunch of includes, but no direct return outputs) lots of times, and I'd like to split the work across different processors.
Thanks a lot
If you want to use a module function in a worker, you need to import that module locally in that worker first, just like you have to do in your 'root' process. Therefore your using DelimitedFiles directive needs to occur @everywhere, rather than just on the 'root' process. In other words:
@everywhere using DelimitedFiles
Btw, I am assuming you're using a relatively recent version of Julia and simply forgot to add the using Distributed directive in your example.
Furthermore, when you perform a remote call, what you get back is a "Future" object, which is a way of allowing you to obtain the 'future results of that computation' from that worker, once they're finished. To get the results of that 'future computation', use fetch.
This is all very simplistic and general information, since you haven't provided a specific example that can be copy / pasted and answered specifically. Have a look at the relevant section in the manual, it's fairly clearly written: https://docs.julialang.org/en/v1/manual/parallel-computing/#Multi-Core-or-Distributed-Processing-1
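Putting those points together, a corrected version of the snippet from the question might look something like this (a sketch, assuming Julia ≥ 1.0):
using Distributed
addprocs(4)

@everywhere using DelimitedFiles   # load the module on every worker

@everywhere function test(x)
    println("hi from worker ", myid())
    writedlm(string("test", x, ".csv"), [x], ',')
end

f2 = remotecall(test, 2, 2)   # returns a Future immediately
f3 = remotecall(test, 3, 3)

fetch(f2)   # wait for the remote calls (and surface any remote errors)
fetch(f3)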

Save function results when a script is executed

Pretty new to Python. I have a machine learning script, and every time the script is run I would like to save the results. What I don't understand is: if all the code is in one script, how do I save the results without overwriting them? For example:
auc_score = cross_val_score(logreg_model, X_RFECV, y_vars, cv=kf, scoring='roc_auc').mean()
auc_scores = []

def auc_log():
    auc_scores.append(auc_score)
    return auc_scores

auc_log()
Every time I run this .py file, the auc_scores list starts out empty, and it only grows while the function is being executed; but if you run the whole script, then obviously the code above runs again and resets the saved list to empty. I feel this is fairly simple, I'm just not thinking about it properly from a continuous-deployment perspective. Thanks!
It may be better to pass each result list (or an initial empty list) as arguments of the auc_log function, so that every function result is kept.
For example,
auc_score = cross_val_score(logreg_model, X_RFECV, y_vars, cv=kf, scoring='roc_auc').mean()
# if auc_score is an 'int' or 'float', you must convert it to a list
auc_score_ = []
auc_score_.append(auc_score)
auc_score_zero = []

def auc_log(auc_score_1, auc_score_2):
    auc_scores = auc_score_1 + auc_score_2
    return auc_scores

initial_log = auc_log(auc_score_zero, auc_score_)
# print(initial_log)
second_log = auc_log(initial_log, auc_score_)
# print(second_log)
If you want to save each auc_log list to disk after returning the result at each step, the pickle module is convenient for that.
I'm not sure this is really what you want, but I hope my answer contributes to solving your question.
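For instance, persisting the running list of scores between script executions with pickle might look like this (a minimal sketch; the file name is hypothetical):
import os
import pickle

SCORES_FILE = 'auc_scores.pkl'  # hypothetical file name

# load previous results if the file exists, otherwise start fresh
if os.path.exists(SCORES_FILE):
    with open(SCORES_FILE, 'rb') as f:
        auc_scores = pickle.load(f)
else:
    auc_scores = []

auc_scores.append(auc_score)  # auc_score computed earlier in the script

with open(SCORES_FILE, 'wb') as f:
    pickle.dump(auc_scores, f)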

Jmeter: set property for each loop

I'm trying to create a test that will loop depending on the number of files stored in one folder then output results base on their filename. I'm thinking to use their filename as the name of their result, so for this, I created something like this in BS preProcessor:
props.setProperty("filename", vars.get("current_tc"));
Then use it for the name of the result:
C:\\TEST\\Results\\${__property(filename)}
"current_tc" is the output variable name of a ForEach controller. It returns different value on each loop. e.g loop1 = test1.csv, loop2 = test2.csv ...
I'm expecting that the result name will be test1.csv, test2.csv .... but the actual result is just test1.csv and the result of the other file is also in there. I'm new to Jmeter. Please tell me if I'm doing an obvious mistake.
Test Plan Image
The way you are setting the property seems okay; the question is where and how you are trying to use this C:\\TEST\\Results\\${__property(filename)} line, so a snapshot of your test plan would be very useful.
In the meantime I would recommend the following:
Check jmeter.log file for any suspicious entries, if something goes wrong - most probably you will be able to figure out the reason using this file. Normally it is located in JMeter's "bin" folder
Use a Debug Sampler and View Results Tree listener combination to check your ${current_tc} variable value; maybe it is a case of the variable not being incremented. See the How to Debug your Apache JMeter Script article to learn more about troubleshooting techniques.
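As an extra sanity check, you could also log the value from the PreProcessor itself on every iteration, something like this (a sketch for a JSR223/BeanShell PreProcessor, using the standard pre-defined log, vars and props script variables):
String current = vars.get("current_tc");
log.info("current_tc for this iteration: " + current);
props.setProperty("filename", current);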

Junit: String return for AssertEquals

I have test cases defined in an Excel sheet. I am reading a string from this sheet (my expected result) and comparing it to a result I read from a database (my actual result). I then use AssertEquals(expectedResult, actualResult), which prints any errors to a log file (I'm using log4j), e.g. I get java.lang.AssertionError: Different output expected:<10> but was:<7> as a result.
I now need to write that result into the Excel sheet (the one that defines the test cases). If only AssertEquals returned a String with the AssertionError text, that would be great, as I could just write it straight to my Excel sheet. Since it returns void, though, I'm stuck.
Is there a way I could read the AssertionError without parsing the log file?
Thanks.
I think you're using JUnit incorrectly here. This is why:
It's assertEquals, not AssertEquals ( ;) )
You shouldn't need to log. You should just let the assertions do their job. If it's all green then you're good and you don't need to check a log. If you get blue or red (Eclipse colours :)) then you have problems to look at. Blue is a failure, which means your assertions are wrong, for example you get 7 but expect 10. Red means an error: you have a null pointer or some other exception that is thrown while you are running.
You shouldn't need to read from an Excel file or database for unit tests. If you really need to coordinate with other systems then you should try to stub or mock them. With the unit test you should focus on testing the method in code.
If you are bootstrapping on JUnit to try to compare an Excel sheet and a database, then I would just export the table to Excel as well and then do a comparison in Excel between columns.
Reading from/writing to files is not really what tests should be doing. The input for the tests should be defined in the test, not in an external file which can change - this can introduce false negatives or, even worse, false positives (making your tests effectively useless while also giving false confidence that everything is OK because the tests are green).
Given your comment (a loop with 10k different parameters coming from a file), I would recommend converting this Excel file into a JUnit parameterized test. You may want to put the array definition in another class, because 10k lines is quite a lot.
If it is some corporate bureaucracy and you need to have this Excel file, then it makes sense not to write a classic "test". I would recommend just a main method that does the job - reads the file, runs the code, checks the output using a simple if (output.equals(expected)), and then writes back to the file.
Wrap your assertEquals(expectedResult, actualResult) in a try/catch and handle the AssertionError in the catch block:
try {
    assertEquals(expectedResult, actualResult);
} catch (AssertionError e) {
    // deal with e.getMessage() etc.
}
But it's not a good idea, for several reasons, I guess.
Also, try googling something like "soft assert".
Documentation on assertEquals is pretty clear on what the method does:
Asserts that two objects are equal. If they are not, an AssertionError
without a message is thrown.
You need to wrap the assertion in a try-catch block and do your logging in the exception handling. You will need to build your own message using the information from the specific test case, but this is what you asked for.
Note:
If expected and actual are null, they are considered equal.
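Putting the try-catch suggestions above together, a sketch of capturing the assertion message for writing back to the sheet might look like this (the recordResult method is a hypothetical placeholder for however you update your Excel file):
import static org.junit.Assert.assertEquals;

public class ExcelBackedCheck {

    public void checkTestCase(String caseId, String expected, String actual) {
        try {
            assertEquals("Different output", expected, actual);
            recordResult(caseId, "PASS");
        } catch (AssertionError e) {
            // e.getMessage() contains the "expected:<...> but was:<...>" text
            recordResult(caseId, "FAIL: " + e.getMessage());
        }
    }

    private void recordResult(String caseId, String outcome) {
        // hypothetical: write 'outcome' into the row for 'caseId' in the sheet
        System.out.println(caseId + " -> " + outcome);
    }
}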