AllenNLP - configuration to write test set metrics to TensorBoard

When running the AllenNLP train or evaluate CLI commands, is there a configuration option (in the json/jsonnet file) to write test set evaluation metrics to Tensorboard?
If not, how can I do it in my own script?
Thanks in advance for your time and help. Best regards

You could pass your test set in as the validation set in your config file. However, using the test set for validation is not recommended, since it leaks test data into model selection.
You can also use the allennlp evaluate command, which dumps the metrics to a user-specified output file.
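If you want the trainer itself to produce test-set numbers, AllenNLP's train config supports a `test_data_path` together with `evaluate_on_test`, which runs a final evaluation pass after training. A sketch (the paths are made up, and in the versions I'm aware of the test metrics land in the run's `metrics.json`, not in TensorBoard):

```jsonnet
{
  // ... dataset_reader, model, data_loader, trainer ...
  "train_data_path": "data/train.jsonl",
  "validation_data_path": "data/dev.jsonl",
  // Hypothetical path; evaluate_on_test triggers one evaluation on this
  // set after training finishes and records the results in metrics.json.
  "test_data_path": "data/test.jsonl",
  "evaluate_on_test": true
}
```

To get those numbers into TensorBoard you would still have to log them yourself (e.g. read `metrics.json` and write them with a summary writer in your own script).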

Can MyBatis log the complete SQL that can run directly

In the general case the answer is no.
If the query has parameters, MyBatis can't do that even in principle. To log the query, all parameters would have to be serialized and represented as strings. For simple data types like String or Integer this is not a problem, but for more complex ones like Timestamp or Blob the representation may depend on the database.
When the query is executed there's no need to convert parameters to strings, because the JDBC driver passes them to the database in a more efficient (and database-dependent) format. For logging purposes, though, MyBatis has only Java objects, and it does not know how to render them as database-specific string literals.
So the best you can have (and this is supported by MyBatis) is to log the query with placeholders and log the parameters separately. Configure the DEBUG log level for the logger named after the mapper.
For log4j the configuration looks like this:
log4j.logger.my.org.myapp.SomeMapper=DEBUG
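In context, a fuller log4j.properties sketch (the mapper package name above is the asker's; the appender setup is boilerplate). At DEBUG level MyBatis emits the prepared statement with `?` placeholders plus a separate parameters line:

```properties
# Root logger stays at INFO so only the mapper's traffic is verbose.
log4j.rootLogger=INFO, stdout

# DEBUG on the mapper namespace makes MyBatis log the
# "Preparing: ..." statement and "Parameters: ..." lines.
log4j.logger.my.org.myapp.SomeMapper=DEBUG

log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d %5p [%t] %c - %m%n
```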
If you are in a development environment and use IntelliJ IDEA, the Mybatis Log Plugin can help you.
And if it is a production environment, you can copy the log and paste it locally, then use the plugin features Restore Sql from Selection or Restore Sql from Text (new version coming soon).
Detailed introduction:
https://github.com/kookob/mybatis-log-plugin
You can copy com.mysql.cj.jdbc.PreparedStatement into your project directory (keeping the same package path) and call log.info(asSql()) after the fillSendPacket call inside PreparedStatement.execute (for batch operations, use executeBatchInternal).
Your copy of the class will be loaded instead of the original, which is ignored; you can try this with other frameworks and databases.

mysql udf read my.cnf

I'm trying to write a MySQL UDF (User Defined Function) which should read the MySQL configuration file, my.cnf, or access MySQL session and status variables.
How do I do that?
I'm sure there are functions for this implemented somewhere in the MySQL source code.
How do I find them?
Also, is there a good MySQL source API documentation?
Thanks,
krisy
The easiest solution I found was starting MySQL from a script that sets environment variables, and then accessing these variables through the getenv() function from the UDF.
If anyone has a better solution, I'm very interested :-)

Hadoop JUnit testing writing/reading to/from the hdfs

I have written classes that write to and read from HDFS. Under certain conditions that occur when these classes are instantiated, they create a specific path and file and write to it (or they go to a previously created path and file and read from it). I have tested this by running a few Hadoop jobs, and it appears to function correctly.
However, I would like to be able to test this in the JUnit framework, but I have not found a good solution for testing reads and writes against HDFS in JUnit. I would appreciate any helpful advice on the matter. Thanks.
I haven't tried this myself yet, but I believe what you are looking for is org.apache.hadoop.hdfs.MiniDFSCluster.
It is in hadoop-test-.jar NOT hadoop-core-.jar. I guess the Hadoop project uses it internally to test.
Here it is:
http://svn.apache.org/viewvc/hadoop/hdfs/trunk/src/test/hdfs/org/apache/hadoop/hdfs/MiniDFSCluster.java?revision=1127823&view=markup&pathrev=1130381
I think there are plenty of uses of it in that same directory, but here is one:
http://svn.apache.org/viewvc/hadoop/hdfs/trunk/src/test/hdfs/org/apache/hadoop/hdfs/TestWriteRead.java?revision=1130381&view=markup&pathrev=1130381
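A minimal JUnit sketch of the idea, assuming the hadoop-test jar is on the test classpath; the test class and HDFS path are illustrative, and older 0.20-era releases use a `new MiniDFSCluster(conf, 1, true, null)` constructor instead of the Builder shown here:

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.MiniDFSCluster;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class HdfsReadWriteTest {
    private MiniDFSCluster cluster;
    private FileSystem fs;

    @Before
    public void setUp() throws IOException {
        // Spins up an in-process NameNode plus one DataNode on local disk.
        Configuration conf = new Configuration();
        cluster = new MiniDFSCluster.Builder(conf).numDataNodes(1).build();
        fs = cluster.getFileSystem();
    }

    @After
    public void tearDown() {
        if (cluster != null) {
            cluster.shutdown();
        }
    }

    @Test
    public void roundTripsAFile() throws IOException {
        Path path = new Path("/test/data.txt");  // illustrative path
        FSDataOutputStream out = fs.create(path);
        out.writeUTF("hello hdfs");
        out.close();
        // Read the file back through the same in-process HDFS.
        assertEquals("hello hdfs", fs.open(path).readUTF());
    }
}
```

Your own classes would take the `FileSystem` (or its `Configuration`) from the mini cluster, so their reads and writes hit the in-process HDFS instead of a real one.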

SQLCompare generate only diff script

Is it possible to generate only a diff script using the SQLCompare command line from Red Gate SQL Compare?
In our database sync scenario we will use SQLCompare to generate the diff script and Tarantino to apply it. I've played a little with SQLCompare but haven't found a way to generate only the diff script, without syncing the databases.
Thanks
You should install this patch.
The obvious answer with the SQL Compare command line is to use the /sf: argument to specify an output file and not specify /sync. But it sounds like you may be running into some other issue that isn't described here.
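A hedged command-line sketch of that (server and database names are made up, and switch spellings vary between SQL Compare versions, so check the /help output of your copy):

```shell
REM Writes the change script to diff.sql without touching the target database:
REM /sf sets the script output file; omitting /sync means nothing is executed.
sqlcompare /s1:DevServer /db1:SourceDb /s2:ProdServer /db2:TargetDb /sf:diff.sql
```

Tarantino can then pick up diff.sql from wherever you write it.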

In Hudson, how do I set multiple environment variables given a single parameter?

I want to set up a parameterized build in Hudson that only takes one parameter -- the type of build to create (QA, Stage, Production). However, each of those builds requires several different environment variables to be set. Something like (pseudocode):
if ${CONFIG} == "QA" then
${SVN_PATH} = "branches/dev"
${BUILD_CONFIG} = "Debug"
# more environment variables...
else if ${CONFIG} == "Production" then
${SVN_PATH} = "trunk"
${BUILD_CONFIG} = "Release"
# more environment variables...
else # more build configurations...
end if
There are myriad steps in our build -- pull from subversion, then run a combination of MSBuild commands, DOS Batch files, and Powershell scripts.
We typically schedule our builds from the Hudson interface, and I want the parameter entry to be as idiot-proof as possible.
Is there a way to do this?
Since you do so many things for a release, how about scripting all the steps outside of Hudson? You can use Ant, batch files, or whatever scripting language you prefer. Put that script in your SCM so it is under revision control.
Pros:
You get the logic out of Hudson, since Hudson only calls the script.
You don't need to create environment variables that have to persist between shells (global variables).
You can use config/properties files for every environment, which you should also put into version control.
You can run these scripts outside of Hudson if you need to.
Hudson job configuration gets much simpler.
You don't have the side effects of changing global environment variables. You should always try to create jobs that don't change any global settings, because that always invites trouble.
Hudson supports build parameters, which are build-time variables that Hudson makes available as environment variables to your build steps (see the Hudson Parameterized Build wiki page). In your job, you can have one choice parameter CONFIG that the user enters. (To set up the parameter, under your job's configuration, select This build is parameterized.)
Then, you can basically write what you've coded in pseudocode at the start of your build step. Or since you have multiple steps, put the environment setup in a file that's referenced from your existing build scripts. (Depending on your shell, there are various tricks for exporting variables set in a script to the parent environment.) Putting the setup in a file that's integrated with your existing build scripts will make it much easier to manage and test (i.e. it's testable outside of Hudson) as well as give you an easy way to invoke your scripts locally.
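The CONFIG-to-variables mapping from the question could live in exactly such a file. A minimal POSIX shell sketch (the asker's builds use DOS batch/PowerShell, where the same `if`/`set` logic translates directly; the Stage branch path is hypothetical, and the QA default exists only so the script can be tried standalone):

```shell
#!/bin/sh
# build-env.sh: map the Hudson CONFIG build parameter to per-environment
# variables. CONFIG defaults to QA here only for standalone testing.
CONFIG="${CONFIG:-QA}"

case "$CONFIG" in
  QA)         SVN_PATH="branches/dev";   BUILD_CONFIG="Debug"   ;;
  Stage)      SVN_PATH="branches/stage"; BUILD_CONFIG="Release" ;;  # hypothetical branch
  Production) SVN_PATH="trunk";          BUILD_CONFIG="Release" ;;
  *)          echo "Unknown CONFIG: $CONFIG" >&2; exit 1        ;;
esac
export SVN_PATH BUILD_CONFIG
echo "SVN_PATH=$SVN_PATH BUILD_CONFIG=$BUILD_CONFIG"
```

A build step would source it (`. ./build-env.sh`) so the exported variables are visible to the MSBuild and script steps that follow in the same shell.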
You might also consider separating your unified build into separate jobs that perform each of the configurations that you've described. Even though they may reference a central build script, the CONFIG types that you've defined seem like they should be distinct actions and deserve separate jobs.