Running a Julia script in a remote Jupyter Notebook, with arguments

I am running a Julia script in a Jupyter Notebook on a remote host, using the following command in a Jupyter environment:
jupyter nbconvert --to notebook --execute my_notebook.ipynb
which works fine. However, when I try to pass arguments to the Jupyter Notebook, with the intention that they are finally passed on to the Julia script, I fail. My question is: how do I do it?
To pass arguments to the Jupyter Notebook I modified the above command to
jupyter nbconvert --to notebook --execute my_notebook.ipynb arg1 arg2 arg3
Also, in the Julia script I try to recover the arguments (which are supposed to be integers small enough to fit in an Int16) via
x1 = parse(Int16, ARGS[1])
x2 = parse(Int16, ARGS[2])
x3 = parse(Int16, ARGS[3])
which doesn't work.
I tried to understand what is in ARGS, but I can't decipher what it means. The output of
println(ARGS)
included in the Julia script is
"/tmp/tmp8vuj5f79.json"
Coming back to the parsing code above: a few errors occur, since ARGS[1] obviously can't be converted to an integer.
Another error, which occurs when passing the arguments on the nbconvert command line, is
[NbConvertApp] WARNING | pattern 'arg1' matched no files
[NbConvertApp] WARNING | pattern 'arg2' matched no files
[NbConvertApp] WARNING | pattern 'arg3' matched no files
I might be approaching the problem from a completely wrong perspective, so any help would be very much appreciated!

It looks like it's not possible to pass command-line arguments to notebooks run with --execute.
The WARNING | pattern 'arg1' matched no files messages indicate that nbconvert treats these arguments as additional files to convert, not as arguments to the notebook. The "/tmp/tmp8vuj5f79.json" you see in ARGS is the Jupyter kernel's connection file, which is the only argument the kernel process itself receives, so ARGS never contains anything you put on the nbconvert command line.
The common suggestion is to use environment variables instead:
X=12 Y=5 Z=42 jupyter nbconvert --to notebook --execute my_notebook.ipynb
which you can then access from within the notebook as
ENV["X"], ENV["Y"], and ENV["Z"].

How can I read xlsx files into Octave matlab?

So, I tried to read an xlsx file using the readtable function in Octave, but it put out a warning message: "the 'readtable' function is not yet implemented in Octave". So, what is a way I could possibly read xlsx files in Octave?
This is a very late answer, but I am writing it to provide a solution for those who run into this problem in the future.
To read .xlsx files you need the Octave io package, because this package includes the xlsread() function. You can check whether you have this package by typing "pkg list" in the Octave console. Then, at the beginning of the code, you should write the command "pkg load io". You can now use the xlsread() function.
The outputs of the xlsread and readtable functions are not exactly the same, so you have to make changes to how you handle the output.
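A minimal Octave sketch of this (the filename data.xlsx is a placeholder, and the io package is assumed to be installed):
% if the package is missing, install it first: pkg install -forge io
pkg load io                              % load the io package once per session
[num, txt, raw] = xlsread('data.xlsx');  % num: numeric cells, txt: text cells, raw: everything as a cell array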

PyCharm Tests Add Shell Command to Additional Arguments

I'm still pretty new to running anything in PyCharm more advanced than a simple script. I'm writing a test in pytest right now and I want the test results output to a JUnit XML file; I'm thinking the best naming convention will be based on the current date/time, so I am trying to pipe in the current date, using the date shell command, as an environment variable, as seen below:
Current Configuration: (screenshot of the run configuration omitted)
However, when I run the configuration as-is, it just names the .xml file based on the command without actually executing it. Any ideas what I'm missing, or if this is even possible?
Thanks!
Yes, it is possible with a workaround; I don't think what you are trying to achieve is possible using a single configuration. The values you set in Environment variables are substituted as-is and are not executed in bash first.
The workaround is to use multiple configurations:
1. Store the following line in a bash file:
export PYTEST_EXEC_TIME=$(date '+%Y-%m-%d%H:%M:%S')
2. Add a bash configuration which executes this file.
3. Add that configuration to the pytest configuration as a "Before Launch" configuration and use $PYTEST_EXEC_TIME in the additional arguments, as in the sketch below.
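For example, the pytest configuration's Additional Arguments field could then contain something like the following (a hypothetical value; --junitxml is the pytest flag that writes a JUnit XML report):
--junitxml=reports/result_$PYTEST_EXEC_TIME.xml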
Note: Here is a detailed answer showing step by step process of setting up a "Before Launch" configuration.

Zip the contents of a folder in SSIS

I am trying to zip the contents of a folder in SSIS; there are files and folders in the source folder and I need to zip them all individually. I can get the files to zip fine; my problem is the folders.
I have to use 7-Zip to create the zipped packages.
Can anyone point me to a good tutorial? I haven't been able to implement any of the samples that I have found.
Thanks
This is how I have configured it (screenshot of the Execute Process Task omitted).
It's easy to configure, but the trick is in constructing the Arguments. Though the Arguments appear static in the screenshot, the value actually comes from a variable, and that variable is set in the Arguments expression of the Execute Process Task.
I presume you will have this Execute Process Task in a Foreach Loop with a Foreach File Enumerator, with Traverse Subfolders checked.
Once you have this basic setup in place, all you need to do is work on building the arguments to do the zipping the way you want. A good place to find all the command-line arguments is the 7-Zip documentation.
Finally, the only issue I ran into was not providing a working directory in the command-line arguments for 7-Zip. The package used to run fine in my dev environment but failed when run on the server via a SQL job. This was because 7-Zip didn't have access to the 'Temp' folder on the SQL Server, which it uses by default as the working directory. I got round this problem by specifying the working directory at the end of the command-line arguments, using the -w switch:
For example:
a -t7z DestinationFile.7z SourceFile -wS:YourTempDirectoryToWhichTheSQLAgentHasRights
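A fuller, hypothetical argument string for the Execute Process Task might therefore look like this (all paths are placeholders; quote any that contain spaces):
a -t7z "E:\Archive\MyFolder.7z" "E:\Source\MyFolder" -wE:\Temp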

Cython -a flag (to generate yellow-shaded HTML) without command line

When you run from the command line
$ cython -a mycode.pyx
you get a really nice HTML "annotation" file with yellow shading to indicate slow Python operations vs fast C operations. You also get this same HTML file as a link every time you compile Cython code in Sage. My questions are: (1) Can I get this HTML file if I'm compiling using distutils? (2) Can I get this HTML file if I'm compiling using pyximport? Thanks!!
Thanks to larsmans's comment and the Cython email list, I now have many satisfying options to generate the "annotate" HTML file without leaving IPython:
(1) Use subprocess...
import subprocess
subprocess.call(["cython","-a","myfilename.pyx"])
(2) Turn on the global annotate flag in Cython myself, before compiling:
import Cython.Compiler.Options
Cython.Compiler.Options.annotate = True
(3) Pass annotate=True into cythonize() [when using the distutils compilation method].
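For option (3), a minimal setup.py sketch (assuming the file is named mycode.pyx):
from setuptools import setup
from Cython.Build import cythonize

# annotate=True makes cythonize write mycode.html next to the generated C file
setup(ext_modules=cythonize("mycode.pyx", annotate=True))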
It seems that pyximport does not have its own direct option for turning on annotation.

DTS_E_FLATFILESOURCEADAPTERSTATIC_CANTCONVERTVALUE

I'm running an SSIS package that I made a few months ago, and ran into an odd error.
The package loads data from a tab-delimited file that's exported from an Excel worksheet. Errors are redirected to an error table, which is later serialized to an output file.
With my most recent attempts to load these files, every row is rejected with the DTS_E_FLATFILESOURCEADAPTERSTATIC_CANTCONVERTVALUE error code and a column number that doesn't exist in the input file (there are 13 columns in the input; the error column is 187).
I figure that there's something not being exported properly, but I'm at a loss to explain what it is. I've looked at the file, and it has the proper encoding. In addition, the SSIS package builder generates the preview correctly.
When have you run into this error before, and what solutions/workarounds did you find that worked?
Some details about the execution environment: package run via dtexec, 2 parameters set on the command line. One is the working folder for the package, the other is the file name. The data is loaded into a SQL Server 2005 database.
Thanks for the help :)
Zach,
Good question. When I first started with SSIS this would happen to me all the time, and there is hardly any information on why it happens. What I found is that if you delete the Flat File/Excel import component and the actual file from the data sources list at the bottom, and then re-add them, you can often correct this issue.
As I mentioned before, I am not entirely sure what causes the preview to get out of whack with what is actually happening, but I suspect it may have something to do with the ID keys assigned to different components (just pure conjecture, though).
Figured out what the error was: I was passing parameters on the command line improperly.
I was running DTEXEC as follows:
> dtexec /f "C:\Path\to\Package.dtsx"
/set \package.Variables[User::InputFileName].Value;"filename"
/set \package.Variables[User::WorkingDir].Value;"C:\working\dir\"
Either DOS or SSIS was parsing the User::WorkingDir variable incorrectly: it interpreted the backslashes within the path as escape sequences, not as path components. Updating the dtexec line to escape each backslash fixed the issue:
> dtexec /f "C:\Path\to\Package.dtsx"
/set \package.Variables[User::InputFileName].Value;"filename"
/set \package.Variables[User::WorkingDir].Value;"C:\\working\\dir\\"
(line breaks added for clarity)
It pains me when I miss the blatantly obvious ;)