How to programmatically run RSpec tests and capture JSON output?

I am attempting to programmatically run a series of RSpec tests and capture the output of each.
What I have so far is this:
require 'rspec/core'
require 'stringio'

RSpec.configure do |config|
  config.formatter = :json
end

out = StringIO.new
err = StringIO.new

RSpec::Core::Runner.run([file], err, out)
This does run the given file correctly, and in the terminal that I run this script from I see the JSON output that I expect... but since I'm giving the runner a StringIO stream, I would expect it to write the JSON output to that instead of stdout. In this case, the out stream is always empty.
Anyone have any idea what I'm doing wrong?

To run a series of RSpec tests I use Rake, and to record their output I use an RSpec::Core::RakeTask, whose rspec_opts option lets you pass a format and an output location for RSpec's output.
Example:
Rakefile
require 'rake'
require 'rspec/core/rake_task'

desc "Run tests, recording their output"
RSpec::Core::RakeTask.new(:spec) do |task|
  task.pattern = 'spec/*_spec.rb'
  task.rspec_opts = '--format documentation' +
                    ' --format json --out logs/output.json'
end
I made a basic GitHub project demonstrating this at https://github.com/SamuelGarratt/rake_demo
The above Rakefile assumes your RSpec tests are in a folder called 'spec' and their filenames end in '_spec.rb'. Type 'rake spec' on the command line to run all the RSpec tests and have their output recorded in logs/output.json.
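If you want to stay fully programmatic rather than going through Rake, one thing worth trying (a sketch, not verified against every RSpec version) is to select the formatter through the args array instead of RSpec.configure, so the runner wires the formatter to the streams you pass in:

require 'rspec/core'
require 'stringio'

out = StringIO.new
err = StringIO.new

# Passing the formatter as a CLI-style option lets the runner bind it
# to the out stream given here, instead of the default $stdout
RSpec::Core::Runner.run(['--format', 'json', file], err, out)

puts out.string  # the captured JSON report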

Related

Fastlane parameters passed from the CLI are trimmed

I am trying to pass the pull request title as a parameter to a lane. For example, I run this command:
fastlane android distribute release_notes:$PR_TITLE
And I can see from the logs that the command is executed correctly with the complete title
[16:37:52]: $ bundle exec fastlane android distribute release_notes:ref: copy the services module inside app module
but when I print the passed argument, I find it trimmed:
desc "distribute new build for testers, set internal to true to pass it for internal testrs"
lane :distribute do |options|
print "\n"
print "release_notes"
print options[:release_notes]
which prints release_notes ref:, trimmed after the : and it even gets trimmed after newlines in a strange way
As you can see from your release_notes:string command, fastlane parses colons as a key/value separator, so it will break if you pass in a string which includes a colon.
A more common pattern is to read the release notes from an environment variable within your lane. So instead of using options at all, just do something like:
notes = ENV['PR_TITLE']
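For example, a minimal sketch of the lane (assuming the CI job exports the title as PR_TITLE, as in the command above):

desc "distribute new build for testers"
lane :distribute do |options|
  # Read the notes from the environment so colons and newlines survive intact
  notes = ENV['PR_TITLE']
  print notes
end

The CI step then just exports PR_TITLE and calls bundle exec fastlane android distribute without a release_notes argument.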

openSMILE: unreadable CSV file while extracting prosody features from a WAV file

I am extracting prosody features from an audio file using the Windows version of openSMILE. It runs successfully and an output CSV is generated. But when I open the CSV, it shows rows that are not readable. I used this command to extract the prosody features:
SMILEXtract -C \opensmile-3.0-win-x64\config\prosody\prosodyShs.conf -I audio_sample_01.wav -O prosody_sample1.csv
And the output CSV is unreadable binary content (screenshot omitted).
I even tried using the sample WAV file in the example audio folder of the openSMILE directory, and the output is the same (not readable). Can someone help me identify where the problem actually is, and how I can fix it?
You need to enable the csvSink component in the configuration file to make it work. The file config\prosody\prosodyShs.conf that you are using does not have this component defined and always writes binary output.
You can verify that it is the standard binary output in this way: omit the -O parameter from your command so it becomes SMILEXtract -C \opensmile-3.0-win-x64\config\prosody\prosodyShs.conf -I audio_sample_01.wav and execute it. You will get an output.htk file which is exactly the same as the prosody_sample1.csv.
How do you output CSV? You can take a look at the example configuration in opensmile-3.0-win-x64\config\demo\demo1_energy.conf, where a csvSink component is defined.
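For example, something like this should produce a readable CSV (a sketch; paths and file names depend on your installation):

SMILEXtract -C \opensmile-3.0-win-x64\config\demo\demo1_energy.conf -I audio_sample_01.wav -O energy.csv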
You can find more information in the official documentation:
Get started page of the openSMILE documentation
The section on configuration files
Documentation for cCsvSink
This is how I solved the issue. First I added the csvSink component to the list of component instances:
instance[csvSink].type = cCsvSink
Next I added the configuration parameters for this instance.
[csvSink:cCsvSink]
reader.dmLevel = energy
filename = \cm[outputfile(O){output.csv}:file name of the output CSV file]
delimChar = ;
append = 0
timestamp = 1
number = 1
printHeader = 1
\{../shared/standard_data_output_lldonly.conf.inc}
Now if you run this file it will throw errors, because reader.dmLevel = energy depends on waveframes. So the final changes would be:
[energy:cEnergy]
reader.dmLevel = waveframes
writer.dmLevel = energy

[int:cIntensity]
reader.dmLevel = waveframes

[framer:cFramer]
reader.dmLevel = wave
writer.dmLevel = waveframes
Further reference on how to write openSMILE configuration files can be found in the documentation linked above.

Complex stdout check in Ansible

I run a job on a remote server with Ansible. The job writes to stdout, where errors sometimes show up. The error text is in the form of
#ERROR FMM0129E The following error was returned by the vSphere(TM) API: 'Cannot complete login due to an incorrect user name or password.'.
The thing is that some of these errors can safely be ignored, and only those that are not in my false-positive list should raise a fail.
My question is, can this be done in a pure Ansible way?
The only thing that comes to mind is a simple failed_when check which, in this case, falls short. I am thinking that this "complex" output checking should be done outside of Ansible, by invoking a Python / shell / etc. script to help.
If you are remotely executing a shell command anyway, then there's no reason why you couldn't wrap that in a shell script that returns a non-zero status code for the things you care about, and then simply execute that via the script module.
example.sh
#!/bin/bash
# Demo: fail roughly one time in ten
randomInt=$[ 1 + $[ RANDOM % 10 ]]
echo $randomInt
if [ $randomInt == 1 ]; then
    exit 1
else
    exit 0
fi
And then use it like this in your playbook:
- name: run example.sh
  script: example.sh
Ansible will automatically treat any non-zero return code as the task failing.
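A sketch closer to the actual use case might look like this (hypothetical: real_job.sh and the FMM0131E pattern are placeholders for your command and your false-positive list):

#!/bin/bash
# Run the real job and echo its output so Ansible still captures it
output=$(/path/to/real_job.sh 2>&1)
echo "$output"

# Error codes that are known false positives
false_positives='FMM0129E|FMM0131E'

# Fail only if there is an #ERROR line that does not match the list
if echo "$output" | grep '#ERROR' | grep -Evq "$false_positives"; then
    exit 1
fi
exit 0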
Instead of failed_when you could use ignore_errors: true, which would get you into the position of passing the failing task and forwarding its stdout to another task. But I would not recommend this, since in my opinion a task should never ever report a failed state by intent. But if you feel this is an option for you, there is even a way to reset the error counter so the Ansible stats at the end are correct.
- some: task
  register: some_result
  ignore_errors: true

- name: Reset errors after intentional fail
  meta: clear_host_errors
  when: some_result | failed

- another: task
  check: "{{ some_result.stdout }}"
  when: some_result | failed
The last task then would check your stdout in a custom script or whatever you have and should report a failed state itself (return code != 0).
As far as I know, the clear_host_errors feature is as yet undocumented and the commit is about a month old, so I guess it will only be available in Ansible 2.0.1.
Another idea would be to wrap your task inside a script which checks the output, or pipe the output to that script. That obviously will only work if you run a shell command and not any other Ansible module.
Other than those two options I don't think there is anything else available.

Printing the output of a shell command from Python subprocess

I am running a shell script which emits lots of lines while executing; they are just status output rather than the actual output.
I want them to be displayed in a JTextArea. I am working in Jython. The relevant piece of my code looks like:
self.console = JTextArea(20, 80)
cmd = "/Users/name/galaxy-dist/run.sh"
p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True)
self.console.append(p.stdout.read())
This waits until the command finishes and then prints the output. But I want to show the output in real time, to mimic the console. Anybody have an idea?
You're making things more complicated than they need to be. The Popen docs state the following about the stream arguments:
With the default settings of None, no redirection will occur; the child’s file handles will be inherited from the parent. [my emphasis]
Therefore, if you want the subprocess' output to go to your stdout, simply leave those arguments blank:
subprocess.Popen(cmd, shell=True)
In fact, you aren't using any of the more advanced features of the Popen constructor, and this particular example doesn't need any parsing by the shell, so you can simplify it further with the subprocess.call() function:
subprocess.call(cmd)
If you still want the return code, simply set a variable equal to this call:
return_code = subprocess.call(cmd)
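If you do need the text inside your own widget rather than on the console, a common pattern (a sketch; assumes the same class as in the question, where self.console is the JTextArea) is to read the pipe line by line while the process runs:

p = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                     stderr=subprocess.STDOUT, shell=True)
# readline returns as soon as the child prints a line,
# so the text area updates while the script is still running
for line in iter(p.stdout.readline, ''):
    self.console.append(line)
p.stdout.close()
p.wait()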

My function cannot create a text file

I am a beginner in Python and I am reading Wrox's "Beginning Python Using Python 2.6 and Python 3.1"... There is one certain example in chapter 8, about using files and directories, that has troubled me a lot... The following function is supposed to create a text file (if it doesn't exist) and write to it:
def write_to_file():
    f=open("C:/Python33/test.txt","w")
    f.write("TEST TEST TEST TEST")
    f.close()
When I run the function, nothing happens: no text file is created and no error message is returned.
When I run the code in IDLE, command by command, it works perfectly.
What is wrong with the function?
Python's picky about indentation, from what I remember of it:
def write_to_file():
    f = open("C:/Python33/test.txt", "w")
    f.write("TEST TEST TEST TEST")
    f.close()

# On top of that, you need to actually run the function.
write_to_file()
I think this is because of indentation; do it like this:
def write_to_file():
    f=open("C:/Python33/test.txt","w")
    f.write("TEST TEST TEST TEST")
    f.close()
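As a side note, once the indentation is fixed, the more idiomatic way to write this is a with block, which closes the file even if the write raises an error:

def write_to_file():
    # with closes the file automatically when the block exits
    with open("C:/Python33/test.txt", "w") as f:
        f.write("TEST TEST TEST TEST")

write_to_file()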