Why does it show an error while running the ConnectedThresholdImageFilter example? - itk

I am trying to run the ConnectedThresholdImageFilter example in ITK, described here: https://itk.org/Doxygen45/html/Segmentation_2ConnectedThresholdImageFilter_8cxx-example.html
But it shows the following error.
itk::ImageFileWriterException (0x24cb740)
Location: "void itk::ImageFileWriter::Write() [with
TInputImage = itk::Image]" File:
/usr/local/include/ITK-4.13/itkImageFileWriter.hxx Line: 151
Description: Could not create IO object for writing file output
Tried to create one of the following:
BMPImageIO
BioRadImageIO
Bruker2dseqImageIO
GDCMImageIO
GE4ImageIO
GE5ImageIO
GiplImageIO
HDF5ImageIO
JPEGImageIO
LSMImageIO
MINCImageIO
MRCImageIO
MetaImageIO
NiftiImageIO
NrrdImageIO
PNGImageIO
StimulateImageIO
TIFFImageIO
VTKImageIO
You probably failed to set a file suffix, or set the suffix to an unsupported type
I didn't change the code at all, and I am trying to use a DICOM image as input.

It is either:
You set the output file name with an unsupported extension.
There is something wrong with how you compiled/linked ITK, or with how you are linking your example to ITK.
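For the first case: the writer picks its IO class from the output file's suffix, so an output argument with no extension (or an unrecognized one) produces exactly this exception. A minimal sketch of that behavior, separate from the example program (the 16x16 blank image is just a placeholder):

#include "itkImage.h"
#include "itkImageFileWriter.h"

int main()
{
  typedef itk::Image<unsigned char, 2> ImageType;

  ImageType::Pointer image = ImageType::New();
  ImageType::RegionType region;
  ImageType::SizeType size = {{16, 16}};
  region.SetSize(size);
  image->SetRegions(region);
  image->Allocate(true); // zero-initialize the pixel buffer

  itk::ImageFileWriter<ImageType>::Pointer writer =
      itk::ImageFileWriter<ImageType>::New();
  // The suffix is what selects the IO factory: "output.png" finds PNGImageIO,
  // while a bare "output" (no extension) throws the exception quoted above.
  writer->SetFileName("output.png");
  writer->SetInput(image);
  writer->Update();
  return 0;
}

For the second case, the usual CMake pattern (find_package(ITK REQUIRED), include(${ITK_USE_FILE}), then linking ${ITK_LIBRARIES}) is also what registers the IO factories; linking only a subset of the ITK libraries can leave the factory list empty and raise the same exception even with a valid suffix.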

Related

Snowflake throwing error (Error parsing JSON: misplaced { )

I am trying to load JSON files into Snowflake using the COPY command. I have two files of the same structure; however, one loaded without issue and the other throws the error
"Error parsing JSON: misplaced { "
The simple example select parse_json($1) record from values ('{{'); also errors with "Error parsing JSON: misplaced {, pos 2", so your second file probably does in fact contain invalid JSON.
Try running the statement in validation mode (e.g. copy into mytable validation_mode = 'RETURN_ERRORS';), which will return a table containing useful troubleshooting info such as the line number and character position of the error(s).
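For reference, a fuller COPY statement in validation mode might look something like this (the stage and file names here are placeholders for whatever your original COPY used):

COPY INTO mytable
  FROM @mystage/file2.json
  FILE_FORMAT = (TYPE = 'JSON')
  VALIDATION_MODE = 'RETURN_ERRORS';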
The docs cover this here: https://docs.snowflake.com/en/sql-reference/sql/copy-into-table.html#validating-staged-files

error finding and uploading a file in octave

I tried converting my .csv file to .dat format and tried to load the file into Octave. It throws an error:
unable to find file filename
I also tried to load the file in .csv format using the syntax
x = csvread(filename)
and it throws the error:
'filename' undefined near line 1 column 13.
I also tried opening the file in the editor and loading it from there, and now it shows me
warning: load: 'filepath' found by searching load path
error: load: unable to determine file format of 'Salary_Data.dat'.
How can I load my data?
>> load Salary_Data.dat
error: load: unable to find file Salary_Data.dat
>> Salary_Data
error: 'Salary_Data' undefined near line 1 column 1
>> Salary_Data
error: 'Salary_Data' undefined near line 1 column 1
>> Salary_Data
error: 'Salary_Data' undefined near line 1 column 1
>> x = csvread(Salary_Data)
error: 'Salary_Data' undefined near line 1 column 13
>> x = csvread(Salary_Data.csv)
error: 'Salary_Data' undefined near line 1 column 13
>> load Salary_Data.dat
warning: load: 'C:/Users/vaith/Desktop\Salary_Data.dat' found by searching load path
error: load: unable to determine file format of 'Salary_Data.dat'
>> load Salary_Data.csv
warning: load: 'C:/Users/vaith/Desktop\Salary_Data.csv' found by searching load path
error: load: unable to determine file format of 'Salary_Data.csv'
Salary_Data.csv contents:
YearsExperience,Salary
1.1,39343.00
1.3,46205.00
1.5,37731.00
2.0,43525.00
2.2,39891.00
2.9,56642.00
3.0,60150.00
3.2,54445.00
3.2,64445.00
3.7,57189.00
3.9,63218.00
4.0,55794.00
4.0,56957.00
4.1,57081.00
4.5,61111.00
4.9,67938.00
5.1,66029.00
5.3,83088.00
5.9,81363.00
6.0,93940.00
6.8,91738.00
7.1,98273.00
7.9,101302.00
8.2,113812.00
8.7,109431.00
9.0,105582.00
9.5,116969.00
9.6,112635.00
10.3,122391.00
10.5,121872.00
Ok, you've stumbled through a whole pile of issues here.
It would help if you didn't give us error messages without the commands that produced them.
The first message means you were telling Octave to open something called filename and it couldn't find anything by that name. Did you define the variable filename? Your second command and its error message suggest you didn't.
Do you know what Octave's working directory is? Is it the same as where the file is located? From the response to your load commands, I'd guess not. The file is located at C:/Users/vaith/Desktop. Octave's working directory is probably somewhere else.
(Try the pwd command and see what it tells you. Use the file browser or the cd command to navigate to the file's location. The help pwd and help cd commands would also provide useful information.)
The load command, used in command form (load file.txt), takes its argument literally whether or not it is written as a string. The function form (load('file.txt') or csvread('file.txt')) requires a string input, hence the quotes around file.txt. So all of your csvread commands thought you were giving them variable names, not filenames.
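In other words, using your own file as the example:

x = csvread('Salary_Data.csv')  % string filename: Octave opens the file
x = csvread(Salary_Data)        % bare name: Octave looks for a variable called
                                % Salary_Data and fails, as in your session above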
Last, the fact that load couldn't read your data isn't overly surprising. Octave is trying to guess what kind of file it is and how to load it. I assume you tried help load to see what the different command options are? You can give it different options to help Octave figure it out. If it actually is a csv file though, and is all numbers not text, then csvread might still be your best option if you use it correctly. help csvread would be good information for you.
It looks from your data like you have a header line that is probably confusing the load command. For data formatted that simply, the csvread command can bring it in. It will replace your header text with zeros.
So, first, navigate to the location of the file:
>> cd C:/Users/vaith/Desktop
then open the file:
>> mydata = csvread('Salary_Data.csv')
mydata =
0.00000 0.00000
1.10000 39343.00000
1.30000 46205.00000
1.50000 37731.00000
2.00000 43525.00000
...
If you plan to reuse the filename, you can assign it to a variable, then open the file:
>> myfile = 'Salary_Data.csv'
myfile = Salary_Data.csv
>> mydata = csvread(myfile)
mydata =
0.00000 0.00000
1.10000 39343.00000
1.30000 46205.00000
1.50000 37731.00000
2.00000 43525.00000
...
Notice how the filename is stored and used as a string with quotation marks, but the variable name is not. Also, csvread converted the non-numeric header data to zeros. The help for csvread and dlmread shows you how to change that to something other than zero, or to skip a certain number of rows. If you want to preserve the header text, you'll have to use some other input function.
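For example, either of these skips the header row instead of importing it as zeros (the offsets are zero-based, so 1, 0 means start at the second row, first column):

mydata = csvread('Salary_Data.csv', 1, 0)
mydata = dlmread('Salary_Data.csv', ',', 1, 0)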

Setting Jenkins build name from package.json version value

I want to include the value of the "version" parameter in package.json as part of the Jenkins build name.
I'm using the Jenkins Build Name Setter plugin - https://wiki.jenkins-ci.org/display/JENKINS/Build+Name+Setter+Plugin
So far I've tried to use PROPFILE syntax in the "Build name macro template" step:
${PROPFILE,file="./mainline/projectDirectory/package.json",property="\"version\""}
This successfully creates a build, but includes the quotes and comma surrounding the value of the version property in package.json, for example:
"0.0.1",
I want just the value inside returned, so it reads
0.0.1
How can I do this? Is there a different plugin that would work better for parsing package.json and getting it into the template, or should I resort to some sort of regex for removing the characters I don't want?
UPDATE:
I tried using token transforms based on reading the Token Macro Plugin documentation, but it's not working:
${PROPFILE%\"\,#\",file="./mainline/projectDirectory/package.json",property="\"version\""}
still just returns "0.0.1",
However, using only one escaped character with only one of # or % does work. No other combination I tried works.
${PROPFILE%\,,file="./mainline/projectDirectory/package.json",property="\"version\""}
which returns "0.0.1" (comma removed)
${PROPFILE#\"%\"\,,file="./mainline/projectDirectory/package.json",property="\"version\""}
which returns "0.0.1", (no characters removed)
UPDATE:
Tried to use the new Jenkins Token Macro plugin's JSON macro with no luck.
Jenkins Build Name Setter set to update the build name with Macro:
${JSON,file="./mainline/pathToFiles/package.json",path="version"}-${P4_CHANGELIST}
Jenkins build logs for this job show:
10:57:55 Evaluated macro: 'Error processing tokens: Error while parsing action 'Text/ZeroOrMore/FirstOf/Token/DelimitedToken/DelimitedToken_Action3' at input position (line 1, pos 74):
10:57:55 ${JSON,file="./mainline/pathToFiles/package.json",path="version"}-334319
10:57:55 ^
10:57:55
10:57:55 java.io.IOException: Unable to serialize org.jenkinsci.plugins.tokenmacro.impl.JsonFileMacro$ReadJSON#2707de37'
I implemented a new macro JSON, which takes a file and a path (which is the key hierarchy in the JSON for the value you want) in token-macro-2.1. You can only use a single transform per macro usage.
Try the token transformations # and % (see the Token Macro Plugin):
${PROPFILE#"%",file="./mainline/projectDirectory/package.json",property="\"version\""}
(This will only help if you are using pipelines. But for what it's worth...)
What works for me is a combination of readJSON from the Pipeline Utility Steps plugin and directly setting currentBuild.displayName, thusly:
script {
    // readJSON from "Pipeline Utility Steps"
    def packageJson = readJSON file: 'package.json'
    def version = packageJson.version
    echo "Setting build version: ${packageJson.version}"
    currentBuild.displayName = env.BUILD_NUMBER + " - " + packageJson.version
    // currentBuild.description = "other cool stuff"
}
Omitting error handling etc obvs.
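For context, a minimal sketch of the same step inside a declarative Jenkinsfile (the stage name and agent choice are arbitrary):

pipeline {
    agent any
    stages {
        stage('Set build name') {
            steps {
                script {
                    // readJSON from "Pipeline Utility Steps"
                    def packageJson = readJSON file: 'package.json'
                    currentBuild.displayName = "${env.BUILD_NUMBER} - ${packageJson.version}"
                }
            }
        }
    }
}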

Rmarkdown error: Invalid nesting of html_preserve directives

When I try to knit a bunch of chunks in RStudio I get the following error:
|.................................................................| 100%
ordinary text without R code
output file: relatorio1.knit.md
Error in extract(input_str) : Invalid nesting of html_preserve directives
Calls: <Anonymous> ... <Anonymous> -> base -> extract_preserve_chunks -> extract
I really could not identify what is going on, since the debugger tells me nothing and the code runs to 100%, but it simply does not generate the HTML file as it was supposed to.
I see that this error comes from htmltools package code (see https://github.com/rstudio/htmltools/blob/master/R/tags.R), which says:
# Sanity check.
if (any(preserve_level < 0) || tail(preserve_level, 1) != 0) {
  stop("Invalid nesting of html_preserve directives")
}
Unfortunately I cannot provide the whole Rmd file since it is work related, but generic comments on this issue are welcome.
Thanks.
Edit: I tried to circumvent the problem by isolating the datatables (package DT) that were not working. I am using data.table's fread function, but I also tried readr and base R. I also tried to load the data directly from source (the source file generates the data). When I try to knit, I still get the above error. I never had any problem with these functions before.
I tried the following code:
---
title: "Title"
output: flexdashboard::flex_dashboard
---
```{r setup}
require(DT)
require(flexdashboard)
require(htmltools)
require(htmlwidgets)
require(readr)
require(data.table)
source("sourcefile.R")
a <- fread("perfectlynormaldata.txt")
b <- a[,1:10]
```
Flexdashboard Storyboard {.storyboard}
=========================================
### Text Text
```{r datatable not running}
datatable(b)
```
Edit 2: I could render the datatable after limiting the length of a string variable with substr. I don't know why that made a difference.
Final Edit: I definitively solved the issue by removing certain characters that appear in the problematic variable as "\032".
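In case it helps anyone hitting the same thing: one way to do that cleanup is to strip control characters from the offending column before passing the table to datatable(). A minimal sketch (problem_var is a placeholder for the actual column name):

# Remove non-printable control characters such as "\032"
b$problem_var <- gsub("[[:cntrl:]]", "", b$problem_var)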

Trouble following Encrypted Big-Query tutorial document

I wanted to try out the encrypted BigQuery client for Google BigQuery and I've been having some trouble.
I'm following the instructions outlined in this PDF:
https://docs.google.com/file/d/0B-WB8hYCrhZ6cmxfWFpBci1lOVE/edit
I get to the point where I'm running this command:
ebq load --master_key_filename="key_file" testdataset.cars cars.csv cars.schema
And I'm getting an error string which ends with:
raise ValueError("No JSON object could be decoded")
I've tried a few different formats for my .csv and .schema files but none have worked. Here are my latest versions.
cars.schema:
[{"name": "Year", "type": "integer", "mode": "required", "encrypt": "none"}
{"name": "Make", "type": "string", "mode": "required", "encrypt": "pseudonym"}
{"name": "Model", "type": "string", "mode": "required", "encrypt": "probabilistic_searchwords"}
{"name": "Description", "type": "string", "mode": "nullable", "encrypt": "searchwords"}
{"name": "Website", "type": "string", "mode": "nullable", "encrypt": "searchwords","searchwords_separator": "/"}
{"name": "Price", "type": "float", "mode": "required", "encrypt": "probabilistic"}
{"name": "Invoice_Price", "type": "integer", "mode": "required", "encrypt": "homomorphic"}
{"name": "Holdback_Percentage", "type": "float", "mode": "required", "encrypt":"homomorphic"}]
cars.csv:
1997,Ford,E350, "ac\xc4a\x87, abs, moon","www.ford.com",3000.00,2000,1.2
1999,Chevy,"Venture ""Extended Edition""","","www.cheverolet.com",4900.00,3800,2.3
1999,Chevy,"Venture ""Extended Edition, Very Large""","","www.chevrolet.com",5000.00,4300,1.9
1996,Jeep,Grand Cherokee,"MUST SELL! air, moon roof,loaded","www.chrysler.com/jeep/grand­cherokee",4799.00,3950,2.4
I believe the issue may be that you need to move the --master_key_filename argument before the load argument. If that doesn't work, can you send the output of adding --apilog=- as the first argument?
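That is, something along these lines (same paths and names as in your command):

# Global flags go before the load subcommand:
ebq --master_key_filename="key_file" load testdataset.cars cars.csv cars.schema

# If that still fails, log the API traffic to stdout for debugging:
ebq --apilog=- --master_key_filename="key_file" load testdataset.cars cars.csv cars.schema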
Also, there is an example script file of running ebq here:
https://code.google.com/p/bigquery-e2e/source/browse/#git%2Fsamples%2Fch13