Hi, I am using a custom app to generate drawings and export DWG/PNG/PDF. A local test on AutoCAD 2016 works fine, but after uploading the activity, the server can't export the PDF.
db.SaveAs(dwgOut, DwgVersion.Current);
ed.Command("_pngout", pngOut, "");
ed.Command("_tilemode", "0");
ed.Command("_-export", "_pdf", "C", "N", pdfOut);`
it is on core engine v21.
AIO report:
[03/12/2017 09:38:38] Command: _Customtest
[03/12/2017 09:38:38] Specify parameter file: params.json
[03/12/2017 09:38:38] Specify output folder: outputs
[03/12/2017 09:38:38] Regenerating layout.
[03/12/2017 09:38:38] Regenerating layout.
[03/12/2017 09:38:38] Regenerating model - caching viewports.
[03/12/2017 09:38:38] _pngout Enter file name <C:\Aces\Jobs\59ac1762d7c84db5a109ad8278685e7f\CustomtestTemplate.png>: outputs\test.png Select objects or <all objects and viewports>: _tilemode
[03/12/2017 09:38:38] Enter new value for TILEMODE <0>: 0 _-export Enter file format [Dwf/dwfX/Pdf] <dwfX>_pdf Enter plot area [Current layout/All layouts]<Current Layout>: C Detailed plot configuration? [Yes/No] <No>: N
[03/12/2017 09:38:38] Layout not foundCoreHeartBeat
[03/12/2017 09:38:38] Enter file name <CustomtestTemplate-NewLayout.pdf>: outputs\test.pdf
[03/12/2017 09:38:38] Command: _.quit
This 'Layout not foundCoreHeartBeat' is odd... Any tips, please? Thank you!
Edited:
I also tried
ed.Command("_-export", "_pdf", "_C", "N", pdfOut);`
No luck.
I then changed the code to plot the PDF from model space:
ed.Command("_pngout", pngOut, "");
ed.Command("_tilemode", "1");
ed.Command("_.ZOOM", "_E");
ed.Command("_-export", "_pdf", "d", "n", pdfOut);
The PNG file does reflect the layout, but we lose the layout in the PDF.
We thought the problem might come from the fact that we create a new layout in paper space:
var id = LayoutManager.Current.CreateAndMakeLayoutCurrent("testLayout");
So we commented out the layout-creation part, fine-tuned the export code, and tested it locally with success:
ed.Command("_pngout", pngOut, "");
ed.Command("_tilemode", "0");
ed.Command("_-export", "_pdf", "c", "n", pdfOut);
Again, no pdf file. But this time the report changed:
[03/12/2017 23:47:13] Command: _Customtest
[03/12/2017 23:47:13] Specify parameter file: params.json
[03/12/2017 23:47:13] Specify output folder: outputs
[03/12/2017 23:47:13] _pngout Enter file name <C:\Aces\Jobs\b7b5c7d991f047df90fcbe33e80d0a86\CustomtestTemplate.png>: outputs\test.png Select objects or <all objects and viewports>: _tilemode
[03/12/2017 23:47:13] Enter new value for TILEMODE <1>: 0 Regenerating layout.
[03/12/2017 23:47:13] Regenerating layout.
[03/12/2017 23:47:13] Regenerating model - caching viewports.
[03/12/2017 23:47:13] _-export Enter file format [Dwf/dwfX/Pdf] <dwfX>_pdf Enter plot area [Current layout/All layouts]<Current Layout>: c Detailed plot configuration? [Yes/No] <No>: n
[03/12/2017 23:47:13] There were no plottable sheets in the current operation.Enter file name <CustomtestTemplate-Layout1.pdf>: outputs\test.pdf
[03/12/2017 23:47:13] Command: _.quit
Now it is 'There were no plottable sheets in the current operation'.
Have you tried changing "C" to "_C"?
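For reference, here is a minimal sketch of the same export sequence with every keyword passed in its language-independent (underscore-prefixed) form. Variable names are those from the question, and this is untested on the Design Automation engine:
// underscore-prefixed keywords are locale-independent on the core console
ed.Command("_pngout", pngOut, "");
ed.Command("_tilemode", "0");
ed.Command("_-export", "_pdf", "_C", "_N", pdfOut);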
I am extracting prosody features from an audio file using the Windows version of openSMILE. It runs successfully and an output CSV is generated, but when I open the CSV, it shows some rows that are not readable. I used this command to extract the prosody features:
SMILEXtract -C \opensmile-3.0-win-x64\config\prosody\prosodyShs.conf -I audio_sample_01.wav -O prosody_sample1.csv
And the output of the CSV looks like this (unreadable binary content):
I even tried the sample wave file in the example audio folder of the openSMILE directory, and the output is the same (not readable). Can someone help me identify where the problem actually is, and how I can fix it?
You need to enable the csvSink component in the configuration file to make it work. The file config\prosody\prosodyShs.conf that you are using does not have this component defined and always writes binary output.
You can verify that it is the standard binary output in this way: omit the -O parameter from your command so it becomes SMILEXtract -C \opensmile-3.0-win-x64\config\prosody\prosodyShs.conf -I audio_sample_01.wav and execute it. You will get an output.htk file which is exactly the same as the prosody_sample1.csv.
To output a CSV, you can take a look at the example configuration in opensmile-3.0-win-x64\config\demo\demo1_energy.conf, where a csvSink component is defined.
You can find more information in the official documentation:
Get started page of the openSMILE documentation
The section on configuration files
Documentation for cCsvSink
This is how I solved the issue. First, I added the csvSink component to the list of component instances:
instance[csvSink].type = cCsvSink
Next I added the configuration parameters for this instance.
[csvSink:cCsvSink]
reader.dmLevel = energy
filename = \cm[outputfile(O){output.csv}:file name of the output CSV file]
delimChar = ;
append = 0
timestamp = 1
number = 1
printHeader = 1
\{../shared/standard_data_output_lldonly.conf.inc}
Now, if you run this file, it will throw errors because reader.dmLevel = energy depends on waveframes. So the final changes are:
[energy:cEnergy]
reader.dmLevel = waveframes
writer.dmLevel = energy
[int:cIntensity]
reader.dmLevel = waveframes
[framer:cFramer]
reader.dmLevel=wave
writer.dmLevel=waveframes
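With these changes in place, the original command should produce a readable CSV. For example (assuming the modified configuration was saved under a hypothetical name such as prosodyShs_csv.conf):
SMILEXtract -C \opensmile-3.0-win-x64\config\prosody\prosodyShs_csv.conf -I audio_sample_01.wav -O prosody_sample1.csv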
Further reference on how to write openSMILE configuration files can be found here.
I am running a KNIME chunk loop to write the output of the same procedure to a different CSV file on each iteration.
The part with the Python script up to the CSV write works when I run it without the loop, but somehow it does not write to the customized folder path once the loop is added.
The goal is to write a new CSV file for every loop iteration (the output is a list).
The nodes are:
Chunk Loop: Rows per chunk: 51
Create file name:
Options: Selected directory: C:/....
Flow Variables: FileName: currentIteration
CSV Writer: Flow Variables: filename: CurrentIteration
How can I change the folder path of the file? It always saves it to the default folder.
Here is an example workflow (apologies for the weird blurry Windows 10 screenshots):
Create File Name config:
CSV Writer config:
You may need to run each node individually in order to create the flow variable before you can select it in the following node.
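As a rough sketch of the idea only (not the exact configuration from the screenshots): with a String Manipulation (Variable) node in place of Create File Name, the full path for the CSV Writer's filename flow variable could be built from the loop's currentIteration variable with an expression like
join("C:/output/", "chunk_", string($${IcurrentIteration}$$), ".csv")
where the directory and the "chunk_" prefix are placeholders you would replace with your own.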
The qa() function of the ShortRead Bioconductor library generates quality statistics from FASTQ files. The report() function then prepares a report of the various measures in HTML format. A few other questions on this site have recommended using the display_html() function of IRdisplay to show HTML in Jupyter notebooks using R (IRkernel). However, it only throws errors for me when trying to display an HTML report generated by the report() function of ShortRead.
library("ShortRead")
sample_dir <- system.file(package="ShortRead", "extdata", "E-MTAB-1147") # A sample fastq file
qa_object <- qa(sample_dir, "*fastq.gz$")
qa_report <- report(qa_object, dest="test") # Makes a "test" directory containing 'image/', 'index.html' and 'QA.css'
library("IRdisplay")
display_html(file = "test/index.html")
Gives me:
Error in read(file, size): unused argument (size)
Traceback:
1. display_html(file = "test/index.html")
2. display_raw("text/html", FALSE, data, file, isolate_full_html(list(`text/html` = data)))
3. prepare_content(isbinary, data, file)
4. read_all(file, isbinary)
Is there another way to display this report in jupyter with R?
It looks like there's a bug in the code. The quick fix is to clone the GitHub repo and edit ./IRdisplay/R/utils.r: on line 38, change the line from:
read(file,size)
to
read(size)
Save the file, switch to the parent directory, and create a new tarball, e.g.
tar -zcf IRdisplay.tgz IRdisplay/
and then re-install your new version, e.g. after re-starting R, type:
install.packages( "IRdisplay.tgz", repo=NULL )
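Alternatively, as an untested sketch that simply avoids the buggy file-reading path, you can read the HTML yourself and pass it to display_html() as data instead of a file name:
library("IRdisplay")
# read the generated report into a single string and embed it inline
display_html(paste(readLines("test/index.html"), collapse = "\n"))
Note that relative links inside the report (to image/ and QA.css) may not resolve when the HTML is embedded this way.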
I am relatively new to machine learning/python/ubuntu.
I have a set of images in .jpg format, where half contain a feature I want Caffe to learn and half don't. I'm having trouble finding a way to convert them to the required lmdb format.
I have the necessary text input files.
My question is can anyone provide a step by step guide on how to use convert_imageset.cpp in the ubuntu terminal?
Thanks
A quick guide to Caffe's convert_imageset
Build
The first thing you must do is build Caffe and Caffe's tools (convert_imageset is one of these tools).
After installing Caffe and making it, make sure you run make tools as well.
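For example, with the Makefile-based build this typically amounts to (a sketch assuming $CAFFE_ROOT points at your Caffe checkout):
cd $CAFFE_ROOT
# build the core library, then the command-line tools under build/tools
make all
make tools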
Verify that a binary file convert_imageset is created in $CAFFE_ROOT/build/tools.
Prepare your data
Images: put all images in a folder (I'll call it here /path/to/jpegs/).
Labels: create a text file (e.g., /path/to/labels/train.txt) with a line per input image. For example:
img_0000.jpeg 1
img_0001.jpeg 0
img_0002.jpeg 0
In this example the first image is labeled 1 while the other two are labeled 0.
Convert the dataset
Run the binary in shell
~$ GLOG_logtostderr=1 $CAFFE_ROOT/build/tools/convert_imageset \
--resize_height=200 --resize_width=200 --shuffle \
/path/to/jpegs/ \
/path/to/labels/train.txt \
/path/to/lmdb/train_lmdb
Command line explained:
GLOG_logtostderr=1, set before calling convert_imageset, tells the logging mechanism to redirect log messages to stderr.
--resize_height and --resize_width resize all input images to the same size, 200x200.
--shuffle randomly changes the order of the images and does not preserve the order in the /path/to/labels/train.txt file.
The positional arguments that follow are the path to the images folder, the labels text file, and the output name. Note that the output name should not exist prior to calling convert_imageset, otherwise you'll get a scary error message.
Other flags that might be useful:
--backend - allows you to choose between an lmdb dataset or levelDB.
--gray - convert all images to gray scale.
--encoded and --encoded_type - keep image data in encoded (jpg/png) compressed form in the database.
--help - shows some help, see all relevant flags under Flags from tools/convert_imageset.cpp
You can check out $CAFFE_ROOT/examples/imagenet/convert_imagenet.sh for an example of how to use convert_imageset.
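To sanity-check the resulting database, a short Python sketch like the one below can read the first record back (this assumes pycaffe and the lmdb Python package are installed; the path matches the command above):
import lmdb
from caffe.proto import caffe_pb2

# open the freshly created database read-only and decode the first Datum
env = lmdb.open('/path/to/lmdb/train_lmdb', readonly=True)
with env.begin() as txn:
    key, value = next(txn.cursor().iternext())
    datum = caffe_pb2.Datum()
    datum.ParseFromString(value)
    # image shape and the integer label stored for this entry
    print(key, datum.channels, datum.height, datum.width, datum.label)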
Using a reference genome
Manual: http://creskolab.uoregon.edu/stacks/manual/#sfiles
I ran the following command line:
for FILE in $(ls /home/llcoutinho/Fabio/samples/*.bam); do ref_map.pl -T 7 -b 1 -B chicken_radtags_F2 -D "Reference aligned genetic map RAD-Tag Samples" -o /home/llcoutinho/Fabio/samples/ -s $FILE; done
All outputs are present and accessible from the command line:
batch_1.catalog.alleles.tsv,
batch_1.catalog.snps.tsv,
batch_1.catalog.tags.tsv,
.matches.tsv(sample by sample),
.alleles.tsv(sample by sample),
.snps.tsv(sample by sample),
.tags.tsv (sample by sample),
batch_1.haplotypes,
batch_1.hapstats,
batch_1.markers,
batch_1.phistats,
batch_1.sumstats,
batch_1.sumstats_summary,
batch_1.populations,
ref_map.log
but when I access the web:
/localhost/stacks/index.php?db=chicken_radtags_
All sample outputs appear, but without any information: they are totally empty, with no "unique stacks" and no "SNPs found".