gtkwave tcl script for adding specific signals

I have a huge VCD file that I use in combination with gtkwave to observe certain signal behaviors. I have a list of the signals I wish to probe stored in a .txt file. The thing is that inserting the signals manually by hand is a painstakingly long process. So my question here is:
Is there a way, given the .txt file, to compose a .tcl script that filters and adds the designated signals from the list to the waveform viewer?

Well, after scouring the manuals and some gists I found here and there, it seems that there is a load of gtkwave Tcl commands one can use, most of which are listed in the gtkwave manual (Appendix E). So, in a nutshell, all one has to do is write a .tcl script in the following format:
# add_waves.tcl
# note the escaping of the [ and ] brackets so Tcl does not treat them as command substitution;
# also note that Tcl lists are whitespace-separated, so no commas between the elements
set sig_list [list sig_name_a register_name\[32:0\] ...]
gtkwave::addSignalsFromList $sig_list
and then invoke gtkwave as:
gtkwave VCD_file.vcd --script=add_waves.tcl
Furthermore, access to the GUI menu options is available as well via the following syntax in Tcl:
gtkwave::/Edit/<Option> <value>
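To build the signal list directly from the .txt file mentioned in the question instead of hardcoding it, a minimal sketch along these lines could work (it assumes one signal name per line; the file name signals.txt is just a placeholder):
# add_waves.tcl -- sketch: read signal names from a text file and add them
set fh [open "signals.txt" r]
set sig_list [list]
while {[gets $fh line] >= 0} {
    set line [string trim $line]
    if {$line ne ""} {
        lappend sig_list $line
    }
}
close $fh
gtkwave::addSignalsFromList $sig_list
If you also need to filter the list against what is actually present in the dump, gtkwave::getNumFacs and gtkwave::getFacName (also listed in Appendix E) can be used to enumerate the dumped signal names first.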

Related

TCL: Can I extract a procedure from a script file without sourcing it?

I was wondering if there is a way to extract a procedure from another script without sourcing all of it.
The goal in this case is not to create a new file with the procedures separated from the main script file, but to use the procs that are in that script.
I have tried open, read, source and eval.
The implementation of source is almost exactly like open+read+eval (except with the C API). It isn't smart.
In general, getting the contents of a procedure can only be done by using info body after it has been created (plus info args and info default). But often people write code such that the word proc is in the first column of a line and the body is all indented, with the first following non-indented line being the closing }. That won't work in all cases (the main trickiness is when namespace eval is in use) but will in many. Even then, the word proc is probably the first on its line (with indenting) and the close-brace is aligned with it.
I'm one of the few people that really writes code that doesn't always follow this convention. You don't need to worry about me.
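To make the scanning approach concrete, here is a minimal sketch of the "proc in the first column, closing brace in the first column" heuristic described above (the file name and proc name in the usage comment are placeholders); it returns the text of one procedure without evaluating anything else in the file:
# extract_proc.tcl -- sketch of the column-0 convention heuristic
proc extractProc {filename procName} {
    set fh [open $filename r]
    set text [read $fh]
    close $fh
    set result {}
    set capturing 0
    foreach line [split $text \n] {
        if {!$capturing && [regexp "^proc\\s+$procName\\s" $line]} {
            set capturing 1
        }
        if {$capturing} {
            append result $line \n
            # a close-brace in column 0 marks the end of the procedure
            if {[regexp {^\}} $line]} {
                break
            }
        }
    }
    return $result
}

# usage (names are placeholders):
# eval [extractProc "bigscript.tcl" "myHelper"]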

Fortran90 - compiled program creates a blank csv file instead of reading the existing one

In short: I am trying to load a csv file, but the program always overwrites the existing file with an empty new one.
Longer: I am pretty new to Fortran, so bear with me. I am trying to read data from a csv file into a Fortran program. I didn't write the program and it is pretty big, so I can't post the whole thing here. The program consists of a whole bunch of .f90 files and everything is compiled using a makefile. Since I load the gcc module before compiling, I assume that it is compiled using GNU Fortran, because that is part of gcc (I don't know how to check whether that is correct).
The compiler returns an executable in a different directory. When I execute the program in that directory, it apparently overwrites the existing .csv file with a new blank one, so the program only reads "End of File". I don't know why it always creates a new file; how do I stop it from doing so?
As a side note, the csv file I am trying to read simply consists of a single column of floats, e.g.
"0.01, 0.13, 0.041,..." etc.
The code that I inserted into a subroutine of one of the .f90 files is the following:
real*8, dimension(nz) :: Nsq
integer :: i

open(10, file='Nsq.csv')
do i = 1, 20
  read(10, *) Nsq(i)
enddo
close(10)
I have also tried to write a small test program, essentially running the same code as above. That one works just fine and outputs the contents of the csv file without any issues. For that one I use gfortran to compile it.
I have no experience in Fortran at all, so I am completely stumped as to why this happens. I know the chances are slim that you guys can help me with this, since I can't provide the whole source code, but maybe someone has an idea why this occurs. Or maybe you know an alternative way of reading csv files?
Thanks for your time.
The open statement in Fortran, OPEN(connect-spec-list), has a lot of connection specifications which define how an external file should be managed (see the Fortran 2018 standard, section 12.5.6).
When you open a file using the simplest form of the open-statement:
OPEN(unit=unitid,file="filename")
a lot of default assumptions are made, such as ACCESS="SEQUENTIAL", ASYNCHRONOUS="NO", BLANK="NULL", .... The most important ones, however, are ACTION and STATUS, which define the purpose of the file. The action specification states whether you want to use the file for reading, writing or both, while the status essentially defines whether we work on an existing file or not, and what we should do with it (replace it, keep it, ...).
Both of these specifications have compiler-dependent defaults.
In the Intel compiler suite, the defaults are action="readwrite" and status="unknown".
Intel defines status="unknown" as: "Indicates the file may or may not exist. If the file does not exist, a new file is created and its status changes to 'OLD'."
The GNU compiler suite has a different take on this. The default action is determined by a set of rules that depend on the file's accessibility if it exists (+rw, +r-w, -r+w). The behaviour of the default status="unknown" is not documented, but in practice it seems to replace the existing file (see Default Status of "Unknown" in Open).
It is advisable to state your intent explicitly if you know what you want to do with the file:
OPEN(newunit=unitid, file="filename", action="read", status="old")
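As an illustration of that advice, here is a minimal standalone sketch that mirrors the read loop from the question (the newunit handling, the parameter nz = 20 and the iostat check are additions for the example):
program read_nsq
  implicit none
  integer, parameter :: nz = 20
  real*8, dimension(nz) :: Nsq
  integer :: i, ios, unitid

  ! status='old' makes the open fail if the file is missing, instead of
  ! silently creating a new one; action='read' guarantees the file is
  ! never modified
  open(newunit=unitid, file='Nsq.csv', status='old', action='read', iostat=ios)
  if (ios /= 0) stop 'could not open Nsq.csv'

  do i = 1, nz
    read(unitid, *) Nsq(i)
  end do
  close(unitid)
end program read_nsq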

KNIME - Execute an EXE program in a Workflow

I have a KNIME workflow, and in the middle of it I must execute an external program that creates an Excel file.
Is there some node that allows me to achieve this? I don't need to pass any input or output, only execute the program and wait for the Excel file to be generated (I need to use this Excel file in the next nodes).
There are (at least) two “External Tool” nodes which allow running executables on the command line:
External Tool
External Tool (Labs)
In case that should not be enough, you can always go for a Java Snippet node. The java.lang.Runtime class should be your entry point.
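As an illustration only, here is a minimal sketch of that Runtime-based idea, written as a standalone class (inside a Java Snippet node you would keep just the body; the executable path below is a placeholder):
import java.io.IOException;

public class RunExternalTool {
    public static void main(String[] args) throws IOException, InterruptedException {
        // Launch the external program (placeholder path) and wait for it to finish,
        // so the Excel file exists before the downstream nodes run.
        Process process = Runtime.getRuntime()
                .exec(new String[] {"C:\\tools\\create_excel.exe"});
        int exitCode = process.waitFor();
        if (exitCode != 0) {
            throw new IOException("External tool failed with exit code " + exitCode);
        }
    }
}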
You could use the External Tool node. The node requires inputs and outputs, but you can use a Table Creator node for the input:
this creates an empty table.
In the External Tool node you must specify an input file and an output file; depending on your use case this configuration may be meaningless, but it is required for the node to work.
In this case the external app creates a text file with the result of the execution, so that file can be read back in to get the information into KNIME.

SSIS - "switchable" file output for debug?

In an SSIS data-flow task, I'm using a Multicast transform at a key part of the flow which I want to hang a File Output destination off.
This, in itself, is no problem to do. However, I only want output in the file if I enable it; i.e., I'd be using it for debugging the data if the flow fails unexpectedly and it's not immediately obvious from the default log message output why this occurred.
My initial thought was to create a File Output whose output file was obtained from a variable, and by default, the variable would contain 'nul' - i.e., the Windows bit-bucket - which I could override through configuration in the event of needing to dig further.
Unfortunately this isn't working: the File Output complains, saying that "The filename is a device or contains invalid characters". So it looks like I can't use the bit-bucket.
Is anyone aware of a way to make output "switchable"? This would make enabling debug a less risky proposition than editing the package and dropping a File Output in directly.
I suppose I could have a Conditional Split off the Multicast which basically sends output only if a variable is set to some given value, but this seems overly messy. I'll keep poking at other options, but if anyone has any suggestions/solutions, they'd be welcome.
I'd go for the Conditional Split, redirecting rows to the Konesans Trash Destination adapter if your variable isn't set, otherwise sending them to your file.

tiffcp.exe merging a results file with a results file in a loop

I am building a web app that takes several tiff image files and merges them together into one single tiff image file using GNUWin32 tiffcp.exe from command line.
The way I was doing it was to loop through the file list and build a string of file names to merge into one single variable.
strFileList = "c:\folder\folder\folder\aased98-def-wsdeff-434fsdsd-dvv.tif c:\folder\folder\folder\aased98-def-wsdeff-434fsdsd-axs.tif c:\folder\folder\folder\aased98-def-wsdeff-434fsdsd-dxzs.tif"
Then I would just write to the command line:
tiffcp.exe strFileList results.tif
The file names are GUIDs, so the paths are fairly long and I do not have any control to shorten them. So if I have a bunch of these documents (over 20 files or so), the length of the string variable exceeds the limit of the Windows command line and the merge fails.
Since this process is just merging files, my next thought was, instead of writing the file names to a string, to do the merge one file at a time. So on the first pass the loop runs the following type of code:
tiffcp.exe file1.tif results.tif
The result is a perfect 476k tif file. But the next iteration of the loop needs to merge the second file plus the contents of the first "results" tif file. So I do this:
tiffcp.exe results.tif file2.tiff results.tif
The result each time is a blank 1K tiff file?
All the examples I can find for tiffcp.exe use the form file1.tif file2.tif results.tif; none write the results file back onto itself.
Any suggestions on how to do this?
Try the -a switch to tiffcp.exe
I'm doing something similar in Python and inside my file processing loop I'm issuing the command:
tiffcp.exe -a temp.tif output.tif
works fine.
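Applied to the original problem, such a loop might look roughly like this sketch (file names are placeholders, and it assumes tiffcp.exe is on the PATH):
import subprocess

# Placeholder list of the GUID-named source files from the web app.
input_files = ["file1.tif", "file2.tif", "file3.tif"]

# Append each file to results.tif one at a time; the -a switch appends to the
# output instead of overwriting it, and each command line stays short.
for tif in input_files:
    subprocess.run(["tiffcp.exe", "-a", tif, "results.tif"], check=True)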
For an ASP.NET project you may want to try LibTiff.Net (free, open source, BSD license). That port of the libtiff library contains the tiffcp utility with source code. You may try to use it in your code.
Disclaimer: I am one of the maintainers of the library.
I believe your problem is caused by the use of results.tif as both input and output. If you increment the file name (i.e. results1.tif, results2.tif, etc.) I believe it should work.
This is a rather inefficient approach, though (tiff1 is copied 9 times if you have 10 files). Since you refer to libtiff, you may take a look at the source of libtiff's tiffcp and check whether it is worthwhile to embed it.