I want to use static linking in Free Pascal and get a single compiled output file.
I specify the -o$outputName and -XS parameters, but it still creates object files for all units, etc.
How can I tell the compiler not to create such files (or to delete them after the job is done)?
In short: I am trying to load a CSV file, but the program always overwrites the existing file with an empty new one.
Longer: I am pretty new to Fortran, so bear with me. I am trying to read data from a CSV file into a Fortran program. I didn't write the program and it is pretty big, so I can't post the whole thing here. The program consists of a whole bunch of .f90 files and everything is compiled using a makefile. Since I load the gcc module before compiling, I assume it is compiled with GNU Fortran, because that is part of gcc (I don't know how to verify that, though).
The compiler puts an executable in a different directory. When I execute the program in that directory, it apparently overwrites the existing .csv file with a new blank one, so the program only reads "End of File". I don't know why it always creates a new file; how do I stop it from doing so?
As a side note, the CSV file I am trying to read simply consists of a single column of floats, e.g.
"0.01, 0.13, 0.041,..." etc.
The code that I inserted into a subroutine of one of the .f90 files is the following:
real*8, dimension(nz) :: Nsq
integer :: i
open(10, file='Nsq.csv')
do i=1,20
read(10, *) Nsq(i)
enddo
close(10)
I have also tried writing a small test program that essentially runs the same code as above. That one works just fine and outputs the contents of the CSV file without any issues. For that one I use gfortran to compile it.
I have no experience in Fortran at all, so I am completely stumped as to why this happens. I know the chances are slim that you can help me with this, since I can't provide the whole source code, but maybe someone has an idea why this occurs, or knows an alternative way of reading CSV files?
Thanks for your time.
The open statement in Fortran, OPEN(connect-spec-list), has a lot of connection specifiers that define how an external file should be managed (see the Fortran 2018 standard, sec. 12.5.6).
When you open a file using the simplest form of the open-statement:
OPEN(unit=unitid,file="filename")
A lot of default assumptions are made, such as ACCESS="SEQUENTIAL", ASYNCHRONOUS="NO", BLANK="NULL", and so on. The most important ones, however, are ACTION and STATUS, which define the purpose of the file. The ACTION specifier states whether you want to use the file for reading, writing, or both, while STATUS essentially defines whether you are working on an existing file or not, and what should be done with it (replace it, keep it, ...).
Both of these specifiers have compiler-dependent defaults.
In the Intel compiler suite, the defaults are action="readwrite" and status="unknown" (see here and here).
Intel defines status="unknown" as: "Indicates the file may or may not exist. If the file does not exist, a new file is created and its status changes to 'OLD'."
The GNU compiler suite has a different take on this. The default ACTION is determined by a set of rules that depend on the file's accessibility, if it exists (+rw, +r-w, -r+w) (see here). The behaviour of the default status="unknown" is not documented, but it seems to be REWRITE (see Default Status of "Unknown" in Open).
If you know what you want to do with the file, it is advisable to say so explicitly:
OPEN(newunit=unitid, file="filename", action="read", status="old")
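Applied to the snippet from the question, a minimal sketch could look like the following (newunit= needs Fortran 2008, which gfortran provides; the file name and loop count are taken from the question):
program read_nsq
    implicit none
    integer, parameter :: nz = 20
    real*8, dimension(nz) :: Nsq
    integer :: i, u

    ! status='old' makes the open fail if the file is missing instead of
    ! silently creating a blank one; action='read' rules out accidental writes.
    open(newunit=u, file='Nsq.csv', action='read', status='old')
    do i = 1, nz
        read(u, *) Nsq(i)
    end do
    close(u)
end program read_nsq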
I have a folder for Octave M-files at C:\Users\Dropbox\Octave, under which are various subfolders by function category (normal distribution, chisq, ...). I just started making those subfolders, and they will keep changing (adding, removing, reshuffling) as time goes on.
I would just like to set that folder as the root and have Octave search for functions recursively there, just like you set a classpath in Java and the JVM searches all folders under it.
I used addpath(genpath('C:\Users\Dropbox\Octave')), but the paths generated are then fixed, not reflecting subsequent subfolder changes.
Shall I add addpath(genpath('C:\Users\Dropbox\Octave')) to the .octaverc file?
I think there is some confusion here. There are several ways to interact with the path, but for the most part these do not result in permanent changes, unless you save this somehow.
Simply adding a path for an existing octave session will not result in any permanent changes to the usual path that octave initialises at startup. Therefore when you say:
I used addpath(genpath('C:\Users\Dropbox\Octave')), but the paths generated are then fixed, not reflecting subsequent subfolder changes.
this makes no sense, because as soon as you exit your octave session, those added paths are gone altogether and will not appear in later octave sessions.
It is more likely that at some point you added these paths, and then used the savepath command, which resulted in your custom paths being added to your .octaverc file.
If that is the case, then yes, you can expect that octave will not "update" what was written in your .octaverc file, unless you call savepath again with an updated path definition.
If you would like the addpath(genpath('C:\Users\Dropbox\Octave')) command you mentioned to be called every time octave starts, so that the current/updated directory structure is loaded, then yes, the best way to do it is to add that command to your .octaverc file. Make sure you remove the lines in your .octaverc that refer to the previous changes made by savepath. Note that there may be several levels of octaverc files you need to check (see the relevant page in the manual).
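That is, the only line your .octaverc really needs for this is something like the following (path taken from your question):
% rebuild the load path from the current folder tree at every octave startup
addpath(genpath('C:\Users\Dropbox\Octave'));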
Alternatively, you could simply make sure that this line appears in every script you want to call which intends to make use of those files.
While you may consider this last approach tedious, programmatically it is the most recommended one, since it makes dependencies clear in your code. This is especially important if you ever plan to share your code (and doubly so if you'd like it to be matlab compatible).
PS. All the above mostly applies to matlab too, with the exception that a) matlab's savepath saves path information in a file called pathdef.m, rather than directly in your startup files, and b) matlab uses startup.m instead of .octaverc as startup files. Also, if you don't care about doing this programmatically, matlab provides pathtool, which is a graphical interface for adding / saving directories to the matlab path.
I am trying to use SWIG to generate wrappers for some of my C++ function calls.
I am also trying to build my own Tcl shell, so I need to statically link the generated SWIG libraries. I have my own main function with a Tcl_AppInit call where I do some prior setup.
To do this, what function should I call in my program's Tcl_AppInit? I found that SWIG_init is not the right function. I even tried Cell_Init, where Cell is the name of the class in my code, but that doesn't help either.
How do I statically link the SWIG object files with my own main function and Tcl_AppInit call?
Currently, when I use the following command to link my executable:
g++ -o bin/icde src/core/*.o src/read/*.o src/swig/*.o src/icde/*.o -ltk -ltcl
I get the following error:
src/icde/main.o: In function `AppInit(Tcl_Interp*)':
main.cpp:(.text+0xa9): undefined reference to `Cell_Init(Tcl_Interp*)'
collect2: ld returned 1 exit status
I checked whether the src/swig/cell.o file has the Cell_Init function using objdump:
~> objdump -d src/swig/cell.o | grep Cell_Init
00006461 <Cell_Init>:
646c: 75 0a jne 6478 <Cell_Init+0x17>
I am not sure if I am doing something wrong while linking.
------------------- UPDATE ----------------------------
I found that including the swig/swig.cxx file directly in the main file that calls the Tcl_AppInit function resolves the linking issue. Is there a reason for this?
Isn't it possible to compile the SWIG file separately and link it with the file that has the main function?
In general, with SWIG you'll end up with a bunch of generated source files that you compile. The normal thing you do then is package them up into a shared library (with appropriate bound dependencies on other shared libraries) that can be imported into a Tcl runtime with the load command.
But you don't want that this time. Instead, you want the object files that you would use to make that shared lib, and you want to include them in the instructions that build an executable, along with the object file that holds your main and Tcl_AppInit. You also need to make sure that when linking your main executable you make it dependent on those external shared libraries; executable building requires that you satisfy all dependencies and bind all symbols to their definitions. (You can use a static library to make this easier: it combines a bunch of object files into one file. There's very little difference from just using the object files directly, though; in particular, static libraries aren't bound to their dependencies.)
Finally, you do want to include a call to Cell_Init in your Tcl_AppInit. That's the right place to put it (well, as long as you're not arranging for the package to be loaded into sub-interpreters). If it was failing before, that was because you'd got your linking wrong. (Tip: linkers work best when objects and libraries on the link line only depend on things later on the link line. Getting the link order right is a bit of a black art when you've got a complex build!)
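For illustration, a minimal sketch of such a main with its Tcl_AppInit (Cell_Init is the name from your question; the extern "C" declaration is an assumption about how the generated wrapper exports it, so check it against your wrapper):
#include <tcl.h>

// Assumed to match the SWIG-generated wrapper; SWIG's Tcl module init
// functions are conventionally exported with C linkage.
extern "C" int Cell_Init(Tcl_Interp *interp);

static int AppInit(Tcl_Interp *interp) {
    if (Tcl_Init(interp) == TCL_ERROR) {
        return TCL_ERROR;
    }
    // ... any other prior setup ...
    if (Cell_Init(interp) == TCL_ERROR) {   // register the wrapped C++ API
        return TCL_ERROR;
    }
    return TCL_OK;
}

int main(int argc, char **argv) {
    Tcl_Main(argc, argv, AppInit);          // runs the shell; never returns
    return 0;
}
This then gets linked together with the SWIG and application object files, as in the g++ command from the question.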
I need to visit a folder and all of its children with SSIS (SQL Server Integration Services). At the moment, by reading the folder path into a variable, I am able to loop through all the .txt files of the current folder and fill a pre-generated XML file (with header info).
What I need now is to create a new XML file for each accessed folder (the beginning content will always be the same). Once I can create it as the first action when a new folder is accessed, I can then simply apply the logic I have developed so far.
However, I am blocked at the moment: within the loop where I read the files (with their full paths), I cannot find a way to express "create the XML file if the accessed folder is new".
Assuming I understand the problem, you need to walk the entirety of a directory structure and, for each folder you find, create a base XML file. Then, for each text file you find in that folder, you will perform some operation on the XML file. The trick is creating the XML file only once.
I would envision a process like this.
A Script Task that makes use of System.IO.Directory.GetDirectories to populate a variable (directoryXML) that contains the folder structure (a sketch of the script body is given after the XML), something like
<Dir>
<D>C:\ssisdata</D>
<D>C:\ssisdata\a</D>
<D>C:\ssisdata\a\b</D>
</Dir>
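A rough sketch of that Script Task body, assuming package variables named User::RootFolder (read) and User::directoryXML (read/write); both names are placeholders, so wire up whatever your package actually uses:
// Inside the Script Task's ScriptMain; add "using System.IO;" and
// "using System.Text;" at the top of the generated file.
public void Main()
{
    string root = Dts.Variables["User::RootFolder"].Value.ToString();

    // Build the <Dir> XML shown above: the root plus every subfolder under it.
    StringBuilder xml = new StringBuilder("<Dir>");
    xml.AppendFormat("<D>{0}</D>", root);
    foreach (string dir in Directory.GetDirectories(root, "*", SearchOption.AllDirectories))
    {
        xml.AppendFormat("<D>{0}</D>", dir);
    }
    xml.Append("</Dir>");

    Dts.Variables["User::directoryXML"].Value = xml.ToString();
    Dts.TaskResult = (int)ScriptResults.Success;
}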
Use a Foreach NodeList Enumerator to shred that XML into a variable (currentDirectory).
You'd perform your one-time task of creating the XML file in currentDirectory.
Then, using the currentDirectory variable as an expression on the Foreach File Enumerator (assigned to Directory, with a FileSpec of *.txt), you can perform your task on all the files meeting that specification. Do not check the traverse-subfolders option, as that will not give the desired results.
This is a fairly high level approach to the problem as I'm assuming you have some familiarity with SSIS but the approach should be sound. Let me know if you have any particular sticking points.
I'm currently looking for a way to add data to an already compiled ELF executable, i.e. to embed a file in the executable without recompiling it.
I could easily do that with cat myexe mydata > myexe_with_mydata, but then I couldn't access the data from the executable, because I don't know the size of the original executable.
Does anyone have an idea of how I could implement this? I thought of adding a section to the executable, or of using a special marker (0xBADBEEFC0FFEE, for example) to detect the beginning of the data in the executable, but I don't know whether there is a more elegant way to do it.
Thanks in advance.
You could add the file to the ELF file as a special section with objcopy(1):
objcopy --add-section sname=file oldelf newelf
will add the file to oldelf and write the result to newelf (oldelf won't be modified).
You can then use libbfd to read the ELF file and extract the section by name, or just roll your own code that reads the section table and finds your section. Make sure to use a section name that doesn't collide with anything the system is expecting; as long as your name doesn't start with a ., you should be fine.
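With a reasonably recent binutils you can also inspect and pull the data back out without writing any code; a sketch using the sname example from above:
# list the sections to check the new one is there (also shows its offset and size)
readelf -S newelf
# extract the section contents back into a file; --dump-section is the
# documented inverse of --add-section, and the trailing output path is just a
# scratch copy so the original newelf is left untouched
objcopy --dump-section sname=extracted_file newelf /tmp/scratch.elf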
I've created a small library called elfdataembed which provides a simple interface for extracting/referencing sections embedded using objcopy. This allows you to pass the offset/size to another tool, or reference it directly from the runtime using file descriptors. Hopefully this will help someone in the future.
It's worth mentioning that this approach is more efficient than compiling the data in as a symbol, as it allows external tools to reference the data without it needing to be extracted, and it doesn't require the entire binary to be loaded into memory in order to extract or reference it.