Using #include directives in Platform Builder DAT files

So I know I can use #include in BIB and REG files to pull in other files like this:
#include $(_PLATFORMROOT)\MYPLATFORM\FILES\MYBIB.BIB
but it seems that I can't get DAT files to behave the same way. Am I missing something here? I have a component with a fairly large DAT file, and I'd rather not force users to paste the whole thing into their PROJECT.DAT file; instead, I'd like a simple one-line include to pull it in.

It appears from further testing, and from feedback from other developers who use Platform Builder, that this is indeed a limitation of the SYSGEN process: the DAT file parser simply does not support #include directives. A few hacks were suggested to get similar behavior, the "best" of which, I think, is to append the DAT contents using the PostFmergeObj.bat file and just give the customer two files to place in their BSP\FILES folder: the batch file (PostFmergeObj.bat) and the DAT file that gets appended to their platform's INITOBJ.DAT file. A similar technique is outlined here for filtering pieces out of a DAT file.
Hopefully the next release of PB will have a better DAT parser.

Related

Fortran90 - compiled program creates a blank csv file instead of reading the existing one

In short: I am trying to load a CSV file, but the program always overwrites the existing file with an empty new one.
Longer version: I am pretty new to Fortran, so bear with me. I am trying to read data from a CSV file into a Fortran program. I didn't write the program and it is pretty big, so I can't post the whole thing here. The program consists of a whole bunch of .f90 files, and everything is compiled using a makefile. Since I load the gcc module before compiling, I assume it is compiled with GNU Fortran, because that is part of gcc (I don't know how to verify this).
The compiler places the executable in a different directory. When I execute the program there, it apparently overwrites the existing .csv file with a new blank one, so the program only ever reads "End of File". I don't know why it always creates a new file; how do I stop it from doing so?
As a side note, the csv file I am trying to read simply consists of a single column of floats, e.g.
"0.01, 0.13, 0.041,..." etc.
The code that I inserted into a subroutine of one of the .f90 files is the following:
real*8, dimension(nz) :: Nsq
integer :: i
open(10, file='Nsq.csv')
do i=1,20
read(10, *) Nsq(i)
enddo
close(10)
I have also tried to write a small test program, essentially running the same code as above. That one works just fine and outputs the contents of the csv file without any issues. For that one I use gfortran to compile it.
I have no experience in Fortran at all, so I am completely stumped as to why this happens. I know the chances are slim that you can help me with this, since I can't provide the whole source code, but maybe someone has an idea why this occurs. Or maybe you know an alternative way of reading CSV files?
Thanks for your time.
The Fortran open statement, OPEN(connect-spec-list), has a lot of connection specifications which define how an external file should be managed (see the Fortran 2018 standard, sec. 12.5.6).
When you open a file using the simplest form of the open-statement:
OPEN(unit=unitid,file="filename")
a lot of default assumptions are made, such as ACCESS="SEQUENTIAL", ASYNCHRONOUS="NO", BLANK="NULL", .... The most important ones, however, are ACTION and STATUS, which define the purpose of the file. The ACTION specifier states whether you want to use the file for reading, writing, or both, while STATUS essentially defines whether we work on an existing file or not, and what should be done with it (replace it, keep it, ...).
Both of these specifiers have compiler-dependent defaults.
In the Intel compiler suite, the defaults are action="readwrite" and status="unknown" (see here and here).
Intel defines status="unknown" as: "Indicates the file may or may not exist. If the file does not exist, a new file is created and its status changes to 'OLD'."
The GNU compiler suite has a different take on this. The default ACTION is determined by a set of rules that depend on the file's accessibility, if it exists (+rw, +r-w, -r+w) (see here). The behaviour of the default status="unknown" is not documented, but it seems to be REWRITE (see Default Status of "Unknown" in Open).
If you know what you want to do with the file, it is advisable to say so explicitly:
OPEN(newunit=unitid, file="filename", action="read", status="old")

Is it possible to generate a read-only CSV file?

For legal reasons I need to let the customer download a CSV file, but they should only be able to read it, not modify it.
What's a common way of handling this use case?
Some kind of signature on the file, so that if it's modified you can see it's no longer in its original form?
I don't need a solution bound to a specific language, I would just like to know what is the best practice.
If the customer is able to download the file to their computer, then you can't stop them from modifying it.
However, you can easily detect changes. The simplest approach is to generate a cryptographic hash of the file, e.g.:
$ sha256sum data.csv
eea8254c7500ba3de996aa8ad6af399183f04e17d4a8102fde539dbc93a90012 data.csv
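If the files are produced programmatically, you can record the same digest at generation time and compare it against whatever the customer later hands back. A minimal sketch in Python, using only the standard library (the file name data.csv is just a placeholder):

import hashlib

def file_sha256(path):
    # Hash the file in chunks so large CSVs don't need to fit in memory.
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

print(file_sha256("data.csv"))

Keep the recorded digest on your side, not just alongside the download; a copy of the file is unmodified only if its recomputed digest matches the one you stored.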

What is the difference between a .JSON file and .JL file?

I have both a JSON file and a JL file on my computer, but when I open them in Notepad their structure looks the same. What is the difference between them? Where should I use each one?
Actually, at the time I asked this question I didn't know that "the file type is no guarantee of what is inside it". In other words, I thought that every file extension had its own specification, and that if a file's name ends in ".something", there is a unique specification for it. But now I know that I can create a file, write anything I want into it, and name it ".peyman", and yes, there is nothing special about it!
What was that file? It was in the JSON Lines file format.
Where did I find it? In Scrapy: instead of writing scrapy crawl name -o file.json, I saw that somebody wrote scrapy crawl name -o file.jl. I tried that, and the resulting file was 99% like a JSON file, so I wondered and asked this question here.
So:
What is the difference between a .JSON file and a .JL file? Now I know that the better question is: "What is the difference between a .JSON file and a .JL file in Scrapy?"
JSON Lines is like JSON, but without the "[" and "]" at the beginning and the end, so each line is a standalone JSON value. It is used in Scrapy because of this.
There are quite a few things that a .jl file extension could be referring to. If I remember correctly, it originally had something to do with the window manager Sawfish.
Sawfish was developed in Lisp, and the jl file was a Lisp source file for Sawfish. However, I'm guessing (because you said that inside was JSON-like sauce) that's not what you're asking about.
In that case, I do recall a few projects on GitHub... JSON lambda and Julia.
Both of those may be the reason why you're seeing JSON in a jl file. Without more information on where you got that file, or what it was part of, though, we won't be able to help you much.
That said, file extensions rarely matter on Linux. In Windows they're far more important, but on Linux you could literally append anything to a file as an "extension" (i.e. thisfile.whatever) and still open it in an editor. The same is true for most editors in Windows.
Likely, the packager of that file decided on jl for their own reasons, rather than following convention of using .json.
I guess the .jl extension is used for many purposes, but it is also one of the few extensions used for JSON Lines (also known as NDJSON or JSONL).
This format can contain multiple JSON values, one JSON value per line (in "compact" formatting), and is useful for e.g. streaming or logging.
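To make the difference concrete, here is a minimal sketch in Python using only the standard json module (the file name data.jl and the records are just examples):

import json

records = [{"id": 1, "name": "foo"}, {"id": 2, "name": "bar"}]

# Write JSON Lines: one compact JSON document per line,
# with no surrounding [ ] and no commas between records.
with open("data.jl", "w") as f:
    for record in records:
        f.write(json.dumps(record) + "\n")

# Read it back one line at a time -- useful for streaming,
# since the whole file never has to be in memory at once.
with open("data.jl") as f:
    for line in f:
        print(json.loads(line))

A plain .json file holding the same data would instead contain one JSON array (json.dump(records, f)), which has to be parsed as a whole.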

How to extract hhp file from a chm file

I have an A.chm file for my Windows application which runs as expected.
When I decompile it using HTML Help Workshop I get a set of HTML files, an .hhc file, and an .hhk file. I then compile another file, B.chm, from these extracted files without changing any of them. (I want to add more HTML content to this file, but it looks like I am losing some information in the decompile step.) The output file I get is 72K, whereas the original file was 75K. B.chm's contents look fine when viewed in the CHM viewer, but the behavior is lost when it is used with the application.
After reading around I found that if the .hhp can be extracted from a .chm file, then the .chm can be reconstructed as it was, without losing any mappings or aliases. Is that true?
How can I extract .hhp file from a .chm file?
Thanks,
Sam
No, yes, and no.
The original .hhp can't be guaranteed to be extractable.
However, since CHM is an archive format, the project could have added all the project files to the archive; I assume you would already have found them if that were the case.
If the decompiler does its bookkeeping properly, it can regenerate the .hhp to a certain degree.
Comments and #define names will probably be lost though, maybe more, but that should not result in problems when recompiling.
But of course the decompiler could be limited. You could try some other one (search for "keytools").
If not, then take "chmlib" and start drilling down into the format.

Packing a file into an ELF executable

I'm currently looking for a way to add data to an already compiled ELF executable, i.e. embedding a file into the executable without recompiling it.
I could easily do that by using cat myexe mydata > myexe_with_mydata, but I couldn't access the data from the executable because I don't know the size of the original executable.
Does anyone have an idea of how I could implement this? I thought of adding a section to the executable, or of using a special marker (0xBADBEEFC0FFEE, for example) to detect the beginning of the data in the executable, but I don't know if there is a more elegant way to do it.
Thanks in advance.
You could add the file to the elf file as a special section with objcopy(1):
objcopy --add-section sname=file oldelf newelf
will add the file to oldelf as a section named sname and write the result to newelf (oldelf won't be modified).
You can then use libbfd to read the ELF file and extract the section by name, or just roll your own code that reads the section table and finds your section. Make sure to use a section name that doesn't collide with anything the system is expecting -- as long as your name doesn't start with a ., you should be fine.
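If you'd rather not parse the section table by hand, here is a minimal sketch using the third-party pyelftools package (pip install pyelftools); the file name newelf and section name sname match the objcopy example above:

from elftools.elf.elffile import ELFFile

with open("newelf", "rb") as f:
    elf = ELFFile(f)
    section = elf.get_section_by_name("sname")
    if section is None:
        raise SystemExit("section 'sname' not found")
    payload = section.data()  # the raw bytes of the embedded file

with open("extracted_file", "wb") as out:
    out.write(payload)

The same lookup can be done in C with libbfd, or by walking the section headers from elf.h yourself; pyelftools just keeps the sketch short.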
I've created a small library called elfdataembed which provides a simple interface for extracting/referencing sections embedded using objcopy. This allows you to pass the offset/size to another tool, or reference it directly from the runtime using file descriptors. Hopefully this will help someone in the future.
It's worth mentioning that this approach is more efficient than compiling the data into a symbol, as it allows external tools to reference the data without extracting it, and it also doesn't require the entire binary to be loaded into memory in order to extract or reference it.