Erlang: How to include libraries - json

I'm writing a simple Erlang program that requests a URL and parses the response as JSON.
To do that, I need to use a library called Jiffy. I downloaded and compiled it, and now I have a .beam file along with a .app file. My question is: how do I use it? How do I include this library in my program? I cannot understand why I can't find an answer on the web for something that must be very crucial.
Erlang has an include syntax, but it takes a .hrl file.
Thanks!

You don't need to include the file in your project. In Erlang, it is at run time that the code will try to find any function, so the module you are using must be in the search path of the VM that runs your code at the point you need it, that's all.
For this you can add directories to your path when you start Erlang: erl -pa your/path/to/beam (there is also -pz; see the Erlang docs).
Note that it is also possible to modify the path from the application itself using code:add_path(Dir).
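For example, assuming jiffy's compiled files ended up in deps/jiffy/ebin (the path is illustrative; code:add_path("deps/jiffy/ebin") from running code has the same effect as the -pa flag), a quick session might look like this:

$ erl -pa deps/jiffy/ebin
1> jiffy:decode(<<"{\"foo\": \"bar\"}">>).
{[{<<"foo">>,<<"bar">>}]}

The result shape is jiffy's documented EJSON encoding of a JSON object.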
You should have a look at the OTP way to build applications in the Erlang documentation or Learn You Some Erlang, and also look at rebar, a tool that helps you manage Erlang applications (for example, start with the rebar docs or the rebar wiki).

To add to Pascal's answer: yes, Erlang will search for your files at runtime, and you can add extra paths as command line arguments.
However, when you build a project of a scale where you are including other libraries, you should be building an Erlang application. This normally entails using rebar.
When using rebar, your app should have a deps/ directory. To include jiffy in your project, it is easiest to simply clone the repo into deps/jiffy. That is all that needs to be done for you to do something like jiffy:decode(Data) in your project.
Additionally, you can specify extra include directories in your rebar.config file by adding a line like {erl_opts, [{i, "./Some/path/to/file"}]}. rebar will then also look in that path when resolving include files.
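With rebar2-era tooling, the dependency can also be declared in rebar.config so rebar fetches it for you (a sketch; the URL is jiffy's public GitHub repo):

{deps, [
    {jiffy, ".*", {git, "https://github.com/davisp/jiffy.git", {branch, "master"}}}
]}.

Then rebar get-deps compile pulls it into deps/ and builds it alongside your app.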

Related

How to enable "Go to declaration" from typescript to json (i18next)

I'm working on a project that uses i18next with React and TypeScript, where translation keys are defined in .json files.
One drawback of switching to JSON for the translation files is that we can no longer use the IntelliJ IDEA "Go to declaration" (Ctrl + left-click) feature to quickly navigate from a key usage in TypeScript to its declaration in the JSON file.
Is there any way to enable this without requiring all developers to download some third-party IntelliJ plugin?
I've googled for hours for any information about this.
I've made a d.ts file to enable strong typing for where translation keys are used. What strikes me as odd is that IntelliJ/TypeScript is able to know when a key doesn't exist and warns about it, but at the same time doesn't know "where" that key exists whenever I type a correct key.
I also set resolveJsonModule: true in tsconfig, but to my limited understanding it doesn't seem relevant.
This is not technically possible, because commands like Go To Declaration look for a declaration in a source code file (think .ts or .js or .d.ts), whereas you want to go to its declaration in the JSON file.
The resolveJsonModule flag won't help you either because as per the docs:
Allows importing modules with a ‘.json’ extension, which is a common practice in node projects. This includes generating a type for the import based on the static JSON shape.
One possible solution is to create a build script that takes your .json file and outputs a .js or .ts file containing the same content; then IDE commands like Go To Declaration will jump to that file.
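A minimal sketch of such a script (file names are hypothetical; run it as a pre-build step, e.g. with ts-node):

// generate-translations.ts: turn the JSON translation file into a .ts
// module so "Go to declaration" lands in a source file.
import { readFileSync, writeFileSync } from "fs";

const json = readFileSync("src/locales/en.json", "utf8");
writeFileSync(
  "src/locales/en.ts",
  "// Auto-generated from en.json; do not edit by hand.\n" +
    `export default ${json} as const;\n`
);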
In summary: you will need some kind of plugin, or a custom build script.
DISCLAIMER: I don't use i18next or React; this answer is based on my understanding of both TypeScript and the JetBrains Rider IDE (which is like IntelliJ).

How to use ceylon js (also with google closure compiler)

Calling a file resulting from the concatenation (bash: cat ... >> app.js) of the following three files:
/usr/share/ceylon/1.2.0/repo/ceylon/language/1.2.0/ceylon.language-1.2.0.js
modules/com/example/helloworld/1.0.0/com.example.helloworld-1.0.0-model.js
modules/com/example/helloworld/1.0.0/com.example.helloworld-1.0.0.js
with the command nodejs app.js does nothing. The same happens when it is used in a web page. How do I have to call that JavaScript program so that it runs without using require.js?
Please give the rules for how Ceylon modules, the run function, and the other functions contained within them translate to JavaScript and how they are to be called.
How can I get one JavaScript file from the compilation of several Ceylon modules without concatenating them manually or with require.js?
The above is without using google closure compiler.
Given the language module's size of 1.6 MB, it makes no sense to run ceylon-js without using the Google Closure Compiler.
Compiling ceylon.language-1.2.0.js alone with the Google Closure Compiler results in a lot of warnings.
java -jar compiler.jar --compilation_level ADVANCED_OPTIMIZATIONS --js /usr/share/ceylon/1.2.0/repo/ceylon/language/1.2.0/ceylon.language-1.2.0.js --js_output_file lib-compiled.js
How can I get rid of those warnings?
In what order do I have to chain together the files resulting from ceylon-js with the model file and the language file to compile them in advanced mode with the Google Closure Compiler for dead code elimination?
These are 3 questions, really.
A Ceylon module is compiled to a CommonJS module. Concatenating the resulting files won't work because each file is on CommonJS format, which is a big function that returns an object with the exported declarations.
You can compile the modules with the --no-module option to get just the generated code, without it being wrapped in CommonJS format. For the language module, you can copy the file and just delete the first line and the last 5 lines.
I do not yet know how to get rid of the warnings you mention in the second question.
And as for the third question, I would recommend putting the language module first, then the rest of the files. If you have any toplevel declarations with the same name in different modules, you'll have conflicts (only the last declaration will remain), even if they're not shared, since they're all in the same module/unit.
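Putting the first and third points together, the whole flow might look like this (a sketch only; the file names are the ones from the question, the --no-module flag is the one described above, and you may still need to invoke the module's run() yourself):

ceylon compile-js --no-module com.example.helloworld
# strip the CommonJS wrapper lines from the language module first, as described above
cat ceylon.language-1.2.0.js \
    com.example.helloworld-1.0.0-model.js \
    com.example.helloworld-1.0.0.js > app.js
nodejs app.js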
Well, I think the require.js optimizer can combine the modules into one file and then run the Google Closure Compiler; see: http://www.requirejs.org/docs/optimization.html

Accessing a resource file manually

I'm using a 3rd party library that does serialization and deserialization of its data. I need to feed the library a data file that I have stored under Resources.
I can't use FileUtils to read the contents of the file; I need to let the 3rd party library do the reading of the file.
I need to get the full path of the file so the library can find it.
FileUtils::getInstance()->fullPathForFilename("file.map");
returns assets/file.map on Android, which is not found by ifstream when given that path.
How do I read a file manually, given that it's located in Resources?
You can't use ifstream to operate on bundled resources on Android because they're located inside the .apk file (an archive).
You can use FileUtils::getInstance()->getDataFromFile("file.map") to get the binary data and try to pass it to your library.
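If the library insists on a real file path, another common workaround is to copy the bundled resource out of the APK into the writable directory once, and hand the library that path. A sketch against the cocos2d-x v3 API (error handling trimmed):

#include <cstdio>
#include "cocos2d.h"
using namespace cocos2d;

// Copies a bundled resource out of the APK into the writable directory
// and returns an ordinary filesystem path the 3rd party library can open.
std::string extractResource(const std::string& name)
{
    Data data = FileUtils::getInstance()->getDataFromFile(name);
    std::string outPath = FileUtils::getInstance()->getWritablePath() + name;
    FILE* f = fopen(outPath.c_str(), "wb");
    if (f) {
        fwrite(data.getBytes(), 1, data.getSize(), f);
        fclose(f);
    }
    return outPath;
}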
Also, you can look at this answer: link to answer. It might help you too.

How do I use the Perl Text-MediawikiFormat to convert mediawiki to xhtml?

On an Ubuntu platform, I installed the nice little Perl script
libtext-mediawikiformat-perl - Convert Mediawiki markup into other text formats
which is available on CPAN. I'm not familiar with Perl and have no idea how to go about using this library to write a Perl script that would convert a MediaWiki file to an HTML file, e.g. I'd like to just have a script I can run such as
./my_convert_script input.wiki > output.html
(perhaps also specifying the base url, etc), but have no idea where to start. Any suggestions?
I believe #amon is correct that the Perl library I reference in the question is not the right tool for the task I proposed.
I ended up using the MediaWiki API with action="parse" to convert to HTML using the MediaWiki engine, which turned out to be much more reliable than any of the alternative parsers I tried from the list proposed in the answer. (I then used pandoc to convert my HTML to markdown.) The MediaWiki API handles extraction of categories and other metadata too, and I just had to append the base URL to internal image and page links.
Given the page title and base url, I ended up writing this as an R function.
wiki_parse <- function(page, baseurl, format="json", ...){
  require(httr)
  action <- "parse"
  addr <- paste(baseurl, "/api.php?format=", format, "&action=", action, "&page=", page, sep="")
  config <- c(add_headers("User-Agent" = "rwiki"), ...)
  out <- GET(addr, config=config)
  parsed_content(out)
}
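Called like wiki_parse("Main_Page", "https://en.wikipedia.org/w") (URL illustrative), it returns the parsed API response; the rendered HTML sits inside the parse element of that structure.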
The Perl library Text::MediawikiFormat isn't really intended for stand-alone use but rather as a formatting engine inside a larger application.
The documentation at CPAN does actually show how to use this library, and does note that other modules might provide better support for one-off conversions.
You could try this (untested) one-liner
perl -MText::MediawikiFormat -e'$/=undef; print Text::MediawikiFormat::format(<>)' input.wiki >output.html
although that defies the whole point (and customization abilities) of this module.
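Expanded into the ./my_convert_script form the question asks for, the same idea looks like this (a sketch, untested; wikiformat is the helper the module exports):

#!/usr/bin/perl
use strict;
use warnings;
use Text::MediawikiFormat 'wikiformat';

local $/;                 # slurp mode: read the whole file at once
my $wiki = <>;            # input.wiki arrives via the command line
print wikiformat($wiki);  # emit XHTML on STDOUT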
I am sure that someone has already come up with a better way to convert single MediaWiki files, so here is a list of alternative MediaWiki processors on the MediaWiki site. This SO question could also be of help.
Other markup languages, such as Markdown, provide better support for single-file conversions. Markdown is especially well suited for technical documents and mirrors email conventions. (Also, it is used on this site.)
The libfoo-bar-perl packages in the Ubuntu repositories are precompiled Perl modules. Usually, these would be installed via cpan or cpanm. While some of these libraries do include scripts, most don't, and aren't meant as stand-alone applications.

Packing a file into an ELF executable

I'm currently looking for a way to add data to an already compiled ELF executable, i.e. embedding a file into the executable without recompiling it.
I could easily do that by using cat myexe mydata > myexe_with_mydata, but I couldn't access the data from the executable because I don't know the size of the original executable.
Does anyone have an idea of how I could implement this? I thought of adding a section to the executable or using a special marker (0xBADBEEFC0FFEE for example) to detect the beginning of the data in the executable, but I do not know if there is a more elegant way to do it.
Thanks in advance.
You could add the file to the elf file as a special section with objcopy(1):
objcopy --add-section sname=file oldelf newelf
will add the contents of file to oldelf as a section named sname and write the result to newelf (oldelf won't be modified)
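To pull the data back out from the shell, objcopy can also dump a single section to a raw file:

objcopy -O binary --only-section=sname newelf sname.bin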
You can then use libbfd to read the ELF file and extract the section by name, or just roll your own code that reads the section table and finds your section. Make sure to use a section name that doesn't collide with anything the system is expecting -- as long as your name doesn't start with a ., you should be fine.
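If you do roll your own, a minimal sketch that walks the section headers of a 64-bit ELF and reports where a named section lives (error handling mostly omitted):

/* findsec.c: print the offset and size of a named section. */
#include <elf.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(int argc, char **argv)
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s elf-file section-name\n", argv[0]);
        return 1;
    }
    FILE *f = fopen(argv[1], "rb");
    if (!f) { perror("fopen"); return 1; }

    Elf64_Ehdr eh;                                    /* ELF header */
    fread(&eh, sizeof eh, 1, f);

    Elf64_Shdr *sh = malloc(eh.e_shnum * sizeof *sh); /* section header table */
    fseek(f, eh.e_shoff, SEEK_SET);
    fread(sh, sizeof *sh, eh.e_shnum, f);

    Elf64_Shdr *strtab = &sh[eh.e_shstrndx];          /* section-name strings */
    char *names = malloc(strtab->sh_size);
    fseek(f, strtab->sh_offset, SEEK_SET);
    fread(names, 1, strtab->sh_size, f);

    for (int i = 0; i < eh.e_shnum; i++)
        if (strcmp(names + sh[i].sh_name, argv[2]) == 0) {
            printf("%s: offset %lu, size %lu\n", argv[2],
                   (unsigned long)sh[i].sh_offset,
                   (unsigned long)sh[i].sh_size);
            return 0;
        }
    fprintf(stderr, "section %s not found\n", argv[2]);
    return 1;
}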
I've created a small library called elfdataembed which provides a simple interface for extracting/referencing sections embedded using objcopy. This allows you to pass the offset/size to another tool, or reference it directly from the runtime using file descriptors. Hopefully this will help someone in the future.
It's worth mentioning that this approach is more efficient than compiling the data in as a symbol, as it allows external tools to reference the data without it needing to be extracted, and it also doesn't require the entire binary to be loaded into memory in order to extract or reference it.