CABAC: Looking for test patterns - h.264

I'd like to black-box test an H.265 CABAC module written in C, and I'm looking for test patterns (if any exist).
The way I see such a test pattern:
- an input array
- some information for each array value (so I know which context model to use during encoding)
- the expected output (binary)
Does anyone have knowledge of such a thing?
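For illustration, here is a hypothetical C layout for one such test vector; the type and field names are my own invention, not from any standard test suite:

#include <stddef.h>
#include <stdint.h>

/* one input symbol: a bin plus the context model used to code it */
typedef struct {
    int bin;        /* input bin value (0 or 1) */
    int ctx_model;  /* index of the context model to use */
} cabac_test_input;

/* a complete test vector: input array plus expected encoded output */
typedef struct {
    const cabac_test_input *inputs;
    size_t                  num_inputs;
    const uint8_t          *expected;     /* expected output bitstream */
    size_t                  expected_len; /* length in bytes */
} cabac_test_vector;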

How to marshal a predicate from JSON in Prolog?

In Python it is common to marshal objects from JSON. I am seeking similar functionality in Prolog, either SWI-Prolog or Scryer.
For instance, if we have JSON stating
{'predicate':
{'mortal(X)', ':-', 'human(X)'}
}
I'm hoping to find something like load_predicates(j) and have that data immediately consulted. A version of json.dumps() and loads() would also be extremely useful.
EDIT: For clarity, this will allow interoperability with client applications which will be collecting rules from users. That application is probably not in Prolog, but something like React.js.
I agree with the commenters that it would be easier to convert the JSON data to a .pl file in the proper format first and then load that.
However, you can load the predicates from JSON directly, convert them to a representation that Prolog understands, and use assertz to add them to the knowledge base.
If indeed the data contains all the syntax needed for a predicate (as is the case in the example data in the question) then converting the representation is fairly simple as you just need to concatenate the elements of the list into a string and then create a term out of the string. Note that this assumption skips step 2 in the first comment by Guy Coder.
Note that the Prolog JSON library is rather strict about the format it accepts: only double quotes are valid as string delimiters, and lists of plain values (i.e., not key-value pairs) need to use the notation [a,b,c] instead of {a,b,c}. So first the example data needs to be rewritten:
{"predicate":
["mortal(X)", ":-", "human(X)"]
}
Then you can load it in SWI-Prolog. Minimal working example:
:- use_module(library(http/json)).

% example fact for testing
human(aristotle).

load_predicate(J) :-
    % open the file
    open(J, read, JSONstream, []),
    % parse the JSON data
    json_read(JSONstream, json(L)),
    % close the stream once the data is read
    close(JSONstream),
    % check for an occurrence of the predicate key with value L2
    member(predicate=L2, L),
    % concatenate the list into a string
    atomics_to_string(L2, S),
    % create a term from the string
    term_string(T, S),
    % add to knowledge base
    assertz(T).
Example run:
?- consult('mwe.pl').
true.
?- load_predicate('example_predicate.json').
true.
?- mortal(X).
X = aristotle.
Detailed explanation:
The predicate json_read stores the data in the following form:
json([predicate=['mortal(X)', :-, 'human(X)']])
This is a list inside a json term with one element for each key-value pair. The element has the syntax key=value. In the call to json_read you can already strip the json() term and store the list directly in the variable L.
Then member/2 is used to search for the compound term predicate=L2. If you have more than one predicate in the JSON file then you should turn this into a forall or a recursive call to process all predicates in the list, as sketched below.
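For instance, a minimal sketch of such a variant; the name load_predicates/1 and the forall-based loop are my own, not part of the original answer:

load_predicates(J) :-
    open(J, read, JSONstream, []),
    json_read(JSONstream, json(L)),
    close(JSONstream),
    % assert every predicate=Value pair found in the JSON object
    forall(member(predicate=L2, L),
           ( atomics_to_string(L2, S),
             term_string(T, S),
             assertz(T) )).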
Since the list L2 already contains a syntactically well-formed Prolog predicate, it can just be concatenated, turned into a term using term_string/2, and asserted. Note that in case the predicate is not yet in the required format, you can construct a predicate out of the various pieces using built-in predicate manipulation functionality; see https://www.swi-prolog.org/pldoc/doc_for?object=copy_predicate_clauses/2 for some pointers.

Language translation using TorchText (PyTorch)

I have recently started with ML/DL using PyTorch. The following PyTorch tutorial explains how we can train a simple model for translating from German to English:
https://pytorch.org/tutorials/beginner/torchtext_translation_tutorial.html
However, I am confused about how to use the model for running inference on custom input. My understanding so far:
1) We will need to save the "vocab" for both German (input) and English (output) [using torch.save()] so that they can be used later for running predictions.
2) At inference time, given a German paragraph, we will first need to convert the German text to a tensor using the German vocab.
3) This tensor will be passed to the model's forward method for translation.
4) The model will return a tensor for the destination language, i.e., English in the current example.
5) We will use the English vocab saved in the first step to convert this tensor back to English text.
Questions:
1) If the above understanding is correct, can these steps be treated as a generic approach for running inference on any language-translation model, given the source and destination languages and their vocab files? Or can we use the vocab provided by third-party libraries like spaCy?
2) How do we convert the output tensor returned from the model back to the target language? I couldn't find any example of how to do that. The tutorial above explains how to convert the input text to a tensor using the source-language vocab.
I could easily find various examples and detailed explanations for image/vision models, but not much for text.
Yes, globally what you are saying is correct, and of course you can use any vocab, e.g. one provided by spaCy. To convert a tensor into natural text, one of the most common techniques is to keep both a dict that maps indexes to words and another dict that maps words to indexes; the code below builds both:
from collections import defaultdict

tok2idx = defaultdict(lambda: 0)  # unknown tokens map to index 0
idx2tok = {}
index = 1  # start at 1, reserving 0 for unknown tokens
for seq in sequences:
    for tok in seq:
        if tok not in tok2idx:
            tok2idx[tok] = index
            idx2tok[index] = tok
            index += 1
Here sequences is a list of all the sequences (i.e. sentences in your dataset). If instead you only have a flat list of words or tokens, you can adapt the code easily by keeping only the inner loop.
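As for the second question, here is a minimal sketch of the reverse mapping; the toy idx2tok and the assumed (seq_len, vocab_size) model output are my own illustrations, not from the tutorial:

import torch

# toy vocab for illustration only
idx2tok = {0: "<unk>", 1: "the", 2: "cat", 3: "sat", 4: "<eos>"}

# pretend model output: one row of vocab scores per target position
output = torch.tensor([
    [0.1, 2.0, 0.0, 0.0, 0.0],  # -> "the"
    [0.0, 0.0, 3.0, 0.1, 0.0],  # -> "cat"
    [0.0, 0.0, 0.0, 2.5, 0.0],  # -> "sat"
    [0.0, 0.0, 0.0, 0.0, 4.0],  # -> "<eos>"
])

# take the highest-scoring vocab index per position, then map back to words
predicted_ids = output.argmax(dim=-1).tolist()
words = [idx2tok.get(i, "<unk>") for i in predicted_ids]
print(" ".join(w for w in words if w != "<eos>"))  # the cat sat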

Extract JSON data with Screaming Frog

I'm using Screaming Frog to extract data from JSON generated from a URL.
The generated JSON has this form:
{"ville":[{"codePostal":"13009","ville":"VAUFREGE","popin":"ouverturePopin","zoneLivraison":"1300913982","url":""},{"codePostal":"13009","ville":"LES BAUMETTES","popin":"ouverturePopin","zoneLivraison":"1300913989","url":""},{"codePostal":"13009","ville":"MARSEILLE 9EME ARRON","popin":"ouverturePopin","zoneLivraison":"1300913209","url":""}]}
I'm using this regex in Custom > Extraction in Screaming Frog as a way to extract the values of "codePostal".
"codePostal":".*?"
The problem is that it doesn't extract anything.
When I test my regex on regex101, it seems correct.
Do you have any clue about what is wrong?
Thanks.
Regards.
Have you tried saving the output to see what Screaming Frog actually fetches? Whether your regex works doesn't matter at the beginning.
That said, don't forget that Screaming Frog is a Java-based tool, so Java supplies the regex engine; make sure you test your regular expressions with the correct dialect.
You need to specify group extractors enclosed in parentheses. For instance, in your example you need ("codePostal":".*?") as the extractor.
If you simply want to extract the value, you could use the following instead:
"codePostal":"(.*?)"
It's not a problem with your regular expression. The problem seems to be with the content type: Screaming Frog isn't properly reading application/json content types for scraping. Hopefully they will fix this bug.

Is there a Go language equivalent to Perl's Dumper() method in Data::Dumper?

I've looked at the very similarly titled post (Is there a C equivalent to Perl's Dumper() method in Data::Dumper?), regarding a C equivalent to Data::Dumper::Dumper(). I have a similar question for the Go language.
I'm a Perl zealot by trade and a programming hobbyist, and I make use of Data::Dumper and similar offspring literally hundreds of times a day. I've taken up learning Go because it looks like a fun and interesting language, something that will get me out of the Perl rut I'm in while opening my eyes to new ways of doing stuffz... One of the things I really want is something like:
fmt.Println(dump.Dumper(decoded_json))
to see the resulting data structure, the way Data::Dumper would turn the JSON into an array of hashes. Seeing this in Go will help me understand how to construct and work with the data. Something like this would be a major lightbulb moment in my learning of Go.
Contrary to the statements made in the C counterpart post, I believe we can write this. Since I'll be passing Dumper's output to Println, whatever JSON string or XML page I pass in and decode, I should be able to see the result of the decoding in a Dumper-like state... So, does anyone know of anything like this that exists? Or maybe some pointers to getting something like this done?
Hi, and welcome to Go. I'm a former Perl hacker myself.
As to your question, the encoding/json package is probably the closest you will find to a Go data pretty-printer. I'm not sure you really need it, though. One of the reasons Data::Dumper was awesome in Perl is that many times you really didn't know the structure of the data you were consuming without visually inspecting it. With Go, though, everything is a specific type, and every specific type has a specific structure. If you want to know what the data will look like, you probably just need to look at its definition.
Some other tools you should look at include:
fmt.Printf("%#v\n", data) will print the data in Go-syntax form.
fmt.Printf("%T\n", data) will print the data's type in Go-syntax form.
More fmt format string options are documented here: http://golang.org/pkg/fmt/
I found a couple packages to help visualize data in Go.
My personal favourite - https://github.com/davecgh/go-spew
There's also - https://github.com/tonnerre/golang-pretty
I'm not familiar with Perl and Dumper, but from what I understand of your post and the related C post (and the very name of the function!), it outputs the content of the data structure.
You can do this using the %v verb of the fmt package. I assume your JSON data is decoded into a struct or a map. Using fmt.Printf("%v", json_obj) will output the values; %+v will add field names (for a struct; for a map there is no difference, since %v already outputs both keys and values); and %#v will output type information too.
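A minimal runnable sketch of these verbs on decoded JSON (the sample data is illustrative):

package main

import (
    "encoding/json"
    "fmt"
)

func main() {
    var decoded interface{}
    raw := []byte(`{"legs":4,"name":"camel"}`)
    if err := json.Unmarshal(raw, &decoded); err != nil {
        panic(err)
    }
    fmt.Printf("%v\n", decoded)  // map[legs:4 name:camel]
    fmt.Printf("%#v\n", decoded) // map[string]interface {}{"legs":4, "name":"camel"}
}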

Convert a list to a JSON Object in erlang (mochijson)

I would really appreciate any help.
I would like to convert this list
[[{id1,1},{id2,2},{id3,3},{id4,4}],[{id1,5},{id2,6},{id3,7},{id4,8}],[...]]
to a JSON object.
Need some inspiration :)
Please help.
Thank you.
Since you asked for inspiration, I can imagine two directions you can take.
You can write code to hand-roll your own JSON which, if your need is modest enough, can be a very lightweight and appropriate solution. It would be pretty simple Erlang to take that one data structure and convert it to the JSON:
"[[{\"id1\":1},{\"id2\":2},{\"id3\":3},{\"id4\":4}],[{\"id1\":5},{\"id2\":6} {\"id3\":7},{\"id4\":8}]]"
Or you can produce a data structure that mochiweb's mochijson:encode/1 and decode/1 can handle. I took your list and hand-coded it to JSON, getting:
X = "[[{\"id1\":1},{\"id2\":2},{\"id3\":3},{\"id4\":4}],[{\"id1\":5},{\"id2\":6},{\"id3\":7},{\"id4\":8}]]".
Then I used mochijson:decode(X) to see what structure mochiweb uses to represent JSON (too lazy to look at the documentation).
Y = mochijson:decode(X).
{array,[{array,[{struct,[{"id1",1}]},
{struct,[{"id2",2}]},
{struct,[{"id3",3}]},
{struct,[{"id4",4}]}]},
{array,[{struct,[{"id1",5}]},
{struct,[{"id2",6}]},
{struct,[{"id3",7}]},
{struct,[{"id4",8}]}]}]}
So, if you can create this slightly more elaborate data structure than the one you are using, then you can get the JSON by using mochijson:encode/1. Here is an example embedded in an io:format statement so that it prints as a string -- often you would use io_lib:format instead, depending on your application.
io:format("~s~n",[mochijson:encode(Y)]).
[[{"id1":1},{"id2":2},{"id3":3},{"id4":4}],[{"id1":5},{"id2":6},{"id3":7},{"id4":8}]]