Is it possible to get serialized output of some sort from the /dashboard/projects screen in GitLab?
(I want to track differences and alert myself when someone assigns me a new project. One option is of course to build a script that iterates through the HTML pages, but if there's a way to get all projects at once -- preferably in a machine-friendly format -- that's even better.)
I think this kind of alert usually isn't strictly needed, because the assignment workflow is usually about issue/MR assignment (which usually ends up as an email in your inbox). Anyway...
You should take a look at the GitLab API or, even better, use an existing project like Python GitLab.
It is a Python client implementation of the GitLab API and also has a handy gitlab command-line tool that can give you the required data in a human- and machine-readable format.
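For example, a minimal untested sketch with python-gitlab (the GitLab URL and token below are placeholders you would replace with your own):

import gitlab

# Rough sketch using python-gitlab; the URL and token are placeholders.
gl = gitlab.Gitlab("https://gitlab.example.com", private_token="YOUR_TOKEN")

# membership=True limits the list to projects you belong to, which is
# roughly what /dashboard/projects shows; all=True fetches every page.
projects = gl.projects.list(membership=True, all=True)
for project in projects:
    print(project.id, project.path_with_namespace)

Dumping that list to a file on a schedule and diffing it against the previous run would then flag newly assigned projects.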
I've installed a MediaWiki instance and imported an example page from Wikipedia, but the template is not shown properly: https://wordpress-251650-782015.cloudwaysapps.com/wiki/Cheeta
Any hint on what could be the cause?
You're most likely missing one or more of the templates/Lua modules this template relies on. If you want to get all the required templates/modules, you can fetch them via https://en.wikipedia.org/wiki/Special:Export by inserting the template name and ticking the box saying Include templates, and then importing the generated file via http://wordpress-251650-782015.cloudwaysapps.com/wiki/Speciale:Importa. However, in most cases, unless you desperately want the exact look and feel, it's easier to write your own template, because Wikipedia templates get enormously complex.
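If there are several templates to fetch, a small script that posts the same fields as the Special:Export form might save some clicking. A rough, untested sketch (the template name is just an example):

import requests

# Rough sketch: post the same fields the Special:Export form submits to get
# an XML dump of a page plus everything it transcludes.
resp = requests.post(
    "https://en.wikipedia.org/wiki/Special:Export",
    data={
        "pages": "Template:Done",  # one page title per line; just an example
        "templates": "1",          # the "Include templates" checkbox
        "curonly": "1",            # current revisions only
    },
    timeout=30,
)
resp.raise_for_status()

with open("template-export.xml", "wb") as f:
    f.write(resp.content)

The resulting XML can then be imported via the Speciale:Importa page mentioned above.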
The Jenkins API documentation is pretty well written and clear on what you can get.
So to get a list of all artifacts you could request http://jenkins/job/myjob/../api/json?tree=artifacts[*].
What I was not able to find is a way to filter this list of artifacts, like e.g. http://jenkins/job/myjob/../api/json?tree=artifacts[relativePath="/dist/**/theme.min.css"].
Is there a way to filter artifacts like this?
I would like to use the tree parameter instead of xml with xpath and exclude, to avoid the memory spike on the server while building the DOM (as mentioned in the docs).
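(For comparison, filtering client-side after a slim tree query would look roughly like the sketch below; the job URL and glob pattern are placeholders. But I'm hoping the API can do this filtering directly.)

from fnmatch import fnmatch

import requests

# Fetch only the artifact paths via the tree parameter (keeps the response
# small) and filter them on the client. Note fnmatch's "*" also matches
# across "/" separators.
url = "http://jenkins/job/myjob/lastSuccessfulBuild/api/json"
resp = requests.get(url, params={"tree": "artifacts[relativePath]"}, timeout=30)
resp.raise_for_status()

matches = [
    a["relativePath"]
    for a in resp.json().get("artifacts", [])
    if fnmatch(a["relativePath"], "dist/*/theme.min.css")
]
print(matches)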
I am attempting to upload a file via curl in a way that basically imitates how a user would upload a file to https://lutzroeder.github.io/netron/
I can see there is a:
<input type="file" id="open-file-dialog" style="display:none" multiple="false" accept=".onnx, .pb, .meta, .tflite, .lite, .tfl, .bin, .keras, .h5, .hd5, .hdf5, .json, .model, .mar, .params, .param, .armnn, .mnn, .ncnn, .dnn, .cmf, .mlmodel, .caffemodel, .pbtxt, .prototxt, .pkl, .pt, .pth, .t7, .joblib, .cfg, .xml">
But the input does not belong to any form, which I haven't seen before. When I try doing a traditional POST like:
curl -X POST -F 'data=@example.h5' https://lutzroeder.github.io/netron/
It is not permitted. How should I approach uploading a file to that input programmatically? I am trying to automate the creation of these Netron figures, as having to manually select e.g. 100 files to get 100 figures would be very cumbersome
Thanks!
Judging by your comment and others', the HTML approach is probably (1) not feasible and (2) not going to completely solve your goal of automating the creation of figures anyway (filling in the input is only the first step; you still need to automate the export process, right?).
Therefore I suggest that the easiest solution is to run your own instance of the Netron viewer. Netron is an open-source project, and there are many ways to run it on your own computer, as described in its documentation.
The approach you are looking at utilises the browser version hosted on github.io. The documentation gives all sorts of other ways to run the viewer (macOS/Linux/Windows/Python server); pick the one most suitable for your situation (depending on your OS and programming experience) and then write a wrapper script (or hack the initialisation process, since you have the source code) to feed the viewer with files and collect outputs.
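For example, with the Python package (pip install netron), a loop like the one below could serve each model in turn. This is a rough sketch: the start() call is taken from the Netron README and its exact signature may vary between versions, the glob pattern is just an example, and capturing the actual figure (e.g. a headless-browser screenshot of the served page) would still need to be automated separately.

import glob

import netron  # pip install netron

for model_path in sorted(glob.glob("models/*.h5")):  # example location
    netron.start(model_path)  # serves the model and opens a browser tab
    input(f"Viewing {model_path}; press Enter for the next model")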
I'm using Cypress to run a suite of automated tests.
The current version of Cypress ships mocha-junit-reporter out of the box and lets you pass configuration options to the reporter; one of those options is 'mochaFile'.
I'm using the recommended [hash] tag to output reports across multiple spec files.
This results in a flat mess of files that look like 'results/test-output-abc12345.xml'.
What I want instead is for the test file's relative path and filename to be passed in as the reporter's output file path.
This would give me a structured, feature-first view of the output, and in Azure DevOps, which aggregates the test output, it would give me correct filenames for detecting intermittently failing tests.
Things I've tried that haven't worked:
I've tried to use hooks to modify Cypress's config or set environment variables to try to override the reporterOptions/mochaFile per test at (hopefully) the right time.
I've tried to grab the default-named output XML file and copy it to the correct path+filename given the Cypress.spec.name context, but I can't seem to find the right hook or time to do this.
after and afterEach don't work; I don't think the reporter has written the file yet at that point.
Using a plugin and hooking into some event like test:before:run or test:after:run seems promising, but I'm flying blind since I can't debug into it, so I've been unsuccessful in modifying the reporter's output path or copying the file.
I'd love it if someone could show a working example using mocha-junit-reporter, or even a different Mocha-compatible reporter, provided it plays well with Azure DevOps and can help me discover intermittently failing tests.
I am creating a customized wiki markup parser/interpreter. There is one big task, however, with regard to interpreting functions like these:
{{convert|500|ft|m|0}}
which is converted like so:
500 feet (152 m)
I'd like to avoid having to manually code interpretations of these functions, and would rather employ a method where I can query with a string:
akiva@akiva-ThinkPad-X230:~$ wiki-to-text "{{convert|3|to(-)|6|ft|abbr=on}}"
and get a return of:
"3 to 6 ft (0.91–1.83 m)"
Is there a tool to do this? Offline is by far the most ideal solution, but I could live with having to query a server.
You could query the MediaWiki API to get parsed text from wikitext. E.g. to parse the template Template:Done from the English Wikipedia you could use: https://en.wikipedia.org/w/api.php?action=parse&text={{Template:done}}&title=Test (see the online docs for parse). You would, however, need a MediaWiki instance that provides the template you want to parse and that works in exactly the same way. If you install a webserver locally, you can install your own MediaWiki instance and parse wikitext locally, too.
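In script form that request could look roughly like this (an untested sketch using the English Wikipedia endpoint and the convert call from the question):

import requests

# Rough sketch: expand a template call through the parse API. Point the URL
# at your own wiki's api.php to do this against a local instance instead.
resp = requests.get(
    "https://en.wikipedia.org/w/api.php",
    params={
        "action": "parse",
        "format": "json",
        "title": "Test",
        "text": "{{convert|3|to(-)|6|ft|abbr=on}}",
    },
    timeout=30,
)
resp.raise_for_status()

# The expanded result comes back as an HTML fragment, not plain text.
print(resp.json()["parse"]["text"]["*"])

Note that the API returns rendered HTML, so producing the plain "3 to 6 ft (0.91–1.83 m)" string would still require stripping the markup.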
Btw.: there's also the Parsoid project, which implements a Node-based wikitext->HTML->wikitext parser. However, IIRC it still needs to query the API of the wiki to parse templates.