Is there a way to get the current index or filename with InteractorStyleMPRSlice in vtk.js?

I cloned vue-vtkjs-viewport and am trying to get the index so I can overlay a labeled image while scrolling (dcm/0.dcm <-> label/0.png).
https://kitware.github.io/vtk-js/api/Interaction_Style_InteractorStyleMPRSlice.html
The official Kitware documentation shows a way to get the slice, but the slice value is a strange number such as -510.0123, and the sliceRange is [-758.2000122070312, -236.20001220703125].
I loaded 50 DICOM files and want to get the current index or filename.
Is there any way to get the information?
Thank you.

Both ITK and VTK have a way to transform a physical point to an index, which is probably what you want. The slice value you are seeing is a position along the slicing axis in physical (world) coordinates, not an array index, which is why it falls inside the sliceRange rather than inside [0, 49].
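For a uniformly spaced volume, the conversion is simple arithmetic. A minimal sketch using the numbers from the question (in vtk.js the equivalent lives on vtkImageData as worldToIndex, and in ITK as TransformPhysicalPointToIndex; the helper name here is made up):

```python
# Sketch: convert an MPR slice position (world/physical coordinates) to a
# 0-based slice index, assuming 50 uniformly spaced slices.
def slice_to_index(slice_pos, slice_range, n_slices):
    lo, hi = slice_range
    spacing = (hi - lo) / (n_slices - 1)   # distance between slice centres
    return round((slice_pos - lo) / spacing)

slice_range = (-758.2000122070312, -236.20001220703125)
index = slice_to_index(-510.0123, slice_range, 50)  # maps into 0..49
```

With the index in hand, the matching label file would be label/{index}.png, assuming the files are sorted the same way the volume was assembled.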

Related

Get number of the current step

For troubleshooting purposes, I would like to obtain the URL to the current step of GitHub Actions logs.
The URL seems fairly easy to calculate:
url="https://github.com/$GITHUB_REPOSITORY/runs/$GITHUB_RUN_ID?check_suite_focus=true#step:$step_number:1"
What's missing is getting the number of the current step - I don't see it listed on https://docs.github.com/en/actions/learn-github-actions/contexts or https://docs.github.com/en/actions/learn-github-actions/environment-variables. Hard-coding the number is not ideal as adding/removing steps before this one will result in wrong/misleading URLs.
Is there perhaps some way to get the current step number that I've overlooked?
Alternatively, the step can have an id. However, it doesn't seem like there's a way to link to a step's log section by its id, is there?
Is there perhaps some way to get the current step number that I've overlooked?
Here is one (very) ugly way:
Give all steps an id. This causes them to be added to the steps object.
Obtain the length of the steps object with jq:
step_nr=$(echo '${{ toJson(steps) }}' | jq length)
Add 2 to get the 1-based step number (one to convert the 0-based count of preceding steps to the 1-based numbering used by the URL hash parser, and one for the "Set up job" step that runs automatically).
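The counting logic can be sketched outside a workflow (the step ids below are made up; in a real job the JSON comes from `${{ toJson(steps) }}`):

```python
import json

# Pretend output of `toJson(steps)`: every preceding step that was given
# an id shows up as a key in the steps context.
steps_json = '{"checkout": {"outcome": "success"}, "build": {"outcome": "success"}}'

preceding = len(json.loads(steps_json))  # what `jq length` computes
step_nr = preceding + 2                  # +1 for 1-based numbering, +1 for "Set up job"
```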
Alternatively, the step can have an id. However, it doesn't seem like there's a way to link to a step's log section by its id, is there?
Looking at the JS code which handles the hash part of the URL, there is:
const e = window.location.hash.match(/^#step:(\d+):(\d+)$/) || [];
So apparently "no", at least not via the same mechanism that addresses a step by number.

How to train Tesseract and how to recognize multiple columns

I have the task of converting a PDF with images into a txt or csv file to store in a database. I am trying to use OCR on images like the one attached.
The results are as poor as the following:
`20—0
¿ ABÚEADD LDIDI ALBARH, JDSE
AHTÚHIÚ
—- EnlúndeLarreájzegm25- Sºt] . . . . . 944 355019
: ABDGADD 5E'I'IEH ÁLUAREI 5EUERIHD`
Of special importance is the phone number (944 355019); it looks close to correct, but it still has wrong digits, which makes the whole result useless.
After much reading I still do not know how to train Tesseract. I am following these instructions among others, which leaves me with doubts such as:
It talks about getting a sample of the fonts to train on. I only have an image, so how do I get the exact font to somehow generate the training data?
More often than not I get the text moved from where you would expect to find it. I read that this is because Tesseract does OCR on a per-column basis (and then I read that it does not, so I am confused). So which is it, and how do I make it write the text out horizontally?
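Independently of retraining, a field with a known shape can often be salvaged from noisy output with a digit-oriented regular expression. A minimal sketch, assuming phone numbers follow the 9-digit "944 355019" shape seen in the sample:

```python
import re

def find_phone(text):
    # nine digits, optionally split into groups of three by spaces
    m = re.search(r"\b\d{3}[ ]?\d{3}[ ]?\d{3}\b", text)
    return m.group(0) if m else None

# Works even on a heavily garbled OCR line:
find_phone("—- EnlúndeLarreájzegm25- Sºt] . . . . . 944 355019")  # → '944 355019'
```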

Scrape html Twitter followers using R

I have a continuous task that I think can be automated using R.
Using the twitteR package I have extracted a list of tweets. Those have been categorized into positive (and neutral) and negative tweets. This has been a manual task, but I am looking into doing some machine learning on it.
My problem is the reach part. I want to know not only the number of positive and negative tweets but also the number of people who have potentially been exposed to each tweet.
There is a way to do this using the twitteR package, but it is slow, as it requires the machine to sleep between each and every search. With thousands of tweets this is not practical for me.
My thought was therefore to extract the number of followers from the HTML source code of Twitter, using webpage <- getURL("http://www.twitter.com/AngelHaze") and pulling the follower count out of the result.
On top of this, I want to be able to do it for a vector of URLs ("http://www.twitter.com/AngelHaze") and combine the results into a dataframe with the screen name (AngelHaze) and the number of followers. I am from Denmark, so the source code containing the number of followers looks like this:
<a class="ProfileNav-stat ProfileNav-stat--link u-borderUserColor u-textCenter js-tooltip js-nav u-textUserColor" title="196.262 følgere" data-nav="followers" href="/AngelHaze/followers">
Where "196.262 følgere" is the relevant part.
Is this possible? And if yes, can anyone help me get going?
Best, Sander Ehmsen.
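The extraction itself is a one-line pattern match. A sketch in Python purely to illustrate the parsing (an R workflow would do the same with regmatches/gsub); the attribute layout is taken from the snippet in the question, and the Danish "." thousands separator is stripped:

```python
import re

# Sketch: pull the follower count out of a Twitter profile page, assuming
# the markup still contains title="... følgere" as in the question.
def followers_from_html(html):
    m = re.search(r'title="([\d.,]+)\s+f', html)
    if m is None:
        return None
    # Danish locale uses "." as the thousands separator.
    return int(m.group(1).replace(".", "").replace(",", ""))

snippet = ('<a class="ProfileNav-stat" title="196.262 følgere" '
           'data-nav="followers" href="/AngelHaze/followers">')
count = followers_from_html(snippet)  # 196262
```

Mapping this over a vector of URLs and collecting (screen name, count) pairs into a dataframe is then straightforward.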

Latitude/Longitude Generation to be used as sample data

I am writing a demo web application that tracks multiple devices through my company's platform. I have the app working, but I need a CSV file that simulates devices moving on a map, as if each were a tracker attached to a car. The simulator works by reading one row of data every second (one lat/lng point per device). Here is an example of the first few lines of a file that would work, if the points weren't scattered across the US (the SclId is the device name).
SclId Latitude Longitude
HAT-0 44.968046 -94.420307
HAT-1 44.33328 -89.132008
HAT-2 33.755787 -116.359998
HAT-3 33.844843 -116.54911
HAT-4 44.92057 -93.44786
HAT-5 44.240309 -91.493619
HAT-0 44.968041 -94.419696
HAT-1 44.333304 -89.132027
HAT-2 33.755783 -116.360066
HAT-3 33.844847 -116.549069
HAT-4 44.920474 -93.447851
HAT-5 44.240304 -91.493768
If I had something that could create simulation data with mouse clicks, it would save me a lot of time compared with writing another program and driving around with a device to record the data to a CSV. Any help/recommendations would be greatly appreciated. If you don't understand the question, please ask for clarification!
There is a website that I use to draw waypoints and download them in just about any format, e.g. GPX, KML, JSON, CSV, etc.:
https://www.alltrails.com/explore/map/map-e727fa5--12
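Hand-drawn waypoints can also be expanded into per-second rows programmatically. A minimal sketch (the device names, waypoints, and five-steps-per-leg resolution are all made up) that linearly interpolates between waypoints and writes the same space-separated layout as the sample above:

```python
import csv
import io

# Sketch: turn a handful of hand-picked waypoints into one row per device
# per second by linear interpolation between consecutive waypoints.
def interpolate(waypoints, steps_per_leg):
    points = []
    for (lat1, lng1), (lat2, lng2) in zip(waypoints, waypoints[1:]):
        for i in range(steps_per_leg):
            t = i / steps_per_leg
            points.append((lat1 + t * (lat2 - lat1), lng1 + t * (lng2 - lng1)))
    points.append(waypoints[-1])
    return points

routes = {
    "HAT-0": [(44.968046, -94.420307), (44.968041, -94.419696)],
    "HAT-1": [(44.333280, -89.132008), (44.333304, -89.132027)],
}
tracks = {name: interpolate(wps, 5) for name, wps in routes.items()}

buf = io.StringIO()
writer = csv.writer(buf, delimiter=" ")
writer.writerow(["SclId", "Latitude", "Longitude"])
# one row per device per simulated second, matching the file layout above
for second in range(6):
    for name, track in tracks.items():
        lat, lng = track[second]
        writer.writerow([name, f"{lat:.6f}", f"{lng:.6f}"])
```

Writing buf.getvalue() to a file then gives a CSV the simulator can replay one row per device per second.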

Scrape hyperlinks from an html page

I am trying to extract the latitudes and longitudes for the places listed on the right side of this page. I want to create a table like the following:
Place Latitude Longitude
Agarda 23.12604 87.19869
Ahanda 23.13099 87.18501
.....
.....
West-Sanabandh 23.24876 86.99941
Is it possible to do this in R without calling up the individual hyperlinks for "Agarda", "Ahanda", etc. one at a time?
The data appears on different pages, so you can't get it without requesting each page.
If R supports threads, you could fetch them in parallel rather than one at a time.
It's possible to use RCurl to scrape each page in some type of loop or sapply. If you combine it with some regex and/or readHTMLTable (to identify the hyperlinks) then it's a relatively straightforward function.
Within RCurl, it's possible to create a multicurl which will do this in parallel, although given the number of queries involved, it might be just as easy to serialise it and put a small system sleep between queries.
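The serial variant with a small sleep can be sketched as a short loop (Python here purely to show the shape; the coordinate markup, the fetch callback, and the one-second delay are all assumptions):

```python
import re
import time

# The regex assumes each place page states its coordinates as
# "Latitude: 23.12604" / "Longitude: 87.19869" (made-up markup).
COORD_RE = re.compile(r"Latitude:\s*([-\d.]+).*?Longitude:\s*([-\d.]+)", re.S)

def parse_coords(html):
    m = COORD_RE.search(html)
    return (float(m.group(1)), float(m.group(2))) if m else None

def scrape(urls, fetch, delay=1.0):
    rows = []
    for url in urls:
        coords = parse_coords(fetch(url))  # fetch() would wrap RCurl/urllib
        if coords:
            rows.append((url, *coords))
        time.sleep(delay)  # small pause between queries, as suggested above
    return rows
```

Injecting the fetch function keeps the parsing testable without touching the network, and swapping it for a multicurl-style parallel fetcher changes nothing else.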