Discrepancy between representation of UCUM codes in FHIR vs. OMOP terminology

I’m seeing some differences between the UCUM codes I’m getting in FHIR messages from Synthea and the codes in the OMOP UCUM vocabulary. Most of these seem to involve punctuation, for lack of a better word (characters like _, {, }, [, ], /, etc.). Examples include (concept_id / FHIR / OMOP):
8876, mmHg, mm[Hg]
44777566, {score}, [score]
9117, mL/min/{1.73_m2}, mL/min/1.73.m2
These codes were specifically found in this resource:
https://syntheticmass.mitre.org/v1/fhir/Patient/6f7acde5-db81-4361-82cf-886893a3280c/$everything
I’m using this version of the OMOP vocabulary:
UCUM
Unified Code for Units of Measure (Regenstrief Institute)
http://aurora.regenstrief.org/~ucum/ucum.html#section-Alphabetic-Index
Version 1.8.2
44819107

If you're seeing mmHg as a UCUM code in FHIR Synthea data, that's an error; the valid UCUM form is mm[Hg]. Similarly, "{score}" is correct, while "[score]" is not. Both mL/min/1.73.m2 and mL/min/{1.73_m2} are valid units, though they're not equivalent.
I would recommend reporting the issues to Synthea.
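In the meantime, if you need to work with this output as-is, a small normalization table is one pragmatic workaround. Here is a minimal sketch in Python, seeded only with the discrepancies listed above (the dictionary and function names are illustrative, not part of any library):

# Map nonstandard unit strings (from either source) onto valid UCUM codes.
# Entries come from the examples above; extend the table as you find more.
UNIT_FIXES = {
    "mmHg": "mm[Hg]",        # concept_id 8876: UCUM requires the brackets
    "[score]": "{score}",    # concept_id 44777566: braces (annotation), not brackets
}

def normalize_unit(unit):
    return UNIT_FIXES.get(unit, unit)   # pass through anything already valid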

Related

Use of Parentheses in HTML with "href=tel:"

Surprised I can't find a definitive answer about this anywhere online: I am setting up a translated HTML page in French with a different contact number that begins with "+33 (0)". Since I can't personally test it on this number, a canonical question: can I get away with an anchor tag that begins <a href="tel:+33(0)...", i.e. has a number containing parentheses with the remaining digits following, and have the link work?
Good question. Finding a clear, authoritative answer regarding href="tel:" specifically is difficult.
RFC 3986 (Section 2.2) defines parentheses as reserved "sub-delims". This means that they may have special meaning when used in certain parts of a URL. The RFC says:
URI producing applications should percent-encode data octets that
correspond to characters in the reserved set unless these characters
are specifically allowed by the URI scheme to represent data in that
component. If a reserved character is found in a URI component and
no delimiting role is known for that character, then it must be
interpreted as representing the data octet corresponding to that
character's encoding in US-ASCII.
(Emphasis mine)
Basically, you can use any character in the US-ASCII character set in a URL. But in some situations parentheses are reserved for specific uses, and in those cases they should be percent-encoded; otherwise they can be left as is.
So, yes, you can use parentheses in href="tel:" links, and they should work across all browsers. But as with any web standard in the real world, support relies on each browser correctly implementing that standard.
However, regarding your example (<a href="tel:+33(0)...), I would steer clear of the format you have given, that is:
[country code]([substituted leading 0 for domestic callers])[area code][phone number]
While I was unable to find a definitive guide to how browsers handle such cases, I think you will find, as @DigitalJedi has pointed out, that some (perhaps all?) browsers will strip the parentheses and keep the number contained therein, ultimately resulting in an incorrect number, e.g.
+33 (0) 123 456 7890
...which may result in a call to +3301234567890.
Will this still work? Maybe? We're getting into phone number routing territory now.
Some browsers/devices may be smart enough to figure out what is intended and adapt accordingly, but I would play it safe and instead simply use:
[country code][area code][phone number], e.g.
+33 123 456 7890
or
(0) 123 456 7890
There is no downside (that I know of) to having your local users dial the international country code; it will result in the same call as if they had omitted it and substituted the leading zero.
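To make that concrete, here is a minimal sketch using the example number from above: keep the href limited to internationally dialable digits, and put the human-friendly form (with the "(0)" convention) in the visible link text only.

<a href="tel:+331234567890">+33 (0) 123 456 7890</a>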
As a side note, according to section 7.2 of the ITU's (International Telecommunication Union) E.123 document,
The ( ) should not be used in an international number.
This recommendation concerns how phone numbers are written, but it is of some relevance to the text used when creating an href="tel:" link, and it is the reason for the two alternative examples I have provided above.
(Credit to @NiKiZe for this semi-related info.)
Finally, here is some semi-related, useful information regarding browser treatment of telephone links: https://css-tricks.com/the-current-state-of-telephone-links/
The () in the href attribute itself should not be a problem (href-wise):
http://example.com/test(1).html
HOWEVER, if I do href="tel:+000 (1) 000 000", then after clicking the link on a phone, the dialer shows +0001000000. This is tested and confirmed on an Android device.
This means that the parentheses are removed, as well as the spaces.
But this could still vary across different phone OSes.
P.S.: If you think the + in your number is an issue: I tested this too, and the + does not have any unexpected behavior.
MORE: href="tel:" and mobile numbers
According to ITU-T E.123, Section 7.2 ("Use of parentheses"):
The ( ) should not be used in an international number.
So (0) should not be included at all.
I read somewhere that the correct format should be:
+33.1 23 45 67 89
but I can't find the source.

Status of in-place `rfft` and `irfft` in Julia

So I'm doing some hobby-related stuff which involves taking Fourier transforms of large real arrays that barely fit in memory, and I was curious to see whether there is an in-place version of rfft and irfft that saves RAM, since RAM consumption is important to me. These transforms are possible despite the input-vs-output type mismatch; they require an extra row of padding.
In Implement in-place rfft! and irfft!, Tim Holy said he was working on an in-place rfft! and irfft! that made use of a buffer-containing RCpair object, but then Steven Johnson said that he was implementing something equivalent using A_mul_B!(y, plan, x), which he elaborated on here.
Things get a little weird from then on. In the documentation for both 0.3.0 and 0.4.0 there is no mention of A_mul_B!, although A_mul_B is listed. But when I try entering them into Julia, I get
julia> A_mul_B!
A_mul_B! (generic function with 28 methods)

julia> A_mul_B
ERROR: A_mul_B not defined
which suggests that the situation is actually the opposite of what the documentation currently describes.
So since A_mul_B! seems to exist, but isn't documented anywhere, I tried to guess how to test it in-place as follows:
A = rand(Float32, 10, 10);
p = plan_rfft(A);
A_mul_B!(A,p,A)
which resulted in
ERROR: `A_mul_B!` has no method matching A_mul_B!(::Array{Float32,2}, ::Function, ::Array{Float32,2})
So...
Are in-place real FFTs still a work in progress? Or am I using A_mul_B! wrong?
Is there a mismatch between the 0.3.0 documentation and 0.3.0's function library?
That pull request from Steven Johnson is listed as open, not merged; that means the work hasn't been finished yet. The one from me is closed, but if you want the code you can grab it by clicking on the commits.
The docs indeed omit mention of A_mul_B!. A_mul_B isn't exported independently because it's simply equivalent to A*B. A_mul_B! would be used like this: instead of C = A*B, you would say A_mul_B!(C, A, B).
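Putting that together for the rfft case, here is a hedged sketch assuming the Julia 0.4-era API, where plan_rfft returns a plan object (in 0.3 it returns a plain function, which is likely why the error above shows ::Function). The output must be a separate preallocated complex array, so this avoids per-transform allocation but is not the truly in-place rfft! that is still in progress:

A = rand(Float32, 10, 10)
p = plan_rfft(A)                      # a plan object in 0.4; a Function in 0.3
out = Array(Complex{Float32}, 6, 10)  # rfft output has div(10,2)+1 = 6 rows
A_mul_B!(out, p, A)                   # applies the plan, writing into out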
Can you please edit the docs to fix these issues? (You can edit files here in your webbrowser.)

How is the tilde escaping in the JSON Patch RFC supposed to operate?

Referencing https://www.rfc-editor.org/rfc/rfc6902#appendix-A.14:
A.14. ~ Escape Ordering
An example target JSON document:
{
  "/": 9,
  "~1": 10
}
A JSON Patch document:
[
  {"op": "test", "path": "/~01", "value": 10}
]
The resulting JSON document:
{
  "/": 9,
  "~1": 10
}
I'm writing an implementation of this RFC, and I'm stuck on this. What is this trying to achieve, and how is it supposed to work?
Assuming the answer to the first part is "allowing JSON key names containing / to be referenced", how would you do that?
The ~ character has special meaning in JSON Pointer, hence we need to "encode" it as ~0. To quote jsonpatch.com:
If you need to refer to a key with ~ or / in its name, you must escape the characters with ~0 and ~1 respectively. For example, to get "baz" from { "foo/bar~": "baz" } you’d use the pointer /foo~1bar~0
So essentially,
[
  {"op": "test", "path": "/~01", "value": 10}
]
when decoded yields
[
  {"op": "test", "path": "/~1", "value": 10}
]
~0 expands to ~ so /~01 expands to /~1
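In code, the ordering constraint looks like this; a minimal sketch in Python (the function name is illustrative):

def unescape_pointer_token(token):
    # Per RFC 6901, '~1' must be replaced before '~0'; otherwise the
    # input '~01' would first become '~1' and then, wrongly, '/'.
    return token.replace('~1', '/').replace('~0', '~')

assert unescape_pointer_token('~01') == '~1'   # matches the "~1" key, not "/"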
I guess they mean that you shouldn't "double expand": the expanded /~1 should not be expanded again to // and thus must not match the document's "/" key (which is what would happen if you double expanded). Neither should you expand literals in the source document, so the "~1" key is literally that and not equivalent to the expanded "/". But I repeat, that's my guess about the intention of this example; the real intention may be different.
The example is indeed really bad, in particular since it uses a "test" operation and doesn't specify the result of that operation. Other examples, like the next one at A.15, at least say the test operation must fail; A.14 doesn't tell you whether the operation should succeed or not. I assume they meant the operation to succeed, which implies /~01 should match the "~1" key. That's probably all there is to that example.
If I were to write an implementation, I'd probably not worry too much about this example and just look at what other implementations do, to check that I'm compatible with them. It's also a good idea to look for test suites of other projects; for example, I found one from http://jsonpatch.com/ at https://github.com/json-patch/json-patch-tests
I think the example provided in the RFC isn't exactly well thought out, especially as it tries to document a feature only through an example, which is vague at best, without providing any kind of commentary.
You might be interested in the interpretation presented in the following documents:
Documentation of Rackspace API
Documentation of OpenStack API
These seem awfully similar, and I think that's due to the nature of the relationship between Rackspace and OpenStack:
OpenStack began in 2010 as a joint project of Rackspace Hosting and NASA (...)
They actually provide some useful details, including the grammar accepted and the rationale behind introducing these tokens, unlike the RFC itself.
Edit: it seems that JSON Pointers have a separate RFC, RFC 6901, and the OpenStack and Rackspace specifications above are consistent with it.

Iterating over a string in Vimscript or Parse a JSON file

So I'm creating a vim script that needs to load and parse a JSON file into a local object graph. I searched and couldn't find any native way to process a JSON file, and I don't want to add any dependencies to the script. So I wrote my own function to parse the JSON string (taken from the file), but it's really slow. At the moment, I iterate through each character in the file like so:
let len = strlen(jsonString) - 1
let i = 0
while i < len
  let c = strpart(jsonString, i, 1)
  let i += 1
  " A lot of code to process the file...
  " Note: I've tried shortcutting the process by searching for the closing
  " double quote when I come across an opening one (taking the escape
  " character '\' into account). It doesn't help.
endwhile
I've also tried this method:
for c in split(jsonString, '\zs')
  " Do a lot of parsing...
endfor
For reference, a file with ~29,000 characters takes about 4 seconds to process, which is unacceptable.
Is there a better way to iterate over a string in vim script?
Or better yet, have I missed a native function to parse JSON?
Update:
I asked for no dependencies because I:
Didn't want to deal with them.
Genuinely wanted some ideas for the best way to do this without relying on someone else's work.
Sometimes I just like to do things manually even though the problem has already been solved.
I'm not against plugins or dependencies at all, it's just that I'm curious. Thus the question.
I ended up creating my own function to parse the JSON file. I was creating a script that could parse the package.json file associated with node.js modules. Because of this, I could rely on a fairly consistent format and quit processing whenever I'd retrieved the information I needed. This usually cut out large chunks of the file, since most developers put its largest section, the "readme", at the end. Because the package.json file is strictly defined, I left the process somewhat fragile: it assumes a root dictionary { } and actively looks for certain entries. You can find the script here: https://github.com/ahayman/vim-nodejs-complete/blob/master/after/ftplugin/javascript.vim#L33.
Of course, this doesn't answer my own question. It's only the solution to my unique problem. I'll wait a few days for new answers and pick the best one before the bounty ends (already set an alarm on my phone).
The simplest solution with the least dependencies is just using the json_decode vim function.
let dict = json_decode(jsonString)
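For example, to load and query a file (a minimal sketch; json_decode() and readfile() are built-in functions, with json_decode() available since Vim 8.0):

" Read the file into one string, then decode it into a Vim dictionary.
let jsonString = join(readfile('package.json'), "\n")
let dict = json_decode(jsonString)
echo get(dict, 'name', '(no name entry)')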
Even though Vim's origins date back a long way, its internal string()/eval() representation happens to be so close to JSON that it's likely to work unless you need special characters.
You can look up the implementation here, which even supports true/false/null if you want:
https://github.com/MarcWeber/vim-addon-json-encoding
Better to use that library (vim-addon-manager makes installing dependencies easy).
Whether this is good enough depends on your data.
Benjamin Klein posted your question to vim_use, which is why I'm replying.
The best and fastest replies happen if you subscribe to the Vim mailing list.
Go to vim.sf.net and follow the community link.
You cannot expect the Vim community to scrape Stack Overflow.
I've added the keywords "json" and "parsing" to that little code so that it can be found more easily.
If this solution does not work for you, you can try the many :h if_* bindings, or write an external script which extracts the information you're looking for, or turns the JSON into Vim's dictionary representation (which can be read by eval()), escaping the special characters you care about correctly.
If you seek a completely correct solution, omitting dependencies is one of the worst things you can do. The eval() variant mentioned by @MarcWeber is one of the fastest, but it has its disadvantages:
Using the solution for securing eval() that I mentioned in a comment makes it no longer the fastest. In fact, it makes eval() slower by more than an order of magnitude (0.02s vs 0.53s in my test).
It does not respect surrogate pairs.
It cannot be used to verify that you have correct JSON: it accepts some strings (e.g. "\<C-o>") that are not JSON strings, and it allows trailing commas.
It fails to give normal error messages. It fails badly if you use the vam#VerifyIsJSON I mentioned in point 1.
It fails to load floating-point values like 1e10 (Vim requires numbers to look like 1.0e10, but JSON allows numbers like 1e10: note the "and/or" in the first paragraph).
All of the above statements (except the first) also apply to the vim-addon-json-encoding mentioned by @MarcWeber, because it uses eval(). There are some other possibilities:
Fastest and most correct is using Python: pyeval('json.loads(vim.eval("varname"))'). It is not faster than eval(), but it is the fastest among the other possibilities (0.04s in my test: approximately two times slower than eval()).
Note that I use pyeval() here. If you want a solution for a Vim version that lacks this functionality, it will no longer be one of the fastest.
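For illustration, the Python route looks something like this (a hedged sketch; it assumes Vim built with +python and that jsonString already holds the text to parse):

" Import the modules once in the embedded Python interpreter, then decode.
python import json, vim
let parsed = pyeval('json.loads(vim.eval("jsonString"))')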
Use my json.vim plugin. It has the advantage of slightly better error reporting than a failed vam#VerifyIsJSON (though slightly worse than eval()), and it correctly loads floating-point numbers. It can be used for verification of strings (it does not accept "\<C-a>"), but it loads lists with trailing commas just fine. It does not support surrogate pairs. It is also very slow: in the test I used (a 279702-character string) it takes 11.59s to load. json.vim tries to use Python if possible, though.
For the best error reporting you can take yaml.vim and purge the YAML support out of it, leaving only JSON (I once did the same thing for pyyaml, though in Python: see the markedjson library used in powerline; it is pyyaml minus the YAML stuff, plus classes with marks). But this variant is even slower than json.vim and should only be used if the main thing you need is error reporting: 207 seconds for loading the same 279702-character string.
Note that the only variant mentioned that satisfies both requirements, "no dependencies" and "no python", is eval(). If you are not fine with its disadvantages, you have to throw away one or both of these requirements, or copy-paste code. If you take speed into account, only two candidates are left: eval() and Python. If you want to parse JSON fast you really must use C, and only these solutions spend most of their time in functions written in C.
Most other interpreters (Ruby/Perl/Tcl) do not have a pyeval() equivalent, so they will be slower even if their JSON implementation is written in C. Some others (Lua/Racket (mzscheme)) do have a pyeval() equivalent, but e.g. luaeval('{}') is zero, meaning that you would have to add an additional step, explicitly and recursively converting objects into Vim dictionaries and lists (e.g. luaeval('vim.dict({})')), which would impact performance. I cannot say anything about mzeval(), but I have never heard of anybody actually using Racket (mzscheme) with Vim.

What data format is this?

I was checking one share-trading site's AJAX response, and below is what showed up in the Firebug Response tab of the XHR section. Can anyone explain what format this is and how it is parsed?
<ST=tat>
<SI=0>
<TB=txtSearch>
<560v=Tata Motors Ltdv=TATMOT>
<566v=Tata Steel Ltdv=TATSTE>
<3199v=Ashram Online.com Ltdv=ASHONL>
<4866v=Kreon Finnancial Services Ltdv=KREFIN>
<552v=Tata Chemicals Ltdv=TATCHE>
<554v=Tata Power Company Ltdv=TATPOW>
<2986v=Tata Metaliks Ltdv=TATMET>
<300v=Tata Sponge Iron Ltdv=TATSPO>
<121v=Tata Coffee Ltdv=TATCOF>
<2295v=Tata Communications Ltdv=TATCOM>
<0v=Time In Milli-Secondsv=0>
I think what we are dealing with here is some proprietary format, likely an eldritch SGML horror of some sort.
Banking in general has all sorts of eldritch horrors running about.
On a related note, this is very much not XML.
Edit:
A quick analysis* indicates that this is a format consisting of a series of statements bracketed by <>, with the parts of each statement separated by = or v=. = seems to indicate a parameter to a control statement identified by a two-letter code (<ST=tat>), while v= seems to indicate an assignment or coupling of some kind (short for "value"?), or perhaps just a field separator.
<ST appears to be short for "search term"; <TB appears to be short for "(source) table". The meaning of <SI eludes me. It is possible that <TB terminates the metadata section, but it's equally possible that the metadata section has a fixed number of terms.
As nothing refers to the number of fields in each statement in the data section, and they are all the same length (3 fields), it is likely that the number of fields is fixed, but it might derive from the value of <TB, or even <SI, in some way.
What is abundantly clear, however, is that this data is not intended for consumption by other applications than the one that supplies it.
*Caveat: Without a much larger sample it's impossible to tell if this analysis is valid.
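If you had to consume it anyway, a guess at a tokenizer built on the assumptions above (angle-bracketed statements, v= as the field separator) might look like this minimal Python sketch; treat it as exploratory, not a spec:

def parse_line(line):
    # Strip the enclosing angle brackets.
    body = line.strip().lstrip('<').rstrip('>')
    # Data rows appear to use 'v=' as a field separator; control
    # statements use a plain '='.
    if 'v=' in body:
        return body.split('v=')   # e.g. ['560', 'Tata Motors Ltd', 'TATMOT']
    return body.split('=', 1)     # e.g. ['ST', 'tat']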
It is not a commonly used "web format".
It is probably a proprietary format used by that site and will be parsed by their custom JavaScript.