How to use the Wikidata API to access statements (JSON)

I'm trying to get information from Wikidata. For example, to access "cobalt-70" I use the API:
import requests

API_ENDPOINT = "https://www.wikidata.org/w/api.php"
query = "cobalt-70"
params = {
    'action': 'wbsearchentities',
    'format': 'json',
    'language': 'en',
    'search': query
}
r = requests.get(API_ENDPOINT, params=params)
print(r.json())
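For reference, the matched entity's id can be pulled straight out of the search response (a minimal sketch; the 'search' and 'id' fields follow the wbsearchentities JSON layout):
# Each match in the response carries an entity id such as 'Q18844865'
entity_id = r.json()['search'][0]['id']
print(entity_id)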
So there is a "claims" key which gives access to the statements. Is there a good way to check whether a value exists in a statement? For example, "cobalt-70" has the value 0.5 inside the property P2114. So how can I check whether a value exists in the statements of an entity, as in this example?
Is there an approach to access it? Thank you!

I'm not sure this is exactly what you are looking for, but if it's close enough, you can probably modify it as necessary:
import requests
import json

url = 'https://www.wikidata.org/wiki/Special:EntityData/Q18844865.json'
req = requests.get(url)
j_dat = req.json()  # parse the response body as JSON
targets = j_dat['entities']['Q18844865']['claims']['P2114']
for target in targets:
    values = target['mainsnak']['datavalue']['value'].items()
    for value in values:
        print(value[0], value[1])
Output:
amount +0.5
unit http://www.wikidata.org/entity/Q11574
upperBound +0.6799999999999999
lowerBound +0.32
amount +108.0
unit http://www.wikidata.org/entity/Q723733
upperBound +115.0
lowerBound +101.0
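If all you need is a yes/no answer for the 0.5 value rather than a dump of every field, here is a minimal sketch along the same lines (note that Wikidata stores amounts as signed strings such as '+0.5'):
# True if any P2114 statement of the entity carries the amount +0.5
has_value = any(
    t['mainsnak']['datavalue']['value'].get('amount') == '+0.5'
    for t in j_dat['entities']['Q18844865']['claims']['P2114']
)
print(has_value)  # True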
EDIT:
To find a property id by value, try:
targets = j_dat['entities']['Q18844865']['claims'].items()
for target in targets:
    line = target[1][0]['mainsnak']['datavalue']['value']
    if isinstance(line, dict):
        for v in line.values():
            if v == "+0.5":
                print('property: ', target[0])
Output:
property: P2114

I tried a solution that consists of searching inside the JSON object, as in the solution proposed here: https://stackoverflow.com/a/55549654/8374738. I hope it can help. Here is the idea.
import pprint

def search(d, search_pattern, prev_datapoint_path=''):
    output = []
    current_datapoint = d
    current_datapoint_path = prev_datapoint_path
    if type(current_datapoint) is dict:
        for dkey in current_datapoint:
            if search_pattern in str(dkey):
                c = current_datapoint_path
                c += "['" + dkey + "']"
                output.append(c)
            c = current_datapoint_path
            c += "['" + dkey + "']"
            for i in search(current_datapoint[dkey], search_pattern, c):
                output.append(i)
    elif type(current_datapoint) is list:
        for i in range(0, len(current_datapoint)):
            if search_pattern in str(i):
                c = current_datapoint_path
                c += "[" + str(i) + "]"
                output.append(c)  # append the path, not the bare index
            c = current_datapoint_path
            c += "[" + str(i) + "]"
            for i in search(current_datapoint[i], search_pattern, c):
                output.append(i)
    elif search_pattern in str(current_datapoint):
        c = current_datapoint_path
        output.append(c)
    output = filter(None, output)
    return list(output)
And you just need to use (res here being a response whose JSON body has the claims at its top level):
pprint.pprint(search(res.json(), '0.5', 'res.json()'))
Output:
["res.json()['claims']['P2114'][0]['mainsnak']['datavalue']['value']['amount']"]


Pandas DataFrame KeyError: 1

I am a beginner, so I can't figure out the reason for the error in the following code, when train.jsonl uses a format like this:
{"claim": "But he said if people really want to know if they have CHIP they can get a blood test that costs a few MONEYc1", "evidence": "sentenceID100037", "label": "0"}
{"claim": "This is rather a courtly formulation and would doubtless trigger further eyerolling if uttered in", "evidence": "sentenceID100038", "label": "0"}
The top part executes without problem and displays the data.
import pandas as pd
prefix = '/content/'
train_df = pd.read_json(prefix + 'train.jsonl', orient='records', lines=True)
train_df.head()
[See my Colab Notebook](https://colab.research.google.com/gist/lenyabloko/0e17ebe0f3a0e808779bc1fa95e9b24d/semeval2020-delex.ipynb)
I even tried this additional trick, which relates to the comments about the 0 column:
prefix = '/content/'
train_df = pd.read_json(prefix + 'train_delex.jsonl', orient='columns')
train_df.to_csv(prefix+'train.tsv', sep='\t', index=False, header=False)
train_df = pd.read_csv(prefix + 'train.tsv', header=None)
train_df.head()
Now I see a column labeled '0' instead of the original three columns {"claim": "...", "evidence": "...", "label": "..."} from the above JSONL file (why is that?).
But when I add the DataFrame code, it results in an error:
train_df = pd.DataFrame({
    'id': train_df[1],
    'text': train_df[0],
    'labels': train_df[2]
})
In light of the column named "0", this wouldn't work. But where did that column come from?
KeyError Traceback (most recent call last)
2 frames
<ipython-input-16-0537eda6b397> in <module>()
6
7 train_df = pd.DataFrame({
----> 8 'id': train_df[1],
9 'text': train_df[0],
10 'labels':train_df[2]
/usr/local/lib/python3.6/dist-packages/pandas/core/frame.py in __getitem__(self, key)
2993 if self.columns.nlevels > 1:
2994 return self._getitem_multilevel(key)
-> 2995 indexer = self.columns.get_loc(key)
2996 if is_integer(indexer):
2997 indexer = [indexer]
/usr/local/lib/python3.6/dist-packages/pandas/core/indexes/base.py in get_loc(self, key, method, tolerance)
2897 return self._engine.get_loc(key)
2898 except KeyError:
-> 2899 return self._engine.get_loc(self._maybe_cast_indexer(key))
2900 indexer = self.get_indexer([key], method=method, tolerance=tolerance)
2901 if indexer.ndim > 1 or indexer.size > 1:
Here is the solution that worked for me (reading the JSONL with lines=True and renaming the columns, rather than round-tripping through a headerless CSV):
import pandas as pd

prefix = '/content/'
test_df = pd.read_json(prefix + 'test_delex.jsonl', orient='records', lines=True)
test_df.rename(columns={'claim': 'text', 'evidence': 'id', 'label': 'labels'}, inplace=True)
cols = test_df.columns.tolist()
cols = cols[-1:] + cols[:-1]  # rotate the column order right by one...
cols = cols[-1:] + cols[:-1]  # ...twice, to get the desired order
test_df = test_df[cols]
test_df.to_csv(prefix + 'test.csv', sep=',', index=False, header=False)
test_df.head()
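Applying the same approach to the training file from the question might look like this (a sketch; the column names come from the JSONL shown above):
import pandas as pd

prefix = '/content/'
train_df = pd.read_json(prefix + 'train.jsonl', orient='records', lines=True)
# Rename to the expected names instead of indexing columns by position
train_df = train_df.rename(columns={'claim': 'text', 'evidence': 'id', 'label': 'labels'})
train_df.head()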
I updated my shared Colab Notebook linked in the question above

Can prefix beam search commonly used in speech recognition with CTC be implemented in such a simpler way?

I have been learning about speech recognition recently, and I have learned that the idea of prefix beam search is to merge paths with the same prefix, such as [1,1,_] and [_,1,_] (here, _ indicates the blank mark).
Based on this understanding, I implemented a version of my own, which can be summarized in pseudocode like this:
import numpy as np
from collections import defaultdict

# Pseudocode: assumes a pairwise logsumexp(a, b) helper, ninf = float('-inf'),
# and a norm_prefix() that collapses repeats and blanks, all defined elsewhere.
def prefix_beam_search(y, beam_size, blank):
    seq_len, n_class = y.shape
    logY = np.log(y)
    beam = [([], 0)]
    for t in range(seq_len):
        buff = []
        for prefix, p in beam:
            for i in range(n_class):
                new_prefix = list(prefix) + [i]
                new_p = p + logY[t][i]
                buff.append((new_prefix, new_p))
        # merge the paths with the same prefix
        new_beam = defaultdict(lambda: ninf)
        for prefix, p in buff:
            # 'norm_prefix' simplifies the path, [1,1,_,2] ==> [1,2]
            # However, the trailing 'blank' is retained, [1,1,_] ==> [1,_]
            prefix = norm_prefix(prefix, blank)
            new_beam[prefix] = logsumexp(new_beam[prefix], p)
        # choose the best paths
        new_beam = sorted(new_beam.items(), key=lambda x: x[1], reverse=True)
        beam = new_beam[:beam_size]
    return beam
But most of the versions I found online (following the paper) look like this:
def _prefix_beam_decode(y, beam_size, blank):
    T, V = y.shape
    log_y = np.log(y)
    beam = [(tuple(), (0, ninf))]
    for t in range(T):
        new_beam = defaultdict(lambda: (ninf, ninf))
        for prefix, (p_b, p_nb) in beam:
            for i in range(V):
                p = log_y[t, i]
                if i == blank:
                    new_p_b, new_p_nb = new_beam[prefix]
                    new_p_b = logsumexp(new_p_b, p_b + p, p_nb + p)
                    new_beam[prefix] = (new_p_b, new_p_nb)
                    continue
                end_t = prefix[-1] if prefix else None
                new_prefix = prefix + (i,)
                new_p_b, new_p_nb = new_beam[new_prefix]
                if i != end_t:
                    new_p_nb = logsumexp(new_p_nb, p_b + p, p_nb + p)
                else:
                    new_p_nb = logsumexp(new_p_nb, p_b + p)
                new_beam[new_prefix] = (new_p_b, new_p_nb)
                if i == end_t:
                    new_p_b, new_p_nb = new_beam[prefix]
                    new_p_nb = logsumexp(new_p_nb, p_nb + p)
                    new_beam[prefix] = (new_p_b, new_p_nb)
        beam = sorted(new_beam.items(), key=lambda x: logsumexp(*x[1]), reverse=True)
        beam = beam[:beam_size]
    return beam
The results of the two differ, and my version tends to return longer strings. There are two main aspects I don't quite understand:
Are there any details of my version that are not well thought out?
The common version generates a new prefix via new_prefix = prefix + (i,) regardless of whether the end of the previous prefix is the same as the new character i. For example, if the old prefix is [a,a,b] and the new character is b, both [a,a,b] and [a,a,b,b] are kept. What is the purpose of this? And does it cause double counting?
Looking forward to your answer, thanks in advance!
When you choose the best paths in your code, you don't want to differentiate between [1,_] and [1] since both correspond to the same prefix [1].
If you have for example:
[1], [1,_], [1,2]
then you want the probabilities of [1] and [1,_] both to be the sum of the two:
probability([1]) = probability([1])+probability([1,_])
probability([1,_]) = probability([1])+probability([1,_])
And after sorting by these probabilities, you may want to keep enough entries that the number of true prefixes is beam_size.
For example, suppose you have [1], [1,_], [2], [3],
whose probabilities are 0.1, 0.08, 0.11, 0.15.
Then the probabilities by which you want to sort them are
0.18, 0.18, 0.11, 0.15, respectively (0.18 = 0.1 + 0.08).
Sorted: [1]:0.18, [1,_]: 0.18, [3]:0.15, [2]:0.11
And if you have beam_size 2, for example, then you may want to keep
[1], [1,_] and [3] so that you have 2 prefixes in your beam, because [1] and [1,_] count as the same prefix (as long as the next character is not 1 - that's why we keep track of [1] and [1,_] separately).
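A minimal sketch of that merge-before-sorting step, assuming paths are tuples where a trailing '_' marks a path ending in blank (the helper name and data layout here are illustrative, not taken from either implementation above):
from collections import defaultdict

def sort_by_merged_prob(beam):
    # beam: list of (path, prob) pairs with plain (non-log) probabilities
    merged = defaultdict(float)
    for path, prob in beam:
        prefix = path[:-1] if path and path[-1] == '_' else path
        merged[prefix] += prob  # [1] and [1,_] pool their probability mass
    key = lambda item: merged[item[0][:-1] if item[0] and item[0][-1] == '_' else item[0]]
    return sorted(beam, key=key, reverse=True)

# Matches the example above: [1]:0.18, [1,_]:0.18, [3]:0.15, [2]:0.11
print(sort_by_merged_prob([((1,), 0.1), ((1, '_'), 0.08), ((2,), 0.11), ((3,), 0.15)]))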

Is there a way to take a list of strings and create a JSON file, where both the key and value are list items?

I am creating a Python script that can read scanned and tabular .pdfs, extract some important data, and insert it into a JSON file to later be loaded into a SQL database (I will also be developing the DB as a project for learning MongoDB).
Basically, my issue is that I have never worked with JSON files before, but that was the format I was recommended to output to. The scraping script works, and while the pre-processing could be a lot cleaner, for now it works. The issue I run into is that the keys and values end up in the same list, and some of the values, because they contained a decimal point, are split across two list items. I'm not really sure where to even start.
Since I know what the indexes of the list are, I could easily assign keys and values explicitly, but then it may not be applicable to any other .pdf; that is, the script cannot be coded explicitly for one file.
import re  # the cleanText function below requires re
import PyPDF2 as pdf2
import textract

filename = "TestSpec.pdf"
pdfFileObj = open(filename, 'rb')
pdfReader = pdf2.PdfFileReader(pdfFileObj)
num_pages = pdfReader.numPages
count = 0
text = ""
while count < num_pages:
    pageObj = pdfReader.getPage(count)  # read each page, not page 0 repeatedly
    count += 1
    text += pageObj.extractText()
# If PyPDF2 extracted nothing (a scanned PDF), fall back to OCR via textract
if text == "":
    text = textract.process(filename, method="tesseract", language="eng")

def cleanText(x):
    '''
    This function takes the byte data extracted from scanned PDFs and cleans it of all
    unnecessary data.
    Requires re
    '''
    stringedText = str(x)
    cleanText = stringedText.replace('\n', '')
    splitText = re.split(r'\W+', cleanText)
    caseingText = [word.lower() for word in splitText]
    cleanOne = [word for word in caseingText if word != 'n']
    dexStop = cleanOne.index("od260")
    dexStart = cleanOne.index("sheet")
    clean = cleanOne[dexStart + 1:dexStop]
    return clean

cleanText = cleanText(text)
This is the current output:
['n21', 'feb', '2019', 'nsequence', 'lacz', 'rp', 'n5', 'gat', 'ctc', 'tac', 'cat', 'ggc', 'gca', 'cat', 'ttc', 'ccc', 'gaa', 'aag', 'tgc', '3', 'norder', 'no', '15775199', 'nref', 'no', '207335463', 'n25', 'nmole', 'dna', 'oligo', '36', 'bases', 'nproperties', 'amount', 'of', 'oligo', 'shipped', 'to', 'ntm', '50mm', 'nacl', '66', '8', 'xc2', 'xb0c', '11', '0', '32', '6', 'david', 'cook', 'ngc', 'content', '52', '8', 'd260', 'mmoles', 'kansas', 'state', 'university', 'biotechno', 'nmolecular', 'weight', '10', '965', '1', 'nnmoles']
and we want the output as a JSON set up like:
{"Date": "21feb2019", "Sequence ID": "lacz-rp", "Sequence 5'-3'": "gat..."}
and so on. Just not sure how to do that.
Here is a screenshot of the data from my sample pdf.
So, I have figured out some of this. I am still having issues with grabbing the last third of the data I need without explicitly programming it in, but here is what I have so far. Once I have everything working, I will worry about optimizing and condensing it.
# for PDF reading
import PyPDF2 as pdf2
import textract
# for data preprocessing
import re
from dateutil.parser import parse
# For generating the JSON file array
import json

# This finds and opens the pdf file, reads the data, and extracts the data.
filename = "*.pdf"
pdfFileObj = open(filename, 'rb')
pdfReader = pdf2.PdfFileReader(pdfFileObj)
text = ""
pageObj = pdfReader.getPage(0)
text += pageObj.extractText()
# Checks if the extracted data is in string form or a picture; if a picture,
# textract reads the data. It then closes the pdf file.
if text == "":
    text = textract.process(filename, method="tesseract", language="eng")
pdfFileObj.close()
# Converts text to string from byte data for preprocessing
stringedText = str(text)
# Removes escaped lines and replaces them with actual new lines.
formattedText = stringedText.replace('\\n', '\n').lower()
# Slices the long string into a workable piece (only contains useful data)
slice1 = formattedText[(formattedText.index("sheet") + 10): (formattedText.index("secondary") - 2)]
clean = re.sub('\n', " ", slice1)
clean2 = re.sub(' +', ' ', clean)
# Creating the PrimerData dictionary
with open("PrimerData.json", 'w') as file:
    primerDataSlice = clean[clean.index("molecular"): -1]
    primerData = re.split(": |\n", primerDataSlice)
    primerKeys = primerData[0::2]
    primerValues = primerData[1::2]
    primerDict = {"Primer Data": dict(zip(primerKeys, primerValues))}
    # Generating the JSON array "Primer Data"
    primerJSON = json.dumps(primerDict, ensure_ascii=False)
    file.write(primerJSON)
# Grabbing the date (this has just the date, so the JSON step will have to add it)
date = re.findall(r'(\d{2}[/\- ](\d{2}|january|jan|february|feb|march|mar|april|apr|may|june|jun|july|jul|august|aug|september|sep|october|oct|november|nov|december|dec)[/\- ]\d{2,4})', clean2)
Without input data, it is difficult to give you working code. A minimal working example with input would help. As for JSON handling, Python dictionaries can be dumped to JSON easily. See the examples here:
https://docs.python-guide.org/scenarios/json/
Get a JSON string from a dictionary and write it to a file. Then figure out how to parse the text into a dictionary.
import json

d = {"Date": "21feb2019", "Sequence ID": "lacz-rp", "Sequence 5'-3'": "gat"}
json_data = json.dumps(d)
print(json_data)
# Write that data to a file
with open("output.json", "w") as f:  # "output.json" is an illustrative name
    f.write(json_data)
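To directly answer the question in the title: once the scraped tokens alternate key, value, key, value in the list, pairing them into a dictionary is a one-liner with zip and slicing. A minimal sketch with hypothetical tokens:
import json

# Hypothetical flat token list in key, value, key, value order
tokens = ['date', '21feb2019', 'sequence id', 'lacz-rp']
record = dict(zip(tokens[0::2], tokens[1::2]))
print(json.dumps(record))  # {"date": "21feb2019", "sequence id": "lacz-rp"}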
So, I did figure this out. The problem was really that, because of the way my pre-processing worked, pulling all the data into a single list wasn't a great idea, considering that the keys for the dictionary never changed.
Here is the semi-finished result for making the dictionary and JSON file:
# Assumed imports for this snippet (re, json, date, and the Biopython helpers
# mt, mw, Seq, generic_dna are used below but were not shown in the question):
import re
import json
from datetime import date
from Bio.SeqUtils import MeltingTemp as mt
from Bio.SeqUtils import molecular_weight as mw
from Bio.Seq import Seq
from Bio.Alphabet import generic_dna

# Collect the sequence name
name = clean2[clean2.index("Sequence") + 11: clean2.index("Sequence") + 19]
# Collecting shipment info
ordered = input("Who placed this order? ")
received = input("Who is receiving this order? ")
dateOrder = re.findall(
    r"(\d{2}[/\- ](\d{2}|January|Jan|February|Feb|March|Mar|April|Apr|May|June|Jun|July|Jul|August|Aug|September|Sep|October|Oct|November|Nov|December|Dec)[/\- ]\d{2,4})",
    clean2)
dateReceived = date.today()
refNo = clean2[clean2.index("ref.No. ") + 8: clean2.index("ref.No.") + 17]
orderNo = clean2[clean2.index("Order No.") + 10: clean2.index("Order No.") + 18]
# Finding and grabbing the sequence data, then computing the
# GC content and the melting temperature (TM)
bases = int(clean2[clean2.index("bases") - 3:clean2.index("bases") - 1])
seqList = [line for line in clean2 if re.match(r'^[AGCT]+$', line)]
sequence = "".join(i for i in seqList[:bases])

def gc_content(x):
    # Fraction of G/C characters, as a percentage of the base count
    count = 0
    for i in x:
        if i == 'G' or i == 'C':
            count += 1
    return round((count / bases) * 100, 1)

gc = gc_content(sequence)
tm = mt.Tm_GC(sequence, Na=50)
moleWeight = round(mw(Seq(sequence, generic_dna)), 2)
dilWeight = float(clean2[clean2.index("ug/OD260:") + 10: clean2.index("ug/OD260:") + 14])
dilution = dilWeight * 10
primerDict = {"Primer Data": {
    "Sequence": sequence,
    "Bases": bases,
    "TM (50mM NaCl)": tm,
    "% GC content": gc,
    "Molecular weight": moleWeight,
    "ug/0D260": dilWeight,
    "Dilution volume (uL)": dilution
},
    "Shipment Info": {
        "Ref. No.": refNo,
        "Order No.": orderNo,
        "Ordered by": ordered,
        "Date of Order": dateOrder,
        "Received By": received,
        "Date Received": str(dateReceived.strftime("%d-%b-%Y"))
    }}
# Generating the JSON array "Primer Data"
with open("".join(name) + ".json", 'w') as file:
    primerJSON = json.dumps(primerDict, ensure_ascii=False)
    file.write(primerJSON)

Recall from nltk.metrics.score returning None

I'm trying to calculate precision and recall using nltk.metrics.scores (http://www.nltk.org/_modules/nltk/metrics/scores.html) with my NLTK NaiveBayesClassifier.
However, I stumble upon the error:
"unsupported operand type(s) for +: 'int' and 'NoneType'"
which I suspect comes from my 10-fold cross-validation, where in some reference sets there are zero negatives (the data set is a bit imbalanced: 87% of it is positive).
According to nltk.metrics.scores:
def precision(reference, test):
    """Given a set of reference values and a set of test values, return
    the fraction of test values that appear in the reference set.
    In particular, return card(``reference`` intersection
    ``test``)/card(``test``).
    If ``test`` is empty, then return None."""
It seems that some of my 10 folds are returning recall as None, since there are no Negatives in the reference set. Any idea on how to approach this problem?
My full code is as follows:
# Assumed imports for this snippet (not shown in the question):
import collections
import nltk
from nltk.classify import NaiveBayesClassifier
from nltk.metrics.scores import precision, recall, f_measure

trainfeats = negfeats + posfeats

n = 10  # 10-fold cross-validation
subset_size = len(trainfeats) // n
accuracy = []
pos_precision = []
pos_recall = []
neg_precision = []
neg_recall = []
pos_fmeasure = []
neg_fmeasure = []
cv_count = 1
for i in range(n):
    testing_this_round = trainfeats[i*subset_size:][:subset_size]
    training_this_round = trainfeats[:i*subset_size] + trainfeats[(i+1)*subset_size:]
    classifier = NaiveBayesClassifier.train(training_this_round)
    refsets = collections.defaultdict(set)
    testsets = collections.defaultdict(set)
    for i, (feats, label) in enumerate(testing_this_round):
        refsets[label].add(i)
        observed = classifier.classify(feats)
        testsets[observed].add(i)
    cv_accuracy = nltk.classify.util.accuracy(classifier, testing_this_round)
    cv_pos_precision = precision(refsets['Positive'], testsets['Positive'])
    cv_pos_recall = recall(refsets['Positive'], testsets['Positive'])
    cv_pos_fmeasure = f_measure(refsets['Positive'], testsets['Positive'])
    cv_neg_precision = precision(refsets['Negative'], testsets['Negative'])
    cv_neg_recall = recall(refsets['Negative'], testsets['Negative'])
    cv_neg_fmeasure = f_measure(refsets['Negative'], testsets['Negative'])
    accuracy.append(cv_accuracy)
    pos_precision.append(cv_pos_precision)
    pos_recall.append(cv_pos_recall)
    neg_precision.append(cv_neg_precision)
    neg_recall.append(cv_neg_recall)
    pos_fmeasure.append(cv_pos_fmeasure)
    neg_fmeasure.append(cv_neg_fmeasure)
    cv_count += 1

print('---------------------------------------')
print('N-FOLD CROSS VALIDATION RESULT ' + '(' + 'Naive Bayes' + ')')
print('---------------------------------------')
print('accuracy:', sum(accuracy) / n)
print('precision', (sum(pos_precision)/n + sum(neg_precision)/n) / 2)
print('recall', (sum(pos_recall)/n + sum(neg_recall)/n) / 2)
print('f-measure', (sum(pos_fmeasure)/n + sum(neg_fmeasure)/n) / 2)
print('')
Perhaps not the most elegant, but I guess the simplest fix would be setting it to 0, and to the actual value if it's not None, e.g.:
cv_pos_precision = 0
if precision(refsets['Positive'], testsets['Positive']):
    cv_pos_precision = precision(refsets['Positive'], testsets['Positive'])
And for the others as well, of course.
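To avoid repeating that pattern (and computing each metric twice), a small helper along the same lines might look like this; metric_or_zero is an illustrative name, not part of NLTK:
def metric_or_zero(metric, reference, test):
    # NLTK's scores return None when the test set is empty; coalesce that to 0
    value = metric(reference, test)
    return 0 if value is None else value

cv_neg_precision = metric_or_zero(precision, refsets['Negative'], testsets['Negative'])
cv_neg_recall = metric_or_zero(recall, refsets['Negative'], testsets['Negative'])
Unlike the truthiness check above, this also distinguishes a legitimate 0.0 from None, though the end result is the same here.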

Display struct fields without the mess

I have a struct in Octave that contains some big arrays.
I'd like to know the names of the fields in this struct without having to look at all these big arrays.
For instance, if I have:
x.a=1;
x.b=rand(3);
x.c=1;
The obvious way to take a gander at the structure is as follows:
octave:12> x
x =
scalar structure containing the fields:
a = 1
b =
0.7195967 0.9026158 0.8946427
0.4647287 0.9561791 0.5932929
0.3013618 0.2243270 0.5308220
c = 1
In Matlab, this would appear as the more succinct:
>> x
x =
a: 1
b: [3x3 double]
c: 1
How can I see the fields/field names without seeing all these big arrays?
Is there a way to display a succinct overview (like Matlab's) inside Octave?
Thanks!
You might want to take a look at Basic Usage & Examples. There are several functions mentioned that sound like they control how structs are displayed in the terminal:
struct_levels_to_print
print_struct_array_contents
These two functions sound like they do what you want. I tried both and couldn't get the second one to work. The first function changed the terminal output like so:
octave:1> x.a=1;
octave:2> x.b=rand(3);
octave:3> x.c=1;
octave:4> struct_levels_to_print
ans = 2
octave:5> x
x =
{
a = 1
b =
0.153420 0.587895 0.290646
0.050167 0.381663 0.330054
0.026161 0.036876 0.818034
c = 1
}
octave:6> struct_levels_to_print(0)
octave:7> x
x =
{
1x1 struct array containing the fields:
a: 1x1 scalar
b: 3x3 matrix
c: 1x1 scalar
}
I'm running an older version of Octave:
octave:8> version
ans = 3.2.4
If I get a chance I'll check that other function, print_struct_array_contents, to see if it does what you want. Octave 3.6.2 looks to be the latest version as of 11/2012.
Use fieldnames ()
octave:33> x.a = 1;
octave:34> x.b = rand(3);
octave:35> x.c = 1;
octave:36> fieldnames (x)
ans =
{
[1,1] = a
[2,1] = b
[3,1] = c
}
Or if you want it to be recursive, add the following to your .octaverc file (you may want to adjust it to your preferences):
function displayfields (x, indent = "")
  if (isempty (indent))
    printf ("%s: ", inputname (1))
  endif
  if (isstruct (x))
    printf ("structure containing the fields:\n");
    indent = [indent " "];
    nn = fieldnames (x);
    for ii = 1:numel (nn)
      if (isstruct (x.(nn{ii})))
        printf ("%s %s: ", indent, nn{ii});
        displayfields (x.(nn{ii}), indent)
      else
        printf ("%s %s\n", indent, nn{ii})
      endif
    endfor
  else
    display ("not a structure");
  endif
endfunction
You can then use it in the following way:
octave> x.a=1;
octave> x.b=rand(3);
octave> x.c.stuff = {2, 3, 45};
octave> x.c.stuff2 = {"some", "other"};
octave> x.d=1;
octave> displayfields (x)
x: structure containing the fields:
a
b
c: structure containing the fields:
stuff
stuff2
d
In Octave version 4.0.0, configured for "x86_64-pc-linux-gnu" (Ubuntu 16.04), I did this on the command line:
print_struct_array_contents(true)
sampleFactorList % example, my nested structure array
Output: (shortened):
sampleFactorList =
scalar structure containing the fields:
sampleFactorList =
1x6 struct array containing the fields:
var =
{
[1,1] = 1
[1,2] =
2 1 3
}
card =
{
[1,1] = 3
[1,2] =
3 3 3
}
To disable it and get back to the old behaviour:
print_struct_array_contents(false)
sampleFactorList
sampleFactorList =
scalar structure containing the fields:
sampleFactorList =
1x6 struct array containing the fields:
var
card
val
I've also put print_struct_array_contents(true) into my .octaverc file.