I tried this solution, as well as the highest-ranking solution here, and neither works.
I have a Java automation script that needs to run a Python script remotely while giving it a JSON object to treat as a dictionary.
The JSON object as Java gives it:
{\'oracle_11gd.pcap\': \'19.0.0.10\' ,\'oracle_11g_also.pcap\': \'19.0.0.17\'}
The object is actually longer; I just gave a sample here.
Each solution fails, and I suspect it's because of the backslashes, but I can't even bring it in as a string to replace the backslashes with nothing, like this:
import sys
import ast

data = str(sys.argv[1])
data = data.replace("\\", "")  # strip the backslashes
transportDictionary = ast.literal_eval(data)
I also tried just:

data = sys.argv[1]

That didn't work either.
The solution I found was to echo the JSON into a JSON file on the remote machine where the Python runs, while using sed to strip the backslashes and replace the single quotes with double quotes. Like so:

echo {\'oracle_11gd.pcap\': \'19.0.0.10\' ,\'oracle_11g_also.pcap\': \'19.0.0.17\'} | sed 's|\\||g' | sed 's|\x27|"|g' > pcaps.json
Then, in the Python script, I read the file and json.load it, like so:
import json
.
.
.
with open(directory + 'pcaps.json', 'r') as f:
    data = json.load(f)
transportDictionary = data  # json.load already returns a dict, so no eval is needed
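For comparison, a minimal sketch of the direct route, assuming the Java side can be changed to emit standard JSON (double quotes, no backslash escaping) and to pass it as a single quoted argument:

import sys
import json

# Expects one argument containing plain JSON, e.g.:
#   python script.py '{"oracle_11gd.pcap": "19.0.0.10", "oracle_11g_also.pcap": "19.0.0.17"}'
transportDictionary = json.loads(sys.argv[1])
print(transportDictionary)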
I am parsing 350 txt files containing JSON data using Python. I am able to retrieve 62 of those objects and store them in a MySQL database, but after that I get an error: JSONDecodeError: Extra data.
Python:
import os
import ast
import json
import mysql.connector as mariadb
from mysql.connector.constants import ClientFlag

mariadb_connection = mariadb.connect(user='root', password='137800000', database='shaproject', client_flags=[ClientFlag.LOCAL_FILES])
cursor = mariadb_connection.cursor()
sql3 = """INSERT INTO shaproject.alttwo (alttwo_id,responses) VALUES """

os.chdir('F:/Code Blocks/SEM 2/DM/Project/350/For Merge Disqus')
current_list_dir = os.listdir()
print(current_list_dir)
cur_cwd = os.getcwd()
cur_cwd = cur_cwd.replace('\\', '/')

twoid = 1
for every_file in current_list_dir:
    file = open(cur_cwd + "/" + every_file)
    utffile = file.read()
    data = json.loads(utffile)
    for i in range(0, len(data['response'])):
        data123 = json.dumps(data['response'][i])
        tup = (twoid, data123)
        print(sql3 + str(tup))
        twoid += 1
        cursor.execute(sql3 + str(tup) + ";")
        print(tup)
mariadb_connection.commit()
I have searched online and found that having multiple dumped JSON objects in one file results in this error, but I am unable to resolve it.
You want to use glob. Rather than os.listdir(), which is too permissive, use glob to focus on just the *.json files. Print out the name of the file before asking .loads() to parse it. Rename any badly formatted files to .txt rather than .json, in order to skip them.
Note that you can pass the open file directly to .load(), if you wish. Closing open files would be a good thing. Rather than a direct assignment (with no close()!), you would be better off with with:

with open(cur_cwd + "/" + every_file) as file:
    data = json.load(file)
Talking about the current current working directory seems both repetitive and redundant. It would suffice to call it cwd.
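Putting those pieces together, a minimal sketch (the *.json pattern is illustrative, and it assumes you have already changed into the directory holding the 350 files):

import glob
import json

# Only pick up files that claim to be JSON; badly formatted files renamed to .txt are skipped.
for path in sorted(glob.glob('*.json')):
    print(path)  # identifies the offending file if parsing blows up
    with open(path) as f:
        data = json.load(f)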
I am looking for a library (in Java) that can help me generate a dummy JSON file to test my code. For example, the JSON file could contain random user profile data: name, address, zip code.
I searched Stack Overflow and found the following link: How to generate JSON string in Java?
The suggested library https://github.com/DiUS/java-faker seems useful; however, because of security constraints I cannot use this particular library. Are there any other recommendations?
Use, for instance, Faker, like this:
#!/usr/bin/env python3
from json import dumps
from faker import Faker

fake = Faker()

def user():
    return dict(
        name=fake.name(),
        address=fake.address(),
        bio=fake.text()
    )

print('[')
try:
    while True:
        print(dumps(user()))
        print(',')
except KeyboardInterrupt:
    # XXX: a JSON array can not end with a comma
    print(dumps(user()))
print(']')
You can use it like this:

python3 fake_user.py > users.json

Use Ctrl+C to stop it when the file is big enough.
I am not able to load the following data.json file in Python 2.7.11.
File data.json:

{
    "name": xyz,
    "age": 12
}

The code I am using to load the above file:
import json

json_data = open('data.json').read()
json.loads(json_data)
I always get the following error:

ValueError: No JSON object could be decoded

In the meantime, I also tried using yaml.load and it worked fine. But I wanted to know what I am doing wrong.
Your JSON is invalid.
Remember: if a value in JSON contains alphabetical characters and is not one of the literals true, false, or null, it is a string, so you have to write it within double quotes, like so:

{
    "name": "xyz",
    "age": 12
}
Hopefully, this fix should solve your problem.
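As a quick sanity check, assuming the corrected data.json above:

import json

with open('data.json') as f:
    data = json.load(f)  # raises ValueError if the file is still invalid
print(data['name'], data['age'])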
I have a JSON file containing the sample line below:
[{"tarih":"20130824","tarihView":"24-08-2013"},{"tarih":"20130817","tarihView":"17-08-2013"},{"tarih":"20130810","tarihView":"10-08-2013"},{"tarih":"20130803","tarihView":"03-08-2013"},{"tarih":"20130727","tarihView":"27-07-2013"},{"tarih":"20130720","tarihView":"20-07-2013"},{"tarih":"20130713","tarihView":"13-07-2013"},{"tarih":"20130706","tarihView":"06-07-2013"}]
I need to extract all the dates (they are in yyyymmdd format) into a text file with proper line endings:
20130824
20130817
20130810
20130803
...
20130706
How can I do this using sed or a similar console utility?
Many thanks for your help.
This line works for your example:
grep -Po '\d{8}' file
Or with BRE:

grep -o '[0-9]\{8\}' file

It outputs:
20130824
20130817
20130810
20130803
20130727
20130720
20130713
20130706
If you want to extract the string after "tarih":", you could use:

grep -Po '"tarih":"\K\d{8}' file

It gives the same output.
Note that regex won't do date string validation.
This is VERY easy in Python:

#!/bin/bash
# Python 2: the JSON text is also a valid Python literal (a list of dicts),
# so it can be substituted straight into the script.
python -c "vals=$(cat jsonfile)
for curVal in vals: print curVal['tarih']"
If I paste your example into jsonfile, I get this output:
20130824
20130817
20130810
20130803
20130727
20130720
20130713
20130706
Which is exactly what you need, right?
This works because in Python [] is a list and {} is a dictionary, so it is very easy to get any data out of that structure. It is also robust in that it won't fail if some field in your data contains {, ", or any other character that sed would probably look for, and it does not depend on the field position or the number of fields.
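A more robust variant is to let the json module parse the file instead of evaluating it as Python source; that also copes with true, false, and null, which are not valid Python literals. A minimal sketch, assuming the data sits in jsonfile as above:

#!/usr/bin/env python
import json

# Parse the whole file as JSON, then pull out each "tarih" value.
with open('jsonfile') as f:
    for entry in json.load(f):
        print(entry['tarih'])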
I have working code for parsing JSON output in KornShell by treating it as a string of characters. The issue I have is that the vendor keeps changing the position of the field that I am interested in. I understand that in JSON we can parse it by key-value pairs.
Is there something out there that can do this? I am interested in a specific field, and I would like to use it to run checks on the status of another REST API call.
My sample JSON output is like this:

JSONDATA value:
{
    "status": "success",
    "job-execution-id": 396805,
    "job-execution-user": "flexapp",
    "job-execution-trigger": "RESTAPI"
}
I would need the job-execution-id value to monitor this job through the rest of the script.
I am using the following command to parse it:

RUNJOB=$(print ${DATA} | cut -f3 -d':' | cut -f1 -d',' | tr -d '[:blank:]') >> ${LOGDIR}/${LOGFILE}

The problem with this is that it is field-delimited by :, and the field position has been known to change in vendor releases.
So I am trying to see if there is a utility out there that would always give me the key-value pair "job-execution-id": 396805, no matter where it is in the JSON output.
I started looking at jsawk, but it requires the js interpreter to be installed on our machines, which I don't want. Any hint on how to go about finding which RPM I would need to solve this?
I am using RHEL5.5.
Any help is greatly appreciated.
The ast-open project has libdss (and a dss command-line wrapper) which supposedly can be used with ksh. Documentation is sparse and limited to a few messages on the ast-user mailing list.
The regression tests for libdss contain some json and xml examples.
I'll try to find more info.
Python is included by default with CentOS, so one thing you could do is pass your JSON string to a Python script and use Python's JSON parser. You can then grab the value written out by the script. An example you could modify to meet your needs is below.
Note that by specifying other dictionary keys in the Python script, you can get any of the values you need without having to worry about the order changing.
Python script:
# get_job_execution_id.py
# The try/except is because you'll probably have Python 2.4 on CentOS 5.5,
# and a straight "import json" won't work unless you have Python 2.6+.
try:
    import json
except ImportError:
    import simplejson as json
import sys

json_data = sys.argv[1]
data = json.loads(json_data)
job_execution_id = data['job-execution-id']
sys.stdout.write(str(job_execution_id))
Kornshell script that executes it:
#get_job_execution_id.sh
#!/bin/ksh
JSON_DATA='{"status":"success","job-execution-id":396805,"job-execution-user":"flexapp","job-execution-trigger":"RESTAPI"}'
EXECUTION_ID=`python get_execution_id.py "$JSON_DATA"`
echo $EXECUTION_ID