SoapUI Script Assertion to validate the keys are present without validating the values within the keys - json

I have a REST request that returns a JSON response with a set of nine keys and their values. Now, the input values for the request are randomized, so I will get different values every time it is run.
Is it possible to create a script assertion that just validates whether the JSON structure is correct?
Json Response:
{
  "sid": 636811,
  "poss": 122,
  "mis": -150,
  "pres": 253,
  "aea": 0,
  "aa": 12,
  "ua": 7,
  "lar": null,
  "lbr": 1
}
Script Assertion:
def expectedMap = [sid:'', poss:'', mis:'', pres:'', aea:'', aa:'', ua:'', lar:'', lbr:'']
def json = new groovy.json.JsonSlurper().parseText(context.response)
assert json.keySet().sort() == expectedMap.keySet().sort()
I believe the script assertion is failing because it is asserting the key values as well.
log.info expectedMap.keySet().sort()
log.info json.keySet().sort()
Tue Jun 26 14:27:52 BST 2018:INFO:[aa, aea, lar, lbr, mis, poss, pres, sid, ua]
Tue Jun 26 14:27:52 BST 2018:INFO:[aa, aea, lar, lbr, mis, poss, pres, sid, ua]
log.info expectedMap.keySet().sort().getClass()
log.info json.keySet().sort().getClass()
Tue Jun 26 14:17:12 BST 2018:INFO:class java.util.ArrayList
Tue Jun 26 14:17:12 BST 2018:INFO:class java.util.TreeMap$KeySet

You are almost there. You just need to get the keys, sort them, and compare like with like: as your getClass() output shows, expectedMap.keySet().sort() is an ArrayList while json.keySet().sort() is still a key set, so the two never compare equal.
Change from:
assert expectedMap == json, 'Actual response is not matching with expected data'
To:
assert expectedMap.keySet().sort() == json.keySet().sort() as List, 'Actual response is not matching with expected data'
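For what it's worth, the same technique (compare key sets, ignore values) can be sketched in plain Python; the response text below is copied from the question, everything else is illustrative:

import json

expected_keys = {"sid", "poss", "mis", "pres", "aea", "aa", "ua", "lar", "lbr"}
response_text = '{"sid": 636811, "poss": 122, "mis": -150, "pres": 253, "aea": 0, "aa": 12, "ua": 7, "lar": null, "lbr": 1}'

# Only the key set is compared; the values (including the null) never enter the check.
assert set(json.loads(response_text)) == expected_keys, "unexpected key set"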

Related

Unable to process JSON input when ingesting logs to Oracle

Error:
send: b'{"specversion": "1.0", "logEntryBatches": [{"entries": [{"data": "{\\"hello\\": \\"oracle\\", \\"as\\": \\"aaa\\"}", "id": "ocid1.test.oc1..jkhjkhh23423fd", "time": "2021-04-01T12:19:28.416000Z"}], "source": "EXAMPLE-source-Value", "type": "remediationLogs", "defaultlogentrytime": "2021-04-01T12:19:28.416000Z"}]}'
reply: 'HTTP/1.1 400 Bad Request\r\n'
header: Date: Fri, 02 Apr 2021 07:39:16 GMT
header: opc-request-id: ER6S6HDVTNWUOKCJ7XXZ/OpcRequestIdExample/770899C2C7CA6ABA11D996CC57E8EE8F
header: Content-Type: application/json
header: Connection: close
header: Content-Length: 79
Traceback (most recent call last):
  File "tool.py", line 45, in <module>
    put_logs_response = loggingingestion_client.put_logs(
  File "/home/ubuntu/.local/lib/python3.8/site-packages/oci/loggingingestion/logging_client.py", line 172, in put_logs
    return self.base_client.call_api(
  File "/home/ubuntu/.local/lib/python3.8/site-packages/oci/base_client.py", line 276, in call_api
    response = self.request(request)
  File "/home/ubuntu/.local/lib/python3.8/site-packages/oci/base_client.py", line 388, in request
    self.raise_service_error(request, response)
  File "/home/ubuntu/.local/lib/python3.8/site-packages/oci/base_client.py", line 553, in raise_service_error
    raise exceptions.ServiceError(
oci.exceptions.ServiceError: {'opc-request-id': 'ER6S6HDVTNWUOKCJ7XXZ/OpcRequestIdExample/770899C2C7CA6ABA11D996CC57E8EE8F', 'code': 'InvalidParameter', 'message': 'Unable to process JSON input', 'status': 400}
I am trying to send JSON data to Oracle Logging, but I am getting the above error. I am using json.dumps(data) to convert the dict to a string. Kindly let me know if any workaround is available.
Code:
import json
from datetime import datetime

import oci

data = {'hello': 'oracle', "as": "aaa"}
put_logs_response = loggingingestion_client.put_logs(
    log_id="ocid1.log.oc1.iad.<<Log OCID>>",
    put_logs_details=oci.loggingingestion.models.PutLogsDetails(
        specversion="1.0",
        log_entry_batches=[
            oci.loggingingestion.models.LogEntryBatch(
                entries=[
                    oci.loggingingestion.models.LogEntry(
                        data=json.dumps(data),
                        id="ocid1.test.oc1..jkhjkhh23423fd",
                        time=datetime.strptime(
                            "2021-04-01T12:19:28.416Z",
                            "%Y-%m-%dT%H:%M:%S.%fZ"))],
                source="EXAMPLE-source-Value",
                type="Logs",
                defaultlogentrytime=datetime.strptime(
                    "2021-04-01T12:19:28.416Z",
                    "%Y-%m-%dT%H:%M:%S.%fZ"))]),
    timestamp_opc_agent_processing=datetime.strptime(
        "2021-04-01T12:19:28.416Z",
        "%Y-%m-%dT%H:%M:%S.%fZ"),
    opc_agent_version="EXAMPLE-opcAgentVersion-Value",
    opc_request_id="ER6S6HDVTNWUOKCJ7XXZ/OpcRequestIdExample/")
This exception indicates that you have an InvalidParameter in your JSON input.
oci.exceptions.ServiceError: {'opc-request-id': 'ER6S6HDVTNWUOKCJ7XXZ/OpcRequestIdExample/770899C2C7CA6ABA11D996CC57E8EE8F', 'code': 'InvalidParameter', 'message': 'Unable to process JSON input', 'status': 400}
The invalid parameter is your timestamp, 2021-04-01T12:19:28.416Z.
According to Oracle's documentation you need to use an RFC3339-formatted date-time string with milliseconds precision when creating a LogEntry.
This code snippet is from oci-python-sdk's log_entry.py; note that it doesn't mention the milliseconds precision that Oracle's documentation calls for:
@time.setter
def time(self, time):
    """
    Sets the time of this LogEntry.
    Optional. The timestamp associated with the log entry. An RFC3339-formatted date-time string.
    If unspecified, defaults to PutLogsDetails.defaultlogentrytime.

    :param time: The time of this LogEntry.
    :type: datetime
    """
    self._time = time
This code creates a UTC, RFC3339-compliant timestamp. Note that isoformat() defaults to microsecond precision, as the output shows:
from datetime import datetime
from datetime import timezone
current_utc_time_with_offset = datetime.now(timezone.utc).isoformat()
print(current_utc_time_with_offset)
#output
2021-04-06T13:00:52.706040+00:00
current_utc_time_with_timezone = datetime.now(timezone.utc).isoformat().replace("+00:00", "Z")
print(current_utc_time_with_timezone)
#output
2021-04-06T13:09:10.053432Z
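Since the API wants millisecond precision, a minimal tweak (Python 3.6+) is to pass timespec='milliseconds' to isoformat(), which trims the fractional seconds to three digits:

from datetime import datetime, timezone

# timespec='milliseconds' limits the fractional part to exactly three digits (Python 3.6+).
ts = datetime.now(timezone.utc).isoformat(timespec='milliseconds')
print(ts)                          # e.g. 2021-04-06T13:09:10.053+00:00
print(ts.replace("+00:00", "Z"))   # e.g. 2021-04-06T13:09:10.053Z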
This Stack Overflow question is worth a read:
What's the difference between ISO 8601 and RFC 3339 Date Formats?
This article is also useful:
Understanding about RFC 3339 for Datetime and Timezone Formatting in Software Engineering
Your data looks fine; I think the issue is that the time precision is more than milliseconds. It should work if you lose the trailing zeros in the time.
Format a datetime into a string with milliseconds
https://docs.oracle.com/en-us/iaas/api/#/en/logging-dataplane/20200831/LogEntry/
Your time is RFC3339-formatted, but its precision is more than milliseconds:
'{"specversion": "1.0", "logEntryBatches": [{"entries": [{"data": "{\"hello\": \"oracle\", \"as\": \"aaa\"}", "id": "ocid1.test.oc1..jkhjkhh23423fd", "time": "2021-04-01T12:19:28.416000Z"}], "source": "EXAMPLE-source-Value", "type": "remediationLogs", "defaultlogentrytime": "2021-04-01T12:19:28.416000Z"}]}'
See https://docs.oracle.com/en-us/iaas/api/#/en/logging-dataplane/20200831/LogEntry/
The timestamp associated with the log entry. An RFC3339-formatted date-time string with milliseconds precision. If unspecified, defaults to PutLogsDetails.defaultlogentrytime.
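If your client lets you pass the timestamp through as a string rather than a datetime (an assumption; the setter shown above types it as datetime), one way to lose the trailing zeros is to trim strftime's six-digit %f output to three digits:

from datetime import datetime

dt = datetime.strptime("2021-04-01T12:19:28.416Z", "%Y-%m-%dT%H:%M:%S.%fZ")

# %f always renders six digits (416000); slicing off the last three
# leaves exactly millisecond precision, as the LogEntry docs require.
ts = dt.strftime("%Y-%m-%dT%H:%M:%S.%f")[:-3] + "Z"
print(ts)   # 2021-04-01T12:19:28.416Z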

Why are there no 'To' nor 'From' headers in the output from the internetMessageHeaders selector?

When I make the following call:
/beta/me/messages/{id}?$select=internetMessageHeaders
I get the following output:
{
  "@odata.context": "https://graph.microsoft.com/beta/$metadata#users('...')/messages(internetMessageHeaders)/$entity",
  "@odata.etag": "...",
  "id": "AAMkAGY1Mz...",
  "internetMessageHeaders": [
    {
      "name": "Received",
      "value": "from CY1PR16MB0549.namprd16.prod.outlook.com (2603:10b6:903:13d::13) by DM3PR16MB0553.namprd16.prod.outlook.com with HTTPS via CY4PR06CA0051.NAMPRD06.PROD.OUTLOOK.COM; Fri, 16 Feb 2018 22:14:45 +0000"
    },
    ...
  ]
}
And nowhere do I find 'To' or 'From' fields in the response. Why? Is there a way to retrieve this information?
From the documentation, this property holds:
A key-value pair that represents an Internet message header, as defined by RFC5322, that provides details of the network path taken by a message from the sender to the recipient.
Based on that description, your result looks correct to me:
from CY1PR16MB0549.namprd16.prod.outlook.com (2603:10b6:903:13d::13)
by DM3PR16MB0553.namprd16.prod.outlook.com
with HTTPS
via CY4PR06CA0051.NAMPRD06.PROD.OUTLOOK.COM;
Fri, 16 Feb 2018 22:14:45 +0000
For the To and From addresses, you need to add toRecipients and from to your $select clause.
/beta/me/messages/{id}?$select=toRecipients,from,internetMessageHeaders
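For illustration, here is a rough Python sketch of the same request using the requests library; the token and message id are placeholders you would fill in from your own auth flow:

import requests

ACCESS_TOKEN = "<access-token>"   # placeholder: obtain via your OAuth2/MSAL flow
MESSAGE_ID = "<message-id>"       # placeholder

url = f"https://graph.microsoft.com/beta/me/messages/{MESSAGE_ID}"
params = {"$select": "toRecipients,from,internetMessageHeaders"}
resp = requests.get(url, params=params,
                    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"})
resp.raise_for_status()

msg = resp.json()
print(msg["from"], msg["toRecipients"])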

FCM Device group BadJsonFormat - properly formed json

I built a primitive JSON body for FCM:
Body = mochijson2:encode([ {<<"operation">>, <<"create">>},{<<"notification_key_name">>, <<"console group">>},{<<"registration_ids">>, [<<"02aa6XXXX3c9b6d">>,<<"APA91bGtaXXXXXXXXXXXXoi4UH8vIdZk1X67A_9izpSFSHV3BXxdIwG">>]}]).
and sent a POST request to create a device group, following the documentation:
httpc:request(post, {Url, [{"Authorization", KeyApi}, {"project_id", ProjectId}], "application/json", Body},[{timeout, 5000}], []).
But I got the error BadJsonFormat:
{ok,{{"HTTP/1.1",400,"Bad Request"},
[{"cache-control","private, max-age=0"},
{"date","Fri, 10 Mar 2017 16:19:37 GMT"},
{"accept-ranges","none"},
{"server","GSE"},
{"vary","Accept-Encoding"},
{"content-length","25"},
{"content-type","application/json; charset=UTF-8"},
{"expires","Fri, 10 Mar 2017 16:19:37 GMT"},
{"x-content-type-options","nosniff"},
{"x-frame-options","SAMEORIGIN"},
{"x-xss-protection","1; mode=block"},
{"alt-svc","quic=\":443\"; ma=2592000; v=\"36,35,34\""}],
"{\"error\":\"BadJsonFormat\"}"}}
Yet mochijson2:decode(Body) works fine, and it looks like properly formed JSON, but I get the error BadJsonFormat anyway.
What is wrong? How can I fix this?
The function mochijson2:encode doesn't return a string or a binary, but an iolist:
1> Body = mochijson2:encode([ {<<"operation">>, <<"create">>},{<<"notification_key_name">>, <<"console group">>},{<<"registration_ids">>, [<<"02aa6XXXX3c9b6d">>,<<"APA91bGtaXXXXXXXXXXXXoi4UH8vIdZk1X67A_9izpSFSHV3BXxdIwG">>]}]).
[123,
[34,<<"operation">>,34],
58,
[34,<<"create">>,34],
44,
[34,<<"notification_key_name">>,34],
58,
[34,<<"console group">>,34],
44,
[34,<<"registration_ids">>,34],
58,
[91,
[34,<<"02aa6XXXX3c9b6d">>,34],
44,
[34,<<"APA91bGtaXXXXXXXXXXXXoi4UH8vIdZk1X67A_9izpSF"...>>,
34],
93],
125]
There is nothing wrong with that by itself. Using iolists instead of strings or binaries means that you don't have to build an expensive flat data structure that you would just write to a file or a socket and then throw away. Functions like file:write_file and gen_tcp:send handle iolists just as well as strings or binaries.
However, httpc:request doesn't!
Let's test that by starting a server on port 1111 with netcat in a shell:
$ nc -l 1111
And then make a request from the Erlang shell:
3> httpc:request(post, {"http://127.0.0.1:1111", [], "application/json", Body},[{timeout, 5000}], []).
The netcat server shows this output:
POST / HTTP/1.1
content-type: application/json
content-length: 13
te:
host: 127.0.0.1:1111
connection: keep-alive
{"operation":"create",....
Note that the content-length is 13 instead of 159! httpc:request is able to send the iolist, but it uses the function length instead of iolist_size to generate the content-length header, and as a result the server only considers the first 13 bytes of the JSON object, which is not valid JSON by itself.
The solution is to pass iolist_to_binary(Body) to httpc:request instead of just Body.
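The underlying mistake is easy to reproduce in any language; here is a rough Python analogy (the chunk values are made up) of counting top-level elements instead of bytes:

# The body is a nested structure of chunks, like an Erlang iolist.
chunks = [b"{", b'"operation"', b":", b'"create"', b"}"]

body = b"".join(chunks)   # the payload as it appears on the wire
print(len(chunks))        # 5  -- what a length()-style Content-Length reports
print(len(body))          # 22 -- the actual number of bytes sent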

Unable to Read JSON file using Elephant Bird

I am trying to load a JSON file that has null values in it using the elephant-bird JsonLoader.
sample.json
{"created_at": "Mon Aug 22 10:48:23 +0000 2016","id": 767674772662607873,"id_str": "767674772662607873","text": "KPIT Image Result for https:\/\/t.co\/Nas2ZnF1zZ... https:\/\/t.co\/9TnelwtIvm","source": "\u003ca href=\"http:\/\/twitter.com\" rel=\"nofollow\"\u003eTwitter Web Client\u003c\/a\u003e","truncated": false,"in_reply_to_status_id": 123,"in_reply_to_status_id_str": null,"in_reply_to_user_id": null,"in_reply_to_user_id_str": null,"in_reply_to_screen_name": null,"geo": null,"coordinates": null,"place": null,"contributors": null,"is_quote_status": false,"retweet_count": 0,"favorite_count": 0,"entities": {"hashtags": [],"urls": [{"url": "https:\/\/t.co\/Nas2ZnF1zZ","expanded_url": "http:\/\/miltonious.com\/","display_url": "miltonious.com","indices": [24, 47]}],"user_mentions": [],"symbols": []},"favorited": false,"retweeted": false,"possibly_sensitive": false,"filter_level": "low","lang": "en","timestamp_ms": "1471862903167"}
script:
REGISTER piggybank.jar
REGISTER json-simple-1.1.1.jar
REGISTER elephant-bird-pig-4.3.jar
REGISTER elephant-bird-core-4.1.jar
REGISTER elephant-bird-hadoop-compat-4.3.jar
json = LOAD 'sample.json' USING JsonLoader('created_at:chararray, id:chararray, id_str:chararray, text:chararray, source:chararray, in_reply_to_status_id:chararray, in_reply_to_status_id_str:chararray, in_reply_to_user_id:chararray, in_reply_to_user_id_str:chararray, in_reply_to_screen_name:chararray, geo:chararray, coordinates:chararray, place:chararray, contributors:chararray, is_quote_status:bytearray, retweet_count:long, favorite_count:chararray, entities:map[], favorited:bytearray, retweeted:bytearray, possibly_sensitive:bytearray, lang:chararray');
describe json;
dump json;
When I dump json, I get the following output and warning:
(Mon Aug 22 10:48:23 +0000 2016,767674772662607873,767674772662607873,google Image Result for Twitter Web Client,false,1234,12345,3214,43215,,,,,,,,,,,,,,)
WARN org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigHadoopLogger - org.apache.pig.builtin.JsonLoader(UDF_WARNING_1): Bad record, returning null for {complete json}
From the warning, I guess it is choking on the null values.
So how can we load JSON that has null values in it?
I have also tried another way, i.e.
json = LOAD 'sample.json' USING com.twitter.elephantbird.pig.load.JsonLoader('created_at:chararray, id:chararray, id_str:chararray, text:chararray, source:chararray, in_reply_to_status_id:chararray, in_reply_to_status_id_str:chararray, in_reply_to_user_id:chararray, in_reply_to_user_id_str:chararray, in_reply_to_screen_name:chararray, geo:chararray, coordinates:chararray, place:chararray, contributors:chararray, is_quote_status:bytearray, retweet_count:long, favorite_count:chararray, entities:map[], favorited:bytearray, retweeted:bytearray, possibly_sensitive:bytearray, lang:chararray');
describe json;
Output
Schema for json unknown.
Please suggest a fix. Thanks.
You can try something like this:
MY_JSON = LOAD 'sample.json' USING com.twitter.elephantbird.pig.load.JsonLoader('-nestedLoad');
dump MY_JSON;
With the '-nestedLoad' option, elephant-bird parses each record (nulls and nested objects included) into a Pig map, so you can project individual fields afterwards, for example FOREACH MY_JSON GENERATE $0#'created_at';, rather than declaring the full schema up front.

R: getting google finance JSON data into a dataframe

I am trying to get google finance JSON data into a dataframe.
I tried:
library(jsonlite)
dat1 <- fromJSON("http://www.google.com/finance/info?q=NSE:%20AAPL,MSFT,TSLA,AMZN,IBM")
dat1
However I get an error:
Error in feed_push_parser(readBin(con, raw(), n), reset = TRUE) :
parse error: trailing garbage
Thank you for any help.
I could not replicate your error using fromJSON due to proxy issues on my side, but the following works using httr:
require(jsonlite)
require(httr)
#Set your proxy setting if needed
#set_config(use_proxy(url='hostname',port= port,username="",password=""))
url.name = "http://www.google.com/finance/info?q=NSE:%20AAPL,MSFT,TSLA,AMZN,IBM"
url.get = GET(url.name)
#parsing the content as json results in similar error as you encountered
#url.content = content(url.get,type="application/json")
#Error in parseJSON(txt) : parse error: trailing garbage
# " : "0.57" ,"yld" : "2.46" } ,{ "id": "358464" ,"t" : "MSFT"
# (right here) ------^
#read content as html text
url.content = content(url.get, as="text")
#remove html tags
clean.text = gsub("<.*?>", "", url.content)
#remove residual text
clean.text = gsub("\\n|\\//","",clean.text)
DF = fromJSON(clean.text)
head(DF[,1:10],5)
# id t e l l_fix l_cur s ltt lt lt_dts
#1 22144 AAPL NASDAQ 92.51 92.51 92.51 1 4:00PM EDT May 11, 4:00PM EDT 2016-05-11T16:00:02Z
#2 358464 MSFT NASDAQ 51.05 51.05 51.05 1 4:00PM EDT May 11, 4:00PM EDT 2016-05-11T16:00:02Z
#3 12607212 TSLA NASDAQ 208.96 208.96 208.96 1 4:00PM EDT May 11, 4:00PM EDT 2016-05-11T16:00:02Z
#4 660463 AMZN NASDAQ 713.23 713.23 713.23 1 4:00PM EDT May 11, 4:00PM EDT 2016-05-11T16:00:02Z
#5 18241 IBM NYSE 148.95 148.95 148.95 2 6:59PM EDT May 11, 6:59PM EDT 2016-05-11T18:59:12Z
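The endpoint has since started returning 403 (see the last answer), but for the record, a similar clean-and-parse approach can be sketched in Python; the "// " prefix that Google prepended to the feed appears to be what trips strict JSON parsers:

import json
import requests

url = "http://www.google.com/finance/info?q=NSE:%20AAPL,MSFT,TSLA,AMZN,IBM"
raw = requests.get(url).text

# Strip the "// " comment prefix (and stray whitespace) before the JSON array.
cleaned = raw.lstrip("/ \n")
quotes = json.loads(cleaned)
print(quotes[0]["t"], quotes[0]["l"])   # ticker and last price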
I got the code below from here. Let me know if this helps. On a side note, I would also recommend Netfonds, the only source I've found that provides intra-day tick-level data for both historical prices and the open book. I posted some additional links below for pulling the Netfonds data if you're interested.
http://www.blackarbs.com/blog/3/22/2015/how-to-get-free-intraday-stock-data-from-netfonds
http://www.onestepremoved.com/free-stock-data/
# NB: Python 2 code; in Python 3, FancyURLopener lives in urllib.request (and is deprecated).
import urllib
from datetime import date, datetime

""" googlefinance
This module provides a Python API for retrieving stock data from Google Finance.
"""

_month_dict = {
    'Jan': 1,
    'Feb': 2,
    'Mar': 3,
    'Apr': 4,
    'May': 5,
    'Jun': 6,
    'Jul': 7,
    'Aug': 8,
    'Sep': 9,
    'Oct': 10,
    'Nov': 11,
    'Dec': 12}

# Google doesn't like Python's user agent...
class FirefoxOpener(urllib.FancyURLopener):
    version = 'Mozilla/5.0 (X11; U; Linux i686) Gecko/20071127 Firefox/2.0.0.11'

def __request(symbol):
    url = 'http://google.com/finance/historical?q=%s&output=csv' % symbol
    opener = FirefoxOpener()
    return opener.open(url).read().strip().strip('"')

def get_historical_prices(symbol, start_date=None, end_date=None):
    """
    Get historical prices for the given ticker symbol.
    Returns a nested list. Fields are Date, Open, High, Low, Close, Volume.
    """
    price_data = [data.split(',') for data in __request(symbol).split('\n')[1:]]
    for quote in price_data:
        quote[0] = _format_date(quote[0])
    return price_data

def _format_date(datestr):
    """ Change datestr from Google's format ('20-Jul-12') to the format Yahoo uses ('2012-07-20')
    """
    parts = datestr.split('-')
    day = int(parts[0])
    month = _month_dict[parts[1]]
    year = int('20' + parts[2])
    return date(year, month, day).strftime('%Y-%m-%d')
If the Google Finance endpoint returns newline-delimited JSON, the solution in R should be:
library(jsonlite)
dat1 <- stream_in(url("http://www.google.com/finance/info?q=NSE:%20AAPL,MSFT,TSLA,AMZN,IBM"))
But it seems the endpoint no longer accepts such requests:
HTTP status was '403 Forbidden'