Error checking with Aeson - json

This code parses a recursive JSON structure into a Haskell object that I made. I'm using the Aeson library. The problem that I'm encountering is that I want to be able to do error checking easily, even with a recursive call. Right now I use a dummy value (ayyLmao) whenever an error occurs. However, I would like to leverage the error checking I get from the Parser monad. How can I do this and possibly clean up my code in the process? If necessary I can also post some sample JSON.
EDIT: I'd like to point out that I'd like to get rid of "ayyLmao" (hence the stupid name), and somehow use 'mzero' for the Parser monad for my error checking instead.
type Comments = Vector Comment

data Comment = Comment
    { author  :: Text
    , body    :: Text
    , replies :: Comments
    } deriving Show

-- empty placeholder value (only should appear when errors occur)
ayyLmao :: Comment
ayyLmao = Comment "Ayy" "Lmao" V.empty
parseComment :: Object -> Maybe Comments
parseComment obj = flip parseMaybe obj $ \listing -> do
    -- go through intermediate objects
    comments <- listing .: "data" >>= (.: "children")
    -- parse every comment in an array
    return $ flip fmap comments $ \commentData -> case commentData of
        -- if the data in the array is an object, parse the comment
        -- (using a dummy value on error)
        Object v -> fromMaybe ayyLmao (parseMaybe parseComment' v)
        -- use a dummy value for errors (we should only get objects in
        -- the array)
        _ -> ayyLmao
  where
    parseComment' :: Object -> Parser Comment
    parseComment' v = do
        -- get all data from the object
        comment <- v .: "data"
        authorField <- comment .: "author"
        bodyField <- comment .: "body"
        replyObjs <- comment .: "replies"
        return $ case replyObjs of
            -- if there are more objects, then parse recursively
            Object more -> case parseComment more of
                -- errors use the dummy value again
                Just childReplies -> Comment authorField bodyField childReplies
                Nothing -> ayyLmao
            -- otherwise, we've reached the last comment in the tree
            _ -> Comment authorField bodyField V.empty
EDIT: The code in the answer below is correct, but I'd like to add my modified solution. The solution given assumes that "null" indicates no more replies, but for some reason the API designers decided that this should be represented by the empty string instead.
instance FromJSON Comment where
    parseJSON = withObject "Comment" $ \obj -> do
        dat <- obj .: "data"
        commReplies <- dat .: "replies"
        Comment
            <$> dat .: "author"
            <*> dat .: "body"
            <*> case commReplies of
                    Object _  -> getComments <$> dat .: "replies"
                    String "" -> return V.empty
                    _         -> fail "Expected more comments or the empty string"

You hit the mark with "Or I could have a list of Parsers and then fold it into one larger parser". This is exactly how you would propagate errors from nested parsers. The minimum change to your code to remove ayyLmao would be:
parseComment :: Object -> Maybe Comments
parseComment obj = flip parseMaybe obj $ \listing -> do
    -- go through intermediate objects
    comments <- listing .: "data" >>= (.: "children")
    -- parse every comment in an array, collecting the results
    V.sequence $ flip fmap comments $ \commentData -> case commentData of
        -- if the data in the array is an object, parse the comment
        -- (any error now propagates out of the whole parser)
        Object v -> parseComment' v
        -- fail on anything else (we should only get objects in the array)
        _ -> mzero
  where
    parseComment' :: Object -> Parser Comment
    parseComment' v = do
        -- get all data from the object
        comment <- v .: "data"
        authorField <- comment .: "author"
        bodyField <- comment .: "body"
        replyObjs <- comment .: "replies"
        case replyObjs of
            -- if there are more objects, then parse recursively
            Object more -> case parseComment more of
                Just childReplies -> return $ Comment authorField bodyField childReplies
                Nothing -> mzero
            -- otherwise, we've reached the last comment in the tree
            _ -> return $ Comment authorField bodyField V.empty
This uses mzero for the error cases and propagates errors from the list of replies with V.sequence. sequence is exactly the thing that takes a list of parsers (or, in this case, a vector of them) and folds it into a single parser that either succeeds or fails.
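For reference, this is roughly the specialisation of sequence being relied on (a sketch; it assumes the Comment/Comments types above plus Parser from Data.Aeson.Types and import qualified Data.Vector as V):
-- A vector of comment parsers goes in; a single parser for the whole
-- vector comes out, failing if any element parser fails.
sequenceComments :: Vector (Parser Comment) -> Parser Comments
sequenceComments = V.sequence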
However, the above is not a very good way to use aeson. It's usually better to derive an instance of the FromJSON type-class and work from there. I would implement the above as
{-# LANGUAGE OverloadedStrings #-}

import qualified Data.Vector as V
import Data.Vector (Vector)
import Data.Text (Text)
import Data.Aeson
import Data.Maybe (fromMaybe)
import Control.Applicative

type Comments = Vector Comment

data Comment = Comment
    { author  :: Text
    , body    :: Text
    , replies :: Comments
    } deriving Show

newtype CommentList = CommentList { getComments :: Comments }

instance FromJSON Comment where
    parseJSON = withObject "Comment" $ \obj -> do
        dat <- obj .: "data"
        Comment
            <$> dat .: "author"
            <*> dat .: "body"
            <*> (fromMaybe V.empty . fmap getComments <$> dat .: "replies")

instance FromJSON CommentList where
    parseJSON = withObject "CommentList" $ \obj -> do
        dat <- obj .: "data"
        CommentList <$> dat .: "children"
This introduces a wrapper type CommentList which is used to fetch the obj.data.children attribute from the JSON. This takes advantage of the existing FromJSON instance for Vector, so you don't have to manually loop through the replies and parse them separately.
The expression
fromMaybe V.empty . fmap getComments <$> dat .: "replies"
assumes that the replies attribute in the JSON contains either a null value or a valid CommentList, so it tries to parse a Maybe CommentList value (null is parsed to Nothing) and then replaces a Nothing value with an empty vector using fromMaybe.
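As a usage sketch (not part of the original answer): with these instances you can decode a whole listing in one go. The decodeListing name and the assumption that the top-level JSON document is itself a listing object are mine; it also assumes import qualified Data.ByteString.Lazy as BL.
-- Decode a top-level listing into Comments via the CommentList wrapper;
-- eitherDecode reports a parse error message instead of returning Nothing.
decodeListing :: BL.ByteString -> Either String Comments
decodeListing bytes = getComments <$> eitherDecode bytes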

Aeson does not find a key that I believe is present

I'm trying to parse a JSON blob that looks like this:
"{\"order_book\":{\"asks\":[[\"0.06777\",\"0.00006744\"],[\"0.06778\",\"0.01475361\"], ... ]],\"bids\":[[\"0.06744491\",\"1.35\"],[\"0.06726258\",\"0.148585363\"], ...]],\"market_id\":\"ETH-BTC\"}}"
Those lists of pairs of numbers are actually much longer; I've replaced their tails with ellipses.
Here's my code:
{-# LANGUAGE OverloadedStrings #-}

module Demo where

import Data.Aeson
import Data.ByteString.Lazy hiding (putStrLn)
import Data.Either (fromLeft)
import Network.HTTP.Request

data OrderBook = OrderBook
  { orderBook_asks     :: [[(Float,Float)]]
  , orderBook_bids     :: [[(Float,Float)]]
  , orderBook_marketId :: String
  }

instance FromJSON OrderBook where
  parseJSON = withObject "order_book" $ \v -> OrderBook
    <$> v .: "asks"
    <*> v .: "bids"
    <*> v .: "market_id"

demo :: IO ()
demo = do
  r <- get "https://www.buda.com/api/v2/markets/eth-btc/order_book"
  let d = eitherDecode $ fromStrict $ responseBody r :: Either String OrderBook
  putStrLn $ "Here's the parse error:"
  putStrLn $ fromLeft undefined d
  putStrLn $ "\n\nAnd here's the data:"
  putStrLn $ show $ responseBody r
Here's what running demo gets me:
Here's the parse error:
Error in $: key "asks" not found
And here's the data:
"{\"order_book\":{\"asks\":[[\"0.06777\",\"0.00006744\"],[\"0.06778\",\"0.01475361\"], ... ]],\"bids\":[[\"0.06744491\",\"1.35\"],[\"0.06726258\",\"0.148585363\"], ...]],\"market_id\":\"ETH-BTC\"}}"
The "asks" key looks clearly present to me -- it's the first one nested under the "order_book" key.
The key is present, but it's wrapped inside another nested object, so you have to unwrap the outer object before you can parse the keys.
The smallest-diff way to do this is probably just inline:
instance FromJSON OrderBook where
  parseJSON = withObject "order_book" $ \outer -> do
    v <- outer .: "order_book"
    OrderBook
      <$> v .: "asks"
      <*> v .: "bids"
      <*> v .: "market_id"
Though you might want to consider introducing another wrapping type instead. This would really depend on the semantics of the data format you have.
I guess you were probably assuming that this is what withObject "order_book" would do, but that's not what it does. The first parameter of withObject is just a human-readable name of the object being parsed, used to create error messages. Customarily that parameter should name the type that is being parsed - i.e. withObject "OrderBook". See the docs.
Separately, I think your asks and bids fields are mistyped.
First, your JSON input looks like they are supposed to be arrays of tuples, but your Haskell type says doubly nested arrays of tuples. So this will fail to parse.
Second, your JSON input has strings as elements of those tuples, but your Haskell type says Float. This will also fail to parse.
The correct type, according to your JSON input, should be:
{ orderBook_asks :: [(String,String)]
, orderBook_bids :: [(String,String)]
Alternatively, if you really want the floats, you'll have to parse them from strings:
instance FromJSON OrderBook where
  parseJSON = withObject "order_book" $ \outer -> do
      v <- outer .: "order_book"
      OrderBook
        <$> (map parseTuple <$> v .: "asks")
        <*> (map parseTuple <$> v .: "bids")
        <*> v .: "market_id"
    where
      parseTuple (a, b) = (read a, read b)
(note that this ☝️ code is not to be copy-pasted: I'm using read for parsing strings into floats, which will crash at runtime if the strings are malformed; in a real program you should use a better way of parsing)
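For instance, a safer alternative (a sketch, not part of the answer above; it assumes readMaybe from Text.Read and Parser from Data.Aeson.Types) is to fail inside the Parser monad instead of calling read:
-- Parse one price/amount pair, failing in the Parser monad if either
-- string is not a valid Float.
parseTupleSafe :: (String, String) -> Parser (Float, Float)
parseTupleSafe (a, b) =
    case (readMaybe a, readMaybe b) of
        (Just x, Just y) -> pure (x, y)
        _                -> fail ("could not read floats from " ++ show (a, b))
In the instance you would then write traverse parseTupleSafe =<< v .: "asks" instead of map parseTuple <$> v .: "asks".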
withObject "order_book" does not look into the value at key "order_book". In fact, the "order_book" argument is ignored apart from appearing in the error message; actually you should have withObject "OrderBook" there.
All withObject does is confirm that what you have is an object. Then it proceeds using that object to look for the keys "asks", "bids" and "market_id" – but the only key that's there at this level is order_book.
The solution is to only use this parser with the {"asks":[["0.06777"...]...]...} object. The "order_book" key carries no information anyway, unless there are other keys present there as well. You can represent that outer object with another Haskell type and its own FromJSON instance.
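A sketch of that wrapper-type approach (the OrderBookResponse name is made up for illustration, and it assumes the original FromJSON OrderBook instance from the question is kept for the inner object):
-- Wrapper for the outer {"order_book": ...} object; its instance peels
-- off the "order_book" key and delegates to the OrderBook instance.
newtype OrderBookResponse = OrderBookResponse { getOrderBook :: OrderBook }

instance FromJSON OrderBookResponse where
    parseJSON = withObject "OrderBookResponse" $ \outer ->
        OrderBookResponse <$> outer .: "order_book"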

Haskell, Aeson - Is there a better way of parsing historical data?

By 'historical data' I just mean dates as key, and value on that day as value.
For example, often govt institutes or a uni's research division compile data about earthquakes, rainfalls, market movements, etc. in this format
{
    "Meta Data": {
        "1: Country": "SomeCountry",
        "2: Region": "SomeRegion",
        "3: Latest Recording": "2018-11-16"
    },
    "EarthQuakes": {
        "2018-11-16": {
            "Richter": "5.2508"
        },
        "2018-11-09": {
            "Richter": "4.8684"
        },
        "2018-11-02": {
            "Richter": "1.8399"
        },
        ...
        "1918-11-02": {
            "Richter": "1.8399"
        }
    }
}
Usually it'll have a "Meta Data" section and other one would contain the values/data.
I as a beginner know of two ways to parse these types of documents.
Either you go with general parsing shown in Aeson's documentation where you define data types like this
data MetaData = MetaData { country :: String, region :: String, latestRec :: String } deriving (Show, Eq, Generic)
Make it an instance of FromJSON
instance FromJSON MetaData where
    parseJSON = withObject "MetaData" $
        \v -> do
            metaData  <- v .: pack "Meta Data"
            country   <- metaData .: pack "1: Country"
            region    <- metaData .: pack "2: Region"
            latestRec <- metaData .: pack "3: Latest Recording"
            return MetaData{..}
With, of course, the RecordWildCards and DeriveGeneric extensions enabled.
The problem I see with this approach is that it can't be easily implemented for the "EarthQuakes" section.
I'll have to define each and every single date
earthQuakes <- v .: "EarthQuakes"
date1 <- earthQuakes .: "2018-11-16"
date2 <- earthQuakes .: "2018-11-06"
date3 <- earthQuakes .: "2018-11-02"
...
...
dateInfinity <- earthQuakes .: "1918-11-16"
A better approach would be just parsing all the data as default JSON values, by decoding the link's contents into the generic Object type
thisFunction = do
    linksContents <- simpleHttp "somelink"
    let y = fromJust (decode linksContents :: Maybe Object)
        z = aLotOfFunctionCompositions y
    return z
where aLotOfFunctionCompositions would first convert the Object to a HashMap and then to its [(k, v)] pairs. Then I would map an unConstruct function over them to get the values out of the default constructors, like
unConstruct (DefaultType value) = case (DefaultType value) of
    DefaultType x -> x
and finally you would get a nice list!
The problem with this approach is the aLotOfFunctionCompositions.
That is just an example! But in reality it can look as ugly and unreadable as this
let y = Prelude.map (\(a, b) -> (decode (encode a) :: Maybe String, decode (encode (snd (Prelude.head b))) :: Maybe String)) x
    z = Prelude.map (\(a, b) -> (fromJust a, fromJust b)) y
    a = Prelude.map (\(a, b) -> (a, read b :: Double)) z
    b = Prelude.map (\(a, b) -> (Prelude.filter (/= '-') a, b)) a
    c = Prelude.map (\(a, b) -> (read a :: Int, b)) b
This is a snippet from working code I made.
So my question is this: Is there a better/cleaner way of decoding these sorts of JSON files where you have a lot of "dates" keys and you need to parse them into workable datatypes?
Put a Map in your data type. Aeson translates Map k vs to/from objects, where the vs are en-/de-coded via their own To-/From-JSON instances and the ks by To-/From-JSONKeys. It turns out that Day (from the time package) has perfectly suitable To-/From-JSONKey instances.
{-# LANGUAGE OverloadedStrings #-}
{-# LANGUAGE RecordWildCards   #-}
{-# LANGUAGE DeriveGeneric     #-}

import Data.Aeson
import Data.Aeson.Types (typeMismatch)
import Data.Map (Map)
import Data.Time.Calendar (Day)
import GHC.Generics (Generic)
import Text.Read (readMaybe)

data EarthquakeData = EarthquakeData
    { metaData    :: MetaData
    , earthquakes :: Map Day Earthquake
    } deriving (Eq, Show, Generic)

instance FromJSON EarthquakeData where
    parseJSON = withObject "EarthquakeData" $ \v ->
        EarthquakeData <$> v .: "Meta Data"
                       -- Map k v has a FromJSON instance that just does the right thing
                       -- so just get the payloads with (.:)
                       -- all this code is actually just because your field names are really !#$#~??
                       -- not an Aeson expert, maybe there's a better way
                       <*> v .: "EarthQuakes"

instance ToJSON EarthquakeData where
    toJSON EarthquakeData{..} = object [ "Meta Data"   .= metaData
                                       , "EarthQuakes" .= earthquakes
                                       ]

data MetaData = MetaData { country :: String, region :: String, latestRec :: Day } deriving (Eq, Show)

instance FromJSON MetaData where
    parseJSON = withObject "MetaData" $ \v ->
        -- if you haven't noticed, applicative style is much neater than do
        -- using OverloadedStrings avoids all the pack-ing static
        MetaData <$> v .: "1: Country"
                 <*> v .: "2: Region"
                 <*> v .: "3: Latest Recording"

instance ToJSON MetaData where
    toJSON MetaData{..} = object [ "1: Country"          .= country
                                 , "2: Region"           .= region
                                 , "3: Latest Recording" .= latestRec
                                 ]
    toEncoding MetaData{..} = pairs $ "1: Country"          .= country
                                   <> "2: Region"           .= region
                                   <> "3: Latest Recording" .= latestRec

data Earthquake = Earthquake { richter :: Double } deriving (Eq, Show)

-- Earthquake is a bit funky because your JSON apparently has
-- numbers inside strings?
-- only here do you actually need monadic operations
instance FromJSON Earthquake where
    parseJSON = withObject "Earthquake" $ \v ->
        do string <- v .: "Richter"
           stringNum <- parseJSON string
           case readMaybe stringNum of
               Just num -> return $ Earthquake num
               Nothing  -> typeMismatch "Double inside a String" string

instance ToJSON Earthquake where
    toJSON = object . return . ("Richter" .=) . show . richter
    toEncoding = pairs . ("Richter" .=) . show . richter
I've tested this against your example JSON, and it appears to roundtrip encode and decode successfully.
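A rough sketch of such a roundtrip check (the main wrapper and the file name are assumptions, not part of the answer; it needs import qualified Data.ByteString.Lazy as BL in addition to the code above):
main :: IO ()
main = do
    bytes <- BL.readFile "earthquakes.json"
    case eitherDecode bytes :: Either String EarthquakeData of
        Left err  -> putStrLn ("decode failed: " ++ err)
        -- re-encode and decode again; True means the roundtrip is lossless
        Right dat -> print (eitherDecode (encode dat) == Right dat)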

Cassava parsing error in haskell

I'm trying to convert a CSV into a vector using cassava. The CSV I'm trying to convert is the Fisher iris data set, used for machine learning. It consists of four doubles and one string.
My code is the following:
{-# LANGUAGE OverloadedStrings #-}

module Main where

import Data.Csv
import qualified Data.ByteString.Lazy as BS
import qualified Data.Vector as V

data Iris = Iris
  { sepal_length :: !Double
  , sepal_width  :: !Double
  , petal_length :: !Double
  , petal_width  :: !Double
  , iris_type    :: !String
  } deriving (Show, Eq, Read)

instance FromNamedRecord Iris where
  parseNamedRecord r =
    Iris
      <$> r .: "sepal_length"
      <*> r .: "sepal_width"
      <*> r .: "petal_length"
      <*> r .: "petal_width"
      <*> r .: "iris_type"

printIris :: Iris -> IO ()
printIris r = putStrLn $ show (sepal_length r) ++ show (sepal_width r)
    ++ show (petal_length r) ++ show (petal_length r) ++ "hola"

main :: IO ()
main = do
  csvData <- BS.readFile "./iris/test-iris"
  print csvData
  case decodeByName csvData of
    Left err -> putStrLn err
    -- forM : O(n) Apply the monadic action to all elements of the vector,
    -- yielding a vector of results.
    Right (h, v) -> V.forM_ v $ printIris
When I run this, it seems as if the csvData is correctly formatted; the first lines from print csvData return the following:
"5.1,3.5,1.4,0.2,Iris-setosa\n4.9,3.0,1.4,0.2,Iris- setosa\n4.7,3.2,1.3,0.2,Iris-setosa\n4.6,3.1,1.5,0.2,Iris-setosa\n5.0,3.6,1.4,0.2,Iris-setosa\n5.4,3.9,1.7,0.4,Iris-setosa\n4.6,3.4,1.4,0.3,Iris-setosa\n5.0,3.4,1.5,0.2,Iris-setosa\n4.4,2.9,1.4,0.2,Iris-setosa\n4.9,3.1,1.5,0.1,Iris-setosa\n5.4,3.7,1.5,0.2,Iris-setosa\n4.8,3.4,1.6,0.2,Iris-setosa\n4.8,3.0,1.4,0.1,Iris-setosa\n4.3,3.0,1.1,0.1,Iris-setosa\n5.8,4.0,1.2,0.2,Iris-setosa\n5.7,4.4,1.5,0.4,Iris-set
But I get the following error:
parse error (Failed reading: conversion error: no field named "sepal_length") at
4.7,3.2,1.3,0.2,Iris-setosa
4.6,3.1,1.5,0.2,Iris-setosa
5.0,3.6,1.4,0.2,Iris-setosa
5.4,3.9,1.7,0.4 (truncated)
Does anybody have any idea as to why I could be getting this error? The CSV has no missing values, and if I replace the line which produces the error with another row I get the same error.
It appears your data does not have a header, which is assumed by decodeByName:
The data is assumed to be preceded by a header.
Add a header, or use decode NoHeader and the FromRecord type class; a sketch of the latter follows.
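A minimal sketch of the header-less route (assuming the Iris type and imports from the question, plus mzero from Control.Monad; the index-based instance below is mine, not part of the original answer):
-- Positional parsing: no header row, fields are taken by column index.
instance FromRecord Iris where
  parseRecord v
    | length v == 5 = Iris <$> v .! 0 <*> v .! 1 <*> v .! 2 <*> v .! 3 <*> v .! 4
    | otherwise     = mzero

-- Usage: decode NoHeader csvData :: Either String (V.Vector Iris)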

Haskell, Aeson - how to debug instances?

I have a complex nested JSON, which I'm trying to parse with Aeson and Attoparsec into my custom types, based on info from the questions Haskell, Aeson & JSON parsing into custom type and Aeson: How to convert Value into custom type?, plus some info from the Internet.
When I'm using the following code I'm getting a "Nothing" value from the overlapped FromJSON instance, but the code definitely goes through each instance; I've tested this by disabling some of the other instances. So the main question: how can I test the code in instances and see how the data changes during execution in GHCi?
P.S.: I tried to set breakpoints and "trace", but they worked only in the main and parseCfg functions.
{-# LANGUAGE OverloadedStrings, FlexibleInstances #-}

-- high level data
data Cfg = Cfg { nm     :: CProperty,
                 author :: CProperty,
                 langs  :: CValue,
                 grops  :: CListArr,
                 projs  :: CPropArr
               } deriving (Show)
...

instance FromJSON CProperty where
    parseJSON _ = mzero
    parseJSON (Object o) = CProperty <$> toCProperty o
      where
        toCProperty :: (HM.HashMap T.Text Value) -> J.Parser (T.Text, T.Text)
        toCProperty _ = error "unexpected property"
        toCProperty o' = do
            l <- return $ HM.toList o'
            k <- return $ fst $ head l
            v <- return $ snd $ head l
            v' <- parseJSON v
            return $ (k, v')

... lots of different instances

-- |this instance is specific for different files
-- based on common functions to work with most of nested json code
instance FromJSON Cfg where
    parseJSON _ = mzero
    parseJSON (Object o) = do
        nm     <- (parseJSON :: Value -> J.Parser CProperty) =<< (o .: T.pack "Name")
        autor  <- (parseJSON :: Value -> J.Parser CValue)    =<< (o .: T.pack "Author")
        langs  <- (parseJSON :: Value -> J.Parser CProperty) =<< (o .: T.pack "Languages")
        groups <- (parseJSON :: Value -> J.Parser CListArr)  =<< (o .: T.pack "Groups")
        projs  <- (parseJSON :: Value -> J.Parser CPropArr)  =<< (o .: T.pack "Projects")
        return $ Cfg nm author langs groups projs

------------------------------------------------------------------------------------

main :: IO ()
main = do
    s <- L.readFile "/home/config.json"
    -- print $ show s
    let cfg = parseCfg s
    print $ show $ cfg

parseCfg :: L.ByteString -> Maybe Cfg
parseCfg s = decode s
The obvious problem is that in
instance FromJSON CProperty where
    parseJSON _ = mzero
    parseJSON (Object o) = ...
the first clause matches all input, so your instance returns mzero whatever the argument is. You should change the order of the clauses.
When compiling with warnings, GHC will tell you about the overlapping patterns.
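A minimal sketch of the reordered instance (with toCProperty kept in a where clause or as a top-level helper; note its own clauses need the same reordering):
instance FromJSON CProperty where
    -- the specific Object pattern must come first ...
    parseJSON (Object o) = CProperty <$> toCProperty o
    -- ... and the catch-all clause last, so it only handles non-objects
    parseJSON _          = mzero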

Parsing JSON string into record in Haskell

I'm struggling to understand this (I'm still a bit new to Haskell), and I'm finding the documentation for the Text.JSON package to be a little confusing. Basically I have this data record type:
data Tweet = Tweet
    {
        from_user         :: String,
        to_user_id        :: String,
        profile_image_url :: String,
        created_at        :: String,
        id_str            :: String,
        source            :: String,
        to_user_id_str    :: String,
        from_user_id_str  :: String,
        from_user_id      :: String,
        text              :: String,
        metadata          :: String
    }
and I have some tweets in JSON format that conform to the structure of this type. The thing that I'm struggling with is how to map what gets returned from the following code
decode tweet :: Result JSValue
into the above datatype. I understand that I'm supposed to create an instance of JSON Tweet, but I don't know where to go from there.
Any pointers would be greatly appreciated, thanks!
I'd recommend that you use the new aeson package instead of the json package, as the former performs much better. Here's how you'd convert a JSON object to a Haskell record, using aeson:
{-# LANGUAGE OverloadedStrings #-}

module Example where

import Control.Applicative
import Control.Monad
import Data.Aeson

data Tweet = Tweet
    {
        from_user         :: String,
        to_user_id        :: String,
        profile_image_url :: String,
        created_at        :: String,
        id_str            :: String,
        source            :: String,
        to_user_id_str    :: String,
        from_user_id_str  :: String,
        from_user_id      :: String,
        text              :: String,
        metadata          :: String
    }

instance FromJSON Tweet where
    parseJSON (Object v) =
        Tweet <$> v .: "from_user"
              <*> v .: "to_user_id"
              <*> v .: "profile_image_url"
              <*> v .: "created_at"
              <*> v .: "id_str"
              <*> v .: "source"
              <*> v .: "to_user_id_str"
              <*> v .: "from_user_id_str"
              <*> v .: "from_user_id"
              <*> v .: "text"
              <*> v .: "metadata"
    -- A non-Object value is of the wrong type, so use mzero to fail.
    parseJSON _ = mzero
Then use Data.Aeson.json to get an attoparsec parser that converts a ByteString into a Value. Then call fromJSON on the Value to attempt to parse it into your record. Note that there are two different parsers involved in these two steps: a Data.Attoparsec.Parser parser for converting the ByteString into a generic JSON Value, and then a Data.Aeson.Types.Parser parser for converting the JSON Value into a record. Note that both steps can fail (see the sketch after the two cases below):
The first parser can fail if the ByteString isn't a valid JSON value.
The second parser can fail if the (valid) JSON value doesn't contain one of the fields you mentioned in your parseJSON implementation.
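In more recent aeson versions both steps are bundled by decode/eitherDecode, so a failure in either step surfaces as a single Either. A minimal sketch, assuming a lazy ByteString input, the FromJSON Tweet instance above, and import qualified Data.ByteString.Lazy as BL:
-- eitherDecode runs the attoparsec JSON parser and then the FromJSON
-- parser, returning the first error message encountered.
parseTweet :: BL.ByteString -> Either String Tweet
parseTweet = eitherDecode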
The aeson package prefers the new Unicode type Text (defined in the text package) to the more old school String type. The Text type has a much more memory efficient representation than String and generally performs better. I'd recommend that you change the Tweet type to use Text instead of String.
If you ever need to convert between String and Text, use the pack and unpack functions defined in Data.Text. Note that such conversions require O(n) time, so avoid them as much as possible (i.e. always use Text).
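A tiny illustration of those conversions (assuming import qualified Data.Text as T):
-- String -> Text and back; both conversions are O(n).
greetingText :: T.Text
greetingText = T.pack "hello"

greetingString :: String
greetingString = T.unpack greetingText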
You need to write showJSON and readJSON methods for your type that build your Haskell values out of the JSON format. The JSON package will take care of parsing the raw string into a JSValue for you.
Your tweet will be a JSObject containing a map of strings, most likely.
Use show to look at the JSObject, to see how the fields are laid out.
You can lookup each field using get_field on the JSObject.
You can use fromJSString to get a regular Haskell strings from a JSString.
Broadly, you'll need something like,
{-# LANGUAGE RecordWildCards #-}

import Text.JSON
import Text.JSON.Types

instance JSON Tweet where
    readJSON (JSObject o) = return $ Tweet { .. }
        where from_user         = grab o "from_user"
              to_user_id        = grab o "to_user_id"
              profile_image_url = grab o "profile_image_url"
              created_at        = grab o "created_at"
              id_str            = grab o "id_str"
              source            = grab o "source"
              to_user_id_str    = grab o "to_user_id_str"
              from_user_id_str  = grab o "from_user_id_str"
              from_user_id      = grab o "from_user_id"
              text              = grab o "text"
              metadata          = grab o "metadata"

grab o s = case get_field o s of
    Nothing            -> error ("Invalid field " ++ show s)
    Just (JSString s') -> fromJSString s'
Note, I'm using the rather cool wild cards language extension.
Without an example of the JSON encoding, there's not much more I can advise.
Related: You can find example instances for the JSON encoding via the instances in the source, for simple types, or in other packages that depend on json.
An instance for AUR messages is here, as a (low-level) example.
Import Text.JSON.Generic and Data.Data, then add deriving (Data) to your record type, and then try using decodeJSON on the tweet.
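A sketch of that generic route, assuming the json package's Text.JSON.Generic module and that Tweet gets deriving (Data, Typeable) added (the tweetFromString helper name is made up):
{-# LANGUAGE DeriveDataTypeable #-}

import Data.Data (Data, Typeable)
import Text.JSON.Generic (decodeJSON)

-- decodeJSON works for any type with a Data instance; note it throws an
-- error on malformed input rather than returning a Maybe.
tweetFromString :: String -> Tweet
tweetFromString = decodeJSON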
I support the answer by #tibbe.
However, I would like to add How you check put some default value in case, the argument misses in the JSON provided.
In tibbe's answer you can do the following:
Tweet <$> v .:  "from_user"
      <*> v .:? "to_user_id"        .!= "some user here"
      <*> v .:? "profile_image_url" .!= "url to image"
      <*> v .:  "created_at"
      <*> v .:? "id_str"            .!= "232131"
      <*> v .:  "source"
Note that (.!=) must be paired with the optional (.:?) operator, and the default must have the same type as the field (so a String default for id_str). This allows default values to be used for those fields while parsing the JSON.