Reading a nested list in a CSV column with Cassava

An example is easier than an explanation, so: I would like to parse this data into a datatype with cassava:
title;authors
Cambridge Economic History;Ian MorrisWalter,ScheidelRichard,P Saller
I tried the following, but it does not work (minimal non-working example):
{-# LANGUAGE OverloadedStrings, DeriveGeneric #-}
module Library where
import Data.Char (ord)
import Data.Csv
import Data.List.Split
import qualified Data.ByteString.Lazy as BL
import qualified Data.Vector as V
import qualified Data.Text as T
import GHC.Generics
data Book = Book {
    title   :: T.Text,
    authors :: Authors
  } deriving (Generic, Show)
type Authors = [T.Text]
instance FromNamedRecord Book
instance FromNamedRecord Authors
parseField "authors" =
  pure $ splitOn "," ???
opts = defaultDecodeOptions {
    decDelimiter = fromIntegral (ord ';')
  }
main c = do
  csvData <- BL.readFile "data.csv"
  let res = decodeByNameWith opts csvData :: Either String (Header, V.Vector Book)
Is it possible to do this in cassava? Thanks!

The easiest way to do this is to write a customized FromNamedRecord instance for Book, instead of deriving the Generic one. It would look something like:
instance FromNamedRecord Book where
  parseNamedRecord m = Book <$>
    m .: "title" <*>
    (T.splitOn "," <$> m .: "authors")
Here, m .: "authors" retrieves the authors field as a Text value, and T.splitOn "," is fmapped (<$>) over that result to split the Text into a [Text] on commas.
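For instance, in GHCi (with OverloadedStrings enabled):
λ> T.splitOn "," "Ian MorrisWalter,ScheidelRichard,P Saller"
["Ian MorrisWalter","ScheidelRichard","P Saller"]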
The full program:
{-# LANGUAGE OverloadedStrings #-}
module Library where
import Data.Char (ord)
import Data.Csv
import qualified Data.Vector as V
import qualified Data.Text as T
import qualified Data.ByteString.Lazy as BL
data Book = Book {
    title   :: T.Text,
    authors :: Authors
  } deriving (Show)
type Authors = [T.Text]
instance FromNamedRecord Book where
  parseNamedRecord m = Book <$>
    m .: "title" <*>
    (T.splitOn "," <$> m .: "authors")
-- the file is ';'-separated rather than ','-separated
opts :: DecodeOptions
opts = defaultDecodeOptions {
    decDelimiter = fromIntegral (ord ';')
  }
main :: IO ()
main = do
  csvData <- BL.readFile "data.csv"
  let res = decodeByNameWith opts csvData :: Either String (Header, V.Vector Book)
  print res
giving:
λ> main
Right (["title","authors"],[Book {title = "Cambridge Economic History",
authors = ["Ian MorrisWalter","ScheidelRichard","P Saller"]}])
Note that this doesn't allow you to handle per-author quoting in the author list, so if you need to parse an author with an embedded comma, like:
Another Book;John Smith,"Anne Douglas, Jr."
you'll be out of luck. Cassava will refuse to parse an "authors" field with embedded quotes like this, and you'll end up having to write your own specialized CSV parser, I think.
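That said, if you can get the file written with the whole authors cell quoted in the standard CSV way (inner quotes doubled), e.g. Another Book;"John Smith,""Anne Douglas, Jr.""", one idea is to run cassava a second time on the inner text of that field. This is an untested sketch, reusing the Book and Authors types from the full program above and replacing its FromNamedRecord instance:
import qualified Data.Text.Encoding as TE
-- (plus the imports from the full program above)
-- Decode the authors cell itself as a one-row CSV, so per-author quoting works.
parseAuthors :: T.Text -> Parser Authors
parseAuthors t =
  case decode NoHeader (BL.fromStrict (TE.encodeUtf8 t)) of
    Right rows | Just row <- rows V.!? 0 -> pure row
    _ -> fail "could not parse authors sub-field"
instance FromNamedRecord Book where
  parseNamedRecord m = Book <$>
    m .: "title" <*>
    (parseAuthors =<< m .: "authors")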

Related

Haskell cassava (Data.Csv): Ignore missing columns/fields

How can I set up cassava to ignore missing columns/fields and fill the respective data type with a default value? Consider this example:
{-# LANGUAGE DeriveGeneric #-}
{-# LANGUAGE OverloadedStrings #-}
import Data.ByteString.Lazy.Char8
import Data.Csv
import Data.Vector
import GHC.Generics
data Foo = Foo {
    a :: String
  , b :: Int
  } deriving (Eq, Show, Generic)
instance FromNamedRecord Foo
decodeAndPrint :: ByteString -> IO ()
decodeAndPrint csv = do
  print $ (decodeByName csv :: Either String (Header, Vector Foo))
main :: IO ()
main = do
  decodeAndPrint "a,b,ignore\nhu,1,pu" -- [1]
  decodeAndPrint "ignore,b,a\npu,1,hu" -- [2]
  decodeAndPrint "ignore,b\npu,1"      -- [3]
[1] and [2] work perfectly fine, but [3] fails with
Left "parse error (Failed reading: conversion error: no field named \"a\") at \"\""
How could I make decodeAndPrint capable of handling this incomplete input?
I could of course manipulate the input bytestring, but maybe there is a more elegant solution.
A solution, thanks to Daniel Wagner's input below:
{-# LANGUAGE DeriveGeneric #-}
{-# LANGUAGE OverloadedStrings #-}
import Control.Applicative
import Data.ByteString.Lazy.Char8
import Data.Csv
import Data.Vector
import GHC.Generics
data Foo = Foo {
    a :: Maybe String
  , b :: Maybe Int
  } deriving (Eq, Show, Generic)
instance FromNamedRecord Foo where
  parseNamedRecord rec = pure Foo
    <*> ((Just <$> Data.Csv.lookup rec "a") <|> pure Nothing)
    <*> ((Just <$> Data.Csv.lookup rec "b") <|> pure Nothing)
decodeAndPrint :: ByteString -> IO ()
decodeAndPrint csv = do
  print $ (decodeByName csv :: Either String (Header, Vector Foo))
main :: IO ()
main = do
  decodeAndPrint "a,b,ignore\nhu,1,pu" -- [1]
  decodeAndPrint "ignore,b,a\npu,1,hu" -- [2]
  decodeAndPrint "ignore,b\npu,1"      -- [3]
(Warning: completely untested! Code is for idea transmission only, not suitable for any use, etc. etc.)
The Parser type demanded by FromNamedRecord is an Alternative, so just toss a default on with (<|>).
instance FromNamedRecord Foo where
  parseNamedRecord rec = pure Foo
    <*> (lookup rec "a" <|> pure "missing")
    <*> (lookup rec "b" <|> pure 0)
If you want to know later whether the field was there or not, make your fields rich enough to record that:
data RichFoo = RichFoo
  { a :: Maybe String
  , b :: Maybe Int
  }
instance FromNamedRecord RichFoo where
  parseNamedRecord rec = pure RichFoo
    <*> ((Just <$> lookup rec "a") <|> pure Nothing)
    <*> ((Just <$> lookup rec "b") <|> pure Nothing)

Trouble with JSON (Data.Aeson)

I'm new to Haskell and in order to learn the language I am working on a project that involves dealing with JSON. I am currently getting the feeling Haskell is the wrong language for the job, but that isn't the point here.
I've been struggling to understand how this works for a few days. I have searched and everything I have found does not seem to work. Here's the issue:
I have some JSON in the following format:
>>> less "path/to/json"
{
  "stringA1_stringA2": {"stringA1": floatA1,
                        "stringA2": floatA2},
  "stringB1_stringB2": {"stringB1": floatB1,
                        "stringB2": floatB2}
  ...
}
Here floatX1 and floatX2 are actually strings of the form "0.535613567", "1.221362183" etc. What I want to do is parse this into the following data
data Mydat = Mydat { name :: String, num :: Float} deriving (Show)
where name would correspond to "stringX1_stringX2" and num to floatX1 for X = A,B,...
So far I have reached a 'solution' which feels fairly hackish and convoluted and doesn't work properly.
{-# LANGUAGE OverloadedStrings #-}
{-# LANGUAGE DeriveGeneric #-}
import Data.Functor
import Data.Monoid
import Data.Aeson
import Data.List
import Data.Text
import Data.Map (Map)
import qualified Data.HashMap.Strict as DHM
--import qualified Data.HashMap as DHM
import qualified Data.ByteString.Lazy as LBS
import System.Environment
import GHC.Generics
import Text.Read
data Mydat = Mydat {name :: String, num :: Float} deriving (Show)
test s = do
  d <- LBS.readFile s
  let v = decode d :: Maybe (DHM.HashMap String Object)
  case v of
    -- Just v -> print v
    Just v -> return $ Prelude.map dataFromList $ DHM.toList $ DHM.map (DHM.lookup "StringA1") v
good = ['1','2','3','4','5','6','7','8','9','0','.']
f x = elem x good
dataFromList :: (String, Maybe Value) -> Mydat
dataFromList (a,b) = Mydat a (read (Prelude.filter f (show b)) :: Float)
Now I can compile this and run
test "path/to/json"
in ghci, and it prints a list of Mydats for the case where "stringX1" = "stringA1" for all X. In reality there are two values for "stringX1", so aside from the hackiness this is not satisfactory. There must be a better way to do this. I get that I probably need to write my own parser, but I am confused about how this works, so any suggestions would be great. Thanks in advance.
The structure of your JSON is pretty nasty, but here's a basic working solution:
#!/usr/bin/env stack
-- stack --resolver lts-11.5 script --package containers --package aeson
{-# LANGUAGE OverloadedStrings #-}
import qualified Data.Map as Map
import qualified Data.Aeson as Aeson
data Mydat = Mydat { name :: String
                   , num  :: Float
                   } deriving (Show)
instance Eq Mydat where
  (Mydat _ x1) == (Mydat _ x2) = x1 == x2
instance Ord Mydat where
  (Mydat _ x1) `compare` (Mydat _ x2) = x1 `compare` x2
type MydatRaw = Map.Map String (Map.Map String String)
processRaw :: MydatRaw -> [Mydat]
processRaw = Map.foldrWithKey go []
  where go key value accum =
          accum ++ (Mydat key . read <$> Map.elems value)
main :: IO ()
main =
  do let json = "{\"stringA1_stringA2\":{\"stringA1\":\"0.1\",\"stringA2\":\"0.2\"}}"
     print $ fmap processRaw (Aeson.eitherDecode json)
Note that read is partial and generally not a good idea. But I'll leave it to you to flesh out a safer version :)
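For what it's worth, here is one hedged sketch of such a safer variant, using readMaybe from Text.Read instead of read (values that don't parse as a Float are simply dropped; Mydat and MydatRaw are the types from above):
import Data.Maybe (mapMaybe)
import Text.Read (readMaybe)
processRawSafe :: MydatRaw -> [Mydat]
processRawSafe = Map.foldrWithKey go []
  where go key value accum =
          accum ++ mapMaybe (fmap (Mydat key) . readMaybe) (Map.elems value)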
As I commented, the best thing would probably be to make your JSON file well-formed in the sense that the float fields should really be floats, not strings.
If that's not an option, I would recommend you write out, as simply as possible, the type that the JSON file seems to represent (but without dynamic Objects), and then convert that to the type you actually want.
import Data.Map (Map)
import qualified Data.Map as Map
type GarbledJSON = Map String (Map String String)
-- ^ you could also stick with hash maps for everything, but
--   usually `Map` is actually more sensible in Haskell.
data MyDat = MyDat {name :: String, num :: Float} deriving (Show)
test :: FilePath -> IO [MyDat]
test s = do
  d <- LBS.readFile s
  case decode d :: Maybe GarbledJSON of
    Just v -> return [ MyDat iName ( read . filter (`elem` good)
                                       $ iVals Map.! valKey )
                     | (iName, iVals) <- Map.toList v
                     , let valKey = takeWhile (/='_') iName ]
Note that this will crash completely if any of the items doesn't have the first part of its name as a key mapping to a string in float format, and filtering out characters that aren't in good can easily produce bogus items. If you just want to ignore any malformed items (which is also not a very clean approach...), you can do it this way:
test :: FilePath -> IO [MyDat]
test s = do
  d <- LBS.readFile s
  return $ case decode d :: Maybe GarbledJSON of
    Just v  -> [ MyDat iName iVal
               | (iName, iVals) <- Map.toList v
               , let valKey = takeWhile (/='_') iName
               , Just iValStr <- [iVals Map.!? valKey]
               , [(iVal,"")] <- [reads iValStr] ]
    Nothing -> []

Haskell type mismatch with csv parsing

I'm trying to parse a csv file where I want to ignore the first line and the last line, as in:
Someheader
foo, 1000,
bah, 2000,
somefooter
I wrote some Haskell using the cassava library:
{-# LANGUAGE OverloadedStrings #-}
import Control.Applicative
import qualified Data.ByteString.Lazy as BL
import Data.Csv
import qualified Data.Vector as V
import Control.Monad (mzero)
data Demand = Demand
  { name   :: !String
  , amount :: !Int
  } deriving Show
instance FromRecord Demand where
  parseRecord r
    | length == 2 = Demand <$> r .! 0
                           <*> r .! 1
    | otherwise = mzero
main :: IO ()
main = do
  csvData <- BL.readFile "demand.csv"
  case decode HasHeader csvData of
    Left err -> putStrLn err
    Right (_, v) -> V.forM_ v $ \ p ->
      putStrLn $ (name p) ++ " amount " ++ show (amount p)
When I run this, I get a type mismatch that I can't figure out:
parser.hs:34:15: error:
    • Couldn't match expected type ‘V.Vector a2’
                  with actual type ‘(a1, V.Vector Demand)’
    • In the pattern: (_, v)
      In the pattern: Right (_, v)
My guess is that I haven't unpacked the Vector in the record correctly? Any help, gratefully received.
decode has the type FromRecord a => HasHeader -> ByteString -> Either String (Vector a), according to the cassava documentation.
So the correct pattern would be Right v instead of Right (_, v).
Another problem in the code is that length is a function, and in the guard | length == 2 = ... you didn't apply it to anything. The correct code should instead be | length r == 2 = ...
Here's the complete code after those changes:
{-# LANGUAGE OverloadedStrings #-}
import Control.Applicative
import qualified Data.ByteString.Lazy as BL
import Data.Csv
import qualified Data.Vector as V
import Control.Monad (mzero)
data Demand = Demand
  { name   :: !String
  , amount :: !Int
  } deriving Show
instance FromRecord Demand where
  parseRecord r
    | length r == 2 = Demand <$> r .! 0
                             <*> r .! 1
    | otherwise = mzero
main :: IO ()
main = do
  csvData <- BL.readFile "demand.csv"
  case decode HasHeader csvData of
    Left err -> putStrLn err
    Right v  -> V.forM_ v $ \ p ->
      putStrLn $ (name p) ++ " amount " ++ show (amount p)

Haskell: Segmentation Fault using big CSV file and Cassava

I am trying to use cassava with 2 large CSV files. The code compiles, but just gives 'Segmentation Fault' when I run it. Here is the full code:
{-# LANGUAGE ScopedTypeVariables #-}
{-# LANGUAGE OverloadedStrings #-}
import Control.Applicative
import qualified Data.ByteString.Internal
import qualified Data.ByteString.Lazy as BL
import qualified Data.ByteString.Lazy.Search as BLS
import Data.Csv
import Data.Char
import qualified Data.Vector as V
lang1 :: String
lang1 = "eng"
pattern :: Data.ByteString.Internal.ByteString
pattern = ""
checkIfLang :: String -> Bool
checkIfLang x =
  if x == lang1
    then True
    else False
myOptions = defaultDecodeOptions {
    decDelimiter = fromIntegral (ord '\t')
  }
main :: IO ()
main = do
  sentenceCSV <- BL.readFile "sentenceData/sentences.csv"
  linksCSV <- BL.readFile "sentenceData/links.csv"
  case decodeWith myOptions NoHeader (BLS.replace "\"" pattern sentenceCSV) of
    Left err -> putStrLn err
    Right v -> V.forM_ v $ \ (id :: Int, lang :: String, sentence :: String) ->
      if checkIfLang lang
        then do putStrLn $ show id ++ " has lang of " ++ lang ++ "and is:"
                putStrLn $ sentence
        else return ()
I am unsure what to do, since I can't figure out how to do it from just googling. Any corrections, even if not related to my problem, will help me a lot in learning Haskell. I am extremely new to it.
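One possible direction, as an untested sketch only (and assuming, which is only a guess, that the problem comes from building the entire decoded Vector in memory for such a large file): cassava's streaming interface in Data.Csv.Streaming hands you records one at a time. The quote-stripping step and the links.csv file from the original code are left out here for brevity:
{-# LANGUAGE ScopedTypeVariables #-}
import qualified Data.ByteString.Lazy as BL
import Data.Char (ord)
import Data.Csv (DecodeOptions (..), HasHeader (NoHeader), defaultDecodeOptions)
import qualified Data.Csv.Streaming as S
import Data.Foldable (for_)
main :: IO ()
main = do
  sentenceCSV <- BL.readFile "sentenceData/sentences.csv"
  let opts = defaultDecodeOptions { decDelimiter = fromIntegral (ord '\t') }
  -- Records is Foldable, so for_ walks the file record by record instead of
  -- materialising one huge Vector.
  for_ (S.decodeWith opts NoHeader sentenceCSV) $
    \(sentId :: Int, lang :: String, sentence :: String) ->
      if lang == "eng"
        then putStrLn (show sentId ++ " is: " ++ sentence)
        else return ()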

Parsing a nested array of objects with Aeson

I want to parse a JSON object and create a JSONEvent with the given name and args
I'm using Aeson, and right now I'm stuck on converting "args":[{"a": "b"}] to a [(String, String)].
Thanks in advance!
{-# LANGUAGE OverloadedStrings #-}
import Control.Applicative
import Data.Aeson
data JSONEvent = JSONEvent [(String, String)] (Maybe String) deriving Show
instance FromJSON JSONEvent where
  parseJSON j = do
    o <- parseJSON j
    name <- o .:? "name"
    args <- o .:? "args" .!= []
    return $ JSONEvent args name
decodedEvent = decode "{\"name\":\"edwald\",\"args\":[{\"a\": \"b\"}]}" :: Maybe JSONEvent
Here's a slightly more elaborate example based on ehird's example. Note that the explicit type annotations on the calls to parseJSON are unnecessary, but I find them useful for documentation and debugging purposes. Also, I'm not sure what you intended, but for args with multiple values I simply concatenate all the args together, like so:
*Main> decodedEvent2
Just (JSONEvent [("a","b"),("c","d")] (Just "edwald"))
*Main> decodedEvent3
Just (JSONEvent [("a","b"),("c","d")] (Just "edwald"))
Here's the code:
{-# LANGUAGE OverloadedStrings #-}
import Control.Applicative
import qualified Data.Text as T
import qualified Data.Foldable as F
import qualified Data.HashMap.Lazy as HM
import qualified Data.Vector as V
import Data.Aeson
import qualified Data.Attoparsec as P
import Data.Aeson.Types (Parser)
import qualified Data.Aeson.Types as DAT
import qualified Data.String as S
data JSONEvent = JSONEvent [(String, String)] (Maybe String) deriving Show
instance FromJSON JSONEvent where
  parseJSON = parseJSONEvent
decodedEvent  = decode "{\"name\":\"edwald\",\"args\":[{\"a\": \"b\"}]}" :: Maybe JSONEvent
decodedEvent2 = decode "{\"name\":\"edwald\",\"args\":[{\"a\": \"b\"}, {\"c\": \"d\"}]}" :: Maybe JSONEvent
decodedEvent3 = decode "{\"name\":\"edwald\",\"args\":[{\"a\": \"b\", \"c\": \"d\"}]}" :: Maybe JSONEvent
emptyAesonArray :: Value
emptyAesonArray = Array $ V.fromList []
parseJSONEvent :: Value -> Parser JSONEvent
parseJSONEvent v =
  case v of
    Object o -> do
      name <- o .:? "name"
      argsJSON <- o .:? "args" .!= emptyAesonArray
      case argsJSON of
        Array m -> do
          parsedList <- V.toList <$> V.mapM (parseJSON :: Value -> Parser (HM.HashMap T.Text Value)) m
          let parsedCatList = concatMap HM.toList parsedList
          args <- mapM (\(key, value) -> (,) <$> (return (T.unpack key)) <*> (parseJSON :: Value -> Parser String) value) parsedCatList
          return $ JSONEvent args name
        _ -> fail ((show argsJSON) ++ " is not an Array.")
    _ -> fail ((show v) ++ " is not an Object.")
-- Useful for debugging aeson parsers
decodeWith :: (Value -> Parser b) -> String -> Either String b
decodeWith p s = do
  value <- P.eitherResult $ (P.parse json . S.fromString) s
  DAT.parseEither p value
I'm not an aeson expert, but if you have Object o, then o is simply a HashMap Text Value; you could use Data.HashMap.Lazy.toList to convert it into [(Text, Value)], and Data.Text.unpack to convert the Texts into Strings.
So, presumably you could write:
import Control.Arrow
import Control.Applicative
import qualified Data.Text as T
import qualified Data.Foldable as F
import qualified Data.HashMap.Lazy as HM
import Data.Aeson
instance FromJSON JSONEvent where
  parseJSON j = do
    o <- parseJSON j
    name <- o .:? "name"
    Object m <- o .:? "args" .!= []
    args <- map (first T.unpack) . HM.toList <$> F.mapM parseJSON m
    return $ JSONEvent args name