How to read FileHelpers quoted fields that contain a quote character?
Below is my CSV record:
"1","7" Screen","Mobile"
Model:
[DelimitedRecord(",")]
public class LineModel
{
[FieldQuoted('"', QuoteMode.OptionalForBoth)]
public string Id;
[FieldQuoted('"', QuoteMode.OptionalForBoth)]
public string Details;
[FieldQuoted('"', QuoteMode.OptionalForBoth)]
public string Device;
}
Getting this error for the above record:
The field Details is quoted but the quoted char: " not is just before the separator (You can use [FieldTrim] to avoid this error)
QuoteMode does not work very well when you have ambiguous quotes in your input file. Instead, you can remove the [FieldQuoted] attributes and handle the quotes in a custom converter.
[DelimitedRecord(",")]
public class LineModel
{
[FieldConverter(typeof(MyQuotedFieldConverter))]
public string Id;
[FieldConverter(typeof(MyQuotedFieldConverter))]
public string Details;
[FieldConverter(typeof(MyQuotedFieldConverter))]
public string Device;
}
public class MyQuotedFieldConverter : ConverterBase
{
public override object StringToField(string from)
{
// If the field starts and ends with a double quote
if (from.StartsWith("\"") && from.EndsWith("\""))
{
// Remove the first and last characters (the surrounding quotes)
return from.Substring(1, from.Length - 2);
}
return from;
}
}
Of course, you'll then have trouble if you have "," within your fields:
"1","7, Screen","Mobile"
In which case, you have to pre-parse the record line to clean up the input by implementing the INotifyRead interface. Something like:
[DelimitedRecord(",")]
public class LineModel : INotifyRead
{
//... fields as before
public void BeforeRead(BeforeReadEventArgs e)
{
if (e.RecordLine.Count(x => x == ',') > 2) // a valid 3-field record has exactly 2 separators
{
e.RecordLine = DetectAndReplaceEmbeddedDelimiters(e.RecordLine);
}
}
public void AfterRead(AfterReadEventArgs e)
{
}
}
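The DetectAndReplaceEmbeddedDelimiters helper above is left as an exercise; its core logic is language-agnostic. Here is a minimal sketch in Java (the question is C#, but the string handling translates directly; the helper name and the ';' placeholder are assumptions, not part of FileHelpers):

```java
public class EmbeddedDelimiterFixer {
    // Split the record on the quote-comma-quote sequence that separates
    // fields, so a comma inside a quoted field never acts as a boundary,
    // then swap embedded commas for a placeholder character.
    public static String replaceEmbeddedDelimiters(String line) {
        String[] fields = line.split("\",\"", -1);
        for (int i = 0; i < fields.length; i++) {
            fields[i] = fields[i].replace(",", ";");
        }
        return String.join("\",\"", fields);
    }
}
```

With the sample record, `"1","7, Screen","Mobile"` becomes `"1","7; Screen","Mobile"`, which then parses cleanly into three fields.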
Another approach is to consider the reverse: use the custom converter to add quotes to every field and remove/replace embedded quotes. Then use QuoteMode.AlwaysQuoted.
Goal
I am trying to push some data to a MongoDB using mongojack.
I expect the result to be something like this in the db:
{
"_id": "840617013772681266",
"messageCount": 69420,
"seedCount": 18,
"prefix": "f!",
"language": "en"
}
Problem
Instead, I get this error in my console.
Caused by: java.lang.IllegalArgumentException: invalid hexadecimal representation of an ObjectId: [840617013772681266]
at org.bson.types.ObjectId.parseHexString(ObjectId.java:390)
at org.bson.types.ObjectId.<init>(ObjectId.java:193)
at org.mongojack.internal.ObjectIdSerializer.serialiseObject(ObjectIdSerializer.java:66)
at org.mongojack.internal.ObjectIdSerializer.serialize(ObjectIdSerializer.java:49)
at com.fasterxml.jackson.databind.ser.BeanPropertyWriter.serializeAsField(BeanPropertyWriter.java:728)
at com.fasterxml.jackson.databind.ser.std.BeanSerializerBase.serializeFields(BeanSerializerBase.java:770)
... 59 more
Code
This is the code that gets called when I try to create a new Guild in the db:
public static Guild getGuild(String id) throws ExecutionException {
return cache.get(id);
}
cache is the following (load gets executed):
private static LoadingCache<String, Guild> cache = CacheBuilder.newBuilder()
.expireAfterAccess(10, TimeUnit.MINUTES)
.build(
new CacheLoader<>() {
@Override
public Guild load(@NotNull String id) {
return findGuild(id).orElseGet(() -> new Guild(id, "f!"));
}
});
The findGuild method that gets called first:
public static Optional<Guild> findGuild(String id) {
return Optional.ofNullable(guildCollection.find()
.filter(Filters.eq("_id", id)).first());
}
And finally the Guild document.
@Getter
@Setter
public class Guild implements Model {
public Guild(String id, String prefix) {
this.id = id;
this.prefix = prefix;
}
public Guild() {
}
private String id;
/*
If a Discord guild sent 1,000,000,000 messages per second,
it would take roughly 292471 years to reach the long primitive limit.
*/
private long messageCount;
private long seedCount;
// The default language is specified in BotValues.java's bot.yaml.
private String language;
private String prefix;
@ObjectId
@JsonProperty("_id")
public String getId() {
return id;
}
@ObjectId
@JsonProperty("_id")
public void setId(String id) {
this.id = id;
}
}
What I've tried
I've tried multiple things, such as doing Long.toHexString(Long.parseLong(id)). The truth is I don't understand the error completely, and after reading the documentation I'm left with more questions than answers.
ObjectId is a 12-byte value that is commonly represented as a sequence of 24 hex digits. It is not an integer.
You can either create ObjectId values using the appropriate ObjectId constructor or parse a 24-hex-digit string. You appear to be trying to perform an integer conversion to ObjectId which generally isn't a supported operation.
You can technically convert the integer 840617013772681266 to an ObjectId by zero-padding it to 12 bytes, but standard MongoDB driver tooling doesn't do that for you and considers this invalid input (either as an integer or as a string) for conversion to ObjectId.
Example in Ruby:
irb(main):011:0> (v = '%x' % 840617013772681266) + '0' * (24 - v.length)
=> "baa78b862120032000000000"
Note that while the resulting value would be parseable as an ObjectId, it isn't constructed following the ObjectId rules and thus the value cannot be sensibly decomposed into the ObjectId components (machine id, counter and a random value).
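For comparison, the same zero-padding can be written in Java (purely illustrative; as noted above, the drivers treat such values as invalid input for ObjectId conversion):

```java
public class ObjectIdPad {
    // Right-pad the hex form of the integer with zeros out to
    // 24 hex digits (12 bytes), mirroring the Ruby one-liner above.
    public static String padToObjectIdHex(long value) {
        String hex = Long.toHexString(value);
        return hex + "0".repeat(24 - hex.length());
    }

    public static void main(String[] args) {
        System.out.println(padToObjectIdHex(840617013772681266L));
        // prints baa78b862120032000000000
    }
}
```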
A CSV with trailing commas like this:
name, phone
joe, 123-456-7890,
bob, 333-555-6666,
processed like this:
CSVReaderHeaderAware r = new CSVReaderHeaderAware(reader);
Map<String, String> values = r.readMap();
will throw this exception:
java.io.IOException: Error on record number 2: The number of data elements is not the same as the number of header elements
For now I'm stripping commas from input files using sed:
find . -type f -exec sed -i 's/,*\r*$//' {} \;
Is there some easy way to tell OpenCSV to ignore trailing commas?
OpenCSV maintainers commented here. As of OpenCSV v5.1 there is no simple way to accomplish this, and pre-processing the file using sed, etc. is best for now.
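If shelling out to sed is not an option, the same cleanup can be done in memory before handing the text to OpenCSV. A minimal sketch in plain Java (the class name is mine; no OpenCSV involved):

```java
import java.util.stream.Collectors;

public class TrailingCommaStripper {
    // Mirrors the sed expression s/,*\r*$// : strip trailing commas
    // (and any stray carriage return) from the end of every line.
    public static String strip(String csv) {
        return csv.lines()
                  .map(line -> line.replaceAll(",*\\r*$", ""))
                  .collect(Collectors.joining("\n"));
    }
}
```

The cleaned string can then be wrapped in a StringReader and passed to CSVReaderHeaderAware as usual.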
According to the link provided in @Andrew's answer, it's malformed CSV input.
But as the maintainer himself suggests (here):
If you know you will always have single-line records, you could
derive a class from CSVReader, override getNextLine() to call
super.getNextLine(), then cut off the trailing comma, and of course,
pass your new reader into opencsv to use in parsing.
In other words, create your own CustomCSVReader and remove the last comma.
Here's an example:
import com.opencsv.CSVReader;
import java.io.IOException;
import java.io.Reader;
public class CustomCSVReader extends CSVReader {
public CustomCSVReader(Reader reader) {
super(reader);
}
@Override
protected String getNextLine() throws IOException {
String line = super.getNextLine();
if (line == null) {
return null;
}
boolean endsWithComma = line.endsWith(",");
if (endsWithComma) {
return line.substring(0, line.length() - 1);
}
return line;
}
}
The Model Converter using CustomCSVReader
public class CustomCSVParser {
public List<User> convert(String data) {
return new CsvToBeanBuilder<User>(new CustomCSVReader(new StringReader(data)))
.withType(User.class)
.build()
.parse();
}
}
The Model class
import com.opencsv.bean.CsvBindByName;
public class User {
@CsvBindByName(column = "name")
private String userName;
@CsvBindByName(column = "phone")
private String phoneNumber;
// Constructor, Getters and Setters omitted
}
Test Class
class CustomCSVParserTest {
private CustomCSVParser instance;
@BeforeEach
void setUp() {
instance = new CustomCSVParser();
}
@Test
void csvInput_withCommaInLastLine_mustBeParsed() {
String data = "name, phone\n"
+ "joe, 123-456-7890,\n"
+ "bob, 333-555-6666,";
List<User> result = instance.convert(data);
List<User> expectedResult = Arrays.asList(
new User("joe", "123-456-7890"),
new User("bob", "333-555-6666"));
Assertions.assertArrayEquals(expectedResult.toArray(), result.toArray());
}
}
That's it.
I want to call this function from other pages, but I don't know how.
public class registering_class_file
{
public KeyValuePair<Literal, Literal> settingfunc1(Literal lit_pub_adver_barcap1, Literal lit_div_adver_start1)
{
return new KeyValuePair<Literal, Literal>(lit_pub_adver_barcap1, lit_div_adver_start1);
}
}
First, I recommend that you include your class in a namespace, like this:
namespace NameA
{
public class registering_class_file
{
public KeyValuePair<Literal, Literal> settingfunc1(Literal lit_pub_adver_barcap1, Literal lit_div_adver_start1)
{
return new KeyValuePair<Literal, Literal>(lit_pub_adver_barcap1, lit_div_adver_start1);
}
}
}
Then you can use it in another file with a using directive:
using NameA;
namespace NameB
{
public class B
{
private registering_class_file regClass; // references NameA.registering_class_file
...
private KeyValuePair<Literal, Literal> register(Literal key, Literal value)
{
return regClass.settingfunc1(key, value);
}
}
}
Thank you! My answer is:
Literal lit_pub_adver_barcap1 = new Literal();
Literal lit_div_adver_start1 = new Literal();
registering_class_file ss = new registering_class_file();
KeyValuePair<Literal, Literal> newValue = ss.settingfunc1(lit_pub_adver_barcap1, lit_div_adver_start1);
Creating a MySQL table from a Grails domain class does not generate table and column names in uppercase letters; table names are created in lowercase. Even when reverse-engineering tables whose names are in uppercase, the generated domain class uses lowercase only. How can I create tables with uppercase table and column names?
You can customize table names with a custom NamingStrategy. By default Grails uses an ImprovedNamingStrategy, but you can use your own, as described in the docs: http://grails.org/doc/latest/guide/GORM.html#customNamingStrategy
This subclass of ImprovedNamingStrategy will generate uppercase names:
package com.foo.bar
import org.hibernate.cfg.ImprovedNamingStrategy
class UppercaseNamingStrategy extends ImprovedNamingStrategy {
private static final long serialVersionUID = 1
String classToTableName(String className) {
super.classToTableName(className).toUpperCase()
}
String collectionTableName(String ownerEntity, String ownerEntityTable, String associatedEntity, String associatedEntityTable, String propertyName) {
super.collectionTableName(ownerEntity, ownerEntityTable, associatedEntity, associatedEntityTable, propertyName).toUpperCase()
}
String logicalCollectionTableName(String tableName, String ownerEntityTable, String associatedEntityTable, String propertyName) {
super.logicalCollectionTableName(tableName, ownerEntityTable, associatedEntityTable, propertyName).toUpperCase()
}
String tableName(String tableName) {
super.tableName(tableName).toUpperCase()
}
}
Specify it in DataSource.groovy in the hibernate block:
hibernate {
...
naming_strategy = com.foo.bar.UppercaseNamingStrategy
}
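To see what names this produces: ImprovedNamingStrategy converts camelCase class names to underscore-separated names, and the subclass then uppercases the result. A rough plain-Java imitation of that conversion (not the actual Hibernate code, just an illustration of the expected output):

```java
public class UppercaseNameDemo {
    // Approximates ImprovedNamingStrategy.classToTableName(...)
    // followed by toUpperCase(): camelCase -> UPPER_SNAKE_CASE.
    public static String toTableName(String className) {
        return className.replaceAll("([a-z0-9])([A-Z])", "$1_$2").toUpperCase();
    }

    public static void main(String[] args) {
        System.out.println(toTableName("UserProfile")); // USER_PROFILE
    }
}
```

So a domain class named UserProfile would map to a table called USER_PROFILE instead of the default user_profile.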
I agree with Burt. You can also change the generated column names in the database to uppercase by overriding the other methods:
public String propertyToColumnName(String propertyName) {
return super.propertyToColumnName(propertyName).toUpperCase();
}
public String columnName(String columnName) {
return super.columnName(columnName).toUpperCase();
}
public String joinKeyColumnName(String joinedColumn, String joinedTable) {
return super.joinKeyColumnName( joinedColumn, joinedTable ).toUpperCase();
}
public String foreignKeyColumnName(String propertyName, String propertyEntityName, String propertyTableName, String referencedColumnName) {
return super.foreignKeyColumnName(propertyName, propertyEntityName, propertyTableName, referencedColumnName).toUpperCase();
}
public String logicalColumnName(String columnName, String propertyName) {
return super.logicalColumnName(columnName, propertyName).toUpperCase();
}
public String logicalCollectionColumnName(String columnName, String propertyName, String referencedColumn) {
return super.logicalCollectionColumnName(columnName, propertyName, referencedColumn).toUpperCase();
}
I need to parse this JSON string into values:
"start": { "dateTime": "2013-02-02T15:00:00+05:30" }, "end": { "dateTime": "2013-02-02T16:00:00+05:30" },
The problem is that I am using JSONParser in Apex (Salesforce).
And my class is:
public class wrapGoogleData{
public string summary{get;set;}
public string id{get;set;}
public string status;
public creator creator;
public start start;
public wrapGoogleData(string entnm,string ezid,string sta, creator c,start s){
summary= entnm;
id= ezid;
status = sta;
creator = c;
start = s;
}
}
public class creator{
public string email;
public string displayName;
public string self;
}
public class start{
public string datetimew;
}
I am able to get all the data from this except the dateTime in the above string. As datetime is a reserved keyword in Apex, I am not able to give the variable the name datetime in my class.
Any suggestions?
JSON parser code:
JSONParser parser = JSON.createParser(jsonData );
while (parser.nextToken() != null) {
// Start at the array of invoices.
if (parser.getCurrentToken() == JSONToken.START_ARRAY) {
while (parser.nextToken() != null) {
// Advance to the start object marker to
// find next invoice statement object.
if (parser.getCurrentToken() == JSONToken.START_OBJECT) {
// Read entire invoice object, including its array of line items.
wrapGoogleData inv = (wrapGoogleData)parser.readValueAs(wrapGoogleData.class);
String s = JSON.serialize(inv);
system.debug('Serialized invoice: ' + s);
// Skip the child start array and start object markers.
//parser.skipChildren();
lstwrap.put(inv.id,inv);
}
}
}
}
Similar to Kumar's answer but without using an external app.
Changing your start class was the right idea:
public class start{
public string datetimew;
}
Now, just parse the JSON before you run it through the deserializer.
string newjsondata = jsonData.replace('"dateTime"','"datetimew"');
JSONParser parser = JSON.createParser(newjsondata);
while (parser.nextToken() != null) {
...
}
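The key-renaming trick is plain string substitution, so it can be shown in Java for illustration (Apex's String.replace behaves like Java's here; the class name is mine):

```java
public class KeyRenameDemo {
    // Rename the reserved JSON key before deserializing, so the
    // field name in the wrapper class can legally be "datetimew".
    public static String renameKey(String json) {
        return json.replace("\"dateTime\"", "\"datetimew\"");
    }

    public static void main(String[] args) {
        String jsonData = "{\"start\": {\"dateTime\": \"2013-02-02T15:00:00+05:30\"}}";
        System.out.println(renameKey(jsonData));
        // prints {"start": {"datetimew": "2013-02-02T15:00:00+05:30"}}
    }
}
```

Because the quotes are included in the search string, only JSON keys are rewritten, not date values that happen to contain similar text.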
Use the String.replace() function and replace keys named dateTime with something like dateTime__x; then you can parse using JSON.deserialize if you have converted your JSON to Apex classes using the JSON-to-Apex converter app on the Heroku platform:
http://json2apex.herokuapp.com/
The above link points to an app that will convert JSON into Apex classes, and then you can use JSON.deserialize to parse the JSON into the Apex class structure.