Mongojack: invalid hexadecimal representation of an ObjectId

Goal
I am trying to push some data to a mongo db using mongojack.
I expect the result to be something like this in the db:
{
"_id": "840617013772681266",
"messageCount": 69420,
"seedCount": 18,
"prefix": "f!",
"language": "en"
}
Problem
Instead, I get this error in my console.
Caused by: java.lang.IllegalArgumentException: invalid hexadecimal representation of an ObjectId: [840617013772681266]
at org.bson.types.ObjectId.parseHexString(ObjectId.java:390)
at org.bson.types.ObjectId.<init>(ObjectId.java:193)
at org.mongojack.internal.ObjectIdSerializer.serialiseObject(ObjectIdSerializer.java:66)
at org.mongojack.internal.ObjectIdSerializer.serialize(ObjectIdSerializer.java:49)
at com.fasterxml.jackson.databind.ser.BeanPropertyWriter.serializeAsField(BeanPropertyWriter.java:728)
at com.fasterxml.jackson.databind.ser.std.BeanSerializerBase.serializeFields(BeanSerializerBase.java:770)
... 59 more
Code
This is the code that gets called when I try to create a new Guild in the db:
public static Guild getGuild(String id) throws ExecutionException {
return cache.get(id);
}
cache is the following (load gets executed):
private static LoadingCache<String, Guild> cache = CacheBuilder.newBuilder()
.expireAfterAccess(10, TimeUnit.MINUTES)
.build(
new CacheLoader<>() {
@Override
public Guild load(@NotNull String id) {
return findGuild(id).orElseGet(() -> new Guild(id, "f!"));
}
});
The findGuild method that gets called first:
public static Optional<Guild> findGuild(String id) {
return Optional.ofNullable(guildCollection.find()
.filter(Filters.eq("_id", id)).first());
}
And finally the Guild document.
@Getter
@Setter
public class Guild implements Model {
public Guild(String id, String prefix) {
this.id = id;
this.prefix = prefix;
}
public Guild() {
}
private String id;
/*
If a Discord guild sent 1,000,000,000 messages per second,
it would take roughly 292471 years to reach the long primitive limit.
*/
private long messageCount;
private long seedCount;
// The default language is specified in BotValues.java's bot.yaml.
private String language;
private String prefix;
@ObjectId
@JsonProperty("_id")
public String getId() {
return id;
}
@ObjectId
@JsonProperty("_id")
public void setId(String id) {
this.id = id;
}
}
What I've tried
I've tried multiple things, such as Long.toHexString(Long.parseLong(id)). The truth is I don't completely understand the error, and after reading the documentation I'm left with more questions than answers.

ObjectId is a 12-byte value that is commonly represented as a sequence of 24 hex digits. It is not an integer.
You can either create ObjectId values using the appropriate ObjectId constructor or parse a 24-hex-digit string. You appear to be trying to perform an integer conversion to ObjectId, which generally isn't a supported operation.
You can technically convert the integer 840617013772681266 to an ObjectId by zero-padding it to 12 bytes, but standard MongoDB driver tooling doesn't do that for you and considers this invalid input (either as an integer or as a string) for conversion to ObjectId.
Example in Ruby:
irb(main):011:0> (v = '%x' % 840617013772681266) + '0' * (24 - v.length)
=> "baa78b862120032000000000"
Note that while the resulting value would be parseable as an ObjectId, it isn't constructed following the ObjectId rules, and thus the value cannot be sensibly decomposed into the ObjectId components (timestamp, random value and counter).
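For the Mongojack code in the question, the likely fix (a sketch, assuming the Discord snowflake should be stored as a plain string _id, exactly as in the expected document) is to drop the @ObjectId annotations so ObjectIdSerializer never tries to hex-decode the id:

@Getter
@Setter
public class Guild implements Model {

    // Stored as a plain string "_id" (e.g. "840617013772681266"); without @ObjectId
    // the serializer writes the snowflake as an ordinary string.
    private String id;

    private long messageCount;
    private long seedCount;
    private String language;
    private String prefix;

    public Guild() {
    }

    public Guild(String id, String prefix) {
        this.id = id;
        this.prefix = prefix;
    }

    @JsonProperty("_id")
    public String getId() {
        return id;
    }

    @JsonProperty("_id")
    public void setId(String id) {
        this.id = id;
    }
}

If you genuinely wanted ObjectId values for _id, you would let the driver generate them with new ObjectId(), or parse a real 24-hex-digit string, rather than deriving them from the snowflake.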

Related

How can I test a spring-cloud-contract containing a java.time.Instant field

I want to test a contract where one field is of type java.time.Instant. But not all instances of an Instant are handled as I expect by spring-cloud-contract. Given the following simple contract:
Contract.make {
description("Get a version")
request {
method 'GET'
url '/config/version'
headers {
contentType(applicationJson())
}
}
response {
status 200
body(
nr: 42,
creationDate: producer(anyIso8601WithOffset())
)
headers {
contentType(applicationJson())
}
}
}
And this service implementation:
@RestController
public class VersionController {
@GetMapping(path = "/version")
public ResponseEntity<Version> getCurrentVersion() {
return ResponseEntity.ok(new Version(42, Instant.ofEpochMilli(0)));
}
}
Executing gradle test works fine. But if I replace the Instant with Instant.now(), my provider test fails with
java.lang.IllegalStateException: Parsed JSON [{"nr":42,"creationDate":"2018-11-11T15:28:26.958284Z"}] doesn't match the JSON path [$[?(@.['creationDate'] =~ /([0-9]{4})-(1[0-2]|0[1-9])-(3[01]|0[1-9]|[12][0-9])T(2[0-3]|[01][0-9]):([0-5][0-9]):([0-5][0-9])(\.\d{3})?(Z|[+-][01]\d:[0-5]\d)/)]]
which is understandable because Instant.now() produces an Instant whose string representation indeed does not match the anyIso8601WithOffset() pattern. But why is this? Why are Instants represented differently, and how can I describe a contract that validates for any instant?
Ok, I found a solution that works for me, although I do not know if this is the way to go.
The mismatch comes from the fraction of a second: Instant.ofEpochMilli(0) has no sub-second part, while Instant.now() on Java 9+ carries microsecond (or finer) precision, so its string representation has more than the three fractional digits allowed by the (\.\d{3})? part of the anyIso8601WithOffset() pattern.
In order to always get the exact same format of the serialized instant, I define the format of the corresponding property of my version bean as follows:
public class Version {
private final int nr;
private final Instant creationDate;
@JsonCreator
public Version(
@JsonProperty("nr") int nr,
@JsonProperty("creationDate") Instant creationDate)
{
this.nr = nr;
this.creationDate = creationDate;
}
public int getNr() {
return nr;
}
@JsonFormat(pattern = "yyyy-MM-dd'T'HH:mm:ss.SSSX", timezone = "UTC")
public Instant getCreationDate() {
return creationDate;
}
}
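To check the serialized shape without running the contract test, here is a small sketch (the class name is mine; it assumes jackson-datatype-jsr310 is on the classpath and registered the way Spring Boot does by default) that serializes the bean directly:

import java.time.Instant;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.SerializationFeature;
import com.fasterxml.jackson.datatype.jsr310.JavaTimeModule;

public class VersionSerializationCheck {
    public static void main(String[] args) throws Exception {
        // Mirror Spring Boot's defaults: JavaTimeModule registered,
        // dates written as ISO-8601 strings rather than numeric timestamps.
        ObjectMapper mapper = new ObjectMapper()
                .registerModule(new JavaTimeModule())
                .disable(SerializationFeature.WRITE_DATES_AS_TIMESTAMPS);

        // With the @JsonFormat pattern above, the fraction is always exactly
        // three digits, e.g. "2018-11-11T15:28:26.958Z", which matches
        // anyIso8601WithOffset().
        System.out.println(mapper.writeValueAsString(new Version(42, Instant.now())));
    }
}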

How to read csv data one by one and pass it in multiple testNG tests

I need to insert data multiple times in a web application. I am using Selenium with TestNG along with a data-driven framework.
I am using a CSV file for reading the input values.
Please find the sample code below.
public class TestData
{
private static String firstName;
public static String lastName;
@BeforeClass
public void beforeClass() throws IOException
{
reader = new CSVReader(new FileReader(fileName));
while((record = reader.readNext()) != null)
{
firstName = record[0];
lastName = record[1];
}
}
@Test
public void test1()
{
driver.findElement(By.id(id)).sendKeys(firstName);
driver.findElement(By.id(id)).click();
// and so on...
}
@Test
public void test2()
{
driver.findElement(By.id(id)).sendKeys(lastName);
driver.findElement(By.id(id)).click();
// and so on...
}
}
Here, I need to insert 3 records, but when I use the above code, only the 3rd record gets inserted.
Kindly help me to fix this issue.
Sample Input File
What you need here is a Factory powered by a DataProvider. The Factory produces test class instances (a test class here is just a regular class that contains one or more @Test methods). The data provider feeds the factory method with the data required to instantiate the test class.
Your @Test methods then work with the data members of those instances to run their logic.
Here's a simple sample that shows this in action.
import org.assertj.core.api.Assertions;
import org.testng.annotations.DataProvider;
import org.testng.annotations.Factory;
import org.testng.annotations.Test;
public class TestClassSample {
private String firstName;
private String lastName;
@Factory(dataProvider = "dp")
public TestClassSample(String firstName, String lastName) {
this.firstName = firstName;
this.lastName = lastName;
}
@DataProvider(name = "dp")
public static Object[][] getData() {
//feel free to replace this with the logic that reads up a csv file (using CSVReader)
// and then translates it to a 2D array.
return new Object[][]{
{"Mohan", "Kumar"},
{"Kane", "Williams"},
{"Mark", "Henry"}
};
}
@Test
public void test1() {
Assertions.assertThat(this.firstName).isNotEmpty();
}
@Test
public void test2() {
Assertions.assertThat(this.lastName).isNotEmpty();
}
}
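For the CSV part that the comment in getData() alludes to, here is a hedged sketch using opencsv's CSVReader (the class name, file path and column order are assumptions; adjust them to your input file):

import java.io.FileReader;
import java.util.List;
import com.opencsv.CSVReader;
import org.testng.annotations.DataProvider;

public class CsvDataProviders {

    @DataProvider(name = "dp")
    public static Object[][] namesFromCsv() throws Exception {
        // Read every record up front; each CSV row becomes one constructor
        // invocation of the factory-created test class.
        try (CSVReader reader = new CSVReader(new FileReader("testdata/names.csv"))) {
            List<String[]> rows = reader.readAll();
            Object[][] data = new Object[rows.size()][];
            for (int i = 0; i < rows.size(); i++) {
                data[i] = new Object[]{rows.get(i)[0], rows.get(i)[1]};
            }
            return data;
        }
    }
}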
As per the data given by you, the while loop runs through to the last (third) record of the CSV file, and in each iteration the variables firstName and lastName are overwritten.
When the loop ends, the variables hold only the last values written, so use a data structure that stores all the values; I recommend a map.
You can further club the test cases into a single method and use the invocationCount attribute of the @Test annotation to repeat the execution for each entry of the map, plus one more method with @BeforeTest to advance to the next key in the map.
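A rough sketch of that idea, with two deliberate deviations labelled here: a List of CSV records instead of a map (simpler to index), and @BeforeMethod instead of @BeforeTest, since @BeforeMethod runs before every test invocation. The CSV loading itself would be the loop from the question, adding to the list instead of overwriting fields.

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;
import org.testng.annotations.BeforeClass;
import org.testng.annotations.BeforeMethod;
import org.testng.annotations.Test;

public class InvocationCountSample {

    private final List<String[]> records = new ArrayList<>();
    private final AtomicInteger index = new AtomicInteger(-1);
    private String firstName;
    private String lastName;

    @BeforeClass
    public void loadCsv() {
        // Replace with the CSVReader loop from the question,
        // adding each record to the list instead of overwriting fields.
        records.add(new String[]{"Mohan", "Kumar"});
        records.add(new String[]{"Kane", "Williams"});
        records.add(new String[]{"Mark", "Henry"});
    }

    @BeforeMethod
    public void nextRecord() {
        String[] record = records.get(Math.min(index.incrementAndGet(), records.size() - 1));
        firstName = record[0];
        lastName = record[1];
    }

    @Test(invocationCount = 3) // one invocation per CSV record
    public void insertRecord() {
        // driver.findElement(...).sendKeys(firstName) etc. would go here
        System.out.println(firstName + " " + lastName);
    }
}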

saving large object takes too long on hibernate

I have an object with a Blob column requestData and a Text column requestDataText.
These two fields may hold large data. In my example, the blob data is around 1.2 MB and the text column holds the text equivalent of that data.
When I try to commit this single entity, it takes around 20 seconds.
DBUtil.beginTransaction();
session.saveOrUpdate(entity);
DBUtil.commitTransaction();
Is there something wrong, or is there a way to shorten this period?
package a.db.entity;
// Generated Feb 22, 2016 11:57:10 AM by Hibernate Tools 3.2.1.GA
/**
* Foo generated by hbm2java
*/
#Entity
#Table(name="foo"
,catalog="bar"
)
public class Foo implements java.io.Serializable {
private Long id;
private Date reqDate;
private byte[] requestData;
private String requestDataText;
private String functionName;
private boolean confirmed;
private boolean processed;
private boolean errorOnProcess;
private Date processStartedAt;
private Date processFinishedAt;
private String responseText;
private String processResult;
private String miscData;
public Foo() {
}
@Id @GeneratedValue(strategy=IDENTITY)
@Column(name="Id", unique=true, nullable=false)
public Long getId() {
return this.id;
}
public void setId(Long id) {
this.id = id;
}
...
}
I just noticed you're starting a transaction and then doing a saveOrUpdate(), which might explain the slowdown, as Hibernate may try to retrieve the row from the DB first (as explained in this other SO answer).
If you know the entity is new, call save(); if you know the entity has to be updated, call update().
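A minimal sketch of that, reusing the DBUtil and session calls from the question and assuming the identity-generated id is null for entities that have not been saved yet:

DBUtil.beginTransaction();
if (entity.getId() == null) {
    session.save(entity);    // new row: plain INSERT, no prior SELECT
} else {
    session.update(entity);  // existing row: UPDATE without loading it first
}
DBUtil.commitTransaction();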
Another suggestion, though I'm not sure it still applies to current MySQL versions: if you intend to update the blobs/clobs, store them in a different table from the one holding the rest of the data. In the past this mix made MySQL run slow because it had to resize the 'block' allocated to a row, so keep one table with all the scalar attributes and a separate table just for the blob/clob. This is not an issue if the table is read-only.
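As a sketch of that split (FooPayload and its column names are hypothetical, not from the original code): move the large columns into their own entity and reference it lazily, so Hibernate does not read or rewrite the blob/clob every time the main row is touched.

@Entity
@Table(name = "foo_payload", catalog = "bar")
public class FooPayload implements java.io.Serializable {

    @Id @GeneratedValue(strategy = GenerationType.IDENTITY)
    @Column(name = "Id", unique = true, nullable = false)
    private Long id;

    @Lob
    @Column(name = "requestData")
    private byte[] requestData;

    @Lob
    @Column(name = "requestDataText")
    private String requestDataText;

    // getters and setters omitted
}

// ...and in Foo, replace the requestData/requestDataText fields with:
@OneToOne(fetch = FetchType.LAZY, cascade = CascadeType.ALL)
@JoinColumn(name = "payload_id")
private FooPayload payload;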

Mapping BIGDECIMAL from Mongo, lack CONSTRUCTOR

I have a problem when I want to get an object from Mongo with a BigDecimal field.
I have the following structure in Mongo:
{
"_id":ObjectId("546b07420c74bf96c7c3cd5f"),
"accountId":"1",
"modelVersion":"seasonal_optimized",
"yearMonth":"20143",
"income":{
"unscaled":{
"$numberLong":"68500"
},
"scale":2
},
"expense":{
"unscaled":{
"$numberLong":"125900"
},
"scale":2
}
}
And the entity is :
@Data
@NoArgsConstructor
@AllArgsConstructor
@Document(collection = "forecasts")
public class Forecast {
private String accountId;
private LocalDate monthYear;
private String modelVersion;
private BigDecimal income;
private BigDecimal expense;
}
and when I try to retrieve the object from Mongo, I get the following error:
org.springframework.data.mapping.model.MappingInstantiationException: Failed to instantiate java.math.BigDecimal using constructor NO_CONSTRUCTOR with arguments.
Can anybody help me?
Thank you!
It's an old question, but I had the same issue using Spring Data Mongo and the aggregation framework, specifically when I wanted a BigDecimal as the return type.
A possible workaround is to wrap your BigDecimal in a class and provide it as your output type, i.e.:
@Data
public class AverageTemperature {
private BigDecimal averageTemperature;
}
@Override
public AverageTemperature averageTemperatureByDeviceIdAndDateRange(String deviceId, Date from, Date to) {
final Aggregation aggregation = newAggregation(
match(
Criteria.where("deviceId")
.is(deviceId)
.andOperator(
Criteria.where("time").gte(from),
Criteria.where("time").lte(to)
)
),
group("deviceId")
.avg("temperature").as("averageTemperature"),
project("averageTemperature")
);
AggregationResults<AverageTemperature> result = mongoTemplate.aggregate(aggregation, mongoTemplate.getCollectionName(YourDocumentClass.class), AverageTemperature.class);
List<AverageTemperature> mappedResults = result.getMappedResults();
return (mappedResults != null ? mappedResults.get(0) : null);
}
In the example above the aggregation calculates the average temperature as a BigDecimal.
Keep in mind that the default BigDecimal converter maps it to a string in MongoDB when saving (see the mapping-conversion documentation).
Retrieving the document as-is should not be a problem anymore with Spring Data Mongo; the versions I used:
org.springframework.boot:spring-boot-starter-data-mongodb:jar:2.0.1.RELEASE
spring-data-mongodb:jar:2.0.6.RELEASE
org.mongodb:mongodb-driver:jar:3.6.3
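As a side note not covered in the answer above: if you would rather persist BigDecimal as a numeric Decimal128 instead of a string, one possible approach (a sketch based on Spring Data MongoDB's custom-conversions support and MongoDB 3.4+ decimal support; the class names are my own) is to register a pair of converters:

import java.math.BigDecimal;
import java.util.Arrays;
import org.bson.types.Decimal128;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.convert.converter.Converter;
import org.springframework.data.convert.ReadingConverter;
import org.springframework.data.convert.WritingConverter;
import org.springframework.data.mongodb.core.convert.MongoCustomConversions;

@Configuration
public class DecimalConversionConfig {

    @WritingConverter
    enum BigDecimalToDecimal128 implements Converter<BigDecimal, Decimal128> {
        INSTANCE;
        @Override
        public Decimal128 convert(BigDecimal source) {
            return new Decimal128(source);
        }
    }

    @ReadingConverter
    enum Decimal128ToBigDecimal implements Converter<Decimal128, BigDecimal> {
        INSTANCE;
        @Override
        public BigDecimal convert(Decimal128 source) {
            return source.bigDecimalValue();
        }
    }

    @Bean
    public MongoCustomConversions customConversions() {
        // Spring Data picks this bean up and applies the converters when
        // mapping BigDecimal fields to and from MongoDB documents.
        return new MongoCustomConversions(
                Arrays.asList(BigDecimalToDecimal128.INSTANCE, Decimal128ToBigDecimal.INSTANCE));
    }
}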

The most efficient way to store photo reference in a database

I'm currently looking to store approximately 3.5 million photos from approximately 100-200k users. I'm only using a MySQL database on AWS. My question is about the most efficient way to store the photo references. I'm only aware of two ways and I'm looking for an expert opinion.
Choice A
A user table with a photo_url column; in that column I would build a comma-separated list of photos that maintains both the name and the sort order. The business logic would handle extracting the path from the photo name and appending the photo size. The downside is the processing expense.
Database example
"0ea102, e435b9, etc"
Business logic would build the following URLs from the photo name:
/0e/a1/02.jpg
/0e/a1/02_thumb.jpg
/e4/35/b9.jpg
/e4/35/b9_thumb.jpg
Choice B - A relational table joined to the user table with the following fields. I'm just concerned I may run into database performance issues.
pk
user_id
photo_url_800
photo_url_150
photo_url_45
order
Does anybody have any suggestions on the better solution?
The best and most common answer would be choice B: a relational table joined to the user table with the following fields.
id
order
user_id
desc
photo_url_800
photo_url_150
photo_url_45
date_uploaded
Or a hybrid, wherein you store the file names individually and add the photo directory in your business logic layer.
My analysis: your first option is bad practice. Comma-separated fields are not advisable in a database; it would be difficult to update those fields or to add a description to them.
Regarding the table optimization, you might want to see these articles:
Optimizing MyISAM Queries
Optimizing InnoDB Queries
Here is an example of my final solution using the Hibernate ORM, based on Christian Mark's answer and my hybrid approach.
@Entity
public class Photo extends StatefulEntity {
private static final String FILE_EXTENSION_JPEG = ".jpg";
private static final String ROOT_PHOTO_URL = "/photo/";
private static final String PHOTO_SIZE_800 = "_800";
private static final String PHOTO_SIZE_150 = "_150";
private static final String PHOTO_SIZE_100 = "_100";
private static final String PHOTO_SIZE_50 = "_50";
@ManyToOne
@JoinColumn(name = "profile_id", nullable = false)
private Profile profile;
//Example "a1d2b0" which will later get parsed into "/photo/a1/d2/b0_size.jpg"
//using the generatePhotoUrl business logic below.
@Column(nullable = false, length = 6)
private String fileName;
private boolean temp;
@Column(nullable = false)
private int orderBy;
@Temporal(TemporalType.TIMESTAMP)
private Date dateUploaded;
public Profile getProfile() {
return profile;
}
public void setProfile(Profile profile) {
this.profile = profile;
}
public String getFileName() {
return fileName;
}
public void setFileName(String fileName) {
this.fileName = fileName;
}
public Date getDateUploaded() {
return dateUploaded;
}
public void setDateUploaded(Date dateUploaded) {
this.dateUploaded = dateUploaded;
}
public boolean isTemp() {
return temp;
}
public void setTemp(boolean temp) {
this.temp = temp;
}
public int getOrderBy() {
return orderBy;
}
public void setOrderBy(int orderBy) {
this.orderBy = orderBy;
}
public String getPhotoSize800() {
return generatePhotoURL(PHOTO_SIZE_800);
}
public String getPhotoSize150() {
return generatePhotoURL(PHOTO_SIZE_150);
}
public String getPhotoSize100() {
return generatePhotoURL(PHOTO_SIZE_100);
}
public String getPhotoSize50() {
return generatePhotoURL(PHOTO_SIZE_50);
}
private String generatePhotoURL(String photoSize) {
String firstDir = getFileName().substring(0, 2);
String secondDir = getFileName().substring(2, 4);
String photoName = getFileName().substring(4, 6);
StringBuilder sb = new StringBuilder();
sb.append(ROOT_PHOTO_URL); // already ends with "/", so no extra separator here
sb.append(firstDir);
sb.append("/");
sb.append(secondDir);
sb.append("/");
sb.append(photoName);
sb.append(photoSize);
sb.append(FILE_EXTENSION_JPEG);
return sb.toString();
}
}
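For illustration, a quick usage sketch (not part of the original entity) showing what the size-specific getters resolve to for the fileName format described in the comment above:

Photo photo = new Photo();
photo.setFileName("a1d2b0");
photo.getPhotoSize800(); // "/photo/a1/d2/b0_800.jpg"
photo.getPhotoSize150(); // "/photo/a1/d2/b0_150.jpg"
photo.getPhotoSize50();  // "/photo/a1/d2/b0_50.jpg"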
}