FIWARE - Cygnus MongoDB sink metadata persistence

I'm trying to use Cygnus with a MongoDB sink to persist data from entities whose attributes carry metadata structures. So far, I haven't been able to achieve that.
I'm using Cygnus version 0.13.0. It seems to be possible to save metadata using the MySQL and CKAN persistence sinks.
Is it also possible using the Mongo sink?
Is it a configuration matter?
Thanks in advance for any help.

Cygnus does not store attribute metadata in MongoDB. This is because of the internal usage we make of Cygnus when persisting in MongoDB, which imposes a strong constraint regarding this issue.
Anyway, modifying the code in a fork of your own in order to fix this should be relatively easy. Simply have a look at this method:
private Document createDoc(long recvTimeTs, String entityId, String entityType, String attrName, String attrType, String attrValue) {
Passing an additional parameter String attrMd and appending this value to the doc variable should do the trick:
private Document createDoc(long recvTimeTs, String entityId, String entityType, String attrName, String attrType, String attrValue, String attrMd) {
    Document doc = new Document("recvTime", new Date(recvTimeTs));

    switch (dataModel) {
        case DMBYSERVICEPATH:
            doc.append("entityId", entityId)
                .append("entityType", entityType)
                .append("attrName", attrName)
                .append("attrType", attrType)
                .append("attrValue", attrValue)
                .append("attrMd", attrMd);
            break;
        case DMBYENTITY:
            doc.append("attrName", attrName)
                .append("attrType", attrType)
                .append("attrValue", attrValue)
                .append("attrMd", attrMd);
            break;
        case DMBYATTRIBUTE:
            doc.append("attrType", attrType)
                .append("attrValue", attrValue)
                .append("attrMd", attrMd);
            break;
        default:
            return null; // this will never be reached
    } // switch

    return doc;
} // createDoc

Starting with version 1.8.0, FIWARE Cygnus adds metadata support. As you can see in the template of the configuration file, the only thing you have to do is set the property cygnus-ngsi.sinks.mongo-sink.attr_metadata_store to true, which by the way is set to false by default.
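For reference, a minimal sketch of the relevant lines in the agent configuration (the sink name mongo-sink matches the property above; the type class is the NGSIMongoSink from the Cygnus configuration template, so double-check it against your version):
cygnus-ngsi.sinks = mongo-sink
cygnus-ngsi.sinks.mongo-sink.type = com.telefonica.iot.cygnus.sinks.NGSIMongoSink
# false by default; set to true to persist attribute metadata
cygnus-ngsi.sinks.mongo-sink.attr_metadata_store = true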
Regards!

Related

How to force the RESTkit mapping to always update response NSManagedObject without a unique ID in JSON body

I do not have any specific id available in the response JSON body (I cannot change the body). That is why I cannot use
RKEntityMapping *mapping = [RKEntityMapping mappingForEntityForName:....
mapping.identificationAttributes = @[@"specificId"];
Is it possible to configure the mapping in such a way that no new NSManagedObject is created, but the previous one is always updated if such an object exists?
I would like to fetch data for UI updates from a single response object (of a specific class). Yes, I could delete the previous instance of the response before the new one is received, but the approach asked for in this question is cleaner, and I would not need to keep a reference/id to the response entity.
I am reading the documentation for RKManagedObjectRequestOperation but it is not clear whether this approach is supported by RestKit.
Thank you for any comments.
I have made a hack that is not really acceptable, but it works: I use a special attribute in each "singleton" NSManagedObject subclass, e.g. unique, which I use for identification at the class level.
In RKManagedObjectMappingOperationDataSource the condition is modified to allow passing entities with the special unique attribute:
// If we have found the entity identification attributes, try to find an existing instance to update
if ([entityIdentifierAttributes count] || [self.managedObjectCache isUniqueEntityClass:entity])...
In RKFetchRequestManagedObjectCache and in RKInMemoryManagedObjectCache a new method is defined:
- (BOOL)isUniqueEntityClass:(NSEntityDescription *)entity {
    __block BOOL isUniqueEntityClass = NO;
    [[[entity attributesByName] allKeys] enumerateObjectsUsingBlock:^(id obj, NSUInteger idx, BOOL *stop) {
        isUniqueEntityClass = [obj isEqualToString:@"unique"];
        if (isUniqueEntityClass) {
            *stop = YES;
            return;
        }
    }];
    return isUniqueEntityClass;
}
In the method
- (NSSet *)managedObjectsWithEntity:(NSEntityDescription *)entity
attributeValues:(NSDictionary *)attributeValues
inManagedObjectContext:(NSManagedObjectContext *)managedObjectContext...
isUniqueEntityClass decides whether to use a predicate based on attributeValues to fetch the entity, or to fetch the entity directly without the predicate.

Are JPA (EclipseLink) custom types possible?

In particular I am interested in using PostgreSQLs json type.
The core of the problem seems to be that there is no internal mapping in EclipseLink to the json type. So, using a naive approach with:
@Column(name = "json", columnDefinition = "json")
public String getJson() {
    return json;
}
... and trying to insert an object, I get an exception:
Internal Exception: org.postgresql.util.PSQLException: ERROR: column "json" is of type json but expression is of type character varying
Fair enough I suppose.
Looking through the EclipseLink docs, it seems that the applicable customizations (Transformation Mappings, Native Queries, Converters) rely on the data being made up of the supported mappings (numbers, dates, strings, etc.), which makes it quite awkward to work around vendor-specific types.
The main reason this is so frustrating is that the json type in PostgreSQL is expressed the same way as text/varchar, and I believe it is (at the moment, but not forever) just an alias of that type - therefore the driver is more than capable of transmitting it; it's just validation rules getting in my way.
In terms of the solution, I don't mind losing portability (in terms of being database agnostic and using vendor-specific types), but I just want a solution that allows me to use a json type as an attribute on a normal JPA entity and retain all the other behaviour it is accustomed to (schema generation, merge, persist, transactional code, etc.).
Walking through SO I've found many questions like this regarding JSON or XML types for mapping into Postgres. It looks like nobody has faced the problem of reading from a custom Postgres type, so here is a solution for both reading and writing using the pure JPA type conversion mechanism.
The Postgres JDBC driver maps all attributes of types unknown to Java onto an org.postgresql.util.PGobject object, so it is enough to write a converter for this type. Here is an entity example:
@Entity
public class Course extends AbstractEntity {
    @Column(name = "course_mapped", columnDefinition = "json")
    @Convert(converter = CourseMappedConverter.class)
    private CourseMapped courseMapped; // have no idea why would you use String json instead of the object to map
    // getters and setters
}
Here is the converter example:
@Converter
public class CourseMappedConverter implements AttributeConverter<CourseMapped, PGobject> {

    @Override
    public PGobject convertToDatabaseColumn(CourseMapped courseMapped) {
        try {
            PGobject po = new PGobject();
            // here we tell Postgres to use JSON as type to treat our json
            po.setType("json");
            // this is Jackson already added as dependency to project, it could be any JSON marshaller
            po.setValue((new ObjectMapper()).writeValueAsString(courseMapped));
            return po;
        } catch (JsonProcessingException e) {
            e.printStackTrace();
            return null;
        } catch (SQLException e) {
            e.printStackTrace();
            return null;
        }
    }

    @Override
    public CourseMapped convertToEntityAttribute(PGobject po) {
        try {
            return (new ObjectMapper()).readValue(po.getValue(), CourseMapped.class);
        } catch (IOException e) {
            e.printStackTrace();
            return null;
        }
    }
}
If you really need to stick to a String JSON representation in your entity, you can make a similar converter for the String type:
implements AttributeConverter<String, PGobject>
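A minimal sketch of that String variant might look like this (JsonStringConverter is a hypothetical name; it simply passes the raw JSON string through a PGobject of type json):
import java.sql.SQLException;
import javax.persistence.AttributeConverter;
import javax.persistence.Converter;
import org.postgresql.util.PGobject;

@Converter
public class JsonStringConverter implements AttributeConverter<String, PGobject> {

    @Override
    public PGobject convertToDatabaseColumn(String json) {
        try {
            PGobject po = new PGobject();
            po.setType("json");   // tell Postgres the parameter is json
            po.setValue(json);    // the raw JSON string is passed through unchanged
            return po;
        } catch (SQLException e) {
            throw new IllegalArgumentException("Invalid JSON value", e);
        }
    }

    @Override
    public String convertToEntityAttribute(PGobject po) {
        return po != null ? po.getValue() : null;
    }
}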
Here is a very dirty (though working) proof of concept; it also uses fake object serialization to tell JPA that the object was changed if it was:
https://github.com/sasa7812/psql-cache-evict-POC
PostgreSQL is too strict about implicit casts between text-like types. The simplest workaround is to create a cast; see this answer.
The clean way to do it would be to create a JPA provider extension that calls setObject(my_json), and/or teach your JPA provider to explicitly add CAST('myvalue' AS json) when it generates queries. This is a pain, as it requires JPA provider specific extensions.
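For reference, the cast workaround mentioned above typically boils down to a single statement run as a superuser (making the cast implicit is a database-wide change, so treat this as a sketch and weigh the side effects first):
CREATE CAST (character varying AS json) WITHOUT FUNCTION AS IMPLICIT;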
This Stack Overflow search will find a bunch of related questions for the xml type, which people have similar problems with.

How to export data from LinqPAD as JSON?

I want to create a JSON file for use as part of a simple web prototyping exercise. LinqPAD is perfect for accessing the data from my DB in just the shape I need, however I cannot get it out as JSON very easily.
I don't really care what the schema is, because I can adapt my JavaScript to work with whatever is returned.
Is this possible?
A more fluent solution is to add the following methods to the "My Extensions" file in LINQPad:
public static String DumpJson<T>(this T obj)
{
    return
        obj
        .ToJson()
        .Dump();
}

public static String ToJson<T>(this T obj)
{
    return
        new System.Web.Script.Serialization.JavaScriptSerializer()
        .Serialize(obj);
}
Then you can use them like this in any query you like:
Enumerable.Range(1, 10)
    .Select(i =>
        new
        {
            Index = i,
            IndexTimesTen = i * 10,
        })
    .DumpJson();
I added "ToJson" separately so it can be used in with "Expessions".
This is not directly supported, and I have opened a feature request here. Vote for it if you would also find this useful.
A workaround for now is to do the following:
Set the language to C# Statement(s)
Add an assembly reference (press F4) to System.Web.Extensions.dll
In the same dialog, add a namespace import to System.Web.Script.Serialization
Use code like the following to dump out your query as JSON
new JavaScriptSerializer().Serialize(query).Dump();
There's a solution with Json.NET, since it does indented formatting and renders JSON dates properly. Add Json.NET from NuGet, add a reference to Newtonsoft.Json.dll to your "My Extensions" query, and add the following code:
public static object DumpJson(this object value, string description = null)
{
    return GetJson(value).Dump(description);
}

private static object GetJson(object value)
{
    object dump = value;
    var strValue = value as string;
    if (strValue != null)
    {
        var obj = JsonConvert.DeserializeObject(strValue);
        dump = JsonConvert.SerializeObject(obj, Newtonsoft.Json.Formatting.Indented);
    }
    else
    {
        dump = JsonConvert.SerializeObject(value, Newtonsoft.Json.Formatting.Indented);
    }
    return dump;
}
Use .DumpJson() as you would .Dump() to render the result. It's possible to add more .DumpJson() overloads with different signatures if necessary.
As of version 4.47, LINQPad has the ability to export JSON built in. Combined with the new lprun.exe utility, it can also satisfy your needs.
http://www.linqpad.net/lprun.aspx

Post/Put/Delete http Json with additional parameters in Jersey + general design issues

For some reason, I haven't found any normal way to do the following:
I want to Post a json object, and add additional parameters to the call (in this case, an authentication token).
This is a simple RESTful server at myUrl/server, which should give access to different resources of a "person" at the URL myUrl/server/person/personCode/resourceName.
GET is easy, and requires no object, only parameters.
The problem arises when I get to POST - how do I attach the JSON and keep the other parameters as well?
The class (much has been removed for clarity and proprietary reasons...):
//Handles the person's resources
@Path("/person/{personCode}/{resourceName}")
public class PersonResourceProvider {

    @GET
    @Produces("application/json")
    public String getPersonResource(@PathParam("personCode") String personCode, @PathParam("resourceName") String resourceName, @DefaultValue("") @QueryParam("auth_token") String auth_token) throws UnhandledResourceException, UnauthorizedAccessException {
        //Authenticates the user in some way, throwing an exception when needed...
        authenticate(personCode, auth_token, resourceName);
        //Returns the resource somehow...
    }

    @POST
    @Produces("application/json")
    public String postPersonResource(@PathParam("personCode") String personCode, @PathParam("resourceName") String resourceName, @DefaultValue("") @QueryParam("resourceData") String resourceData, @DefaultValue("") @QueryParam("auth_token") String auth_token) throws UnhandledResourceException, UnauthorizedAccessException {
        //Again, authenticating
        authenticate(personCode, auth_token, resourceName);
        //Post the given resource
    }
}
Now, the GET method works perfectly: when you go to myUrl/person/personCode/resourceName, it gives me the correct resource.
The auth_token is used with every single call to the server (for now, authentication is done by comparing with a predefined string), so it's needed. All the other parameters are provided through the path, except for the authentication token, which should not be in the path as it does not relate to the identity of the required resource.
When I get to POST, it's a problem.
I know there's a way to tell the method it consumes JSON, but in that case, what happens to the other parameters (auth_token being one of them)?
Should I use Multipart?
Another related question: this is the first time I've designed such a server - is this design correct?
Thanks!
I am not sure I understand what you are trying to achieve. Let me try to explain a few things - hopefully it will be relevant to your question:
@QueryParam injects parameters that are part of the query string - i.e. the part of the URL that goes after "?".
E.g. if you have a URL like this:
http://yourserver.com/person/personCode/resourceName?resourceData=abc&token=1234
Then there would be 2 query params - one named resourceData with value "abc" and the other one named token with value "1234".
If you are passing an entity in the POST request, and that entity is of the application/json type, you can simply annotate your post method with the @Consumes("application/json") annotation and add another parameter to your method, which does not need to be annotated at all.
That parameter can either be a String (in that case Jersey passes the raw JSON string and you have to parse it yourself) or a Java bean annotated with the @XmlRootElement annotation - in that case (if you also include the jersey-json module on your classpath) Jersey will try to unmarshal the JSON string into that object using JAXB. You can also use the Jackson or Jettison libraries to do that - see this section of the Jersey User Guide for more info: http://jersey.java.net/nonav/documentation/latest/json.html
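A minimal sketch of that second approach, applied to the resource class from the question (PersonData is a hypothetical bean mirroring the JSON entity, and the authenticate() placeholder stands in for the real check):
import javax.ws.rs.Consumes;
import javax.ws.rs.DefaultValue;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.QueryParam;
import javax.xml.bind.annotation.XmlRootElement;

@XmlRootElement
class PersonData {                 // hypothetical bean unmarshalled from the JSON body
    public String name;
    public String value;
}

@Path("/person/{personCode}/{resourceName}")
public class PersonResourceProvider {

    @POST
    @Consumes("application/json")
    @Produces("application/json")
    public String postPersonResource(PersonData data,   // the un-annotated parameter receives the request body
                                     @PathParam("personCode") String personCode,
                                     @PathParam("resourceName") String resourceName,
                                     @DefaultValue("") @QueryParam("auth_token") String auth_token) {
        // path and query parameters are injected exactly as in the GET method
        authenticate(personCode, auth_token, resourceName);
        return "{\"status\":\"stored\"}";
    }

    private void authenticate(String personCode, String authToken, String resourceName) {
        // placeholder for the comparison against the predefined token
    }
}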
Found!
Client side:
import com.sun.jersey.api.client.Client;
import com.sun.jersey.api.client.WebResource;

Client c = Client.create();
WebResource service = c.resource("www.yourserver.com/");
String s = service.path("test/personCode/resourceName")
        .queryParam("auth_token", "auth")
        .type("text/plain")
        .post(String.class, jsonString);
Server side:
@Path("/test/{personCode}/{resourceName}")
public class TestResourceProvider {

    @POST
    @Consumes("text/plain")
    @Produces("application/json")
    public String postUserResource(String jsonString,
                                   @PathParam("personCode") String personCode,
                                   @PathParam("resourceName") String resourceName,
                                   @QueryParam("auth_token") String auth_token)
            throws UnhandledResourceException {
        //Do whatever...
    }
}
In my case, I will parse the JSON I get on the server depending on the resource name, but you can also pass the object itself and make the server consume application/json.

Hibernate Encryption of Database Completely Transparent to Application

I'm working on a Grails 1.0.4 project that has to be released in less than 2 weeks, and the customer just came up with a requirement that all data in the database should be encrypted.
Since encryption of every database access in the application itself could take a lot of time and will be error prone, the solution I seek is some kind of encryption transparent to the application.
Is there a way to set up Hibernate to encrypt all data in all tables (except maybe the id and version columns), or should I look for a MySQL solution (we're using MySQL 5.0)?
EDIT:
Thanks for all of your posts about alternative solutions; if the customer changes their mind, they will be great. For now, the requirement is "No plain text in the database".
The second thing I'd like to point out is that I'm using Grails; for those not familiar with it, it's a convention-over-configuration framework, so even small changes to the application that depart from convention should be avoided.
If you end up doing the work in the application, you can use Hibernate custom types, and it wouldn't add that many changes to your code.
Here's an encrypted string custom type that I've used:
import org.hibernate.usertype.UserType
import org.apache.log4j.Logger

import java.sql.PreparedStatement
import java.sql.ResultSet
import java.sql.SQLException
import java.sql.Types

class EncryptedString implements UserType {

    // prefix category name with 'org.hibernate.type' to make logging of all types easier
    private final Logger _log = Logger.getLogger('org.hibernate.type.com.yourcompany.EncryptedString')

    Object nullSafeGet(ResultSet rs, String[] names, Object owner) throws SQLException {
        String value = rs.getString(names[0])
        if (!value) {
            _log.trace "returning null as column: $names[0]"
            return null
        }
        _log.trace "returning '$value' as column: $names[0]"
        return CryptoUtils.decrypt(value)
    }

    void nullSafeSet(PreparedStatement st, Object value, int index) throws SQLException {
        if (value) {
            String encrypted = CryptoUtils.encrypt(value.toString())
            _log.trace "binding '$encrypted' to parameter: $index"
            st.setString index, encrypted
        }
        else {
            _log.trace "binding null to parameter: $index"
            st.setNull(index, Types.VARCHAR)
        }
    }

    Class<String> returnedClass() { String }
    int[] sqlTypes() { [Types.VARCHAR] as int[] }
    Object assemble(Serializable cached, Object owner) { cached.toString() }
    Object deepCopy(Object value) { value.toString() }
    Serializable disassemble(Object value) { value.toString() }
    boolean equals(Object x, Object y) { x == y }
    int hashCode(Object x) { x.hashCode() }
    boolean isMutable() { true }
    Object replace(Object original, Object target, Object owner) { original }
}
and based on this it should be simple to create similar classes for int, long, etc. To use it, add the type to the mapping closure:
class MyDomainClass {
    String name
    String otherField

    static mapping = {
        name type: EncryptedString
        otherField type: EncryptedString
    }
}
I omitted the CryptoUtils.encrypt() and CryptoUtils.decrypt() methods since they're not Grails-specific. We're using AES, e.g. "Cipher cipher = Cipher.getInstance('AES/CBC/PKCS5Padding')". Whatever you end up using, make sure it's two-way crypto, i.e. don't use SHA-256.
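For completeness, a rough sketch of what such a CryptoUtils could look like in plain Java (the hard-coded key is a placeholder for illustration only - in practice the key must come from secure configuration; this also assumes Java 8+ for java.util.Base64):
import java.nio.charset.StandardCharsets;
import java.security.GeneralSecurityException;
import java.security.SecureRandom;
import java.util.Arrays;
import java.util.Base64;
import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;

public final class CryptoUtils {

    // Placeholder 128-bit key; load a real key from secure configuration instead.
    private static final byte[] KEY = "0123456789abcdef".getBytes(StandardCharsets.UTF_8);

    public static String encrypt(String plain) {
        try {
            Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
            byte[] iv = new byte[16];
            new SecureRandom().nextBytes(iv);                      // fresh IV per value
            cipher.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(KEY, "AES"), new IvParameterSpec(iv));
            byte[] ciphertext = cipher.doFinal(plain.getBytes(StandardCharsets.UTF_8));
            byte[] out = new byte[iv.length + ciphertext.length];  // store the IV alongside the ciphertext
            System.arraycopy(iv, 0, out, 0, iv.length);
            System.arraycopy(ciphertext, 0, out, iv.length, ciphertext.length);
            return Base64.getEncoder().encodeToString(out);
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException("encryption failed", e);
        }
    }

    public static String decrypt(String encoded) {
        try {
            byte[] in = Base64.getDecoder().decode(encoded);
            Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
            cipher.init(Cipher.DECRYPT_MODE, new SecretKeySpec(KEY, "AES"),
                    new IvParameterSpec(Arrays.copyOfRange(in, 0, 16)));
            return new String(cipher.doFinal(in, 16, in.length - 16), StandardCharsets.UTF_8);
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException("decryption failed", e);
        }
    }
}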
If the customer is worried about someone physically walking away with the hard drive, then using a full-disk encryption solution like TrueCrypt should work. If they're worried about traffic being sniffed, then take a look at this part of the MySQL documentation on SSL over JDBC. Remember, if someone compromises your server, all bets are off.
The customer could easily do this without changing a thing in your application.
First, encrypt the communications with the server by turning on SSL in the MySQL layer, or use an SSH tunnel.
Second, store the MySQL database on an encrypted volume.
Any attack that can expose the filesystem of the MySQL database or the credentials needed to log in to the MySQL server is not mitigated by encrypting the data, since that same attack can be used to retrieve the encryption key from the application itself.
Well, it has been a long time since I asked the question. In the meantime, thanks for all the answers. They were great for the original idea of encrypting the entire database, but the requirement changed to encrypting only sensitive user info, like name and address. So the solution was something like the code below.
We've implemented an Encrypter which reads the encryption method from the record (so there can be a different encryption per record) and uses it to connect transient duplicate fields to the encrypted ones in the database. The added bonuses/drawbacks are:
The data is also encrypted in memory, so every call to getFirstName decrypts the data (I guess there is a way to cache decrypted data, but I don't need it in this case).
Encrypted fields cannot be used with the default Grails/Hibernate methods for searching the database; we've made custom methods in services that take the data, encrypt it and then use the encrypted value in the where clause of a query. It's easy when using User.withCriteria.
class User {

    byte[] encryptedFirstName
    byte[] encryptedLastName
    byte[] encryptedAddress

    Date dateCreated // automatically set date/time when created
    Date lastUpdated // automatically set date/time when last updated

    EncryptionMethod encryptionMethod = ConfigurationHolder.config.encryption.method

    def encrypter = Util.encrypter

    static transients = [
        'firstName',
        'lastName',
        'address',
        'encrypter'
    ]

    static final Integer BLOB_SIZE = 1024

    static constraints = {
        encryptedFirstName maxSize: BLOB_SIZE, nullable: false
        encryptedLastName maxSize: BLOB_SIZE, nullable: false
        encryptedAddress maxSize: BLOB_SIZE, nullable: true
        encryptionMethod nullable: false
    } // constraints

    String getFirstName(){
        decrypt('encryptedFirstName')
    }
    void setFirstName(String item){
        encrypt('encryptedFirstName', item)
    }

    String getLastName(){
        decrypt('encryptedLastName')
    }
    void setLastName(String item){
        encrypt('encryptedLastName', item)
    }

    String getAddress(){
        decrypt('encryptedAddress')
    }
    void setAddress(String item){
        encrypt('encryptedAddress', item)
    }

    byte[] encrypt(String name, String value) {
        if( null == value ) {
            log.debug "null string to encrypt for '$name', returning null"
            this.@"$name" = null
            return
        }
        def bytes = value.getBytes(encrypter.ENCODING_CHARSET)
        def method = getEncryptionMethod()
        byte[] res
        try {
            res = encrypter.encrypt( bytes, method )
        } catch(e) {
            log.warn "Problem encrypting '$name' data: '$value'", e
        }
        log.trace "Encrypting '$name' with '$method' -> '${res?.size()}' bytes"
        this.@"$name" = res
    }

    String decrypt(String name) {
        if(null == this.@"$name") {
            log.debug "null bytes to decrypt for '$name', returning null"
            return null
        }
        def res
        def method = getEncryptionMethod()
        try {
            res = new String(encrypter.decrypt(this.@"$name", method), encrypter.ENCODING_CHARSET )
        } catch(e) {
            log.error "Problem decrypting '$name'", e
        }
        log.trace "Decrypting '$name' with '$method' -> '${res?.size()}' bytes"
        return res
    }
}
Another option is to use a JDBC driver that encrypts/decrypts data on the fly, both ways. Bear in mind that any solution will probably invalidate searches on encrypted fields.
IMHO the best solution is the one proposed by longneck; it will make everything much easier, from administration to development. Besides, bear in mind that any solution with client-side encryption will render all your DB data unusable outside of the client, i.e. you will not be able to use nice tools like a JDBC client or MySQL Query Browser, etc.
Jasypt integrates with Hibernate: http://jasypt.org/hibernate3.html. However, queries with WHERE clauses on the encrypted fields cannot be used.
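A rough sketch of the Jasypt/Hibernate wiring, based on the Jasypt documentation (the registered name, algorithm, password source and the Customer entity are placeholders for your own setup):
import javax.persistence.Entity;
import javax.persistence.Id;
import org.hibernate.annotations.Parameter;
import org.hibernate.annotations.Type;
import org.hibernate.annotations.TypeDef;
import org.jasypt.encryption.pbe.StandardPBEStringEncryptor;
import org.jasypt.hibernate.encryptor.HibernatePBEEncryptorRegistry;
import org.jasypt.hibernate.type.EncryptedStringType;

public class JasyptConfig {
    // Call once at application startup (for Grails, e.g. from BootStrap.groovy).
    public static void registerEncryptor() {
        StandardPBEStringEncryptor encryptor = new StandardPBEStringEncryptor();
        encryptor.setAlgorithm("PBEWithMD5AndTripleDES");          // placeholder algorithm
        encryptor.setPassword(System.getenv("APP_ENC_PASSWORD"));  // never hard-code this
        HibernatePBEEncryptorRegistry.getInstance()
                .registerPBEStringEncryptor("appStringEncryptor", encryptor);
    }
}

// Map a field through Jasypt's Hibernate type (hypothetical entity):
@Entity
@TypeDef(name = "encryptedString", typeClass = EncryptedStringType.class,
        parameters = @Parameter(name = "encryptorRegisteredName", value = "appStringEncryptor"))
class Customer {
    @Id Long id;
    @Type(type = "encryptedString")
    String address;
}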
Generated ids, version, mapped foreign keys - basically everything maintained by Hibernate - are out, unless you intend to declare custom CRUD for all of your classes and manually encrypt them in queries.
For everything else you've got a couple of choices:
@PostLoad and @PrePersist entity listeners will take care of all non-query operations (a minimal sketch follows at the end of this answer).
Implementing custom String / Long / Integer / etc. types to handle encryption will take care of both query and CRUD operations; however, the mapping will become rather messy.
You can write a thin wrapper around a JDBC driver (as well as Connection / Statement / PreparedStatement / ResultSet / etc.) to do the encryption for you.
As far as queries go, you'll have to handle encryption manually (unless you're going with the second option above), but you should be able to do so via a single entry point. I'm not sure how (or if) Grails deals with this, but using Spring, for example, it would be as easy as extending HibernateTemplate.
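A minimal sketch of the first option (Account and CryptoUtils are hypothetical names; CryptoUtils stands for whatever two-way encryption helper you use):
import javax.persistence.Entity;
import javax.persistence.EntityListeners;
import javax.persistence.Id;
import javax.persistence.PostLoad;
import javax.persistence.PrePersist;

public class EncryptionListener {

    @PrePersist
    public void encrypt(Object entity) {
        if (entity instanceof Account) {
            Account a = (Account) entity;
            a.setAddress(CryptoUtils.encrypt(a.getAddress()));   // store ciphertext
        }
    }

    @PostLoad
    public void decrypt(Object entity) {
        if (entity instanceof Account) {
            Account a = (Account) entity;
            a.setAddress(CryptoUtils.decrypt(a.getAddress()));   // expose plaintext to the application
        }
    }
}

@Entity
@EntityListeners(EncryptionListener.class)
class Account {
    @Id Long id;                      // ids and versions stay unencrypted, as noted above
    private String address;
    public String getAddress() { return address; }
    public void setAddress(String address) { this.address = address; }
}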