How to set default data format for a component in Apache Camel?
I have a number of routes interacting with different ActiveMQ queues. At the moment all of them look like
from("...")
.process(...)
.marshal().json() // (1)
.to("activemq:queue:...");
or
from("activemq:queue:...")
.unmarshal().json() // (2)
.process(...)
.to("...");
I would like to replace lines (1) and (2) with either component-level or context-level configuration, basically saying only once that 'message payload going through ActiveMQ has to be a JSON string'.
I don't want to add any additional routes, processors, headers, URI parameters, etc.
Ideally, it would be applicable to other components and formats besides ActiveMQ and JSON.
You could marshal/unmarshal using a named reference to a data format that you define once (here as "myDefaultFormat") in your Camel registry:
from("activemq:queue:...")
.unmarshal(myDefaultFormat)
This way, you don't have to repeat .json() everywhere (though you still have to repeat the named reference :-$).
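For instance, a minimal sketch using the camel-jackson data format (the class name, queue names and endpoint URIs are placeholders, not from the question):

import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.jackson.JacksonDataFormat;

public class MyRoutes extends RouteBuilder {
    // define the data format once, reuse it in every route
    private final JacksonDataFormat myDefaultFormat = new JacksonDataFormat();

    @Override
    public void configure() throws Exception {
        from("activemq:queue:inbound")
            .unmarshal(myDefaultFormat)   // JSON -> POJO/Map
            .to("direct:handleInbound");

        from("direct:produceOutbound")
            .marshal(myDefaultFormat)     // POJO/Map -> JSON
            .to("activemq:queue:outbound");
    }
}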
Using interceptors (Based on Claus Ibsen's comment)
new RouteBuilder() {
    @Override
    public void configure() throws Exception {
        // interceptors
        interceptSendToEndpoint("activemq:queue:*")
            .marshal()
            .json();
        interceptFrom("activemq:queue:*")
            .unmarshal()
            .json();

        // routes
        from("...")
            .process(...)
            .to("activemq:queue:...");

        from("activemq:queue:...")
            .process(...)
            .to("...");
    }
}
Notes:
Interceptors have to be defined before any routes in the RouteBuilder; otherwise an IllegalArgumentException is thrown explaining the situation.
Interceptors are not shared and have to be defined in each RouteBuilder.
Can I map Scala functions to JSON; or perhaps via a different way than JSON?
I know I can map data types, which is fine. But I'd like to create a function, map it to JSON, send it via a REST method to another server, then add that function to a list of functions in another application and apply it.
For instance:
def apply(f: Int => String, v: Int) = f(v)
I want to make a list of functions that can be applied within an application, over different physical locations, and I want to add and remove functions from that list by means of REST calls.
Let's assume I understand security problems...
PS: If you downvote, you might as well have the decency to explain why.
If I understand correctly, you want to be able to send Scala code to be executed on different physical machines. I can think of a few different ways of achieving that:
1. Using tools for distributed computing, e.g. Spark. You can set up Spark clusters on different machines and then select which cluster you want to submit Spark jobs to. There are a lot of other tools for distributed computing that might also be worth looking into.
2. Pass Scala code as a string and compile it either within your server-side code (here's an example, and see the sketch below) or by invoking scalac as an external process.
3. Send the functions as byte code and execute the byte code on the remote machine.
If it fits with what you want to do, I'd recommend option #1.
Make sure that you can really trust the code you want to execute, so as not to expose yourself to malicious code.
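To illustrate option #2, here is a minimal sketch using the Scala 2 runtime reflection toolbox (this assumes the scala-compiler artifact is on the runtime classpath; the code string is purely illustrative):

import scala.reflect.runtime.currentMirror
import scala.tools.reflect.ToolBox

object CompileFromString {
  def main(args: Array[String]): Unit = {
    val toolbox = currentMirror.mkToolBox()

    // the function arrives as a plain string, e.g. from a REST call
    val code = """(v: Int) => "value: " + v.toString"""

    // parse and evaluate it into a real Int => String function
    val f = toolbox.eval(toolbox.parse(code)).asInstanceOf[Int => String]
    println(f(42)) // prints "value: 42"
  }
}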
The answer is you can't do this, and even if you could you shouldn't!
You should never, never, never write a REST API that allows the client to execute arbitrary code in your application.
What you can do is create a number of named operations that can be executed. The client can then pass the name of the operation, which the server looks up in a Map[String, <function>] and executes.
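A minimal sketch of that idea in Scala (the operation names and the Int => String signature are illustrative, not from the question):

object OperationRegistry {
  // registry of named, pre-approved operations
  val operations: Map[String, Int => String] = Map(
    "double" -> (v => (v * 2).toString),
    "negate" -> (v => (-v).toString)
  )

  // the REST handler receives only an operation name, never code
  def applyOperation(name: String, v: Int): Option[String] =
    operations.get(name).map(f => f(v))
}

// usage:
// OperationRegistry.applyOperation("double", 21)  // Some("42")
// OperationRegistry.applyOperation("rm -rf", 0)   // None: unknown names are rejected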
As mentioned in my comment, here is an example of how to turn a case class into JSON. Things to note: don't question the implicit val format line (it's magic); each case class requires a companion object in order for this to work; and if you have Option fields in your case class and set them to None when turning it into JSON, those fields will be ignored (if you define them as Some(whatever), they will look like any other field). If you don't know much about Scala Play, ignore the extra stuff for now; this is just inside the default Controller you're given when you make a new project in IntelliJ.
package controllers

import javax.inject._
import play.api.libs.json.{Json, OFormat}
import play.api.mvc._
import scala.concurrent.Future

@Singleton
class HomeController @Inject()(cc: ControllerComponents) extends AbstractController(cc) {

  case class Attributes(heightInCM: Int, weightInKG: Int, eyeColour: String)
  object Attributes {
    implicit val format: OFormat[Attributes] = Json.format[Attributes]
  }

  case class Person(name: String, age: Int, attributes: Attributes)
  object Person {
    implicit val format: OFormat[Person] = Json.format[Person]
  }

  def index: Action[AnyContent] = Action.async {
    val newPerson = Person("James", 24, Attributes(192, 83, "green"))
    Future.successful(Ok(Json.toJson(newPerson)))
  }
}
When you run this app with sbt run and hit localhost:9000 through a browser, the output you see on-screen is below (formatted for better reading). This is also an example of how you might send JSON as a response to a GET request. It isn't the cleanest example but it works.
{
  "name": "James",
  "age": 24,
  "attributes": {
    "heightInCM": 192,
    "weightInKG": 83,
    "eyeColour": "green"
  }
}
Once more though, I would never recommend passing actual functions between services. If you insist, maybe store them as a String in a case class and turn it into JSON like this. Even if you are okay with passing functions around, it would be a good exercise to practice security by validating the functions you receive to make sure they're not malicious.
I also have no idea how you'll convert them back into a function; maybe write the String you receive to a *.scala file and try to run it with a Bash script? I'll let you figure that one out.
We are using Symfony2 FOSRestBundle with JMSSerializerBundle to develop REST APIs consumed by mobile developers.
The API response in JSON format returns null as the value of properties wherever applicable, which triggers an exception in the 3rd-party library used by the mobile developers.
I don't see a solution in JMSSerializerBundle or FOSRestBundle to overwrite the value as per our requirement.
Workaround so far
I can set a default value in the entity so that fresh data will have some default value in the database instead of null. But this doesn't work for one-to-one/many-to-one relationship objects, as those will return null by default instead of a blank object.
Any solution to overwrite the JSON after serialization?
You can use a custom visitor to do that:
<?php

namespace Project\Namespace\Serializer;

use JMS\Serializer\Context;
use JMS\Serializer\JsonSerializationVisitor;

class BlankSerializationVisitor extends JsonSerializationVisitor
{
    /**
     * {@inheritdoc}
     */
    public function visitNull($data, array $type, Context $context)
    {
        return '';
    }
}
Then set it on your serializer with the setSerializationVisitor method, or in your config file:
# app/config/config.yml
parameters:
    jms_serializer.json_serialization_visitor.class: Project\Namespace\Serializer\BlankSerializationVisitor
When using the FOSRestBundle, you can use this setting in your configuration file (generally app/config/config.yml) to avoid having null values:
fos_rest:
    serializer:
        serialize_null: false
If you want a custom value, you can use the serializer.post_serialize event.
PS: To see all the options provided by the bundle, run this command:
php bin/console config:dump-reference fos_rest
I am testing a RESTful endpoint in my JUnit test and getting the exception below for the list that is passed as an argument to the save method:
**"Argument(s) are different! Wanted:"**
save(
"121",
[com.domain.PP#6809cf9d,
com.domain.PP#5925d603]
);
Actual invocation has different arguments:
save(
"121",
[com.domain.PP#5b6e23fd,
com.domain.PP#1791fe40]
);
When I debugged the code, it broke at the verify line below and threw the above exception. It looks like the arguments inside the testpPList passed to the save method are different. I don't know how they become different, as I construct them properly in my JUnit test before the RESTful URL is invoked.
Requesting your valuable inputs. Thanks.
Code:
@Test
public void testSelected() throws Exception {
    mockMvc.perform(put("/endpointURL")
            .contentType(TestUtil.APPLICATION_JSON_UTF8)
            .content(TestUtil.convertObjectToJsonBytes(testObject)))
            .andExpect(status().isOk());

    verify(programServiceMock, times(1)).save(id, testpPList);
    verifyNoMoreInteractions(programServiceMock);
}
Controller method:
@RequestMapping(value = "/endpointURL", method = RequestMethod.PUT)
public @ResponseBody void uPP(@PathVariable String id, @RequestBody List<PPView> pPViews) {
    // Code to construct the list which is passed into the save method below
    save(id, pPList);
}
Implementing Object#equals(Object) can solve this via equality comparison. However, sometimes the class you are verifying cannot be changed, or its equals method cannot be implemented. For such cases, it's recommended to use org.mockito.Matchers#refEq(T value, String... excludeFields). So you may use something like:
verify(programServiceMock, times(1)).save(id, refEq(testpPList));
Just wrapping the argument with refEq solves the problem.
Make sure you implement the equals method in com.domain.PP.
[Edit]
The reasoning for this conclusion is that your failed test message states that it expects this list of PP
[com.domain.PP#6809cf9d, com.domain.PP#5925d603]
but it's getting this list of PP
[com.domain.PP#5b6e23fd, com.domain.PP#1791fe40]
The hex values after the # symbol for each PP object are their hash codes. Because they differ, the two lists contain different object instances, so the default implementation of equals will say they're not equal, which is what verify() relies on.
It's good practice to also implement hashCode() whenever you implement equals(): according to the contract of hashCode, two objects that are equal MUST have equal hash codes. This ensures that classes like HashMap can use hashCode inequality as a shortcut for object inequality (placing objects with different hashCodes in different buckets).
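For reference, a minimal equals/hashCode sketch for com.domain.PP (the fields are invented for illustration, since the real ones aren't shown in the question):

import java.util.Objects;

public class PP {
    private String code;
    private int value;

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof PP)) return false;
        PP other = (PP) o;
        return value == other.value && Objects.equals(code, other.code);
    }

    @Override
    public int hashCode() {
        // equal objects must produce equal hash codes
        return Objects.hash(code, value);
    }
}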
I need to update multiple records using a single HTTP request. An example is selecting a list of emails and marking them as 'Unread'. What is the best (RESTful) way to achieve this?
The way I'm doing it right now is by using a sub-resource action:
PUT http://example.com/api/emails/mark-as-unread
(in the body)
{ids:[1,2,3....]}
I read this site - http://restful-api-design.readthedocs.io/en/latest/methods.html#actions - and it suggests using an "actions" sub-collection, e.g.
POST http://example.com/api/emails/actions
(in the body)
{"type":"mark-as-unread", "ids":[1,2,3....]}
Quotes from the referenced webpage:
Sometimes, it is required to expose an operation in the API that inherently is non RESTful. One example of such an operation is where you want to introduce a state change for a resource, but there are multiple ways in which the same final state can be achieved, ... A great example of this is the difference between a “power off” and a “shutdown” of a virtual machine.
As a solution to such non-RESTful operations, an “actions” sub-collection can be used on a resource. Actions are basically RPC-like messages to a resource to perform a certain operation. The “actions” sub-collection can be seen as a command queue to which new action can be POSTed, that are then executed by the API. ...
It should be noted that actions should only be used as an exception, when there’s a good reason that an operation cannot be mapped to one of the standard RESTful methods. ...
Create an algorithm endpoint, like
http://example.com/api/emails/mark-unread
mark-unread is an algorithm name, a noun. It gets to be the endpoint name in REST, and the list of ids are the arguments to this algorithm. Typically people send them as URL query arguments in the POST call, like
http://example.com/api/emails/mark-unread?ids=1,2,3,4
This is very safe, as POST is non-idempotent and you need not care about any side effects. You might decide differently, and if your bulk update carries the entire state of such objects, opt for PUT:
http://example.com/api/emails/bulk-change-state
then you would have to put the actual state into the body of the HTTP call.
I'd prefer a bunch of simple algorithm endpoints like mark-unread?ids=1,2,3,4 rather than one monolithic PUT, as it helps with debugging, is transparent in logs, etc.
It's a bit complicated to get an array of models into an action method as an argument. The easiest approach is to form a JSON string on your client and POST all of that to the server (to your action method). You can adopt the following approach.
Say your email model is like this:
public class Email
{
    public int EmailID { get; set; }
    public int StatusID { get; set; }
    // more properties
}
So your action method will take the form:
public bool UpdateAll(string EmailsJson)
{
    Email[] emails = JsonConvert.DeserializeObject<Email[]>(EmailsJson);
    foreach (Email eml in emails)
    {
        // do update logic
    }
    return true;
}
Using Json.NET to help with the serialization.
On the client you can write the ajax call as follows:
$.ajax({
    url: 'api/emailsvc/updateall',
    method: 'post',
    data: {
        EmailsJson: JSON.stringify([{
            ID: 1,
            StatusID: 2,
            //...more json object properties.
        },
        // more json objects
        ])
    },
    success: function (result) {
        if (result)
            alert('updated successfully');
    }
});
Let's say I have some C# DTOs and I want to convert them to TypeScript interfaces using T4 templates and a neat little library called TypeLite.
On the client side, I have some concrete TypeScript classes (that inherit from Backbone.Model but that's not important) that represent the same DTO defined on the server side.
The intended goal of the interfaces is to act as data contracts and ensure client and server DTOs are kept in sync.
However, this is problematic since TypeScript has no run-time type-checking facilities other than instanceof. The problem with instanceof is that when I fetch my DTOs from the server, they are plain JSON objects and not instances of my model. I need to perform run-time type checking on these DTOs that come in from the server as JSON objects.
I know I can do something like this:
collection.fetch({...}).done((baseModels) => {
    baseModels.forEach((baseModel) => {
        if (baseModel && baseModel.SomeProperty && baseModel.SomeOtherProperty) {
            // JSON model has been "type-checked"
        }
    });
});
However, there are obvious problems with this approach, because now I need to update three places whenever I change or add a property.
Currently the only thing I have found is this, but it's undocumented, not maintained, and uses Node, which I have zero experience with, so I'll save myself the frustration. Does anybody know of anything similar for performing run-time type checking in TypeScript, or some other way to accomplish what I'm after?
It would be great if this were built into TypeLite, generating the interfaces as well as a JSON schema for type checking at run-time. Being that the project is open source, somebody should really go ahead and extend it. I'd need some pointers at the least if I were to do it myself (thus the question).
More details about my particular problem here (not necessary, but there if you need extra context).
At runtime you are using plain JavaScript, so you could use this answer as it relates to plain JavaScript:
How do I get the name of an object's type in JavaScript?
Here is a TypeScript get-class-name implementation that can supply the name of the enclosing TypeScript class (the link also has a static separate version of this example).
class ShoutMyName {
    getName() {
        var funcNameRegex = /function (.{1,})\(/;
        var anyThis = <any> this;
        var results = (funcNameRegex).exec(anyThis.constructor.toString());
        return (results && results.length > 1) ? results[1] : "";
    }
}
class Example extends ShoutMyName {
}
class AnotherClass extends ShoutMyName {
}
var x = new Example();
var y = new AnotherClass();
alert(x.getName());
alert(y.getName());
This doesn't give you data about the inheritance chain, just the class you are inspecting.
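If what you need to check are plain JSON DTOs (so constructor-based checks like the above won't help), another option is a hand-written user-defined type guard. It still has to be updated when properties change, which is the drawback you mentioned, but the compiler narrows the type for you inside the guarded branch. A minimal sketch with illustrative property names:

interface PersonDto {
    name: string;
    age: number;
}

// user-defined type guard: true only if the value has the expected shape
function isPersonDto(value: any): value is PersonDto {
    return value != null
        && typeof value.name === "string"
        && typeof value.age === "number";
}

// usage: inside the guarded branch, the compiler treats raw as a PersonDto
declare var raw: any;
if (isPersonDto(raw)) {
    console.log(raw.name.toUpperCase());
}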