I am new to WireMock and I am using WireMock JSON mappings to test some scenarios in my project.
In one of the test scenarios, I need to return 200 OK after two retries (i.e., respond with 500 twice) to the server that is calling one of the stubbed URLs (say /retry).
I am able to achieve this by using WireMock's stateful behaviour (scenarios): http://wiremock.org/docs/stateful-behaviour/
The problem is that this only works when a single server or thread is accessing the stub. If multiple servers/instances/threads hit the same URL, the results become inconsistent, because WireMock changes the scenario state internally.
So the question is: how do I make this scenario thread-safe so that every caller gets a consistent result?
P.S.: I am open to moving from JSON mappings to the Java usage of WireMock, or even the JUnit usage, but I am not sure that would solve my problem. I also don't want to use the JUnit approach, because I am not making these changes in my existing project's test module; rather, I am running a separate WireMock server to test independently.
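For reference, here is roughly how the stub looks when expressed with WireMock's Java API instead of the JSON mapping (a minimal sketch; the scenario name, state names and port are placeholders I picked):

import com.github.tomakehurst.wiremock.WireMockServer;
import static com.github.tomakehurst.wiremock.client.WireMock.*;
import static com.github.tomakehurst.wiremock.stubbing.Scenario.STARTED;

public class RetryStubs {
    public static void main(String[] args) {
        WireMockServer server = new WireMockServer(8089); // standalone server, port is arbitrary
        server.start();

        // First call: return 500 and advance the scenario state
        server.stubFor(get(urlEqualTo("/retry"))
                .inScenario("Retry Scenario")
                .whenScenarioStateIs(STARTED)
                .willReturn(aResponse().withStatus(500))
                .willSetStateTo("First Retry"));

        // Second call: return 500 again and advance the state once more
        server.stubFor(get(urlEqualTo("/retry"))
                .inScenario("Retry Scenario")
                .whenScenarioStateIs("First Retry")
                .willReturn(aResponse().withStatus(500))
                .willSetStateTo("Second Retry"));

        // Third call onwards: return 200 OK
        server.stubFor(get(urlEqualTo("/retry"))
                .inScenario("Retry Scenario")
                .whenScenarioStateIs("Second Retry")
                .willReturn(aResponse().withStatus(200).withBody("OK")));
    }
}

The scenario state is still shared across all callers, which is exactly where the inconsistency I described comes from.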
Related
I have several JSON files that represent the payloads for different APIs (I can map which API to call based on the file name, but other approaches could be used as well).
What is the best practice for populating the application's data from those JSON files?
My first thought was to use an automation framework (REST Assured, for example) to accomplish this, but I think it might be overkill for my scenario.
P.S. A snapshot of the DB or querying the DB directly is not an option because of the nature of the application.
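What I pictured was something small along these lines (just a rough sketch; the base URL, the payloads directory and the filename-to-endpoint convention are my own assumptions):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.stream.Stream;

public class DataLoader {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        try (Stream<Path> files = Files.list(Path.of("payloads"))) { // directory with the JSON files
            for (Path file : (Iterable<Path>) files::iterator) {
                // e.g. "users.json" -> POST /api/users (the naming convention is an assumption)
                String endpoint = file.getFileName().toString().replace(".json", "");
                HttpRequest request = HttpRequest.newBuilder()
                        .uri(URI.create("http://localhost:8080/api/" + endpoint))
                        .header("Content-Type", "application/json")
                        .POST(HttpRequest.BodyPublishers.ofString(Files.readString(file)))
                        .build();
                HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
                System.out.println(file.getFileName() + " -> " + response.statusCode());
            }
        }
    }
}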
I have the task to implement an API with Spring Boot and a relational database to save the data from a client (mobile app) and synchronize it.
So far no problem. I have some endpoints to post and get the stored data.
Now I need to provide one endpoint that returns the complete data in a GET request and another that saves the client's complete data via a POST request.
My problem is:
How do I store the complete data in one POST request (JSON)?
The database has multiple entities with many-to-many relationships, and if I just POST them I run into problems with the relations between the entities.
My approach for the GET of the complete data was to create a new wrapper entity that contains every entity. Is this the best solution?
And is it even a good solution to POST the complete data instead of using the other endpoints to handle the entities one by one? Or is there another approach to store and restore the complete data between server and client? I suspect that posting the complete data makes less sense.
is it even a good solution to POST the complete data instead of using the other endpoints to handle the entities one by one
In some scenarios you may want to force an update or synchronize the client data with the server, for example WhatsApp's "Back up now" option.
How do I store the complete data in one POST request (JSON)?
You can create one POST endpoint that extracts the data sent from the client and internally uses all your repositories or DAOs for each property.
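For example, something roughly like this (just a sketch; CompleteDataDto, User, Order and the repositories are hypothetical names, not from the question):

import java.util.List;
import org.springframework.transaction.annotation.Transactional;
import org.springframework.web.bind.annotation.*;

// Wrapper that carries the complete client data in one JSON payload
class CompleteDataDto {
    private List<User> users;
    private List<Order> orders;
    public List<User> getUsers() { return users; }
    public void setUsers(List<User> users) { this.users = users; }
    public List<Order> getOrders() { return orders; }
    public void setOrders(List<Order> orders) { this.orders = orders; }
}

@RestController
@RequestMapping("/api/data")
class DataSyncController {

    private final UserRepository userRepository;
    private final OrderRepository orderRepository;

    DataSyncController(UserRepository userRepository, OrderRepository orderRepository) {
        this.userRepository = userRepository;
        this.orderRepository = orderRepository;
    }

    // POST the complete data in one request; each property is saved with its own repository
    @PostMapping
    @Transactional
    public void saveCompleteData(@RequestBody CompleteDataDto data) {
        userRepository.saveAll(data.getUsers());   // save the owning/parent entities first
        orderRepository.saveAll(data.getOrders()); // then the entities that reference them
    }
}

Saving the parent entities before the ones that reference them, inside one transaction, is what keeps the many-to-many relations consistent.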
My approach for the GET of the complete data was to create a new wrapper entity that contains every entity. Is this the best solution?
Either by doing it as you mentioned, or by handling it manually in the endpoint, like this:
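A rough sketch of what I mean (the repositories and entity names are hypothetical, reusing those from the sketch above):

import java.util.LinkedHashMap;
import java.util.Map;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("/api/data")
class CompleteDataController {

    private final UserRepository userRepository;
    private final OrderRepository orderRepository;

    CompleteDataController(UserRepository userRepository, OrderRepository orderRepository) {
        this.userRepository = userRepository;
        this.orderRepository = orderRepository;
    }

    // GET the complete data: one JSON object with a property per entity type
    @GetMapping
    public Map<String, Object> getCompleteData() {
        Map<String, Object> completeData = new LinkedHashMap<>();
        completeData.put("users", userRepository.findAll());
        completeData.put("orders", orderRepository.findAll());
        return completeData;
    }
}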
Also check this one, which uses Apache Camel to aggregate multiple endpoints.
I am creating a Spring Boot application that will interact with Elasticsearch using Spring Data. The problem is that my data in Elasticsearch is unpredictable: there can be slight changes in the fields, such as additional fields or entirely new fields appearing in the JSON. Please guide me towards a solution for this. Using a normal repository does not seem to work because I don't have a fixed JSON format. Your guidance will be highly appreciated.
You need to provide a bit more data on your case.
Normally, when you use @Field annotations, or when you introduce or drop a simple or object field, this should not be a problem at all, since spring-data-elasticsearch updates the mapping when you save to an ElasticsearchRepository. In some cases, e.g. introducing a parent-child relationship, you would need to drop and recreate the index, but this can also be done programmatically if needed.
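For example, one common way to cope with unpredictable fields is to keep the known fields as normal annotated properties and collect everything else in a map, so Elasticsearch maps new keys dynamically (a sketch; the index, entity and field names are made up):

import java.util.Map;
import org.springframework.data.annotation.Id;
import org.springframework.data.elasticsearch.annotations.Document;
import org.springframework.data.elasticsearch.annotations.Field;
import org.springframework.data.elasticsearch.annotations.FieldType;
import org.springframework.data.elasticsearch.repository.ElasticsearchRepository;

@Document(indexName = "events")
class Event {

    @Id
    private String id;

    @Field(type = FieldType.Keyword)
    private String type;                 // the fields you know about

    @Field(type = FieldType.Object)
    private Map<String, Object> payload; // everything unpredictable goes here;
                                         // Elasticsearch maps new keys dynamically
    // getters and setters omitted for brevity
}

interface EventRepository extends ElasticsearchRepository<Event, String> {
}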
If you need an advanced mapping that also changes dynamically, then you need to build and execute a mapping update request from your code on save (in a custom repository).
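A minimal sketch of such a mapping update, assuming a RestHighLevelClient bean is available (the index name, the helper class and the field parameters are assumptions, and the exact packages/API depend on your Elasticsearch client version):

import java.io.IOException;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.client.indices.PutMappingRequest;
import org.elasticsearch.common.xcontent.XContentType;
import org.springframework.stereotype.Component;

@Component
class EventMappingUpdater {

    private final RestHighLevelClient client;

    EventMappingUpdater(RestHighLevelClient client) {
        this.client = client;
    }

    // Add a new field to the "events" index mapping before saving documents that use it
    void addField(String fieldName, String fieldType) throws IOException {
        String mappingJson =
                "{ \"properties\": { \"" + fieldName + "\": { \"type\": \"" + fieldType + "\" } } }";
        PutMappingRequest request = new PutMappingRequest("events");
        request.source(mappingJson, XContentType.JSON);
        client.indices().putMapping(request, RequestOptions.DEFAULT);
    }
}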
I am currently developing an app that observes changes in a JCR repository and replicates these changes to another repository. I can't rely on a JCR cluster because the two repositories won't be on the same network and the connection is not reliable, so my implementation handles the communication through a REST API between the two servers and gives some fault-tolerance guarantees. The problem is that I need to serialize nodes, preferably in JSON format, and parse that JSON back into a Node on the other side.
I've tried Apache Sling; using some internal classes it can serialize a node into JSON perfectly, but I can't seem to find a way to deserialize it into a Node object on the other side. Any ideas?
The Apache Sling POST Servlet can import JSON content, see "importing content structures" at http://sling.apache.org/documentation/bundles/manipulating-content-the-slingpostservlet-servlets-post.html
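For example, the import could be triggered from the observing side with a plain HTTP POST. A rough sketch, where the instance URL, credentials, target path and JSON payload are just assumptions and the parameter names come from the linked documentation:

import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class SlingImportExample {
    public static void main(String[] args) throws Exception {
        // JSON produced on the source side (structure is just an example)
        String nodeJson = "{ \"jcr:primaryType\": \"nt:unstructured\", \"title\": \"Hello\" }";

        // Parameters understood by the SlingPostServlet's import operation
        String body = ":operation=import"
                + "&:contentType=json"
                + "&:name=" + URLEncoder.encode("importedNode", StandardCharsets.UTF_8)
                + "&:content=" + URLEncoder.encode(nodeJson, StandardCharsets.UTF_8);

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/content/target")) // parent path on the target instance
                .header("Content-Type", "application/x-www-form-urlencoded")
                .header("Authorization", "Basic "
                        + Base64.getEncoder().encodeToString("admin:admin".getBytes(StandardCharsets.UTF_8)))
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
    }
}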
Another option is the new replication module that was recently contributed; it might be just what you need. It's at http://svn.apache.org/repos/asf/sling/trunk/contrib/extensions/replication/ . There might not be any docs yet, but if you ask on the Sling users mailing list I'm sure you'll get help on how to use it.
I am using Unity 2.0 to register a concrete SQL Server repository against an abstract repository like so:
// Build a single LINQ to SQL DataContext from the configured connection string
var context = new DataContext(
    ConfigurationManager.ConnectionStrings["DevDB"].ConnectionString
);

// Register the concrete repository and inject the shared context into its constructor
this.UnityContainer
    .RegisterType<AlertQueueRepository, Sql.AlertQueueRepository>(
        new InjectionConstructor(context)
    );
The context is shared across all of the other repositories I have. This works fine within the application; however, if someone else (an SSMS query, an SSIS package, another application) modifies the database, my repositories are unaware of this and will not see the change.
Is this an issue with the way I'm using Unity? Perhaps the contexts are hanging around too long? Or is it something I need to configure in LINQ to SQL?
Consider the answer to this question: Multiple application using one database?
If you really need multiple applications to access/modify the data, consider building a layer on top of the database to service all the requests.
It turns out this question was similar to mine: ASP.NET MVC inject per request.
What I needed to do was have each request register its own context, which has the side effect that the context gets thrown away after each request. Now I'm just left to consider the performance implications of this pattern.