How can I test remote MSMQ as in a production environment? - message-queue

I designed an application that relies on sending and receiving messages through MSMQ remotely.
My problem is that I want to test this in a stress environment in which all devices send and receive as fast as possible and simultaneously. The issue is that I don't have enough machines in my environment to cover this case.
My question is: how can I simulate this scenario?
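One common workaround when real machines are scarce is to simulate many devices with concurrent worker threads on a single box. A minimal sketch in Python, using a local `queue.Queue` as a stand-in for the remote MSMQ queue (the counts are arbitrary; on Windows you would swap the put/get calls for real MSMQ sends/receives, e.g. via pywin32):

```python
import queue
import threading

# Local stand-in for the remote MSMQ queue; on Windows you would replace
# the put/get calls with real MSMQ sends/receives (e.g. via pywin32).
shared_queue = queue.Queue()

N_SIMULATED_MACHINES = 50      # pretend machines, one thread each
MESSAGES_PER_MACHINE = 100

def machine_worker(machine_id):
    """Simulate one device sending as fast as possible."""
    for i in range(MESSAGES_PER_MACHINE):
        shared_queue.put(f"machine-{machine_id} msg-{i}")

threads = [threading.Thread(target=machine_worker, args=(m,))
           for m in range(N_SIMULATED_MACHINES)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Drain the queue to verify nothing was lost under concurrent load.
received = [shared_queue.get()
            for _ in range(N_SIMULATED_MACHINES * MESSAGES_PER_MACHINE)]
print(len(received))  # 5000
```

Threads on one machine won't reproduce real network latency, but they do exercise the "everyone sends simultaneously" case that a single manual tester cannot.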

Related

Can I edit packets from my server before they reach my Client?

I made a simple Instant Message Chat Client and Server on TCP, both running on Adobe AIR. It works great and it was an interesting way to learn basic network programming.
My Question: Is it possible to change the data in the packet sent from the Chat Server before it arrives at the Client without using the Server or Client to do so? Like perhaps a program?
I am new to Network programming so I apologize if this is a dumb question.
Your question is very broad, so the answer is broad as well: yes, it's possible.
For that you need the packets between the client and server to pass through a third program. There are quite a lot of ways to achieve that. Here's a non-exhaustive list:
First, on your own machines (client/server) you could get access to the packets from the operating system using various low-level APIs, for instance iptables+nfqueue on Linux or the Windows Filtering Platform on Windows.
Second, you could get access to the packets by intentionally having them communicate through some proxy program, which may or may not reside on the same machine as the client or the server.
Third, you could get access to the packets by picking them up from the network itself. For instance, you could set up some Linux machine as a router and have it sit between the client and the server (as long as they're not on the same machine). That Linux machine will now have access to all of the packets that pass through it, and it can pass them to various user-space programs using hooks such as the previously mentioned nfqueue.
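As an illustration of the second option, here is a minimal sketch (names and ports are made up) of a TCP proxy in Python that rewrites client-to-server data in flight while passing server-to-client data through unchanged:

```python
import socket
import threading

def run_proxy(target_host, target_port, rewrite):
    """Minimal TCP proxy: applies `rewrite` to client->server traffic
    and relays server->client traffic unchanged. Returns its port."""
    lsock = socket.socket()
    lsock.bind(("127.0.0.1", 0))  # OS-assigned port
    lsock.listen(1)

    def handle():
        client, _ = lsock.accept()
        upstream = socket.create_connection((target_host, target_port))

        def pump(src, dst, transform):
            while True:
                data = src.recv(4096)
                if not data:
                    dst.close()
                    return
                dst.sendall(transform(data))

        threading.Thread(target=pump, args=(client, upstream, rewrite),
                         daemon=True).start()
        threading.Thread(target=pump, args=(upstream, client, lambda d: d),
                         daemon=True).start()

    threading.Thread(target=handle, daemon=True).start()
    return lsock.getsockname()[1]

def start_echo_server():
    """Toy stand-in for the chat server: echoes everything back."""
    s = socket.socket()
    s.bind(("127.0.0.1", 0))
    s.listen(1)

    def serve():
        conn, _ = s.accept()
        while True:
            d = conn.recv(4096)
            if not d:
                conn.close()
                return
            conn.sendall(d)

    threading.Thread(target=serve, daemon=True).start()
    return s.getsockname()[1]

echo_port = start_echo_server()
proxy_port = run_proxy("127.0.0.1", echo_port, lambda data: data.upper())

c = socket.create_connection(("127.0.0.1", proxy_port))
c.sendall(b"hello")
reply = c.recv(4096)
print(reply)  # b'HELLO'
```

The client only ever talks to the proxy's address, which is exactly how the "third program" ends up seeing, and modifying, the traffic.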

Why is spark filling the tmp (spark.local.dir) in the machine that submits jobs?

I have a Spark 1.2.1 cluster set up in standalone mode with a master and a few slaves. I then let my data scientists enjoy the cluster's power.
All is working fine. However, the dedicated server that my data scientists use to submit Spark jobs has its spark.local.dir filling up gradually.
Given that this machine sits outside the cluster, being neither the master nor a worker/slave, I wouldn't have thought that the local spark.local.dir is used by Spark at all. (And why would it be? It only shows the logs.)
I could not find a good doc detailing this behavior. Does anybody have an idea?
There is not enough information about your setup to be sure, but I am guessing that the jobs are launched in client mode, where the driver would run on your client node.
From the spark docs:
In client mode, the driver is launched in the same process as the client that submits the application. In cluster mode, however, the driver is launched from one of the Worker processes inside the cluster, and the client process exits as soon as it fulfills its responsibility of submitting the application without waiting for the application to finish.
I am guessing that in client mode the driver of the application (running on your client machine) needs plenty of scratch space to manage the other workers in that case.
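If that is the cause, two workarounds fit the quoted docs: submit with `--deploy-mode cluster` so the driver runs inside the cluster (for JVM applications in standalone mode), or point the driver's scratch directory at a disk you can afford to fill. A minimal sketch of the latter, assuming PySpark and a hypothetical path:

```python
from pyspark import SparkConf, SparkContext

# Redirect the driver's scratch space (shuffle/spill files) away from
# the default tmp directory. The path below is hypothetical; pick a
# volume on the submitting machine that you are happy to see filled.
conf = (SparkConf()
        .setAppName("example")
        .set("spark.local.dir", "/mnt/big-disk/spark-tmp"))
sc = SparkContext(conf=conf)
```

Either way, the key insight is that in client mode the submitting machine hosts the driver, so it is a full participant in the job, not just a log viewer.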

How can I write a test module that needs a working MySQL database?

I'm writing a Node.js npm module that needs a MySQL database to operate.
I would like to write a test module that connects to a "fake" database and performs some operations on it.
I have already set up my test database locally on my development machine, but I would like these tests to work on any machine.
What's the best practice for writing integration test modules that depend on a working MySQL database?
Is there any public service on the net where I can get a temporary MySQL user/password with which I can run some operations for a limited time/size?
Usually you would set up a continuous integration (CI) system that executes your tests every time you commit a change to your version control system. The CI system would provide a clean MySQL database your tests run against. If you use a CI system in the cloud, you can often easily configure it to provide the database; see e.g. Travis CI.
If you set up a CI system other developers will still need to run their own MySQL database on their computer if they want to execute the tests. Alternatively, you may use a mock instead of the real database in your tests. For details see: How do you mock MySQL (without an ORM) in Node.js
However, using a mock won't give you sufficient test results since the mock just emulates the database in a simple way. Sometimes the mock may be too simple or just be buggy. So you will need to run at least some of your tests also against the real database. Thus you may choose to run the tests against the real database with your CI system and run the tests against the mock during development.
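As a lightweight illustration of the fake-database idea (in Python rather than Node.js, and with an in-memory SQLite database standing in for MySQL — both substitutions are assumptions made for portability), each test run can build a clean throwaway database so the tests work on any machine:

```python
import sqlite3

def make_test_db():
    """Build a fresh in-memory database (a lightweight stand-in for a
    real MySQL server) and seed it with fixture data."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
    conn.executemany("INSERT INTO users VALUES (?, ?)",
                     [(1, "ada"), (2, "bob")])
    return conn

def count_users(conn):
    """Code under test: counts rows in the (hypothetical) users table."""
    return conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]

# Every run starts from the same clean fixture, no external server needed.
conn = make_test_db()
assert count_users(conn) == 2
```

As the answer above notes, this kind of stand-in is no substitute for running at least some tests against real MySQL in CI, since SQLite and MySQL differ in SQL dialect and behavior.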
Not quite fully understanding the question, I would suggest Amazon RDS for short-lived testing where access over the public internet is required. Amazon Web Services can get pricey quickly if used for any real traffic, but it is still a good option for a proof of concept.

Connecting to Database on Virtual Machine?

Simple question: can a Java service layer running on Tomcat 7 on a host machine connect to a persistent data store (MySQL) running inside a VirtualBox VM with port forwarding? I want to know whether Hibernate or JDBC connection strings from the host machine work if the MySQL server is installed inside a VirtualBox.
Also, if it does work, can I expect behavioral deviations in terms of speed and connection pooling if everything is packaged into one single system and deployed on a real-world web server in a single environment?
The short answer is yes, it is possible and will work. You will likely have to play with the firewall settings on your virtual box instance. You don't specify OS, so it's hard to tell you what exactly you'll need to tweak.
As far as deploying this in a real-world environment, if you mean production, you probably should NOT do that. This is a great setup to build on, but not something I would run in production.
To be clear, there won't be any issues behaviorally speaking; it will act as MySQL always acts. But it will absolutely be slower than running it on 'bare metal' -- how much slower will vary based on hardware, workload, etc., and it is generally not a great design for a production deployment.
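Before tweaking Hibernate or JDBC settings, it can help to confirm that the VirtualBox port forwarding actually works. A small sketch in Python (the host, port, and database name below are assumptions) that probes the forwarded port:

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds -- handy to
    verify that VirtualBox really forwards the guest's MySQL port (3306)
    to the host before blaming the JDBC URL or firewall."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# With forwarding in place, the host-side JDBC URL is simply
#   jdbc:mysql://localhost:3306/mydb   (database name is hypothetical)
# i.e. the same string you would use for a locally installed MySQL.
```

If `port_open("localhost", 3306)` is False, the problem is the VM's forwarding rule or firewall, not the connection string.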

Synchronization and time keeping of multiple applications

How would I implement a system that will keep 20 applications running on a closed network to stay synchronized whilst performing various tasks?
Each application will be identical, on an identical machine. These machines will have a socket connection to the master application that will issue TCP commands to the units such as Play:"Video1.mp4". It is vital that these videos are played at the same time and keep time with each other.
The only difference between each unit is that the window will be offset on the desktop, so that each one has a different view port on the application - as this will be used in a multi-projector set up.
Any solutions/ideas would be greatly appreciated.
I did this some years ago: 5 computers running 5 instances of the same Flash app. Every app was displaying a "slice" of the same huge app, and everything needed to be synchronized with sub-second precision.
I used a simple Python script (running on a 6th machine) that was sending OSC messages over the local network. The Flash apps were listening to these packets through FLOSC, and were sending messages about their status back to the Python script.
The setup ran at the Whitney Museum (NY) and at the Palais de Tokyo (Paris), so I'm quite confident about the solution :) I hope it helps you.
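The master-broadcasts-to-all pattern above can be sketched with plain UDP datagrams standing in for OSC messages (the two local sockets and the `Play:` command format are illustrative; a real setup would use an OSC library such as python-osc and one socket per player machine):

```python
import socket

# Each "client" here is just a bound UDP socket standing in for one of
# the player machines; in production each would be a separate host.
clients = []
for _ in range(2):  # two simulated player machines
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.bind(("127.0.0.1", 0))  # OS-assigned port
    clients.append(s)

def broadcast(command, targets):
    """Master side: send the same command to every client in one pass,
    so all players receive it at (nearly) the same moment."""
    sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for sock in targets:
        sender.sendto(command.encode(), sock.getsockname())
    sender.close()

broadcast('Play:"Video1.mp4"', clients)
received = [s.recvfrom(1024)[0].decode() for s in clients]
print(received)
```

On a closed LAN this is usually tight enough for frame-level sync; for harder guarantees the clients can also report their playback position back to the master, as in the museum setup described above.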
You have to keep track of the latest updated data in your master application and broadcast newly updated data to all connected clients. After any update from any client, you have to send the updated data to all connected clients.
In FMS, a remote shared object is used to maintain data centrally across the network-connected applications. When any client sends an update, an OnSync event is fired in every client application and the data is synchronized with the FMS remote shared object. You have to develop this kind of flow for proper synchronization of data across the network.
You can also use an RPC system to sync data between all applications connected to the master application: a client initiates an RPC to the master to send a data update, and the master sends an RPC to every other client connected to it.