Farelogix Trace Tool Configuration for Production

I am looking for the configuration of the Farelogix Trace Tool for the production environment. I need to know the host and the port for production.

They don't have tracing enabled for production. Tracing adds a lot of overhead and impacts performance.

Related

Connecting to Database on Virtual Machine?

Simple question: can a Java service layer running on Tomcat 7 on a host machine connect to a persistent data store (MySQL) running inside a VirtualBox VM with port forwarding? I want to know whether the Hibernate or JDBC connection strings from the host machine work if the MySQL server is installed inside a VirtualBox VM.
Also, if it does work, can I expect behavioral deviations in terms of speed and connection pooling if everything is packaged into one single system and deployed on a real-world web server in a single environment?
The short answer is yes, it is possible and will work. You will likely have to play with the firewall settings on your VirtualBox instance. You don't specify the OS, so it's hard to tell you exactly what you'll need to tweak.
As far as deploying this in a real-world environment, if you mean production, you probably should NOT do that. This is a great setup to build on, but not something I would run in production.
To be clear, there won't be any issues behaviorally speaking; it will act as MySQL always acts. It will absolutely be slower than running on 'bare metal', though; how much slower will vary based on hardware, workload, etc., and it is generally not a great design for a production deployment.
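For illustration, here is a minimal JDBC sketch of what the host-side connection could look like, assuming a VirtualBox port-forwarding rule that maps host port 13306 to the guest's 3306 and placeholder database name and credentials; a Hibernate connection URL would take the same form.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class VmMysqlConnectionTest {
        public static void main(String[] args) throws Exception {
            // Assumed VirtualBox NAT port-forwarding rule: host 13306 -> guest 3306.
            // Database name, user, and password are placeholders; requires the
            // MySQL Connector/J driver on the classpath.
            String url = "jdbc:mysql://127.0.0.1:13306/testdb";

            try (Connection conn = DriverManager.getConnection(url, "appuser", "secret");
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery("SELECT VERSION()")) {
                if (rs.next()) {
                    System.out.println("Connected to MySQL " + rs.getString(1));
                }
            }
        }
    }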

Is it possible to patch clustered SQL Server without a BizTalk outage?

We have a BizTalk Server install backed by a clustered SQL environment for high availability.
However, whenever the SQL environment is patched there is a momentary outage as part of node failover. Consequently the host instances stop and BizTalk shuts down (if we move to CU2 the host instances will automatically restart, but this is a separate issue).
This is undesirable, as it prevents incoming web requests and breaks open web service clients. As such, is there a strategy for gracefully patching SQL Server without a BizTalk outage?
It seems this is impossible. Marking this as the accepted answer until someone can pleasantly surprise me with a better one.

MySQL: How to configure mysql-proxy for an existing master-slave setup

I want to configure mysql-proxy in my test environment to observe the following:
1. The behavior of the proxy
2. How load and CPU usage vary on my test server with read/write distribution.
I googled and was able to install the proxy on my Ubuntu Linux box, but I didn't see anything on configuring it step by step, or on how to start and stop it. Could someone elaborate on this? It would be of great help to me.
Thanks in advance,
UDAY
By default, if you run the proxy on the same machine as the server, it will listen on port 4040 and query a backend server on the MySQL default port of 3306. Other port numbers and server locations can be configured on the command line or with a configuration file.
To distribute queries across servers, add monitoring, profiling, etc., you need to provide a Lua script to mysql-proxy. See the example/tutorial scripts in /usr/local/share/docs that came with the installation download. There is work to do before a production implementation.
The basics of how the scripting works are covered in the MySQL Proxy Scripting documentation.
Don't be worried about Lua. The syntax is quite readable given the tutorial examples to work from, and as and when you need more, lua.org has further details on Lua.
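Once the proxy is running, a quick way to check it end to end is to point an ordinary client connection at the proxy's listen port instead of at MySQL directly; the proxy speaks the MySQL wire protocol, so clients need no other changes. A minimal JDBC sketch, assuming the proxy is on the local machine with its default port and placeholder database name and credentials:

    import java.sql.Connection;
    import java.sql.DriverManager;

    public class ProxyConnectionCheck {
        public static void main(String[] args) throws Exception {
            // Connect to mysql-proxy (default listen port 4040) rather than to
            // MySQL's own port 3306; the proxy forwards traffic to its
            // configured backend server. Database name and credentials are
            // placeholders.
            String url = "jdbc:mysql://127.0.0.1:4040/testdb";

            try (Connection conn = DriverManager.getConnection(url, "appuser", "secret")) {
                System.out.println("Connected through mysql-proxy: " + conn.isValid(5));
            }
        }
    }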

What is the difference between using Glassfish Server -> Local and Remote

I am using IntelliJ IDEA to develop my applications, and I deploy them on GlassFish.
When I want to run/debug my application I can configure it from Glassfish Server -> Local and define arguments there. However, there is also a Remote section instead of Glassfish Server, where I can easily configure and debug my application just by defining host and port variables.
So my question is: why do I need the Glassfish Server Local configuration (apart from defining extra parameters), and what is the difference between them (in terms of performance, etc.)?
There are a number of development work-flow optimizations and automation that can be performed by an IDE when it is working with a local server. I don't have a strong background in IDEA, so I am not sure which of the following they may have implemented:
1. Using in-place/exploded/directory deployment can eliminate jar/war/ear creation in the IDE and deconstruction in the server. This can be a significant time saver.
2. Linked to 1 is smarter redeployment: in some cases a file change (like editing a JSP or an HTML file) does not need to trigger a redeployment.
3. JDBC driver integration allows users to configure their IDE to access a DB and then propagates that configuration (which usually includes driver jars, etc.) into the server's classpath as part of deploying an app.
4. Access to server log files during deployment and execution.
5. The ability to start and stop the server... even today, you sometimes need to restart GlassFish.
6. Viewing the generated Java sources of a JSP.
Most of these features are not available with a remote server and that has a negative effect on iterative development since the break between edit and validate can be fairly long.
This answer is based on my familiarity with the work that we have done for the NetBeans/GlassFish integration. The guys at IntelliJ are smart, so I would not be surprised if they have other features that are available when you are working with a local server.
Local starts GlassFish for you and performs the deployment; with Remote you start GlassFish manually. Remote can be used to debug apps running on other machines, while Local is useful for development and testing.

Comet and long polling requests on DreamHost?

Is there any solution for running this kind of operation on DreamHost or other shared hosting environments where I don't have access to tweak Apache?
You certainly can, but as long as the Apache HTTP server doesn't provide non-blocking IO capabilities (and each polling connection has a server thread associated with it), you'll run out of memory very fast (after 2-3k connections).
If you meant Apache Tomcat, NIO is turned off by default, and you need access to the configuration files in order to change this.
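For completeness, if you do control the servlet container (which is typically not the case on shared hosting), one common way to keep a long-poll request from pinning a worker thread for its whole lifetime is the Servlet 3.0 asynchronous API. A minimal sketch, with the URL pattern, timeout, and payload chosen purely for illustration:

    import java.io.IOException;
    import javax.servlet.AsyncContext;
    import javax.servlet.annotation.WebServlet;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Long-poll endpoint: doGet returns immediately and the request stays
    // suspended until the application completes it or the timeout fires.
    @WebServlet(urlPatterns = "/poll", asyncSupported = true)
    public class LongPollServlet extends HttpServlet {

        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp) {
            AsyncContext ctx = req.startAsync();
            ctx.setTimeout(30_000); // give up after 30 seconds with no event

            // In a real application an event source would complete the request
            // when data arrives; here we respond right away to show the shape.
            ctx.start(() -> {
                try {
                    ctx.getResponse().setContentType("text/plain");
                    ctx.getResponse().getWriter().println("event payload");
                } catch (IOException e) {
                    // nothing useful to do beyond completing the request
                } finally {
                    ctx.complete();
                }
            });
        }
    }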