What is the difference between the Glassfish Server -> Local and Remote run configurations?

I am using IntelliJ IDEA to develop my applications, and I use GlassFish to run them.
When I want to run/debug my application I can configure it from Glassfish Server -> Local and define arguments there. However, instead of Glassfish Server there is also a Remote section for configuration, and I can easily configure and debug my application just by defining host and port variables.
So my question is: why would I need the Glassfish Server Local configuration (apart from defining extra parameters), and what is the difference between them (in terms of performance etc.)?

There are a number of development workflow optimizations and automations that an IDE can perform when it is working with a local server. I don't have a strong background in IDEA, so I am not sure which of the following they may have implemented:
1) Using in-place|exploded|directory deployment can eliminate jar/war/ear creation in the IDE and deconstruction in the server. This can be a significant time saver (see the sketch right after this list).
2) Linked to 1) is smarter redeployment: in some cases, a file change (like changing a JSP or an HTML file) does not need to trigger a redeployment.
3) JDBC driver integration allows users to configure their IDE to access a DB and then propagates that configuration (which usually includes driver jars, etc.) into the server's classpath as part of deploying an app.
4) Access to server log files during deployment and execution.
5) The ability to start and stop the server... even today, you sometimes do need to restart GlassFish.
6) Viewing the generated Java sources of a JSP.
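For example, deploying an exploded build directory to a local GlassFish by hand looks something like this (the path is a placeholder):
$ asadmin deploy --force=true /path/to/myapp/build/exploded-war
An IDE that does in-place deployment is essentially automating this step and skipping the archive creation entirely; --force=true lets the same application be redeployed over the previous version.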
Most of these features are not available with a remote server, and that has a negative effect on iterative development, since the gap between editing and validating can be fairly long.
This answer is based on my familiarity with the work that we have done for the NetBeans/GlassFish integration. The guys at IntelliJ are smart, so I would not be surprised if they have other features that are available when you are working with a local server.

Local starts GlassFish for you and performs the deployment, while with Remote you start GlassFish manually. Remote can be used to debug apps running on other machines; Local is useful for development and testing.
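For the Remote case, note that the server has to be started with debugging enabled before the IDE can attach. With GlassFish that is typically something like the following (domain1 is the default domain name, and 9009 is GlassFish's default JPDA debug port):
$ asadmin start-domain --debug=true domain1
The IDEA Remote configuration then simply points at that host and debug port.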

Related

How to upload my servers to a virtual machine?

I have 2 servers that I am working on locally. The first is a front-end in Vue.js, and the second is a back-end in Flask. The client sends API requests to the second.
I have to upload these two to a remote Linux VM (Debian), for which I have credentials and to which I can successfully connect via PuTTY.
How do I transfer my 2 directories to the VM?
Then, should I just change the address that the client uses for API requests to the server, and that is all? Or will I have to do something else?
You can copy directories with the scp or sftp protocol. In your case, this can be done most easily with the WinSCP software.
scp, sftp (implemented by WinSCP) and ssh (implemented by PuTTY) all use the SSH protocol. PuTTY gives you a remote terminal (i.e. you can run commands on the server), while WinSCP uploads, downloads and manages files on it.
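For example, from the machine where the two projects live (the user name, host and paths are placeholders):
$ scp -r ./frontend ./backend youruser@your-vm-host:/home/youruser/
This copies both directories recursively to the VM over SSH.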
If you are developing something, it is likely that you will need to do this deployment regularly. These tools are only good for one-off deployments; in professional environments deployment is automated and happens quickly.
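If you do end up repeating the upload, something like rsync (which also runs over SSH) only transfers the files that changed and is a small first step towards automating it:
$ rsync -avz ./backend/ youruser@your-vm-host:/home/youruser/backend/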
It is very likely that you also have some database in your project. Here the most common options are either some DB-level synchronization, or dumping the database into files and synchronizing on the file level. But that is already another topic.
It is also unlikely that you will need two different VMs for the Vue.js and the Flask parts. You could wire them together on a single VM, which would make your task far easier.
You will likely have a hard time getting your deployment working well on your server. This is all just the beginning. But don't worry: once you have learned it all, it will be easy!

How to run a .NET Core project without any manual configuration, both in a local development environment and in a CI/CD pipeline?

First let me give some background
We have our own VPS, so we do not wish to use Azure to host our web applications.
We have already successfully created a CI/CD pipeline to our VPS by installing an agent on it for a .NET Core project.
We use Azure DevOps (formerly known as VSTS) to host our code in GIT and handle our backlogs and CI/CD pipelines.
We have several .NET Framework projects where we use XDT transforms to transform our web.config files on delivery/deployment so they have the correct connection strings and other configuration properties.
This makes it possible to pull the master branch from our remote repo and have it working in seconds on a previously unused (for this application) development environment without the need for any configuration.
Now to get to my question
The master branch of the .NET Core project for which we already have the CI/CD pipeline in place holds the configuration in its json files for the staging environment it is continuously delivered to. When a developer pulls the master branch, he/she first needs to configure these to suit the local debug environment.
This is an undesirable situation for us.
How can we make it so that, using .NET Core, we have a mechanism that allows the project to work in a local debug environment without any manual configuration, and also in the CI/CD pipeline?
What have we already tried?
We have found that we can have multiple versions of the appsettings.json file for the different environments, like appsettings.debug.json, and then in the static CreateWebHost method of the Program class we can load one or the other. But how we can automate this is something that we haven't been able to figure out or find documentation about.
Okay, so here are some options you can take advantage of TODAY. (There are, I'm sure, more options/approaches.)
Option A
Configure the master branch to have appsettings.development.json with a connection string to the DEV database (or lowest environment).
Remove any connection string from appsettings.json.
Merge master accordingly.
Create environment variables on each of the backend servers for the connection string; e.g. a system environment variable named ConnectionStrings:cartDB with the connection string for the database that that backend server should use.
The result of this will be that when running with DEVELOPMENT as the environment, the app will connect to a database that everyone can access.
However, since all OTHER web servers have environment variables with the connection string, those take the highest level of precedence, and will therefore be the values returned when calling something such as
string connectionString = Configuration.GetConnectionString("cartDB");
This will satisfy the requirements you mentioned above.
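As a concrete illustration of Option A (the values below are placeholders, and this assumes the default ASP.NET Core CreateDefaultBuilder setup, which loads appsettings.json, then the appsettings file matching the ASPNETCORE_ENVIRONMENT name, and finally layers OS environment variables on top):
$ export ASPNETCORE_ENVIRONMENT=Development
$ dotnet run
On each backend server you would instead set the connection string itself as an environment variable, using a double underscore as the hierarchical separator on platforms where a colon is not allowed in variable names:
$ export ConnectionStrings__cartDB="Server=db-staging;Database=cart;Integrated Security=true"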
Option B:
Configure the master branch to have appsettings.development.json with a connection string to the DEV database (or lowest environment).
Remove any connection string from appsettings.json.
Place appsettings.staging.json and appsettings.prod.json in source control, and set the environment name variable on the web servers. :/ Not the best of options; advised against.
(It's worth mentioning since I have seen this happen; we all have.)
Option C
Add appsettings.staging.json and appsettings.prod.json to source control and use a token in place of the connection string value. Then leverage some type of tokenization task to replace those tokens with the appropriate values.
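As an illustration of Option C, the checked-in appsettings.staging.json would then hold a placeholder instead of a real value, for example:
  "ConnectionStrings": {
    "cartDB": "#{CartDbConnectionString}#"
  }
Here #{...}# is just one common token convention (used, for example, by the community "Replace Tokens" task); your own tokenization task may expect a different pattern. The release pipeline substitutes the real connection string before the file reaches the target environment.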

SSIS Connection Managers - SQL Auth in development, Integrated in Production

I'm using SQL Server 2016 and we're converting a bunch of SSIS packages (from way back in 2005). In our old architecture we had just development and production. We're now moving to source control in VSO and we're staging our deployments: we have local development on developer machines, then we post to Dev, then QA, then Staging, then finally Production.
We've figured out how to use SSIS environment variables (AWESOME!) and we're able to run the packages on local dev machines from inside Visual Studio using SSDT. Then we deploy the project to an .ispac file, which we copy to the Dev server and import into our SSIS Catalog in SSMS. Then in SSMS we try to change the variables for each environment.
The problem is the data connection. I was passing the connection string and the authentication password as parameters into a shared connection, so the connection read those values from the project parameters when executing. We were then going to change those values for each environment. It turns out that on the server we need to execute using Integrated Security. Since we're testing remotely, we can't use Integrated Security on our local machines. So basically local dev uses SQL authentication, but the Dev, QA, Staging, and Production environments will all be tested on servers using Integrated Security.
I can't seem to get this to work right. I have two project parameters, DB_ConnectionString and DB_Password. I also have a shared connection (OLE DB SQL) which is parameterized in the package. We use the project parameters for the connection, so at execution it uses the project parameters to plug in the string and the password.
When I post to the live environments I need Integrated Security. So I tried putting an Integrated Security connection string into the environment value for DB_ConnectionString, and then it still requires a password. That isn't really working right; I'm getting a connection error.
SSIS Error Code DTS_OLEDBERROR
"Invalid authorization specification"
If you can avoid using parameters for this, you will be better off in my opinion. When publishing to the catalog you can set your connection strings based on your environment. This way you can have SQL authentication on development machines and Integrated authentication when running in any of your server environments.

Django in production: what are the mandatory services?

I am new to the Django/Apache environment. I am preparing the list of services that are mandatory for getting a Django application running without fail.
I could only come up with two of them:
1) mysqld -> mysql Daemon.
2) apache2 -> apache daemon.
Could you kindly suggest any other services that are required, without which the Django application would fail to run?
You need the Apache mod_wsgi module to be installed too:
$ sudo apt-get install libapache2-mod-wsgi
and you have to enable the module in Apache:
$ sudo a2enmod wsgi
You should also disable the default site ($ sudo a2dissite 000-default) and then move on to the Apache virtual host configuration for your project, among other things.
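For that virtual host step, a minimal mod_wsgi configuration might look something like the sketch below (the project name and paths are placeholders, and the Directory permission blocks are omitted for brevity):
<VirtualHost *:80>
    ServerName example.com
    # point Apache at the Django project's wsgi.py
    WSGIDaemonProcess myproject python-path=/srv/myproject
    WSGIProcessGroup myproject
    WSGIScriptAlias / /srv/myproject/myproject/wsgi.py
    # serve collected static files directly
    Alias /static/ /srv/myproject/static/
</VirtualHost>
Enable it with a2ensite and reload Apache for it to take effect.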
Django is a framework: a set of tools that allow you to create web applications - any kind of web application.
There is no fixed list of required services; but if you are asking from a systems management point of view what is needed to support a typical Python web application:
You need a WSGI-compatible runtime. This can be mod_wsgi if you are using Apache, or a standalone server such as gunicorn or uWSGI (see the gunicorn sketch further down).
You may need a process manager if you aren't using mod_wsgi (whose processes are controlled by Apache).
You'll need a web server capable of hosting the static assets for the application. This can be Apache, nginx, lighttpd or any other capable web server.
Most applications will also have some sort of database. What database this is, will depend on the application and its requirements (not all features of the django ORM are supported by all databases). So you'll have to check with each individual application. You may choose to provide a "standard" layout; for example MySQL version xx.yy. It could also be that the application is using an external hosted server; in which case your job is just to provide connectivity to the remote hosts.
If you can take care of the above 4, you have a standard layout for hosting most Python WSGI-based web applications.
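If you go with gunicorn instead of mod_wsgi, the WSGI runtime piece can be as small as the following (myproject is a placeholder for your Django project name):
$ pip install gunicorn
$ gunicorn myproject.wsgi:application --bind 127.0.0.1:8000 --workers 3
with Apache or nginx in front of it serving /static/ and proxying the remaining requests to port 8000.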
Keep in mind that although Python 3 has been widely available, many libraries are still in the process of being ported, so making sure your server provides both Python 2.7 and Python 3 runtimes is important.
You should also make sure that the development headers for Python (and for the database server you are supporting) are available - this is important if the Python application runs in a virtual environment (as is best practice), since the drivers will need to be compiled for each virtual environment. The same also applies to any compiled libraries (like PIL).
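On a Debian/Ubuntu style server that typically boils down to something like the following (the package names assume a MySQL-backed application; adjust them for your database and imaging libraries):
$ sudo apt-get install python-dev python3-dev libmysqlclient-dev
$ virtualenv venv
$ . venv/bin/activate
$ pip install -r requirements.txt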
Django has a nice deployment section in the documentation to help with specifics.

Running Mule Standalone vs Tomcat in Production

There are many ways of deploying Mule ESB into a production environment. According to the documentation, it appears that running Mule as a standalone service is the recommended way of doing so.
Are there any reasons for NOT running Mule standalone in production? I'm sure it's stable, but how does it compare to Tomcat as far as performance, reliability, and resource utilization go?
Should I still consider running it within Tomcat for any reason?
Using Tomcat, or any other web container, allows you to use the web tier of that container for HTTP inbound endpoints (via the Servlet transport) instead of either Mule's HTTP or Jetty transports.
Other differences are found in class loading, handling of hot redeployment and logging.
Now the main reason why people do not use Mule standalone is corporate policy, i.e. "thou shalt deploy on _". When production teams have gained experience babysitting a particular Java app/web server, they want you to deploy your Mule project in that context so they can administer and monitor it in a well-known and consistent manner.
But if you're happy with the inbound HTTP layer you get in Mule standalone and you are allowed to deploy it in production, then go for it. It's production ready.
Mule actually recommends deploying standalone. Inside a container like Tomcat it has to share the thread pool, heap, etc., which can obviously prevent it from performing at its best.
The main reason you'd want to run inside a container like Tomcat is to get automatic deployment, i.e. you can just update your Mule application .war and the container will restart Mule with the new application. This helps in testing.
Also, some transports are specific to running inside a container, like the Servlet transport. On the other hand, if you design your solution so that Mule merely transports data between your container and your servlets, you're doing it wrong.