How to upload my servers to a virtual machine? - configuration

I have 2 servers that I am working on locally. The first is a front-end in Vue.js, and the second is a back-end in Flask. The client makes API requests to the back-end.
I have to upload both to a remote Linux VM (Debian), for which I have credentials, and I can successfully connect to it via PuTTY.
How do I transfer my 2 directories to the VM?
Then I should change the address that the client uses for API requests to the server, and that is all? Or will I have to do something else?

You can copy directories using the scp or sftp protocol. In your case, this is most easily done with the WinSCP software.
scp, sftp (implemented by WinSCP) and ssh (implemented by PuTTY) all run over the SSH protocol. PuTTY gives you a remote terminal (i.e. you can give commands to the server), while WinSCP uploads, downloads and manages files on it.
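For example, with a command-line scp client the whole transfer is one command per directory (the user name, host and target path below are placeholders for your own):
# copy both project directories to the VM (hypothetical user/host/path)
scp -r ./vue-frontend user@your-vm:/home/user/
scp -r ./flask-backend user@your-vm:/home/user/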
If you are developing something, it is likely that you will need to do this deployment regularly. These tools are only good for one-off deployments; in professional environments deployment is automated and happens quickly.
It is very likely that you also have a database in your project. The most common options there are either some db-level synchronization, or dumping the database into files and synchronizing at the file level. But that is already another topic.
It is also unlikely that you will need two different VMs, one for the Vue.js app and one for the Flask app. You could wire them together on a single VM, which would make your task far easier.
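A rough sketch of how the two could live on one VM (the ports, directory names and commands are assumptions, not taken from your setup):
# build the Vue app into static files, then let Flask answer API calls on its own port
cd vue-frontend && npm run build          # produces dist/ with the static front-end
cd ../flask-backend && flask run --host=0.0.0.0 --port=5000
# the client's API base URL would then point at http://your-vm:5000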
You will likely have a hard time getting your deployment working well at first; all this is just the beginning. But don't worry, once you have learnt it all, it will be easy!

Related

Express/NodeJS application on cPanel

Ok so I have an app with a Node/Express API and everything works fine on localhost. I'm trying to figure out how to make everything work on cPanel running on Apache. The client-side stuff loads fine, but I am unable to fetch any data from the backend. I've searched and looked, yes, but I'm still quite unsure how to approach this. Do I have to use a Virtual Host, and if so, what are the specific steps I need to follow?
NodeJS doesn't run on Apache or Nginx. The most you can do in those web servers is set up a reverse proxy.
NodeJS has its own web server. cPanel won't help you in that regard; you only need to install NodeJS on your server (you must have SSH root access) and run it from there. You can daemonize your Node process to keep it running by installing PM2 or Forever (npm packages).
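For instance, a minimal PM2 setup looks like this (it assumes Node and npm are already installed, and app.js stands in for your actual entry point):
npm install -g pm2
pm2 start app.js --name my-api   # daemonize the Express app
pm2 startup                      # generate an init script so it survives reboots
pm2 save                         # remember the current process list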
Here's a good answer (search before asking, the issue might be solved by then).
Run node.js on cpanel hosting server
cPanel typically runs Apache or another web server that is shared among all the cPanel/unix accounts. The web server listens on port 80. Depending on the domain name in the requested URL, the web server uses "Virtual Hosting" to figure out which cPanel/unix account should process the request, i.e. in which home directory to find the files to serve and scripts to run. If the URL only contains an IP address, cPanel has to default to one of the cPanel accounts.
Ordinarily, without root access, a job run by a cPanel account cannot listen on port 80. Indeed, the available ports might be quite restrictive. If 8080 doesn't work, you might try 60000. To access a running node.js server, you'll need to have the port number it's listening on. Since that is the only job listening on that port on that server, you should be able to point your browser to the domain name of any of the cPanel accounts or even the IP address of the server, adding the port number to the URL. But, it's typical to use the domain name for the cPanel account running the node.js job, e.g. http://cPanelDomainName.com:60000/ .
Of course port 80 is the default for web services, and relatively few users are familiar with optional port numbers in URLs. To make things easier for users, you can use Apache to "reverse proxy" requests on port 80 to the port that the node.js process is listening on. This can be done using Apache's RewriteRule directive in a configuration or .htaccess file. This reverse proxying of requests arguably has other benefits as well, e.g. Apache may be a more secure, reliable and manageable front-end for facing the public Internet.
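A sketch of that reverse proxy in a .htaccess file (port 60000 follows the example above; mod_rewrite and mod_proxy must be enabled by the host):
RewriteEngine On
RewriteRule ^(.*)$ http://127.0.0.1:60000/$1 [P,L]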
Unfortunately, this setup for node.js is not endorsed by all web hosting companies. One hosting company that supports it, even on its inexpensive shared hosting offerings, is A2Hosting.com. They also have a clearly written description of the setup process in their Knowledge Base.
Finally, it's worth noting that the developers of cPanel are working on built-in node.js support. "If all of the stars align we might see this land as soon as version 68," i.e. perhaps early 2018.
References
Apache Virtual Hosting - http://httpd.apache.org/docs/2.4/vhosts/
Apache RewriteRule Directive - http://httpd.apache.org/docs/2.4/mod/mod_rewrite.html
A2Hosting.com Knowledge Base Article on Configuring Node.js - https://www.a2hosting.com/kb/installable-applications/manual-installations/installing-node-js-on-managed-hosting-accounts
cPanel Feature Request Thread for node.js Support - https://features.cpanel.net/topic/nodejs-hosting
Related StackOverflow Questions
How to host a Node.Js application in shared hosting
Why node.js can't run on shared hosting?
It is worth pointing out that NodeJS support still hadn't come to cPanel as of early 2019.

Database application and remote MySql

I would like to create a desktop application that should work with data on a MySQL server running on a remote machine.
So each user has a copy of the desktop app and edits data on the remote MySQL server.
Now my problem is that the MySQL server will not allow connections from other hosts.
Question: is this just the wrong way of creating the app? If not, how do I give any host access to the MySQL server?
(I know I can open it up for a specific IP, but that won't work, as the app could be running anywhere.)
You should front your database on the server with a thin service layer, where you can do some validation/processing on the data, perform authentication, etc. The service layer would then expose those methods as web services, with which your client apps communicate using SOAP/XML, REST/JSON, etc. In general, it is a bad idea to expose your database directly even if your application is within a LAN, and a terrible one to expose it on the internet.
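To make the shape concrete: the desktop app would call something like the following instead of opening a MySQL connection itself (the endpoint and token are hypothetical):
# the client speaks HTTP to the service layer...
curl -s -H "Authorization: Bearer <token>" https://api.example.com/customers/42
# ...and only the service layer, running next to the database, speaks MySQL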

Secure Remote mySQL Connection

Since our shared hosting server doesn't allow us to set up Tomcat, I decided to install it on our local machine. The local Tomcat server allows us to listen on a certain port for Bancnet transactions, which will then be processed and written to the remote site.
Question:
Is it safe for me to set the local PHP application to connect directly to the remote MySQL server? Any suggestions on how to make the connection secure? BTW, I have a self-signed certificate installed on localhost, but I'm not sure how this applies to a remote MySQL connection.
You could create an SSH tunnel between the MySQL server and the client. For more resilience, use autossh.
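A minimal sketch of such a tunnel (the host name, user and local port 3307 are placeholders):
# forward local port 3307 to MySQL (3306) on the database host
ssh -N -L 3307:127.0.0.1:3306 user@db.example.com
# or, to have the tunnel re-established automatically if it drops:
autossh -M 0 -N -L 3307:127.0.0.1:3306 user@db.example.com
# the PHP app then connects to 127.0.0.1:3307 as if MySQL were local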
If you don't connect over SSL or some other encrypted tunnel, I would absolutely assume that anything you send to or receive from MySQL travels in clear text that can be intercepted and used for malicious purposes at any link along the way. This might be fine for testing purposes with dummy data, but before you put this into production use or pull down live user data for testing, you really should either arrange for the data to be stored local to the web app or for the connection to be encrypted.
Giving you a full overview of how to set up SSL connections to MySQL is beyond the scope of Stack Overflow, and it's a bit complicated, but if you want to proceed, check out the documentation and do some research; there are some good informational resources out there.
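As a starting point, once the server has SSL enabled, the classic client-side flags look like this (the host and paths are placeholders; newer MySQL versions use --ssl-mode instead):
mysql -h db.example.com -u appuser -p \
      --ssl-ca=/path/to/ca.pem \
      --ssl-cert=/path/to/client-cert.pem \
      --ssl-key=/path/to/client-key.pem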
I'm a bit confused as to the architecture you are trying to describe. What's running where?
If you can't install Tomcat then you probably won't be able to install anything like VPN software on the box.
MySQL can encrypt using SSL provided it has been enabled at compile time and at run time.
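Enabling it at run time amounts to a few my.cnf settings on the server (the certificate paths are placeholders):
[mysqld]
ssl-ca=/etc/mysql/certs/ca.pem
ssl-cert=/etc/mysql/certs/server-cert.pem
ssl-key=/etc/mysql/certs/server-key.pem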
Alternatively, it should be fairly trivial to build a webservices tier on top of the remote database.
I would recommend switching to a VPS or managed host though.

What is the difference between using Glassfish Server -> Local and Remote

I am using IntelliJ IDEA to develop my applications, and I deploy them on Glassfish.
When I want to run/debug my application I can configure it from Glassfish Server -> Local and define arguments there. However, besides Local there is also a Remote section for configuration, and there I can easily configure and debug my application just by defining host and port variables.
So my question is: why is the Glassfish Server Local configuration needed (except for defining extra parameters), and what is the difference between them (in performance etc.)?
There are a number of development workflow optimizations and automations that can be performed by an IDE when it is working with a local server. I don't have a strong background in IDEA, so I am not sure which of the following they may have implemented:
1. Using in-place (also called exploded or directory) deployment can eliminate jar/war/ear creation in the IDE and deconstruction in the server. This can be a significant time saver (see the command-line sketch after this list).
2. Linked to 1 is smarter redeployment. In some cases, a file change (like changing a JSP or an HTML file) does not need to trigger redeployment.
3. JDBC driver integration allows users to configure their IDE to access a DB and then propagates that configuration (which usually includes driver jars, etc.) into the server's classpath as part of deployment of an app.
4. Access to server log files during deployment and execution.
5. The ability to start and stop the server... even today, you do need to restart GlassFish sometimes.
6. Viewing the generated Java sources of a JSP.
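For item 1, the command-line equivalent is GlassFish's directory deployment, roughly (the path is a placeholder):
# deploy an exploded directory instead of packaging a war/ear first
asadmin deploy --force=true /path/to/build/exploded-war/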
Most of these features are not available with a remote server and that has a negative effect on iterative development since the break between edit and validate can be fairly long.
This answer is based on my familiarity with the work that we have done for the NetBeans/GlassFish integration. The guys at IntelliJ are smart, so I would not be surprised if they have other features that are available when you are working with a local server.
Local starts Glassfish for you and performs the deployment. With Remote, you start Glassfish manually. Remote can be used to debug apps running on other machines; Local is useful for development and testing.
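For the Remote case the server must be started with debugging enabled before the IDE can attach; with GlassFish that is roughly (domain1 and port 9009 are the stock defaults):
asadmin start-domain --debug=true domain1
# then point IDEA's Remote run configuration at <host>:9009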

Java EE application deployment on Amazon EC2

We have a Java EE application (EAR file deployed on JBoss, MySQL, MongoDB) which we would like to deploy on an Amazon EC2 instance. I have several questions regarding deployment best practices.
1. What is the most commonly used Linux AMI which we can rely on for a robust deployment? (There are so many Linux variants, and I am not sure which AMI is commonly used, is it Fedora, CentOS, Red Hat, SUSE ...)
2. How do we handle production upgrades (EAR file modifications or schema upgrades)? Are there any tools available to handle the installation or rollback of these changes?
3. What kind of data backup capability is available for the database?
4. Should I rely on Amazon RDS for MySQL support?
5. How should I handle support for MongoDB?
This is the first time I am hosting a web app, and I would appreciate some input on how to manage the production instance.
I agree with Mark Robinson's answer: Use whichever Unix variant you're most comfortable with. It may pay to pick one with decent cloud support. For my site I use Ubuntu.
I have a common image which is the base of every deploy I do. I have www.mysite.com pointing to an Elastic IP so I can decide which instance it goes to. The common image has all the software I need installed (Postgres/PostGIS/Tomcat/etc), but the database and web server data folders are symlinked to Elastic Block Store (EBS) volumes.
When it comes time to do a deploy I start a new instance up, freeze and snapshot the EBS volumes on production and make new volumes. I point my new instance at the new volumes and then install whatever I need to onto that. Once I've smoke tested everything successfully I can switch the Elastic IP to point to the new instance and everything keeps on going.
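The final switch is a single API call; for example with the AWS CLI (the IDs and address are placeholders, and the exact flags differ between EC2-Classic and VPC):
# re-point the Elastic IP from the old instance to the freshly smoke-tested one
aws ec2 associate-address --instance-id i-0abc123def456 --public-ip 203.0.113.10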
I'll note that I currently have the advantage where only I can modify the database; no users can. This will become a problem shortly.
If you use the XFS filesystem on top of the EBS volume then you can tell XFS to freeze the file system (so no updates happen) then call the EC2 api to snapshot the volume then unfreeze the file system. The result is that the snapshot is taken quickly and sent to S3. I have a nightly script which does this.
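The nightly script boils down to three steps (the mount point and volume ID are placeholders):
xfs_freeze -f /vol                     # block writes so the snapshot is consistent
aws ec2 create-snapshot --volume-id vol-0abc123 --description "nightly backup"
xfs_freeze -u /vol                     # resume writes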
If RDS looks like it will suit your needs then use it. Amazon is building lots of solid tools quickly and this will ease your scalability issues if you have any.
I'm sorry, I have no idea.
Good question!
1) I would recommend going with whatever Linux variant you are most comfortable with. If you have someone who is really keen on CentOS, go with that. Once you have selected your AMI, customize it by configuring it how you want. Then save that AMI as your base layout. It will make rolling out new machines much easier and save your bacon if EC2 goes down.
2) Upgrades with EC2 can be très cool. Instead of upgrading a live system, take your pre-configured AMI, update that, and save it as myAMI-1.1 (or whatever). That way you can flip over to the new system almost instantly AND roll back to a previous version in case something breaks. You can also back up DB instances to S3. It's cheap, at about $0.10/GB/month.
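Baking the updated machine into a versioned AMI is a single call; for example (the instance ID and name are placeholders):
# snapshot the updated instance as a new, versioned AMI
aws ec2 create-image --instance-id i-0abc123def456 --name "myAMI-1.1"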
3) It depends where you are storing your DB. If you are storing it on your EC2 instance, you are in trouble: the EC2 instance has no persistent storage, so if your machine crashes, you lose everything. I'm not familiar with Amazon's DB system, but you should also look into Elastic Block Store. It's basically an actual hard drive you can write to. When you want to upgrade your schema, do a full DB dump to S3 and then upgrade your actual schema. If something goes wrong, you can pull the previous version out of S3.
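A sketch of that dump-then-upgrade step (credentials, bucket and database names are placeholders):
mysqldump -u root -p mydb | gzip > mydb-pre-upgrade.sql.gz
aws s3 cp mydb-pre-upgrade.sql.gz s3://my-backup-bucket/
# run the schema upgrade; if it fails, restore from the S3 copy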
4) & 5) I have never used those so I can't help you.
1) Any Linux AMI will do the job; all you need is a JRE (assuming no development work is required on the box). If you need to monitor JVM behavior, install JConsole.
2) The easiest and most painless way is to SSH into your home directory, transfer the updated class file/EAR file (depending on the number of changes applied), copy and replace it in the Tomcat deployment directory, and restart Apache. (Make sure you test locally before uploading to production.)
3) It depends on which database you are using. If you are using MySQL, just run a scheduled backup that writes to your home directory, so that from time to time you can SSH in and download a copy for backup purposes.
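For example, a nightly cron entry along these lines (credentials and paths are placeholders; % must be escaped in crontab):
# dump the database into the home directory at 02:00 every night
0 2 * * * mysqldump -u backup -pSECRET mydb | gzip > $HOME/backups/mydb-$(date +\%F).sql.gz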
4) I would not rely on Amazon RDS for MySQL support, for 2 reasons: MySQL is small and manageable enough on its own, and I would want total, complete control of the database. Why pay more when you can do it yourself free of charge?
5) Your usage of MongoDB should be aligned with the purpose of your application and the benefits you gain from it. I would recommend using MongoDB for static data retrieval like state, country, area, etc., and MySQL for transactional data only.
If you can live with deploying your Java EE application on TomEE instead of JBoss, Boxfuse does what you want.
For your Java EE application you literally only have to execute (TomEE uses war files instead of ear files):
boxfuse run my-tomee-app-1.0.war -env=prod
This will:
Create an AMI containing TomEE and your application, ready to boot
Create an Elastic IP or ELB
Create a security group with the correct ports defined
Create an auto-scaling group
Launch your instance(s)
Any subsequent update will be done as a zero downtime blue/green deployment.
More info: https://boxfuse.com/blog/javaee-aws