Has anyone deployed a Spring Boot app to a DigitalOcean droplet?
I previously created an app on Heroku.com, where I also provisioned a MySQL database and deployed my web API. Due to performance issues, I want to move my Spring Boot app to DigitalOcean, but there is a problem: I still want to use the database I provisioned on Heroku. I have all the required credentials, but I can't find a way to connect my droplet to it. On Heroku there is a very simple way to do that: all I need to do is change the DATABASE_URL config variable, but here I can't find an equivalent. I hope you understand my problem and can suggest a simple solution.
Thank you in advance!
What you need is called environment variables; here are the docs from DO:
Specify app-level variables on the Environment screen when creating an app. For existing apps, go to the Apps section of the DigitalOcean Control Panel. Click your app, then click the Settings tab. Next to the App-Level Environment Variables heading, click the Edit link.
https://docs.digitalocean.com/products/app-platform/how-to/use-environment-variables/
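However you end up defining the variables (App Platform settings as above, or plain exports / a systemd unit on a droplet), here is a minimal sketch of how the Spring Boot side could pick them up. The names DB_URL, DB_USERNAME and DB_PASSWORD are my own placeholders, not something DigitalOcean or Heroku defines, so substitute the credentials from your Heroku database; Spring Boot can also bind variables like SPRING_DATASOURCE_URL to spring.datasource.url automatically.

import javax.sql.DataSource;

import org.springframework.boot.jdbc.DataSourceBuilder; // org.springframework.boot.autoconfigure.jdbc in Boot 1.x
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

// Sketch only: builds the datasource from environment variables instead of
// hard-coding the Heroku credentials in application.properties.
@Configuration
public class ExternalDbConfig {

    @Bean
    public DataSource dataSource() {
        return DataSourceBuilder.create()
                .url(System.getenv("DB_URL"))            // e.g. jdbc:mysql://<heroku-host>:3306/<db-name>
                .username(System.getenv("DB_USERNAME"))
                .password(System.getenv("DB_PASSWORD"))
                .build();
    }
}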
I am maintaining a couple of Spring Boot web applications.
They are currently running as WAR files deployed on the same Tomcat instance on two Linux servers.
In front of them I have a load balancer that distributes the load through the URL myapps.mydomain.com.
Apart from the actual applications, both backend Tomcat instances expose /up/up.html so the load balancer knows the status of each (a rough sketch of such an endpoint follows the address list below).
myapps.mydomain.com:
ip-address-1:8080/up/up.html
ip-address-2:8080/up/up.html
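For context, /up/up.html can be just a static file in the Tomcat webapps directory; an equivalent endpoint in one of the Spring Boot apps might look roughly like this (the class name is made up, and it only assumes the load balancer treats an HTTP 200 response as healthy):

import org.springframework.http.MediaType;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

// Minimal liveness endpoint: the load balancer only cares that it gets a 200 back.
@RestController
public class UpController {

    @GetMapping(value = "/up/up.html", produces = MediaType.TEXT_HTML_VALUE)
    public String up() {
        return "<html><body>UP</body></html>";
    }
}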
Now I am in the process of migrating the applications to OpenShift, and I have all application endpoints including /up/up.html exposed by OpenShift as myapps.openshift.mydomain.com.
For a period I would like to run the OpenShift apps in parallel with the legacy servers through the legacy load balancer - basically:
myapps.mydomain.com:
ip-address-1:8080/up/up.html
ip-address-2:8080/up/up.html
myapps.openshift.mydomain.com:80/up/up.html
such that the load is distributed one third to each.
The guys managing the legacy load balancer claim that this cannot be done :-(
I myself don't know much about load balancers. I have been googling the subject and found articles about routing from "Edge Load Balancers" to OpenShift, but I really don't know the right term for what I am trying to do.
I was hoping the load balancer could treat myapps.openshift.mydomain.com as just another black box, like the two legacy servers.
Can this be done?
And if so, what is the correct terminology for this concept, i.e. the proper name for what I am trying to do?
I'd like to create a build chain for the open source projects I'm working on. I'm currently using GitHub, Travis and Coveralls. This is working fine, but I'd like to add some kind of static code analysis.
I was thinking about hosting SonarQube on OpenShift, but the problem is that OpenShift does not allow remote connections to the database.
I have come up with the following options, but none of them seems easy to achieve:
1) Any REST API for Sonar that could be used instead of raw DB access
2) Any alternative to Sonar that could be hosted on OpenShift
3) Migrating from Travis to Jenkins hosted on OpenShift and using that
4) Any other (free) alternative to OpenShift that would allow raw DB access
5) Any other option
Option 1 would be the ideal solution, but I've searched all the Sonar plugins I could find and haven't found any. :/
Am I missing something? Is there no easy way to host Sonar without exposing DB access?
It looks like at least one person has gotten SonarQube running on OpenShift using the DIY cartridge:
http://majecek.wordpress.com/2013/12/06/how-to-run-sonarqube-4-0-on-openshift/
I was able to get SonarQube to start following those instructions.
EDIT: databases in OpenShift applications are only exposed publicly in scaled applications. You will want to create your sonar app with the -s option if you need to populate your database from outside OpenShift.
I'm trying to migrate my existing ASP.NET website, which uses MySQL, to Windows Azure.
I have a few questions:
1) How do I host my existing ASP.NET application in Windows Azure?
2) Any good links to recommend for a beginner?
3) Is it a must to create a Windows Azure application in order to host my existing website in Azure?
4) Is it true that MySQL will cost $0.12 an hour per web role?
Hosting ASP.NET applications in Windows Azure is a broad subject. I suggest starting with a tutorial such as this one for an initial intro: http://www.asp.net/mvc/tutorials/deployment-to-windows-azure/walkthrough-hosting-an-aspnet-mvc-application-on-windows-azure
The simplest approach would be to add your existing ASP.NET project as a Web Role to a new Azure project. (The tutorial linked above explains how this can be done.)
MySQL is not supported in Windows Azure at this time. I suggest either switching to SQL Azure (prices here), or you will need to host the MySQL instance elsewhere and connect to it from the Azure servers (not recommended due to latency). Installing MySQL on a Windows Azure instance is totally not recommended, since those instances are stateless and Azure can choose to re-image them at any time (unless you have a read-only MySQL database and a way to auto-install it via a setup script).
HTH
One thing to keep in mind: ASP.NET Web Site projects are not supported; it has to be a Web Application project. You can see this link for how to convert to an application if needed:
http://msdn.microsoft.com/en-us/library/aa983476.aspx
What advantages do we get by using Elastic Beanstalk over manually creating an EC2 instance, setting up a Tomcat server, deploying, etc. for a typical Java web application? Are load balancing, monitoring and autoscaling the only advantages?
Suppose that for my web application, which uses a database, I installed the database on the EC2 instance itself. When autoscaling takes place, will the database get created on the newly created instance, or will that instance access the database I created on the master instance? If it just creates a replica when autoscaling happens, how does data stay in sync between the instances?
All the things you mentioned like load balancing, monitoring and auto-scaling are definitely advantages.
However, you have to kind of think about it this way: In a true Platform as a Service (PAAS), the goal is to separate the application from the platform. As a developer, you only worry about your application. The platform is "rented" to you. The platform "instances" are automatically updated, administered, scaled, balanced, etc. for you. You just upload your WAR file and it just works (at least theoretically).
EC2 by itself is not PAAS. It is more like IAAS (Infrastructure as a Service). You still have to take care of the server instances, install software on them, keep them updated, etc.
Elastic Beanstalk is a PAAS system. So are App Engine and Azure among many others.
In a true PAAS system, the DBMS is a separate component from the web application server(s). The reason is obvious: the DBMS cannot possibly be installed on the instances that are being used for the application server because, as instances are created and destroyed based on your traffic, the DBMS would be lost! Having the DBMS and application server on the same machine/instance is not generally a good idea anyway.
In a PAAS system, the DBMS is a separate service. For Amazon, it would be Amazon RDS. Just like with Elastic Beanstalk, where you don't have to worry about the application server and you just upload your WAR file, with RDS, you don't have to worry about the DBMS and you just deploy your database(s).
Elastic Beanstalk and RDS work very well together, especially when deployed in the same availability zone, where the latency would be very low.
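For illustration, when an RDS instance is attached to a Beanstalk environment, the connection details are exposed to the application as RDS_* values (system properties on the Tomcat platform, environment variables on most others). A rough Java sketch, assuming a MySQL RDS instance, might look like this:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

// Sketch: build a JDBC URL from the RDS_* values Elastic Beanstalk provides
// when an RDS instance is attached to the environment.
public class RdsConnectionSketch {

    // Tomcat platforms expose the values as system properties; fall back to
    // environment variables for other platforms.
    private static String rds(String name) {
        String value = System.getProperty(name);
        return value != null ? value : System.getenv(name);
    }

    public static Connection connect() throws SQLException {
        String url = "jdbc:mysql://" + rds("RDS_HOSTNAME") + ":" + rds("RDS_PORT")
                + "/" + rds("RDS_DB_NAME");
        return DriverManager.getConnection(url, rds("RDS_USERNAME"), rds("RDS_PASSWORD"));
    }
}

The point is that nothing about the database lives on the Beanstalk instances themselves, so instances can come and go freely.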
Finally, using Elastic Beanstalk doesn't cost anything more than the deployed resources (EC2 instances and the load balancer). However, RDS is not cheap and would definitely be more expensive than using a single EC2 instance for both the application server and the DBMS.
Elastic Beanstalk does more than just load balancing, monitoring, and autoscaling.
1) Manages application versions by storing different versions of your application, allowing you to easily switch back and forth between them.
2) Has the concept of "environments" for each application, allowing you to deploy different versions of your application in each environment. This is handy for example if you want to set up separate QA and DEV environments, and you want to easily deploy a build first in DEV then deploy the same version of the application in QA when your QA team is ready for the next build.
3) Externalizes the important container configuration properties (Tomcat memory settings, for example) to the Elastic Beanstalk console and API. Because of this you can easily save the settings and copy them between environments.
4) View application log files through the console and automatically roll and archive log files to S3. (Admittedly this feature is currently a little weak.)
I had an app deployed both on a dedicated EC2 instance (Nginx & Gunicorn) and in a Beanstalk environment (CentOS & Apache2).
My observations:
Beanstalk is PAAS. Manually creating an EC2 instance (IAAS) is like doing everything from scratch, but you have solid control.
Beanstalk comes with CentOS and Apache (httpd) by default. You can choose the OS on a dedicated instance.
These are the things that mattered to me:
There were lots of 504 errors showing up in the Beanstalk environment.
It was difficult to debug when the Beanstalk server crashed, as logs would not show up and I could not SSH into the machine. This is very important.
Installing/configuring tools like Celery, Redis (which needs to run on another port), etc. on a dedicated instance is a lot easier.
In my case, I had to scale up the (Beanstalk) server in order to install some packages (like pandoc). These things are simpler on Ubuntu.
Scaling is a lot easier in Beanstalk. Cloning servers is straightforward in Beanstalk.
I used micro instances in both cases (dedicated & Beanstalk). I felt the dedicated micro instance was better.
Deployment is automated in Beanstalk. On the dedicated instance I had to write scripts to automate it myself, which is fine, since it only has to be done once.