I have deployed a Spring Boot application on Google Compute Engine by following this tutorial (https://cloud.google.com/community/tutorials/kotlin-springboot-compute-engine#before_you_begin) from my local computer using the Cloud SDK command line. I created a Google Storage bucket and then followed the steps in the tutorial to deploy the Spring Boot project. The deployment works fine, but now I have to deploy changes to the already deployed project. How can that be achieved from the command line without restarting the VM instance?
I have updated the Google Storage bucket that I provided in --metadata BUCKET= when creating the instance.
After building the project I copied the new jar from my local machine to the Cloud Storage bucket, but after refreshing the URL in the browser I can't see the new changes.
As far as I can understand from your description, you need to download the new version from the bucket to your VM, into the same directory used by the instance-startup.sh from [1]. You can do that with "gsutil cp gs://${BUCKET}/demo.jar ." if you replaced the .jar file in the bucket under the same name; if the name changed, adjust the command so it matches the new version that you uploaded.
Then stop the Java process that is running the previous jar, for example with "ps aux | grep ${jarfilename}" followed by "kill $PID", and finally start the new version with "java -jar $jarfile.jar", again matching the name of your new jar file.
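A sketch of the whole sequence on the VM, assuming the jar is still named demo.jar as in the tutorial (replace ${BUCKET} with your bucket name if it is not set in the shell):
gsutil cp gs://${BUCKET}/demo.jar .    # download the new version from the bucket
ps aux | grep demo.jar                 # note the PID of the running Java process
kill <PID>                             # stop the old version
nohup java -jar demo.jar &             # start the new version in the background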
[1] https://cloud.google.com/community/tutorials/kotlin-springboot-compute-engine#create_a_startup_script
I cloned an existing repository (created by a team in my office; it deals with subscriptions in an app we are working on) which has some database migration files under the path ..\internal\db\migrations.
First of all I ran docker compose up for an existing docker.yaml, then I ran go build and then go run .
While debugging, the app reaches the point where it is about to run the migration files and then it displays this error:
Failed to initialize App. Error: first D:\subscription-store: file does not exist
although I checked the paths through debugging and they are correct, and the migration files all exist.
I am using Visual Studio Code as an editor, Go version 1.15, Docker and MySQL, running on Windows 10.
After debugging and searching, it turned out that the repository uses paths to load the migration files from the local drive. The paths were written for Mac in the code base, and since I cloned the repository on a Windows machine, they didn't work.
The error specifically happened in the call to the function
migrate.NewWithDatabaseInstance(
fmt.Sprintf("file://%s", fullPath),
"mysql",
driver,
)
The generated path for the first parameter was
file//d:\\subscription-store\\....\\db\\migrations
This is wrong on Windows, as the drive d: isn't supported in the path in that form.
It was resolved as follows:
"file:///"+"subscription-store\\....\\db\\migrations"
When the above URL was passed to the function instead of the old one, it worked successfully.
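For illustration, a minimal sketch of how such a source URL could be built programmatically instead of being hard-coded (migrationsURL is a hypothetical helper, not part of the repository):
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// migrationsURL drops the Windows drive letter, converts back-slashes to forward
// slashes and prefixes "file:///", matching the URL form that worked above.
func migrationsURL(fullPath string) string {
	p := strings.TrimPrefix(fullPath, filepath.VolumeName(fullPath)) // drop "d:" on Windows
	p = strings.TrimPrefix(filepath.ToSlash(p), "/")                 // "\" -> "/", drop leading "/"
	return "file:///" + p
}

func main() {
	// On Windows this prints file:///subscription-store/internal/db/migrations
	fmt.Println(migrationsURL(`d:\subscription-store\internal\db\migrations`))
}
The result can then be passed as the first argument to migrate.NewWithDatabaseInstance.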
Since installing Service Fabric SDK 2.2.207 I'm not able to change the cluster data and log paths (with previous SDKs I could).
I tried:
Editing the registry keys in HKLM\Software\Microsoft\Service Fabric - they just revert back to C:\SfDevCluster\data and C:\SfDevCluster\log when the cluster is created.
Running PowerShell: & "C:\Program Files\Microsoft SDKs\Service Fabric\ClusterSetup\DevClusterSetup.ps1" -PathToClusterDataRoot d:\SfDevCluster\data -PathToClusterLogRoot d:\SfDevCluster\log - this works successfully, but upon changing the cluster mode to 1-node (a configuration newly available with this SDK), the cluster moves back to the C drive.
Any help is appreciated!
Any time you switch the cluster mode on a local dev box, the existing cluster is removed and a new one is created. You can use DevClusterSetup.ps1 to switch from a 5-node to a 1-node cluster by passing -CreateOneNodeCluster, and pass the data and log root paths to it as well, for example:
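A hedged example, reusing the script and path parameters from the question (the D: paths are just placeholders for wherever you want the cluster to live):
& "C:\Program Files\Microsoft SDKs\Service Fabric\ClusterSetup\DevClusterSetup.ps1" -CreateOneNodeCluster -PathToClusterDataRoot d:\SfDevCluster\data -PathToClusterLogRoot d:\SfDevCluster\log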
My context
I'm having errors in my deployment using AWS EB with my Flask application.
Now I'm inside the EC2 instance via eb ssh and need to explore the deployed source code of the application.
My problem
Where is the deployed application folder?
The source code is zipped and placed in the following directory:
/opt/elasticbeanstalk/deploy/appsource/source_bundle
There is no file extension but it is in the zip file format:
[ec2-user@ip ~]$ file /opt/elasticbeanstalk/deploy/appsource/source_bundle
/opt/elasticbeanstalk/deploy/appsource/source_bundle: Zip archive data, at least v1.0 to extract
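Since it is a zip archive, you can, for example, list its contents (assuming the unzip tool is available on the instance):
unzip -l /opt/elasticbeanstalk/deploy/appsource/source_bundle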
By searching for a specific/unique filename from the source code, we can find the location of our application folder, which in AWS EB turns out to be:
/opt/python/current
/opt/python/bundle/2/app
P.S. To search for YOUR_FILE.py:
find / -name YOUR_FILE.py -print
We have downloaded and installed a running instance of Wirecloud on our company server following the steps at:
https://conwet.fi.upm.es/wirecloud/install
We created the instance using the --quick-start option to try it out, and ran Wirecloud using the Django internal web server with the following command:
$ python manage.py runserver 0.0.0.0:8080 --insecure
We are able to enter the instance and move around the environment, but we have encountered a problem when we try to upload a widget to our local workspace. After selecting the widget on my computer (previously downloaded from the Fi-lab marketplace), we get the following message:
Error adding packaged resource: Internal Server Error.
We also tried downloading the zip file of the widget from GitHub, unzipping it and recompressing it as a .wgt file (compressed as a zip but with the extension changed to .wgt), and we get the same answer from our Wirecloud instance; however, if we upload the same package to the instance in fi-lab, it uploads successfully.
We don't know if it's because of the quick-start installation we made, or if we have to modify something in our widget files in order to be able to upload it to our local instance.
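For reference, such a repackaging can be done with something like the following (a sketch; the directory name ngsi-updater is just a placeholder, and config.xml should end up at the root of the archive):
cd ngsi-updater/                 # the unzipped widget directory
zip -r ../ngsi-updater.wgt .     # zip its contents and give the archive a .wgt extension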
Solved
The problem was in the config.xml file: the attribute names and the structure of the widgets that failed to upload were different from the config.xml template posted in the user guide.
After changing it to follow the structure of the template, it works fine.
My example widget was the NGSI Updater. The thing is that it uploads perfectly to the instance at FiLab, even though its config.xml file had a different structure from the template; but it triggers an error when uploaded to the local Wirecloud instance on my server.
I'm trying to save an app snapshot on OpenShift; however, it complains that my application isn't found. When I type rhc apps, my application is correctly listed, so I'm not sure what I could be doing wrong.
For example:
appname # http://appname-domain.rhcloud.com
when I run rhc snapshot save -a appname, I get:
Application 'appname' not found.
If the application is not in your default namespace, then you will need to add the -n option to your rhc snapshot save command. That could be your issue. For example:
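A sketch, where yournamespace is a placeholder for the namespace (domain) that actually contains the app:
rhc snapshot save -a appname -n yournamespace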