I have a Tomcat 8 web app running on OpenShift Pro, with 2 GB of storage allocated to it.
I want to increase the amount of storage to 4 GB.
In the Web Console, when I look at the Persistent Storage Claim the only Action available is Delete. I am reluctant to do this for obvious reasons...
What is the correct way to increase the storage available to an application?
Apparently there is no way to resize the storage attached to an application. You will need to create a new claim of the required size. You can then start a pod which mounts both volumes and copy the contents from one to the other. When you have cut over to the new volume and are happy with it, release the claim on the original.
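A rough sketch of that approach with the CLI, assuming a DeploymentConfig called myapp; the claim name, volume names and mount paths are placeholders:

# Create and mount a new 4 GB claim alongside the existing volume
oc set volume dc/myapp --add --name=new-storage --type=pvc --claim-size=4Gi --claim-name=storage-4g --mount-path=/new-data

# Copy the data across inside a running pod, then cut over
oc rsh <pod-name> cp -a /old-data/. /new-data/

# When you are happy with the new volume, drop the old one
oc set volume dc/myapp --remove --name=old-storage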
1) I think you should be able to edit the size of a persistent volume claim using the CLI. If you execute:
oc edit pvc <name of volume>
You should get an editor with a YAML representation of the persistent volume claim, where you can edit the requested size.
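The field to change is spec.resources.requests.storage. Note that this only takes effect if the underlying storage class supports volume expansion, which is not guaranteed on OpenShift Pro. A non-interactive equivalent would be something like:

oc patch pvc <name of claim> -p '{"spec":{"resources":{"requests":{"storage":"4Gi"}}}}'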
2) This is really a feature which Red Hat should support. I would open a ticket on the support page so that this gets added in the future.
We are trying to create a horizontally scalable web service on Google Compute Engine.
To do so, we created an instance template and a group of instances based on this new template. The instance group creates new virtual machines (we chose Debian) on which we install our Node.js application and other software.
We unfortunately found out that when a VM is turned off, everything inside it is erased. We would like to create a snapshot or a disk image to avoid completely rebuilding an instance from scratch, but we encountered two problems:
You can't create a disk image while the VM is running, but if we turn it off we lose all the data in it.
It is possible to create a snapshot of a VM while it is running, but when we create a new instance from the snapshot we can't link/join the new instance to the group of instances.
How can we get to the solution with those tools?
Thanks
Although it is recommended to shut down your VM instance before creating an image, it is possible to create an image of a running system.
Connect to your instance (SSH, RDP, etc.)
Shut down the applications that you can, such as databases. The purpose is to minimize disk activity and changes to the file system.
Sync the file system to disk. On Linux: sudo sync. On Windows: the Sysinternals Sync tool will help.
Go to the Google Console -> Compute Engine -> Disks.
Select the disk for the VM instance.
At the top of the screen there will be a CREATE IMAGE button.
Click the button and complete the dialog.
Make sure that you select Keep instance running (not recommended).
Once the image completes, launch a new instance from it and verify that everything is there and working as expected.
Note: You can also create a disk snapshot.
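If you prefer the command line, a rough gcloud equivalent of the steps above (disk, zone, image and template names are placeholders) would be:

# Create an image from the disk of a running instance; --force skips the
# "disk is in use" safety check, matching "Keep instance running" above
gcloud compute images create my-nodejs-image \
    --source-disk=my-instance-disk \
    --source-disk-zone=us-central1-a \
    --force

# Point a new instance template at the custom image so the managed
# instance group boots fully configured VMs
gcloud compute instance-templates create my-template --image=my-nodejs-image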
I downloaded a qcow2 image from the official Atomic site, but I am really frustrated with the steps to start this qcow2 image, and I found no clear, helpful tips on Google.
Anyone can give me some clear hints on how to start the qcow2 vm? Thanks.
The image name is: Fedora-Atomic-25-20170131.0.x86_64.qcow2
The Fedora Atomic Host (FAH) qcow is a cloud image, so it expects a Metadata source. Metadata is all the configuration bits a generic cloud image uses to get configured. Specifically, it requires something that the cloud-init package recognizes. You can read more about cloud-init here. If you just want to fire off some one-off VMs for testing, a tool you can use is testcloud.
Using testcloud to launch the VM, you'll be able to log in with the user 'fedora' (which is the default in Fedora based cloud images) and the password 'passw0rd' (you can change this default in the config).
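For example, a minimal testcloud invocation, assuming the qcow2 file has already been downloaded locally (the instance name is arbitrary), looks roughly like:

# Boot the cloud image; testcloud generates the cloud-init metadata seed
# for you and prints the instance's IP address when it is up
testcloud instance create atomic-test -u file:///path/to/Fedora-Atomic-25-20170131.0.x86_64.qcow2

# Log in with the defaults mentioned above
ssh fedora@<instance-ip>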
The other option is to download the installer ISO, and then you can install into a fresh VM and not have to worry about metadata at all. You can find that here, under "Other Downloads" on the right hand side.
The #fedora-cloud channel on Freenode is a good place to check if you have any other questions.
We currently have an application that runs on one dedicated server. I'd like to move it to OpenShift. It has:
A public-facing web app written in PHP
A Java app for administrators running on WildFly
A MySQL database
A filesystem containing lots of images and documents that must be accessible to both the Java and PHP apps. A third party FTPs a data file to the server every day, and a Perl script loads it into the database and the filesystem.
A Perl script occasionally runs ffmpeg to generate videos, reading images from and writing videos to the filesystem.
Is OpenShift a good solution for this, or would it be better to use AWS directly instead (for instance, because it has dedicated file system components)?
Thanks
Michael Davis
Ottawa
The shared file system will definitely be the biggest issue here. You could get around it fairly easily, though, by setting up your applications to use Amazon S3 or some other shared cloud file system.
As for the rest of the application, if I were setting this up I would:
Set up a scaled PHP application. Even if you set the scaling to use just one gear, this will allow you to put the MySQL database on its own gear and even choose a different size for it, such as having medium web gears (that run PHP) and a large gear that runs the MySQL database. This will also allow your WildFly gear to access the database, since it will have an FQDN (fully qualified domain name) that any of the applications on your account can reach. However, keep in mind that it will use a non-standard port instead of 3306.
Then you can set up your WildFly gear at whatever size you want, but keep in mind that the MySQL connection variables will not be there; you will have to put them into your Java application manually.
As for the Perl script, depending on how intensive it is, you could run it on its own gear of whatever size with some extra storage, or you could co-locate it with either the PHP or Java application as a cron job. You can have it store the files on Amazon S3 and pull them down/upload them as it does the ffmpeg operations on them (a rough sketch follows below). Since OpenShift is also hosted on Amazon (in the US-EAST region), these operations should be pretty fast, as long as you also put your S3 bucket in the US-EAST region.
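As a very rough sketch of that last item, assuming the AWS CLI is available on the gear and that the bucket name and frame-naming pattern are just examples, the cron job could do something like:

# Pull the source images, render the video, push the result back to S3
aws s3 sync s3://my-media-bucket/images /tmp/images
ffmpeg -framerate 24 -i /tmp/images/frame_%04d.jpg /tmp/output.mp4
aws s3 cp /tmp/output.mp4 s3://my-media-bucket/videos/output.mp4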
Those are my thoughts; hope it helps. Feel free to ask questions if you have them. You can also visit http://help.openshift.com and, under "Contact Us", click on "Submit a request"; make sure you reference this StackOverflow question so I know what you are talking about, and we can discuss solutions for any questions you might have.
I want to set up a separate Amazon EC2 instance where I store all the images uploaded by users via my website. I want to be able to serve images from this dedicated server. I know how to set up DNS names that would point to this server, but I would like to know how to set up the directories. For example, if I refer to an image URL as http://images.mydomain.com/images/sample.jpg, then
images.mydomain.com is the server name and
images should be the folder name
Now the question is: should a web server be running on this instance to serve the images, or can I just make the images folder public so that it is visible to the entire world? How do I avoid directory listing?
Pointer to any documentation would be greatly appreciated.
It certainly is possible to set up a separate EC2 instance to serve your images. You may have good reasons to do that; for example, you may want to authorize only specific users or groups of users to access certain images, in a way that's closely controlled by program logic.
OTOH, if you're just looking to segment the access of image/media files away from the server that provides HTML/web content, you will get much better performance / scalability by moving those files to a service that is specifically tuned for storage and web access. Amazon's S3 (Simple Storage Service) is one relatively straightforward option. Amazon's CloudFront content distribution network (CDN) or a competing CDN would be an even higher performance option.
Using a CDN for file access does add the complexity of configuring the CDN, but if you're going to the trouble of segmenting media access from your primary web server, and if you're expecting any significant I/O load, I've found it to be a high-return-for-effort-expended approach.
I would definitely not implement this as you are planning. You should store all your images in an Amazon S3 bucket and serve them via Amazon's CloudFront CDN. Why go through the hassle of setting up and maintaining an EC2 instance to do what Amazon has already done? S3 provides infinite storage, manages permissions, metadata, etc. CloudFront provides fast access to your images, caching them at edge locations all around the world. Additionally, you can use Amazon Route 53 (or some other DNS service) to point various CNAMEs to your CloudFront distribution.
If you're interested in this approach I'd be happy to provide more info on how to set this up.
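For what it's worth, a minimal sketch of the upload side with the AWS CLI (the bucket and file names are just examples):

# Upload an image to S3 and make it publicly readable
aws s3 cp sample.jpg s3://images-mydomain-com/images/sample.jpg --acl public-read

# With a CloudFront distribution in front of the bucket and a CNAME for
# images.mydomain.com pointing at it, the file is then served from
# http://images.mydomain.com/images/sample.jpg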
Yes, you will definitely need to run a web server on the machine; otherwise it will not be possible for clients to connect via HTTP on port 80 and view the images in a browser. This has nothing to do with directory listing being enabled. Once you have a web server running, you can disable directory listing in its configuration.
Install Apache on your server and run it (http://httpd.apache.org/docs/2.0/install.html). You then set up what's called a 'site' in its configuration, pointing to a local directory that will be the base directory for your server; it could, for example, be /home/apache on a Unix system. There you create your images folder. If Apache is set up correctly, you can then access your images via http://images.mydomain.com/images/sample.jpg.
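A minimal virtual host sketch for that setup, written as a shell snippet and assuming Apache 2.4 on a Debian-style layout (paths and domain are examples); directory listing is disabled with Options -Indexes:

# Write a minimal vhost that serves /home/apache and disables listings
sudo tee /etc/apache2/sites-available/images.conf <<'EOF'
<VirtualHost *:80>
    ServerName images.mydomain.com
    DocumentRoot /home/apache
    <Directory /home/apache>
        Options -Indexes
        Require all granted
    </Directory>
</VirtualHost>
EOF
sudo a2ensite images.conf && sudo apache2ctl graceful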
Currently, I am running WordPress as my blog engine on free hosting, but I'm planning to move to a Git-based blog engine (Jekyll, Toto) on the Ruby platform. I see that Heroku provides a free account, but I don't see any details on bandwidth, disk space, or requests.
Heroku provides, for free, a 5MB database
Heroku provides, for free, 1 dyno. A dyno is an instance of your application running and responding to requests. If each instance of your application can serve each request in 100ms, then you get 600 requests/minute with the free account.
Your application code and its assets (the slug) are limited to 300 MB in total. Your application also has access to the local filesystem, which can serve as an ephemeral scratch space for that specific dyno, and should be able to store at least 1 GB of data.
There is a 2TB/month limit on bandwidth.
Here is the problem I had....
"We have photo and file upload for several features in our app, but they do not save.
I have read on stackoverflow that "You are limited to 100MB of disk space, but you are not permitted to save any files (including user uploads) to disk because the filesystem is readonly. The 100MB of disk space is for your application code and other assets. The 100MB is the maximum slug size, and includes all gems referenced by your project."
We need our users to be able to successfully upload files and have them save. How do we make this happen?"
Here is Heroku Support's response...
"Hi, the filesystem is writeable on cedar, and can handle significantly more than 100MB; at least 1GB.
That said, it's dyno-local and ephemeral; see https://devcenter.heroku.com/articles/dynos#ephemeral-filesystem
For permanent storage, we recommend something like S3: https://devcenter.heroku.com/articles/s3
Hope this helps."
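A quick way to see the ephemeral behaviour for yourself, using one-off dynos (each heroku run command gets its own fresh filesystem; the file name is just an example):

heroku run bash
~ $ echo hello > /tmp/upload.txt   # written to this dyno's local disk
~ $ exit
heroku run bash
~ $ cat /tmp/upload.txt            # gone: No such file or directory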
For those who come here after me: you can get the hobby pack if you are a student and have the GitHub Student Developer Pack. Here are the details: Heroku for GitHub Students
Heads up: Heroku free tier is going away soon
"Starting November 28, 2022, free Heroku Dynos, free Heroku Postgres, and free Heroku Data for RedisĀ® plans will no longer be available. If you have apps using any of these resources, you must upgrade to paid plans by this date to ensure your apps continue to run and retain your data. See our blog and FAQ for more info."
"What happens if I take no action on my free apps or databases or do not upgrade to a paid plan?
free dynos will be scaled down to 0 and hobby-dev databases will be deleted starting November 28, 2022."
REF:
https://devcenter.heroku.com/articles/free-dyno-hours
https://help.heroku.com/RSBRUH58/removal-of-heroku-free-product-plans-faq
https://blog.heroku.com/next-chapter
Also, loading your page might take a long time (5-10 seconds).
If a free dyno isn't accessed for a while, it goes into sleep mode, and there is a delay while the dyno becomes active again; for me this takes 5-10 seconds. You cannot fool the system by accessing it frequently, because that consumes your free dyno hours.