Retrieving files from Google Compute Engine

So I was using the free Google Compute Engine trial, but I didn't realize in time that the trial was about to end, and now there are important files on the instance that I really need to get.
How would I go about retrieving files from it?
Support told me that the project is still active; it's just my billing that's closed (since the trial ran out).
I tried to use the scp command in the gcloud SDK shell, but couldn't really figure out how to use it.
The virtual machine is running Windows Server 2016 (desktop).
Any help is highly appreciated.
I'm not very experienced here, but I really need the files nevertheless.
Thanks.

According to the documentation, once the free trial ends, all virtual machines still exist but are stopped (so you won't be able to access them -- and scp doesn't work on Windows anyway).
If you need access to the data, you'll need to upgrade the trial to a paid account, start the virtual machine, copy the data off it, and then stop it; you'll be billed for the time the virtual machine is running (so you may wish to downsize it to save money before you start it).
You only have 30 days from the end of the trial to do this; after this time, the VMs (and their disks) will be deleted (and unrecoverable by anyone).
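If it helps, here's a rough sketch of the commands involved once billing is upgraded (instance, zone, and bucket names are placeholders; since this VM runs Windows, gcloud compute scp won't apply, so the sketch goes through a Cloud Storage bucket instead):

    # Start the VM (placeholder names throughout):
    gcloud compute instances start my-instance --zone=us-central1-a

    # On a Linux VM you could copy files out directly:
    #   gcloud compute scp my-instance:/path/to/files ./local-dir --zone=us-central1-a
    # On this Windows VM, connect via RDP instead, and inside the VM push
    # the files to a Cloud Storage bucket:
    #   gsutil cp -r C:\important-files gs://my-rescue-bucket/

    # Then pull them down to your own machine:
    gsutil cp -r gs://my-rescue-bucket/important-files ./local-dir

    # Stop the VM again so the billing clock stops:
    gcloud compute instances stop my-instance --zone=us-central1-a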

Related

Google Compute Engine VM stopped by Google. Will not restart

At the end of the Google Cloud trial period, my VM was stopped by Google. This is understandable, but what comes next doesn't seem so. I have now upgraded to a paid account, but I am still unable to restart the VM. Google Cloud support will not help turn the VM back on. They've told me that my only support option is to post my issue here on Stack Overflow. It seems odd that a business like Google would rely on outsourcing its technical support to the people here.
So here I am. I thank you in advance for reading my issue.
The error message is:
"Starting VM instance "myinstance" failed. Error: Google Compute Engine is not ready for use yet in the project. It may take several minutes if Google Compute Engine has just been enabled, or if this is the first time you use Google Compute Engine in the project."
I have tried multiple times over two days and I still get this same error. The one thing Google did check and verify is that my billing is set up correctly. The dashboard shows that the billing is linked to the project that contains this VM.
Can anyone suggest any troubleshooting steps or solutions?
Thank you.

Openshift OKD Excessive Logging

So I installed a single-host OpenShift OKD v3.11 cluster, on a VM running CentOS 7.8.2003.
It seems to have installed OK, except that it continually streams verbose logs to /var/log/messages: around 5 messages per second, and all seem to be about throttling requests. A typical log message:

    Jun 13 15:49:13 centos7 journal: I0613 14:49:13.011402 1 request.go:485] Throttling request took 196.341689ms, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-service-cert-signer/serviceaccounts/service-serving-cert-signer-sa
The only reference I have managed to find is a question here but the access to the discussion is only available to those with deep pockets.
https://access.redhat.com/solutions/3348921
I assume these logs are nothing to worry about, so my main question is: what is the "best"/cleanest/simplest/easiest way to ensure the OpenShift cluster doesn't keep filling up /var/log/messages, while still logging any important messages there?
I would recommend looking at the root cause of this behavior. These messages indicate that a lot of requests are hitting your API; typically some application is performing calls in a tight loop. In your case, check openshift-service-cert-signer to see whether it shows any warnings or an abnormal number of log messages.
If you want to get rid of the throttling messages, you can increase the queries per second (QPS) limit for the API server clients: see Recommended Practices for OKD Master Hosts (lower part).
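For reference, a sketch of the relevant stanza from that page (the numbers are illustrative starting points, not values verified against this particular cluster, and the restart commands are those the 3.11 docs describe for masters running as static pods):

    # On the master host, edit /etc/origin/master/master-config.yaml:
    #
    #   masterClients:
    #     externalKubernetesClientConnectionOverrides:
    #       qps: 200
    #       burst: 400
    #     openshiftLoopbackClientConnectionOverrides:
    #       qps: 300
    #       burst: 600
    #
    # Then restart the master static pods:
    master-restart api
    master-restart controllers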
"The only reference I have managed to find is a question here but the access to the discussion is only available to those with deep pockets. https://access.redhat.com/solutions/3348921"
I do not understand why you're saying that, as I can access that document with my free Red Hat account without any subscriptions. Have you tried with a free account as it says on the site?
Simon's answer was helpful but I've finally got to the bottom of this.
The problem was simply that the version of Docker I had installed was old. At the time of writing, the latest version of CentOS is 7.8.2003, and if you install that and then simply run "yum install docker", hoping to get something reasonably new and compatible with the rest of the Linux installation, you'll probably be making a mistake.
The right thing to do is to follow the simple steps here:
https://docs.docker.com/engine/install/centos/
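In case that page changes, the gist of the steps at the time of writing (check the page itself for the current commands):

    # Remove the distro-packaged Docker first:
    sudo yum remove -y docker docker-client docker-common
    # Add Docker's own repository and install Docker CE from it:
    sudo yum install -y yum-utils
    sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
    sudo yum install -y docker-ce docker-ce-cli containerd.io
    # Start Docker and have it come up on boot:
    sudo systemctl enable --now docker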
The reason I found the problem was that excessive logging from my OpenShift cluster wasn't the only issue; I started seeing strange behaviour from other containers. A process of trial and error narrowed the issue down to the default CentOS version of Docker. Once I followed the page above, all my problems vanished, including the original problem of /var/log/messages getting hammered by OpenShift containers.
The main reason I decided to answer my own question is that surely someone else is going to be as impatient/thick as me and simply install CentOS 7 and then try "yum install docker", without knowing they're about to enter a world of pain.

ExpressionEngine : git : local development : remote database

To those of you that are trying to be good little developers and version control their ExpressionEngine sites with git, how do you handle your database?
In my limited experience with multiple developers working on one ExpressionEngine site, we've all had to run off of a single MySQL development database on a remote web server. For those of you that have tried this, it is PAINFULLY slow: page loads can easily take 5-10 seconds, making development extremely difficult. It would be quicker to work entirely on a remote development server, but I am trying to steer away from a remote MySQL server so I can work from anywhere, without depending on Internet connection speed/quality.
Just wondering how others handle their MySQL databases.
Do all of your developers run off of one central database? Have you dealt with slowness issues like we have?
Do you keep your database under version control? How do you handle export/imports among multiple developers and multiple branches?
With one developer I can import/export/commit the database very easily but as soon as you add another developer to the mix, it gets very VERY muddy. Looking forward to hearing everyone's thoughts on this mammoth topic.
Thanks!
It seems a lot of time is lost on failing DNS requests when using a remote database.
Start your MySQL server with the --skip-name-resolve option. (More information on this topic can be found here: http://dev.mysql.com/doc/refman/5.0/en/host-cache.html)
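A minimal sketch of the two ways to apply it (note that with name resolution off, any GRANTs defined against hostnames must be redefined against IP addresses):

    # One-off, when starting the server by hand:
    mysqld --skip-name-resolve

    # Or permanently, in my.cnf:
    #   [mysqld]
    #   skip-name-resolve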
Having a remote database still seems to be the best way for us to work on a project with multiple developers.
I almost always use a central database for development. Depending which host you use, the speed difference may not be huge.
Obviously, if you're not making changes to the database, i.e. only doing template development, keeping the database in sync matters less, so you could potentially bring up a local copy of it. You just have to remember to replay any database changes you do end up making.
As far as version control, I keep a copy of my base EE install's SQL file in my base repository. Other than that I don't usually keep copies of the database in Git, so I don't do a lot of importing/exporting, etc.
Have you looked at the EE Profiler recently? You'll probably notice in the neighborhood of 20-80 queries on your home page, depending on its complexity.
The problem is that, for each query, MySQL must execute a remote request for data, download the response, and then present ExpressionEngine with its data. Those 20-80 round trips to the database are what's causing your delay, and I don't think there is much you can do about it. When using a remote (outside our network) database, I get the same delay as you.
When MySQL is running on your machine or on the production server, there are no added network requests introducing latency into its requests for data. This is the difference.
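You can measure the round-trip cost yourself with a trivial query (hostname, user, and password below are placeholders; the password is passed inline so the prompt doesn't skew the timing):

    # Each command runs a single no-op query; the difference is almost pure network latency.
    time mysql -h db.remote-host.example -u ee_user -pYOURPASS -e 'SELECT 1;'
    time mysql -h 127.0.0.1 -u ee_user -pYOURPASS -e 'SELECT 1;'
    # At 20-80 queries per page, even 50 ms per round trip adds 1-4 seconds of load time.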
As for fixes, all you can do is move to a database hosted on your internal network. We have a Linux machine that mimics our production environment that we use for staging. Since it's on our network, we can use the local IP address in our database.php file. This is much faster.
The problem that we still have is the issue of channels/fields/entries. When a developer is working on a new section, they'll likely need to create a new channel and fields and/or new entries. When we're ready to push that functionality to production, we have to make those changes on the production server manually, as there is no way to reliably export them. I am hopeful about this add-on, though; we'll see.
In my company (4 developers) we each run our own DB locally. But recently I tested Rackspace Cloud Databases (there are other cloud DB providers too) for a heavy DB that could become difficult to run on a little laptop. It's relatively less expensive than running our own DB server, and it can be set up or deleted in a minute.

How to log Windows 7 network traffic & disk usage with MySQL?

I'm running Windows 7 Pro and have a few servers running. One of them is an SSH / file server that was set up via Cygwin. I already have logging set up internally using syslogd; however, it does not provide adequate logging. When a user is connected to the server, I can see him/her in the Windows 7 Resource Monitor, which shows his/her IP address as well as how much data is being sent/received. When a user is downloading a file from the file server, I can also see what file he/she is downloading by looking at the disk usage.
Herein lies the first question: How can I log users' IP address, the time they connect & disconnect, what files they download, and what their download speed was, to a database in MySQL?
In addition to the aforementioned server, I also use IIS to host a website, and would like to have some sort of networking logging.
If I could find a tool that would work for both of these servers that would be the best solution.
I did some searching and found a program called Snort that looks like it would work for the network side of things, but not for the disk usage. I'm not familiar with this program at all, but at first glance maybe it could accomplish part of what I want to do? Maybe there is an easier/better way?
I'm pretty new to MySQL and know very little about network and disk logging so any and all help and guidance would be much appreciated. Thanks!
Advanced Web Statistics does a pretty good job of making sense of the IIS log files, and though it will give you more information than you need, it will certainly give you the information you want. It is open source, and my hosting provider uses it for the ASP.NET sites I have developed.
As far as logging the information to MySQL:
I am assuming that you already have, or know how to get, the information, and you simply want to log it to a MySQL DB.
1st, you will need to create the database.
2nd, you need the MySQL connector for your programming language of choice. The MySQL ADO.NET connector is excellent and easy to use. I am also assuming you know at least one programming language and how to connect it to a database. If not, I recommend C# with ADO.NET; it is super easy and there are plenty of tutorials online.
3rd, write a program to send your information to the database when you receive it.
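As a concrete starting point for the 1st step, here is one possible schema sketch, runnable from a Cygwin shell (all database, table, and column names are invented for illustration; adapt the columns to whatever your logging hook actually captures):

    mysql -u root -p <<'SQL'
    CREATE DATABASE IF NOT EXISTS server_logs;
    CREATE TABLE IF NOT EXISTS server_logs.transfers (
        id              INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
        client_ip       VARCHAR(45)   NOT NULL,  -- wide enough for IPv4 or IPv6
        connected_at    DATETIME      NOT NULL,
        disconnected_at DATETIME      NULL,
        file_path       VARCHAR(1024) NULL,      -- file downloaded, if any
        bytes_sent      BIGINT UNSIGNED NULL,
        avg_speed_kbps  INT UNSIGNED  NULL
    );
    SQL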

Heroku free account limited?

Currently, I am running WordPress as my blog engine on free hosting, but I'm planning to move to a git-based blog engine (Jekyll, Toto) on the Ruby platform. I see that Heroku provides a free account, but I don't see any details on bandwidth, disk space, or requests?
Heroku provides, for free, a 5 MB database.
Heroku provides, for free, 1 dyno. A dyno is an instance of your application running and responding to requests. If each instance of your application can serve a request in 100 ms, then one dyno gives you 600 requests/minute (60 s / 0.1 s per request).
Your application code and its assets (the slug) are limited to 300 MB in total. Your application also has access to the local filesystem, which can serve as an ephemeral scratch space for that specific dyno, and should be able to store at least 1 GB of data.
There is a 2TB/month limit on bandwidth.
Here is the problem I had....
"We have photo and file upload for several features in our app, but they do not save.
I have read on stackoverflow that "You are limited to 100MB of disk space, but you are not permitted to save any files (including user uploads) to disk because the filesystem is readonly. The 100MB of disk space is for your application code and other assets. The 100MB is the maximum slug size, and includes all gems referenced by your project."
We need our users to be able to successfully upload files and have them save. How do we make this happen?"
Here is Heroku Support's response...
"Hi, the filesystem is writeable on cedar, and can handle significantly more than 100MB; at least 1GB.
That said, it's dyno-local and ephemeral; see https://devcenter.heroku.com/articles/dynos#ephemeral-filesystem
For permanent storage, we recommend something like S3: https://devcenter.heroku.com/articles/s3
Hope this helps."
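If you go the S3 route, the usual pattern is to keep the credentials in config vars and have the app upload directly to the bucket (the variable names below are common conventions, not anything Heroku requires):

    heroku config:set AWS_ACCESS_KEY_ID=xxx AWS_SECRET_ACCESS_KEY=yyy S3_BUCKET_NAME=myapp-uploads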
For those who come here after me: you can get the Hobby pack if you are a student and have the GitHub Student Developer Pack. Here are the details: Heroku for GitHub students
Heads up: Heroku free tier is going away soon
"Starting November 28, 2022, free Heroku Dynos, free Heroku Postgres, and free Heroku Data for RedisĀ® plans will no longer be available. If you have apps using any of these resources, you must upgrade to paid plans by this date to ensure your apps continue to run and retain your data. See our blog and FAQ for more info."
"What happens if I take no action on my free apps or databases or do not upgrade to a paid plan?
free dynos will be scaled down to 0 and hobby-dev databases will be deleted starting November 28, 2022."
REF:
https://devcenter.heroku.com/articles/free-dyno-hours
https://help.heroku.com/RSBRUH58/removal-of-heroku-free-product-plans-faq
https://blog.heroku.com/next-chapter
Also, loading your page might take a long time (5-10 sec).
If a free dyno isn't accessed for a while, it goes into sleep mode, and there is a delay while the dyno becomes active again; for me this takes 5-10 sec. You can't fool the system by accessing it frequently, because doing so consumes your free dyno hours.