I'm really new to GCP.
We have multiple MSSQL servers that make heavy use of UNC paths. I have tried to set up Filestore and mount it on a Windows VM, but I cannot get UNC paths to work at all on the Windows Server 2019 instance.
Does GCP support UNC paths from Filestore?
Yes, GCP supports UNC paths for file shares.
The most common protocols for exporting file shares are Server Message Block (SMB) for Windows and Network File System (NFS) for Linux (and in some cases Windows).
You can map/mount an SMB share created with Cloud Volumes Service by using its uniform naming convention (UNC) path, which the UI displays.
The Cloud Volumes Service documentation includes FAQs and suggestions on how SMB works.
Note:
Cloud Volumes Service doesn't let you use more than 10 characters for a cloud volume's SMB name.
A UNC path follows the pattern \\<hostname>\<sharename> (for example, \\cvssmb-d2e6.cvsdemo.internal\quirky-youthful-hermann).
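As a quick sketch, mapping such a share from a Windows command prompt looks like this (the drive letter is my own choice, and the path reuses the hypothetical example above):

net use Z: \\cvssmb-d2e6.cvsdemo.internal\quirky-youthful-hermann

If the mapping succeeds, files are reachable either through Z: or directly through the UNC path.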
I have two servers that I am working on locally. The first is a front-end in Vue.js, and the second is a back-end in Flask. The client makes API requests to the back-end.
I have to upload both of these to a remote Linux VM (Debian), for which I have credentials, and I can successfully connect to it via PuTTY.
How do I transfer my two directories to the VM?
Then I should change the address that the client uses for API requests to the server, and that is all? Or will I have to do something else?
You can copy directories over the scp or sftp protocol. In your case, the easiest way is the WinSCP software.
scp and sftp (implemented by WinSCP), as well as the remote shell (implemented by PuTTY), all run over the SSH protocol. PuTTY gives you a remote terminal (i.e. you can issue commands on the server), while WinSCP uploads, downloads and manages files on it.
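If you prefer the command line, a one-off copy with scp might look like this (the directory names, user and host are placeholders for your own):

scp -r ./frontend ./backend user@your-vm-address:/home/user/

The -r flag copies the directories recursively.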
If you are developing something, it is likely that you will need to do this deployment regularly. These tools are only good for one-off deployments. In professional environments, deployment is automated and happens quickly.
It is very likely that you also have a database in your project. The most common options there are either some DB-level synchronization, or dumping the database into files and synchronizing at the file level. But that is another topic.
It is also unlikely that you need two different VMs for the Vue.js and the Flask parts. You could run them together on a single VM, which would make your task far easier.
You will likely have a hard time getting your deployment working well on your server. This is all just the beginning. But don't worry: once you've learned it all, it will be easy!
What is the story for local development with API Management? I want to pull down a local copy of APIM for local developers to build their APIs against. Is that possible (including configuring the backend resources to point at local copies of the code that would run in the cloud)?
I was thinking maybe a Docker image of APIM, configured to run locally on a developer machine for our environment. Is this possible?
Yes. Dev tools support is not there yet, but you can run it locally with the self-hosted gateway: https://learn.microsoft.com/en-us/azure/api-management/self-hosted-gateway-overview. It still requires a connection to the cloud, though, and configuration must be done either through the Azure Portal or ARM.
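Roughly, running the self-hosted gateway in Docker looks like the sketch below. The container name and port mappings are my own choices, and the Azure Portal generates the exact command and env-file for your instance, so treat this only as the shape of it:

docker run -d -p 80:8080 -p 443:8081 --name apim-gateway --env-file env.conf mcr.microsoft.com/azure-api-management/gateway:latest

Here env.conf carries the gateway's configuration endpoint and access token, copied from the Azure Portal; that is the "still requires connection to cloud" part.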
I have a Google Firebase app with a Cloud Functions back-end. I'm using the Node.js 10 runtime, which is Ubuntu 18.04. Users can upload files to Google Cloud Storage, and that triggers a GCF function.
What I'd like that function to do is copy that file to Google Cloud Filestore (that's File with an L), Google's new NFS-mountable file server. They say the usual way to do that is from the command line with gsutil rsync or gcloud compute scp.
My question is: are either or both of those commands available on GCF nodes? Or is there another way? (It would be awesome if I could just mount the Filestore share on the GCF instance, but I'm guessing that's nontrivial.)
Using NFS-based storage is not a good idea in this environment. NFS works by providing a mountable file system, which will not work in the Cloud Functions environment: the file system there is read-only, with the exception of the /tmp folder.
You should consider using a cloud-native storage system like GCS, for which Application Default Credentials are already set up; see the documentation for the list of supported services.
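For instance, here is a minimal sketch of a storage-triggered function that does a server-side copy within GCS instead of writing to NFS. It is shown in the Python runtime for brevity (the question's Node.js runtime has the equivalent @google-cloud/storage client), and the destination bucket name is hypothetical:

from google.cloud import storage

client = storage.Client()  # picks up Application Default Credentials

def on_upload(event, context):
    """Triggered by a google.storage.object.finalize event."""
    src_bucket = client.bucket(event['bucket'])
    src_blob = src_bucket.blob(event['name'])
    dst_bucket = client.bucket('my-processed-files')  # hypothetical target bucket
    # Server-side copy: the bytes never pass through the function instance.
    src_bucket.copy_blob(src_blob, dst_bucket, event['name'])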
According to the official Cloud Filestore documentation:
Use Cloud Filestore to create fully managed NFS file servers on Google Cloud Platform (GCP) for use with applications running on Compute Engine virtual machine (VM) instances or Google Kubernetes Engine clusters.
You cannot mount Filestore on GCF.
Also, you cannot execute gsutil or gcloud commands from a Google Cloud Function. See Writing Cloud Functions:
Google Cloud Functions can be written in Node.js, Python, and Go, and are executed in language-specific runtimes.
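For contrast, on a Compute Engine VM (where Filestore is supported) mounting is only a couple of commands; a sketch with placeholder values for the server IP and share name:

sudo apt-get -y install nfs-common
sudo mkdir -p /mnt/filestore
sudo mount <filestore-ip>:/<share-name> /mnt/filestore

Nothing like this is available inside a Cloud Function.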
I was looking at a README file that raised some questions about database persistence on OpenShift.
Note: Every time you push, everything in your remote repo dir gets recreated. Please store long term items (like an sqlite database) in the OpenShift data directory, which will persist between pushes of your repo. The OpenShift data directory is accessible relative to the remote repo directory (../data) or via the environment variable OPENSHIFT_DATA_DIR.
https://github.com/ryanj/nodejs-custom-version-openshift/blob/master/README#L24
However, I could find no confirmation of this on the OpenShift website. Is this README out of date? I'd rather not test this, so it would be much appreciated if anyone had any firsthand knowledge they'd be willing to share.
Yep, that readme file is up to date regarding SQLite. All gears have SQLite installed on them. Data should be stored in the persistent storage directory on your gear. This does not apply to MySQL/MongoDB/PostgreSQL as those databases are add-on cartridges pre-configured to use persistent storage, whereas SQLite is simply installed and available for use.
See the first notice found in the OpenShift Origin documentation here: https://docs.openshift.org/origin-m4/oo_cartridge_guide.html
Specifically:
Cartridges and Persistent Storage: Every time you push, everything in your remote repo directory is recreated. Store long term items (like an sqlite database) in the OpenShift data directory, which will persist between pushes of your repo. The OpenShift data directory can be found via the environment variable $OPENSHIFT_DATA_DIR.
The official OpenShift Django QuickStart shows the design pattern you should follow for adding SQLite to your application via the deploy action hook. See: https://github.com/openshift/django-example/blob/master/.openshift/action_hooks/deploy
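At runtime, the application only needs to resolve its SQLite path from that variable; a minimal sketch (the database filename is hypothetical):

import os
import sqlite3

# Persistent gear storage; fall back to the working dir for local development.
data_dir = os.environ.get('OPENSHIFT_DATA_DIR', '.')
conn = sqlite3.connect(os.path.join(data_dir, 'app.sqlite3'))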
I'm running the latest Mercurial and Python 2.6; IIS6 is using the wildcard ISAPI method to attach the site to the Mercurial hgwebdir_wsgi.
[paths]
/ = \\COMP3254\TestRepo\*
[web]
baseurl = /
allow_push = *
push_ssl = false
style = monoblue
The setup works perfectly if I reference the local drive (E:\repo\*), but it doesn't work if I specify the network path as above. I've given the server (MERCDEV01$) full permissions on the shared folder on COMP3254; I can't think of any other reason it wouldn't work.
This could be a delegation problem. You can NTLM-authenticate to the web server (and your credentials will be trusted on its local drives), but the web server can't authenticate you to a remote location, because it does not know your password (i.e. it can't delegate your credentials).
Have you tried setting up the network location as a virtual directory on the server? You can then enter credentials IIS will cache and use for accesses to that location.
hgwebdir doesn't work with UNC paths for me either - it means you can't use it as a server without hosting the repos on the same machine as the web server.
OK, I solved it. The user running the web server (Apache in my case) defaulted to Local System, which does not have permission to access network resources. When I changed it to a user with domain access (and gave that user admin access to the local machine), everything worked fine.
Running a web server with that level of privilege is of course risky, but in my case it is an internal server. Presumably you can cherry-pick just the right permissions to do this in a more secure manner.