I am running Google Drive File Stream Version: 25.252.289.1612 on OS X 10.12.6, which mounts /Volumes/GoogleDrive/
Open a Bash shell (I use iTerm3). When I run ls (or ls -a) in this directory, I get no results. But if I use tab completion, I can see the two subdirectories, 'My Drive' and 'Team Drive', as well as some other entries. As I cd into subdirectories, ls again shows no files but tab completion shows them. And I can operate on files and directories: if there is a file foo.gsheets, I can run open foo.gsheets, which will launch it in Chrome, for example. Or if I had foo.txt I could vi foo.txt. I checked that I have read/write/execute permission on the files and directories, and I even tried sudo ls.
Any ideas? Here is my mount
drivefs on /Volumes/GoogleDrive (dfsfuse_DFS, local, nodev, nosuid, synchronous, mounted by aberezin)
This may be related to the implementation of osxfuse:
https://github.com/osxfuse/osxfuse/issues/503
According to https://docs.ipfs.io/guides/concepts/pinning/ , running the command ipfs add hello.txt apparently "pins" the file "hello.txt", yet why don't I see the file listed afterwards when I run the command ipfs files ls? It only lists files I added with the IPFS desktop app. Why is "hello.txt" not in the list now?
Also, I found a list of so-called "pinned" objects by running the command ipfs pin ls, however none of the CIDs that show up there correspond to "hello.txt", or even to any of the previously mentioned files added using the IPFS desktop app.
How does one actually manage pinned files?
cool to see some questions about IPFS pop up here! :)
So, there are two different things:
Pins
Files/Folders (called MFS, the Mutable File System)
The two overlap heavily, but it's easiest to think of MFS as a locally alterable filesystem that maps 'objects' to files and folders.
You have a root ( / ) in your local IPFS client, where you can put files and folders.
For example you can add a folder recursively:
ipfs add -r /path/to/folder
You get a CID (content ID) back. This CID represents the folder, all its files, and the whole file structure as an immutable data structure.
This folder can be mapped to a name in your local root:
ipfs files cp /ipfs/<CID> /<foldername>
An ipfs files ls will now show this folder by name, while an ipfs pin ls --type=recursive will show the content ID as pinned.
If you use the (Web)GUI, files will show up under the 'files' tab, while the pins show up under the 'pins' tab.
Just a side note: you don't have to pin a file or folder stored in your MFS; everything stored there will remain available.
If you're going to change the folders, subfolders, files, etc. in your MFS, the folder will get a different content ID, and your pin will still make sure the old version is held on your client.
So if you add another file to your folder, by something like cat /path/to/file | ipfs files write --create /folder/<newfilename>, the CID of your folder will be different.
Compare the output of ipfs files stat --hash /folder before and after the change.
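The reason the CID changes is that a CID is derived from the content itself. Real IPFS CIDs are multihashes over a Merkle DAG, but the core idea can be sketched with a plain SHA-256 as a simplified stand-in (fake_cid below is an illustration, not an IPFS API):

```python
import hashlib

def fake_cid(data: bytes) -> str:
    """Simplified stand-in for a CID: a hash of the content.
    Real CIDs are multihashes over a Merkle DAG, not plain SHA-256."""
    return hashlib.sha256(data).hexdigest()

before = fake_cid(b"file-a\nfile-b\n")
after = fake_cid(b"file-a\nfile-b\nnewfile\n")  # folder listing changed
print(before != after)  # True: adding a file yields a different identifier
```

This is why the old pin keeps pointing at the old version: the old CID still identifies exactly the old content.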
Hope I didn't fully confuse you :D
Best regards
Ruben
Answer: ipfs pin ls --type recursive
It's simple. Just run that command.
Some further notes: the type can be "direct", "recursive", "indirect", and "all". I ran these commands with these results ("Error: context canceled" means that I canceled the command with ctrl+c):
ipfs pin ls --type all - took too long, "Error: context canceled"
ipfs pin ls --type direct - took too long, "Error: context canceled"
ipfs pin ls --type indirect - took too long, "Error: context canceled"
ipfs pin ls --type recursive - worked, showed multiple, probably all, pins of mine
I don't really know what types other than recursive mean. You can read about them from the output of this command: ipfs pin ls --help.
I want to copy a file from my laptop to the Compute Engine instance. Can I copy the file to a folder which does not exist but is created during the copy?
Example:
I have a file index.php and I want to copy it to /var/www/test
The folder test is not present.
When I run the command:
gcloud compute copy-files index.php user@instance-1:/var/www/test
It does not give any error. But when I ssh into the instance, it shows test under /var/www, but
cd /var/www/test
gives me:
-bash: cd: test: Not a directory
How can I create a directory and then copy a file?
This is an interesting set of circumstances leading to surprising results. To understand what's going on, it's useful to know that the gcloud compute copy-files command runs the scp command under the covers.
In this case, the scp command is interpreting /var/www/test as a destination path. Because /var/www exists, but /var/www/test does not, it's interpreting the test portion as the name of the file you want to save on the remote machine. So it's dutifully copying the contents of index.php into a file called /var/www/test on the remote machine.
To get the results you want, you should remove the file (with rm /var/www/test), and create a directory (with mkdir /var/www/test). If you were starting with a fresh machine, you could achieve the desired result like this:
gcloud compute ssh user@instance-1 --command='mkdir /var/www/test'
gcloud compute copy-files index.php user@instance-1:/var/www/test
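The same destination-path rule can be seen locally with plain cp, which follows the rule scp applies here: if the final path component does not already exist as a directory, it becomes the target file name (a sketch using a throwaway temp directory):

```shell
set -e
tmp=$(mktemp -d)
mkdir "$tmp/www"
echo '<?php' > "$tmp/index.php"

# /www/test does not exist, so cp creates a FILE named test
cp "$tmp/index.php" "$tmp/www/test"
ls -l "$tmp/www"        # test shows up as a regular file, not a directory

# create the directory first, and the copy lands inside it
mkdir "$tmp/www/test2"
cp "$tmp/index.php" "$tmp/www/test2/"
ls "$tmp/www/test2"     # shows index.php
```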
Problem:
When I save a php file to a development server over sshfs, the group is changed from apache to wheel. Given that the file belongs to my user and is readable by apache, this breaks the dev copy. I have to reset the group after every save.
Server file permissions:
As recommended by the Drupal community, my user owns the files but they are grouped to apache.
How I'm mapped:
I have a script that helps me do a local map to development servers running RHEL 7.1, change the folder in term, open a file browser, and start PHP Storm. I've set up the following convenience function for myself to do all this in one nice command.
$ sshmap devsvr
Which executes this function:
function sshmap {
    if [[ -z $2 ]]; then
        basefolder="/var/www/html/"
    else
        basefolder=$2
    fi
    fusermount -uz ~/mount/"$1"
    mkdir -p ~/mount/"$1"
    sshfs me@"$1".domain.ext:"$basefolder" ~/mount/"$1"
    # file browser
    nemo ~/mount/"$1" &
    # change local folder to that mount point
    cd ~/mount/"$1"
    # start phpstorm
    pstorm "$HOME/mount/$1" 2>/dev/null &
}
Question:
How can I control this and prevent this change? Is there an argument in sshfs that I'm missing?
Errata:
Safe write is turned on in PhpStorm.
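One server-side mitigation worth considering (not from this thread, just a common convention for shared web roots) is the setgid bit on the directory: new files created inside it then inherit the directory's group rather than the creator's default group, which matters because safe write replaces the file with a newly created one. On the server this is chmod g+s on the web root; a sketch of setting and checking the bit, using a temp directory as a stand-in for /var/www/html:

```python
import os
import stat
import tempfile

d = tempfile.mkdtemp()
# rwxrwxr-x plus the setgid bit: files created inside inherit the dir's group
os.chmod(d, 0o2775)

mode = os.stat(d).st_mode
print(bool(mode & stat.S_ISGID))  # True once the setgid bit is set
```

On the real server you would also need the directory's group set to apache (chgrp apache /var/www/html) for the inherited group to be the right one.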
I mistakenly installed the Google Cloud SDK to the wrong directory on my local machine (I installed it to my Google Drive folder, which is not ideal). What is the preferred method of moving the folder? I haven't tried anything yet for fear of creating issues with environment variables that may have been set during installation. I'm running OS X on my local machine.
The Cloud SDK is self-contained, so the google-cloud-sdk directory can generally be moved wherever you like. The only thing configured outside that directory is your ~/.bash_profile file (only if you said yes during the installation process), which adds the SDK to your PATH and installs command tab completion. If you had the installer update that file, probably the easiest thing to do is delete the google-cloud-sdk directory entirely and reinstall in the location you want. The installer will re-update your ~/.bash_profile with the new location.
Here is the magic script; just change the PREV_DIR and NEW_DIR variables:
PREV_DIR=/Users/some_user/Downloads/google-cloud-sdk
NEW_DIR=/Users/some_user/google-cloud-sdk
function z() {
    if test -f "$1"; then
        sed -i "" -e "s#$PREV_DIR#$NEW_DIR#g" "$1"
    fi
}
z ~/.zshrc
z ~/.zprofile
z ~/.bashrc
z ~/.bash_profile
z ~/.kube/config
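Note that sed -i "" is the BSD/macOS spelling; GNU sed on Linux takes -i without a separate empty argument. A portable variant of the same substitution writes to a temp file and moves it back (the paths below are the example values from the script above):

```shell
set -e
PREV_DIR=/Users/some_user/Downloads/google-cloud-sdk
NEW_DIR=/Users/some_user/google-cloud-sdk

f=$(mktemp)
echo "export PATH=$PREV_DIR/bin:\$PATH" > "$f"

# portable in-place edit: redirect to a temp file, then move it back
sed -e "s#$PREV_DIR#$NEW_DIR#g" "$f" > "$f.tmp" && mv "$f.tmp" "$f"
cat "$f"   # the PATH entry now points at the new location
```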
For zsh, just move the folder to the desired location and update .zshrc: check for the following lines and set the new path:
# The next line updates PATH for the Google Cloud SDK.
... path.zsh.inc ...
# The next line enables shell command completion for gcloud.
... completion.zsh.inc ...
I'd like to be able to run Sublime Text from a mounted drive, but have it keep its settings on the mounted drive too, instead of looking in the local user's ~/Library. Is there a way to remap the settings to another folder or make Sublime portable in this manner?
You can do this with a symbolic link in the user's ~/Library/Application Support directory. First, copy the ~/Library/Application Support/Sublime Text 2 folder to the mounted drive (for example, /Volumes/MyDrive/Settings/Sublime Text 2). Then, run the following commands from Terminal.app or your favorite replacement (the $ is just the command prompt, don't type it):
$ cd /Users/UserName/Library/Application\ Support
$ rm -rf Sublime\ Text\ 2
$ ln -s /Volumes/MyDrive/Settings/Sublime\ Text\ 2 Sublime\ Text\ 2
and you should be all set. The first command changes to the right directory (obviously, replace UserName with your user name...), the second deletes the original folder (make sure you've copied it before you run this!), and the third creates a symbolic link to the new folder where the old one was before.
This should work for any type of mounted drive, including USB sticks, network shares, and external hard drives, as well as Dropbox, Google Drive, and similar services.
On Windows, just move your settings from %APPDATA% into a Data folder inside your Sublime Text folder (the one containing sublime_text.exe). This method works for both ST2 and ST3.
http://docs.sublimetext.info/en/latest/basic_concepts.html#the-data-directory