I have a desktop running Garuda Linux and a Steam Deck.
The desktop has a USB hard drive enclosure set up as a RAID 5 array, with this share:
`[raid_share]
comment = raid share
path = /mnt/md0/share/
public = yes
only guest = yes
writable = yes
read only = no
write list = guest
printable = no`
Below is what I use to mount the share manually. It mounts, but as read only. The same share is read/write on Windows 11.
Owner and others are both listed as having rwx permissions, and I even tried chmod 777 on the folder I want write access to on the Steam Deck.
`sudo mount -t cifs -o rw,guest //192.168.68.51/raid_share /home/deck/media/terramaster`
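For reference, a variant of that command with explicit ownership and mode options would look roughly like this; the uid/gid of 1000 for the default deck user and SMB version 3.0 are assumptions on my part, not something I've confirmed:
`sudo mount -t cifs -o rw,guest,uid=1000,gid=1000,file_mode=0664,dir_mode=0775,vers=3.0 //192.168.68.51/raid_share /home/deck/media/terramaster`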
I've done all the googling I can think of to figure out what I'm doing wrong. I had a NAS before and that worked, but I ruined my SteamOS partition, had to reimage it, and now I'm trying to get everything working again.
If someone is able to help, I'd prefer not to be given the fstab entry, as I'd like to figure that part out on my own with whatever help I get.
I'm working on a project that I have stored on my Google Drive mount on Windows, and I would like to use Linux for portions of that project. The Windows Subsystem for Linux has served me well for most of my projects, but I've never had the need to mount a network drive. While it's not imperative that I use my Google Drive mount for this project (I could easily place it in my /downloads or /documents folder), I was curious as to how I could access my Google Drive from WSL.
I attempted to create a new mount via:
sudo mkdir /mnt/googledrive
This successfully created the directory, and then I used the command:
sudo mount -t drvfs G: /mnt/googledrive
This too seemed to be successful.
I was able to cd to the /mnt/googledrive directory, but I couldn't access any of my files (it reported the '.' location was unavailable).
Perhaps I've simply misunderstood what I was doing?
Any help would be greatly appreciated!
I found a workaround: instead of the "Google Drive" application, use "Backup and Sync" for individuals (https://www.google.com/drive/download/).
Basically it does the same thing for me, just in a different way. Backup and Sync lets you back up your drive to Google, but also sync your Google Drive locally.
By choosing to sync your drive locally (you can even select just some folders), the files are synced to the C: drive under your user profile, at the same level as your "My Documents" folder.
That way, you can access your files from Linux through the working /mnt/c/... path.
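For example, from WSL that ends up looking something like this (the exact folder name depends on your setup; "Google Drive" directly under the user profile is an assumption):
$ cd "/mnt/c/Users/<username>/Google Drive"
$ ls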
If this answer is too late for you, it might still be in time for others ;-)
Minishift downloads OpenShift into the C:\Users\[user]\.minishift\machines folder. How do I change this location to, say, D:\My VMs\? minishift config set is not very helpful in explaining which setting controls what.
Minishift version: v1.15.1
Platform: Windows
Driver: Hyper-V
Any help would be greatly appreciated.
It looks like the machines directory can't be set directly through config. It is set relative to a base directory in instance_dirs.go.
That base directory, by default, is the .minishift directory in the home directory of the user, e.g. C:\Users\[user]\.minishift on Windows, but this can be overridden by setting the environment variable MINISHIFT_HOME.
The base directory could also be a profile directory, if you are not using the default profile (the default being minishift).
$ minishift profile list
- minishift Stopped
$ minishift profile set myprofile
Profile 'myprofile' set as active profile.
The machines directory for myprofile would then be created under $MINISHIFT_HOME/profiles/myprofile/machines, e.g. on Windows C:\Users\[user]\.minishift\profiles\myprofile\machines.
So you can set MINISHIFT_HOME and move the whole contents of the .minishift directory, including machines, somewhere else, but it doesn't look like you can move just machines on its own.
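A rough sketch of that approach on Windows, from an elevated command prompt (the D: path is just an example, not a required location):
setx MINISHIFT_HOME "D:\minishift-home"
move "C:\Users\[user]\.minishift" "D:\minishift-home"
Then open a new command prompt so the variable is picked up before running minishift again.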
Perhaps you could solve this at the OS level by creating a symlink between C:\Users\[user]\.minishift\machines and D:\My VMs\.
In case it helps others (and so they don't need to test the different kinds of symlinks themselves), and to expand on #codemonkey's great answer, this is what I did to use a symlink, as my C drive had no available space. I'm also using Hyper-V as the driver.
Note: I do have minishift.exe installed in the apps folder on my D drive
Note 2: I did have to run the command prompt in admin mode
From the C:\Users\[user]\.minishift folder I moved the "machines" folder to D:\Apps\minishift-1.32.0-windows-amd64\
I first tried a soft link, which didn't work. I then tried a hard link but was getting errors, so I used a "directory junction" link with the /J switch:
C:\WINDOWS\system32>mklink /J C:\Users\[user]\.minishift\machines D:\Apps\minishift-1.32.0-windows-amd64\machines
You should get the following result:
Junction created for C:\Users\[user]\.minishift\machines <<===>> D:\Apps\minishift-1.32.0-windows-amd64\machines
Then, if necessary, run minishift delete --clear-cache. WARNING: this will delete any previous images and hosts you might have!
Then start minishift as normal with minishift start
Grab a cup of coffee, or go smoke a cigarette or vape, as it will take a while for the OpenShift server to start.
Hope this answer might help others who face a similar issue.
I am a little confused about startup scripts and the command line options. I am building a small Raspberry Pi based server for my Node applications. In order to provide maximum protection against power failures and flash write corruption, the root file system is read only, and that includes the home directory of my main user, where the production versions of my apps (two of them) are stored. Because the .pm2 directory there is no good for logs etc., I currently set the PM2_HOME environment variable to a place in /var (which has 512 KB of unused space around it as a buffer to keep writes to it safe). The ecosystem.json file also reads this environment variable to determine where to place its logs.
In case I need it, I also have a secondary user with a read/write home directory in another partition (also protected by buffer space around it). This contains development versions of my application code, which, for the convenience of setting up environments etc., I also want to monitor with PM2. If I need to investigate a problem, I can log in as that user and run and test the application there.
Since this is a headless box with watchdog and kernel-panic restarts built in, I want PM2 to start during boot and, at minimum, restart the two production apps. Ideally it should also start the two development versions of the app, but I can live without that if it's impossible.
I can switch the read only root partition to read/write - indeed it does so automatically when I ssh into my production user account. It switches back to read only automatically when I log out.
So I logged in to this account to try and create a startup script. It then said (unsurprisingly) that I had to run a sudo command like so:
sudo su -c "env PATH=$PATH:/usr/local/bin pm2 startup ubuntu -u pi --hp /home/pi"
The key issue for me here is the --hp switch. I went searching for some clue as to what it means. It's clearly a home directory, but it doesn't match PM2_HOME - which is set to /var/pas in my case to take it out of the read only area. I don't want it to spray my home directory with files that shouldn't be there. So I am asking for some guidance here.
I found out by experiment what it does with an "ubuntu" startup script: it uses it to set PM2_HOME in the script, by appending "/.pm2" to it.
However, there is nothing stopping you from editing the script once it has been created and setting PM2_HOME to whatever you want.
So effectively it's a helper for generating the script, but only that and nothing more special.
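So in the generated init script you should find a line roughly like the following (the exact wording varies by PM2 version; /var/pas is the custom location from the question):
export PM2_HOME="/home/pi/.pm2"
and you can simply change it to point at the writable area instead:
export PM2_HOME="/var/pas"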
I've been having some problems with my application not loading the views (sometimes).
I am running a Debian server with php-fpm and nginx (PHP 5.6.8 and nginx 1.8.0), both compiled from source. On top of that I am running Laravel 4.2.
So far I've had the problem in both Chrome and Firefox (Chrome simply stops loading and shows the error, Firefox does not show an error but renders an incomplete version of the view).
So far I've checked the permissions of both nginx and PHP, they both run as the same user (www-data:www-data).
My php-fpm socket is configured as:
[sitename]
listen = /var/run/php5-fpm/sitename.sock
listen.backlog = -1
listen.owner = www-data
listen.group = www-data
listen.mode=0660
; Unix user/group of processes
user = folderuser
group = www-data
; Choose how the process manager will control the number of child processes.
pm = dynamic
pm.max_children = 75
pm.start_servers = 10
pm.min_spare_servers = 5
pm.max_spare_servers = 20
pm.max_requests = 500
; Pass environment variables
env[HOSTNAME] = $HOSTNAME
env[PATH] = /usr/local/bin:/usr/bin:/bin
env[TMP] = /tmp
env[TMPDIR] = /tmp
env[TEMP] = /tmp
Note that I set user to folderuser because the folder where the files for the site are located is owned by folderuser (folderuser:www-data).
Furthermore, permissions inside laravel folders are configured as 755 (775 for cache and upload folders so that www-data can write cache files)
I have disabled any kind of serverside php cache (except for zend opcache).
I've also tried disabling "prefetch resources to load pages more quickly" feature in chrome, which did not solve the problem.
As a last resort I've tried this solution:
/*
|--------------------------------------------------------------------------
| Fix for Chrome / PHP 5.4 issue
| http://laravel.io/forum/02-08-2014-another-problem-only-with-chrome
|--------------------------------------------------------------------------
*/
App::after(function($request, $response)
{
    $content = $response->getContent();
    $contentLength = strlen($content);
    $response->header('Content-Length', $contentLength);
});
I also tried some variants of this script, but then I got Content-Length mismatches (more often than the net::ERR_INCOMPLETE_CHUNKED_ENCODING errors).
So to sum up: I've checked permissions and user/group settings server-side, disabled server-side caching (except for Zend OPcache), messed around with Chrome settings, and tried a script for Laravel, none of which solved the issue. Note that the issue happens at random intervals on random pages of the site.
I really do not know what the next step towards solving my problem would be as the solutions above are the only ones I've found on the internet.
I would really appreciate some help.
Edit: I have a beta version of the same application running on another server with the exact same configuration (the only difference is the hardware, which has more memory); the issue does not occur there.
Also, I forgot to mention that the application does not currently run over HTTPS. The beta version, however, does run over HTTPS.
Edit: The server where the issue is present has 2048 MB of RAM; the beta server has 8192 MB.
Edit: I inspected the response with Fiddler when the error occurred; the response is simply cut off at some point for no apparent reason.
You might want to check whether the folder /var/lib/nginx is owned by www-data too. I had this problem where, when the response page was too big, the nginx worker process tried to use this folder and failed, because it was owned by nginx while the worker process ran as www-data. Running chown -R www-data:www-data /var/lib/nginx fixed the problem.
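A quick way to check and fix that, assuming Debian's default paths (an nginx compiled from source may keep its temp files somewhere else):
ls -ld /var/lib/nginx
chown -R www-data:www-data /var/lib/nginx
When this is the cause, the nginx error log usually shows a "Permission denied" entry for a temp file under that directory.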
If anyone finds this in the future: my net::ERR_INCOMPLETE_CHUNKED_ENCODING errors were the result of having run out of disk space. Have a look at your disk usage and see if that's why!
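For example (generic checks, nothing specific to this setup):
df -h
df -i
The second command catches the case where you have run out of inodes rather than bytes, which shows up as the same "no space left" symptom.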
I've seen a similar problem on my nginx server running on the latest Debian. I'm running a WordPress site with Advanced Custom Fields installed. The Advanced Custom Fields documentation says the problem could potentially be with the max_input_vars value in the php.ini file. I increased my value from 1000 to 3000 and that fixed the issue on one of my sites.
You can check out this link to see if it might help you. http://www.advancedcustomfields.com/faq/limit-number-fields/
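If it helps, the change is just this line in php.ini, followed by a php-fpm reload (3000 is simply the value that worked for me, not a magic number):
max_input_vars = 3000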
I've built a nice browsing window which shows all of the PDF files on my (or any user's) Google Drive for management purposes.
What I'm looking to do is simple: I want to take a PDF file from my Google Drive (I have all the info related to this file - "downloadUrl", "webContentLink", etc.) and just copy it to my remote server.
Any thoughts?
I guess I'm pretty late here, but this may help other people too.
You could try using Grive. Here's a straightforward tutorial: http://xmodulo.com/2013/05/how-to-sync-google-drive-from-the-command-line-on-linux.html
Even if you don't have root access on the server, you can simply build from source, and:
$ mkdir ~/google_drive
$ cd ~/google_drive
$ grive -a
You'll receive an auth URL which you need to paste into your browser; click "Allow Access" and you're done. Go to the google_drive dir on your server and run grive to sync between your local dir and your Google Drive.
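After that one-time auth, subsequent syncs are just:
$ cd ~/google_drive
$ grive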