Here's the situation:
We use Google Apps for Business. We have one Google Drive folder -- "Folder A" -- that contains about 30 sub-folders, each of which contains hundreds of files and folders. You can assume that I am the owner of all files and folders on Google Drive, and I am also the Google Apps superadmin. Folder A has a very well-thought-out structure, with as many as eight levels of folders in the hierarchy.

We need to share Folder A with 40 different computers -- folder structure, files, everything. These 40 computers are display terminals, so each is used by dozens of people every day. It's crucial for us that all 40 computers have exactly the same folder structure: people frequently have to move from one display computer to another and give a presentation in which every second matters, so we can't have them spend 5 to 10 minutes each time figuring out the folder structure of the computer they are standing at. For business reasons, and to avoid delays, we can't have people sign in using their individual Google accounts.
Here's what I did:
- created a new account ("display@domain.com")
- shared Folder A with display@domain.com (at the "can view" permission level)
- on all 40 computers, logged in to display@domain.com's Google Drive and synced everything
My problem is:
For some reason, Google Drive allows users to move, delete, or do pretty much whatever they want to folders and files -- even if they have only "can view" access. Yes, this doesn't affect the original shared folder or file, but it is still a huge problem, because:
- If any random user goes to any of the 40 computers and accidentally deletes or moves a file, this affects the other 39 computers as well (because Google Drive syncs across all 40 computers).
- Even if I share Folder A ("can view" access only) with 40 different new accounts (display1@domain.com, display2@domain.com, ...), a user can still mess up the folder structure by going to -- say -- computer 17 and moving or deleting folders. Everyone who uses computer 17 from that point onwards will struggle, because the folder structure has been tampered with. Yes, the original Folder A, owned by me, will still be in perfect condition, so there is no data loss. But I have no way of knowing that computer 17's folder structure has been messed up. So to make sure that every computer has the correct folder structure, matching my original Folder A, I would need to manually go to each of the 40 computers every day and check or re-sync Google Drive. That would be crazy!
So ideally we need some way to make Folder A read-only, i.e., users can access the content but can't tamper with the overall folder structure or delete files. We're open to creative solutions and happy to do as much work as required, as long as it's one-time work.
Your problem is that the Drive Sync app is bi-directional. If I understand you correctly, you want uni-directional sync. My recommendation would be to replace Drive Sync with your own app that implements the behaviour you're looking for.
I'm responding very late, but thought I'd share what I found (for future users with a similar problem).
Short version: there is no solution here. Google Drive will allow users to tamper with folder structures, even if they've been given only view access. Philosophically, Google probably wants each user to create his/her own folder structure.
Creating our own Google Drive lookalike, as pinoyyid suggested, wasn't really an option for business reasons (we're completely entrenched in the Google ecosystem, so it makes sense to stick to Drive). So what I ended up doing is looking through the change activity in Google Drive (online, on my computer) on a daily basis, keeping an eye out for any changes to the folder structure. I then:
- undo that change
- approach the person who made the change and point out what went wrong
This takes about 15 minutes per day.
I will also eventually get around to automating this (using Apps Script, I guess), but that's for later.
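For the record, here's a rough sketch of what that automation could look like. I've written it with the Drive Activity API from Python rather than Apps Script; FOLDER_A_ID and the authorized creds object are placeholders/assumptions, not anything Google prescribes:

```python
# A rough sketch, assuming the Drive Activity API v2 Python client
# (google-api-python-client) and an already-authorized `creds` object.
# FOLDER_A_ID is a placeholder for Folder A's file ID; the filter keeps
# only structure-affecting events (moves, deletes, renames).
from googleapiclient.discovery import build

FOLDER_A_ID = "folder-a-file-id"  # hypothetical ID

activity = build("driveactivity", "v2", credentials=creds)

resp = activity.activity().query(body={
    "ancestorName": f"items/{FOLDER_A_ID}",
    "filter": "detail.action_detail_case:(MOVE DELETE RENAME)",
}).execute()

# Review these few events instead of scanning the activity pane by hand.
for event in resp.get("activities", []):
    action = event.get("primaryActionDetail", {})
    titles = [t.get("driveItem", {}).get("title") for t in event.get("targets", [])]
    print(action, titles)
```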
Thank you to all those who thought about the problem. Hopefully, Google will allow for a variety of use-cases at a later time.
I know this is an old question, but in case anyone else comes here looking for a solution, try using a different Google Drive client. I've tried WebDrive and RaiDrive, and both offer the ability to sync Google Drive to a virtual drive and set that drive to read-only mode in the settings.
How about changing the user permissions on the local filesystems of all 40 computers to "read only"? That should achieve the desired result.
Related
I have a database file on my own Google Drive (private), and it's updated on a daily basis.
I wrote a Python script to track the data in the database file; however, I currently need to download the DB file to my PC and then run my script locally, every single day.
I am wondering if there are any better options, so I wouldn't have to download the DB file and move it around manually.
From a bit of searching on the web, I found that there is no way to run the script in the Google Drive folder itself (obviously, due to security issues), and using Google Cloud Platform is not a real option since it's not a free service (and, as I understood it, there is no free trial).
Anyway, any method that would make my life easier would be happily accepted.
Sorry
That's not possible AFAIK. At least in the way you have asked the question.
It may be that you are looking for a database hosting service, which, as a rule, is not free. I remember seeing a SQL viewer around; I don't know if it is still available, and I don't think it was accessible via a local Python script.
Google Cloud Platform, like other services, does however offer a free tier -- though whether it covers you depends on how much use you need and on your location. This can get quite involved quite quickly, and there are various options to choose from. For example, BigQuery may be something you are interested in.
A Possible Workaround
I assume that the reason you don't want to download it is that it is tedious, not that your network connection or hard drive can't handle it. If so:
The simple workaround may just be to have your Python script automatically download the database via the [Google Drive API](https://developers.google.com/drive), modify or read it, and then upload it again if you need to. This can all be done within one script using the API, so all you would need to do is run the Python script -- no manually downloading it and moving it into a folder. You would also not need to worry about running over a free tier and getting billed for it.
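Something like this minimal sketch, assuming the Drive API v3 Python client and an already-authorized creds object (FILE_ID and the local file name are placeholders for your own values):

```python
# A minimal sketch: download the DB file, work on it locally, upload it back.
# Assumes google-api-python-client and an authorized `creds` object;
# FILE_ID and LOCAL_PATH are placeholders.
import io

from googleapiclient.discovery import build
from googleapiclient.http import MediaFileUpload, MediaIoBaseDownload

FILE_ID = "your-database-file-id"
LOCAL_PATH = "database.db"

drive = build("drive", "v3", credentials=creds)

# Download the current copy of the database file.
request = drive.files().get_media(fileId=FILE_ID)
with io.FileIO(LOCAL_PATH, "wb") as fh:
    downloader = MediaIoBaseDownload(fh, request)
    done = False
    while not done:
        _, done = downloader.next_chunk()

# ... read or modify database.db locally here ...

# Upload the (possibly modified) file back over the same Drive file.
media = MediaFileUpload(LOCAL_PATH, resumable=True)
drive.files().update(fileId=FILE_ID, media_body=media).execute()
```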
You could even use the Google Drive Desktop software to keep the database file synced locally. Since it is local, I doubt the software would be able to block a Python script on the file system.
If you don't even want to think about it, you could set up the Python script with a cron job, or a scheduled task if you are on Windows.
At our company we have a Google Spreadsheet which is shared by link with different employees. This spreadsheet is saved on a Google Drive to which only I have access. The link is configured so that anyone with the link can edit the spreadsheet, since all employees need to be able to make changes to the file.
Although this is very useful, it also presents a risk of data loss. If a user were to (accidentally) delete or alter the wrong data and save the file, that data would be permanently lost.
To prevent this, I was wondering if it is possible to automatically have a backup created, say every day. Ideally, this backup would be saved in the same Google Drive. I know I could install the desktop client and have the file backed up by our daily company backup, but it seems a bit ridiculous to install it for just one file. I'm sure there has to be another solution, e.g. with scripts.
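One scripted approach would be to copy the file once a day. A minimal sketch, assuming the Drive API v3 Python client, an authorized creds object, and a daily scheduled run (SPREADSHEET_ID and BACKUP_FOLDER_ID are placeholders):

```python
# A minimal daily-backup sketch: copy the spreadsheet into a backup folder,
# stamped with today's date. Assumes google-api-python-client and an
# authorized `creds` object; the two IDs below are placeholders.
import datetime

from googleapiclient.discovery import build

SPREADSHEET_ID = "your-spreadsheet-id"
BACKUP_FOLDER_ID = "your-backup-folder-id"  # a folder in the same Drive

drive = build("drive", "v3", credentials=creds)

stamp = datetime.date.today().isoformat()
drive.files().copy(
    fileId=SPREADSHEET_ID,
    body={
        "name": f"spreadsheet-backup-{stamp}",
        "parents": [BACKUP_FOLDER_ID],
    },
).execute()
```

Run it from a cron job or a Windows scheduled task once per day.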
I followed the advice of St3ph and tried revision history. Not exactly what I meant, but an acceptable solution nonetheless.
Is there any maximum number of files that can reside in a Google Drive folder? Are there performance hits when a lot of files (for instance, a million of them) stay in the same folder?
From what I understand (mostly from reading how the API works), Google Drive has no real concept of a "folder". Folders are just represented by a specific kind of file, and folder membership is just described in a file's metadata; the files themselves are one long unstructured list of blobs with metadata. This would suggest that having a big number of files in the same folder should not be a big problem.
But I would like to have more expert opinions on the matter.
(Of course, folders with a lot of files are going to hurt if I synchronize with my disk; but I am just going to query them with the API.)
EDIT: I am not going to use the web UI. The only queries I am going to perform are to post a file into this giant folder and to retrieve a file given its name. Basically, this means I am using the folder as a hash table. So I guess the actual question is: if I make a query like
'big_folder_id' in parents and title = 'some_key'
(assuming that there is just one file named some_key in the folder), will the performance impact of the folder identified by big_folder_id containing a lot of files be bearable?
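In Python with the v3 client, that lookup would look roughly like this (note that v3 renames title to name; creds is an already-authorized credentials object and BIG_FOLDER_ID is a placeholder):

```python
# A minimal sketch of the hash-table-style lookup, assuming the Drive API v3
# Python client and an authorized `creds` object. v3 queries use `name`
# where v2 used `title`; BIG_FOLDER_ID is a placeholder for the folder ID.
from googleapiclient.discovery import build

BIG_FOLDER_ID = "big_folder_id"

drive = build("drive", "v3", credentials=creds)

resp = drive.files().list(
    q=f"'{BIG_FOLDER_ID}' in parents and name = 'some_key' and trashed = false",
    fields="files(id, name)",
    pageSize=1,
).execute()

files = resp.get("files", [])
file_id = files[0]["id"] if files else None  # None if some_key isn't there
```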
The performance hit will be on the UI side. Scrolling to the bottom of a long file list will take a very long time. Also, in a folder's web view (i.e., if you share it with 'anyone with a link can view' permission), only the first 500 files will be displayed, with no way to see the rest of the files.
From an API access perspective, it depends on what you are doing with the API. For example, if you try to get a list of all the files in a folder containing a lot of them, you will likely run into the script execution timeout (6 minutes max).
I think Google recently started limiting this. They now have a 500k item limit per folder (root folder is exempt from this limit): https://developers.google.com/drive/api/v3/handle-errors#resolve_a_403_error_number_of_items_in_folder
I designed my system thinking there was no limit, and my logs indicate they started enforcing it on my account at 2020-06-15T17:13:37.020232715Z. At the time I had reached 3,232,458 files in a single folder. The limit is 500k, so this is further evidence that the quota was added retroactively and that enforcement was started without warning, which brought my system down.
More proof is that this error code (numChildrenInNonRootLimitExceeded) started appearing in that document somewhere between 2020-04-12 and 2020-06-11:
- https://web.archive.org/web/20200412153122/https://developers.google.com/drive/api/v3/handle-errors => not present
- https://web.archive.org/web/20200611105741/https://developers.google.com/drive/api/v3/handle-errors => present
Also, a web search for that error code turns up very few links. The only non-Google result I found is dated 2020-06-11: https://scrapbox.io/ci7lus/Error:_The_limit_for_this_folder's_number_of_children_(files_and_folders)_has_been_exceeded.#5eeb086bae0f140000d5c509
We are creating a web application using MySQL as our database. Is there a way for some files from our application's hosting site to be synced to the user's Google Drive?
There's no direct way of synchronizing your local data with Drive.
However, you can pseudo-sync with little load on your server by using Changes. Basically, you can get the list of file changes since a point in time that you specify. If I were you, I would make a cron job that checks for file changes from Drive and calls Files.get for the specific files that have changed.
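A minimal sketch of that polling loop, assuming the Drive API v3 Python client and an authorized creds object (the local token file is just one way to persist state between cron runs):

```python
# A minimal changes-polling sketch, assuming google-api-python-client and an
# authorized `creds` object. TOKEN_FILE is a hypothetical local state file
# that keeps the Drive page token between cron runs.
import json
import os

from googleapiclient.discovery import build

TOKEN_FILE = "drive_page_token.json"

drive = build("drive", "v3", credentials=creds)

# Load the saved token, or start fresh from "now".
if os.path.exists(TOKEN_FILE):
    with open(TOKEN_FILE) as f:
        token = json.load(f)["token"]
else:
    token = drive.changes().getStartPageToken().execute()["startPageToken"]

# Walk the changes feed since the last run.
while token:
    resp = drive.changes().list(pageToken=token, fields="*").execute()
    for change in resp.get("changes", []):
        # Here you would files().get the changed file and update your copy.
        print(change["fileId"], change.get("file", {}).get("name"))
    if "newStartPageToken" in resp:
        # Caught up: remember where to start next time.
        with open(TOKEN_FILE, "w") as f:
            json.dump({"token": resp["newStartPageToken"]}, f)
        break
    token = resp.get("nextPageToken")
```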
Besides what JunYoung Gwak suggests (asking Google for changes by polling), you will also have to keep a last-edited date in your app, because there might be cases where the local file is newer. And you will have to keep the same time zone as Google to make it work for changes less than 24 hours apart.
So both sides will need a "give me the changes since this date-time in this time zone" operation, and a way to take a file from one place and update it in the other.
You might have conflicts that need to be resolved with a diff tool.
David Fenton recently mentioned in another thread that
"The only proper place for any Access app (since Windows 2000, in fact) is the folder the %AppData% environment variable points to."
I greatly respect David's knowledge, especially in all matters relating to Access, but I'm confused by this statement.
What is the advantage of following this advice, especially in an environment where you are going to have multiple people using the same computer to access your app?
Won't installing to this folder only install the app for one user? And if this is true, won't installing your app multiple times leave multiple, separate copies of your app on the machine? Hard drive space is cheap these days, but I still don't want a front end file and other supporting files (graphics, Word and Excel templates, etc.) copied multiple times onto a machine when one copy will do.
What are your thoughts? Am I missing something key to understanding David's advice?
Yes, this is an issue, but the only way around it is, assuming the IT admins allow it, to create a folder in the root of the C: drive and install the Access FE database file in that folder. That said, I'd still use the Application Data folder even if files are duplicated. As you state, hard drive space is cheap.
This assumes you don't mean a Terminal Server/Citrix system where users are simultaneously logged into the system.
First off, this is an issue only for a workstation that has multiple users logging on to it. That's pretty uncommon, isn't it?
Second, you admit there's no issue with disk space, so the only real issue is keeping the front end up to date, and that issue is completely orthogonal to the question of where the front end is stored.
That issue can be addressed by using any of a number of solutions that automatically copy a new version of the front end when the user opens it (if needed). Tony Toews's Auto FE Updater is the best solution I know of. It's quite versatile and easy to use, and Tony's constantly improving it.
So, in short, I don't think there's any issue here at all.
If everything is always the same for every user on a given machine, then multiple copies of a file may seem pointless. But when that one exception occurs, you've painted yourself into a corner -- they may need a different template version, for example.
You seem to be in a rare situation for an Access developer.
You're running into a bit of an issue here, because you're thinking about the environment variable named %appdata%. That variable stores the directory returned by SHGetSpecialFolderPath(CSIDL_APPDATA).
What you're looking for is the directory returned by SHGetSpecialFolderPath(CSIDL_COMMON_APPDATA). There's no environment variable for that directory. This directory is (as the name indicates) common to all users.
The advantage of David's method is that the Access data is protected by NTFS access rights when it's in CSIDL_APPDATA: a user can only delete his own copy. In CSIDL_COMMON_APPDATA, anyone can delete the single shared copy.
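To make the distinction concrete, here's a small Python sketch (the thread is about Access, but the two Windows calls are the same from any language; it assumes Windows and uses the standard CSIDL constants):

```python
# A minimal sketch printing the per-user and all-users application data
# directories on Windows. The CSIDL values are the standard Windows
# constants; this is for illustration, not part of the thread's Access code.
import ctypes

CSIDL_APPDATA = 0x001A         # per-user Application Data (%appdata%)
CSIDL_COMMON_APPDATA = 0x0023  # all-users Application Data (no env var)
MAX_PATH = 260

def special_folder(csidl):
    buf = ctypes.create_unicode_buffer(MAX_PATH)
    ctypes.windll.shell32.SHGetFolderPathW(None, csidl, None, 0, buf)
    return buf.value

print(special_folder(CSIDL_APPDATA))         # e.g. C:\Users\<user>\AppData\Roaming
print(special_folder(CSIDL_COMMON_APPDATA))  # e.g. C:\ProgramData
```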
It's probably best to put this advice into perspective. The assumption being made here is that if your application is going to be used in multi-user mode (that means more than one user in the application at the same time), then it's pretty much assumed that your application is going to be split into two parts: the so-called application part (front end), and the data-file-only part, the so-called back end.
So, you have a FE and a BE.
In this environment, each individual user in your office will have their own copy of the application placed on their workstation. The BE (data file) is thus assumed to be placed in a shared folder on a server.
If you're not going to have multiple users running this application, or the application is not really under ongoing development, then you don't need to split it into two parts. However, if you do split your application, all of your users can safely work with it while you work on a copy of the next great version of the application. Without a split environment, you really can't have a workable development cycle.
It is a long-standing and honored suggestion that if you're going to use Access in a multi-user environment, each individual user must have a copy of the front-end application placed on their own computer. If you ignore this suggestion, the end result is instability in the general operation of your application.
I have an article here that explains this at a conceptual level; it doesn't just tell you to split your application, but explains why you should split it:
http://www.members.shaw.ca/AlbertKallal/Articles/split/index.htm