Corona SDK JSON usage

I have an operating JSON library which I use to load an array of tile IDs. When I double click main.lua directly from file explorer, it runs great, but when I open Corona Simulator and open my project from there or build my project and run it on my testing device, it gives me a null reference error when I attempt to use the data I loaded.
Here is the function to load a table from a JSON file:
function fileIO.loadJSONFile (fileName)
    local path = fileName
    local file = io.open (path, "r")
    print (file)
    if file then
        local contents = file:read ("*a")
        local loadingTable = json.decode (contents)
        io.close (file)
        return loadingTable
    end
    return nil
end
Here is the usage:
function wr:renderChunkFile (path)
    local data = fileIO.loadJSONFile (path)
    self:renderChunk (data)
end

function wr:renderChunk (data)
    local a, b = 1, 1
    if (self.img ~= nil) then
        a = #self.img + 1
        self.img[a] = {}
    else
        self.img[1] = {}
    end
    if (self.chunks ~= nil) then
        b = #self.chunks + 1
        self.chunks[b] = display.newGroup ()
    else
        self.chunks[1] = display.newGroup ()
    end
    for i = 1, #data do -- Y axis ERROR IS HERE
        self.img[a][i] = {}
        for j = 1, #data[i] do -- Z axis
            self.img[a][i][j] = {}
            for k = 1, #data[i][j] do -- X axis
                if (data[i + 1] ~= nil) then
                    if (data[i + 1][j][k] < self.transparentLimit) then
                        self.img[a][i][j][k] = display.newImage ("images/tiles/"..data[i][j][k]..".png", k*self.tileWidth, display.contentHeight - j*self.tileDepth - i*self.tileThickness)
                        self.chunks[b]:insert (self.img[a][i][j][k])
                    elseif (data[i + 1] == nil) then
                        self.img[a][i][j][k] = display.newImage ("images/tiles/"..data[i][j][k]..".png", k*self.tileWidth, display.contentHeight - j*self.tileDepth - i*self.tileThickness)
                        self.chunks[b]:insert (self.img[a][i][j][k])
                    end
                end
            end
        end
    end
end
When it gets to the line for i = 1, #data do it tells me it is trying to access the length of a nil field. Where did I go wrong here?
EDIT: I feel the need to give a more clear explanation of what my problem is. I am getting inconsistent results from this program. When I select main.lua in file explorer and open it with Corona Simulator, it works. When I open Corona Simulator and internally navigate to main.lua, it does not work. When I build the project and test it on my device, it does not work. What I really need is some insight into Corona's JSON library and APK internal directory structure requirements (directory nesting limits, naming restrictions, etc.). If someone thinks of something else that might cause the issue I am having, please bring it up! I am open to anything.

Without seeing the entire error message, and without knowing the value of "path", it's hard to speculate. But Corona SDK uses four base directories:
system.ResourceDirectory -- Same folder as main.lua and is read-only
system.DocumentsDirectory -- Your writable folder where your data lives
system.CachesDirectory -- for downloaded files
system.TemporaryDirectory -- for temp files.
While in the simulator, the last three live in the project's Sandbox folder; on a device, who knows where the folders really are.
In your case, if your JSON file is going to be included with your downloadable app, the .json file should be in the same folder as your main.lua (or a subfolder) and referenced via system.ResourceDirectory.
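For example, a minimal sketch of that lookup using system.pathForFile (the file name tiles.json is illustrative, not from the question):
local json = require ("json")

local function loadResourceJSON (fileName)
    -- resolve an absolute path inside the read-only resource directory;
    -- this returns nil on device if the file was not bundled with the app
    local path = system.pathForFile (fileName, system.ResourceDirectory)
    local file = path and io.open (path, "r")
    if not file then return nil end
    local contents = file:read ("*a")
    io.close (file)
    return json.decode (contents)
end

local chunk = loadResourceJSON ("tiles.json")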


Difficulties with web scraping

I just came across an article called The 500 Greatest Songs of All Time and thought "oh, that's cool, I bet they also made a Spotify/Apple Music list that I can follow". Well... they don't.
So, in a nutshell, I wonder if it's possible to 1) scrape the website to extract the songs and 2) then do some kind of bulk upload to Spotify to create the list.
Song titles and artists are structured like this on the website: [website screenshot]. I have already tried to scrape the page with the importxml() formula in Google Sheets, but with no success.
I understand the scraping part is easier than the other and, as I am new to programming, I would be happy to manage to partially achieve this goal. I am sure this task can be achieved easily in Python.
I feel like explaining everything would go beyond the scope here, so I tried to comment the code well enough.
1. Scrape the songs
I used Python 3 and Selenium; their website doesn't block that.
Be sure to adjust your chromedriver path, and the output path of the .txt file at the bottom if necessary. Once it's done and you have your .txt file, you can close the browser.
import time
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.chrome.service import Service

s = Service(r'/Users/main/Desktop/chromedriver')
driver = webdriver.Chrome(service=s)

# just setting some vars, I used XPath because I know it
top_500 = 'https://www.rollingstone.com/music/music-lists/best-songs-of-all-time-1224767/'
cookie_button_xpath = "//button[@id='onetrust-accept-btn-handler']"
div_containing_links_xpath = "//div[@id='pmc-gallery-list-nav-bar-render']//child::a"
song_names_xpath = "//article[@class='c-gallery-vertical-album']/child::h2"
links = []
songs = []

driver.get(top_500)

# accept cookies, give time to load
time.sleep(3)
cookie_btn = driver.find_element(By.XPATH, cookie_button_xpath)
cookie_btn.click()
time.sleep(1)

# extracting all the links since there are only 50 songs per page
links_to_next_pages = driver.find_elements(By.XPATH, div_containing_links_xpath)
for element in links_to_next_pages:
    links.append(element.get_attribute('href'))

# extracting the songs, then going to the next page and so on until we hit 500
counter = 1  # we're starting with 1 here since links[0] is the current page we are already on
while True:
    elements = driver.find_elements(By.XPATH, song_names_xpath)
    for element in elements:
        songs.append(element.text)
    if len(songs) == 500:
        break
    driver.get(links[counter])
    counter += 1
    time.sleep(2)

# verify that there are no duplicates; if there were, something would be off
if len(songs) != len(set(songs)):
    print('you f***** up')
else:
    print('seems fine')

with open('/Users/main/Desktop/output_songs.txt', 'w') as file:
    file.writelines(line + '\n' for line in songs)
2. Prepare Spotify
Go to the Spotify Developer Dashboard and create an account (use your Spotify account).
Then create an app, call it whatever you want.
On your app click settings and whitelist http://localhost:8888/callback
On your app click "users and access" and add your Spotify account
Leave the tab open, we'll come back to it
3. Prepare Your Environment
You need Node.js, so make sure it is installed on your machine
Download this from Spotify's GitHub
Unzip it, cd into the folder and run npm install
Go into the authorization_code folder and open app.js in an editor
Find var scope and append ' playlist-modify-public' to the string; this is so that your app can access your Spotify playlists, see here
Now go back to the app in your Spotify Developer Dashboard; we'll need to copy the Client ID and the Client Secret into var client_id and var client_secret respectively (in the app.js file). var redirect_uri will be
http://localhost:8888/callback - don't forget to save your changes.
4. Run the Spotify side of things
cd into the authorization_code folder and run app.js with node app.js (this is basically a server running on your PC)
If that works, leave it running and go to http://localhost:8888, and authorise your Spotify account there
Copy the full token there, including the overflow; use inspect element to get it
Adjust the user_id and auth variables, as well as the path to output_songs.txt (at with open), in the following Python script and run it. Songs which are not found will be printed out at the end; give them a search on Google. They are usually on Spotify as well, but Google seems to have the better search algorithm (surprised Pikachu face).
import requests
import re
import json

# this is NOT your display name, it's your user name!!
user_id = 'YOUR_USERNAME'
# paste your auth token from spotify; it can time out, then you have to get a new one,
# so don't panic if you get a bunch of responses in the 400s after some time
auth = {"Authorization": "Bearer YOUR_AUTH_KEY_FROM_LOCALHOST"}

playlist = []
err_log = []
base_url = 'https://api.spotify.com/v1'
search_method = '/search'

with open('/Users/main/Desktop/output_songs.txt', 'r') as file:
    songs = file.readlines()

# this queries spotify, does some magic and then appends each track's spotify URI to an array
def query_song_uris():
    for n, entry in enumerate(songs):
        # each scraped line looks like: Artist, 'Title'
        x = re.findall(r"'([^']*)'", entry)
        title = x[0]
        title_len = len(entry) - len(title) - 4
        artist = entry[:title_len]
        payload = {
            'q': entry,
            'track:': title,    # note: Spotify expects field filters inside 'q';
            'artist:': artist,  # these extra keys are simply ignored by the API
            'type': 'track',
            'limit': 1
        }
        url = base_url + search_method
        try:
            r = requests.get(url, params=payload, headers=auth)
            print('\nquerying spotify; ', r)
            dic = json.loads(r.content.decode('UTF-8'))
            track_uri = dic["tracks"]["items"][0]["uri"]
            playlist.append(track_uri)
            print(track_uri)
        except Exception:
            err = f'\nNr. {(len(songs) - n)}: ' + f'{entry}'
            err_log.append(err)
    playlist.reverse()

query_song_uris()

# creates a playlist and returns the playlist id
def create_playlist():
    payload = {
        "name": "Rolling Stone: Top 500 (All Time)",
        "description": "music for old men xD with occasional hip hop appearances. just kidding"
    }
    url = base_url + f'/users/{user_id}/playlists'
    r = requests.post(url, headers=auth, json=payload)
    dic = json.loads(r.content.decode('UTF-8'))
    print(f'\n\ncreating playlist #{dic["id"]}; ', r)
    return dic["id"]

# adds the collected URIs to the playlist, 100 at a time (the API maximum per request)
def add_to_playlist():
    playlist_id = create_playlist()
    while True:
        if len(playlist) > 100:
            p = playlist[:100]
        else:
            p = playlist
        payload = {"uris": p}
        url = base_url + f'/playlists/{playlist_id}/tracks'
        r = requests.post(url, headers=auth, json=payload)
        print(f'\nadding {len(p)} songs to playlist; ', r)
        del playlist[:len(p)]
        if len(playlist) == 0:
            break

add_to_playlist()
print('\n\ncheck your spotify :)')
print("\n\n\nthese tracks didn't make it, check manually:\n")
for line in err_log:
    print(line)
print('\n\n')
Done
If you don't want to run the code yourself, here's the playlist:
https://open.spotify.com/playlist/5fdLKYNFlA4XSvhEl36KXS
If you have trouble, everything from step 2 onwards is also described in the Web API quick start, or in general in the Web API docs.
Regarding Apple Music
Apple seems very closed up (surprise, haha). What I found, though, is that you can query the iTunes Store, and the response also contains a direct link to the song(s) on Apple Music.
You might be able to go from there.
Get ISRC code from iTunes Search API (Apple music)
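For illustration, a minimal sketch of such a lookup against the public iTunes Search API (no auth required; the search term is made up):
import requests

r = requests.get('https://itunes.apple.com/search',
                 params={'term': 'Like a Rolling Stone Bob Dylan',  # illustrative term
                         'entity': 'song', 'limit': 1})
results = r.json().get('results', [])
if results:
    # trackViewUrl links to the track in the iTunes / Apple Music catalogue
    print(results[0]['trackName'], '->', results[0]['trackViewUrl'])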
PS: undeniably regex is witchcraft, but y'all here got my back

Python Full Web Parsing

As of right now I'm attempting to make a simple music player app that streams music or video directly from a YouTube URL, and in order to do that I need the full download of the search page that's used to search for videos to stream. But I'm having some problems with urlopen in Python 3, which is what I'm using to make the command-line application. It won't load the ytd-app tag on YouTube, which is where a good deal of the video and playlist references are put when you first load the search. Does anyone know what's going on, or know some type of workaround for it? Thanks!
My code so far:
# query and filtercriteria are assumed to be defined elsewhere in the app
from urllib.request import urlopen
from bs4 import BeautifulSoup as BS

BASICURL = "https://www.youtube.com/results?"
query = query.split()
ret = ""
stufffound = {}
for x in query:
    ret = ret + x + "+"
ret = ret[:len(ret) - 1]

# URL BUILDER
if filtercriteria:
    URL = BASICURL + "sp={0}".format(filtercriteria) + "&search_query={0}".format(ret)
else:
    URL = BASICURL + "search_query={0}".format(ret)
query = urlopen(str(URL))
passdict = {}

def findvideosonpage(query, dictToAddTo):
    for x in BS(urlopen(query).read()).findAll(attrs={'class': 'yt-simple-endpoint style-scope ytd-video-renderer'}):
        dictToAddTo[query.index(x)] = x['href']
        print(x)
    # Dictionary is meant to be converted into a list later to order the results
    return list([x for _, x in sorted(zip(dictToAddTo.values(), dictToAddTo.keys()))])

Search files recursively using google drive rest

I am trying to grab all the files created under a parent directory. The parent directory has a lot of subdirectories, each containing files:
parent
└── sub folder1
    ├── file1
    └── file2
Currently I am grabbing all the IDs of the sub folders and constructing a query such as q: 'subfolder1id' in parents or 'subfolder2id' in parents to find the list of files. Then I issue these in batches: if I have 100 folders, I issue 10 search queries with a batch size of 10.
Is there a better way of querying the files using google drive rest api that will get me all the files with one query?
Here is an answer to your question.
Consider the same idea applied to your scenario:
folderA
├── folderA1
│   └── folderA1a
└── folderA2
    ├── folderA2a
    └── folderA2b
There are three alternative approaches you can draw an idea from.
Alternative 1. Recursion
The temptation would be to list the children of folderA, and for any children that are folders, recursively list their children, rinse, repeat. In a very small number of cases this might be the best approach, but for most it has the following problem: it is woefully time consuming to do a server round trip for each sub folder. This does of course depend on the size of your tree, so if you can guarantee that your tree size is small, it could be OK.
Alternative 2. The common parent
This works best if all of the files are being created by your app (i.e. you are using drive.file scope). As well as the folder hierarchy above, create a dummy parent folder called, say, "MyAppCommonParent". As you create each file as a child of its particular Folder, you also make it a child of MyAppCommonParent. This becomes a lot more intuitive if you remember to think of Folders as labels. You can now easily retrieve all descendants by simply querying MyAppCommonParent in parents.
Alternative 3. Folders first
Start by getting all folders. Yep, all of them. Once you have them all in memory, you can crawl through their parents properties and build your tree structure and list of Folder IDs. You can then do a single files.list?q='folderA' in parents or 'folderA1' in parents or 'folderA1a' in parents.... Using this technique you can get everything in two HTTP calls.
Alternative 2 is the most efficient, but only works if you have control of file creation. Alternative 3 is generally more efficient than Alternative 1, but there may be certain small tree sizes where 1 is best.
from googleapiclient.discovery import build
from oauth2client.service_account import ServiceAccountCredentials

scope = ['https://www.googleapis.com/auth/drive']
credentials = ServiceAccountCredentials.from_json_keyfile_name('your-credentials.json', scope)
service = build('drive', 'v3', credentials=credentials)

folder_id = 'ID OF THE FOLDER YOU WANT TO START YOUR SEARCH'
folder_tree = 'NAME OF THE FOLDER YOU WANT TO START YOUR SEARCH'
folder_ids = {folder_tree: folder_id}

def check_for_subfolders(folder_id):
    new_sub_patterns = {}
    folders = service.files().list(
        q="mimeType = 'application/vnd.google-apps.folder' and '" + folder_id + "' in parents and trashed = false",
        fields="nextPageToken, files(id, name)", pageSize=400).execute()
    all_folders = folders.get('files', [])
    old_folder_tree = folder_tree
    if len(all_folders) != 0:
        for folder in all_folders:
            folder_name = folder['name']
            subfolder_pattern = old_folder_tree + '/' + folder_name
            new_sub_patterns[subfolder_pattern] = folder['id']
            print('New Pattern:', subfolder_pattern)
            all_files = check_for_files(folder['id'])
            if len(all_files) != 0:
                for file in all_files:
                    file_name = file['name']
                    new_file_tree_pattern = subfolder_pattern + "/" + file_name
                    new_sub_patterns[new_file_tree_pattern] = file['id']
                    print("Files added :", file_name)
            else:
                print('No Files Found')
    else:
        # leaf folder: just collect its files
        all_files = check_for_files(folder_id)
        if len(all_files) != 0:
            for file in all_files:
                file_name = file['name']
                new_file_tree_pattern = old_folder_tree + "/" + file_name
                new_sub_patterns[new_file_tree_pattern] = file['id']
                print("Files added :", file_name)
    return new_sub_patterns

def check_for_files(folder_id):
    other_files = service.files().list(
        q="mimeType != 'application/vnd.google-apps.folder' and '" + folder_id + "' in parents and trashed = false",
        fields="nextPageToken, files(id, name)", pageSize=400).execute()
    return other_files.get('files', [])

def get_folder_tree(folder_id):
    global folder_tree
    sub_folders = check_for_subfolders(folder_id)
    for i, sub_folder_id in enumerate(sub_folders.values()):
        folder_tree = list(sub_folders.keys())[i]
        print('Current Folder Tree : ', folder_tree)
        folder_ids.update(sub_folders)
        print('****************************************Recursive Search Begins**********************************************')
        try:
            get_folder_tree(sub_folder_id)
        except Exception:
            print('---------------------------------No furtherance----------------------------------------------')
    return folder_ids

folder_ids = get_folder_tree(folder_id)

How do I search sub-folders and sub-sub-folders in Google Drive?

This is a commonly asked question.
The scenario is:
folderA
├── folderA1
│   └── folderA1a
└── folderA2
    ├── folderA2a
    └── folderA2b
... and the question is how do I list all the files in all of the folders under the root folderA.
EDIT April 2020: Google has announced that multi-parent files are being disabled from September 2020. This alters the narrative below and means Alternative 2 is no longer an option as described. It might be possible to implement Alternative 2 using shortcuts. I will update this answer further as I test the new restrictions/features.
We are all used to the idea of folders (aka directories) in Windows/nix etc. In the real world, a folder is a container, into which documents are placed. It is also possible to place smaller folders inside bigger folders. Thus the big folder can be thought of as containing all of the documents inside its smaller children folders.
However, in Google Drive, a Folder is NOT a container; so much so that in the first release of Google Drive they weren't even called Folders, they were called Collections. A Folder is simply a File with (a) no contents and (b) a special mime-type (application/vnd.google-apps.folder). The way Folders are used is exactly the same way that tags (aka labels) are used.
The best way to understand this is to consider GMail. If you look at the top of an open mail item, you see two icons: a folder with the tooltip "Move to" and a label with the tooltip "Labels". Click on either of these and the same dialogue box appears, and it is all about labels. Your labels are listed down the left hand side in a tree display that looks a lot like folders. Importantly, a mail item can have multiple labels, or you could say a mail item can be in multiple folders. Google Drive's Folders work in exactly the same way that GMail labels work.
Having established that a Folder is simply a label, there is nothing stopping you from organising your labels in a hierarchy that resembles a folder tree, in fact this is the most common way of doing so.
It should now be clear that a file (let's call it MyFile) in folderA2b is NOT a child or grandchild of folderA. It is simply a file with a label (confusingly called a Parent) of "folderA2b".
OK, so how DO I get all the files "under" folderA?
Alternative 1. Recursion
The temptation would be to list the children of folderA, and for any children that are folders, recursively list their children, rinse, repeat. In a very small number of cases this might be the best approach, but for most it has the following problem: it is woefully time consuming to do a server round trip for each sub folder. This does of course depend on the size of your tree, so if you can guarantee that your tree size is small, it could be OK.
Alternative 2. The common parent
This works best if all of the files are being created by your app (i.e. you are using drive.file scope). As well as the folder hierarchy above, create a dummy parent folder called, say, "MyAppCommonParent". As you create each file as a child of its particular Folder, you also make it a child of MyAppCommonParent. This becomes a lot more intuitive if you remember to think of Folders as labels. You can now easily retrieve all descendants by simply querying MyAppCommonParent in parents, as sketched below.
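A rough sketch of that query (mine, not from the original answer), assuming service is an authenticated Drive v3 client and the placeholder below is MyAppCommonParent's folder ID:
# one call fetches every file that lists the common parent among its parents
results = service.files().list(
    q="'MYAPPCOMMONPARENT_ID' in parents and trashed = false",  # placeholder ID
    fields="nextPageToken, files(id, name)").execute()
files = results.get('files', [])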
Alternative 3. Folders first
Start by getting all folders. Yep, all of them. Once you have them all in memory, you can crawl through their parents properties and build your tree structure and list of Folder IDs. You can then do a single files.list?q='folderA' in parents or 'folderA1' in parents or 'folderA1a' in parents.... Using this technique you can get everything in two http calls.
The pseudo code for option 3 is a bit like...
// get all folders from Drive
files.list?q=mimetype=application/vnd.google-apps.folder and trashed=false&fields=parents,name
// store in a Map, keyed by ID
// find the entry for folderA and note the ID
// find any entries where that ID is in the parents, note their IDs
// for each such entry, repeat recursively
// use all of the IDs noted above to construct a ...
files.list?q='folderA-ID' in parents or 'folderA1-ID' in parents or 'folderA1a-ID' in parents...
Alternative 2 is the most efficient, but only works if you have control of file creation. Alternative 3 is generally more efficient than Alternative 1, but there may be certain small tree sizes where 1 is best.
Sharing a Python solution to the excellent Alternative 3 by @pinoyyid above, in case it's useful to anyone. I'm not a developer, so it's probably hopelessly un-pythonic... but it works, only makes 2 API calls, and is pretty quick.
Get a master list of all the folders in a drive.
Test whether the folder-to-search is a parent (i.e. it has subfolders).
Iterate through subfolders of the folder-to-search testing whether they too are parents.
Build a Google Drive file query with one '<folder-id>' in parents segment per subfolder found.
Interestingly, Google Drive seems to have a hard limit of 599 '<folder-id>' in parents segments per query, so if your folder-to-search has more subfolders than this, you need to chunk the list.
FOLDER_TO_SEARCH = '123456789'  # ID of folder to search
DRIVE_ID = '654321'  # ID of shared drive in which it lives
MAX_PARENTS = 500  # Limit set safely below Google max of 599 parents per query.
# drive_api_ref is assumed to be an authenticated Drive v3 service resource,
# e.g. from googleapiclient.discovery.build('drive', 'v3', ...)

def get_all_folders_in_drive():
    """
    Return a dictionary of all the folder IDs in a drive mapped to their parent folder IDs (or to the
    drive itself if a top-level folder). That is, flatten the entire folder structure.
    """
    folders_in_drive_dict = {}
    page_token = None
    max_allowed_page_size = 1000
    just_folders = "trashed = false and mimeType = 'application/vnd.google-apps.folder'"
    while True:
        results = drive_api_ref.files().list(
            pageSize=max_allowed_page_size,
            fields="nextPageToken, files(id, name, mimeType, parents)",
            includeItemsFromAllDrives=True, supportsAllDrives=True,
            corpora='drive',
            driveId=DRIVE_ID,
            pageToken=page_token,
            q=just_folders).execute()
        folders = results.get('files', [])
        page_token = results.get('nextPageToken', None)
        for folder in folders:
            folders_in_drive_dict[folder['id']] = folder['parents'][0]
        if page_token is None:
            break
    return folders_in_drive_dict

def get_subfolders_of_folder(folder_to_search, all_folders):
    """
    Yield subfolders of the folder-to-search, then subsubfolders etc. Must be called by an iterator.
    :param all_folders: the dictionary returned by get_all_folders_in_drive().
    """
    temp_list = [k for k, v in all_folders.items() if v == folder_to_search]  # Get all subfolders
    for sub_folder in temp_list:  # For each subfolder...
        yield sub_folder  # Return it
        yield from get_subfolders_of_folder(sub_folder, all_folders)  # Get subsubfolders etc

def get_relevant_files(relevant_folders):
    """
    Get files under the folder-to-search and all its subfolders.
    """
    relevant_files = {}
    chunked_relevant_folders_list = [relevant_folders[i:i + MAX_PARENTS] for i in
                                     range(0, len(relevant_folders), MAX_PARENTS)]
    for folder_list in chunked_relevant_folders_list:
        query_term = ' in parents or '.join('"{0}"'.format(f) for f in folder_list) + ' in parents'
        relevant_files.update(get_all_files_in_folders(query_term))
    return relevant_files

def get_all_files_in_folders(parent_folders):
    """
    Return a dictionary of file IDs mapped to file names for the specified parent folders.
    """
    files_under_folder_dict = {}
    page_token = None
    max_allowed_page_size = 1000
    just_files = f"mimeType != 'application/vnd.google-apps.folder' and trashed = false and ({parent_folders})"
    while True:
        results = drive_api_ref.files().list(
            pageSize=max_allowed_page_size,
            fields="nextPageToken, files(id, name, mimeType, parents)",
            includeItemsFromAllDrives=True, supportsAllDrives=True,
            corpora='drive',
            driveId=DRIVE_ID,
            pageToken=page_token,
            q=just_files).execute()
        files = results.get('files', [])
        page_token = results.get('nextPageToken', None)
        for file in files:
            files_under_folder_dict[file['id']] = file['name']
        if page_token is None:
            break
    return files_under_folder_dict

if __name__ == "__main__":
    all_folders_dict = get_all_folders_in_drive()  # Flatten folder structure
    relevant_folders_list = [FOLDER_TO_SEARCH]  # Start with the folder-to-search
    for folder in get_subfolders_of_folder(FOLDER_TO_SEARCH, all_folders_dict):
        relevant_folders_list.append(folder)  # Recursively collect subfolders
    relevant_files_dict = get_relevant_files(relevant_folders_list)  # Get the files
Sharing a JavaScript solution using recursion to build an array of folders, starting with the first-level folder and moving down the hierarchy. The array is composed by recursively cycling through the parent IDs of the file in question.
The extract below makes 3 separate queries to the gapi:
get the root folder id
get a list of folders
get a list of files
The code then iterates through the list of files, creating an array of folder names for each.
const { google } = require('googleapis')
const gOAuth = require('./googleOAuth')

// resolve the promises for getting G files and folders
const getGFilePaths = async () => {
  // TODO: update to use Promise.all()
  let gRootFolder = await getGfiles().then(result => { return result[2][0]['parents'][0] })
  let gFolders = await getGfiles().then(result => { return result[1] })
  let gFiles = await getGfiles().then(result => { return result[0] })
  // attach a new key with an array of folder names, returning an array of files with their folder paths
  return gFiles
    .filter((file) => { return file.hasOwnProperty('parents') })
    .map((file) => ({ ...file, path: makePathArray(gFolders, file['parents'][0], gRootFolder) }))
}

// recursive function to build an array of the file paths top -> bottom
let makePathArray = (folders, fileParent, rootFolder) => {
  if (fileParent === rootFolder) { return [] }
  else {
    let filteredFolders = folders.filter((f) => { return f.id === fileParent })
    if (filteredFolders.length >= 1 && filteredFolders[0].hasOwnProperty('parents')) {
      let path = makePathArray(folders, filteredFolders[0]['parents'][0], rootFolder)
      path.push(filteredFolders[0]['name'])
      return path
    }
    else { return [] }
  }
}

// get meta-data list of files from gDrive, with query parameters
const getGfiles = () => {
  try {
    let getRootFolder = getGdriveList({
      corpora: 'user', includeItemsFromAllDrives: false,
      fields: 'files(name, parents)',
      q: "'root' in parents and trashed = false and mimeType = 'application/vnd.google-apps.folder'"
    })
    let getFolders = getGdriveList({
      corpora: 'user', includeItemsFromAllDrives: false,
      fields: 'files(id,name,parents), nextPageToken',
      q: "trashed = false and mimeType = 'application/vnd.google-apps.folder'"
    })
    let getFiles = getGdriveList({
      corpora: 'user', includeItemsFromAllDrives: false,
      fields: 'files(id,name,parents, mimeType, fullFileExtension, webContentLink, exportLinks, modifiedTime), nextPageToken',
      q: "trashed = false and mimeType != 'application/vnd.google-apps.folder'"
    })
    return Promise.all([getFiles, getFolders, getRootFolder])
  }
  catch (error) {
    return `Error in retrieving a file response from Google Drive: ${error}`
  }
}

// call out to gDrive for meta-data files, collecting all pages into a single array
const getGdriveList = async (params) => {
  const gKeys = await gOAuth.get()
  const drive = google.drive({ version: 'v3', auth: gKeys })
  let list = []
  let nextPgToken
  do {
    let res = await drive.files.list(params)
    list.push(...res.data.files)
    nextPgToken = res.data.nextPageToken
    params.pageToken = nextPgToken
  }
  while (nextPgToken)
  return list
}
The following works very well, but requires an additional call to the API.
It shares the root folder, does a search where the file is shared, then removes the share. This works great in our production environments.
var userPermission = new Permission()
{
    Type = "user",
    Role = "reader",
    EmailAddress = "AnyEmailAddress"
};
var request = service.Permissions.Create(userPermission, rootFolderID);
var result = request.ExecuteAsync().ContinueWith(t =>
{
    Permission permission = t.Result;
    if (t.Exception == null)
    {
        // Do your search here
        // make sure you add 'AnyEmailAddress' in readers
        service.Files.List......
        // then remove the share
        var requestDeletePermission = service.Permissions.Delete(rootFolderID, permission.Id);
        requestDeletePermission.Execute();
    }
});
For Google Apps Script, I've written this function:
function getSubFolderIdsByFolderId(folderId, result = []) {
  let folder = DriveApp.getFolderById(folderId);
  let folders = folder.getFolders();
  if (folders && folders.hasNext()) {
    while (folders.hasNext()) {
      let f = folders.next();
      let childFolderId = f.getId();
      result.push(childFolderId);
      result = getSubFolderIdsByFolderId(childFolderId, result);
    }
  }
  return result.filter(onlyUnique);
}

function onlyUnique(value, index, self) {
  return self.indexOf(value) === index;
}
With this call:
const subFolderIds = getSubFolderIdsByFolderId('1-id-of-the-root-folder-to-check')
And this for loop:
let q = [];
for (let i in subFolderIds) {
  let subFolderId = subFolderIds[i];
  q.push('"' + subFolderId + '" in parents');
}
if (q.length > 0) {
  q = '(' + q.join(' or ') + ') and';
} else {
  q = '';
}
This gives me the required query fragment for the DriveApp.searchFiles call, used as shown below.
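For example, a usage sketch (the trashed = false clause is my illustrative addition, not part of the original snippet):
let files = DriveApp.searchFiles(q + ' trashed = false');
while (files.hasNext()) {
  let file = files.next();
  Logger.log(file.getName());
}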
A major disadvantage of this approach is the number of requests and the time you have to wait until you get the complete list, depending on the size of the root directory. I would not call this an ideal solution!
Maybe caching could improve performance for subsequent calls, if you take the modification date into account in the Drive API query.
I'm curious because, in the browser version of Google Drive, you can search recursively within folders, and it does not take nearly as much time as my approach.

Why does Brunch compile everything it finds when using a function in joinTo

I don't understand why Brunch compiles all files (including bower_components) when I use a function like this (CoffeeScript):
modules = ['i18n', 'pager', 'core', 'module-comment']

javascripts:
  joinTo:
    # Main
    '/js/master.js': () ->
      paths = ['bower_components/bootstrap/**/*', 'app/**/*']
      for o in modules
        fs.exists '../../../../workbench/dynamix/' + o, (exists) ->
          if exists
            paths.push '../../../../workbench/dynamix/' + o + '/public/public/**/*'
          else
            paths.push '../../../../vendor/dynamix/' + o + '/public/public/**/*'
      return paths
I want to test whether a path exists; if it does, put the complete path in a variable and return it to joinTo. I successfully get the files in workbench/vendor, but it also pulls in undesired files from bower_components (which I didn't specify?!).
I would like to optimize this:
javascripts:
  joinTo:
    # Main
    '/js/master.js': [
      'bower_components/bootstrap/**/*'
      '../../../../workbench/dynamix/i18n/public/public/**/*'
      '../../../../workbench/dynamix/pager/public/public/**/*'
      '../../../../vendor/dynamix/core/public/public/**/*'
      '../../../../workbench/dynamix/module-comment/public/public/**/*'
      '../../../../workbench/dynamix/module-love-live-music/public/public/**/*'
      '../../../../workbench/dynamix/module-rating/public/public/**/*'
      '../../../../workbench/dynamix/module-registration/public/public/**/*'
      'app/**/*'
    ]
I'm sorry, I didn't find any documentation on using a function in joinTo.
Thanks
A function in a joinTo should take a file path as an argument and return true if the path should be included, false if not. This is described in the anymatch documentation.
Your function appears to always return a truthy value, meaning every path Brunch is watching will be included.
What you might have intended to do is use an IIFE, so that the return value of the function (invoked during the initial evaluation of the config) gets assigned to the joinTo. In CoffeeScript you can accomplish this easily with the do keyword: instead of starting your function definition with () ->, it'd be do ->, as in the sketch below.
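A minimal sketch of that do -> variant (my illustration, not tested against your project; it assumes fs is required at the top of the config and swaps the asynchronous fs.exists for fs.existsSync, since the async callback fires only after the function has already returned):
javascripts:
  joinTo:
    '/js/master.js': do ->
      paths = ['bower_components/bootstrap/**/*', 'app/**/*']
      for o in modules
        base = '../../../../workbench/dynamix/' + o
        base = '../../../../vendor/dynamix/' + o unless fs.existsSync base
        paths.push base + '/public/public/**/*'
      paths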
Thank you for your time and your answer.
I made the function get the files and test whether the path exists, and it works fine.
In case it helps someone, I'll leave my magic function here:
javascripts:
  joinTo:
    # Main
    '/js/master.js': [
      'bower_components/bootstrap/**/*'
      'bower_components/unveil/**/*'
      'app/**/*'
      (string) ->
        response = false
        modules = ['i18n', 'pager', 'core', 'module-comment']
        for o in modules
          exists = fs.existsSync unixify('../../../../workbench/dynamix/' + o)
          if exists
            if unixify(string).indexOf(unixify('../../../../workbench/dynamix/' + o + '/public/public/')) != -1
              response = true
          else
            if unixify(string).indexOf(unixify('../../../../vendor/dynamix/' + o + '/public/public/')) != -1
              response = true
        return response
    ]