How can I upload a file to a sub-directory of our team drive? drive_find() identifies items in my directory, but no matter what I try, the best I can do is get the files to land in the team drive's root directory.
I successfully get a list of names, ids, and drive resources from this:
drive_find(team_drive = 'Data Analytics Team')
like so:
# A tibble: 29 x 3
name id drive_resource
* <chr> <chr> <list>
1 00.ExampleSubDirectory 1XoNCDizzZMHZ4sbBhnCXb-qokk8TW7Q_ <list [30]>
2 df_iris_in-2019-05-01 1kXSD_t96roqeLuXb0BDJfpCejlyZCa6FSL2YtdeWtxE <list [33]>
3 df_iris_in-2019-05-01 1qT_kRff8J8Qu5ZLxZhGLMDB7gO9O1PTtJ_KHsjItgFI <list [33]>
When I attempt to use the example sub-directory id like so:
td <- team_drive_get(as_id("1XoNCDizzZMHZ4sbBhnCXb-qokk8TW7Q_"))
All I get is this error:
Error: HTTP error [404] Shared drive not found: 1XoNCDizzZMHZ4sbBhnCXb-qokk8TW7Q_
* domain: global
* reason: notFound
* message: Shared drive not found: 1XoNCDizzZMHZ4sbBhnCXb-qokk8TW7Q_
* locationType: parameter
* location: driveId
I get the same results using the URL or the resource id. I have tried everything in the docs here: https://googledrive.tidyverse.org/
https://cran.r-project.org/web/packages/googledrive/googledrive.pdf
How can I specify a path to a subdirectory inside my team drive?
Note that team_drive_get() resolves only the ids of team (shared) drives themselves, not of folders inside them, which is why passing a folder id produces the 404 above. Instead, find the id of the folder you want to write to. It is easiest to navigate to it in the browser; the id is located in the URL.
For example it is 1v4SQb39RTE0MCzrZlLXzxVDB4HPZ8NK7 in this URL: https://drive.google.com/drive/u/0/folders/1v4SQb39RTE0MCzrZlLXzxVDB4HPZ8NK7
Stuff that id into a googledrive file reference (a dribble):
drivepath <- drive_get(as_id("1v4SQb39RTE0MCzrZlLXzxVDB4HPZ8NK7"))
Write your file to csv, being sure to name the csv what you want the spreadsheet to be labelled (minus the .csv extension). (You can do this in a tempfile, but then your Google spreadsheet will end up with the tempfile's name.)
write_csv(iris, 'iris_example.csv')
Finally push the file up to your desired directory.
drive_upload('iris_example.csv', type='spreadsheet', path = drivepath)
I am creating a function that copies a folder and pastes it under another name.
I have my "Main" folder, and depending on the query I make copies of it named like this: "Main 1", "Main 2", "Main 3".
The way I managed to solve this was with this function:
function TDMMonitor.CopyFolder(Origin, Destination: String): Boolean;
var
  aFiles: TStringDynArray;
  InFile, OutFile: string;
begin
  aFiles := TDirectory.GetFiles(Origin, '*.*', TSearchOption.soAllDirectories);
  for InFile in aFiles do
  begin
    OutFile := TPath.Combine(Destination, TPath.GetFileName(InFile));
    TFile.Copy(InFile, OutFile, True);
  end;
  Result := True;
end;
This works! But my problem right now is that the parent folder has subfolders that are not being copied correctly.
I leave a more visual example of the results of my function below:
"Main" folder:
File.txt
File1.txt
Sub-Folder -> File3.txt
"Main 1" folder:
File.txt
File1.txt
File3.txt
How can I maintain the folder structure that the Main folder follows?
TDirectory.Copy works:
TDirectory.Copy('D:\Path\Main', 'D:\Path\Main 1');
TDirectory.GetFiles() returns an array of absolute paths to each file found. The soAllDirectories flag tells it to search through subfolders recursively. So, you will end up with an array of paths at potentially different levels.
TPath.GetFileName() strips off all folder path info, leaving just the file name.
Based on your example, you are searching recursively through C:\Main\ and you want to copy everything to C:\Main 1\. When the search gives you the file C:\Main\Sub-Folder\File3.txt, your use of TPath.GetFileName() discards C:\Main\Sub-Folder\, so you concatenate C:\Main 1\ with just File3.txt. Thus you copy C:\Main\Sub-Folder\File3.txt to C:\Main 1\File3.txt instead of to C:\Main 1\Sub-Folder\File3.txt.
That is why your subfolders are not copying correctly.
Rather than using TPath.GetFileName(), you would need to replace only the Origin portion of each returned absolute path (C:\Main\) with the Destination path (C:\Main 1\), leaving everything else in the path intact. Thus, C:\Main\Sub-Folder\File3.txt would become C:\Main 1\Sub-Folder\File3.txt.
Otherwise, don't use soAllDirectories at all. Recurse through the subfolders manually using TDirectory.GetDirectories() instead, so that you are handling only one level of folders at a time.
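To make the prefix-replacement idea concrete, here is a minimal sketch in Python rather than Delphi (the names are placeholders; the same logic carries over to TDirectory.GetDirectories() and the TPath routines):
import os
import shutil

def copy_tree(origin, destination):
    # Walk recursively; for each directory, swap only the origin prefix
    # for the destination prefix, keeping the rest of the path intact.
    for dirpath, dirnames, filenames in os.walk(origin):
        target_dir = os.path.join(destination, os.path.relpath(dirpath, origin))
        os.makedirs(target_dir, exist_ok=True)
        for name in filenames:
            shutil.copy2(os.path.join(dirpath, name),
                         os.path.join(target_dir, name))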
I am extracting prosody features from an audio file using the Windows version of openSMILE. It runs successfully and an output csv is generated. But when I open the csv, it shows some rows that are not readable. I used this command to extract the prosody features:
SMILEXtract -C \opensmile-3.0-win-x64\config\prosody\prosodyShs.conf -I audio_sample_01.wav -O prosody_sample1.csv
And the output csv looks like this:
[
I even tried the sample wave file given in the example audio folder of the openSMILE directory, and the output is the same (not readable). Can someone help me identify where the problem actually is, and how I can fix it?
You need to enable the csvSink component in the configuration file to make it work. The file config\prosody\prosodyShs.conf that you are using does not have this component defined and always writes binary output.
You can verify that it is the standard binary output this way: omit the -O parameter from your command so it becomes SMILEXtract -C \opensmile-3.0-win-x64\config\prosody\prosodyShs.conf -I audio_sample_01.wav and execute it. You will get an output.htk file which is exactly the same as the prosody_sample1.csv.
How to output csv? You can take a look at the example configuration in opensmile-3.0-win-x64\config\demo\demo1_energy.conf, where a csvSink component is defined.
You can find more information in the official documentation:
Get started page of the openSMILE documentation
The section on configuration files
Documentation for cCsvSink
This is how I solved the issue. First I added the csvSink component to the list of component instances:
instance[csvSink].type = cCsvSink
Next I added the configuration parameters for this instance.
[csvSink:cCsvSink]
reader.dmLevel = energy
filename = \cm[outputfile(O){output.csv}:file name of the output CSV file]
delimChar = ;
append = 0
timestamp = 1
number = 1
printHeader = 1
\{../shared/standard_data_output_lldonly.conf.inc}
Now if you run this file it will throw errors, because reader.dmLevel = energy depends on waveframes. So the final changes would be:
[energy:cEnergy]
reader.dmLevel = waveframes
writer.dmLevel = energy

[int:cIntensity]
reader.dmLevel = waveframes

[framer:cFramer]
reader.dmLevel = wave
writer.dmLevel = waveframes
Further reference on how to write openSMILE configuration files can be found in the official openSMILE documentation linked above.
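As a quick sanity check that the new output really is plain text, the CSV produced by the csvSink above can be loaded in Python, assuming pandas is installed (the file name is a placeholder):
import pandas as pd

# The csvSink above writes ';'-separated values with a header row
# (delimChar = ; and printHeader = 1).
df = pd.read_csv('output.csv', sep=';')
print(df.head())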
I have a csv file with a large number of headers. Each time I get this file, I wish to change the names of some headers.
Sample File:
Name Price .....
Cat 5000
Dog 8000
I wish to change the header name "Price" to "Rate". I can't do it manually each time. Is there any automation tool available so that I can feed in the required changes, and which will in turn return the file after the applied changes?
Thanks
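A minimal sketch of one way to automate this in Python, assuming the pandas library is available and the file is comma-separated (the file names are placeholders):
import pandas as pd

# Map each old header name to its replacement; extend as needed.
renames = {'Price': 'Rate'}

df = pd.read_csv('input.csv')
df = df.rename(columns=renames)
df.to_csv('output.csv', index=False)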
The qa() function of the ShortRead Bioconductor library generates quality statistics from fastq files. The report() function then prepares a report of the various measures in an html format. A few other questions on this site have recommended using the display_html() function of IRdisplay to show html in Jupyter notebooks using R (IRkernel). However, it only throws errors for me when trying to display an html report generated by the report() function of ShortRead.
library("ShortRead")
sample_dir <- system.file(package="ShortRead", "extdata", "E-MTAB-1147") # A sample fastq file
qa_object <- qa(sample_dir, "*fastq.gz$")
qa_report <- report(qa_object, dest="test") # Makes a "test" directory containing 'image/', 'index.html' and 'QA.css'
library("IRdisplay")
display_html(file = "test/index.html")
Gives me:
Error in read(file, size): unused argument (size)
Traceback:
1. display_html(file = "test/index.html")
2. display_raw("text/html", FALSE, data, file, isolate_full_html(list(`text/html` = data)))
3. prepare_content(isbinary, data, file)
4. read_all(file, isbinary)
Is there another way to display this report in jupyter with R?
It looks like there's a bug in the code. The quick fix is to clone the GitHub repo and edit ./IRdisplay/R/utils.r: on line 38, change the line from:
read(file,size)
to
read(size)
Save the file, switch to the parent directory, and create a new tarball, e.g.
tar -zcf IRdisplay.tgz IRdisplay/
and then re-install your new version, e.g. after re-starting R, type:
install.packages( "IRdisplay.tgz", repo=NULL )
Assume we have a directory with a structure like this; I marked directories with (+) and files with (-):
rootdir
  +a
    +a1
      -f1
      -f2
    +a2
      -f3
  +b
    +b1
      +b2
        -f4
        -f5
        -f6
    +b3
      -f7
  -f8
and a given list of files like
/a/a1/f1
/b/b1/b2/f5
/b/b3/f7
I am struggling to find a way to remove every file inside the root except the ones in the given list. After the program has executed, the root directory should look like this:
rootdir
  +a
    +a1
      -f1
  +b
    +b1
      +b2
        -f5
    +b3
      -f7
This example is just to make the problem easier to understand. In reality, the given list includes around 4 thousand files, and the root directory is ~15GB with hundreds of thousands of files inside.
It would be easy to search inside a folder and remove the files that match a given list; here the problem is reversed: keep the files that match the given list and remove everything else.
Programs written in Perl/Python are preferred.
First, store your list of files you want to keep inside an associative container like a Python dict or a map of some kind.
Second, simply iterate (in Python, os.walk) over the entire directory structure, and every time you see a file, check if it is in the associative container of paths to keep. If not, delete it (in Python, os.unlink).
Alternatively:
First, create a temporary directory on the same filesystem.
Second, move (os.renames, which generates new subdirectories as needed) all the "keep" files to the temporary directory, with the same structure.
Third, overwrite (os.removedirs followed by os.rename, or just shutil.move) the original directory with the temporary one.
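A minimal sketch of this alternative, assuming the keep-list entries are paths relative to the root directory (all names are placeholders); it discards the leftovers with shutil.rmtree rather than os.removedirs:
import os
import shutil

root = '/path/to/rootdir'      # hypothetical root directory
tmp = '/path/to/rootdir.tmp'   # temporary dir on the same filesystem

keep = ['a/a1/f1', 'b/b1/b2/f5', 'b/b3/f7']

# Move every "keep" file into the temp tree, preserving structure;
# os.renames creates the intermediate directories as needed.
for rel in keep:
    os.renames(os.path.join(root, rel), os.path.join(tmp, rel))

shutil.rmtree(root)   # everything left behind is unwanted
os.rename(tmp, root)  # the temp tree becomes the new rootdir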
The os.walk path:
import os

keep = set(['/a/a1/f1', '/b/b1/b2/f5', '/b/b3/f7'])

for dirpath, dirnames, filenames in os.walk('./'):
    for name in filenames:
        path = os.path.join(dirpath, name).lstrip('.')
        print('check ' + path)
        if path not in keep:
            print('delete ' + path)
        else:
            print('keep ' + path)
It doesn't do anything except inform you.
I don't think os.walk is too slow, and it gives you the option of keeping by regex patterns or any other criteria.
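For example, a hypothetical variant of the loop above that keeps files by a regex pattern instead of exact paths:
import os
import re

# Keep every file named f1, f5 or f7, wherever it appears in the tree.
keep_pattern = re.compile(r'/(f1|f5|f7)$')

for dirpath, dirnames, filenames in os.walk('./'):
    for name in filenames:
        path = os.path.join(dirpath, name)
        if not keep_pattern.search(path):
            print('would delete ' + path)  # swap in os.unlink(path) once satisfied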
This is working code for your problem:
import os

def list_files(directory):
    for root, dirs, files in os.walk(directory):
        for name in files:
            yield os.path.join(root, name)

# Keep a set instead of a list for faster lookups.
files_to_keep = {'/home/vedang/Desktop/a.out', '/home/vedang/Desktop/ABC/temp.txt'}

for f in list_files('/home/vedang/Desktop'):
    if f not in files_to_keep:
        os.unlink(f)
Here is a function which accepts a set of files you wish to keep and the root directory from which you wish to begin deleting files.
It's a classic recursive depth-first search that removes empty directories after deleting all the unwanted files:
import os

def delete_files(keep_list: set, curr_dir):
    files = os.listdir(curr_dir)
    for f in files:
        path = f"{curr_dir}/{f}"
        if os.path.isfile(path):
            if path not in keep_list:
                os.remove(path)
        elif os.path.islink(path):
            os.unlink(path)
        elif os.path.isdir(path):
            delete_files(keep_list, path)
    # After the subtree is processed, remove this directory if it is now empty.
    files = os.listdir(curr_dir)
    if not files:
        os.rmdir(curr_dir)
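A hypothetical usage example; note that the keep-list entries must be full paths in the same form the function builds them (curr_dir + "/" + name):
keep = {'/home/user/rootdir/a/a1/f1',
        '/home/user/rootdir/b/b1/b2/f5',
        '/home/user/rootdir/b/b3/f7'}
delete_files(keep, '/home/user/rootdir')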
Here is a solution from a different angle. Suppose we are in a Linux environment.
First,
find .
to get a full list with every file and folder path.
Second, suppose we have the keep-path list (say thousands of entries, written in the same form that find prints them); we can just append it to the previous list, and
| sort | uniq -c | grep -v "^ *2 "
to filter out every path that appears twice, leaving the to-delete list.
And third,
| xargs rm
to actually do the deletion.