Deleting local database sdf file from isolated storage - windows-phone-8

I am trying to delete my local database file, 'transferDB.sdf', with the following code:
PrideDataContext db = new PrideDataContext(App.DBConnectionString);
if (db.DatabaseExists())
{
    //db.Dispose();
    db.DeleteDatabase();
}
but when I run this code it throws the following exception:
$exception {System.IO.IOException: The process cannot access the file 'C:\Data\Users\DefApps\AppData\{AF715F29-8BFC-48DE-AD88-E71D8F37BC26}\local\transferDB.sdf' because it is being used by another process.
Why is it trying to delete a file in my local data folder instead of on the real device that I am testing on?
Pretty confused. Can somebody help me here? (I think it's referring to one of my emulator's temporary SDF files instead of the device's one.)
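For reference, a minimal sketch of the delete, under the assumption that the IOException comes from another connection to transferDB.sdf (for example a PrideDataContext created elsewhere in the app) that is still open when DeleteDatabase runs:

// Sketch: make sure no other PrideDataContext (or other connection to the
// .sdf) is still open, then delete using a short-lived context.
using (var db = new PrideDataContext(App.DBConnectionString))
{
    if (db.DatabaseExists())
    {
        db.DeleteDatabase();
    }
}

As far as I know, the path format in the exception is simply how the app's local storage folder is reported on the Windows Phone target itself (emulator or device), not a folder on the development PC.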

Related

Azure ADLS Gen2 file created by Azure Databricks doesn't inherit ACL

I have a Databricks notebook that writes a dataframe to a file in ADLS Gen2 storage.
It creates a temp folder, outputs the file, and then copies that file to a permanent folder. For some reason the file doesn't inherit the ACL correctly. The folder it creates has the correct ACL.
The code for the notebook:
# Get data into a dataframe
df_export = spark.sql(SQL)

# Output the file to a temp directory; coalesce(1) creates a single output data file
(df_export.coalesce(1).write.format("parquet")
    .mode("overwrite")
    .save(TempFolder))

# Get the parquet file name. It's always the last in the folder, as the other files are created starting with _
file = dbutils.fs.ls(TempFolder)[-1][0]

# Create the permanent copy
dbutils.fs.cp(file, FullPath)
The temp folder that is created shows the expected ACL entries for the relevant account, but the file does not (the screenshots of the ACLs are not reproduced here). There is also a mask; I'm not really familiar with masks, so I'm not sure how the mask shown on the folder differs from the one shown on the file.
Does anyone have any idea why the file wouldn't be inheriting the ACL from the parent folder?
I've had a response from Microsoft support that resolved this issue for me.
Cause: Files stored by Databricks have the service principal as their owner, with permission -rw-r--r--. This forces the effective permission of the rest of the batch users in ADLS down from rwx (the directory permission) to r--, which in turn causes jobs to fail.
Resolution: Change the default mask (022) to a custom mask (000) on the Databricks end. You can set the following in the Spark configuration settings under your cluster configuration: spark.hadoop.fs.permissions.umask-mode 000
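If editing the cluster configuration isn't an option, the same Hadoop property can, as far as I know, also be set from a notebook at runtime. A sketch (the cluster-level Spark config setting above remains the supported route, and a runtime change only affects writes issued after it runs):

# Set the Hadoop umask on the running session (uses PySpark's internal _jsc handle)
spark.sparkContext._jsc.hadoopConfiguration().set("fs.permissions.umask-mode", "000")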
Wow, that's great! I was looking for a solution; passthrough authentication might be a proper solution now.
I had the feeling it was related to this ancient Hadoop bug:
https://issues.apache.org/jira/browse/HDFS-6962 (solved in Hadoop 3, now part of Spark 3+).
Spark tries to set the ACLs after moving the files, but fails. First the files are created somewhere else in a tmp dir, and the tmp dir's rights are inherited through the default ADLS behaviour.

How to set the path of a CSV file that is in account storage in azure data factory pipeline

I have created an SSIS package that reads from a CSV file (using the Flat File connection manager) and loads records into a database. I have deployed it to an Azure Data Factory pipeline, and I need to give the path of the CSV file as a parameter. I have created an Azure storage account and uploaded the source file there (screenshot not reproduced here).
Can I just give the URL of the source file for the import file in the SSIS package settings? I tried it, but it currently throws error 2906. I am new to Azure, so I'd appreciate any help here.
First, you said Excel and then you said CSV. Those are two different formats, but since you mention the Flat File connection manager, I'm going to assume you meant CSV. If not, let me know and I'll update my answer.
I think you will need to install the SSIS Feature Pack for Azure and use the Azure Storage connection manager. You can then use the Azure Blob Source in your data flow task (it supports CSV files). When you add the blob source, the GUI should help you create the new connection manager. There is a tutorial on MSSQLTips that shows each step. It's a couple of years old, but I don't think much has changed.
As a side thought, is there a reason you chose SSIS over native ADF V2? It does a nice job of copying data from blob storage to a database.
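For reference, a rough sketch of what that native ADF V2 copy might look like in pipeline JSON; the dataset names are placeholders, assuming a delimited-text dataset over the blob container and an Azure SQL dataset over the target table:

{
  "name": "CopyCsvToSql",
  "properties": {
    "activities": [
      {
        "name": "CopyFromBlobToDatabase",
        "type": "Copy",
        "inputs":  [ { "referenceName": "CsvSourceDataset", "type": "DatasetReference" } ],
        "outputs": [ { "referenceName": "SqlSinkDataset", "type": "DatasetReference" } ],
        "typeProperties": {
          "source": { "type": "DelimitedTextSource" },
          "sink": { "type": "AzureSqlSink" }
        }
      }
    ]
  }
}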

How to save my JSON file after updating (resets after restart program)

So I'm trying to program something using Node.js. I've got a file called 'profile.json' that contains an object. When something happens, I need to update the value of 'name' to a new name, so I do
profile.name = name2;
but after I restart my program everything comes back and I have to change it again. So my problem is: how would I save the JSON after updating it?
It is not saving because you are reading the file and updating the object in your application; you are not changing anything in the file itself. Once you read the file and parse the JSON, no link exists to the original file, and the JSON exists only in memory. You will want to use the Node.js File System module to write the file: https://nodejs.org/api/fs.html. First, check whether the file exists and, if it does, delete it (or move/rename it). Second, save the file using the fs.writeFile method.
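A minimal sketch of that, assuming profile.json sits next to the script; the variable name2 is just an example value standing in for your new name:

const fs = require('fs');

const path = './profile.json';
const name2 = 'NewName';                    // example value

// read and parse the current file
const profile = JSON.parse(fs.readFileSync(path, 'utf8'));

// update the object in memory
profile.name = name2;

// write the updated object back to disk so it survives a restart
fs.writeFileSync(path, JSON.stringify(profile, null, 2));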

Copy/Move files in PDI / Spoon yields 'is not a file' error

I am trying to automate weekly generation of a database. As a first step in this process, I need to obtain a set of files from network location M:\. The process is as follows:
Delete any possibly remaining old source files from my local folder (REMOVE_OLD_FILES).
Obtain the names of the required files using regular expressions (GET_FILES).
Copy the files from the network location to my local folder for further processing (COPY/MOVE FILES).
Step 3 is where I run into trouble; I frequently receive the error below:
Error processing files. Exception : org.apache.commons.vfs.FileNotFoundException: Could not read from "file:///M:/FILESOURCE/FILENAME.zip" because it is a not a file.
However, when I manually locate the 'erroneous' file on the network location and try to open or copy it, there are no problems. If I then re-run the Spoon job, no errors occur for this file (although the next file might lead to an error).
So far, I have verified that steps 1 and 2 run correctly: more specifically, there are no errors in the file names returned from step 2.
Obviously, I would prefer not having to manually open all the files first to ensure that Spoon can copy them correctly. Does anyone have an idea what might be causing this behaviour?
For completeness, below are the parameters selected in the COPY/MOVE FILES step (screenshot not reproduced here).
I was facing the same issue with different clients, and finally I tried a basic approach and it got resolved. It might help in your case as well, and other users can follow this rule too.
Just try this: create all the required folders with the Spoon job entry "Create a folder", then disable or delete those hops from your job or transformation once the folders exist.
This happens because the user you are using to delete the file(s) is not recognized as a Windows user. Once your folders are in place you can remove the "Create a folder" steps from your job.
The path to the file is wrong. If you are running Spoon in a Windows environment you should use the Windows format for file paths. Try changing from
"file:///M:/FILESOURCE/FILENAME.zip"
to
"M:\FILESOURCE\FILENAME.zip"
By the way, this will only work if M: is an actual drive on the machine. If you want to access a file on the network you should use the network path to the shared folder, like this:
"\\MachineName\M$\FILESOURCE\FILENAME.zip"
or
"\\MachineName\FILESOURCE\FILENAME.zip"
If you try to access a file on a network-mounted drive it won't work.

How can I add file locations to a database after they are uploaded using a Perl CGI script?

I have a CGI program I have written using Perl. One of its functions is to upload pics to the server.
All of it is working well, including adding all kinds of info to a MySQL db. My question is: how can I get the uploaded pic files' locations and names added to the db?
I would rather do that than change the script to actually upload the pics into the db; I have heard horror stories about storing binary files in databases.
Since I am new to all of this, I am at a loss. I have tried doing research and web searches for 3 weeks now with no luck. Any suggestions or answers would be greatly appreciated. I would really hate to have to manually add all the locations/names to the db.
I am using a Perl CGI script, a MySQL db, and a Linux server, and the files are being uploaded to the server. I am NOT looking to add the actual files to the db, just their location(s).
It sounds like your upload handling is already complete: you take the upload, read it in much as you would read a file into a string, and push the other details into MySQL. What you're wondering is where the uploaded file itself actually ends up, since CGI gives you a filehandle rather than a filename.
If you're using CGI.pm, then upload(), uploadInfo(), the param() value for the upload field, and the module's private temp files will help you deal with where the uploaded file lives. Wherever it is stashed once the remote client and the CGI script are done is usually not permanent, and at a minimum it is volatile.
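A tiny sketch of locating the stashed upload with CGI.pm; the form field name 'photo' is just an assumption for illustration:

use CGI;

my $q  = CGI->new;
my $fh = $q->upload('photo');                              # lightweight filehandle
my $on_disk = $q->tmpFileName(scalar $q->param('photo'));  # temp path CGI.pm used
# Copy or move $on_disk somewhere permanent before the request ends,
# because the temp file is cleaned up afterwards.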
You've got a bunch of already-uploaded files whose details need to be added to the db? It should be trivial to dash off a one-off script that loops through all the files and inserts the details into the DB. If they're all in one spot, then a simple opendir()/readdir() loop would catch them all; otherwise you can make a list of file paths and loop over that.
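A sketch of that one-off backfill, assuming everything lives in one directory; the directory path, table, and column names are placeholders:

#!/usr/bin/perl
use strict;
use warnings;
use DBI;

# Record the location of every file already sitting in the upload directory.
my $upload_dir = '/var/www/uploads';

my $dbh = DBI->connect('DBI:mysql:database=site;host=localhost',
                       'dbuser', 'dbpass', { RaiseError => 1 });
my $sth = $dbh->prepare('INSERT INTO pictures (filename, location) VALUES (?, ?)');

opendir(my $dh, $upload_dir) or die "Cannot open $upload_dir: $!";
while (my $file = readdir($dh)) {
    next if $file =~ /^\./;                 # skip . , .. and hidden files
    next unless -f "$upload_dir/$file";     # skip subdirectories
    $sth->execute($file, "$upload_dir/$file");
}
closedir($dh);
$dbh->disconnect;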
If you're talking about recording new uploads on the server, then it would be something along these lines:
user uploads file to server
script extracts any wanted/needed info from the file (name, size, mime-type, checksums, etc...)
start database transaction
insert file info into database
retrieve ID of new record
move uploaded file to final resting place, using the ID as its filename
if everything goes well, commit the transaction
Using the ID as the filename removes the worries about filename collisions and about new uploads overwriting previous ones. And if you store the uploads somewhere outside of the site's webroot, then the only access to the files will be via your scripts, giving you complete control over downloads.
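A sketch of that workflow for a new upload, using CGI.pm and DBI; the table name, column names, connection details, form field name, and the /var/uploads directory are placeholders, not anything from the original post:

use strict;
use warnings;
use CGI;
use DBI;
use File::Copy qw(move);

my $q        = CGI->new;
my $filename = $q->param('photo');                 # 'photo' is an assumed form field
my $info     = $q->uploadInfo($filename);
my $mime     = $info ? $info->{'Content-Type'} : 'application/octet-stream';

my $dbh = DBI->connect('DBI:mysql:database=site;host=localhost',
                       'dbuser', 'dbpass',
                       { RaiseError => 1, AutoCommit => 0 });

eval {
    # insert the file's info, then fetch the new row's ID
    $dbh->do('INSERT INTO uploads (orig_name, mime_type) VALUES (?, ?)',
             undef, "$filename", $mime);
    my $id = $dbh->last_insert_id(undef, undef, 'uploads', 'id');

    # move the temporary upload to its final home, named after the row ID
    my $dest = "/var/uploads/$id";
    move($q->tmpFileName($filename), $dest) or die "move failed: $!";

    $dbh->do('UPDATE uploads SET path = ? WHERE id = ?', undef, $dest, $id);
    $dbh->commit;
};
if ($@) {
    $dbh->rollback;
    warn "upload failed: $@";
}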