I need to transfer large music files using Apache Qpid.
I have read some blogs that mention file transfer is possible, but I could not find any examples of it.
Any help will be appreciated.
Thanks
Harisha
Files should be segmented and transferred as blocks.
For sending, use the Qpid Messaging API call message.setContent(buf, len) to put each block into a message.
On the receiving side, use dataPtr = message.getContentPtr(); and len = message.getContentSize(); to get the binary data back.
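To make the block idea concrete, here is a minimal C# sketch of just the segmentation loop (the file path, block size, and the sendBlock delegate are placeholders I made up; the actual send would use the Messaging API calls mentioned above, i.e. message.setContent(buf, len) on each block).

    using System;
    using System.IO;

    class FileSegmenter
    {
        // Reads the file in fixed-size blocks and hands each block to sendBlock.
        // sendBlock is a hypothetical delegate standing in for the real Qpid send.
        static void SendFileInBlocks(string path, int blockSize, Action<byte[], int, long> sendBlock)
        {
            using var file = File.OpenRead(path);
            var buffer = new byte[blockSize];
            long sequence = 0;
            int read;
            while ((read = file.Read(buffer, 0, buffer.Length)) > 0)
            {
                // Each block would typically carry a sequence number (and a total
                // count or "last block" flag) in message properties so the
                // receiver can reassemble the file in order.
                sendBlock(buffer, read, sequence++);
            }
        }

        static void Main()
        {
            SendFileInBlocks(@"C:\music\track.flac", 256 * 1024,           // placeholders
                (buf, len, seq) => Console.WriteLine($"block {seq}: {len} bytes"));
        }
    }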
Automatic JSON Files upload to Blob Storage.
Description:
We have an SSIS job which generates JSON files with data at a server path. We are manually copying the JSON files and dropping them in Blob storage in order to trigger our logic app.
Could anyone help with information on how we can automate copying the JSON files to Blob storage? (For example, is there an approach or code to pick up the JSON files at a specific time and copy them to Blob storage?)
The solution is to listen for file system changes at your server path, then use the Azure Storage SDK to upload the files, triggered by the file-changed event.
For reference, here are some resources (API docs and SO threads) about file-change listeners in different languages, since I don't know which language you want to use.
C#: FileSystemWatcher Class
Python: How do I watch a file for changes?
Node.js: Observe file changes with node.js
For other languages, you can easily find a solution by searching. To upload files to Azure Storage, refer to the official Azure getting-started tutorials in your language of choice to write your code.
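In case it helps, here is a minimal C# sketch combining the two pieces, assuming the Azure.Storage.Blobs package: FileSystemWatcher to spot new files and a blob client to upload them. The connection-string variable, container name, and watched path are placeholders, and a real version should also handle files that are still being written when the event fires.

    using System;
    using System.IO;
    using Azure.Storage.Blobs;

    class JsonBlobUploader
    {
        static void Main()
        {
            // Placeholder container name; connection string read from an environment variable.
            var container = new BlobContainerClient(
                Environment.GetEnvironmentVariable("AZURE_STORAGE_CONNECTION_STRING"),
                "json-drop");
            container.CreateIfNotExists();

            // Placeholder server path; watch only the JSON files the SSIS job writes.
            var watcher = new FileSystemWatcher(@"C:\ssis\output", "*.json");
            watcher.Created += (s, e) =>
            {
                // Upload the new file; overwrite if a blob with the same name exists.
                // A production version should wait/retry until the file is fully written.
                var blob = container.GetBlobClient(Path.GetFileName(e.FullPath));
                blob.Upload(e.FullPath, overwrite: true);
            };
            watcher.EnableRaisingEvents = true;

            Console.WriteLine("Watching for new JSON files. Press Enter to stop.");
            Console.ReadLine();
        }
    }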
I'm writing a .NET application which sends files to a server using the AWSSDK.S3 library.
The server that I'm sending files to is an internal solution implementing a subset of Amazon S3 functionality, and currently not supporting chunked upload.
How do I enable single chunk upload?
To change the current behavior I had to set
putRequest.UseChunkEncoding = false;
inside Amazon.S3.Transfer.Internal.SimpleUploadCommand.ConstructRequest().
It seems that library clients currently have no way to change this themselves.
Link for the issue: https://github.com/aws/aws-sdk-net/issues/1057
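If your scenario allows using the low-level AmazonS3Client directly instead of TransferUtility, PutObjectRequest itself appears to expose a UseChunkEncoding property that you can set yourself (worth verifying against the AWSSDK.S3 version you target). A minimal sketch, with the endpoint, bucket, key, and file path as placeholders:

    using System.Threading.Tasks;
    using Amazon.S3;
    using Amazon.S3.Model;

    class SingleChunkUpload
    {
        static async Task Main()
        {
            var config = new AmazonS3Config
            {
                ServiceURL = "https://s3.internal.example.com", // placeholder internal endpoint
                ForcePathStyle = true                           // often needed for S3-compatible servers
            };
            using var client = new AmazonS3Client(config);

            var request = new PutObjectRequest
            {
                BucketName = "my-bucket",         // placeholder
                Key = "uploads/file.bin",         // placeholder
                FilePath = @"C:\data\file.bin",   // placeholder
                UseChunkEncoding = false          // send the payload without aws-chunked encoding
            };

            await client.PutObjectAsync(request);
        }
    }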
I have uploaded a .avro file of about 100MB to Google Cloud Storage. It was converted from an 800MB .csv file.
When trying to create a table from this file in the BigQuery web interface, I get the following error after a few seconds:
script: Resources exceeded during query execution: UDF out of memory. (error code: resourcesExceeded)
Job ID audiboxes:bquijob_4462680b_15607de51b9
I checked the BigQuery Quota Policy and I think my file does not exceed it.
Is there a workaround, or do I need to split my original .csv in order to get multiple, smaller .avro files?
Thanks in advance!
This error means that the parser used more memory than allowed. We are working on fixing this issue. In the meantime, if you used compression in the Avro files, try removing it. Using a smaller data block size will also help.
And yes, splitting into smaller Avro files (say 10MB or less) will help too, but the two approaches above are easier if they work for you.
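If you do end up splitting the source data, here is a minimal C# sketch (not from the original answer; paths and the rows-per-file value are placeholders) that splits a large CSV into smaller pieces while repeating the header row, so each piece can be converted into its own, smaller Avro file. It is a naive line-based split; a CSV with embedded newlines inside quoted fields would need a real CSV parser.

    using System.IO;

    class CsvSplitter
    {
        static void Main()
        {
            const string inputPath = @"C:\data\big.csv"; // placeholder
            const int rowsPerFile = 500_000;             // tune so each converted Avro file stays small

            using var reader = new StreamReader(inputPath);
            string header = reader.ReadLine();           // assumes the first line is a header row

            int part = 0;
            int rowsInPart = 0;
            StreamWriter writer = null;
            string line;
            while ((line = reader.ReadLine()) != null)
            {
                if (writer == null || rowsInPart >= rowsPerFile)
                {
                    // Start a new part file and repeat the header in it.
                    writer?.Dispose();
                    writer = new StreamWriter($@"C:\data\big_part{part++}.csv"); // placeholder pattern
                    writer.WriteLine(header);
                    rowsInPart = 0;
                }
                writer.WriteLine(line);
                rowsInPart++;
            }
            writer?.Dispose();
        }
    }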
I have a folder with hundreds of files that were saved in the specific format of a given piece of software (in this case Qualisys Track Manager, and the file format is .qtm).
This software has the option of exporting the files to other formats such as TSV, MAT, C3D, ...
My problem: I want to export all my files to TSV format, but the only way I know is to open the software and go to File->Export->To TSV. Doing this for hundreds of files is time consuming, so I was thinking of writing a script that would take my files, drive the software, and do the export automatically.
But I have no clue how to do this. I was thinking of writing a script in Notepad++ and running it from the command window, so that I would end up with all the files in TSV format.
[EDIT] After some research I think a batch script or a PowerShell script may help me, but I have no idea how to run the software's commands automatically, or if it is even possible... (I am using Windows 10)
.qtm is very likely a proprietary file format, and PowerShell/batch would not understand it. Unless the file can be read in a known way (text, XML, etc.), they would not be able to convert it.
I googled it and it seems QTM has a REST API interface. That would be your best chance. I'm not sure if the documentation is publicly available; I didn't find it. I'd recommend contacting their support for the REST API documentation, to ask whether the REST API can handle this task, and for sample code to get you started.
Then you can make the REST API calls with Invoke-RestMethod in a loop from PowerShell.
I'm currently building an AIR file uploader designed to handle multiple, very large files. I've played around with different methods of breaking a file up into chunks (100 MB) and progressively uploading each one, so that I can guard against a failed upload/disconnection, etc.
I have managed to break the file up into smaller files, which I then write to a scratch area on the disk. However, actually writing those files is quite slow and chews up a lot of processing power; my UI basically grinds to a halt while it's writing, not to mention that I'm effectively doubling the local disk space used by every file.
The other method I used was to read the original file in 100 MB chunks and store that data in a ByteArray, which I can then upload as POST data using the URLLoader class. The problem is that this way I can't keep track of the upload progress, because ProgressEvent.PROGRESS does not work properly for POST requests.
What I would like to know is whether it's possible to read the file in my 100 MB chunks and upload that data without having to create a new file, while still using the FileReference.upload() method so I can listen for all the events that method gives me. I.e. create a File() that is made up of bytes 0 to 100 MB of the original file, then call upload() on that new File.
I can post my code for both methods if that helps.
Cheers, much appreciated
I had the same problem, but we solved it in another way: we decided to write a socket connector that connects to the server (e.g. FTP/HTTP) and writes the ByteArray to the socket. We also did it in chunks of around the same size, and the biggest file we had to upload was a Blu-ray movie of around ~150GB.
I hope you got some interesting ideas from my message. If you'd like, I could share some pieces of the code with you.
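For illustration only (the answer above is about ActionScript's Socket and ByteArray, and this is not the answerer's code), here is a rough C# sketch of the same idea: stream a large file to a server socket in fixed-size chunks so the whole file never has to sit in memory. Host, port, chunk size, and the file path are placeholders, and a real implementation would add whatever framing or handshaking the server protocol expects.

    using System;
    using System.IO;
    using System.Net.Sockets;

    class ChunkedSocketUpload
    {
        static void Main()
        {
            const string host = "upload.example.com"; // placeholder
            const int port = 9000;                    // placeholder
            const int chunkSize = 1024 * 1024;        // 1 MB per write; tune as needed

            using var client = new TcpClient(host, port);
            using NetworkStream stream = client.GetStream();
            using FileStream file = File.OpenRead(@"C:\videos\movie.iso"); // placeholder path

            var buffer = new byte[chunkSize];
            long sent = 0;
            int read;
            while ((read = file.Read(buffer, 0, buffer.Length)) > 0)
            {
                // A real uploader would add whatever per-chunk framing or
                // acknowledgement the server protocol (FTP/HTTP/custom) requires.
                stream.Write(buffer, 0, read);
                sent += read;
                Console.WriteLine($"Sent {sent} of {file.Length} bytes");
            }
        }
    }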