SevenZipArchiveException: Invalid archive. open/read error - sevenzipsharp

I get the following error when I try to extract a zip file:
"SevenZip.SevenZipArchiveException: Invalid archive: open/read error! Is it encrypted and a wrong password was provided?
If your archive is an exotic one, it is possible that SevenZipSharp has no signature for its format and thus decided it is TAR by mistake."
Nothing works with zip files, but everything works fine with 7z files. Is it possible to extract zip files with the SevenZipExtractor?
string sourcePath = @"c:/temp/yyy.zip";
string outputPath = @"c:/temp/extracted";   // example destination folder

using (var file = new SevenZipExtractor(sourcePath))
{
    file.ExtractArchive(outputPath);
}

When I encountered this error, it turned out that the archive itself was corrupt. For example, if SevenZipCompressor stops halfway through a run, the resulting archive is corrupted, and attempting to decompress it produces exactly this error.
The fix for me was to recompress the set of files, making sure the compression ran to completion, after which the error went away and extraction worked.
So the moral is: look at the source archive and make sure the files and the archive itself aren't corrupt.
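If you want to rule the archive in or out before blaming the library, a quick integrity check outside of SevenZipSharp helps. Below is a minimal Python sketch (not the C# API from the question; the path is just the placeholder from above) that uses the standard-library zipfile module to test every member of a zip:

import zipfile

archive_path = r"c:/temp/yyy.zip"  # placeholder path from the question

try:
    with zipfile.ZipFile(archive_path) as zf:
        bad_member = zf.testzip()  # returns the first corrupt member name, or None
        if bad_member is None:
            print("Archive looks intact.")
        else:
            print("Corrupt member found:", bad_member)
except zipfile.BadZipFile:
    print("Not a valid zip file at all (the header/signature is broken).")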

I've run into the same issue recently with version 18.5.0.
Downgrading the package to 9.38.3 solved the problem for me.

For people still running into this problem: this can also happen when trying to uncompress rar5 files that have filename encryption turned on.

Related

How should one zip a large folder in Windows 10, upload it to GDrive, then unzip it?

I have a directory consisting of 22 sub-directories. Altogether, the directory is about 750GB in size and I need this data on GDrive so that I can work with it in Google Colab. Obviously uploading this takes an absolute age (particularly with my slow connection) so I would like to zip it, upload it, then unzip it in the cloud.
I am using 7zip and zipping each subdirectory using the zip format and "normal" compression level. (EDIT: Can now confirm that I get the same error for 7z and tar format). Each subdirectory ends up between 14 and 20GB in size. I then upload this and attempt to unzip it in Google Colab using the following code:
from google.colab import drive
drive.mount('/content/gdrive/')
!apt-get install p7zip-full
!7za x "/content/gdrive/My Drive/av_tfrecords/drumming_7zip.zip" -o"/content/gdrive/My Drive/unzipped_av_tfrecords/" -aos
This extracts some portion of the zip file before throwing an error. There are a variety of errors and sometimes the code will not even begin unzipping the file before throwing an error. This is the most common error:
Can not open the file as archive
ERROR: Unknown error -2147024891
Archives with Errors: 1
If I then attempt to rerun the !7za command, it may extract one or two more files from the zip archive before throwing this error:
terminate called after throwing an instance of 'CInBufferException'
It may also complain about particular files within the zip archive:
ERROR: Headers Error : drumming/yt-g0fi0iLRJCE_23.tfrecords
I have also tried using:
!unzip -n "/content/gdrive/My Drive/av_tfrecords/drumming_7zip.zip" -d "/content/gdrive/My Drive/unzipped_av_tfrecords/"
But that just begins throwing errors:
file #254: bad zipfile offset (lseek): 8137146368
file #255: bad zipfile offset (lseek): 8168710144
file #256: bad zipfile offset (lseek): 8207515648
Although I would prefer a solution in Colab, I have also tried using an app available in GDrive named "Zip Extractor", but that too throws an error and has a data quota.
This has now happened across 4 zip files, and each time I try something new it takes a long time because of the upload speeds. Any explanation of why this is happening, and how I can resolve it, would be greatly appreciated. I also understand there are probably alternatives to what I am trying to do, and those would be appreciated as well, even if they do not directly answer the question. Thank you!
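If it helps with diagnosis, one sanity check I could run is to hash the copy sitting in Drive from inside Colab and compare it against the hash of the local archive (e.g. certutil -hashfile drumming_7zip.zip MD5 on Windows); if they differ, the upload itself is corrupting the file. A minimal Python sketch, assuming the path from above:

import hashlib

def md5sum(path, chunk_size=1024 * 1024):
    # Stream the file so a multi-GB archive does not need to fit in memory.
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

print(md5sum("/content/gdrive/My Drive/av_tfrecords/drumming_7zip.zip"))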
I got the same problem.
Solved it by:
new ProcessBuilder(new String[] {"7z", "x", fPath, "-o" + dir})
Use a command-line argument array, not just the full line as one string!
Good luck!
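The same idea in Python, for anyone driving 7z from a script instead of Java: pass the arguments as a list so the space in "My Drive" is never split or shell-quoted incorrectly. A rough sketch, assuming a 7za binary on the PATH and the paths from the question:

import subprocess

archive = "/content/gdrive/My Drive/av_tfrecords/drumming_7zip.zip"
out_dir = "/content/gdrive/My Drive/unzipped_av_tfrecords/"

# Each argument is its own list element, so no shell parsing is involved.
result = subprocess.run(["7za", "x", archive, "-o" + out_dir, "-aos"],
                        capture_output=True, text=True)
print(result.returncode)
print(result.stdout)
print(result.stderr)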
Why does this command behave differently depending on whether it's called from terminal.app or a scala program?

Managing a large SPSS (*.sav) file (4.2 GB)

I have received an SPSS file from a survey fielded by another company that allegedly only contains ~1500 respondents, but the file size has somehow ballooned to 4.2 GB. My hunch is that the file is from a global survey, and the 1500 records that have been selected are from the US only, so the file carries a series of blank variables plus the metadata for those variables, possibly in multiple languages/alphabets.
I only need a subset of this data, and can likely work with it if I removed the metadata but my issue has been that I can't get the damn thing open to cut down on the number of variables. I have been using the tools at my disposal to try the following workarounds, though I'm sure there are better options:
Opening the file using PSPP (freeware SPSS) - this causes the PSPP to stop responding
Using the R command read.spss (from the foreign package) to write a .csv - this claims that the file has a duplicate variable name and won't proceed further
Using the R command spss.system.file to write a .csv - when I tried this, R spent a long time thinking as it attempted to run and was still going after a couple of hours with no apparent success.
Using the PSPP text conversion tool (https://pspp.benpfaff.org/) to create either a dictionary or a .csv file - both of these options crash after the file has completed uploading.
I've gone back to the other company to have them work on reducing the file size, but I wasn't sure if anyone else had ideas for doing either of the following:
Open the file using another program/converter that could turn it into a .csv or other similarly skinny file format
Use another program to at least read only the variable names included in the file so that I can provide the other company with the specific variables I need
The following command from PSPP should do what you need:
$ pspp-convert originalFile.sav output.csv
In case it doesn't, please provide the terminal error message.
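If all you need for the second option is the list of variable names, you can also read just the file's metadata without loading the 4.2 GB of data. A sketch in Python, assuming the pyreadstat package (pip install pyreadstat) and a placeholder filename:

import pyreadstat

# metadataonly=True reads only the header/dictionary, not the cases,
# so even a multi-GB .sav file opens quickly.
_, meta = pyreadstat.read_sav("survey.sav", metadataonly=True)

print(len(meta.column_names), "variables")
for name, label in zip(meta.column_names, meta.column_labels):
    print(name, "-", label)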

SSIS Package not reading the last row in flat file

I have an SSIS package which loads an .EXT file into my database table.
The properties in the package's Flat File Connection Manager Editor are:
Format: Ragged Right
Code Page: 1252 ANSI (Latin-I)
Text Qualifier: <None>
Header Row Delimiter: <LF>
When I preview the file before loading, I can see all the rows in the Columns and Preview tabs of the Flat File Connection Manager Editor.
But during the actual load, only the last record does not get imported into the table.
The package has been loading fine and still processes the file on a daily basis; only for two days' files was the last record not imported, and I am trying to find the root cause.
I suspected something wrong with the files, but I cannot find any differences between the working and non-working versions.
Please suggest how to resolve this, and let me know if any more information is required.
I ran into the same issue and did some research to find a solution that worked for me. Apparently the SSIS package had gone through a conversion from an earlier version at one point. When the conversion was done, the text qualifier property on the flat file connection was mangled: it had originally been <none>, but the conversion changed it to _x003C_none_x003E_. I opened the flat file connection manager and changed the text qualifier property on the General tab back to the proper value of <none>.
Credit goes to this thread for providing the answer.
I had a similar issue. My flat file didn't have any text qualifiers. When I added a text qualifier, the package ran successfully. My guess is that the file is read as text and the CRLF is not recognized on the last line.
If you can provide a sample of the data from the file, that would help.
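One quick way to test the CRLF guess above is to look at the last bytes of the file and see how (or whether) the final record is terminated. A small Python sketch, with a placeholder path standing in for the real .EXT file:

import os

path = r"C:\data\daily_feed.ext"  # placeholder for the actual .EXT file

with open(path, "rb") as f:
    f.seek(-2, os.SEEK_END)  # read the last two bytes
    tail = f.read()

if tail.endswith(b"\r\n"):
    print("Last row ends with CRLF.")
elif tail.endswith(b"\n") or tail.endswith(b"\r"):
    print("Last row ends with a bare LF or CR, not CRLF.")
else:
    print("No trailing row delimiter - the reader may drop the last record.")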

MySQL Workbench 6.1 - Error importing recordset

I'm going to be getting a new computer soon and I don't want to lose the data I have entered in my tables, so I decided to test the feature that allows you to export and import CSV files. I exported a table successfully (the data was written to a CSV file, which I opened in Microsoft Excel), but when I added a few rows in Excel and tried to import the file back into MySQL Workbench, I got the following error:
"Error importing recordset
error calling Python module function
SQLIDEUtils.importRecordsetDataFromFile"
I've searched all over for info on this, but can't find any solutions. Does anyone know what I'm doing wrong?
In Workbench, open a MySQL connection and then navigate to [Server] --> [Data Export]. There are several backup options here, including saving the data as an individual file or folder. Choose the databases you want to export, and then click [Start Export].
If you ever prefer using Excel for editing and such, then use the MySQL for Excel plugin to access MySQL databases from within Excel. However, I don't think you need it here.
To export your MySQL data, use mysqldump, which will create all the schema for you.
Excel probably added some stuff to your file and now MySQL can't understand it. The best way to find out is by comparing the file before and after the change.
That error indicates a format problem. If the file is small enough, try opening it in WordPad (or the Mac equivalent) and see if there's any difference in the formatting. It could be that the delimiting got a little messed up (this can happen especially with end-of-row markers in MySQL, I've noticed; it can also happen in Mac-to-PC handoffs). If all else fails, you could try exporting in a different format (maybe TSV) and see if that makes a difference when you add new rows.
Another reason can be the line endings used. Depending on the system and editor used to work with the csv file, the line endings might get changed. For me, MySQL supported UNIX line endings, but in the editor the line ending had been set to Mac OS 9 since I was using a Mac.
Changing it to UNIX line endings made the import work.
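If you want to fix that outside the editor, normalizing the line endings is easy to script. A minimal Python sketch, assuming the exported CSV is small enough to read in one go (the filenames are placeholders):

# Normalize CR (classic Mac) and CRLF (Windows) line endings to plain LF.
src = "export.csv"        # placeholder input filename
dst = "export_unix.csv"   # placeholder output filename

with open(src, "rb") as f:
    data = f.read()

data = data.replace(b"\r\n", b"\n").replace(b"\r", b"\n")

with open(dst, "wb") as f:
    f.write(data)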
I found that it might be due to the wrong encoding of the input file.
Using Notepad++, for example (or another similar editor), change the file encoding to UTF-8.
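The same conversion can be scripted if you'd rather not open the file in an editor. A small Python sketch; the assumption here is that Excel saved the CSV as cp1252 (Windows "ANSI"), which may not match your file:

# Rewrite a CSV as UTF-8, assuming the source was saved as cp1252 by Excel.
src, dst = "export.csv", "export_utf8.csv"   # placeholder filenames

with open(src, "r", encoding="cp1252") as fin, \
     open(dst, "w", encoding="utf-8", newline="") as fout:
    fout.write(fin.read())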

insecure string pickle error when uploading and downloading to MKS Integrity

I am getting the exception "ValueError: insecure string pickle" when attempting to run my program after creating a sandbox from MKS.
Hopefully you are still interested in helping if you are still reading this, so here's the full story.
I created an application in Python that analyzes data. When saving specific data from my program, I pickle the file. I correctly read and write it in binary and everything is working correctly on my computer.
I then used py2exe to wrap everything into an .exe. However, in order to get the pickled files to continue to work, I have to physically copy them into the folder that py2exe generates. So my pickle files sit inside the .exe's folder and everything works correctly when I run the .exe.
Next, I upload everything to MKS (an ALM, here is the Wikipedia page http://en.wikipedia.org/wiki/MKS_Integrity).
When I proceed to create a sandbox of my files and run the program, I get the dreaded "insecure string pickle" error. So I am wondering whether MKS screwed something up or added end-of-line characters to my pickle files. However, when I compare the contents of the MKS pickle file and the one I created before I uploaded the program to MKS, there are no differences.
I hope this is enough detail to describe my problem.
Please help!
Thanks
Have you tried adding your pickled files to your Integrity sandbox as binaries and not text?
When adding the file, on the Create Archive interface, select the options button, and change data type to "Binary" from "Auto". This will maintain any non-text formatting within the file.
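For background, Python 2's "insecure string pickle" ValueError typically shows up when a text-mode (protocol 0) pickle has had its bytes altered, for example by newline translation when the file is stored and retrieved as text, which fits the binary-vs-text explanation above. Writing and reading the pickle in binary mode with a binary protocol keeps the byte stream unambiguous; a minimal sketch (the filename and data are placeholders):

import pickle

data = {"example": [1, 2, 3]}  # placeholder data

# Write in binary mode with a binary protocol so no text/newline
# translation (or a text-mode check-in) can mangle the byte stream.
with open("results.pkl", "wb") as f:
    pickle.dump(data, f, protocol=2)

with open("results.pkl", "rb") as f:
    restored = pickle.load(f)

print(restored)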