What does STATUS_OBJECT_NAME_COLLISION from a samba share really mean, and could it result in no files being present at all? - samba

I have an issue writing files to a Samba share. We don't receive any failure error while writing the files, but a second later, when we check from a different process, no files have been written. The problem happens sporadically for about 5 to 10 minutes at a time, and then goes away.
The only clue we have is from Samba's logs, which contain STATUS_OBJECT_NAME_COLLISION errors. My understanding is that this means our software is trying to create a new file over a file that already exists. But what I don't understand is why, then, I see no files in that location at all after the process concludes. Could this error mean something else? Could it be caused by the configuration of the file share somehow?
Thank you.

The code STATUS_OBJECT_NAME_COLLISION typically indicates an attempt to create a file that already exists while the overwrite option was not specified in the create request.
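As a rough local analogue (this is Python against the local filesystem, not actual SMB client code): an SMB2 CREATE with the FILE_CREATE disposition fails with STATUS_OBJECT_NAME_COLLISION when the name already exists, much like os.open with O_CREAT | O_EXCL raises FileExistsError:

```python
import os
import tempfile

def create_exclusive(path):
    # Mirrors an SMB2 CREATE with disposition FILE_CREATE: fail if the
    # name already exists instead of overwriting it.
    try:
        fd = os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
    except FileExistsError:
        # The server-side analogue of this is STATUS_OBJECT_NAME_COLLISION.
        return False
    os.close(fd)
    return True

demo = os.path.join(tempfile.mkdtemp(), "report.txt")
first = create_exclusive(demo)   # file did not exist yet, so this succeeds
second = create_exclusive(demo)  # same name again: collision
```

If your software opens files in this create-new mode rather than an overwrite mode, a leftover file with the same name would produce exactly this status.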

Is your software renaming any files on the destination?


BIRT connection issue

Please let me preface by saying I know absolutely nothing about BIRT. I have never worked with it but find myself having to diagnose an issue with our use of it. Doing so, I've become stumped.
Background:
There have been no code changes related to any of our BIRT files in years
We have all versions locked to something specific, so we didn't accidentally roll forward to a newer, incompatible version
We're running in a Docker container now (docker-compose specifically). This is the closest thing to a smoking gun I see -- perhaps when we migrated, some 'done directly on the server' change was left behind? Environment variable, manually downloaded JAR, something like that? But I don't even know what to check, and I cannot access the old version to compare differences.
The problem:
When visiting a URL such as http://localhost/birt/frameset?__report=something.rptdesign&__format=pdf&skey=something&__title=something&Company_id=something we get an error:
Multiple exceptions occurred.
What I have tried
Looking into the webapps/birt/logs/ReportEngine_*.log file, I see the following:
org.eclipse.birt.data.engine.odaconsumer.ConnectionManager openConnection
SEVERE: Unable to open connection.
org.eclipse.birt.report.data.oda.jdbc.JDBCException: Missing properties in Connection.open(Properties).
Elsewhere, I've seen that adding missing JARs to birt/webapps/WEB-INF/lib can address such issues. I tried this with both mysql-connector-java-8.0.26.jar and org.eclipse.birt.report.data.oda.jdbc-4.8.2.jar, but it did not change the behavior. I suspect my error is unrelated to that, but it seemed worth a try.
I'm sure that error is correct -- something is missing from the Properties in Connection.open(Properties) -- but I do not know BIRT well enough to know where it looks for those properties, how to feed it what it needs, or how to determine what's missing.
I see that we have a DB.properties file in the parent folder of our rptdesign files like so:
|- something/
|---- something.rptdesign
|- DB.properties
This file contains just a JDBC mysql string like foo.bar.com=jdbc:mysql://something:3306/something
It looks like every .rptdesign file has the same connection information in it, but also contains a username/password. I am able to connect to the database using those credentials.
Any help or other ideas are very appreciated. I can try to provide whatever details are needed in case something isn't clear. Unfortunately, I don't know enough to know what that might be.

Mysterious MySQL error stops local files from being loaded into tables: "file request rejected due to restrictions on access" (error 2068)

I am a new MySQL user trying to follow the introductory tutorial from Oracle.
I was unable to load data into a table from a text file (this step of the tutorial).
When I run this line (where <path> is a file path):
LOAD DATA LOCAL INFILE "<path>/blob.txt" INTO TABLE blob;
I get the error:
ERROR 2068 (HY000): LOAD DATA LOCAL INFILE file request rejected due to restrictions on access.
I have tried:
Placing blob.txt into the directory /usr/local/mysql-8.0.21-macos10.15-x86_64/data and using the full path. I used this directory because the variable datadir is /usr/local/mysql/data.
Placing blob.txt into the aforementioned directory and using only the file name instead of the full path.
Using and not using LOCAL
Setting secure-file-priv='' by creating ~/.my.cnf as described here.
Setting local-infile=1.
Granting the FILE permission.
Based on one cryptic comment on a different question, I tried changing file permissions. I ran chmod 777 on the file and its parent directory. But perhaps this needs to be changed for all of the parent directories up the tree, which would make it an unfeasible solution?
Here's where it gets very strange:
LOAD DATA LOCAL INFILE "blerga blerga blerga bloo" INTO TABLE blob;
returns exactly the same error. That is, it doesn't seem to matter what path I put there. It doesn't even matter whether this is a real path to a real file.
After a few hours of mucking around, I was able to find one thing that worked: placing the files inside the folder containing the tables of my database:
/usr/local/mysql-8.0.21-macos10.15-x86_64/data/dining/blob.txt
(Dining is the name of the database.) Then loading the data works just fine.
So I am left with wondering:
Is this, in fact, the "right" way to do this? Is it safe to be mucking around inside this directory?
I'm guessing that this whole problem arose from a file permissions issue. Is that right? I don't really understand this. Is it something like: the server needed access to files on the client side (both of which are on my own computer)?
Is there a "correct" way to make it possible to load files from elsewhere in my computer into a table? And if so, are there bad security implications of doing this "in real life" -- with actual servers and clients?
Strangely enough, there quite literally does not seem to be any discussion of this error on the internet, aside from that one lonely comment linked above. There is a brief listing of the error code on the Oracle website, but I haven't found so much as a GitHub comment about it.
Is this, in fact, the "right" way to do this? Is it safe to be mucking around inside this directory?
No. You should not go into any internal data structure and mess around with files and directories. A normal user can't do that anyway, only the "root" user can.
I'm guessing that this whole problem arose from a file permissions issue. Is that right? I don't really understand this. Is it something like: the server needed access to files on the client side (both of which are on my own computer)?
Yes, that is the case. Keep in mind that the MySQL server runs under its own user account and cannot access all the files on the host. Usually that is not a problem, since the server only works inside its assigned data directory.
Is there a "correct" way to make it possible to load files from elsewhere in my computer into a table? And if so, are there bad security implications of doing this "in real life" -- with actual servers and clients?
The LOCAL keyword specifies where the file is read from: without it, the path is interpreted on the server's host; with it, the file is read on the host where the mysql client is running and sent to the server. Usually you use LOCAL to load a file from the client's machine.
So you should/must use the LOCAL keyword in your query. You do not need the FILE privilege because you are using the LOCAL keyword. See the LOAD DATA documentation:
Using LOCAL is a bit slower than letting the server access the files directly, because the file contents must be sent over the connection by the client to the server. On the other hand, you do not need the FILE privilege to load local files.
This also means you don't need to change the secure-file-priv setting.
Since you use LOCAL INFILE, you need the local-infile setting enabled. However, you would get a different error message if that setting were disabled.
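As a sketch (section names are standard, but the file path and whether you need both lines depend on your install), enabling LOCAL loads in an option file such as ~/.my.cnf looks like this:

```ini
# Server side: allow LOAD DATA LOCAL requests.
[mysqld]
local_infile = 1

# Client side: permit the mysql client to send local files to the server.
[mysql]
local-infile = 1
```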

SSIS - File system task, Create directory error

I got an error after running a SSIS package that has worked for a long time.
The error was thrown in a task used to create a directory (like this http://blogs.lessthandot.com/wp-content/uploads/blogs/DataMgmt/ssis_image_05.gif) and says "Cannot create because a file or directory with the same name already exists", but I am sure no directory or file with that name existed.
Before throwing the error, the task created an extensionless file named like the expected directory. The file has a modified date more than 8 hours earlier than its created date, which is weird.
I checked the date in the server and it is correct. I also tried running the package again and it worked.
What happened?
It sounds like some other process or person made a mistake in that directory and created a file that then blocked your SSIS package's directory-create command; the problem is likely not within your package.
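The failure mode is easy to reproduce locally: a plain file with the directory's intended name makes the create fail. A minimal sketch (Python against the local filesystem, purely to illustrate the OS-level behavior; "DailyExtract" is a made-up name):

```python
import os
import tempfile

base = tempfile.mkdtemp()
target = os.path.join(base, "DailyExtract")  # hypothetical directory name

# Simulate the stray file observed before the failure: a plain,
# extensionless file with the same name the package wants to use.
with open(target, "w") as f:
    f.write("")

try:
    os.mkdir(target)          # the "create directory" step
    blocked = False
except FileExistsError:       # the same-named file blocks the create
    blocked = True
```

The point is only that a same-named plain file makes the directory create fail, which matches the error text SSIS reported.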
Did you look at the security settings of the created file? It might have shown an owner that wasn't the credentials your SSIS package runs under. That won't help if you have many packages or processes that all run under the same credentials, but it might provide useful information.
What was in the file? The contents might provide a clue how it got there.
Did any other packages/processes have errors or warnings within a half day of your package's error? Maybe it was the result of another error that you could locate through the logs of the other process.
Did your process fail to clean up after itself on the last run?
Does that directory get deleted at the start of your package run, at the end of your package run, or at the end of the run of the downstream consumer of the directory contents? If your package deletes it at the beginning, then anything that slows the delete could create a race condition that normally resolves satisfactorily (the delete finishes before the create starts) but once in a while goes the wrong way.
Were you (or anyone) making a copy or scan of the directory in question? Sometimes copy programs (e.g. FTP) or scanning programs (antivirus, PII scans) make a temporary copy of a large item being processed (e.g. that directory), and maybe one got interrupted and left the temp copy behind.
If it's not repeatable then finding out for sure what happened is tough, but if it happens again try exploring the above. Also, if you can afford to, you might want to increase logging. It takes more CPU and disk space and makes reviewing logs slower, but temporarily increasing log details can help isolate a problem like that.
Good luck!

What does Error 3112 indicate when compacting an MDB file?

What does Error 3112 indicate when compacting an MDB file?
The Error description is "Records can't be read; no read permission on 'xyz123.mdb'"
There is a known issue with the Compact function on some versions of Access MDBs. Is the solution in this case to run the Microsoft utility JETCOMP.EXE on this file?
What are the other possible causes of this error?
This could well be a sign of corruption. I would suggest that you treat it like that for now: try a compact/repair and also a decompile, and see if that snaps it out of it.
This is of course assuming that you do have permissions on the database. You might also want to check which workgroup file you are "joined" to at the moment, in case the above does not work.
I can't say the error pertains to any one issue I can think of. It's possible that some other routine or part of the application has the file open and did not close it.
I assume this error is occurring for only one application?
Try creating a blank database file and then importing everything into it. Does the compact and repair work now? This sounds more like a damaged or corrupted file.

What would lead to an "Unknown object in backup file" problem when restoring a backup of a MySQL database?

Unfortunately, the problem is not more specific than that. I've found a few examples of people reporting similar problems by doing a Google search, but I can't find the part of the restore that is actually causing the problem, which might help me track it down on my own.
Suggestions for either resolving this problem or being able to track down the root cause would be appreciated.
There's one bug logged at bugs.mysql.com that references the error you describe:
"Bug #37253 Unable to restore backup file containing BLOBs"
The solution described in that bug is to increase the max_allowed_packet in the MySQL server configuration. The user confirmed that raising the value to 100M allowed him to restore his database.
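For reference, that setting goes in the server's option file (the path varies by platform; 100M is simply the value reported to work in that bug):

```ini
[mysqld]
max_allowed_packet = 100M
```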
ANOTHER FIX
I also had this problem! The answers online didn't seem to help (max_allowed_packet and others)
Here's what fixed mine:
Instead of running the Restore function, I imported through MySQL Migration Toolkit (installed with GUI Tools on Windows).
The Migration Toolkit also failed, but had descriptive errors in the Log on the final page. In my case, it was a few incorrect Date fields in my data (usually "0000-00-00") that wouldn't migrate correctly.
Fixing these dates in my tables solved the Restore problem.
Hope this helps somebody else out there.
I have had something similar in the past; it had to do with how the backup was made. I think some applications put invalid comments in the backup files, which cause errors.
My suggestion, if you are stuck trying to restore those files, is to incrementally restore sections of the backup file to find what is causing the problem. In my case, as I recall, there was some text in the file that was inconsequential and could be removed.
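One way to bisect a dump like that, sketched in Python (this assumes a mysqldump-style file where each table's section begins with a "-- Table structure for table" comment; adjust the marker to your dump's actual format):

```python
import re

MARKER = "-- Table structure for table"  # mysqldump section header (assumption)

def split_dump(dump_text):
    """Split a SQL dump into (preamble, [per-table sections]) for bisection."""
    positions = [m.start() for m in re.finditer(re.escape(MARKER), dump_text)]
    if not positions:
        return dump_text, []
    preamble = dump_text[:positions[0]]
    sections = [
        dump_text[start:end]
        for start, end in zip(positions, positions[1:] + [len(dump_text)])
    ]
    return preamble, sections
```

You can then restore the preamble plus one section at a time until you hit the section that triggers the error.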