I am running MySQL on Windows 7. I use a scheduled task to insert a record into a table after an action has occurred. However, when the scheduled task runs, nothing is inserted. I have redirected output from the "mysql" line into a log file, but the log is always empty. Running the batch file manually does cause the record to be inserted successfully. The scheduled task runs under the same user account and privileges as when I run it manually.
Has anyone seen this behavior before?
Never mind. Apparently despite being run as my account, "taskeng" doesn't know where "mysql" is. Writing the full path to the mysql executable solved it.
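For reference, the fixed line in the batch file ended up looking something like this (a sketch; the install path, credentials, and table name are placeholders, not my real ones):
"C:\Program Files\MySQL\MySQL Server 5.5\bin\mysql.exe" --user=myuser --password=mypass mydb -e "INSERT INTO task_log (ran_at) VALUES (NOW());" >> C:\logs\task.log 2>&1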
I am getting an error while running a SQL script to load data. Error is pasted below:
Preparing...
[WinError 32] The process cannot access the file because it is being used by another process: 'C:\\Users\\PRATIK~1\\AppData\\Local\\Temp\\tmpf75l0wi5.cnf'
I have tried uninstalling and installing MySQL several times but nothing is helping.
I faced the same issue while trying to run a MySQL script. I tried to find the file in the temp folder, removed it, and tried again, but it reappears in the temp folder. I could not run the script; however, I found a workaround: instead of running the script, open it and execute it in the query editor.
Just downgrade your MySQL Workbench version. In my case I downgraded from 8.0.25 to 8.0.20.
That sounds like you had already tried that script execution before and stopped it without stopping the mysqld process. So this process (which does the actual import) still holds a file lock on the temporary config file.
Try removing that file and check that all MySQL processes that you don't want are stopped. Then try again.
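On Windows you can check for leftover mysqld processes from a command prompt and then kill the stray one by PID, something like this (the PID here is just an illustration):
tasklist /FI "IMAGENAME eq mysqld.exe"
taskkill /PID 1234 /F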
It seems the actual issue is not related to MySQL itself, but to MySQL Workbench.
The error you're seeing is a generic error coming from Windows itself, not from MySQL. It's also unclear how you're running MySQL: for example, is it on localhost, in a Docker environment, or on a remote server?
It seems clear that at least two processes are trying to get an exclusive lock on that temporary file. My guess is that MySQL won't write temporary files to the user folder we're seeing (with your username Pratik).
On Windows, MySQL checks in order the values of the TMPDIR, TEMP, and TMP environment variables. For the first one found to be set, MySQL uses it and does not check those remaining. If none of TMPDIR, TEMP, or TMP are set, MySQL uses the Windows system default, which is usually C:\windows\temp.
Something you can do is change your MySQL configuration so it uses a specific temporary path that you set, restart MySQL, and retry the query. If the error now contains your new temporary path, you've isolated the issue: it is indeed a MySQL problem. If you keep seeing the old user Temp path, you've isolated the issue to MySQL Workbench.
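For example, a minimal my.ini change for this test might look like the following (the path is just an example; make sure the directory exists and the MySQL service account can write to it):
[mysqld]
tmpdir=C:/mysql-temp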
An alternative approach would be to run the same query from another MySQL client, for example the command-line client mysql; and see if you're getting the same error.
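A sketch of what that would look like, with placeholder credentials and file names:
mysql --user=username -p your_database < your_script.sql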
Probably the simplest approach would be to try the queries with DBeaver, another MySQL client, and use the result to isolate the issue to either the MySQL server itself or MySQL Workbench.
This is a common issue with upgraded versions of MySQL Workbench. Try using Open Script instead of Run Script; that seems to clear up the issue.
I've found that it was already reported in the official bug tracker: https://bugs.mysql.com/bug.php?id=104841.
I've just checked, and it's still present in MySQL Workbench 8.0.30.
Workaround
Do not try to open the SQL file from the toolbar.
Instead, go to Server > Data Import:
select Import from Self-Contained File
select your target schema
then click Start Import (the button at the bottom right)
I have a SQL Server 2008 Job that backs up a database, then zips that backup and moves the zipped file. My job runs fine until it gets to the step that calls WinZip, which executes:
c:\program files (x86)\winzip v19.5\winzip32.exe
-m \\RemoteShare\RestrictedFolder\dbBack.zip
x:\SQLInstanceFolder\BackupFolders\dbBack.bak
The job neither completes nor fails; it just stops moving forward. It will generate the dbBack.bak file and create the dbBack.zip file in the remote location, but it won't proceed past there. It seems to be behaving like it is waiting on a pop-up confirmation, but I don't see one when I log in to the console or run the zip from the command line.
I've tried adding the -ybc flag to automatically confirm or skip any prompts, but it didn't seem to do anything; the process still didn't complete. I've even tried redirecting the process's output with >, but it won't even write my log file.
This is a secured system and infrastructure, but I'm fairly certain I'm not being blocked by permissions. The SQL Server service account that runs the job has access to the folders it needs, and it can run the winzip32.exe process. This process ran fine before, but we had to upgrade WinZip this past weekend (to 19.5), and that's when it stopped working properly. We aren't able to roll back to the previous version (10).
Does anyone have any idea on what could be stopping my process or how to make it proceed?
I think I discovered the problem. It turns out we are using the GUI version of WinZip and calling the executable from the command line. Even though we can't see the GUI, it's still there. So the prompt to confirm our compression is still there in the program's workflow; we just can't see it and thus can't confirm it. And the confirmation flags don't work with the GUI version.
My workaround involved logging in to my SQL server as our service account and running a WinZip operation. When it completed and gave me the Add Complete prompt, I checked Do not display this dialog in the future and clicked OK. This will suppress that prompt when the service account runs its Job.
If someone changes the service account, we'll have to do this again, so our ultimate solution will be to install the WinZip Command Line Plugin. Hopefully, when that's done, we won't have to worry about this.
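For reference, once the add-on is installed, the equivalent call should look something like the sketch below, using its wzzip utility in place of winzip32.exe (we haven't converted our job step yet, so treat this as an assumption):
wzzip -m \\RemoteShare\RestrictedFolder\dbBack.zip x:\SQLInstanceFolder\BackupFolders\dbBack.bak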
But it works now. :-)
So, a little history on this issue. I had to deploy something into prod that had code from preprod, so I commented out the new line, but I missed a character, which caused the job to fail that night. The next night I fixed the SQL in the SSIS job: same error. No matter how many times I deployed, same error.
So, one of my coworkers decided to go into the Integration Services catalog, delete the old catalog, and redeploy everything.
The next night, I got a new error saying that the environment variables are not set. So I configured the job and pointed it to what I assumed was the correct environment variable. Same error. Tonight I realized that the environment variable it was calling had been renumbered. So I renamed the job and recreated it manually, pointing the environment reference to the new environment variables.
Now, when my job tries to connect, it tells me that the username fails to log in.
I'm guessing that the issue is that when the config file was recreated, it was recreated minus the password. I'm trying to find out how to check and how to deploy my packages properly.
So, we finally managed to fix the issue.
Turns out that by dropping the project and redeploying we lost the environment variables.
I had to configure the project to use the environment variable. I then had to go to each job step and, under its configuration, also check the box to use environment variables.
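If anyone else needs to verify what actually made it into the catalog after a redeploy, the SSISDB catalog views can be queried directly, something like this sketch (the server name is a placeholder; values of sensitive variables come back as NULL):
sqlcmd -S MYSERVER -E -Q "SELECT e.name AS environment_name, v.name AS variable_name, v.sensitive, v.value FROM SSISDB.catalog.environments e JOIN SSISDB.catalog.environment_variables v ON v.environment_id = e.environment_id"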
I'm trying to get mysqldump to do backups to a .sql file automatically. I have been reading that I need to use cron jobs or Windows Task Scheduler; the problem is that I can't find anything online that shows me how to do it.
To do the backups I'm using cmd with the following commands:
mysqldump --user username --password=123 databtable > backup.sql
This command works perfectly; it does create the .sql file. But how do I automate it so that it runs the backup at a set interval?
Hopefully you can help me and thank you so much!
You should use the schtasks command in Windows.
Command syntax details are available here: https://technet.microsoft.com/en-us/library/cc772785(v=ws.10).aspx
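For example, to run a backup script every day at 2 AM (the task name and script path are placeholders):
schtasks /create /tn "MySQLBackup" /tr "C:\scripts\mysql_backup.bat" /sc daily /st 02:00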
You could also use the Task Scheduler application in Windows if you like GUIs:
Create a new basic task
Set the Trigger (run daily, weekly, etc)
Set the Action (what program to run). Be sure to include only the executable in the Program field, and put the command arguments in the Add Arguments field (a small batch file, like the sketch below, works well as the program here).
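A minimal sketch of such a batch file, reusing the command from the question (the mysqldump install path and backup directory are assumptions; adjust them to your setup):
@echo off
rem Use the full path to mysqldump, since scheduled tasks may not share your PATH.
rem Credentials and database name are the placeholders from the question.
"C:\Program Files\MySQL\MySQL Server 8.0\bin\mysqldump.exe" --user=username --password=123 databtable > C:\backups\backup.sql 2>> C:\backups\backup_errors.log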
You probably want to set your task to 'Run whether the user is logged on or not'. This is achieved by modifying the task, and adjusting the Security Options on the General Tab of the task.
When troubleshooting your task, use the History tab for info regarding job failures.
An SSIS package loops through input files. For each file, a flat-file parse adds records to a DB table, then the file is renamed/moved for archiving. After all the files are processed, the package calls a sproc to delete all year-old records.
The package runs OK from Visual Studio. Put it in the SSIS package store and run it from there: no problem.
Created a SQL Agent job to run the package. The job does something for about five minutes, announces it was successful, but there are no new records in the DB and no renaming of the input files.
The package uses a dedicated login for SQL Server privileges. The job runs as HOSTNAME-SVC, which has read/write privileges on the input directory and the archive directory.
Have you set up logging for the package? You could add a script task to the For-Each Loop Container that runs a Dts.Events.FireInformation command during each loop. This could help you track the file name it finds, the number of loops it does, how long each loop takes, etc. You could also add a logging step at the end so that you know it is at least exiting the For-Each Loop Container successfully.
If you find that the package is running successfully but not looping through any files at all, then you may want to test using a simpler package that reads one file only and loads it into a staging table. If that works, then go to the next step of looping over all the files in the directory and only importing the one file over and over again. If that works, then go to the next step of changing the file connection to match the file that it finds in the For-Each Loop Container file enumerator task.
If the package isn't looping over any files and you can't get it to see even the one file you tested loading from the job, then try creating a proxy account with your credentials and running the job as the proxy account. If that works, then you probably have a permissions issue with your service account.
If the package doesn't import anything even with the proxy account, then you may want to log into the server as the service account and try to run the SSIS package in BIDS. If that works, then you may want to deploy it to the server and run the package from the server (which will really use your machine, but at least it uses the SSIS definition from the server). If this works, then try running the package from the Agent.
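If you have the package's .dtsx file on the server, a quick sketch of that last test from a command prompt, with errors, warnings, and informational events reported to the console (the path is a placeholder):
dtexec /F "C:\packages\MyPackage.dtsx" /Rep EWI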
I'm not sure I fully understand. The package has already been thoroughly tested under several Windows accounts, and it does find and rename all the files.
Under the Agent, it does absolutely nothing visible, but takes five minutes to do it. NO permissions errors or any other errors. I didn't mention that an earlier attempt DID get permissions errors because we had failed to give the service account access to the input and output directories.
I cannot log in as the service account to try that because I do not have a password for it. But sa is the job owner, so it should be able to switch to the service account, and the access errors we got ten days ago show that it can. The package itself has not changed in those ten days. We just deleted the job in order to do a complete "dress rehearsal" of the deployment procedure.
So what has changed, I presume, is some detail in the deployment procedure, which unfortunately was not in source control at the time it succeeded.
It seems to be something different about the permissions. We made the problem go away by allowing "everyone" to read the directory on the production server. For some unknown reason, we did not have to do that on the test server.
When the job tried to fetch the file list, instead of getting an error (which would be logged) it got an empty list. Why looping through an empty list took five minutes is still a mystery, as is the lack of permissions. But at least what happened has been identified.
I had a similar problem. I was able to figure out what was happening by setting the logging option of the SQL Server Agent job.
Edit the step in the job that runs the package, go to the Logging tab, and pick "SSIS log provider for SQL Server". For the configuration string, I picked (using the drop-down) the OLE DB connection that was in the package; it happens to connect to the SQL Server in question.
I was then able to view more details in the history of that job, and confirmed that it was not finding files. By changing permissions on the directory to match the SQL Server Agent account, the package finally executed properly.
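Since that log provider writes to the connection's database, you can also read the events directly from the dbo.sysssislog table (SQL Server 2008 and later; the server and database names here are placeholders):
sqlcmd -S MYSERVER -E -d MyDatabase -Q "SELECT TOP 50 event, source, message, starttime FROM dbo.sysssislog ORDER BY starttime DESC"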
Hope this helps.
You may want to turn logging off after you resolve your issue, depending on how often your package will run and how much information logging provides in your case.
Regards,
Bertin