SSIS - Recreating job caused it to lose access to configuration file

A little history on this issue: I had to deploy something into production that contained code from preprod, so I commented out the new line, but I missed a character, and that caused the job to fail that night. The next night I fixed the SQL in the SSIS job; same error. No matter how many times I deployed, same error.
So, one of my coworkers decided to go into the Integration Services Catalog, delete the old catalog, and redeploy everything.
The next night, I got a new error that the environment variables were not set. So, I configured the job, pointing it to what I assumed was the correct environment variable. Same error. Tonight I realized that the environment variable it was referencing had been renumbered. So, I renamed the job and recreated it manually, pointing the environment reference to the new environment variables.
Now, when my job tries to connect, it tells me that login fails for that username.
I'm guessing that when the config file was recreated, it was recreated minus the password. I'm trying to find out how to check that, and how to deploy my packages properly.

So, we finally managed to fix the issue.
It turns out that by dropping the project and redeploying, we lost the environment variables.
I had to configure the project to use the environment variables. Then, for each job step, under Configure, we also had to check the checkbox to use environment variables.
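If you want to verify this state without clicking through the UI, here is a minimal T-SQL sketch, assuming SQL Server 2012+ and the SSISDB catalog (the join is simplified and ignores environment folders):
USE SSISDB;
-- List each project's environment references and the variables they resolve to,
-- to confirm a redeploy did not orphan the reference.
SELECT p.name AS project_name,
       r.environment_name,
       v.name AS variable_name,
       v.sensitive
FROM catalog.projects p
JOIN catalog.environment_references r ON r.project_id = p.project_id
LEFT JOIN catalog.environments e ON e.name = r.environment_name
LEFT JOIN catalog.environment_variables v ON v.environment_id = e.environment_id;
Note that sensitive variable values (such as passwords) are not readable back out of the catalog, which is consistent with the password being lost on redeploy.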

Related

Upgrade from ejabberd 2.1.9 to latest (22.10)

I tried to search the documentation, but I still have a lot of doubts...
I'm running ejabberd version 2.1.9 on an old Debian server (5.0.8) with more than 500 users...
Now I was asked to update to the latest version, but I need some advice;
in the documentation I saw there are specific upgrade instructions between versions, including from 2.1.x to 16.02.
If I upgrade to version 16, can I jump directly to 22.10, or do I have to upgrade through every intermediate release?
Or, as an alternative, is it possible to export users, passwords, shared roster, etc. and restore them on a freshly installed 22.10?
@badlop
Thank you very much for the detailed info :-)
But I'm stuck with the database backup... If I use the plain-text backup, "ejabberdctl dump" gives me errors importing the table "pubsub_node" on the new server:
** Table pubsub_node already exists on ejabberd@localhost, just entering data
Problem 'error {case_clause,
{aborted,
{bad_type,
{pubsub_item,
{"751ca223b3f58d185f3afef05d0e3d4eb236c376",218},
{{1317,45407,740776},{"donkeykong","acme.com",[]}},
{{1317,133197,815914},
{"donkeykong","acme.com","stc"}},
[{xmlelement,"metadata",
[{"xmlns","urn:xmpp:avatar:metadata"}],
[{xmlelement,"info",
[{"id","751ca223b3f58d185f3afef05d0e3d4eb236c376"},
{"type","image/png"},
{"bytes","16541"},
{"width","96"},
{"height","94"}],
[]}]}]}}}}' occurred executing the command.
Stacktrace: [{ejabberd_admin,load_mnesia,1},
{ejabberd_ctl,call_command,3},
{ejabberd_ctl,try_call_command,3},
{ejabberd_ctl,process2,3},
{ejabberd_ctl,process,1},
{rpc,'-handle_call_call/6-fun-0-',5}]
I tried editing the dump and removing the pubsub_item, but the same problem shows up on the next pubsub_item.
If I try to use "ejabberdctl backup", the server replies with this error:
Can't store backup in "/tmp/jabba.backup" at node ejabberd@jabba: {"Cannot prepare checkpoint (replica not available)",
[temporarily_blocked,
{{1670,
255465,
408029},
ejabberd@jabba}]}
even though I have only one node.
The dump import error comes up even if I try a small jump from 2.1.9 to 2.1.13.
Any suggestions?
with more than 500 users
An ejabberd server with around 500 online users? That's a small server; I guess you don't even use an SQL database, so that's one less thing to worry about.
An ejabberd deployment is composed of:
the code (source or binary)
the configuration file (or files)
the mnesia internal database, which is stored in the mnesia spool dir (see the system install documentation)
the SQL database (if you configure one)
log files (only useful for your own interest and consultation)
When upgrading the ejabberd code, it is usual that the release notes mention some changes in the configuration and some changes in the SQL schemas. The changes in the mnesia internal database are implemented inside ejabberd and applied automatically when needed.
do I have to upgrade through every intermediate release?
In general, no. When jumping several releases, you just read the release notes and apply the configuration and SQL schema changes of all the intermediate releases.
But in your case... as it's a big jump, I recommend you not touch the production server yet. First test the upgrade on another machine (your personal machine, another unused server, a laptop, a Docker container...), so you learn how to do it perfectly without annoying the users.
There are many ways, but if I were you, or if I were sitting in a chair next to you, this is how I would do it. I would be optimistic and try to jump directly from 2.1.9 to 22.10, but slowly, using a temporary server for testing the process and learning:
1. On a testing machine, install the desired ejabberd version (22.10 or whatever). It should work perfectly, as it's empty and has the default configuration. Notice most XMPP clients allow you to log in to an account whatever@example.com specifying the IP address and port of the server: in your case you will have to specify the IP address of the new ejabberd server, as the XMPP domain doesn't match the DNS name.
2. Obviously, the configuration in the new server will lack customizations that are essential for you (the served domain, which accounts are admins, certificate files, port numbers, custom modules...).
3. Copy the old configuration file to the new server. Notice they use different formats (ejabberd.cfg in Erlang format... and ejabberd.yml in YAML format).
4. Manually and slowly apply one of your customizations in the new configuration file. Restart ejabberd, check that it starts and works correctly, and then repeat with another customization.
5. Now you have a new ejabberd server running, with all (or most) of your desired configuration. Two things are missing: the users' data, and of course replacing the old ejabberd with the new ejabberd.
6. On the production ejabberd server, export the mnesia database using "ejabberdctl backup" and "ejabberdctl dump" (better to have two alternatives, in case one doesn't work). Copy those files to the machine that has the new ejabberd server. (See the command sketch after this list.)
7. In the new ejabberd, run restore specifying the binary backup (or load specifying the text dump). With some luck, this will end correctly.
8. Restart ejabberd. It will notice that the mnesia tables use very old schemas, and will automatically update them. This may take a while. With some luck it will end correctly.
If steps 7 or 8 fail, and you are lucky enough to know what username or what data is problematic, you can try to delete or "fix" that in the text dump file before loading it.
9. Once you are happy with the new ejabberd server (the new configuration looks great, has all your customizations, and all the users are correctly imported), then it's time to replace the old ejabberd with the new one.
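For steps 6 and 7, a minimal sketch of the commands involved (the file paths are just examples):
# On the old 2.1.9 server: make both a binary backup and a plain-text dump
ejabberdctl backup /tmp/ejabberd-2.1.9.backup
ejabberdctl dump /tmp/ejabberd-2.1.9.dump
# Copy both files to the new machine, then on the new server try:
ejabberdctl restore /tmp/ejabberd-2.1.9.backup
# or, if the binary restore fails, fall back to the text dump:
ejabberdctl load /tmp/ejabberd-2.1.9.dump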
Alternatively, you can try to upgrade from 2.1.9 to 16.02, and later from 16.02 to 22.10.
In any case, general recommendations:
take it slowly, because there are many steps
be patient, because you may face small problems that require fixing before continuing to the next step
be kind to yourself, because you have never done this before
annotate every major problem you face and how you fixed it (in case you hit the same problem again today or in the coming days, or somebody else in your organization eventually does)
be sure this can be done (in the worst case, with a progressive upgrade from one version to the next :)

SQL Server Job Using WinZip Command Line

I have a SQL Server 2008 Job that backs up a database, then zips that backup and moves the zipped file. My job runs fine until it gets to the step that calls WinZip, which executes:
c:\program files (x86)\winzip v19.5\winzip32.exe
-m \\RemoteShare\RestrictedFolder\dbBack.zip
x:\SQLInstanceFolder\BackupFolders\dbBack.bak
The job neither completes nor fails; it just stops moving forward. It generates the dbBack.bak file and creates the dbBack.zip file in the remote location, but it won't proceed past there. It behaves as if it is waiting on a pop-up confirmation, but I don't see one when I log in to the console or run the zip from the command line.
I've tried adding the -ybc flag to automatically confirm or skip any prompts, but it didn't seem to do anything; the process still didn't complete. I've even tried to redirect (>) the output of the process, but it won't even write my log file.
This is a secured system and infrastructure, but I'm fairly certain I'm not being blocked by a permission: the SQL Server service account that runs the job has access to the folders it needs, and it can run the winzip32.exe process. This process ran fine until we upgraded WinZip this past weekend (to 19.5), which is when it stopped working properly. We aren't able to roll back to the previous version (10).
Does anyone have any idea what could be stopping my process, or how to make it proceed?
I think I discovered the problem. It turns out we are using the GUI version of WinZip and calling the executable from the command line. Even though we can't see the GUI, it's still there. So the prompt to confirm our compression is still there in the program's workflow; we just can't see it and thus can't confirm it. And the confirm flags don't work with the GUI version.
My workaround involved logging in to my SQL Server as our service account and running a WinZip operation. When it completed and gave me the Add Complete prompt, I checked "Do not display this dialog in the future" and clicked OK. This suppresses that prompt when the service account runs its job.
If someone changes the service account, we'll have to do this again, so our ultimate solution will be to install the WinZip Command Line Plugin. Hopefully, once that's done, we won't have to worry about this.
But it works now. :-)
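For reference, once the Command Line Support Add-On is installed, the job step would call its console zip tool instead of the GUI executable. A hypothetical sketch, assuming the add-on's wzzip tool accepts the same -m (move files into the archive) option as our original winzip32.exe call:
wzzip -m \\RemoteShare\RestrictedFolder\dbBack.zip x:\SQLInstanceFolder\BackupFolders\dbBack.bak
Because wzzip is a true console application, it should not hang waiting on an invisible dialog.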

Inserting with MySQL while in a scheduled task?

I am running MySQL on Windows 7. I use a scheduled task to insert a record into a table after an action has occurred. However, when the scheduled task runs, nothing is inserted. I have redirected output from the "mysql" line into a log file, but the log is always empty. Running the batch file manually does cause the record to be inserted successfully. The scheduled task runs under the same user account and privileges as when I run it manually.
Has anyone seen this behavior before?
Never mind. Apparently, despite being run as my account, "taskeng" doesn't know where "mysql" is. Writing the full path to the mysql executable solved it.
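In other words, don't rely on PATH inside the batch file. A minimal sketch of the fixed line (install path, credentials, table, and log location are all placeholders):
rem Task Scheduler's environment may not include MySQL's bin directory in PATH,
rem so call mysql.exe by its full path and keep redirecting output to a log.
"C:\Program Files\MySQL\MySQL Server 5.5\bin\mysql.exe" -u myuser -pmypass mydb -e "INSERT INTO mytable (created_at) VALUES (NOW());" >> C:\logs\task.log 2>&1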

Magento Module install SQL not running

I have written a module that is refusing point blank to create the tables within my mysql4-install-1.0.0.php file... but only on the live server.
The funny thing is that on my local machine (which is a mirror of the live server, i.e. an identical file structure etc.) the install runs correctly and the table is created.
So, based on the fact that the files are the same, can I assume that it is a server configuration and/or permissions problem? I have looked everywhere and I can find no problems in any of the log files (PHP, MySQL, Apache, Magento).
I can create tables ok in test scripts (using core_read/write).
Anyone see this before?
Thanks
** EDIT ** One main difference between the two environments is that on the live server the MySQL instance is remote (not localhost). The dev server uses localhost. Could that cause issues?
Is the module that your install script is a part of installed on the live server? (XML file in app/etc/modules/; the Module List module is useful for debugging.)
Is there already a record in the core_resource table for your module? If so, remove it to set your script to re-run (see the sketch below).
Is your file named correctly? The _modifyResourceDb method in app/code/core/Mage/Core/Model/Resource/Setup.php is where this file is included/run from. Read more here
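A minimal sketch of the core_resource reset mentioned in the second point, assuming your setup resource is registered under a name like 'yourmodule_setup' (a placeholder):
-- Find your module's row, then delete it so the install script re-runs.
SELECT * FROM core_resource WHERE code LIKE '%yourmodule%';
DELETE FROM core_resource WHERE code = 'yourmodule_setup';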
Probably a permissions issue: a MySQL account used by public-facing code should have as few permissions as possible while still letting it get the job done, which generally does NOT allow for creating/altering/dropping tables.
Take whatever username you're connecting to MySQL with, and run:
SELECT User, Host
FROM mysql.user
WHERE User='your username here';
This will show you the user@host combos available for that particular username; then you can get the actual permissions with
SHOW GRANTS FOR 'username'@'host';
Do this for the two accounts on the live and development servers, which will show you what permissions are missing from the live system.
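If the live account does turn out to lack DDL rights, this is the kind of grant that would let the installer create its tables (database, user, and host names are placeholders; grant only what your security policy allows):
GRANT CREATE, ALTER, DROP, INDEX ON magento_db.* TO 'magento_user'@'app_host';
FLUSH PRIVILEGES;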
In the Admin->System->Advanced section, is your module present and enabled?
Did you actually unpack your module to the right place, e.g. app/code/local/yourcompany/yourmodule?
Do you have app/etc/modules/yourmodule.xml? I believe this could be the overlooked file giving rise to your problem.
The cache could be the culprit: if you manually deleted the core_resource row for your module in order to make the setup SQL run again, you also have to flush the cache (see below).
A likely difference between the dev and production servers is the cache settings; that would explain why you only see this in production.
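A minimal sketch of the flush, assuming the default file-based cache and running from the Magento root (System -> Cache Management in the admin panel works too):
rm -rf var/cache/*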
For me, the issue appeared because I was using Windows for development. Linux is case sensitive: in my config.xml the setup section was named in camelCase while the folder was named in all lowercase. Making them the same made the script run.

SSIS package does nothing when invoked by agent

The SSIS package loops through input files. For each file, a flat-file parse adds records to a DB table, then the file is renamed/moved for archiving. After all files are processed, the package calls a sproc to delete all year-old records.
The package runs fine from Visual Studio. Put it in the SSIS package store and run it from there: no problem.
Created a SQL Agent job to run the package. The job does something for about five minutes, announces it was successful, but there are no new records in the DB and no renaming of the input files.
The package uses a dedicated login for SQL Server privileges. The job runs as HOSTNAME-SVC, which has read/write privileges on the input directory and the archive directory.
Have you set up logging for the package? You could add a Script Task to the For-Each Loop Container that runs a Dts.Events.FireInformation command during each loop. This could help you track the file name it finds, the number of loops it does, how long each loop takes, etc. You could also add a logging step at the end so that you know it is at least exiting the For-Each Loop Container successfully.
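A hypothetical snippet of what that Script Task could contain (C#; User::FileName is a placeholder for whatever variable your For-Each enumerator maps):
// Log the current file on each iteration of the loop.
bool fireAgain = true;
Dts.Events.FireInformation(0, "ForEachLoop",
    "Processing file: " + Dts.Variables["User::FileName"].Value.ToString(),
    string.Empty, 0, ref fireAgain);
Dts.TaskResult = (int)ScriptResults.Success;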
If you find that the package is running successfully but not looping through any files at all, then you may want to test using a simpler package that reads one file only and loads it into a staging table. If that works, then go to the next step of looping over all the files in the directory while only importing that one file over and over again. If that works, then go to the next step of changing the file connection to match the file found by the For-Each Loop Container's file enumerator.
If the package isn't looping over any files and you can't get it to see even the one file you tested loading from the job, then try creating a proxy account with your credentials and running the job as the proxy account (see the sketch below). If that works, then you probably have a permissions issue with your service account.
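A minimal T-SQL sketch of setting up such a proxy (the credential name, proxy name, login, and password are placeholders):
-- Server-level credential holding your Windows login
CREATE CREDENTIAL SsisTestCred WITH IDENTITY = 'DOMAIN\youruser', SECRET = 'yourpassword';
-- Agent proxy that job steps can run under
EXEC msdb.dbo.sp_add_proxy @proxy_name = 'SsisTestProxy', @credential_name = 'SsisTestCred';
-- Allow the proxy to run SSIS package steps
EXEC msdb.dbo.sp_grant_proxy_to_subsystem @proxy_name = 'SsisTestProxy', @subsystem_name = 'SSIS';
Then set the job step's "Run as" option to the new proxy.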
If the package doesn't import anything even with the proxy account, then you may want to log in to the server as the service account and try to run the SSIS package in BIDS. If that works, then you may want to deploy it to the server and run the package from the server (which will really use your machine, but at least it uses the SSIS definition from the server). If that works, then try running the package from the Agent.
I'm not sure I fully understand. The package has already been thoroughly tested under several Windows accounts, and it does find all the files and rename all the files.
Under the Agent, it does absolutely nothing visible, but takes five minutes to do it. No permissions errors or any other errors. I didn't mention that an earlier attempt DID get permissions errors, because we had failed to give the service account access to the input and output directories.
I cannot log in as the service account to try that, because I do not have a password for it. But sa is the job owner, so it should be able to switch to the service account, and the access errors we got ten days ago show that it can. The package itself has not changed in those ten days. We just deleted the job in order to do a complete "dress rehearsal" of the deployment procedure.
So what has changed, I presume, is some detail in the deployment procedure, which unfortunately was not in source control at the time it succeeded.
It seems to be something different about the permissions. We made the problem go away by allowing "Everyone" to read the directory on the production server. For some unknown reason, we did not have to do that on the test server.
When the job tried to fetch the file list, instead of getting an error (which would have been logged), it got an empty list. Why looping through an empty list took five minutes is still a mystery, as is the lack of permissions. But at least what happened has been identified.
I had a similar problem and was able to figure out what was happening by setting the logging option of the SQL Server Agent job.
Edit the step in the job that runs the package, go to the logging tab, and pick "SSIS log provider for SQL Server"; in the configuration string I picked (using the drop-down) the OLE DB connector that was in the package, which happens to connect to the SQL Server in question.
I was then able to view more details in the history of that job, and confirmed that it was not finding files. By changing the permissions on the directory to match the SQL Server Agent account, the package finally executed properly.
Hope this helps.
You may want to turn logging off after you resolve your issue, depending on how often your package runs and how much information logging produces in your case.