SSIS packages transition to another machine

I have a couple of SSIS packages that reside on my local drive at work and run as scheduled jobs. I will be leaving my current position in a couple of days, and would like to help my department transition smoothly.
My question is: for the person who will be picking up my duties, how can I transfer these dtsx packages to their computer, short of recreating the packages from scratch on their local drive? If I'm not mistaken, SQL Agent runs under the user's login account? After my last day my account will be deleted and none of the jobs will run.
Please provide some insight on what's the best approach. Thank you.

The best approach is to stop running things locally. SSIS packages should be set up and run on the database server, using server accounts that won't disappear when people leave. We have a separate account for dev, QA, and staging as well as prod, since some of the other environments are on the same server.
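If the packages do get moved onto the server, dtutil can import a .dtsx file from the file system into MSDB there; something along these lines (the server, path, and package names are just placeholders):

    dtutil /FILE "C:\SSIS\LoadSales.dtsx" /DestServer PRODSQL01 /COPY SQL;LoadSales

Once the packages live in MSDB, the SQL Agent jobs can point at the server copy instead of anyone's local drive.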

While I agree with HLGEM about best practice, you have a couple of "quick and dirty" options.
One really dirty way is to mess with the service account that the SQL Agent uses. Usually it runs as Local Service or Network Service, but you can make it run under domain credentials that have access to the appropriate data sources. Careful, this is a big security hole.
SQL Agent also has a mechanism for running particular job steps under different credentials. Look into "Credentials" and "Proxies". This setup is really valuable at the server level, but is a lot of extra work just to run a job from a developer's machine.
Finally, there is no reason that SQL Agent has to be involved at all. You can run a dtsx package from the command line or from an SSIS repository under the current user's credentials. This works fine if you run packages as needed, but it's not so good for scheduled runs.
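For example (the path, package, and server names here are made up), a one-off run from the command line looks like:

    dtexec /File "C:\SSIS\LoadSales.dtsx"
    rem or, if the package has been imported into MSDB on a server:
    dtexec /SQL LoadSales /Server PRODSQL01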

Related

Using versioning on a VM with several users

We are looking for a way to use GitHub on an internal system that we are developing at work. We have developed it in PHP and MySQL, with a fair bit of jQuery/Ajax, on a Windows Server VM running IIS. Other staff can access the frontend over the network using the IP address.
There are currently three people working on it, and at the moment we edit the files directly on the VM because the code needs to keep talking to the database so we can check that our changes have worked. There is no option to install anything like WAMP on our individual machines, and the usual group policy restrictions mean the only access we have to a database is via the VM. We have been working with copies of files/folders and the database, but there is always the risk that merging these later would be a massive task.
I do use GitHub at home (mainly the desktop client, though I can just about get by with the command line as long as I have a list of the commands in front of me) to sync between my PC and laptop via GitHub.com, and I believe the issues we get with several people needing to update the same file would be eradicated by using it here at work.
However, there are some queries we need to ensure we have straight in our heads before putting forward a request.
Is what we are asking for viable? Can several branches on the same server be worked on at the same time, or would this only work on individual machines?
Given that our network is fairly restricted, is there any way that we can work on the files on our own machines and connect to a VM-hosted database? I believe that an IDE will allow us to run PHP files on a standard machine (although a request for Eclipse is now around six weeks old and there is still no confirmation that we will get it any time soon), but will this also allow us to connect to the database on the VM?
The stuff we do is not overly sensitive, but the company would certainly not want it out there in a public repository (and would also not be likely to pay for a premium GitHub account), so we would need to branch/pull/merge directly from our machines to the VM.
Does anyone have any advice/suggestions/solutions? Although GitHub would be the preferred option as I already use it, we are open to any suggestion that will allow three people, on different machines, to work simultaneously on a central system while ensuring that we do not overwrite or affect each other's work.
Setting up a Git repo on Windows is not trivial and may require a fair bit of work. You could try SVN instead: it is fairly straightforward to install on Windows and has a gentler learning curve than Git. I am not saying SVN is better or worse than Git, just that it may be better suited to your needs. We have a similar setup and we use TortoiseSVN as a client. SVN also supports branching.
SVN for the server-side repository: https://subversion.apache.org/
If you would still prefer Git on Windows, check this out: https://www.linkedin.com/pulse/step-guide-setup-secure-git-remote-repository-windows-nivedan-bamal
1) It is possible to work on many branches and then merge them into a single branch. That's the preferred way of developing with Git. You can do the same with SVN.
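If you do go the Git route, the lightest setup I know of needs no server software at all: a bare repository in a shared folder on the VM that all three machines can reach (the machine and share names below are invented):

    rem on the VM (with D:\repos shared as \\DEV-VM\repos)
    git init --bare D:\repos\intranet.git

    rem on each developer machine
    git clone //DEV-VM/repos/intranet.git
    cd intranet
    git checkout -b my-feature

Pushing and pulling then happens over the internal file share, so nothing leaves your network and no GitHub account is involved.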

Synchronize databases with git deployment

So I own a VPS server running CentOS, and decided to use git for deployment. Man! That's fun. Push, done!
I'm really happier than I was with the old FTP approach.
But I wish I could go further: today it automagically deploys all my files, but it doesn't even touch my DB, and if I change the database in my mods I have to update it manually. So I was thinking about using some git hooks to do this automatically as well.
Right now I'm using one git hook on the server, a post-receive hook that basically copies the files to the production directory when master is pushed.
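Stripped down, that hook is just (the production path is of course specific to my box):

    #!/bin/sh
    # hooks/post-receive on the server: check the pushed master branch out
    # into the live directory
    GIT_WORK_TREE=/var/www/production git checkout -f master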
The prerequisites for the DB deployment are:
It needs to go both ways: if I pull and the DB has changed, it should update my local DB.
It should be based on modifications and patches, not a dump of the whole DB, so that I can work with the team without clobbering the other guys' work.
I was thinking about keeping a db.sql under version control and writing a script that analyzes it on post-receive (on the server) and post-merge (locally), so it can pick up the mods and apply them, and I would keep a record of which mods have already been applied (the script should run on both the client and the server).
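Roughly, I picture the script like this (the migrations folder, tracking file, and credentials are just my first guess):

    #!/bin/sh
    # Run from the server's post-receive hook and from a local post-merge hook:
    # apply every db/migrations/*.sql patch not yet listed in .applied_migrations
    APPLIED=.applied_migrations
    touch "$APPLIED"
    for patch in db/migrations/*.sql; do
        if ! grep -qxF "$patch" "$APPLIED"; then
            mysql -u deploy -p"$DB_PASSWORD" app_db < "$patch" \
                && echo "$patch" >> "$APPLIED"
        fi
    done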
Any of you guys have already done something similar to this? What would you recommend?
Thanks very much in advance.

ExpressionEngine : git : local development : remote database

To those of you that are trying to be good little developers and version control their ExpressionEngine sites with git, how do you handle your database?
In my limited experience with multiple developers working on one ExpressionEngine site, we've all had to run off of a single MySQL development database on a remote web server. For those of you who have tried this, it is PAINFULLY slow: page loads can easily take 5-10 seconds, making development extremely difficult. It would be quicker to work entirely on a remote development server, but I am trying to steer away from working off of a remote MySQL server so that I can work from anywhere and not depend on Internet connection speed/quality.
Just wondering how others handle their MySQL databases.
Do all of your developers run off of one central database? Have you dealt with slowness issues like we have?
Do you keep your database under version control? How do you handle export/imports among multiple developers and multiple branches?
With one developer I can import/export/commit the database very easily but as soon as you add another developer to the mix, it gets very VERY muddy. Looking forward to hearing everyone's thoughts on this mammoth topic.
Thanks!
It seems a lot of time is lost on failing DNS lookups when using a remote database.
Start your MySQL server (mysqld) with --skip-name-resolve. (More information on this topic can be found here: http://dev.mysql.com/doc/refman/5.0/en/host-cache.html)
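For example, either pass it at startup or make it permanent in the server configuration:

    # one-off, when starting the server by hand
    mysqld --skip-name-resolve

    # or permanently, in my.cnf / my.ini:
    #   [mysqld]
    #   skip-name-resolve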
Having a remote database still seems to be the best way for us to work on a project with multiple developers.
I almost always use a central database for development. Depending on which host you use, the speed difference may not be huge.
Obviously, if you're not making changes to the database, i.e. only doing template development, keeping the database in sync matters less, so you could potentially bring up a local copy of the database. You just have to remember to repeat any database changes if you do end up making some.
As far as version control, I keep a copy of my base EE install's SQL file in my base repository. Other than that I don't usually keep copies of the database in Git, so I don't do a lot of importing/exporting, etc.
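That base SQL file is nothing more than a one-time dump committed next to the code, roughly (the database name and path are whatever you use locally):

    mysqldump -u ee_user -p ee_site > db/base_install.sql
    git add db/base_install.sql
    git commit -m "Base ExpressionEngine database snapshot"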
Have you looked at the EE Profiler recently? You'll probably see somewhere in the neighborhood of 20-80 queries on your home page, depending on its complexity.
The problem is that, for each query, MySQL must execute a remote request for data, download the response, and then hand ExpressionEngine its data. Those 20-80 round trips to the database are what's causing your delay, and I don't think there is much you can do about it. When using a remote (outside our network) database, I get the same delay as you.
When MySQL is running on your machine or on the production server, it doesn't have the added network requests causing latency in its requests for data. That is the difference.
As for fixes, all you can do is move to a database hosted on your internal network. We have a Linux machine that mimics our production environment that we use for staging. Since it's on our network, we can use the local IP address in our database.php file. This is much faster.
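If you want to put a number on the difference first, time a trivial query against each candidate host from a developer machine and multiply by the queries per page (the IP and credentials here are placeholders; pass the password non-interactively so the prompt doesn't skew the timing):

    time mysql -h 10.0.0.25 -u ee_user -p"$EE_DB_PASS" -e "SELECT 1"
    # e.g. 50 ms per round trip x 60 queries is roughly 3 seconds of pure
    # network wait per page load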
The problem that we still have is the issue of channels/fields/entries. When a developer is working on a new section, they'll likely need to create a new channel and fields and/or new entries. When we're ready to push that functionality to production, we have to make those changes manually on the production server, as there is no way to reliably export them. I am hopeful about this add-on, though; we'll see.
In my company (4 developers) we each run our own DB locally. But recently I tested Rackspace Cloud Databases (there are other cloud DB providers too) for a heavy DB that could become difficult to run on a little laptop. It's less expensive than running our own DB server, and it can be set up or deleted in a minute.

Hudson slaves, how to access workspace

How do I configure the system to have one master and multiple slaves, where builds are normal C code with gmake? How can the slaves access the workspace from the master? I guess an NFS share is the way to go, but if that's not possible, are there any other options?
http://wiki.hudson-ci.org/display/HUDSON/Distributed+builds is there, but I cannot understand how workspace sharing is handled.
Rsync? From the master: SCM job -> done -> rsync to all slaves -> build job, and if it was done on a slave -> rsync the workspace back to the master?
Any proof of concept or real-life solutions?
When Hudson runs a build on a slave node, it does a checkout from source control on that node. If you want to copy other files over from the master node, or copy other items back to the master node after a build, you can use the Copy to Slave plugin.
It's surely a late answer, but may help others.
I'm currently using the "Copy Artifact plug-in" with great results.
http://wiki.hudson-ci.org/display/HUDSON/Copy+Artifact+Plugin
(https://stackoverflow.com/a/4135171/2040743)
Just one way of doing things, others exist.
Workspaces are actually not shared when builds are distributed to multiple machines; they exist as separate directories on each machine. To coordinate items, anything that needs to be distributed from one workspace to another is copied into a central repository via SCP.
This means that sometimes I have a task which needs to wait for the items to land in the central repository. To handle this, I have the task run a shell script which polls the repository via SCP for the presence of the needed items and errors out if they aren't available after five minutes.
The only downside is that you need to pass around a parameter (the build number) to keep the builds on the same page, preventing one build from picking up an artifact built by a previous run. That, and you have to set up a lot of SSH keys to avoid having to pass a password when running the SSH scripts.
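Stripped down, the polling step looks something like this (the host, artifact path, and timings are specific to my setup):

    #!/bin/sh
    # Wait up to five minutes for this build's artifact to appear in the
    # central repository, fetch it, and fail the step if it never shows up.
    HOST=artifact-repo.internal
    REMOTE=/srv/artifacts/myjob/${BUILD_NUMBER}/output.tar.gz
    tries=0
    while [ "$tries" -lt 30 ]; do
        if ssh "$HOST" test -f "$REMOTE"; then
            scp "$HOST:$REMOTE" . && exit 0
        fi
        tries=$((tries + 1))
        sleep 10
    done
    echo "artifact not found after five minutes" >&2
    exit 1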
Like I said, not the ideal solution, but I find it is more stable than the ssh artifact grabbing code for my particular release of Hudson (and my set of SSH servers).
One downside: the SSH servers on most Linux machines seem to really lack performance. A solution like mine tends to swamp your SSH server with a lot of connections coming in at about the same time. If you find the same happens to you, you can add timer delays (an easy, imperfect solution) or rebuild the SSH server with the high-performance patches. One day I hope the high-performance patches make their way into the SSH server base code, provided they don't negatively impact SSH server security.

Using SSIS in a cluster with BizTalk BAM

We set up SQL Server in a cluster. We then used one of the BizTalk servers and installed SSIS on it, and then configured BAM. Everything looked fine until we needed to run some of the SSIS jobs to archive stuff. They don't seem to have been created ...
We then tried to change the value in the MsDtsSrvr.ini.xml file and restart the SSIS service on the BizTalk server and tested again, but still no luck.
What do we need to do and where do we need to change the value in the MsDtsSrvr.ini.xml file?
In this post Richard S. discusses the problem, but I still can't seem to get it to work ...
Difficult to say exactly what the problem is based on your comment.
A couple of things you could check:
Is your SQL Server database installed on a named instance?
What rights does the account that you used to run the install have?
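If it does turn out to be a named or clustered instance, the usual fix is in the SSIS service configuration rather than anywhere in BAM itself. Roughly (the file location and service name vary with the SQL Server version, and the virtual server name below is invented):

    rem MsDtsSrvr.ini.xml normally sits under the DTS\Binn folder of the SQL
    rem Server install, e.g. C:\Program Files\Microsoft SQL Server\100\DTS\Binn.
    rem Change the <ServerName> element of the MSDB folder from "." to the
    rem clustered SQL virtual server name (plus instance), e.g. SQLVS01\BAM,
    rem then restart the SSIS service (MsDtsServer on 2005, MsDtsServer100 on 2008):
    net stop MsDtsServer100
    net start MsDtsServer100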