How to have a database (MySQL) snapshot for each submitted bug - mysql

I am trying to find a way/tool to do the following:
Testers test the application on the main testing environment and find a bug.
They generate a snapshot of the database used and attach it to the bug report.
The developer working on the bug can load this snapshot somewhere on the development server and use the resulting instance to work on the bug.
Once the bug is fixed (and tested successfully), the instance created on the server is destroyed.
The only thing I could think of is using virtual machines. Each developer would have a VM instance on the dev server and would be able to load snapshots of the bug they are working on into it. But this means taking a snapshot of the whole environment (which is big). Moreover, the testing is done on the dev server, which is a replica of live, so nothing changes except the database/middleware and front end.
I would love to find a tool that lets us create/load snapshots of just the database (and maybe even the middleware) and lets each developer (one instance per developer) choose which snapshot to work with.
Do you have any ideas for such tools, or other ways to do this?
I've looked around but haven't found anything really helpful.
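For illustration, here is a minimal sketch of the kind of per-bug workflow I have in mind, written as a Node/TypeScript script that shells out to mysqldump/mysql; the database names, paths, and function names are all hypothetical:

```typescript
// snapshot.ts -- hypothetical sketch of a per-bug MySQL snapshot workflow.
// Assumes mysqldump/mysql are on the PATH and that credentials come from
// the usual MySQL option files; all names here are made up.
import { execFileSync } from "node:child_process";
import * as fs from "node:fs";

const SOURCE_DB = "app_test";           // the shared testing database
const SNAPSHOT_DIR = "/var/snapshots";  // where snapshot files are kept

// Tester: dump the current state and attach the file to the bug report.
export function takeSnapshot(bugId: string): string {
  const file = `${SNAPSHOT_DIR}/bug-${bugId}.sql`;
  const dump = execFileSync("mysqldump", ["--single-transaction", SOURCE_DB]);
  fs.writeFileSync(file, dump);
  return file;
}

// Developer: load the snapshot into a throwaway, per-developer database.
export function loadSnapshot(bugId: string, dev: string): string {
  const instance = `bug_${bugId}_${dev}`;
  execFileSync("mysql", ["-e", `CREATE DATABASE \`${instance}\``]);
  const sql = fs.readFileSync(`${SNAPSHOT_DIR}/bug-${bugId}.sql`);
  execFileSync("mysql", [instance], { input: sql });
  return instance;
}

// Once the fix is verified: drop the per-bug instance.
export function destroyInstance(instance: string): void {
  execFileSync("mysql", ["-e", `DROP DATABASE \`${instance}\``]);
}
```

Plain dumps like this work for moderate database sizes; for very large databases, a filesystem-level snapshot of the data directory could serve the same three steps.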

Related

Classic ASP response times varying extremely

I am working on a set of Classic ASP (VBScript) websites under different domains with a 64-bit Access (2013) database connection. The server is a shared Windows Server 2012 machine with IIS 8.5. The sites were not coded by me.
Everything seems to work fine for a while, but after several page calls (sometimes even on the first or only call to the site) the server stops responding for 20 to 30 seconds or more. During that time I can't call ANY page hosted on this server; even all the other websites under different domains stop working.
I am not sure whether plain HTML pages still respond, but it seems not. After such an incident everything runs fine again for a varying period (up to 1 or 2 minutes), pages show up with normal response times, then the hang repeats. And so on…
Finding the problem is extremely difficult, because any of the sites on this shared hosting server could be causing this behaviour; it does not necessarily seem to be triggered by my specific page call or subsequent calls, though it could be.
I am not sure where to even look for the problem. I searched this forum and noticed some interesting answers, but none that matched our problem exactly. I tried Sysinternals' Process Monitor on a virtual server where only one specific site is hosted and the same issue exists, but I was not able to interpret most of the messages. I looked into the Event Viewer log on that machine and noticed entries saying:
A trappable error (C0000005) occurred in an external object. The script cannot continue running.
But even if that sounds like a possible cause, I am not sure where to look in the script, or in which log file I could find the trigger of all that. And on the shared host I don't even have the ability to do that. On our local 'internal webserver' under Windows 10, where local copies of all the sites reside, I can. But I'm not sure where to start my search.
Any help would be appreciated (and please don't needle me with proposals for switching to ASP.net or SQL - this is not possible at the moment).
I work with a huge Classic ASP application, and this error normally happens in a call to Server.CreateObject('foo'). Here we usually see it with the Excel object when someone tries to upload a very large .xls file. I would start by mapping all the Server.CreateObject calls.
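As a hypothetical first step for that mapping, a small script can scan the codebase and list every ProgID passed to Server.CreateObject; a Node/TypeScript sketch (the site root path is made up):

```typescript
// find-createobject.ts -- sketch: list every Server.CreateObject call
// in an ASP codebase, with the file it appears in and the ProgID used.
import * as fs from "node:fs";
import * as path from "node:path";

const ROOT = "C:/inetpub/wwwroot"; // hypothetical site root

// Recursively collect all .asp files under a directory.
function walk(dir: string): string[] {
  return fs.readdirSync(dir, { withFileTypes: true }).flatMap((entry) => {
    const full = path.join(dir, entry.name);
    if (entry.isDirectory()) return walk(full);
    return entry.name.toLowerCase().endsWith(".asp") ? [full] : [];
  });
}

// Matches Server.CreateObject("Some.ProgID") with either quote style.
const pattern = /Server\.CreateObject\s*\(\s*["']([^"']+)["']\s*\)/gi;

for (const file of walk(ROOT)) {
  const text = fs.readFileSync(file, "utf8");
  for (const match of text.matchAll(pattern)) {
    console.log(`${file}: ${match[1]}`);
  }
}
```

Once you know which COM objects are created where, you can wrap the suspicious ones in error handling and log before/after each call to narrow down which one dies with C0000005.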

PhpStorm: loading external files locks up program

I have a problem using PhpStorm:
All of my sites are hosted externally, and I pull them into my local PhpStorm project. When I need to pull in a new project (or sync an old one), PhpStorm locks up during the process, which can take a long time. If I want to work on a different PhpStorm project in the meantime, I can't do so via PhpStorm.
Does anyone know how to get around this? If it helps, I'm using Microsoft Windows.
There's an open feature request from March 2010 to solve this: https://youtrack.jetbrains.com/issue/WI-1307
For now, you may want to consider one of these workarounds:
Exclude all folders to prevent the initial download. Then go back into the settings, remove the exclusions, and do a background sync.
Create the project without defining a remote server. Then go into the project settings, add the server, configure the folders you'd like to sync, and perform a background sync.
Actually, I opened a case with JetBrains about that. The thing is that PhpStorm uses almost 100% of the CPU until it finishes indexing, so this is a big problem on older computers or machines with limited resources.

Losing data between updates (Chrome packaged app)

I'm working on a Chrome packaged app that saves a lot of data locally. I recently put it on the Chrome Web Store. To my dismay, whenever a user's Chrome installation updated the app (v1.1.1 to v1.1.2, for example), all their local data (IndexedDB data) was gone. Why is this so?
Is it the expected behavior to wipe out all the databases on an update?
Is there any way to prevent this other than not pushing out updates?
(Also where can I report this issue/bug, if it is one?)
Update: I filed a bug report, but now I can't reproduce the issue. I'm not sure whether it was fixed or my situation was a fluke.
The documentation is fuzzy on this:
https://developer.chrome.com/trunk/apps/app_lifecycle.html
Preventing data loss
Users can uninstall your app at any time. When uninstalled, no executing code or private data is left behind. This can lead to data loss since the users may be uninstalling an app that has locally edited, unsynchronized data. You should stash data to prevent data loss.
I hope they will elaborate on this, because zapping user data on every upgrade is not a great user experience.
I put in an issue:
http://code.google.com/p/chromium/issues/detail?id=169417
one of the developers got back to me and said:
I can't remember the release numbers off the top of my head, but at some point when we turned on correct partitioned storage, there would have been one-time data loss. This was done before packaged apps rolled out officially to stable. If the loss of data happened across a Chrome upgrade, then I would say it's expected. It certainly shouldn't be happening anymore.
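In the meantime, following the docs' advice to "stash" data, one option is to mirror critical records into chrome.storage.local so they could survive an IndexedDB wipe. A rough sketch (the chrome declaration is loosely typed here, and the record shape is hypothetical):

```typescript
// stash.ts -- sketch of mirroring critical records outside IndexedDB.
declare const chrome: any; // loosely typed; a real app would use @types/chrome

// After each IndexedDB write, mirror the record into chrome.storage.local.
function stashRecord(key: string, record: object): void {
  chrome.storage.local.set({ [key]: record });
}

// On startup, hand every stashed record back to the caller so it can be
// re-inserted into IndexedDB if it went missing.
function restoreStash(restore: (key: string, record: object) => void): void {
  chrome.storage.local.get(null, (items: Record<string, object>) => {
    for (const [key, record] of Object.entries(items)) {
      restore(key, record);
    }
  });
}
```

Note that chrome.storage.local has its own quota (unless the app requests the unlimitedStorage permission), so stash only the data you can't afford to lose, not the whole database.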

SQL Server 2008: How do I deploy changes?

I have a SQL Server instance and a live database that is used by a .NET application.
I want to make changes to the tables on this database without losing the data.
What is the best way of deploying these changes?
SQL scripts may be one way; although they can be tested beforehand, I do wonder whether they are risky as well.
I am sure there are lots of links that could help me here, but it seems I am not Googling the right words.
I deploy using scripts. When my changes involve table/data changes, I make a copy of the destination database and test my deployment against that first. Once all the bugs are worked out, I deploy to live.
Yes, there can always be risks, but at some point you must decide that you have tested enough and move on.
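As a rough sketch of that flow, here is how the rehearse-then-go-live run could be driven, assuming sqlcmd is installed; the server, database, and script names are hypothetical, and the test copy is restored from a backup beforehand:

```typescript
// deploy.ts -- sketch: rehearse a change script on a copy, then run it live.
import { execFileSync } from "node:child_process";

// -b makes sqlcmd exit with a non-zero code if the script errors out.
function run(server: string, database: string, scriptFile: string): void {
  execFileSync(
    "sqlcmd",
    ["-S", server, "-d", database, "-i", scriptFile, "-b"],
    { stdio: "inherit" },
  );
}

// 1. Rehearse against a restored copy of the live database...
run("DEVSERVER", "LiveDb_Copy", "deploy-changes.sql");
// 2. ...and only if that succeeds (execFileSync throws otherwise),
//    run the very same script against live.
run("PRODSERVER", "LiveDb", "deploy-changes.sql");
```

The point of the rehearsal is that the exact script that ran cleanly against the copy is the one that runs against live, with no manual steps in between.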

How do you actually use Visual Studio Team System database projects to version SQL Server

How are you supposed to use a Visual Studio Team System database project to implement version control on a SQL Server database?
This might seem overly generic, but nothing I've found online so far has helped me achieve anything useful. I have found functionality that appears similar to features of Redgate's SQL Compare tool, but it definitely didn't seem as intuitive as their product.
My understanding of how these db projects are supposed to work is that you have a version of the database, living either in Team Foundation Server or inside SQL Server itself, that you can check out to your local machine, work on, and then check in the new changes, which would allow simultaneous development to work as it normally does for code. Was I misinformed? Or is it just a complicated process to set up?
A related question is: how do you then use it to deploy changes to the staging/production servers?
We don't use that; we simply script everything and put it in source control like any other file, and ALL deployments to prod go only through scripts pulled down from source control. I think the real key is that nothing gets put on prod except through a source-controlled script. Once developers can't get their changes to prod any other way (devs should not have prod rights), there is no incentive not to put the change in source control.
Funny you should ask. I am the one responsible for getting our production databases under version control, and we're using Visual Studio Database Edition to do it. It is a fantastic tool. The very nice thing about it is that it will not only keep your schema under version control but also validate your database schema and let you run code analysis against it. It also supports refactoring operations and many other things.
Typically we work against a local development database, sync the changes back to VSDE, build the database to make sure there are no warnings or errors, and then create a deployment script for deploying to our production databases.
This is a simplified explanation of what we do and how, but I think it gives you a general idea of how it can be used. I'd be glad to answer any more specific questions you have.