How to delete existing servers on PhpStorm?

How to delete existing servers on PhpStorm? It seems that it is not even possible to delete the wrong ones.

Those "servers" are deployment entries and they can be managed at Settings/Preferences | Build, Execution, Deployment | Deployment.
Please note that most (if not all) of those entries will most likely be shared entries (visible to all projects; that is the default and, in older PhpStorm versions, the only option) and therefore can potentially be used by more than one project. It's better to go through each existing project and make its entries project-specific (the "Visible only for this project" check box -- if you do not have such an option then you are using quite an old PhpStorm version).

Related

How to share configurations between two or more Trac environments

I hope this is a good spot for my question, since it is software-related but not code-related.
In our company we use Trac for issue tracking and for managing links to the code; I am very satisfied with it and like how it works.
I have several environments (one per project), and every time we change a setting in the configuration (e.g. users & permissions, severity, ticket types, etc...) we need to change all of them.
I use
[inherit]
file=../../../sharedTrac.ini
and delete the shared parts from the file. That works for the preferences, but I didn't find a way to share the configurations.
This is bad for several reasons, the main one being that it "bugs me!!!" :p
Can Trac read its configuration from a central definition, and the data from a local DB?
EDIT:
I noticed all these configurations are in the .db file (SQLite file)...
Is there a ready-made tool to copy the configurations from DB to DB, or should I go ahead and analyse what should be copied and how?
You're almost there. Note, though, that local settings will always overrule inherited ones, so you must delete them in your <env>/conf/trac.ini files to make the central configuration effective.
Specifically regarding the part of the configuration inside the Trac db: no, there is no sync tool yet. Given that a similar tool for user accounts is still in beta after years, there's not much interest. You should use the trac-admin command-line tool (as already advised here) or start to sync parts of the db directly by means of your own (Python) scripts or custom db synchronisation. For a start, have a look at the Trac db schema.
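Something along these lines could serve as a starting point. It is only a rough sketch: the paths are placeholders, the permission and enum tables are where Trac normally keeps permissions and ticket types/severities/priorities, and you should verify the table list against your Trac version's db schema and back up the .db files first.
import sqlite3

MASTER = "/srv/trac/master/db/trac.db"              # placeholder path to the "central" environment
TARGETS = ["/srv/trac/proj-a/db/trac.db",           # placeholder paths to the per-project environments
           "/srv/trac/proj-b/db/trac.db"]
TABLES = ["permission", "enum"]                     # users & permissions, ticket types/severities/priorities

src = sqlite3.connect(MASTER)
for path in TARGETS:
    dst = sqlite3.connect(path)
    for table in TABLES:
        rows = src.execute("SELECT * FROM %s" % table).fetchall()
        dst.execute("DELETE FROM %s" % table)       # replace the target's configuration wholesale
        if rows:
            placeholders = ",".join("?" * len(rows[0]))
            dst.executemany("INSERT INTO %s VALUES (%s)" % (table, placeholders), rows)
    dst.commit()
    dst.close()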
You can try to do this through the command line. Just call the appropriate trac-admin command for each instance. Example one-liner to add a user profile:
for D in */; do trac-admin "$D" session add username "Full Name" user@email.com ; done

Best git mysql versioning system?

I've started using git with a small dev team of people who come and go on different projects; it was working well enough until we started working with Wordpress. Because Wordpress stores a lot of configurations in MySQL, we decided we needed to include that in our commits.
This worked well enough (using mysqldump on pre-commit, and pushing the dumped file back into MySQL on post-checkout) until two people made modifications to plugins and committed, and then everything broke again.
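A minimal sketch of such a hook pair, for illustration of the mechanism only; the paths, database name and credential handling are placeholders, not the exact hooks used here:
#!/usr/bin/env python
# Hypothetical .git/hooks/pre-commit: dump the WordPress database and stage the dump.
import subprocess

DUMP_FILE = "db/wordpress.sql"      # placeholder path inside the repository
DB_NAME = "wordpress"               # placeholder database name; credentials assumed to come from ~/.my.cnf

with open(DUMP_FILE, "w") as out:
    # --skip-dump-date keeps the file byte-identical when nothing actually changed
    subprocess.check_call(["mysqldump", "--skip-dump-date", DB_NAME], stdout=out)
subprocess.check_call(["git", "add", DUMP_FILE])

#!/usr/bin/env python
# Hypothetical .git/hooks/post-checkout: load the committed dump back into MySQL.
import subprocess

with open("db/wordpress.sql") as dump:
    subprocess.check_call(["mysql", "wordpress"], stdin=dump)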
I've looked at every solution I could find, and thought Liquibase was the closest option, but it wouldn't work for us. It requires you to specify the schema in XML, which isn't really possible because we are using plugins that insert data/tables/modifications into the DB automatically.
I plan on putting a bounty on it in a few days to see if anyone has the "goldilocks solution" of:
The question:
Is there a way to version control a MySQL database semantically (not using diffs -- EDIT: meaning that it doesn't just take the two versions and diff them, but instead records the actual queries run in sequence to get from the old version to the current one), without requiring a developer-written schema file, and in a way that can be merged using Git?
I know I can't be the only one with such a problem, but hopefully there is somebody with a solution?
The proper way to handle db versioning is through a version script that is additive-only. Because of this, it will conflict all the time, as each branch will be appending to the same file. You want that. It makes the developers realize how each other's changes affect the persistence of their data. Rerere will ensure you only resolve a conflict once, though. (See my blog post that touches on rerere sharing: http://dymitruk.com/blog/2012/02/05/branch-per-feature/)
Keep wrapping each change in an if-then clause that checks the version number, changes the schema or modifies lookup data (or something else), then increments the version number. You just keep doing this for each change.
In pseudocode, here is an example.
if version table doesn't exist
create version table with 1 column called "version"
insert a row with the value 0 for version
end if
-- now someone adds a feature that adds a members table
if version in version table is 0
create table members with columns id, userid, passwordhash, salt
with non-clustered index on the userid and pk on id
update version to 1
end if
-- now some one adds a customers table
if version in version table is 1
create table customers with columns id, fullname, address, phone
with non-clustered index on fullname and phone and pk on id
update version to 2
end if
-- and so on
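If it helps to see the same gating as something runnable, here is a minimal Python sketch of the idea. It uses sqlite3 purely so the example is self-contained (a MySQL driver would take its place in practice), and the tables and columns are the illustrative ones from the pseudocode, not a real schema.
import sqlite3

MIGRATIONS = [
    # (target version, statements applied when the db is at target version - 1)
    (1, ["CREATE TABLE members (id INTEGER PRIMARY KEY, userid TEXT, passwordhash TEXT, salt TEXT)",
         "CREATE INDEX idx_members_userid ON members (userid)"]),
    (2, ["CREATE TABLE customers (id INTEGER PRIMARY KEY, fullname TEXT, address TEXT, phone TEXT)",
         "CREATE INDEX idx_customers_fullname_phone ON customers (fullname, phone)"]),
    # each new feature appends another entry here, which is what makes merges conflict on purpose
]

def upgrade(conn):
    conn.execute("CREATE TABLE IF NOT EXISTS version (version INTEGER NOT NULL)")
    if conn.execute("SELECT COUNT(*) FROM version").fetchone()[0] == 0:
        conn.execute("INSERT INTO version (version) VALUES (0)")
    for target, statements in MIGRATIONS:
        (current,) = conn.execute("SELECT version FROM version").fetchone()
        if current == target - 1:
            for statement in statements:
                conn.execute(statement)
            conn.execute("UPDATE version SET version = ?", (target,))
    conn.commit()

upgrade(sqlite3.connect("app.db"))   # placeholder database file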
The benefit of this is that you can automatically run this script after a successful build of your test project if you're using a static language - it will always roll you up to the latest. All acceptance tests should pass if you just updated to the latest version.
The question is, how do you work on 2 different branches at the same time? What I have done in the past is just spin up a new instance whose db name includes the branch name. Your config file is cleaned (see git smudge/clean) to set the connection string to point to the new or existing instance for that branch.
If you're using an ORM, you can automate this script generation, since, for example, NHibernate will let you export the graph changes that are not yet reflected in the db schema as a SQL script. So if you added a mapping for the Customer class, NHibernate will let you generate the table creation script. You just script the addition of the if-then wrapper and you're automated on the feature branch.
The integration branch and the release candidate branch have some special requirements that will require wiping and recreating the db if you are resetting those branches. That's easy to do in a hook by ensuring that the new revision contains the old revision (git branch --contains). If not, wipe and regenerate.
I hope that's clear. This has worked well in the past, and it requires each developer to be able to create and destroy their own db instances on their machines, although it could work on a central server with an additional instance naming convention.

MySQL schema source control

At my company we have several developers all working on projects internally, each with their own virtualbox setup. We use SVN to handle the source, but occasionally run into issues where a database (MySQL) schema change is necessary, and this has to be propagated to all of the other developers. At the moment we have a manually-written log file which lists what you changed, and the SQL needed to perform the change.
I'm hoping there might be a better solution -- ideally one linked to SVN, e.g. if you update to revision 893 the system knows this requires database revision 183 and updates your local schema automagically. We're not concerned with the data being synched, just the schema.
Of course one solution would be to have all developers running off a single, central database; this however has the disadvantage that a schema change could break everyone else's build until they do an svn up.
One option is a data dictionary in YAML/JSON. There is a nice article here
I'd consider looking at something like MyBatis Schema Migration tools. It isn't exactly what you describe, but I think it solves your problem in an elegant way and can be used without pulling in core MyBatis.
In terms of rolling your own, what I've always done is to have a base schema file that will create the schema from scratch, as well as a delta file that appends all schema changes as deltas, separated by version numbers (you can try to use SVN revision numbers, but I always find it easier just to increment manually). Then have a schema_version table that records that version for the live database, keep the same version number in the canonical schema file, and have a script that runs, from the delta file, all changes newer than the version in the existing DB.
So you'd have a schema like:
-- Version: 1
CREATE TABLE user (
id bigint,
name varchar(20))
You have the tool manage the schema version table and see something like:
> SELECT * FROM schema_version;
1,2011-05-05
Then you have a few people add to the schema and have a delta file that would look like:
-- Version: 2
ALTER TABLE user ADD email varchar(20);
-- Version: 3
ALTER TABLE user ADD phone varchar(20);
And a corresponding new schema checked in with:
-- Version: 3
CREATE TABLE user (
id bigint,
name varchar(20),
email varchar(20),
phone varchar(20))
When you run the delta script against a database with the initial schema (Version 1), it will read the value from the schema_version table and apply all deltas greater than that to your schema. This gets trickier when you start dealing with branches, but serves as a simple starting point.
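A hedged sketch of the script that applies "all deltas greater than that" could look like the following, assuming the -- Version: N markers shown above and a Python DB-API connection (MySQLdb, mysql.connector or similar); the schema_version column names are guessed from the example output.
import re

def apply_deltas(conn, delta_path):
    cur = conn.cursor()
    cur.execute("SELECT MAX(version) FROM schema_version")
    current = cur.fetchone()[0] or 0

    # Group the delta file's statements by their "-- Version: N" marker.
    deltas, version = {}, None
    with open(delta_path) as delta_file:
        for line in delta_file:
            marker = re.match(r"--\s*Version:\s*(\d+)", line)
            if marker:
                version = int(marker.group(1))
                deltas[version] = []
            elif version is not None:
                deltas[version].append(line)

    # Apply everything newer than what the live database reports.
    for version in sorted(v for v in deltas if v > current):
        for statement in "".join(deltas[version]).split(";"):
            if statement.strip():
                cur.execute(statement)
        cur.execute("INSERT INTO schema_version VALUES (%s, CURRENT_DATE)", (version,))
        conn.commit()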
There are a couple approaches I've used before or currently use:
Sequential Version Number
Most that use this approach have a separate program that grabs a version number from the database, and then executes any statements associated with database versions higher than that number, finally updating the version number in the database.
So if the version is 37 and there are statements associated with version 1 through 38 in the upgrading application, it will skip 1 through 37 and execute statements to bring the database to version 38.
I've seen implementations that also allow for downgrade statements for each version to undo what the upgrade did, and this allows for taking a database from version 38 back down to version 37.
In my situation we had this database upgrading in the application itself and did not have downgrades. Therefore, changes were source-controlled because they were part of the application.
Directed Acyclic Graph
In a more recent project I came up with a different approach. I use classes that are nodes of a directed acyclic graph to encapsulate the statements to do specific upgrades to the database for each specific feature/bugfix/etc. Each node has an attribute to declare its unique name and the names of any nodes on which it was dependent. These attributes are also used to search the assembly for all upgrade nodes.
A default root node is given as the dependency node for any nodes without dependencies, and this node contains the statements to create the migrationregister table that lists the names of nodes that have already been applied. After sorting all the nodes into a sequential list, they are executed in turn, skipping the ones that are already applied.
This is all contained in a separate application from the main application, and they are source-controlled in the same repository so that when a developer finishes work on a feature and the database changes associated with it, they are committed together in the same changeset. If you pull the changes for the feature, you also pull the database changes. Also, the main application simply needs a list of the expected node names. Any extra or missing, and it knows the database does not match.
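To make the shape of this concrete, here is a loose Python sketch of the same idea, assuming a MySQL DB-API connection. The class, the example nodes and the hard-coded node list are reconstructions from the description above (the original is a separate application in a static language that discovers nodes via attributes), so treat every name as illustrative.
class MigrationNode:
    def __init__(self, name, depends_on=(), statements=()):
        self.name = name                      # unique name recorded in migrationregister
        self.depends_on = list(depends_on)    # names of nodes this one depends on
        self.statements = list(statements)

NODES = [
    MigrationNode("add-members-table",
                  statements=["CREATE TABLE members (id INT PRIMARY KEY, userid VARCHAR(50))"]),
    MigrationNode("add-members-email", depends_on=["add-members-table"],
                  statements=["ALTER TABLE members ADD email VARCHAR(100)"]),
]

def topological_order(nodes):
    by_name, ordered, seen = {n.name: n for n in nodes}, [], set()
    def visit(node):
        if node.name not in seen:
            seen.add(node.name)
            for dep in node.depends_on:
                visit(by_name[dep])
            ordered.append(node)
    for node in nodes:
        visit(node)
    return ordered

def migrate(conn):
    cur = conn.cursor()
    # The original design keeps this in a root node that every other node depends on.
    cur.execute("CREATE TABLE IF NOT EXISTS migrationregister (name VARCHAR(200) PRIMARY KEY)")
    for node in topological_order(NODES):
        cur.execute("SELECT 1 FROM migrationregister WHERE name = %s", (node.name,))
        if cur.fetchone():
            continue                          # already applied to this database
        for statement in node.statements:
            cur.execute(statement)
        cur.execute("INSERT INTO migrationregister (name) VALUES (%s)", (node.name,))
    conn.commit()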
I chose this approach because the project often has parallel development by multiple developers, with each developer sometimes having more than one thing in development (branchy development, sometimes very branchy). Juggling database version numbers was quite the pain. Say everybody starts at version 37; "Alice" starts on something and uses version 38 for her database change, and "Bob" also starts on work that has to change the database and also uses version 38, so one of them will eventually have to change. Let's say Bob finishes and pushes to the server. Now Alice, when she pulls Bob's changeset, has to renumber her statements to version 39 and set her database version back to 37 so that Bob's changes get executed, and then hers execute again.
But with this approach, all that happens when Alice pulls Bob's changeset is that there's simply a new migration node and another line in the list of node names to check against; things just work.
We use Mercurial (distributed) rather than SVN (client-server), so that's part of why this approach works so well for us.
An easy solution would be to keep a complete schema in SVN (or whatever library). That is, every time you change the schema, run MySQL "desc" to dump out descriptions of all the tables, overwrite the last such schema dump with this, and then commit. Then if you run a version diff, it should tell you what changed. You would, of course, need to keep all the tables in alphabetical order (or some predictable order).
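As a sketch of that, assuming a Python MySQL connection (the exact driver doesn't matter), dumping the desc output for every table in alphabetical order and letting the version control system diff it could look like this:
def dump_schema(conn, out_path):
    cur = conn.cursor()
    cur.execute("SHOW TABLES")
    tables = sorted(row[0] for row in cur.fetchall())   # predictable order keeps diffs meaningful
    with open(out_path, "w") as out:
        for table in tables:
            cur.execute("DESC " + table)
            out.write("-- %s\n" % table)
            for column in cur.fetchall():
                out.write("\t".join(str(field) for field in column) + "\n")
            out.write("\n")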
For a different approach: years ago I worked on a desktop application where we periodically sent out new versions that might include schema changes, and we wanted to handle these with no user intervention. The program had a description of the schema it expected. At startup it made some metadata calls to inspect the schema the database actually had, compared it to what it expected, and then automatically updated the schema to match. Usually when we added a new column we could simply let it start out null or blank, so this required pretty much zero coding effort once we got the first version working. When some actual manipulation was required to populate new fields, we'd have to write custom code, but that was relatively rare.

Maintaining multiple workspaces for each build in Hudson

Is it possible to maintain multiple workspaces for each build in Hudson? Suppose I want to keep the last 5 builds; is it possible to have the five corresponding workspace folders as well? Currently whenever a new build is scheduled it overwrites the workspace.
Right now, the idea is to reuse the workspace.
It depends on the SCM used (an SVN workspace, a Git workspace, a ClearCase snapshot or dynamic view, ...), and in none of those SCM plugins do I see an option to build a new workspace or to save (copy) an old one for each run of the job.
One (poor) solution would be to:
copy the job four times, resulting in 5 jobs to be modified for specifying 5 different workspaces (based on the same SCM configuration, meaning those 5 workspaces select the same versions in each one of them),
and have them scheduled to run one after the other.
As far as I know, there's no built in way to do it.
You do have a couple of options:
As one of your build steps, you could tar (or zip) up the workspace and record it as a build artifact.
Generate a tag with each successful build (e.g. with the Subversion Tagging Plugin)
Although not ideal, you could use the Backup Plugin.
The backup plugin allows you to back up the workspace. So, you could run the plugin after every build and it would archive the workspace.
Again, not ideal, but if this is a must-have requirement, and if it works with the way you're using Hudson, then it could work.
Depending on what you want to do, you have a few options.
If you need the last five workspaces for another job, you can use the Clone Workspace SCM plugin. Since I have never used it, I don't know if you can access the archived workspace manually (through the UI) later.
Another option worth trying is to use the archive option and archive the whole workspace (I think the filter setting for the archive option would be **/*). You can then download the workspace in zipped form from every job run. The beauty of this solution is that the artifacts will be cleaned up when you delete the particular job run (manually or through the job setting to delete old builds).
Of course you can also do it manually and run a copy as the last step of your build. You will need five directories (you can name them 1 to 5). First delete the oldest one and rename the others (4->5, 3->4, ...). The last step is to copy the workspace into the directory holding the newest copy (in our example, 1). This requires you to maintain your own archive job, which is why I prefer one of the above-mentioned options.
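A rough sketch of that rotation as a post-build step, written in Python; the archive path is a placeholder, and WORKSPACE is the environment variable Hudson sets for build steps:
import os, shutil

ARCHIVE = "/var/hudson/workspace-archive"        # placeholder location for the five copies
WORKSPACE = os.environ["WORKSPACE"]              # set by Hudson for each build step

oldest = os.path.join(ARCHIVE, "5")
if os.path.isdir(oldest):
    shutil.rmtree(oldest)                        # drop the oldest copy
for i in range(4, 0, -1):                        # shift 4->5, 3->4, 2->3, 1->2
    src = os.path.join(ARCHIVE, str(i))
    if os.path.isdir(src):
        os.rename(src, os.path.join(ARCHIVE, str(i + 1)))
shutil.copytree(WORKSPACE, os.path.join(ARCHIVE, "1"))   # newest copy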

How do you maintain revision control of your database structure?

What is the simplest way of keeping track of changes to a project's database structure?
When I change something about the database (eg, add a new table, add a new field to an existing table, add an index etc), I want that to be propagated to the rest of the team, and ultimately the production server, with the minimal fuss and effort.
At the moment the solution is pretty weak and relies on people remembering to do things, which is an accident waiting to happen.
Everything else is managed with standard revision control software (Perforce in our case).
We use MySQL, so tools that understand that would be helpful, though I would also be interested to learn how other places are handling this anyway, regardless of database engine.
You can dump the schema and commit it -- and the RCS will take care of the changes between versions.
You can get a tool like Sql Compare from Red-Gate which allows you to point to two databases and it will let you know what is different, and will build alter scripts for you.
If you're using .NET(Visual Studio), you can create a Database project and check that into source control.
This has already been discussed a lot, I think. Anyhow, I really like the Rails approach to the issue. It's code that has three things:
The version number
The way of applying the changes (updates a version table)
The way of rolling the changes back (sets the version in the version table to the previous one)
So, each time you make a changeset you create this code file that can rollback or upgrade the database schema when you execute it.
This, being code, you can commit in any revision control system. You commit the first dump and then the scripts only.
The great thing about this approach is that you can easily distribute the database changes to customers, whereas with a standard "just dump the schema and update it" approach, generating an upgrade/rollback script is a nuisance.
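The Rails original is Ruby, but the shape of such a changeset file is easy to mimic; a hypothetical Python version, where the schema_version table and the cursor handling are illustrative assumptions:
VERSION = 4   # this changeset's version number

def up(cursor):
    # apply the change and record the new version
    cursor.execute("ALTER TABLE user ADD COLUMN nickname VARCHAR(20)")
    cursor.execute("UPDATE schema_version SET version = %s", (VERSION,))

def down(cursor):
    # roll the change back and restore the previous version
    cursor.execute("ALTER TABLE user DROP COLUMN nickname")
    cursor.execute("UPDATE schema_version SET version = %s", (VERSION - 1,))
A small runner then calls up() for every changeset newer than the recorded version, or down() in reverse order to roll back.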
In my company each developer is encouraged to save all db structure changes as script files in a folder named with the module's revision number. These scripts are kept in the svn repository.
When the application starts, the db upgrade code compares the current db version with the code version and, if the code is newer, looks into the scripts folder and applies all db changes automatically.
This way every instance of the application (on production or on developers' machines) always upgrades the db to its code version, and it works great.
Of course, some automation could be added - if we find a suitable tool.
Poor man's version control:
Separate file for each object (table, view, etc)
When altering tables, you want to diff CREATE TABLE against CREATE TABLE. Source code history is for communicating a story. You can't do a meaningful diff of CREATE TABLE against ALTER TABLE.
Try to make changes to the files, then commit them to source control, then apply them to the SQL database. Most tools support this poorly, because you shouldn't commit to source control until you test, and you can't test without putting the code into SQL. So in practice, you try to use Redgate SQL Compare to compare your files to the SQL database. Failing that, you adopt a harsh policy of dropping everything in the database and replacing it with what made it into source control.
Change scripts are usually single-use, but applications exist, like WordPress, where you need to move the schema from 1.0 to 1.1, 1.1 to 1.5, etc. Each of those should be under source control and modified as such (i.e. as you find bugs in the script that moves you from 1.0 to 1.1, you create a new version of that script, not yet another script).