CodeIgniter upgradable module logic for database changes - MySQL

I am trying to build my own CMS using CodeIgniter.
I have already written some modules, but over time I have made changes to them.
Right now, when I want to upgrade a module, I upload the files with FTP and change the database fields with phpMyAdmin.
This takes a lot of time, there is a high chance of missing a change, and I have to repeat the changes for every project that uses the module.
So now I am planning to build an installation system.
My modules directory structure looks like this:
/modules
/modules/survey/
/modules/survey/config
/modules/survey/config/autoload.php
/modules/survey/config/config.php
/modules/survey/config/routes.php
/modules/survey/config/install.php
/modules/survey/controllers
/modules/survey/controllers/entry.php...
/modules/survey/models
/modules/survey/models/survey.php...
/modules/survey/views
/modules/survey/views/index.php...
I thought that every module should have an install.php file in its config directory that keeps the settings of the related module, like below:
$config['version'] = 1.1; //or 1.2, 1.3 etc.
$config['module'] = 'Survey Module';
$config['module_slug'] = 'survey';
$config['module_db_table'] = 'module_survey';
I already have an installed_modules table:
id, module, module_slug, version
Now I am trying to write an installation script that works like this (before starting, I zip the module's files):
1- Upload the zip file through an installation page to a temp directory
2- Unzip the module in this temp directory
3- Find install.php
4- Get the module's information from install.php
5- Check whether this module is already in the installed_modules table
6a) If it's not: create a new module_survey table and copy the temp directory into the real modules directory
6b) If it is: change the structure of the table without losing the previously added data, delete all module files, and copy the new ones from temp into the modules directory
7- When everything is done, delete the temp directory
I am stuck on 6a and 6b.
For 6a: how should I create, e.g., the 'module_survey' table?
Should I add a $config['db_query'] entry in install.php like
$config['db_query'] = "CREATE TABLE IF NOT EXISTS `module_survey` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`name` varchar(50) DEFAULT NULL,
`lang_id` int(11) NOT NULL DEFAULT '1',
`usort` int(3) DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 ;
";
and run this query? Or what is your advice here? There may be more than one table; some modules need two or more tables with relations between them.
And for 6b:
I thought I should create a new temporary table named "temp_module_survey".
For the old fields:
$oldFields = $this->db->field_data('module_survey');
For the new fields:
$newFields = $this->db->field_data('temp_module_survey');
Then compare which fields are newly added, which are deleted, and which field definitions have changed. And then:
add the new fields to the old table,
delete the unnecessary fields from the old table,
update the fields whose definitions have changed,
and finally remove the temporary table.
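Roughly, the comparison could look like this (an untested sketch; the generated ALTER TABLE statements are simplified and ignore defaults, indexes, and type changes):

$oldFields = $this->db->field_data('module_survey');
$newFields = $this->db->field_data('temp_module_survey');

// Index both field lists by column name for easy lookup
$oldByName = array();
foreach ($oldFields as $field) { $oldByName[$field->name] = $field; }
$newByName = array();
foreach ($newFields as $field) { $newByName[$field->name] = $field; }

// Add columns that only exist in the new structure
foreach ($newByName as $name => $field) {
    if ( ! isset($oldByName[$name])) {
        $length = $field->max_length ? "({$field->max_length})" : '';
        $this->db->query("ALTER TABLE `module_survey` ADD `{$name}` {$field->type}{$length}");
    }
}

// Drop columns that no longer exist in the new structure
foreach ($oldByName as $name => $field) {
    if ( ! isset($newByName[$name])) {
        $this->db->query("ALTER TABLE `module_survey` DROP `{$name}`");
    }
}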
To summarize: what should I do to apply database changes without losing the old data?
I hope I explained it clearly.
Thank you.

Phil Sturgeon's codeigniter-migrations can help.
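For example, each module could ship numbered migration files, and your installer would simply run the migration library up to the version declared in install.php, for both fresh installs and upgrades. A minimal sketch against the CodeIgniter 2.x migration API (file, class, and location are illustrative):

<?php
// modules/survey/migrations/001_install_survey.php (location illustrative)
class Migration_Install_survey extends CI_Migration {

    public function up()
    {
        // Initial schema for the survey module
        $this->dbforge->add_field(array(
            'id'      => array('type' => 'INT', 'constraint' => 11, 'auto_increment' => TRUE),
            'name'    => array('type' => 'VARCHAR', 'constraint' => 50, 'null' => TRUE),
            'lang_id' => array('type' => 'INT', 'constraint' => 11, 'default' => 1),
            'usort'   => array('type' => 'INT', 'constraint' => 3, 'null' => TRUE),
        ));
        $this->dbforge->add_key('id', TRUE);
        $this->dbforge->create_table('module_survey', TRUE);
    }

    public function down()
    {
        $this->dbforge->drop_table('module_survey');
    }
}

A later 002_... migration can then call $this->dbforge->add_column() / drop_column() for schema upgrades, so your steps 6a and 6b both collapse into loading the migration library and calling $this->migration->version($targetVersion), which alters the structure without touching existing data.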

Related

Synchronizing XML file to MySQL database

My company uses internal management software for storing products. They want to transfer all the products into a MySQL database so they can make them available on the company website.
Note: they will continue to use their own internal software. This software can export all the products in various file formats (including XML).
The synchronization does not have to be in real time; they are satisfied with synchronizing the MySQL database once a day (late at night).
Also, each product in their software has one or more images, so I also have to make the images available on the website.
Here is an example of an XML export:
<?xml version="1.0" encoding="UTF-8"?>
<export_management userid="78643">
  <product id="1234">
    <version>100</version>
    <insert_date>2013-12-12 00:00:00</insert_date>
    <warrenty>true</warrenty>
    <price>139,00</price>
    <model>
      <code>324234345</code>
      <model>Notredame</model>
      <color>red</color>
      <size>XL</size>
    </model>
    <internal>
      <color>green</color>
      <size>S</size>
    </internal>
    <options>
      <s_option>aaa</s_option>
      <s_option>bbb</s_option>
      <s_option>ccc</s_option>
      <s_option>ddd</s_option>
      <s_option>eee</s_option>
      <s_option>fff</s_option>
      ...
      <extra_option>ggg</extra_option>
      <extra_option>hhh</extra_option>
      <extra_option>jjj</extra_option>
      <extra_option>kkk</extra_option>
      ...
    </options>
    <images>
      <image>
        <small>1234_0.jpg</small>
      </image>
      <image>
        <small>1234_1.jpg</small>
      </image>
    </images>
  </product>
  <product id="5321">
    ...
  </product>
  <product id="2621">
    ...
  </product>
  ...
</export_management>
Any ideas on how I can do this?
Please let me know if my question is not clear. Thanks.
EDIT:
I used an SQL statement like this for each table to fill them with the XML data:
LOAD XML LOCAL INFILE '/products.xml' INTO TABLE table_name ROWS IDENTIFIED BY '<tag_name>';
Then, checking the tables' content, I can see that the "id" field (primary key) automatically stayed the same for each respective product row in each table. That's correct and surprisingly awesome!
The problem now is with the <options> element, because it contains sub-elements with the same names (<s_option> and <extra_option>). The values of these tags are always different (that is, there is no fixed list of values; they are entered manually by an employee), and I also don't know how many there are for each product. I read that storing them as an array is not so good, but if it's the only simple solution I can accept it.
The way that I would approach the problem in your case is:
Create a corresponding set of tables in the database which will represent the company's Product model, extracting the modelling from your given XML.
Create and use a scheduled daily synchronization job that executes a few SQL commands to refresh the data, or introduce new data, by parsing the products XML into the created tables.
To be more practical about it all:
As for the database tables, I can easily identify three tables to be created based on your XML:
Products
ProductsOptions
ProductsImages
(I derived this from an XSD that was generated from your XML.)
Everything else can be treated as regular columns in the Products table, since those elements constitute a 1-1 relationship only.
Next, create the required tables in your database (you can use an XSD2DB schema converter tool to create the DDL script; I did it manually):
companydb.products

CREATE TABLE companydb.products (
    Id INT(11) NOT NULL,
    Version INT(11) DEFAULT NULL,
    InsertDate DATETIME DEFAULT NULL,
    Warrenty TINYINT(1) DEFAULT NULL,
    Price DECIMAL(19, 2) DEFAULT NULL,
    ModelCode INT(11) DEFAULT NULL,
    ModelColor VARCHAR(10) DEFAULT NULL,
    Model VARCHAR(255) DEFAULT NULL,
    ModelSize VARCHAR(10) DEFAULT NULL,
    InternalColor VARCHAR(10) DEFAULT NULL,
    InternalSize VARCHAR(10) DEFAULT NULL,
    PRIMARY KEY (Id)
)
ENGINE = INNODB
CHARACTER SET utf8
COLLATE utf8_general_ci
COMMENT = 'Company''s Products';

companydb.productimages

CREATE TABLE companydb.productimages (
    Id INT(11) NOT NULL AUTO_INCREMENT,
    ProductId INT(11) DEFAULT NULL,
    Size VARCHAR(10) DEFAULT NULL,
    FileName VARCHAR(255) DEFAULT NULL,
    PRIMARY KEY (Id),
    CONSTRAINT FK_productimages_products_Id FOREIGN KEY (ProductId)
        REFERENCES companydb.products(Id) ON DELETE RESTRICT ON UPDATE RESTRICT
)
ENGINE = INNODB
AUTO_INCREMENT = 1
CHARACTER SET utf8
COLLATE utf8_general_ci
COMMENT = 'Products'' Images';

companydb.productoptions

CREATE TABLE companydb.productoptions (
    Id INT(11) NOT NULL AUTO_INCREMENT,
    ProductId INT(11) DEFAULT NULL,
    Type VARCHAR(255) DEFAULT NULL,
    `Option` VARCHAR(255) DEFAULT NULL,
    PRIMARY KEY (Id),
    CONSTRAINT FK_productoptions_products_Id FOREIGN KEY (ProductId)
        REFERENCES companydb.products(Id) ON DELETE RESTRICT ON UPDATE RESTRICT
)
ENGINE = INNODB
AUTO_INCREMENT = 1
CHARACTER SET utf8
COLLATE utf8_general_ci;
As for the synchronization job, you can easily create a MySQL event and use the Event Scheduler to control it. I created the required event, which calls a stored procedure that you'll find below (SyncProductsDataFromXML):
CREATE DEFINER = 'root'@'localhost' EVENT companydb.ProductsDataSyncEvent
    ON SCHEDULE EVERY '1' DAY STARTS '2014-06-13 01:27:38'
    COMMENT 'Synchronize Products table with Products XMLs'
    DO BEGIN
        SET @productsXml = LOAD_FILE('C:/MySqlXmlSync/products.xml');
        CALL SyncProductsDataFromXML(@productsXml);
    END;

ALTER EVENT companydb.ProductsDataSyncEvent ENABLE;
Now the interesting part takes place. Here is the synchronization stored procedure (note how the event above calls it):
CREATE DEFINER = 'root'@'localhost'
PROCEDURE companydb.SyncProductsDataFromXML(IN productsXml MEDIUMTEXT)
BEGIN
    DECLARE totalProducts INT;
    DECLARE productIndex INT;

    SET totalProducts = ExtractValue(productsXml, 'count(//export_management/product)');
    SET productIndex = 1;

    WHILE productIndex <= totalProducts DO
        SET @productId = CAST(ExtractValue(productsXml, 'export_management/product[$productIndex]/@id') AS UNSIGNED);

        INSERT INTO products(`Id`, `Version`, InsertDate, Warrenty, Price, ModelCode, Model, ModelColor, ModelSize, InternalColor, InternalSize)
        VALUES(
            @productId,
            ExtractValue(productsXml, 'export_management/product[$productIndex]/version'),
            ExtractValue(productsXml, 'export_management/product[$productIndex]/insert_date'),
            CASE WHEN (ExtractValue(productsXml, 'export_management/product[$productIndex]/warrenty')) <> 'false' THEN 1 ELSE 0 END,
            CAST(ExtractValue(productsXml, 'export_management/product[$productIndex]/price') AS DECIMAL),
            ExtractValue(productsXml, 'export_management/product[$productIndex]/model/code'),
            ExtractValue(productsXml, 'export_management/product[$productIndex]/model/model'),
            ExtractValue(productsXml, 'export_management/product[$productIndex]/model/color'),
            ExtractValue(productsXml, 'export_management/product[$productIndex]/model/size'),
            ExtractValue(productsXml, 'export_management/product[$productIndex]/internal/color'),
            ExtractValue(productsXml, 'export_management/product[$productIndex]/internal/size')
        );

        SET @totalImages = ExtractValue(productsXml, 'count(//export_management/product[$productIndex]/images/image)');
        SET @imageIndex = 1;
        WHILE (@imageIndex <= @totalImages) DO
            INSERT INTO productimages(ProductId, Size, FileName)
            VALUES(@productId, 'small', ExtractValue(productsXml, 'export_management/product[$productIndex]/images/image[$@imageIndex]/small'));
            SET @imageIndex = @imageIndex + 1;
        END WHILE;

        SET @totalStandardOptions = ExtractValue(productsXml, 'count(//export_management/product[$productIndex]/options/s_option)');
        SET @standardOptionIndex = 1;
        WHILE (@standardOptionIndex <= @totalStandardOptions) DO
            INSERT INTO productoptions(ProductId, `Type`, `Option`)
            VALUES(@productId, 'Standard Option', ExtractValue(productsXml, 'export_management/product[$productIndex]/options/s_option[$@standardOptionIndex]'));
            SET @standardOptionIndex = @standardOptionIndex + 1;
        END WHILE;

        SET @totalExtraOptions = ExtractValue(productsXml, 'count(//export_management/product[$productIndex]/options/extra_option)');
        SET @extraOptionIndex = 1;
        WHILE (@extraOptionIndex <= @totalExtraOptions) DO
            INSERT INTO productoptions(ProductId, `Type`, `Option`)
            VALUES(@productId, 'Extra Option', ExtractValue(productsXml, 'export_management/product[$productIndex]/options/extra_option[$@extraOptionIndex]'));
            SET @extraOptionIndex = @extraOptionIndex + 1;
        END WHILE;

        SET productIndex = productIndex + 1;
    END WHILE;
END
And you're done; this process fills the three tables with the expected results.
NOTE: I've committed the entire code to one of my GitHub repositories: XmlSyncToMySql
UPDATE:
Because your XML data might be larger than the maximum allowed for a TEXT field, I've changed the productsXml parameter to MEDIUMTEXT. Look at this answer, which outlines the maximum allowed sizes of the various text datatypes:
Maximum length for MYSQL type text
As this smells like integration work, I would suggest a multi-pass, multi-step procedure with an interim format that is not only easy to import into MySQL but also helps you wrap your mind around the problems this integration ships with, and lets you test a solution in small steps.
This procedure works well if you can flatten the tree structure expressed in the XML export into a list of products with fixed, named attributes:
query all product elements with an XPath query from the XML, and iterate over the resulting products
query all product attributes relative to the context node of the product from the previous query, using one XPath per attribute
store the result of all attributes per product as one row in a CSV file (see the sketch after this list)
store the filenames in the CSV as well (the basenames), but put the image files into a folder of their own
create the DDL of the MySQL table in the form of an .sql file
run that .sql file against the mysql command line
import the CSV file into that table via the mysql command line
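A minimal sketch of the first three steps in PHP (my illustration; products.xml, products.csv, and the chosen columns are assumptions, and the attribute list would grow with your needs):

<?php
// Flatten products.xml into products.csv, one product per row
$xml = simplexml_load_file('products.xml');
$out = fopen('products.csv', 'w');
fputcsv($out, array('id', 'version', 'price', 'model_code', 'model_name'));

foreach ($xml->xpath('/export_management/product') as $product) {
    // Each attribute is read relative to the product context node
    fputcsv($out, array(
        (string) $product['id'],
        (string) $product->version,
        str_replace(',', '.', (string) $product->price), // "139,00" -> "139.00"
        (string) $product->model->code,
        (string) $product->model->model,
    ));
}
fclose($out);

The same loop would also be the natural place to copy each referenced image file into its own folder.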
You should get quick results within hours. If it turns out that products cannot be mapped onto a single row because some attributes have multiple values (what you call an array in your question), consider turning these into JSON strings if you cannot drop them entirely (just hope you don't need to display complex data in the beginning). Doing so violates normal form; however, as you describe it, the MySQL table is only intermediate here as well, so I would aim for simplicity of the data structure in the database, as otherwise queries for a simple and fast display on the website will become the next burden.
So my suggestion here basically is: turn the tree structure into a (more) flat list, both to simplify the transition and to make templating for display easier.
Having an intermediate format here also allows you to replay in case things go wrong.
It also allows you to mock the whole templating more easily.
Alternatively, it is also possible to store the XML of each product inside the database (keep the chunks in a second table so you can keep variable-length varchar fields out of the first table) and keep some other columns as (flat) reference columns to query against. If it's for templating needs, turning the XML into a SimpleXMLElement is often very nice: you get a structured, non-primitive data type as a view object that you can traverse and use to loop over options. It would work similarly with JSON; however, keeping the XML does not break a format boundary, and XML can also express more structure than JSON.
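For instance, rendering the options in a template then stays trivial (xml_chunk is a hypothetical column name for the stored product XML):

// $row comes from your product query; xml_chunk holds the raw XML (assumed name)
$product = new SimpleXMLElement($row['xml_chunk']);
foreach ($product->options->s_option as $option) {
    echo '<li>' . htmlspecialchars((string) $option) . '</li>';
}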
You're taking a very technology-centered approach to this. I think it's wiser to start by looking at the functional specifications.
It helps to have a simple UML class diagram of the business class Product. Show its attributes as the business sees them. So:
How is Model related to Product? Can there be multiple Models for one Product, or the other way around?
What kind of data is stored in the Internal element? In particular: how can Internal's color and size differ from Model's color and size?
And specifically about the web application:
Is the web application the only application that will be interested in this export?
Should the web application care about versioning, or simply display the last available version?
Which specific options are interesting to the web application? Like a discount property, a vendor name property, or others?
What should the Product details page look like, what data needs to be displayed where?
Will there be other pages on the website displaying product information, and what product information will they list?
Then you'll know which attributes need to be readily available to the web application (as columns in the Product table) and (perhaps) which ones can simply be stored in one big XML blob in the database.

CakePHP can't find table after creating a table

I create a table directly with a query; I only want to import some data. Therefore I execute a dynamically built query, and I try to execute this query in a Component class. (I use a random existing model to execute this query; is there a better way?)
$query = "CREATE TABLE IF NOT EXISTS testerdbs (
    `Ü1` varchar(6) COLLATE utf8_swedish_ci DEFAULT NULL,
    `Ü2` varchar(6) COLLATE utf8_swedish_ci DEFAULT NULL,
    `Ü3` int(3) DEFAULT NULL,
    `Ü4` varchar(6) COLLATE utf8_swedish_ci DEFAULT NULL,
    `Ü5` date DEFAULT NULL
)";
$data = ClassRegistry::init('import_files');
$data->query($query);
This works fine.
In the same request, I want to access the created table in the controller:
App::import('Model', "testerdb");
//$this->loadModel("testerdb");
$newTable = ClassRegistry::init("testerdb");
echo '<pre>', print_r($newTable->getColumnTypes()), '</pre>';
If I try to execute this in the same request, I always get the error:
Error: Table testerdbs for model testerdb was not found in datasource default.
If I do exactly the same request again, everything works fine...
I googled for about an hour, and it seems that Cake caches the models. If I execute the request again, Cake caches all the tables again and then finds my new table. So I hoped to load or import the created table in the same request, but it doesn't work.
Is there another way to load the table? Where is my mistake?
Thanks for the help!
This might be a bit stale, but I just spent the last week trying to work around this problem, and maybe this will help someone.
The root problem is that the cache of table names is initialized before you created the temporary table, so the setSource function returns an error that the temporary table does not exist.
The solution is to override the setSource function for the model that you are creating for 'testerdb' and remove the check on table existence (i.e. everything within the test if (method_exists($db, 'listSources'))).
Your model definition should look something like this:
App::uses('AppModel', 'Model');

class testerdb extends AppModel {

    public function setSource($tableName) {
        $this->setDataSource($this->useDbConfig);
        $db = ConnectionManager::getDataSource($this->useDbConfig);
        $this->table = $this->useTable = $tableName;
        $this->tableToModel[$this->table] = $this->alias;
        $this->schema();
    }
}
Many thanks to whoever posted the link below. This has worked with my CakePHP 2.0 instance.
http://web2.0goodies.com/blog/uncategorized/mysql-temporary-tables-and-cakephp-1-3/
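As an aside: in CakePHP 2.x it may also be enough to clear the cached model schema before initializing the new model, instead of overriding setSource (an untested alternative):

// Clear CakePHP's cached schema/table list so the freshly created table is visible
Cache::clear(false, '_cake_model_');
ClassRegistry::flush();
$newTable = ClassRegistry::init('testerdb');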
Why would you want only a temporary table? I would just temporarily store whatever data you are importing in an in-memory model or data structure.
If the table is not temporary, then just create it statically before you run your program.

Why can't I set the auto_increment value in my magento setup script?

I am creating a custom module with a few custom database tables. I need to set the auto_increment value to 5000 rather than the default of 1. This can be accomplished pretty easily, but I am running into problems when trying to do it via a Magento install script. I want to know why, and how to work around the issue. Here are the details.
When I run the following CREATE statement from a regular MySQL client (like HeidiSQL, or the standard CLI), the auto_increment value gets set correctly to 5000.
CREATE TABLE mytable (
    myid INTEGER NOT NULL PRIMARY KEY AUTO_INCREMENT,
    other_column INTEGER NULL
) ENGINE=InnoDb DEFAULT CHARSET=UTF8 AUTO_INCREMENT=5000;
But when I put that exact same query into a Magento install script, the auto_increment is set to 1 after it runs. To be clear, the table is created as I expect, except that the auto_increment isn't set to 5000. Here is the code in the install script.
file: app/code/local/Mycompany/Mymodule/sql/mymodule_setup/mysql4-install-0.0.1.php
<?php
$installer = $this;
$installer->startSetup();
$installer->run("
    CREATE TABLE {$this->getTable('mytable')} (
        myid INTEGER NOT NULL PRIMARY KEY AUTO_INCREMENT,
        other_column INTEGER NULL
    ) ENGINE=InnoDb DEFAULT CHARSET=UTF8 AUTO_INCREMENT=5000;
");
$installer->endSetup();
$installer->endSetup();
Why is this happening? Are there any workarounds?
(I'll also mention that I have tried to set the auto_increment with an ALTER statement, and I get the same problem.)
You can set up your install script as an .sql file instead of a .php one: mysql4-install-0.0.1.sql
Check out _modifyResourceDb in Mage/Core/Model/Resource/Setup.php:
try {
    switch ($fileType) {
        case 'sql':
            $sql = file_get_contents($sqlFile);
            if ($sql != '') {
                $result = $this->run($sql);
            } else {
                $result = true;
            }
            break;
        case 'php':
            $conn = $this->_conn;
            $result = include($sqlFile);
            break;
        default:
            $result = false;
    }
Short answer: script-wise I don't think it's possible, and the .sql method may not work either, since it still calls the run() method.
Add an auto_increment column in Magento setup script without using SQL
You can also take a look at lib/Varien/Db/Adapter/Pdo/Mysql.php for a deeper look at what's going on in the background, e.g. multi_query and _splitMultiQuery.
Some further reading on connection methods can be found here:
ALTER TABLE in Magento setup script without using SQL
More than likely you're going to have to go outside the realm of "The Magento Way" of doing things and, after your module is installed, run a custom post-install script on your table to adjust the auto_increment directly.
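For example, a follow-up upgrade script could send the ALTER directly through the setup connection, bypassing run() and its statement splitting (a sketch; the file name is illustrative, and whether this preserves the value in your Magento version is exactly what you would need to verify):

<?php
// e.g. app/code/local/Mycompany/Mymodule/sql/mymodule_setup/mysql4-upgrade-0.0.1-0.0.2.php
$installer = $this;
$installer->startSetup();
// Single direct statement against the DB adapter instead of $installer->run()
$installer->getConnection()->query(
    "ALTER TABLE {$installer->getTable('mytable')} AUTO_INCREMENT = 5000"
);
$installer->endSetup();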
I don't think you can set the primary key that way. All my code is built the following way, and it works perfectly fine:
<?php
$installer = $this;
$installer->startSetup();
$installer->run("
    CREATE TABLE {$this->getTable('mytable')} (
        myid INTEGER NOT NULL auto_increment,
        other_column INTEGER NULL,
        PRIMARY KEY (`myid`)
    ) ENGINE=InnoDb DEFAULT CHARSET=UTF8 AUTO_INCREMENT=5000;
");
$installer->endSetup();

Can I Import an updated structure into a MySQL table without losing its current content?

We use MySQL tables to which we add new fields from time to time as our product evolves.
I'm looking for a way to export the structure of a table from one copy of the db to another, without erasing the contents of the table I'm importing to.
For example, say I have copies A and B of a table, and I add fields X, Y, Z to table A. Is there a way to copy the changed structure (fields X, Y, Z) to table B while keeping its content intact?
I tried to use mysqldump, but it seems I can only copy the whole table with its content, overwriting the old one, or use the "-d" flag to avoid copying data (dumping structure only), but this will create an empty table when imported, again overwriting the old data.
Is there any way to do what I need with mysqldump, or some other tool?
What I usually do is store each and every ALTER TABLE statement run on the development table(s), and apply them to the target table(s) whenever necessary.
There are more sophisticated ways to do this (like structure comparison tools and such), but I find this practice works well. Doing this in a manual, step-by-step way also helps prevent accidental alteration or destruction of data by structural changes that change a field's type or maximum length.
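For instance, each change can live in a small, dated .sql file that you replay against every other copy of the database (the naming convention is just my habit, not a tool requirement):

-- 2014-06-01_add_x_y_z.sql: developed against table A, replayed on table B
ALTER TABLE B ADD COLUMN x VARCHAR(255) NULL;
ALTER TABLE B ADD COLUMN y VARCHAR(255) NULL;
ALTER TABLE B ADD COLUMN z VARCHAR(255) NULL;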
I just had the same problem and solved it this way:
Export the structure of the table to update.
Export the structure of the development table.
Run this code for the first file ("update.sql" needs to be changed according to your exported filename):
cat update.sql|awk -F / '{
if(match($0, "CREATE TABLE")) {
{ FS = "`" } ; table = $2
} else {
if(match($0," `")) {
gsub(",",";",$0)
print "ALTER TABLE `" table "` ADD" $0
}
}
}' > update_alter.sql
Run the same command for the second file:
cat development.sql|awk -F / '{
if(match($0, "CREATE TABLE")) {
{ FS = "`" } ; table = $2
} else {
if(match($0," `")) {
gsub(",",";",$0)
print "ALTER TABLE `" table "` ADD" $0
}
}
}' > development_alter.sql
Run this command to find the differences in the output files:
diff --changed-group-format='%<' --unchanged-group-format='' development_alter.sql update_alter.sql > update_db.sql
The file update_db.sql will now contain the code you are looking for.
Lazy way: export your old data and structure, import your new structure, then import only your old data. This worked for me in a test.
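With mysqldump that could look roughly like this (a sketch; dbA holds the new structure, dbB the old data, and the table/user names are placeholders; --complete-insert writes explicit column lists so the old rows still load after new nullable columns are added):

# 1. dump the old data only, with explicit column lists
mysqldump -u user -p --no-create-info --complete-insert dbB mytable > old_data.sql
# 2. dump the new structure only (no rows)
mysqldump -u user -p --no-data dbA mytable > new_struct.sql
# 3. recreate the table with the new structure, then replay the old rows
mysql -u user -p dbB < new_struct.sql
mysql -u user -p dbB < old_data.sql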
For your case, it might just need an update:
alter table B add column x varchar(255);
alter table B add column y varchar(255);
alter table B add column z varchar(255);
update A,B
set
B.x=A.x,
B.y=A.y,
B.z=A.z
where A.id=B.id; -- a key that exists in both tables
There is a handy way of doing this, but it needs a little editing in a text editor.
It takes at most about 10 minutes in Gedit under Linux!
Export your table and save it as localTable.sql.
Open it in a text editor (Gedit). You will see something like this:
CREATE TABLE IF NOT EXISTS `localTable` (
`id` int(8) NOT NULL AUTO_INCREMENT,
`date` int(10) NOT NULL,
# Lot more Fields .....
#Other Fields Here
Then just remove:
anything after the closing ) parenthesis
the line CREATE TABLE IF NOT EXISTS `localTable` (
Change every , at a line end to ; so that it all executes at once (,\n to ;\n).
Remove all ADD PRIMARY KEY (`id`); ADD KEY `created_by` (`created_by`) lines!
Keep only the fields you are interested in.
You will have this:
`id` int(8) NOT NULL AUTO_INCREMENT,
`date` int(10) NOT NULL,
# Lot more Fields .....
#Other Fields Here
Add ALTER TABLE `localTable` ADD to the beginning of each line:
ALTER TABLE `localTable` ADD `id` int(8) NOT NULL AUTO_INCREMENT,
ALTER TABLE `localTable` ADD `date` int(10) NOT NULL,
ALTER TABLE `localTable` ADD # ... and so on for the remaining fields
That's it; this could be automated with a small shell script.
Once you know what you have to do, import it into the remoteTable ;)
Thanks
No, it isn't possible, because MySQL is using the MariaDB version. In the MariaDB version, the structure of a table is arranged in memory, and that memory is shared with your byte data.
So when we try to import a structure (or a table), it alters that whole memory block.

How do I customise the CREATE DATABASE statement in VSTS DB Edition Deploy?

I'm using VSTS Database Edition GDR Version 9.1.31024.02
I've got a project where we will be creating multiple databases with identical schemas, on the fly, as customers are added to the system. It's one DB per customer. I thought I should be able to use the deploy script to do this. Unfortunately, I always get the full filenames specified in the CREATE DATABASE statement. For example:
CREATE DATABASE [$(DatabaseName)]
ON
PRIMARY(NAME = [targetDBName], FILENAME = N'$(DefaultDataPath)targetDBName.mdf')
LOG ON (NAME = [targetDBName_log], FILENAME = N'$(DefaultDataPath)targetDBName_log.ldf')
GO
I'd expected something more like this
CREATE DATABASE [$(DatabaseName)]
ON
PRIMARY(NAME = [targetDBName], FILENAME = N'$(DefaultDataPath)$(DatabaseName).mdf')
LOG ON (NAME = [targetDBName_log], FILENAME = N'$(DefaultDataPath)$(DatabaseName)_log.ldf')
GO
Or even
CREATE DATABASE [$(DatabaseName)]
I'm not going to be running this on an ongoing basis, so I'd like to make it as simple as possible for the next guy. There are a bunch of options for deployment in the project properties, but I can't get this to work the way I'd like.
Does anyone know how to set this up?
Better late than never: I know how to get the $(DefaultDataPath)$(DatabaseName) file names from your second example.
The SQL you're showing in your first code snippet suggests that you don't have scripts for creating the database files in your VSTS:DB project, perhaps having deliberately excluded them from any schema comparisons you've done. I found it a little counter-intuitive, but the solution is to let VSTS:DB script the MDF and LDF in your development environment, then edit those scripts to use the SQLCMD variables.
In your database project, go to the folder Schema Objects > Database Level Objects > Storage > Files. In there, add these two files:
Database.sqlfile.sql
ALTER DATABASE [$(DatabaseName)]
    ADD FILE (NAME = [$(DatabaseName)],
        FILENAME = '$(DefaultDataPath)$(DatabaseName).mdf',
        SIZE = 2304 KB, MAXSIZE = UNLIMITED, FILEGROWTH = 1024 KB)
    TO FILEGROUP [PRIMARY];

Database_log.sqlfile.sql

ALTER DATABASE [$(DatabaseName)]
    ADD LOG FILE (NAME = [$(DatabaseName)_log],
        FILENAME = '$(DefaultDataPath)$(DatabaseName)_log.ldf',
        SIZE = 1024 KB, MAXSIZE = 2097152 MB, FILEGROWTH = 10 %);
The full database creation script that VSTS:DB (or, for that matter, VSDBCMD.exe) generates will now use the SQLCMD variables for naming the MDF and LDF files, allowing you to specify them on the command line or in MSBuild.
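From memory, a per-customer deployment then looks something like the line below; the exact switches vary by GDR version, so treat this as approximate and check VSDBCMD /? (the manifest and TargetDatabase values are illustrative):

REM Deploy the same schema as a brand new customer database
VSDBCMD /a:Deploy /dd+ /manifest:MyDbProject.deploymanifest /p:TargetDatabase=Customer0042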
We do this using a template database that we back up, copy, and restore as new customers are brought online. We don't do any of the schema creation with scripts, but with a live, empty DB.
Hmm, well, it seems that the best answer so far (given the overwhelming response) is to edit the file after the fact... Still looking.