CakePHP can't find table after creating a table - mysql

I create a table directly with a query; I only want to import some data. Therefore I execute a dynamically built query inside a Component class. (I use a random existing model to execute this query; is there a better way?)
$query= "CREATE TABLE IF NOT EXISTS testerdbs (
'Ü1' varchar(6) COLLATE utf8_swedish_ci DEFAULT NULL,
'Ü2' varchar(6) COLLATE utf8_swedish_ci DEFAULT NULL,
'Ü3' int(3) DEFAULT NULL,
'Ü4' varchar(6) COLLATE utf8_swedish_ci DEFAULT NULL,
'Ü5' date DEFAULT NULL
)"
$data = ClassRegistry::init('import_files');
$data->query($query);
This works fine.
In the same request I want to access the created table in the controller.
App::import('Model', "testerdb");
//$this->loadModel("testerdb");
$newTable = ClassRegistry::init("testerdb");
echo '<pre>', print_r($newTable->getColumnTypes()), '</pre>';
If I try to execute this in the same request, I always get the error:
Error: Table testerdbs for model testerdb was not found in datasource default.
If I do exactly the same request again, everything works fine...
I googled for about an hour and it seems that Cake caches the model's table list. If I execute the request a second time, Cake re-caches all the tables and then finds my new one. So I hoped to load or import the created table within the same request, but it doesn't work.
Is there another way to load the table? Where is my mistake?
Thanks for help!

This might be a bit stale, but I just spent the last week trying to work around the problem and maybe this will help someone.
The root problem is that the cache of table names is initialized before you create the temporary table, so the setSource function returns an error that the temporary table does not exist.
The solution is to override the setSource function for the model that you are creating for 'testerdb' and remove the check on table existence (i.e. everything within the test if (method_exists($db, 'listSources'))).
Your model definition should look something like this:
App::uses('AppModel', 'Model');

class testerdb extends AppModel {

    // Same as AppModel::setSource(), but without the listSources()
    // existence check that fails for tables created in the current request.
    public function setSource($tableName) {
        $this->setDataSource($this->useDbConfig);
        $db = ConnectionManager::getDataSource($this->useDbConfig);
        $this->table = $this->useTable = $tableName;
        $this->tableToModel[$this->table] = $this->alias;
        $this->schema();
    }
}
Many thanks to whoever posted the link below. This has worked with my CakePHP 2.0 instance.
http://web2.0goodies.com/blog/uncategorized/mysql-temporary-tables-and-cakephp-1-3/
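With the model above in place, controller usage might look like this (a minimal sketch based on the names in the question, not part of the original answer):

// Same request in which the CREATE TABLE query was run
App::uses('ClassRegistry', 'Utility');

$newTable = ClassRegistry::init('testerdb'); // the constructor invokes the overridden setSource()
$newTable->setSource('testerdbs');           // point the model at the freshly created table
debug($newTable->getColumnTypes());

A commonly cited alternative workaround is to clear or disable the datasource's table-name cache before initializing the model, e.g. Cache::clear(false, '_cake_model_') or setting cacheSources to false on the 'default' datasource.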

Why would you want only a temporary table? I would just store whatever data you are importing temporarily in an in-memory model or data structure.
If the table is not temporary, then just create it statically before you run your program.

Related

How to Alter Blob to Varchar in Laravel Make:Migration

I need to change a Blob field type to a Varchar(128). The data in the field will fit the target field size, it's text, and shouldn't have a problem with UTF-8.
Sample data, all data is in this format:
{"weight":"0","height":"0","width":"0","length":"0"}
I'm using Laravel's make:migration to handle the conversion.
How should I write the SQL?
I know how to write a Laravel migration, and I know how to alter a field. But a Blob isn't a text field, nor is a Blob normally converted down to a Varchar. I've done some manual UTF-8 conversion of Blobs in the past and know you can mess up your data if you don't do it right. So my concern here is not to mess up my data with a Laravel migration; I don't believe the migration's down method can undo corrupted data.
If my data fits the varchar target size and the data fits the UTF-8 charset, am I good with a straightforward ALTER statement:
DB::query("ALTER TABLE DB1.products CHANGE COLUMN dimensions dimensions varchar(128) DEFAULT NULL;");
You shouldn't use raw SQL for this; just create a migration and use the change() method:
Schema::table('table_name', function ($table) {
    $table->string('column_name')->change();
});
https://laravel.com/docs/5.7/migrations#modifying-columns
Considering your comment, the SQL would be:
ALTER TABLE tablename MODIFY column_name VARCHAR(128);
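Put together, a complete migration along those lines might look like this (a sketch; the class name is made up, and note that change() on an existing column requires the doctrine/dbal package):

<?php

use Illuminate\Database\Migrations\Migration;
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Support\Facades\Schema;

class ChangeDimensionsToVarcharOnProducts extends Migration
{
    public function up()
    {
        Schema::table('products', function (Blueprint $table) {
            // BLOB -> VARCHAR(128), nullable as in the original ALTER
            $table->string('dimensions', 128)->nullable()->change();
        });
    }

    public function down()
    {
        // Best-effort revert to a BLOB; this cannot restore corrupted data
        Schema::table('products', function (Blueprint $table) {
            $table->binary('dimensions')->nullable()->change();
        });
    }
}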
Run composer install and then composer update in the console, drop your table from the database, and delete the old migration. Then create a new migration using
php artisan make:migration change_col_datatype --table=table_name
and make the changes as below:
Schema::table('your_table_name', function ($table) {
    $table->string('your_column_name')->change();
});

public function down()
{
    Schema::dropIfExists('tablename');
}
The SQL statement:
\DB::statement('ALTER TABLE products CHANGE COLUMN dimensions dimensions VARCHAR(128) DEFAULT NULL;');
Worked fine.

Doctrine schema update always try to add NOT NULL

I have a fresh Symfony 2.8 installation with a Doctrine and MySQL 5.6 stack.
After executing doctrine:schema:update --force, I can see
Database schema updated successfully! "x" queries were executed
Here is my problem: even if I execute it multiple times, Doctrine always finds schema differences.
With --dump-sql, I can see that all of these queries are related to:
adding NOT NULL on a string primary key
adding NOT NULL on a datetime field
However, when I check my database, these columns already have NOT NULL.
Here is an example of a single property/column:
class MyEntity
{
    /**
     * @ORM\Id
     * @ORM\Column(type="string", length=5, name="cd_key")
     * @ORM\GeneratedValue(strategy="AUTO")
     */
    private $code;
...
Here is the result of SHOW CREATE TABLE my_entity;:
CREATE TABLE `my_entity` (
`cd_key` varchar(5) COLLATE utf8_unicode_ci NOT NULL,
`label` varchar(255) COLLATE utf8_unicode_ci NOT NULL,
`number` int(11) NOT NULL,
PRIMARY KEY (`cd_key`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci ;
And here is the query Doctrine tries to execute with the doctrine:schema:update command:
ALTER TABLE my_entity CHANGE cd_key cd_key VARCHAR(5) NOT NULL;
I clear my Symfony cache between each command execution.
I tried adding nullable=false to the @Column annotation (even though it's already defined as an @Id), but it had no effect.
doctrine:schema:validate doesn't find any mapping problem (except the sync, of course).
I tried to drop and recreate the full database, but no effect.
Any ideas?
This issue has been reported in 2017 at least here, here and here, and was supposed to be fixed by this PR.
Updating doctrine/dbal would be the solution (not working for me though):
$ composer require doctrine/dbal:^2.7.1
Unsetting the server version (mysql/mariadb) from the configuration would also fix the problem (still not for me though).
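For reference, that server version setting lives in the Doctrine DBAL configuration; in a Symfony 2.8 config.yml it would look roughly like this (a sketch; remove the key, or align it with your actual server, as suggested above):

# app/config/config.yml
doctrine:
    dbal:
        driver: pdo_mysql
        server_version: '5.6'  # unset this, or make sure it matches your real MySQL/MariaDB version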
If you are using migrations, you can still adjust them manually (but your schema will always be out of sync).
I've encountered a similar problem. For me, deleting the table using SQL and then running doctrine:schema:update --force again did the trick.
It seems that doing some SQL requests manually confuses Doctrine.
That said, I'm assuming you've put @ORM\Table(name="my_entity") and @ORM\Entity(repositoryClass="myrepository") over your class definition (see the snippet below) ;).
Hope it helped.
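For reference, those annotations sit above the class definition like this (a sketch; the repository class name here is hypothetical):

use Doctrine\ORM\Mapping as ORM;

/**
 * @ORM\Table(name="my_entity")
 * @ORM\Entity(repositoryClass="AppBundle\Repository\MyEntityRepository")
 */
class MyEntity
{
    // ...
}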

Synchronizing XML file to MySQL database

My company uses internal management software for storing products. They want to copy all the products into a MySQL database so they can make their products available on the company website.
Notice: they will continue to use their own internal software. This software can export all the products in various file formats (including XML).
The synchronization does not have to be in real time; they are satisfied with synchronizing the MySQL database once a day (late at night).
Also, each product in their software has one or more images, so I also have to make the images available on the website.
Here is an example of an XML export:
<?xml version="1.0" encoding="UTF-8"?>
<export_management userid="78643">
    <product id="1234">
        <version>100</version>
        <insert_date>2013-12-12 00:00:00</insert_date>
        <warrenty>true</warrenty>
        <price>139,00</price>
        <model>
            <code>324234345</code>
            <model>Notredame</model>
            <color>red</color>
            <size>XL</size>
        </model>
        <internal>
            <color>green</color>
            <size>S</size>
        </internal>
        <options>
            <s_option>aaa</s_option>
            <s_option>bbb</s_option>
            <s_option>ccc</s_option>
            <s_option>ddd</s_option>
            <s_option>eee</s_option>
            <s_option>fff</s_option>
            ...
            <extra_option>ggg</extra_option>
            <extra_option>hhh</extra_option>
            <extra_option>jjj</extra_option>
            <extra_option>kkk</extra_option>
            ...
        </options>
        <images>
            <image>
                <small>1234_0.jpg</small>
            </image>
            <image>
                <small>1234_1.jpg</small>
            </image>
        </images>
    </product>
    <product id="5321">
    ...
    </product>
    <product id="2621">
    ...
    </product>
    ...
</export_management>
Any ideas for how I can do it?
Please let me know if my question is not clear. Thanks.
EDIT:
I used SQL like this for each table to fill them with the XML data:
LOAD XML LOCAL INFILE '/products.xml' INTO TABLE table_name ROWS IDENTIFIED BY '<tag_name>';
Then, checking the tables' content, I can see that the field "id" (primary key) automatically stays the same for each respective product row in each table. That's correct and surprisingly awesome!
The problem now is the <options> element, because it contains sub-elements with the same name (<s_option> and <extra_option>). The values of these tags are always different (that is, there is no specific list of values; they are entered manually by an employee) and I also don't know how many there are for each product. I read that storing them as an array is not so good, but if it's the only simple solution I can accept it.
The way that I would approach the problem in your case is:
Create a respective set of corresponding tables in the database, which in turn will represent the company's product model, by extracting the modelling from your given XML.
Create and use a scheduled daily synchronization job that executes a few SQL commands to refresh the data or introduce new data by parsing the products XML into the created tables.
To be more practical about it all:
As for the database tables, I can easily identify three tables to be created based on your XML (see the yellow-marked elements in the diagram):
Products
ProductsOptions
ProductsImages
(This diagram was created from an XSD that was generated from your XML.)
Everything else can be treated as regular columns in the Products table, since it constitutes a 1-1 relationship only.
Next, create the required tables in your database (you can use an XSD2DB schema converter tool to create the DDL script; I did it manually):
companydb.products
CREATE TABLE companydb.products (
Id INT(11) NOT NULL,
Version INT(11) DEFAULT NULL,
InsertDate DATETIME DEFAULT NULL,
Warrenty TINYINT(1) DEFAULT NULL,
Price DECIMAL(19, 2) DEFAULT NULL,
ModelCode INT(11) DEFAULT NULL,
ModelColor VARCHAR(10) DEFAULT NULL,
Model VARCHAR(255) DEFAULT NULL,
ModelSize VARCHAR(10) DEFAULT NULL,
InternalColor VARCHAR(10) DEFAULT NULL,
InternalSize VARCHAR(10) DEFAULT NULL,
PRIMARY KEY (Id)
)
ENGINE = INNODB
CHARACTER SET utf8
COLLATE utf8_general_ci
COMMENT = 'Company''s Products';
companydb.productimages
CREATE TABLE companydb.productimages (
Id INT(11) NOT NULL AUTO_INCREMENT,
ProductId INT(11) DEFAULT NULL,
Size VARCHAR(10) DEFAULT NULL,
FileName VARCHAR(255) DEFAULT NULL,
PRIMARY KEY (Id),
CONSTRAINT FK_productsimages_products_Id FOREIGN KEY (ProductId)
REFERENCES companydb.products(Id) ON DELETE RESTRICT ON UPDATE RESTRICT
)
ENGINE = INNODB
AUTO_INCREMENT = 1
CHARACTER SET utf8
COLLATE utf8_general_ci
COMMENT = 'Products'' Images';
companydb.productoptions
CREATE TABLE companydb.productoptions (
Id INT(11) NOT NULL AUTO_INCREMENT,
ProductId INT(11) DEFAULT NULL,
Type VARCHAR(255) DEFAULT NULL,
`Option` VARCHAR(255) DEFAULT NULL,
PRIMARY KEY (Id),
CONSTRAINT FK_producstsoptions_products_Id FOREIGN KEY (ProductId)
REFERENCES companydb.products(Id) ON DELETE RESTRICT ON UPDATE RESTRICT
)
ENGINE = INNODB
AUTO_INCREMENT = 1
CHARACTER SET utf8
COLLATE utf8_general_ci;
As for the synchronization job, you can easily create a MySQL event and use the Event Scheduler to control it. I created the required event, which calls a stored procedure that you'll find below (SyncProductsDataFromXML):
CREATE DEFINER = 'root'@'localhost' EVENT companydb.ProductsDataSyncEvent
ON SCHEDULE EVERY '1' DAY STARTS '2014-06-13 01:27:38'
COMMENT 'Synchronize Products table with Products XMLs'
DO BEGIN
    SET @productsXml = LOAD_FILE('C:/MySqlXmlSync/products.xml');
    CALL SyncProductsDataFromXML(@productsXml);
END;

ALTER EVENT companydb.ProductsDataSyncEvent ENABLE;
Now for the interesting part: here is the synchronization stored procedure (note how the event above calls it):
CREATE DEFINER = 'root'@'localhost'
PROCEDURE companydb.SyncProductsDataFromXML(IN productsXml MEDIUMTEXT)
BEGIN
    DECLARE totalProducts INT;
    DECLARE productIndex INT;

    SET totalProducts = ExtractValue(productsXml, 'count(//export_management/product)');
    SET productIndex = 1;

    WHILE productIndex <= totalProducts DO
        SET @productId = CAST(ExtractValue(productsXml, 'export_management/product[$productIndex]/@id') AS UNSIGNED);

        INSERT INTO products(`Id`, `Version`, InsertDate, Warrenty, Price, ModelCode, Model, ModelColor, ModelSize, InternalColor, InternalSize)
        VALUES(
            @productId,
            ExtractValue(productsXml, 'export_management/product[$productIndex]/version'),
            ExtractValue(productsXml, 'export_management/product[$productIndex]/insert_date'),
            CASE WHEN (ExtractValue(productsXml, 'export_management/product[$productIndex]/warrenty')) <> 'false' THEN 1 ELSE 0 END,
            CAST(ExtractValue(productsXml, 'export_management/product[$productIndex]/price') AS DECIMAL),
            ExtractValue(productsXml, 'export_management/product[$productIndex]/model/code'),
            ExtractValue(productsXml, 'export_management/product[$productIndex]/model/model'),
            ExtractValue(productsXml, 'export_management/product[$productIndex]/model/color'),
            ExtractValue(productsXml, 'export_management/product[$productIndex]/model/size'),
            ExtractValue(productsXml, 'export_management/product[$productIndex]/internal/color'),
            ExtractValue(productsXml, 'export_management/product[$productIndex]/internal/size')
        );

        SET @totalImages = ExtractValue(productsXml, 'count(//export_management/product[$productIndex]/images/image)');
        SET @imageIndex = 1;
        WHILE (@imageIndex <= @totalImages) DO
            INSERT INTO productimages(ProductId, Size, FileName)
            VALUES(@productId, 'small', ExtractValue(productsXml, 'export_management/product[$productIndex]/images/image[$@imageIndex]/small'));
            SET @imageIndex = @imageIndex + 1;
        END WHILE;

        SET @totalStandardOptions = ExtractValue(productsXml, 'count(//export_management/product[$productIndex]/options/s_option)');
        SET @standardOptionIndex = 1;
        WHILE (@standardOptionIndex <= @totalStandardOptions) DO
            INSERT INTO productoptions(ProductId, `Type`, `Option`)
            VALUES(@productId, 'Standard Option', ExtractValue(productsXml, 'export_management/product[$productIndex]/options/s_option[$@standardOptionIndex]'));
            SET @standardOptionIndex = @standardOptionIndex + 1;
        END WHILE;

        SET @totalExtraOptions = ExtractValue(productsXml, 'count(//export_management/product[$productIndex]/options/extra_option)');
        SET @extraOptionIndex = 1;
        WHILE (@extraOptionIndex <= @totalExtraOptions) DO
            INSERT INTO productoptions(ProductId, `Type`, `Option`)
            VALUES(@productId, 'Extra Option', ExtractValue(productsXml, 'export_management/product[$productIndex]/options/extra_option[$@extraOptionIndex]'));
            SET @extraOptionIndex = @extraOptionIndex + 1;
        END WHILE;

        SET productIndex = productIndex + 1;
    END WHILE;
END
And you're done; this is the final expected result of this process:
NOTE: I've committed the entire code to one of my GitHub repositories: XmlSyncToMySql
UPDATE:
Because your XML data might be larger than the maximum allowed for a TEXT field, I've changed the productsXml parameter to MEDIUMTEXT. Look at this answer, which outlines the maximum allowed size of the various text datatypes:
Maximum length for MYSQL type text
As this smells like integration work, I would suggest a multi-pass, multi-step procedure with an interim format that is not only easy to import into MySQL but also helps you wrap your mind around the problems this integration ships with, and lets you test a solution in small steps.
This procedure works well if you can flatten the tree structure expressed within the XML export into a list of products with fixed, named attributes.
Query all product elements with an XPath query from the XML, and iterate over the resulting products.
Query all product attributes relative to the context node of the product from the previous query, again using one XPath per attribute.
Store the result of all attributes per product as one row in a CSV file.
Store the image file names in the CSV as well (the basenames), but put the files into a folder of their own.
Create the DDL of the MySQL table in the form of an .sql file.
Run that .sql file against the mysql command line.
Import the CSV file into that table via the mysql command line.
You should get quick results within hours. If it turns out that products cannot be mapped to a single row because some attributes have multiple values (what you call an array in your question), consider turning these into JSON strings if you cannot drop them entirely (just hope you don't need to display complex data in the beginning). Doing so violates normal form, but as you describe it, the MySQL table is only an intermediate store here anyway, so I would aim for simplicity of the data structure in the database; otherwise, queries for a simple and fast display on the website will become the next burden.
So my suggestion here basically is: turn the tree structure into a (more) flat list, both to simplify the transition and to ease templating for display; see the sketch below.
Having an intermediate format here also allows you to replay in case things go wrong.
It also allows you to mock the whole templating more easily.
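As a rough illustration of the flattening pass, a small PHP script with SimpleXML could turn the export into a CSV (a sketch; the file names and chosen columns are assumptions, and the multi-valued options are collapsed into JSON strings as discussed above):

<?php
// flatten.php - read the export and write one CSV row per product
$xml = simplexml_load_file('products.xml');
$out = fopen('products.csv', 'w');
fputcsv($out, array('id', 'version', 'insert_date', 'warrenty', 'price',
    'model_code', 'model', 'model_color', 'model_size', 's_options', 'extra_options'));

foreach ($xml->xpath('//product') as $product) {
    // one XPath per multi-valued attribute, relative to the product node
    $sOptions     = array_map('strval', $product->xpath('options/s_option'));
    $extraOptions = array_map('strval', $product->xpath('options/extra_option'));

    fputcsv($out, array(
        (string) $product['id'],
        (string) $product->version,
        (string) $product->insert_date,
        (string) $product->warrenty,
        str_replace(',', '.', (string) $product->price), // "139,00" -> "139.00"
        (string) $product->model->code,
        (string) $product->model->model,
        (string) $product->model->color,
        (string) $product->model->size,
        json_encode($sOptions),     // multi-valued -> JSON string
        json_encode($extraOptions),
    ));
}
fclose($out);

The resulting CSV can then be imported via the mysql command line, and the image basenames handled the same way.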
Alternatively, it is also possible to store the XML of each product inside the database (keep the chunks in a second table so you can keep varchar (variable-length) fields out of the first table) and keep some other columns as (flat) reference columns to query against. If it's for templating needs, turning the XML into a SimpleXMLElement is often very nice, giving you a structured, non-primitive data type as a view object that you can traverse and whose options you can loop over. It would work similarly with JSON; however, keeping the XML does not cross a format boundary, and XML can also express more structure than JSON.
You're taking a very technology-centered approach to this. I think it's wise to start by looking at the functional specifications.
It helps to have a simple UML class diagram of the business class Product. Show its attributes as the business sees them. So:
How is Model related to Product? Can there be multiple Models for one Product or the other way around?
What kind of data is stored in the Internal element? In particular: How can Internal's color and size be different from Model's color and size?
And specifically about the web application:
Is the Web application the only application that will be interested in this export?
Should the web application care about versioning or simply display the last available version?
Which specific options are interesting to the web application, like a discount property, a vendor name property, or others?
What should the Product details page look like, what data needs to be displayed where?
Will there be other pages on the website displaying product information, and what product information will they list?
Then you'll know which attributes need to be readily available to the web application (as columns in the Product table) and (perhaps) which ones may be simply stored in one big XML blob in the database.

Codeigniter upgradable module logic for database process

I am trying to build my own CMS using CodeIgniter.
I have already written some modules, but over time I have made changes to them.
Currently, if I want to upgrade a module, I send the files via FTP and change the database fields with phpMyAdmin.
This takes a lot of time, there is a high chance of missing a change, and for every project where I've used the module I have to repeat the changes.
Now I am planning to build an installation system.
My modules directory structure is like below:
/modules
/modules/survey/
/modules/survey/config
/modules/survey/config/autoload.php
/modules/survey/config/config.php
/modules/survey/config/routes.php
/modules/survey/config/install.php
/modules/survey/controllers
/modules/survey/controllers/entry.php...
/modules/survey/models
/modules/survey/models/survey.php...
/modules/survey/views
/modules/survey/views/index.php...
I thought that all modules should have an install.php file in the config directory that keeps the settings of the related module, like below:
$config['version'] = 1.1; //or 1.2, 1.3 etc.
$config['module'] = 'Survey Module';
$config['module_slug'] = 'survey';
$config['module_db_table'] = 'module_survey';
I have an installed_modules table already:
id, module, module_slug, version
Now I am trying to write an installation script, like below.
Before starting, I zip the module's files.
1- Upload the zip file through an installation page to a temp directory
2- Unzip the module in this temp directory
3- Find install.php
4- Get the module's information from install.php
5- Check if this module is already in the installed_modules table
6a) If it's not: make a new module_survey table and copy the temp directory into the real modules directory.
6b) If it is: change the structure of this table without losing the data added before, then delete all module files and copy the new ones from temp into the modules directory.
7- When everything is done, delete the temp directory
I am stuck on 6a and 6b.
For 6a: how should I create e.g. the 'module_survey' table?
Should I add a $config['db_query'] to install.php like
$config['db_query'] = "CREATE TABLE IF NOT EXISTS `module_survey` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`name` varchar(50) DEFAULT NULL,
`lang_id` int(11) NOT NULL DEFAULT '1',
`usort` int(3) DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 ;
";
and run this query? Or what is your advice here? There may not be just one table; a module could have two or more tables with relations to each other.
And for 6b:
I thought I should create a new temp table, named e.g. "temp_module_survey".
For the old fields:
$oldFields = $this->db->field_data('module_survey');
For the new fields:
$newFields = $this->db->field_data('temp_module_survey');
Then compare which fields are newly added, which are deleted and whose field data has changed, and:
add the new fields to the old table
delete the unnecessary fields from the old table
update the fields whose field data has changed
Then remove the temporary table.
To summarize: what should I do to change the database structure without losing the old data?
I hope I could explain it clearly.
Thank you.
Phil Sturgeon's codeigniter-migrations can help.
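For illustration, a first migration for the survey table from the question might look like this (a sketch using CodeIgniter's built-in migration library and dbforge; the field names are taken from the question's CREATE TABLE):

<?php
// application/migrations/001_add_module_survey.php
class Migration_Add_module_survey extends CI_Migration {

    public function up()
    {
        $this->dbforge->add_field(array(
            'id'      => array('type' => 'INT', 'constraint' => 11, 'auto_increment' => TRUE),
            'name'    => array('type' => 'VARCHAR', 'constraint' => 50, 'null' => TRUE),
            'lang_id' => array('type' => 'INT', 'constraint' => 11, 'default' => 1),
            'usort'   => array('type' => 'INT', 'constraint' => 3, 'null' => TRUE),
        ));
        $this->dbforge->add_key('id', TRUE); // primary key
        $this->dbforge->create_table('module_survey', TRUE); // TRUE = IF NOT EXISTS
    }

    public function down()
    {
        $this->dbforge->drop_table('module_survey');
    }
}

Each later module upgrade then becomes a new numbered migration file, and the library tracks the installed version for you, which replaces the manual field_data() comparison from step 6b.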

Why can't I set the auto_increment value in my magento setup script?

I am creating a custom module with a few custom database tables. I need to set the auto_increment value to 5000 rather than having the default of 1. This can be accomplished pretty easily, but I am running into problems when trying to do it via a Magento install script. I want to know why, and how to work around the issue. Here are more details.
When I run the following create statement from a regular MySQL client (like HeidiSQL or the standard CLI), the auto_increment value gets set correctly to 5000.
CREATE TABLE mytable (
myid INTEGER NOT NULL PRIMARY KEY AUTO_INCREMENT,
other_column INTEGER NULL
) ENGINE=InnoDb DEFAULT CHARSET=UTF8 AUTO_INCREMENT=5000;
But when I put that exact same query into a Magento install script, the auto_increment is set to 1 after it runs. To be clear, the table is created as I expect, except that the auto_increment isn't set to 5000. Here is the code in the install script.
file: app/code/local/Mycompany/Mymodule/sql/mymodule_setup/mysql4-install-0.0.1.php
<?php
$installer = $this;
$installer->startSetup();
$installer->run("
CREATE TABLE {$this->getTable('mytable')} (
myid INTEGER NOT NULL PRIMARY KEY AUTO_INCREMENT,
other_column INTEGER NULL
) ENGINE=InnoDb DEFAULT CHARSET=UTF8 AUTO_INCREMENT=5000;
");
$installer->endSetup();
Why is this happening? Are there any workarounds?
(I'll also mention that I have tried to set the auto_increment with an ALTER statement, and I get the same problem)
You can set up your install script as an .sql file instead of .php: mysql4-install-0.0.1.sql
Check out _modifyResourceDb in Mage/Core/Model/Resource/Setup.php:
try {
    switch ($fileType) {
        case 'sql':
            $sql = file_get_contents($sqlFile);
            if ($sql != '') {
                $result = $this->run($sql);
            } else {
                $result = true;
            }
            break;
        case 'php':
            $conn = $this->_conn;
            $result = include($sqlFile);
            break;
        default:
            $result = false;
    }
Short answer: script-wise I don't think it's possible, and the .sql method may not work either, since it still calls the run() method.
Add an auto_increment column in Magento setup script without using SQL
You can also dig into lib/Varien/Db/Adapter/Pdo/Mysql.php for a deeper view of what's going on in the background, like multi_query and _splitMultiQuery.
Some further reading on connection methods can also be found here:
ALTER TABLE in Magento setup script without using SQL
More than likely you're going to have to go outside the realm of "The Magento Way" of doing things and, after your module is installed, run a custom post-install script against your table to adjust the auto_increment directly. A sketch of such a script follows.
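As the answer suggests stepping outside Magento, a standalone one-off script could be as small as this (a sketch; the connection parameters and table name are placeholders to adjust for your setup):

<?php
// post_install.php - run once after the module is installed.
// Adjust the connection parameters to match app/etc/local.xml.
$pdo = new PDO('mysql:host=localhost;dbname=magento', 'db_user', 'db_pass');
$pdo->exec('ALTER TABLE mytable AUTO_INCREMENT = 5000');
echo "auto_increment adjusted\n";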
I don't think you can set the primary key that way. All my code is built the following way and it works perfectly fine:
<?php
$installer = $this;
$installer->startSetup();
$installer->run("
CREATE TABLE {$this->getTable('mytable')} (
myid INTEGER NOT NULL auto_increment,
other_column INTEGER NULL,
PRIMARY KEY (`myid`)
) ENGINE=InnoDb DEFAULT CHARSET=UTF8 AUTO_INCREMENT=5000;
");
$installer->endSetup();