I think I know the answer but I'm not confident enough yet to just go with it.
I think I have redundant info that could be fixed by making a table for the description. In my monster inserts below I have 'mdesc', which holds the same data that's in the dialogue inserts as 'intro'. Should I make a monster description table to hold an mdescid and a description? Something like this:
CREATE TABLE `mdesc`(
`mdescid` int(15) UNSIGNED NOT NULL,
`monsterdesc` varchar(50) NOT NULL
);
So then I could just put the mdescid in both my dialogue table and the monster table, and get rid of the intro column in the dialogue inserts.
-- monster inserts
INSERT INTO monster(monsterid,monstername,monsterloc,mdesc,weaponval) VALUES (1,'dragon',11,'a dragon',1);
INSERT INTO monster(monsterid,monstername,monsterloc,mdesc,weaponval) VALUES (2,'spider',8,'a poisonous spider',2);
INSERT INTO monster(monsterid,monstername,monsterloc,mdesc,weaponval) VALUES (3,'wasps',7,'a swarm of wasps',3);
INSERT INTO monster(monsterid,monstername,monsterloc,mdesc,weaponval) VALUES (4,'enchantress',13,'a seductive enchantress',4);
INSERT INTO monster(monsterid,monstername,monsterloc,mdesc,weaponval) VALUES (5,'bellyfish',5,'a slimy belly fish',5);
-- dialogue inserts
INSERT INTO dialogue(monsterid,intro,attack,outcome1,outcome2,outcome3)
values(1,'"a dragon."','"you fight and,"','" kill it with your sword!"','"it kills and eats you."','" you both run away!"');
There's no reason to spread the info for one monster over three tables. That's called a one-to-one relationship: one row of a table relates to only one row of another table, and vice versa. It's rarely useful.
What you could do instead is change dialogue to store the monster, the situation, and the text.
create table monster_dialogues (
    monster_id integer not null,
    situation varchar(255) not null,
    dialogue varchar(255) not null,
    foreign key(monster_id) references monsters(id),
    unique(monster_id, situation)
);
This is a one-to-many relationship: each monster has many dialogues. It also avoids having to add a new column every time you have a new situation.
And instead of reproducing the same basic text over and over, have a default in your code.
-- Note, avoid including formatting like quotes and conjunctions in your data.
-- It restricts how the data can be used.
-- Leave the formatting of dialogue into sentences to the application.
select coalesce(dialogue, 'you fight')
from monster_dialogues
where monster_id = ?
and situation = 'attack'
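A quick sketch of that lookup using Python's sqlite3 module (the table is as above; the sample dialogue text is made up for illustration). One subtlety: COALESCE alone doesn't help when no row matches at all, so the lookup is wrapped in a scalar subquery, which yields NULL rather than an empty result set.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE monster_dialogues (
        monster_id INTEGER NOT NULL,
        situation  VARCHAR(255) NOT NULL,
        dialogue   VARCHAR(255) NOT NULL,
        UNIQUE (monster_id, situation)
    )""")  # foreign key omitted here since there is no monsters table in this sketch
con.execute("INSERT INTO monster_dialogues VALUES (1, 'attack', 'the dragon breathes fire at you')")

def dialogue_for(monster_id, situation, default="you fight"):
    # The scalar subquery returns NULL when the row is missing,
    # so COALESCE can fall back to the default text.
    row = con.execute(
        """SELECT COALESCE(
               (SELECT dialogue FROM monster_dialogues
                 WHERE monster_id = ? AND situation = ?), ?)""",
        (monster_id, situation, default)).fetchone()
    return row[0]

print(dialogue_for(1, "attack"))   # custom text for the dragon
print(dialogue_for(2, "attack"))   # no row: falls back to the default
```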
Note that using a database for this is probably overkill. Unless there's a very large number of monsters, or you need to search through a large number of dialogues, a simple JSON file would do.
{
"dragon": {
"damage": 100,
"health": 200,
"dialogues": {
"attack": "you fight",
}
}
}
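The JSON-file approach can be sketched like this (the file contents mirror the example above; the lookup helper is hypothetical):

```python
import json

# Stand-in for a monsters.json file, matching the example above
MONSTERS_JSON = """
{
  "dragon": {
    "damage": 100,
    "health": 200,
    "dialogues": {
      "attack": "you fight"
    }
  }
}
"""

monsters = json.loads(MONSTERS_JSON)

def dialogue_for(monster, situation, default="you fight"):
    # dict.get gives the same "default when missing" behavior as COALESCE
    return monsters.get(monster, {}).get("dialogues", {}).get(situation, default)

print(dialogue_for("dragon", "attack"))  # "you fight"
print(dialogue_for("dragon", "flee"))    # missing situation: default applies
```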
The SQLite JSON1 extension has some really neat capabilities. However, I have not been able to figure out how I can update or insert individual JSON attribute values.
Here is an example
CREATE TABLE keywords
(
id INTEGER PRIMARY KEY,
lang INTEGER NOT NULL,
kwd TEXT NOT NULL,
locs TEXT NOT NULL DEFAULT '{}'
);
CREATE INDEX kwd ON keywords(lang,kwd);
I am using this table to store keyword searches and to record, in the locs object, the locations from which the search was initiated. A sample entry in this table would look like the one shown below:
id:1,lang:1,kwd:'stackoverflow',locs:'{"1":1,"2":1,"5":1}'
The location object attributes here are indices to the actual locations stored elsewhere.
Now imagine the following scenarios
A search for stackoverflow is initiated from location index "2". In this case I simply want to increment the value at that index so that after the operation the corresponding row reads
id:1,lang:1,kwd:'stackoverflow',locs:'{"1":1,"2":2,"5":1}'
A search for stackoverflow is initiated from a previously unknown location index "7" in which case the corresponding row after the update would have to read
id:1,lang:1,kwd:'stackoverflow',locs:'{"1":1,"2":1,"5":1,"7":1}'
It is not clear to me that this can in fact be done. I tried something along the lines of
UPDATE keywords json_set(locs,'$.2','2') WHERE kwd = 'stackoverflow';
which gave the error message "error near json_set". I'd be most obliged to anyone who might be able to tell me how/whether this should/can be done.
It is not necessary to create such complicated SQL with subqueries to do this.
The SQL below would solve your needs.
UPDATE keywords
SET locs = json_set(locs,'$.7', IFNULL(json_extract(locs, '$.7'), 0) + 1)
WHERE kwd = 'stackoverflow';
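The same pattern can be exercised from Python's sqlite3 module (assuming a SQLite build with JSON1 enabled, which modern Python builds ship by default). The bump_location helper is a hypothetical wrapper; the key step is exactly the json_set/IFNULL/json_extract combination above:

```python
import json
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE keywords (
        id   INTEGER PRIMARY KEY,
        lang INTEGER NOT NULL,
        kwd  TEXT NOT NULL,
        locs TEXT NOT NULL DEFAULT '{}'
    )""")
con.execute("INSERT INTO keywords VALUES (1, 1, 'stackoverflow', ?)",
            ('{"1":1,"2":1,"5":1}',))

def bump_location(kwd, loc):
    # json_extract returns NULL for a missing key; IFNULL turns that into 0,
    # so both the increment case and the first-seen case are covered.
    path = '$."%s"' % loc  # quote the key, since the location indices are numeric strings
    con.execute(
        "UPDATE keywords SET locs = json_set(locs, ?, "
        "IFNULL(json_extract(locs, ?), 0) + 1) WHERE kwd = ?",
        (path, path, kwd))

bump_location('stackoverflow', '2')  # existing index: increment it
bump_location('stackoverflow', '7')  # unknown index: starts at 1

locs = json.loads(con.execute(
    "SELECT locs FROM keywords WHERE kwd = 'stackoverflow'").fetchone()[0])
print(locs)
```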
I know this is old, but it's the first link when searching, and it deserves a better solution.
I could have just deleted this question but given that the SQLite JSON1 extension appears to be relatively poorly understood I felt it would be more useful to provide an answer here for the benefit of others. What I have set out to do here is possible but the SQL syntax is rather more convoluted.
UPDATE keywords SET locs =
    (SELECT json_set(json(keywords.locs), '$.N',
        IFNULL(
            (SELECT json_extract(keywords.locs, '$.N') FROM keywords WHERE id = '1'),
            0)
        + 1)
    FROM keywords WHERE id = '1')
WHERE id = '1';
will accomplish both of the updates I have described in my original question above (N stands for the location index being updated). Given how complicated this looks, a few explanations are in order:
The UPDATE keywords part does the actual updating, but it needs to know what to update.
The SELECT json_set part is where we establish the value to be updated.
If the relevant value does not exist in the first place, we do not want to do a + 1 on a null value, so we do an IFNULL test.
The WHERE id = ... parts ensure that we target the right row.
Having now worked with JSON1 in SQLite for a while, I have a tip to share with others going down the same road. It is easy to waste your time writing extremely convoluted and hard-to-maintain SQL in an effort to perform in-place JSON manipulation. Consider using SQLite temporary tables (CREATE TEMP TABLE ...) to store intermediate results and writing a sequence of SQL statements instead. This makes the code a whole lot easier to understand and to maintain.
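A minimal sketch of that staging approach, again via Python's sqlite3 (table and temp-table names are hypothetical); the extracted count is staged first, which leaves the final UPDATE much flatter than the nested-subquery version:

```python
import json
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE keywords (id INTEGER PRIMARY KEY, locs TEXT NOT NULL DEFAULT '{}')")
con.execute("""INSERT INTO keywords VALUES (1, '{"1":1,"2":1,"5":1}')""")

# Step 1: stage the current counts in a temp table instead of nesting subqueries
con.execute("""
    CREATE TEMP TABLE current_counts AS
    SELECT id, IFNULL(json_extract(locs, '$."2"'), 0) AS hits
    FROM keywords""")

# Step 2: a much simpler UPDATE that reads from the staged results
con.execute("""
    UPDATE keywords
       SET locs = json_set(locs, '$."2"',
                  (SELECT hits + 1 FROM current_counts
                    WHERE current_counts.id = keywords.id))""")

locs = json.loads(con.execute("SELECT locs FROM keywords").fetchone()[0])
print(locs)
```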
I have two tables which are Many-To-One mapped. However, it is important to maintain the order of the second table, so when I use automapping, Fluent automapper creates a bag. I changed this to force a list by using this command:
.Override(Of ingredients)(Function(map) map.HasMany(Function(x) x.PolygonData).AsList())
(VB.NET syntax)
So I say "AsList" and instead of using a bag, the mapping xml which gets generated contains a list now. Fine so far. However,
the statement generated cannot be handled by MySQL. I use MySQL55Dialect to create the statements and I use exactly that version. But it creates the following create:
create table `ingredients` (
Id INTEGER NOT NULL AUTO_INCREMENT,
Name FLOAT,
Amout FLOAT,
Soup_id INTEGER,
Index INTEGER,
primary key (Id)
)
It crashes because of the line "Index INTEGER," but I don't know what to do here. Any ideas?
Thanks!!
Best,
Chris
I would suspect that Index is a reserved keyword in MySQL. To avoid the conflict, we can define a different index column name (sorry for the C# notation):
HasMany(x => x.PolygonData)
.AsList(idx => idx.Column("indexColumnName").Type<int>())
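Index is indeed a reserved word in MySQL, so quoting the column name is another way out; I believe NHibernate treats backticks in a mapped column name as a request to quote it for the dialect, though renaming as above is the simpler fix. The quoting itself can be sketched with sqlite3, which accepts MySQL-style backticks (the table here is a stand-in for the generated one):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# `Index` is a reserved word; backtick-quoting it (MySQL style, which
# SQLite also accepts) lets it be used as a column name.
con.execute("""
    CREATE TABLE ingredients (
        Id      INTEGER PRIMARY KEY,
        Soup_id INTEGER,
        `Index` INTEGER
    )""")
con.execute("INSERT INTO ingredients (Soup_id, `Index`) VALUES (1, 0)")
row = con.execute("SELECT `Index` FROM ingredients").fetchone()
print(row[0])  # 0
```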
My company uses internal management software for storing products. They want to copy all the products into a MySQL database so they can make their products available on the company website.
Notice: they will continue to use their own internal software. This software can export all the products in various file formats (including XML).
The synchronization does not have to be in real time; they are satisfied with synchronizing the MySQL database once a day (late at night).
Also, each product in their software has one or more images, so I also have to make the images available on the website.
Here is an example of an XML export:
<?xml version="1.0" encoding="UTF-8"?>
<export_management userid="78643">
<product id="1234">
<version>100</version>
<insert_date>2013-12-12 00:00:00</insert_date>
<warrenty>true</warrenty>
<price>139,00</price>
<model>
<code>324234345</code>
<model>Notredame</model>
<color>red</color>
<size>XL</size>
</model>
<internal>
<color>green</color>
<size>S</size>
</internal>
<options>
<s_option>aaa</s_option>
<s_option>bbb</s_option>
<s_option>ccc</s_option>
<s_option>ddd</s_option>
<s_option>eee</s_option>
<s_option>fff</s_option>
...
<extra_option>ggg</extra_option>
<extra_option>hhh</extra_option>
<extra_option>jjj</extra_option>
<extra_option>kkk</extra_option>
...
</options>
<images>
<image>
<small>1234_0.jpg</small>
</image>
<image>
<small>1234_1.jpg</small>
</image>
</images>
</product>
<product id="5321">
...
</product>
<product id="2621">
...
</product>
...
</export_management>
Any ideas for how I can do this?
Please let me know if my question is not clear. Thanks!
EDIT:
I used SQL like this for each table to fill them with the XML data:
LOAD XML LOCAL INFILE '/products.xml' INTO TABLE table_name ROWS IDENTIFIED BY '<tag_name>';
Then, checking the tables' content, I can see that the field "id" (primary key) has automatically stayed the same for each respective product row in each table. That's correct and surprisingly awesome!
The problem now is the <options> element, because it contains sub-elements with the same names (<s_option> and <extra_option>). The values of these tags are always different (that is, there is no specific list of values; they are entered manually by an employee) and I don't know how many there are for each product. I read that storing them as an array is not so good, but if it's the only simple solution I can accept it.
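One alternative to an array: flatten the repeated tags into one row per option. A hedged Python sketch using the standard library XML parser (the sample is a trimmed-down stand-in for the real export file):

```python
import xml.etree.ElementTree as ET

# A trimmed-down stand-in for the real export file
SAMPLE = """
<export_management userid="78643">
  <product id="1234">
    <options>
      <s_option>aaa</s_option>
      <s_option>bbb</s_option>
      <extra_option>ggg</extra_option>
    </options>
  </product>
</export_management>
"""

root = ET.fromstring(SAMPLE)
option_rows = []
for product in root.iter("product"):
    product_id = int(product.get("id"))
    # One (product_id, type, value) row per option, no matter how many there are
    for tag in ("s_option", "extra_option"):
        for opt in product.iter(tag):
            option_rows.append((product_id, tag, opt.text))

print(option_rows)
```

Each tuple maps directly onto a row in an options table keyed by product id, which sidesteps the "unknown number of values" problem entirely.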
The way that I would approach the problem in your case is:
Create a corresponding set of tables in the database which will represent the company's product model, extracting the modelling from your given XML.
Create and use a scheduled daily synchronization job that executes a few SQL commands to refresh the data, or introduce new data, by parsing the product XMLs into the created tables.
To be more practical about it all:
As for the database tables, I can easily identify three tables to be created based on your XML; look at the yellow-marked elements:
Products
ProductsOptions
ProductsImages
(This diagram was created based on an XSD that was generated from your XML.)
All the rest can be considered regular columns in the Products table, since they constitute a 1-1 relationship only.
Next, create the required tables in your database (you can use an XSD2DB schema converter tool to create the DDL script; I did it manually):
companydb.products
CREATE TABLE companydb.products (
Id INT(11) NOT NULL,
Version INT(11) DEFAULT NULL,
InsertDate DATETIME DEFAULT NULL,
Warrenty TINYINT(1) DEFAULT NULL,
Price DECIMAL(19, 2) DEFAULT NULL,
ModelCode INT(11) DEFAULT NULL,
ModelColor VARCHAR(10) DEFAULT NULL,
Model VARCHAR(255) DEFAULT NULL,
ModelSize VARCHAR(10) DEFAULT NULL,
InternalColor VARCHAR(10) DEFAULT NULL,
InternalSize VARCHAR(10) DEFAULT NULL,
PRIMARY KEY (Id)
)
ENGINE = INNODB
CHARACTER SET utf8
COLLATE utf8_general_ci
COMMENT = 'Company''s Products';
companydb.productimages
CREATE TABLE companydb.productimages (
Id INT(11) NOT NULL AUTO_INCREMENT,
ProductId INT(11) DEFAULT NULL,
Size VARCHAR(10) DEFAULT NULL,
FileName VARCHAR(255) DEFAULT NULL,
PRIMARY KEY (Id),
CONSTRAINT FK_productsimages_products_Id FOREIGN KEY (ProductId)
REFERENCES companydb.products(Id) ON DELETE RESTRICT ON UPDATE RESTRICT
)
ENGINE = INNODB
AUTO_INCREMENT = 1
CHARACTER SET utf8
COLLATE utf8_general_ci
COMMENT = 'Products'' Images';
companydb.productoptions
CREATE TABLE companydb.productoptions (
Id INT(11) NOT NULL AUTO_INCREMENT,
ProductId INT(11) DEFAULT NULL,
Type VARCHAR(255) DEFAULT NULL,
`Option` VARCHAR(255) DEFAULT NULL,
PRIMARY KEY (Id),
CONSTRAINT FK_producstsoptions_products_Id FOREIGN KEY (ProductId)
REFERENCES companydb.products(Id) ON DELETE RESTRICT ON UPDATE RESTRICT
)
ENGINE = INNODB
AUTO_INCREMENT = 1
CHARACTER SET utf8
COLLATE utf8_general_ci;
As for the synchronization job, you can easily create a MySQL event and use the Event Scheduler to control it. I created the required event, which calls the stored procedure that you'll find below (SyncProductsDataFromXML). Look:
CREATE DEFINER = 'root'@'localhost' EVENT companydb.ProductsDataSyncEvent
ON SCHEDULE EVERY '1' DAY STARTS '2014-06-13 01:27:38'
COMMENT 'Synchronize Products table with Products XMLs'
DO BEGIN
    SET @productsXml = LOAD_FILE('C:/MySqlXmlSync/products.xml');
    CALL SyncProductsDataFromXML(@productsXml);
END;

ALTER EVENT companydb.ProductsDataSyncEvent ENABLE;
Now the interesting part takes place. Here is the synchronization stored procedure (note how the event above calls it):
CREATE DEFINER = 'root'@'localhost'
PROCEDURE companydb.SyncProductsDataFromXML(IN productsXml MEDIUMTEXT)
BEGIN
    DECLARE totalProducts INT;
    DECLARE productIndex INT;
    SET totalProducts = ExtractValue(productsXml, 'count(//export_management/product)');
    SET productIndex = 1;
    WHILE productIndex <= totalProducts DO
        SET @productId = CAST(ExtractValue(productsXml, 'export_management/product[$productIndex]/@id') AS UNSIGNED);
        INSERT INTO products(`Id`, `Version`, InsertDate, Warrenty, Price, ModelCode, Model, ModelColor, ModelSize, InternalColor, InternalSize)
        VALUES(
            @productId,
            ExtractValue(productsXml, 'export_management/product[$productIndex]/version'),
            ExtractValue(productsXml, 'export_management/product[$productIndex]/insert_date'),
            CASE WHEN (ExtractValue(productsXml, 'export_management/product[$productIndex]/warrenty')) <> 'false' THEN 1 ELSE 0 END,
            CAST(ExtractValue(productsXml, 'export_management/product[$productIndex]/price') AS DECIMAL),
            ExtractValue(productsXml, 'export_management/product[$productIndex]/model/code'),
            ExtractValue(productsXml, 'export_management/product[$productIndex]/model/model'),
            ExtractValue(productsXml, 'export_management/product[$productIndex]/model/color'),
            ExtractValue(productsXml, 'export_management/product[$productIndex]/model/size'),
            ExtractValue(productsXml, 'export_management/product[$productIndex]/internal/color'),
            ExtractValue(productsXml, 'export_management/product[$productIndex]/internal/size')
        );
        SET @totalImages = ExtractValue(productsXml, 'count(//export_management/product[$productIndex]/images/image)');
        SET @imageIndex = 1;
        WHILE (@imageIndex <= @totalImages) DO
            INSERT INTO productimages(ProductId, Size, FileName)
            VALUES(@productId, 'small', ExtractValue(productsXml, 'export_management/product[$productIndex]/images/image[$@imageIndex]/small'));
            SET @imageIndex = @imageIndex + 1;
        END WHILE;
        SET @totalStandardOptions = ExtractValue(productsXml, 'count(//export_management/product[$productIndex]/options/s_option)');
        SET @standardOptionIndex = 1;
        WHILE (@standardOptionIndex <= @totalStandardOptions) DO
            INSERT INTO productoptions(ProductId, `Type`, `Option`)
            VALUES(@productId, 'Standard Option', ExtractValue(productsXml, 'export_management/product[$productIndex]/options/s_option[$@standardOptionIndex]'));
            SET @standardOptionIndex = @standardOptionIndex + 1;
        END WHILE;
        SET @totalExtraOptions = ExtractValue(productsXml, 'count(//export_management/product[$productIndex]/options/extra_option)');
        SET @extraOptionIndex = 1;
        WHILE (@extraOptionIndex <= @totalExtraOptions) DO
            INSERT INTO productoptions(ProductId, `Type`, `Option`)
            VALUES(@productId, 'Extra Option', ExtractValue(productsXml, 'export_management/product[$productIndex]/options/extra_option[$@extraOptionIndex]'));
            SET @extraOptionIndex = @extraOptionIndex + 1;
        END WHILE;
        SET productIndex = productIndex + 1;
    END WHILE;
END
And you're done. This is the final expected result of this process:
NOTE: I've committed the entire code to one of my GitHub repositories: XmlSyncToMySql
UPDATE:
Because your XML data might be larger than the maximum allowed for a TEXT field, I've changed the productsXml parameter to MEDIUMTEXT. Look at this answer, which outlines the maximum allowed size of the various text datatypes:
Maximum length for MYSQL type text
As this smells like integration work, I would suggest a multi-pass, multi-step procedure with an interim format that is not only easy to import into MySQL but also helps you wrap your mind around the problems this integration ships with, and lets you test a solution in small steps.
This procedure works well if you can flatten the tree structure expressed in the XML export into a list of products with fixed, named attributes.
Query all product elements with an XPath query on the XML, and iterate the resulting products.
Query all product attributes relative to the context node of the product from the previous query. Again, use one XPath per attribute.
Store the result of all attributes per product as one row in a CSV file.
Store the filenames in the CSV as well (the basenames), but put the files into a folder of their own.
Create the DDL of the MySQL table in the form of an .sql file.
Run that .sql file against the mysql command line.
Import the CSV file into that table via the mysql command line.
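The XML-to-CSV part of the steps above can be sketched as follows (the field subset and file handling are illustrative; the real export has many more attributes):

```python
import csv
import io
import xml.etree.ElementTree as ET

# Trimmed-down stand-in for the real export
SAMPLE = """
<export_management>
  <product id="1234">
    <version>100</version>
    <price>139,00</price>
    <model><code>324234345</code><color>red</color></model>
  </product>
</export_management>
"""

root = ET.fromstring(SAMPLE)
out = io.StringIO()  # in practice: open('products.csv', 'w', newline='')
writer = csv.writer(out)
writer.writerow(["id", "version", "price", "model_code", "model_color"])
for product in root.iter("product"):
    # one XPath-style lookup per attribute, relative to the product node
    writer.writerow([
        product.get("id"),
        product.findtext("version"),
        product.findtext("price"),
        product.findtext("model/code"),
        product.findtext("model/color"),
    ])

print(out.getvalue())
```

The resulting CSV then imports with a single LOAD DATA INFILE, and the intermediate file doubles as the replayable artifact mentioned below.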
You should get quick results within hours. If it turns out that products cannot be mapped to a single row because some attributes have multiple values (what you call an array in your question), consider turning those into JSON strings if you cannot avoid dropping them entirely (just hope you don't need to display complex data in the beginning). Doing so violates normal form; however, as you describe it, the MySQL table is only an intermediate here as well, so I would aim for simplicity of the data structure in the database. Otherwise, queries for simple and fast display on the website will become the next burden.
So my suggestion here basically is: Turn the tree structure into a (more) flat list for both simplification of transition and easier templating for display.
Having an intermediate format here also allows you to replay in case things are going wrong.
It also allows you to mock the whole templating more easily.
Alternatively, it is also possible to store the XML of each product inside the database (keep the chunks in a second table so you can keep varchar (variable-length) fields out of the first table) and keep some other columns as (flat) reference columns to query against. For templating needs, turning the XML into a SimpleXMLElement is often very nice: you get a structured, non-primitive data type as a view object you can traverse and loop over for options. This would work similarly with JSON; however, keeping the XML avoids crossing a format boundary, and XML can also express more structure than JSON.
You're taking a very technology-centered approach to this. I think it's wise to start by looking at the functional specifications.
It helps to have a simple UML class diagram of the business class Product. Show its attributes as the business sees them. So:
How is Model related to Product? Can there be multiple Models for one Product or the other way around?
What kind of data is stored in the Internal element? In particular: How can Internal's color and size be different from Model's color and size?
And specifically about the web application:
Is the Web application the only application that will be interested in this export?
Should the web application care about versioning or simply display the last available version?
Which specific options are interesting to the web application? For example, a discount property, a vendor name property, or others?
What should the Product details page look like, what data needs to be displayed where?
Will there be other pages on the website displaying product information, and what product information will they list?
Then you'll know which attributes need to be readily available to the web application (as columns in the Product table) and (perhaps) which ones may be simply stored in one big XML blob in the database.
I'm having an issue trying to perform a full-text search on a table in MySQL. The table in question has the following CREATE command:
CREATE TABLE IF NOT EXISTS `resume_contents`
(`resumeid` int(100) NOT NULL AUTO_INCREMENT,
`jobseekerid` varchar(255) NOT NULL,
`resumecontents` text,
PRIMARY KEY (`resumeid`),
UNIQUE KEY `jobseekerid` (`jobseekerid`),
FULLTEXT KEY `resumecontents` (`resumecontents`) )
ENGINE=MyISAM DEFAULT CHARSET=latin1 ;
So basically just 3 columns, with a fulltext index on one column only. But when I try to do a full-text search with the following command:
SELECT * FROM `resume_contents` WHERE MATCH (resumecontents) AGAINST('')
I get an empty result set every time, no matter what words are in the AGAINST clause. What stupid error am I making here?
As an example, here is a row I'm testing with; it has a resume sample from resume.com.
INSERT INTO `resume_contents` (`resumeid`, `jobseekerid`, `resumecontents`) VALUES (2, '14', 'Objectives\nI am looking for a position at This Company where I can maximize my programming skills, project management, leadership skills, and create new and exciting tools.\n\n\nSummary\nI have had 3 years experience in web design and development. I am fimilar with a lot of the languages and software used in creating projects for the web. Futhermore, I have great project management skills and also work well either alone or with a group. I have also worked on small projects and large public releases for large companies.\n\n\nEducation\nDesign Media \nCollege Humor, Seattle, WA\nGraduated: January 2008 \nGrade: College\n\nEmployment History\nDecember 2007 – Present: Web Designer \nCompany: ComStream\nSeattle, Washington\nWorked as team lead on many different client projects.\n\n\nProfessional Skills\nAdobe Photoshop – Expert\n\nAdobe Illustrator – Expert\n\nAdobe Dreamweaver – Advanced\n\nPhp – Advanced\n\n\nQualification/Certification\nJanuary 2006: New Media Design \nOrganization: School of Technology\nAward: \nSan Fransico\nLanguages\nCzech – Beginner\n\nVietnamese – Conversational\n\n\nImmigration / Work Status\nDecember 2041 – Permanent Resident - United States');
Then I search with :
SELECT * FROM `resume_contents` WHERE MATCH(`resumecontents`) AGAINST ('Company')
Or just about any word or phrase (any length, any word) that I know is in there... and still it returns an empty set.
I know it must be something simple I'm missing but I don't know what it could be.
What words? You know that certain words are not considered, like "the" or "how": words that simply appear too often to be meaningful. Then the words have to have a minimum length.
And finally!
In addition, words that are present in 50% or more of the rows are considered common and do not match.
I bet that is the case here. Read more about it here.
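With a single row in the table, every word it contains is by definition present in 100% of the rows, so natural-language mode can never match anything. A common workaround, which I believe applies here, is boolean mode, which does not use the 50% threshold:

```sql
SELECT * FROM `resume_contents`
WHERE MATCH(`resumecontents`) AGAINST ('+Company' IN BOOLEAN MODE);
```

Boolean mode still honors the stopword list and minimum word length, so very short or very common words will remain unmatched.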
I'm trying to implement a simple TPH example from http://msdn.microsoft.com/en-us/library/dd793152.aspx. I have two tables:
PERSON
[PersonID] [int] IDENTITY(1,1) NOT NULL,
[PersonTypeID] [int] NOT NULL,
[Name] [varchar](50) NOT NULL,
[HourlyRate] [int] NULL
PersonType
[PersonTypeID] [int] IDENTITY(1,1) NOT NULL,
[Name] [varchar](50) NOT NULL
In EF designer, I follow the tutorial and create a new Entity called Employee and specify Person as the base type. Then move the HourlyRate property to Employee. In the Mapping Details window, I map the entity to the Person table and it properly maps HourlyRate property to the correct DB field. Then I set Person to abstract.
If I build it now without specifying a condition and a discriminator in the Employee entity, it builds fine. If I follow the tutorial and specify HourlyRate as the condition and use "Is" "Not Null" as the discriminator, it builds fine.
But I want to use PersonTypeID as the discriminator, and an Employee should have a PersonTypeID of 1. So I select PersonTypeID as the condition field, "=" as the operator, and 1 as the value. When I build, VS tells me it's successful but also shows something in the Error window.
Error 3032: Problem in mapping fragments starting at line
798:Condition member 'Person.PersonTypeID' with a condition other than
'IsNull=False' is mapped. Either remove the condition on
Person.PersonTypeID or remove it from the mapping.
I read in another article that I need to delete the PersonType navigation property in the Person Entity. So I tried that but still got the same error.
I thought I was able to get this to work before on another project but I'm not sure what changed for this one. The only thing different that I can think of is that I recently updated to EF 4.1.
This is what I have set up in the designer
Any suggestions are greatly appreciated!
I figured it out. Adding the PersonType entity to the designer threw everything off. The key was to delete the PersonType navigation property as well as the PersonTypeID property. But since I had included the PersonType entity, deleting PersonTypeID broke the foreign key constraint.
By not including the PersonType entity, I can delete PersonTypeID and everything compiles successfully.