I am struggling to create a table that sets table parameters as well as creates the columns.
I am using MySQL server.
I require that the table meets the following criteria:
The table should be called CUSTOMER with the columns CUST, LOCX, LOCY.
The column CUST will be a one-up serial (auto-increment) starting at 1001 and will be the primary key.
LOCX and LOCY will contain X and Y integers no greater than ±11, and will be foreign keys to other tables.
For info: I then intend to add my data to the table using an INSERT INTO statement in a separate query that I already have.
Any direction on the construction of a query to create a table meeting the requirements above will be greatly appreciated.
You can create a new table with a MySQL GUI if you have problems with that.
These GUI tools usually provide a New Table button that lets you define your table without writing any code. They are often limited, but should be more than sufficient for your needs. There are one-month trial versions of the paid tools and even completely free GUIs, so you don't have to buy anything.
After that, use the following statement to retrieve "perfect" SQL from MySQL:
SHOW CREATE TABLE your_schema_name.your_table_name;
Do that a few times and study the output. Soon you will be able to write CREATE TABLE statements with more complex column definitions on your own. It will also be easier to understand the MySQL documentation, which can be confusing and somewhat intimidating for beginners in its completeness.
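For reference, a CREATE TABLE statement along the lines of your requirements could look roughly like this (the parent tables location_x/location_y and their columns are assumptions; adjust the names to your actual schema):

CREATE TABLE CUSTOMER (
    CUST INT NOT NULL AUTO_INCREMENT,  -- one-up serial
    LOCX INT NOT NULL,
    LOCY INT NOT NULL,
    PRIMARY KEY (CUST),
    -- the parent tables must exist first, with an index on the
    -- referenced columns; if they contain only the values -11..11,
    -- the ±11 range is enforced by the foreign keys themselves
    FOREIGN KEY (LOCX) REFERENCES location_x (x_value),
    FOREIGN KEY (LOCY) REFERENCES location_y (y_value)
) ENGINE=InnoDB AUTO_INCREMENT=1001;   -- serial starts at 1001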
I am trying to design my database. For now my schema is as below:
There are already 3 fixed tables, and over time the number of my tables will increase, all with the same fields but different values (mm, xx, yy, zz, ...). There is no duplication between the tables.
a) user, the table of the internal users.
b) map_platform_user, which maps the internal users to the external users. Internal users are our users and external users are our partners' users.
c) platform, the partners' platforms.
d) mm or xx, the user information that is different for each platform.
I have created the lookup table called map_platform_user. This table has a field user_platform_id that I thought of relating to the mm table (PK-FK).
1) I thought of creating multiple references like mm_user_id (PK), xx_user_id (PK), ... to user_platform_id (PK). However, I am pretty sure that this solution is wrong and that I cannot have several references to one table.
2) My next solution was to add a new field to the table map_platform_user each time, such as user_platform_id1, user_platform_id2, ..., but this solution requires altering the table map_platform_user every time.
3) The third solution I thought about is to not create the extra tables and store everything in the map_platform_user table, so when new platforms arrive based on our demands, I store them via the field user_platform_id. In this case I have to add name, email, ... to the table map_platform_user (sketched below).
I would like to mention that user_platform_id is an external ID which we receive from our clients.
The version of MySQL is 5.5.47 and the engine is InnoDB. I would appreciate any solution or discussion that improves the performance of my DB. I will have a large amount of data.
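To illustrate option 3, something like this is what I have in mind (the column names are just examples):

CREATE TABLE map_platform_user (
    id               INT AUTO_INCREMENT PRIMARY KEY,
    user_id          INT NOT NULL,          -- FK to user
    platform_id      INT NOT NULL,          -- FK to platform
    user_platform_id VARCHAR(64) NOT NULL,  -- external ID from the client
    name             VARCHAR(100),
    email            VARCHAR(100),
    FOREIGN KEY (user_id) REFERENCES user (id),
    FOREIGN KEY (platform_id) REFERENCES platform (id)
) ENGINE=InnoDB;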
This is something that has bothered me for a long time and I have still been unable to find an answer.
I have a huge system with a lot of different features. What is common to this system is, of course, that my users can
create, update, read & delete
different parts of my system.
For simplicity's sake, let's say I have an application that has the following features:
Document administration
Video administration
User administration
Salary administration
(Please note I picked these at random, just to make the point that each of these would have its own separate tables and they are not necessarily connected.)
Now I wish to create some sort of logging system, so that whenever someone creates, updates or deletes an entity, it will be recorded.
As far as I can see, I can do this in two ways.
1.
Create a logging table for each of the 4 features in my system. However, with this method I am required to create a logging table for each new feature I add to the system. I would also have to combine data from X number of tables if I wished to build a combined log, which could potentially be a huge task!
2.
I could create something like the following:
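Roughly this (a sketch; table and column names are made up):

CREATE TABLE log (
    id         INT AUTO_INCREMENT PRIMARY KEY,
    user_id    INT,
    action     ENUM('create','update','delete'),
    logged_at  DATETIME DEFAULT CURRENT_TIMESTAMP,
    -- one nullable target column per feature:
    target_document_id INT NULL,
    target_video_id    INT NULL,
    target_user_id     INT NULL,
    target_salary_id   INT NULL
);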
However, once again I would have to add a column for each new feature I add.
So my question is: what is the best way to design a logging database architecture?
Or is there an easier way?
Instead of one target_xx for each feature, you could do it this way:
target_id | target_type
--------- | -----------
        1 | video
        4 | document
        5 | user
        2 | user
Or even better: a separate table with the target types, and insert only the respective type IDs into target_type.
Something like this:
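A sketch with made-up names:

CREATE TABLE target_type (
    id   INT AUTO_INCREMENT PRIMARY KEY,
    name VARCHAR(50) NOT NULL        -- 'video', 'document', 'user', ...
);

CREATE TABLE log (
    id             INT AUTO_INCREMENT PRIMARY KEY,
    target_id      INT NOT NULL,     -- the row's id in its own feature table
    target_type_id INT NOT NULL,
    FOREIGN KEY (target_type_id) REFERENCES target_type (id)
);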
If you want to capture the creation and update date for each table, I would just use the DEFAULT and ON UPDATE clauses from MySQL. You can define the fields like this for a table:
ALTER TABLE your_table
ADD COLUMN CreateDate DATETIME DEFAULT CURRENT_TIMESTAMP,
ADD COLUMN LastModifiedDate DATETIME ON UPDATE CURRENT_TIMESTAMP;
You can add these 2 fields to all tables. If you want to use one central table for logging (which might be more difficult to manage, because you always need to create joins, and performance may also be worse), then I would work with triggers, as sketched below.
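A minimal sketch of the trigger approach, assuming a central change_log table and a hypothetical documents table:

CREATE TABLE change_log (
    id         INT AUTO_INCREMENT PRIMARY KEY,
    table_name VARCHAR(64),
    row_id     INT,
    action     VARCHAR(10),
    changed_at DATETIME DEFAULT CURRENT_TIMESTAMP
);

DELIMITER //
CREATE TRIGGER documents_after_update
AFTER UPDATE ON documents
FOR EACH ROW
BEGIN
    -- record which row of which table was touched
    INSERT INTO change_log (table_name, row_id, action)
    VALUES ('documents', NEW.id, 'update');
END//
DELIMITER ;

You would create one such trigger per table and per event (INSERT/UPDATE/DELETE).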
We are building an analytics engine which has to store an attribute preference score for each user. We are expecting 400 attributes, and they may change (at what frequency is not yet known). We are planning to store this in Redshift.
My questions are:
Should we store 1 row per user with 400 columns (1 column for each attribute),
or should we go for a table structure like
(uid, attribute id, attribute value, preference score), which will be 20-400 rows by 4 columns per user?
Which kind of storage would lead to better performance in Redshift?
Should we really consider NoSQL for this?
Note:
1. This is a backend for a real-time application with an increasing number of users.
2. For processing, the above table has to be read with the entire information of all attributes for one user, i.e. indirectly creating a 1×400 matrix at runtime.
Please help me decide which design would be ideal for such a use case. Thank you.
You can go for tables like the ones given in this example and then use bitwise functions. The bitwise functions are documented here:
http://docs.aws.amazon.com/redshift/latest/dg/r_bitwise_examples.html
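A hypothetical sketch of the idea (table and column names are made up): pack boolean attribute flags into an integer column, one bit per attribute, and combine them with the BIT_AND / BIT_OR aggregates from the linked page. Note that one BIGINT holds only 64 bits, so 400 attributes would need several such columns:

create table user_attr_flags (
    uid        bigint,
    attr_flags bigint  -- bit i set = user has attribute i
);

-- attributes shared by all users vs. present in any user:
select bit_and(attr_flags) as in_all_users,
       bit_or(attr_flags)  as in_any_user
from user_attr_flags;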
For your problem, I would suggest a two-table design. It's more pain in the beginning but will help in the future.
The first table would be a key-value style table, which would store all the base data and would be kind of future-proof: you can add/remove attributes and this table will continue working.
The second table would have N columns (400 in your case). You can build this second table from the first one. For the second table, you can start with a bare-minimum set of columns, let's say only 50 out of those 400, so that querying this table is really fast. The structure of this table can be refreshed periodically to match the current reporting requirements, and you will always have the base table in case you need to backfill any data.
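A sketch with made-up names and types:

-- 1) the narrow key-value base table, which survives attribute churn:
create table user_attr_base (
    uid        bigint,
    attr_id    int,
    attr_value varchar(64),
    pref_score decimal(10,4)
);

-- 2) the wide reporting table, rebuilt periodically from the base
--    table with only the attributes currently needed:
create table user_attr_wide as
select uid,
       max(case when attr_id = 1 then pref_score end) as attr_1,
       max(case when attr_id = 2 then pref_score end) as attr_2
       -- ... one expression per reported attribute
from user_attr_base
group by uid;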
I'm working on a project to make a digital version of a paper form (can't post the image), and the data will be displayed on the web in a simple table view. There will be NO altering, deleting or updating; it just displays (via SELECT *, of course) the data that was entered.
The data will be inserted via an Android app and stored in a single MySQL table which has 30 columns.
The question is: is it a good idea to use a single table? I think there will be no complex operations in the SQL.
The other question is: am I violating some rules with this method?
I need your opinion. Thanks.
It's totally OK to use only one table if that suits your needs. What you can do to make the database a little bit 'smarter' is add new tables for attributes on your paper form that repeat. So, for example, Soil Type could be another table with two columns, ID and Description, which you use as a foreign key in each record of the main table. You need this if you want your database to be in 3NF.
To sum up, yes you can have one table if that's all you need. However, adding more tables might help save some space and make your database more flexible. It's up to you to decide! :)
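For example, the Soil Type idea could look like this (a sketch; names are just examples):

CREATE TABLE soil_type (
    id          INT AUTO_INCREMENT PRIMARY KEY,
    description VARCHAR(100) NOT NULL
);

CREATE TABLE form_entry (
    id           INT AUTO_INCREMENT PRIMARY KEY,
    -- ... the other columns from the paper form ...
    soil_type_id INT,
    FOREIGN KEY (soil_type_id) REFERENCES soil_type (id)
);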
I am a bit rusty with MySQL and trying to jump in again, so sorry if this is too easy a question.
I basically created a data model that has a table called "Master" with required fields Name and IDcode, and then a "Details" table with a foreign key of IDcode.
Now here's where it's getting tricky. I am entering:
INSERT INTO Details (Name, UpdateDate) VALUES (name, updateDate)
I get an error saying IDcode on Details doesn't have a default value; when I add one, it then complains that field 'Master_IDcode' doesn't have a default value.
It all makes sense, but I'm wondering if there's an easy way to do what I am trying to do. I want to add data into Details and, if no IDcode exists, add an entry into the Master table. The problem is that I first have to add the name to Master, wait for a unique ID to be generated (for IDcode), then figure that out and add it to my query when I enter the detail data. As you can imagine, the queries will probably get quite long since I have many tables.
Is there an easier way, where every time I add something it checks by name whether a foreign key exists and, if not, adds it to all the tables it is linked to? Is there a standard way people do this? I can't imagine that, with all the complex databases out there, people have not figured out an easier way.
Sorry if this question doesn't make sense. I can add more information if needed.
P.S. This may be a different question, but I have heard of Django for Python and that it helps create queries. Would it help my situation?
Thanks so much in advance :-)
(decided to expand on the comments above and put it into an answer)
I suggest creating a set of staging tables in your database (one for each data set/file).
Then use LOAD DATA INFILE (or insert the rows in batches) into those staging tables.
Make sure you drop indexes before the load, and re-create what you need after the data is loaded.
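For example (the file path, table name and CSV format are assumptions):

LOAD DATA INFILE '/tmp/customers.csv'
INTO TABLE staging_customers
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 LINES;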
You can then make a single pass over the staging table to create the missing master records. For example, let's say that one of your staging tables contains a country code that should be used as a master ID. You could add the master records by doing something along the lines of:
insert
into master_table(country_code)
select distinct s.country_code
from staging_table s
left join master_table m on(s.country_code = m.country_code)
where m.country_code is null;
Then you can proceed and insert the rows into the "real" tables, knowing that all detail rows reference a valid master record.
If you need to get reference information along with the data (such as translating some code) you can do this with a simple join. Also, if you want to filter rows by some other table this is now also very easy.
insert
into real_table_x(
`key`
,colA
,colB
,colC
,computed_column_not_present_in_staging_table
,understandableCode
)
select x.`key`
,x.colA
,x.colB
,x.colC
,(x.colA + x.colB) / x.colC
,c.understandableCode
from staging_table_x x
join code_translation c on(x.strange_code = c.strange_code);
This approach is a very efficient one and it scales very nicely. Variations of the above are commonly used in the ETL part of data warehouses to load massive amounts of data.
One caveat with MySQL is that it doesn't support hash joins, a join mechanism very well suited to fully joining two tables. MySQL uses nested loops instead, which means that you need to index the join columns very carefully.
InnoDB tables with their clustering feature on the primary key can help to make this a bit more efficient.
One last point: when you have the staging data inside the database, it is easy to add some analysis of the data and set aside "bad" rows in a separate table. You can then inspect the data using SQL instead of wading through CSV files in your editor.
I don't think there's a one-step way to do this.
What I do is issue an
INSERT IGNORE INTO Master (..) VALUES (..)
which will either create the row if it doesn't exist or do nothing, and then issue a
SELECT id FROM Master WHERE someUniqueAttribute = ..
The other option would be stored procedures/triggers, but they are still pretty new in MySQL and I doubt whether this would help performance.
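A sketch of the two-step pattern using the Master/Details example from the question (this assumes a UNIQUE index on Master.Name, which INSERT IGNORE needs in order to detect duplicates):

INSERT IGNORE INTO Master (Name) VALUES ('some name');

-- works whether the row was just created or already existed:
SELECT IDcode FROM Master WHERE Name = 'some name';

You can then use the returned IDcode in your INSERT into Details.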