SQL query to copy table with specified columns - mysql

I have a table whose columns (col1, col2, col3, ...) are populated dynamically at runtime, and I am copying this table into another table that has more columns than that (col1, col2, col3, col4, col5, i.e. the maximum number of columns supported). But currently, whenever I copy the dynamically generated table into the destination table with the maximum number of columns, it gives me an error.
Dynamic table columns:
DateInterval, DataType, Seqno, Channel1_data, Channel1_status, Channel2_data, Channel2_status
Table columns used for copying dynamic table:
DateInterval, DataType, Seqno, Channel1_data, Channel1_status, Channel2_data, Channel2_status, Channel3_data, Channel3_status, Channel4_data, Channel4_status
Query:
SELECT DateInterval, DataType, Seqno, Channel1_data, Channel1_status, Channel2_data, Channel2_status, Channel3_data, Channel3_status, Channel4_data, Channel4_status
FROM #TableName
Error:
'No value given for one or more required parameters'
How can I overcome this problem?
Thanks,
#nag
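
If the error is raised because the SELECT names columns (Channel3_data and so on) that do not exist in the dynamically generated table, one workaround is to pad the missing columns with NULL so the SELECT list always matches the destination table. A sketch, under that assumption:

SELECT DateInterval, DataType, Seqno,
       Channel1_data, Channel1_status,
       Channel2_data, Channel2_status,
       NULL AS Channel3_data, NULL AS Channel3_status,
       NULL AS Channel4_data, NULL AS Channel4_status
FROM #TableName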

Nag, review this posting: Is it ever okay to violate the first normal form?
In that posting you will find the way I solved a problem where I needed a variable number of fields in a table that would grow and shrink over time. It has minimal internal storage while still allowing enough room for growth when the criteria are met; a sketch of the same idea applied to your channels follows.
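
For illustration, a minimal sketch of that kind of layout applied to this question (the table and column names here are illustrative, not from the original schema): store one row per channel instead of one column per channel, so the number of channels can grow without schema changes.

CREATE TABLE ChannelReading (
    DateInterval  DATETIME,
    DataType      VARCHAR(20),
    Seqno         INT,
    ChannelNo     INT,            -- 1, 2, 3, ... replaces Channel1_data, Channel2_data, ...
    ChannelData   DECIMAL(10,4),
    ChannelStatus VARCHAR(10)
);

-- Copying from the dynamic source then takes one INSERT per channel
-- that actually exists; missing channels simply have no rows.
INSERT INTO ChannelReading (DateInterval, DataType, Seqno, ChannelNo, ChannelData, ChannelStatus)
SELECT DateInterval, DataType, Seqno, 1, Channel1_data, Channel1_status
FROM #TableName;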

error with "as" when creating a computed column

I am trying to make a database for hospital visits, and I want to alter the "Visit" table and add a new column which shows the total cost of the patient's medication (by multiplying charge (from table "medication") by quantity (from table "getsmed")).
However, when I write this code in MySQL Workbench it won't run, and it underlines the word "as" with the caption ("as" is not valid in this position, expecting: BIT, BOOL, BOOLEAN, DATETIME, TIME, ENUM...):
alter table visit
add total_charge as (medication.Mcharge*getsmed.Quantity)
;
In MySQL, you need to specify a type when adding a computed column:
alter table visit
add total_charge decimal(20, 4) as (Mcharge * Quantity);
However, a computed column can only directly reference column values in the same row. It cannot "reach out" to other tables. This gives you two choices:
Use a user-defined function to retrieve a value from another table.
Use a view rather than a computed column.
I would recommend the second solution.
I can't actually suggest any specific code, because you have not provided enough information in the question.
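Purely as an illustration of the view approach, here is a sketch; the join keys (visit_id, medication_id) are assumptions, since the actual foreign keys were not shown in the question:

-- A view computes total_charge on the fly, joining across tables,
-- which a computed column cannot do.
CREATE VIEW visit_with_charge AS
SELECT v.*,
       m.Mcharge * g.Quantity AS total_charge
FROM visit v
JOIN getsmed g    ON g.visit_id = v.visit_id
JOIN medication m ON m.medication_id = g.medication_id;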

Pulling dimension from another fact table in SSRS DAX query

I have two fact tables. I would like to pull columns from a dimension which is part of another fact table using a DAX query. The table comes from a tabular cube. So far I have tried:
EVALUATE
SUMMARIZE(
'vwFCML'
,'Vessel'[VName]
,'Port'[PCountry]
,'PO'[Type]
)
The Vessel[VName] and Port[PCountry] dimensions are from the vwFCML fact table, while PO[Type] is from another fact table called OrdTable. I get the error:
The column 'Type' specified in the 'SUMMARIZE' function was not found in the input table
I am new to DAX, and any help would be greatly appreciated; thank you in advance.
This issue happens if the table listed in the SUMMARIZE function has no relationship to the other tables.
In your case, there is a possibility that the vwFCML table has no relationship with the PO table.
If you have another table, e.g. Dim_Vessel, that is somehow linked to both vwFCML and PO, try using that as the SUMMARIZE table (even if you are not using any column from that table at all), e.g.:
EVALUATE
SUMMARIZE(
'Dim_Vessel'
,'Vessel'[VName]
,'Port'[PCountry]
,'PO'[Type]
,....
)
Hope this makes some sense.

Redshift Usage - 1 row by 400 columns per user or (20-400) rows by 4 columns per user

We are building an analytics engine which has to store an attribute preference score for each user. We are expecting 400 attributes, and they may change (at what frequency is not yet known). We are planning to store this in Redshift.
My question is:
Should we store 1 row per user with 400 columns (1 column for each attribute),
or should we go for a table structure like
(uid, attribute id, attribute value, preference score), which will be (20-400) rows by 4 columns per user?
Which kind of storage would lead to better performance in Redshift?
Should we really consider NoSQL for this?
Note:
1. This is a backend for a real-time application with an increasing number of users.
2. For processing, the above table has to be read with the entire information of all attributes for one user, i.e. indirectly creating a 1x400 matrix at runtime.
Please help me decide which design would be ideal for such a use case. Thank you.
You can go for tables like those given in this example, and then use bitwise functions: http://docs.aws.amazon.com/redshift/latest/dg/r_bitwise_examples.html
For your problem, I would suggest a two-table design; a sketch follows below. It's more pain in the beginning, but it will help in the future.
The first table would be a key-value kind of table, which stores all the base data and is kind of future-proof: you can add or remove attributes, and this table will keep working.
The second table has N (400 in your case) columns, and you can build it from the first table. For the second table, you can start with a bare-minimum set of columns, let's say only 50 out of those 400, so that querying it is really fast. Its structure can be refreshed periodically to match the current reporting requirements, and you will always have the base table in case you need to backfill any data.
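A minimal sketch of that two-table design; the names, types, and DISTKEY/SORTKEY choices are assumptions for illustration, not from the question:

-- 1) Key-value base table: future-proof, attributes can come and go.
CREATE TABLE user_attribute_score (
    uid              BIGINT,
    attribute_id     INT,
    attribute_value  VARCHAR(256),
    preference_score DECIMAL(10,4)
)
DISTKEY (uid)   -- keeps all rows for one user on the same slice
SORTKEY (uid);  -- makes the per-user scan cheap

-- 2) Wide reporting table, rebuilt periodically from the base table,
--    starting with only the attributes that queries actually need.
CREATE TABLE user_score_wide (
    uid    BIGINT,
    attr_1 DECIMAL(10,4),
    attr_2 DECIMAL(10,4)
    -- ... more columns as reporting requirements grow
)
DISTKEY (uid)
SORTKEY (uid);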

How to best check if a SQL table contents have not changed?

Assuming I have the following table named "contacts":
id|name|age
1|John|5
2|Amy|2
3|Eric|6
Is there some easy way to check whether or not this table has changed, much like how a SHA/MD5 hash works when getting the checksum for a file on your computer?
So for example, if a new row was added to this table, or if a value was changed within the table, the "hash" or some generated value shows that the table has changed.
If there is no direct mechanism, what is the best way to do this (it could be some arbitrary hash mechanism, as long as the method puts emphasis on performance and minimizes latency)? Could it be applied to multiple tables?
There is no direct mechanism to get that information through standard SQL.
You could consider adding an additional LastModified column to each row. To know the last time the table was modified, select the maximum value for that column.
You could achieve a similar outcome by using a trigger on the table for INSERT, UPDATE and DELETE, which updates a separate table with the last modified timestamp.
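That said, if the database is MySQL (the backup snippet further down uses MySQL syntax), there is a built-in per-table checksum that comes close to the file-hash behaviour the question describes:

-- MySQL's built-in checksum; the value changes whenever the table
-- contents change. Note it scans the whole table, so it is not free
-- on large tables.
CHECKSUM TABLE contacts;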
If you want to know whether something has changed, you need something to compare, for example a date. You can add a control table with two columns, the table name and a timestamp, and program a trigger for the events you are interested in on the table you want to monitor, so that the trigger updates the timestamp column of this control table; a sketch follows.
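A minimal sketch of that control-table idea, assuming MySQL and the contacts table from the question:

-- Control table: one row per monitored table.
CREATE TABLE table_change_log (
    table_name    VARCHAR(64) PRIMARY KEY,
    last_modified DATETIME
);

-- Stamp the control table after every insert; repeat the same body
-- for AFTER UPDATE and AFTER DELETE triggers.
DELIMITER //
CREATE TRIGGER contacts_after_insert
AFTER INSERT ON contacts
FOR EACH ROW
BEGIN
    REPLACE INTO table_change_log (table_name, last_modified)
    VALUES ('contacts', NOW());
END//
DELIMITER ;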
If the table isn't too big, you could take a copy of the entire table. When you want to check for changes, you can then compare the old data with the new:
DROP TABLE IF EXISTS backup_table_name;
CREATE TABLE backup_table_name LIKE table_name;
INSERT INTO backup_table_name SELECT * FROM table_name;

MySQL Database Design Questions

I am currently working on a web service that stores and displays currency data.
I have two MySQL tables, CurrencyTable and CurrencyValueTable.
The CurrencyTable holds the names of the currencies as well as their description and so forth, like so:
CREATE TABLE CurrencyTable ( name VARCHAR(20), description TEXT, .... );
The CurrencyValueTable holds the values of the currencies during the day - a new value is inserted every 2 minutes when the market is open. The table looks like this:
CREATE TABLE CurrencyValueTable ( currency_name VARCHAR(20), value FLOAT, `datetime` DATETIME, ....);
I have two questions regarding this design:
1) I have more than 200 currencies. Is it better to have a separate CurrencyValueTable for each currency or hold them all in one table?
2) I need to be able to show the current (latest) value of each currency. Is it better to just add such a field to the CurrencyTable and update it every two minutes, or is it better to use a statement like:
SELECT value FROM CurrencyValueTable WHERE currency_name = ? ORDER BY `datetime` DESC LIMIT 1
The second option seems slower. I am leaning towards the first one (which is also easier to implement).
Any input would be greatly appreciated!!
p.s. - please ignore SQL syntax / other errors, I typed it off the top of my head..
Thanks!
To your questions:
I would use one table. Especially if you need to report on or compare data from multiple currencies, that will be far easier with everything in one table.
If you don't have a need to track the history of each currency's value, then go ahead and just update a single value; but in that case, why even have a separate table? You can just add a "latest value" field to the currency table and update it there. If you do need to track history, then you will need the two tables, and the SQL you posted will work.
As an aside, instead of FLOAT I would use DECIMAL(10,2). As of MySQL 5.0, DECIMAL is stored as an exact value, which avoids the rounding surprises FLOAT causes with currency amounts.
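A sketch of that two-table layout with the denormalized latest value (the DECIMAL precision and the 'USD' example are assumptions):

CREATE TABLE CurrencyTable (
    name         VARCHAR(20) PRIMARY KEY,
    description  TEXT,
    latest_value DECIMAL(10,2)  -- denormalized current value, updated every 2 minutes
);

CREATE TABLE CurrencyValueTable (
    currency_name VARCHAR(20),
    value         DECIMAL(10,2),
    `datetime`    DATETIME,
    PRIMARY KEY (currency_name, `datetime`)
);

-- Fetching the latest value without the denormalized column:
SELECT value
FROM CurrencyValueTable
WHERE currency_name = 'USD'
ORDER BY `datetime` DESC
LIMIT 1;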
It is better to have one table holding all currencies
If there is a need for historical prices, then the table needs to hold them. A reasonable compromise in many situations is to split the price data into a full table of historical prices and another table which holds only the current prices.
Using the data type FLOAT can be troublesome; please be sure you know what you are doing. If not, use an exact decimal type instead.
Since your web service is transactional, it is better to access fewer tables at the same time. As you will be reading and writing a lot, I would suggest having a single table.
It's better to add a field to the CurrencyTable and update it rather than hitting two tables for a single request.