SQL Server 2008 - how to automatically drop and create an output table? - sql-server-2008

I would like to set up a table within a SQL Server DB that stores the results from a long and complex query that takes almost an hour to run. After running the query the rest of the analysis is done by colleagues using Excel pivot tables.
I would prefer not to output the results to text, and want to keep it within SQL Server and then just set up Excel to pivot directly from the server.
My problem is that the output will not always have exactly the same columns, and manually setting up an output table to INSERT INTO every time would be tedious.
Is there a way to create a table on the fly based on the type of data you are selecting?
E.g. if I want to run:
SELECT
someInt,
someVarchar,
someDate
FROM someTable
And insert this into a table called OutputTable, which has to look like this
CREATE TABLE OutputTable
(
someInt int null,
someVarchar varchar(255) null,
someDate date null
) ON [PRIMARY]
Is there some way to make SQL Server interrogate the fields in the select statement and then automatically generate the CREATE TABLE script?
Thanks
Karl

SELECT
someInt,
someVarchar,
someDate
INTO dbo.OutputTable
FROM someTable
...doesn't explicitly generate a CREATE script (at least not one you can see) but does the job!
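If the table needs to be dropped and recreated on every run (as the title suggests), a common pattern is to drop it first when it already exists and then let SELECT ... INTO rebuild it. A sketch, assuming the table lives in the dbo schema:

-- drop the previous output table if it exists (works on SQL Server 2008)
IF OBJECT_ID('dbo.OutputTable', 'U') IS NOT NULL
    DROP TABLE dbo.OutputTable;

-- SELECT ... INTO recreates the table with columns matching the select list
SELECT someInt, someVarchar, someDate
INTO dbo.OutputTable
FROM someTable;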

Related

Increment column value on SELECT query

I am trying to build an API and one of the endpoints will return a random row from my database. In the database I have a table in which I want a "views" column to be updated every time I run a SELECT query on a row.
My table looks something like this:
CREATE TABLE `movies` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`title` varchar(256) NOT NULL,
`description` text,
`views` int(11) NOT NULL DEFAULT 0,
PRIMARY KEY (`id`)
);
The row is selected by ordering the table with rand() and then limiting the result by 1, like so:
SELECT * FROM table ORDER BY rand() LIMIT 1;
Is something like this below possible?
SELECT * FROM table ORDER BY rand() LIMIT 1
UPDATE table SET views = +1 WHERE (selected row?);
I'm new to SQL queries, so I don't know if this is the best way or even possible at all. Should I run a new query after this one has completed that updates the value instead?
Usually, every table has a Primary Key, i.e. a unique ID of every single row. Since you have a result of your SELECT query and it's only 1 row, you always can make a consequent update query like UPDATE table SET views = views + 1 WHERE id = <returned_record_id>. Here we assume that the column id is a Primary Key column. This pair of queries need to be issued by the application code. If you want to achieve SELECT + UPDATE functionality as a single SQL statement, consider using stored procedures.
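A minimal sketch of that pair of statements, using the movies table from the question (the id in the UPDATE is the value the application reads back from the first result):

SELECT * FROM movies ORDER BY rand() LIMIT 1;        -- suppose this returns the row with id = 42
UPDATE movies SET views = views + 1 WHERE id = 42;   -- 42 is taken from the previous result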
While the aforementioned approach is technically possible, it might have a few performance problems. First off, ORDER BY rand() often performs poorly. Also, issuing an update on every select could have bad performance implications.
No, what you want is not possible as a single statement; SELECT and UPDATE commands cannot be combined into one query.
You can do it separately.
You need to create a procedure for this in your database like:
CREATE PROCEDURE `procedure_name`()
BEGIN
  -- pick a random row and remember its id
  SELECT id INTO @random_id FROM movies ORDER BY rand() LIMIT 1;
  -- increment its view counter
  UPDATE movies SET views = views + 1 WHERE id = @random_id;
  -- return the selected row to the caller
  SELECT * FROM movies WHERE id = @random_id;
END
and then call it
call procedure_name();
This is just one example; there are many ways to write such a procedure.
Thanks
Unfortunately, what you want to do is not possible, at least not without a lot of work. SQL in general -- and MySQL in particular -- offers a capability called triggers.
Triggers allow you to take actions when something happens in the database. For instance, if you want to check that values are correct, you can write an insert/update trigger to check the values and reject improper ones. Or, if you want to stash deleted records into an audit table, a trigger is the way to go.
What you are describing could be implemented using a trigger on a "select". Such a beast does not exist.
What are your options? Well, the simplest is to do this in your application. When a movie is selected, then you can update views. Of course, that only increments the views where you have the code.
You can move this code into a stored procedure. This simplifies the application code. It just has to "know" to use the stored procedure. But, there is no enforcement mechanism.
You can make this more enforceable by using permissions. Basically, don't allow access to the underlying table except through the stored procedure. This is closest to what you want.
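A rough sketch of that last approach; the database, user, and procedure names below are made up for illustration:

-- deny direct table access and allow only the procedure
-- ('api_user' and get_random_movie are hypothetical names, not from the question)
REVOKE SELECT, UPDATE ON mydb.movies FROM 'api_user'@'%';
GRANT EXECUTE ON PROCEDURE mydb.get_random_movie TO 'api_user'@'%';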

extract data from sql, modify it and save the result to a table

This may seem like a dumb question. I want to set up an SQL database with records containing numbers. I would like to run a query to select a group of records, take the values in that group, do some basic arithmetic on the numbers, and then save the results to a different table, still linked by a foreign key to the original record. Is that possible in SQL without taking the data to another application and importing it back? If so, what is the basic function/procedure to accomplish this?
I'm coming from an excel/macro/basic python background and want to investigate if it's worth the switch to SQL.
PS. I'm wanting to stay open source.
A tiny example using postgresql (9.6)
-- Create tables
CREATE TABLE initialValues(
id serial PRIMARY KEY,
value int
);
CREATE TABLE addOne(
id serial,
id_init_val int REFERENCES initialValues(id),
value int
);
-- Init values
INSERT INTO initialValues(value)
SELECT a.n
FROM generate_series(1, 100) as a(n);
-- Insert values into the second table by selecting them
-- from the first one.
WITH init_val as (SELECT i.id,i.value FROM initialValues i)
INSERT INTO addOne(id_init_val,value)
(SELECT id,value+1 FROM init_val);
In MySQL you can use CREATE TABLE ... SELECT (https://dev.mysql.com/doc/refman/8.0/en/create-table-select.html)
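For example, sticking with the tables above, a MySQL sketch might look like this (note it will not create the foreign key for you; that would need a separate ALTER TABLE):

-- create the result table and fill it in one statement
CREATE TABLE addOne AS
SELECT id AS id_init_val, value + 1 AS value
FROM initialValues;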

MySQL copy row from one table to another with multiple NOT IN criteria

We have an old FoxPro DB that still has active data being entered into it. I am in the process of writing a series of .bat files that will update a MySQL database for our web applications that I'm working on.
Our FoxPro databases were never set up with unique IDs or anything useful like that so I'm having to have the query look at a few different fields.
Here's my query thus far:
//traininghistory = MySQL DB
//traininghistory_test = FoxPro DB
INSERT INTO traininghistory
WHERE traininghistory_test.CLASSID NOT IN(SELECT CLASSID FROM traininghistory)
AND traininghistory_test.EMPID NOT IN(SELECT EMPID FROM traininghistory)
What I'm After is this:
I need a query that looks at the 600,000+ entries in the FoxPro DB (traininghistory_test in my code), compares them to the 600,000+ entries in the MySQL DB (traininghistory in my code), and only inserts the ones where the columns CLASSID and EMPID are new, that is, NOT in the traininghistory table.
Any thoughts on this (or if you know a simpler/more efficient way to execute this query in MySQL) are greatly appreciated.
One option is to use an outer join / null check:
insert into traininghistory
select tht.*   -- or an explicit column list
from traininghistory_test tht
left join traininghistory th on tht.empid = th.empid
and tht.classid = th.classid
where th.empid is null
It's also worth noting that your current query may leave out records, since it doesn't compare empid and classid within the same record.
One way is to create a unique index on the columns (CLASSID, EMPID), then:
INSERT IGNORE INTO traininghistory SELECT * FROM traininghistory_test;   -- or an explicit field list
That's all.
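Creating that unique index could look like this (the index name is just an example):

ALTER TABLE traininghistory ADD UNIQUE INDEX ux_classid_empid (CLASSID, EMPID);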

mysql performance INSERT into table SELECT for report

I am working on a mysql query for a report. The idea is to have a simple table say 'reportTable' with the values being fetched from various places. I could then use the reportTable more easily without remembering lots of joins etc and also share this table for other projects.
Should I break down the inner insert part of the query so it does chunks at a time? I will be adding probably tens of thousands of rows.
INSERT INTO reportTable
(
-- long query grabbing results from various places
SELECT var1 FROM schema1.table1
SELECT var2 FROM schema2.table1
SELECT var2 FROM schema2.table1
etc
)
This addresses your concern that inserting the data takes too long. I understood it as you rebuilding your table each time. Instead of doing that, just fetch the data that is new and not already in your table. Since checking whether the data is already present in your report table might be expensive too, just get the delta. Here's how:
Make sure that in every table you need a column like this is present:
ALTER TABLE yourTable ADD COLUMN created timestamp DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP;
The ON UPDATE clause is of course optional; I don't know if you need to keep track of changes. If so, leave me a comment and I can provide a solution with which you can keep a history of your data.
Now you need a small table that holds some meta information.
CREATE TABLE deltameta (tablename varchar(50), LSET timestamp, CET timestamp);
LSET is short for Last Successful Extraction Time, CET for Current Extraction Time.
When you get your data it works like this:
UPDATE deltameta SET CET = CURRENT_TIMESTAMP WHERE tablename = 'theTableFromWhichYouGetData';
SELECT @varLSET := LSET, @varCET := CET FROM deltameta WHERE tablename = 'theTableFromWhichYouGetData';
INSERT INTO yourReportTable (
SELECT whatever FROM aTable WHERE created >= @varLSET AND created < @varCET
);
UPDATE deltameta SET LSET = CET WHERE tablename = 'theTableFromWhichYouGetData';
If anything goes wrong during the insert, your script stops before LSET is advanced, so you get the same data the next time you run it. Additionally, you can work with transactions here if you need to roll back. Again, write a comment if you need help with this.
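If you want the extraction to be atomic, the same sequence can be wrapped in a transaction; a sketch, assuming InnoDB tables (names as in the snippets above):

START TRANSACTION;
UPDATE deltameta SET CET = CURRENT_TIMESTAMP WHERE tablename = 'theTableFromWhichYouGetData';
SELECT @varLSET := LSET, @varCET := CET FROM deltameta WHERE tablename = 'theTableFromWhichYouGetData';
INSERT INTO yourReportTable
  SELECT whatever FROM aTable WHERE created >= @varLSET AND created < @varCET;
UPDATE deltameta SET LSET = CET WHERE tablename = 'theTableFromWhichYouGetData';
COMMIT;   -- or ROLLBACK on error, so LSET stays put and the delta is retried on the next run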
I may be wrong, but you seem to be talking about a basic view. You can read an introduction to views here: http://techotopia.com/index.php/An_Introduction_to_MySQL_Views, and here are the mysql view docs: http://dev.mysql.com/doc/refman/5.0/en/create-view.html
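For example, a view over the report query might look like this; the table and column names are made up, since the real query isn't shown:

CREATE VIEW reportView AS
SELECT t1.var1, t2.var2
FROM schema1.table1 t1
JOIN schema2.table1 t2 ON t2.id = t1.id;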

multiple temporary tables?

This might be a basic question: I am using a temporary table in some of my php code like so:
CREATE TEMPORARY TABLE ttable( `d` DATE NOT NULL, `p` DECIMAL(11, 2) NOT NULL, UNIQUE KEY `date` (`d`) );
INSERT INTO ttable( d, p ) VALUES ( '$d' , '$p' );
SELECT * FROM ttable;
As we scale up our site, will this ever be a problem? I.e., will user1's ttable and user2's ttable ever get mixed up, so that user1 sees user2's ttable and vice versa? Is it better to create a unique name for each temporary table?
thx
Temporary tables are session-specific. Every time you connect to a host (in PHP, this is done with mysql_connect), temporary tables that you create exist only within that session/connection.
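For example, a temporary table created on one connection simply isn't visible on another; a small sketch:

-- session 1
CREATE TEMPORARY TABLE ttable ( `d` DATE NOT NULL, `p` DECIMAL(11, 2) NOT NULL );
SELECT * FROM ttable;   -- works, returns an empty result set

-- session 2 (a separate connection)
SELECT * FROM ttable;   -- fails: the table does not exist in this session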
It is almost always better to find a different way than using temporary tables.
The only time I would consider them is under the following conditions:
The activity is rare. Meaning, a given user MIGHT do this once a week.
It is used as a holding container prior to doing a regular full import of data.
It deals with data whose structure is unknown prior to being filled.
All three of those really go with building some type of generic bulk import routines where the data mapping is defined at run time.
If you find yourself creating temp tables frequently in the application, there's probably a better way.
Scalability is going to depend on the amount of data being loaded and the frequency of temp table usage. For a low-traffic site it might be okay.
We're in the process of ripping out a ton of temp table usage from a client's app. 90% of the queries in their system result in a temp table being created. Analysis of all the queries has shown that the original dev used this mechanism simply because they didn't understand SQL. We're doing this because performance has dropped off radically as new users are added to the system.
Can you post a use case? Maybe we can help provide an alternate mechanism.
UPDATE:
Now that we have a use case, here is a simple table structure to accomplish what you need.
Table ZipCodes
ZipCode char(5) [or char(10) depending on need]
CityName varchar(50)
*other columns as necessary such as latitude or whatever.
Table TempReadings
ZipCode char(5) [foreign key to the ZipCode table]
ReadingDate datetime
Temperature float (or some equivalent)
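In DDL, that could look something like this (a sketch; adjust types and extra columns as needed):

CREATE TABLE ZipCodes (
    ZipCode  char(5) PRIMARY KEY,
    CityName varchar(50)
    -- other columns as necessary, e.g. latitude
);

CREATE TABLE TempReadings (
    ZipCode     char(5) NOT NULL,
    ReadingDate datetime NOT NULL,
    Temperature float,
    FOREIGN KEY (ZipCode) REFERENCES ZipCodes(ZipCode)
);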
To get all the temp readings for a given zip code you would do something like:
select ZipCode, ReadingDate, Temperature
from TempReadings
where ZipCode = '12345'   -- the zip code you are interested in
if you need info from the main ZipCode table:
select Z.ZipCode, Z.CityName, TR.ReadingDate, TR.Temperature
from ZipCodes Z
inner join TempReadings TR on (TR.ZipCode = Z.ZipCode)
add where clauses as necessary. Note that none of the above requires having a separate table per zip code.