I have a query
SELECT ultrait_wpl_properties.id, location1_name, location3_name, location4_name,
field_312, field_42, post_code, lot_area, price, bedrooms, bathrooms, field_308,
googlemap_lt, googlemap_ln, street, street_no, ultrait_wpl_property_types.name,
ultrait_wpl_items.item_name
FROM ultrait_wpl_properties
JOIN ultrait_wpl_property_types
ON ultrait_wpl_properties.property_type = ultrait_wpl_property_types.id
JOIN ultrait_wpl_items ON ultrait_wpl_properties.id = ultrait_wpl_items.id
ORDER BY ultrait_wpl_properties.id
As you can see, I'm using JOINs. When it comes to inserting a new record, is there a way to create an INSERT statement that will incorporate these joins?
What I mean is: if I have the data for all the fields specified in the above query, can I write a single INSERT that puts the data into the correct tables? I am researching this myself as well, but if anyone could provide any insight it would be appreciated.
No.
Or the slightly longer answer: no, the SQL standard does not provide this option.
There is a workaround: you can create a STORED PROCEDURE that accepts the input and splits it into multiple INSERT statements. However, this solution will not have any of the flexibility of standard SQL statements.
It is also not a simple matter: with three tables joined by keys, a given set of input values may just as well call for two UPDATEs and one INSERT as for three INSERTs.
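A minimal sketch of that stored-procedure workaround, assuming MySQL and the table and column names from the query above (the parameter list is trimmed for illustration; a real version would cover all your fields):
DELIMITER //
CREATE PROCEDURE insert_property(
    IN p_type_name VARCHAR(255),
    IN p_item_name VARCHAR(255),
    IN p_price DECIMAL(12,2)
)
BEGIN
    DECLARE v_type_id INT;
    DECLARE v_prop_id INT;
    -- Reuse an existing property type, or create it.
    SELECT id INTO v_type_id
    FROM ultrait_wpl_property_types
    WHERE name = p_type_name
    LIMIT 1;
    IF v_type_id IS NULL THEN
        INSERT INTO ultrait_wpl_property_types (name) VALUES (p_type_name);
        SET v_type_id = LAST_INSERT_ID();
    END IF;
    INSERT INTO ultrait_wpl_properties (property_type, price)
    VALUES (v_type_id, p_price);
    SET v_prop_id = LAST_INSERT_ID();
    -- The SELECT above joins ultrait_wpl_items on the property id.
    INSERT INTO ultrait_wpl_items (id, item_name)
    VALUES (v_prop_id, p_item_name);
END//
DELIMITER ;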
I have a MySQL database for an investor to track his investments:
the 'deal' table has info about the investments, including different categories for the investment (asset_class).
Another table ('updates') tracks updates on a specific investment (investment name, date, and lots of financial details.)
I want to write a query that allows the user to select all updates from 'updates' under a specific asset_class. However, as mentioned, asset_class lives in the 'deal' table. I wrote the following query:
SELECT *
FROM updates
WHERE updates.invest_name IN (SELECT deal.deal_name
FROM deal
WHERE deal.asset_class = '$asset_class'
);
I'm using PHP, so $asset_class is the selected variable of asset_class.
However, the query only returns unique update names, but I want to see ALL updates for the given asset_class, even if several updates are made under one investment name.
Any advice? Thanks!
Your query should do what you intend. In general, though, this type of query would be written using a JOIN. More importantly, use parameter placeholders instead of munging query strings:
SELECT u.*
FROM updates u
JOIN deal d ON u.invest_name = d.deal_name
WHERE d.asset_class = ?;
This can take advantage of indexes on deal(asset_class, deal_name) and updates(invest_name).
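For example (the index names are illustrative):
CREATE INDEX deal_asset_class_deal_name ON deal (asset_class, deal_name);
CREATE INDEX updates_invest_name ON updates (invest_name);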
The ? represents a parameter that you pass into the query when you run it. The exact syntax depends on how you are making the call.
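If you want to try the parameterized query directly in the mysql client, a server-side prepared statement does the same job (the asset class value here is made up):
PREPARE upd_by_class FROM
    'SELECT u.* FROM updates u JOIN deal d ON u.invest_name = d.deal_name
     WHERE d.asset_class = ?';
SET @class = 'real_estate';
EXECUTE upd_by_class USING @class;
DEALLOCATE PREPARE upd_by_class;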
I have a few tables in SQL that require content filtering, primarily for profanity. I want to allow my application(s) to insert data they want and have the server replace any profanity with asterisks such that I do not need to implement filtering on a variety of platforms.
I know triggers could be used for future inserts; however, I am trying to determine the most efficient way to complete this task.
Here are some details:
There are 2 tables I need to ensure have content filtering, as they are public-facing: feedback and users. Here are the particular fields:
Table -> Fields
Feedback -> Subject, Message
Users -> Firstname, Lastname, Alias
I am relatively new to MySQL and know that having a table of values to replace may be the easiest-to-modify option.
My question is:
How would I join 2 tables and replace particular chars with asterisks using key words located in a third table?
I have these queries so far to locate the columns of interest, just not sure how to incorporate the replacement function and the ability to check both at the same time:
SELECT u.firstname, u.lastname, u.username FROM users u, feedback f, terms t;
SELECT f.subject, f.message FROM feedback f;
You are better off creating a new column (named alias or similar) and storing the already-filtered values, asterisks included, in it than writing a SELECT query that performs the find-and-replace. The advantages:
Handling this in a trigger means the operation runs only when a record is inserted or updated, whereas doing it in a SELECT query means every read repeats the replacement.
You can't really use a join here because (a) each value in the feedback and users tables needs to be compared with every value in the terms table, and (b) this needs to be done for every column that might contain these words. So it's more a use case for a cursor than for a join.
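For illustration, a minimal sketch of the trigger route for the feedback table, assuming the terms table has a single word column (the users table would need an analogous trigger):
DELIMITER //
CREATE TRIGGER feedback_filter BEFORE INSERT ON feedback
FOR EACH ROW
BEGIN
    DECLARE done INT DEFAULT 0;
    DECLARE w VARCHAR(64);
    DECLARE cur CURSOR FOR SELECT word FROM terms;
    DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;
    OPEN cur;
    word_loop: LOOP
        FETCH cur INTO w;
        IF done THEN LEAVE word_loop; END IF;
        -- Replace each listed word with asterisks of the same length.
        SET NEW.subject = REPLACE(NEW.subject, w, REPEAT('*', CHAR_LENGTH(w)));
        SET NEW.message = REPLACE(NEW.message, w, REPEAT('*', CHAR_LENGTH(w)));
    END LOOP;
    CLOSE cur;
END//
DELIMITER ;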
I have 2 tables, users(~1000 users) and country(~50 countries). A user can support many countries so I am planning to create a mapping table, user_country. However, since I have 1000 users and 50 countries, I will have a maximum of 50000 entries for this table. Is this the best way to implement this or is there a more appropriate method for this?
If this is the best way, how can I add a user supporting many countries to this table using only one SQL statement?
For ex:
INSERT INTO user_country(userid, countrycode)
VALUES ('user001','US'),
('user001','PH'),
('user001','KR'),
('user001','JP')
The above SQL statement will get very long if a user supports all 50 countries, and I have to do this for 1000 users. Does anyone have ideas on the most efficient way to implement this?
From the point of view of database design, a table like your user_country is the only sensible way to go. 50000 records are a breeze for MySQL, and storing them this way, with the appropriate indexes, keeps every future use of the data open.
As far as I can see, this is unrelated to the problem of many large SQL insert statements. No matter how you represent the data in the database, you will have to write statements containing, for each user, a list of countries.
This is a one-time action, right? So it doesn't need to be a masterpiece in software engineering. What I sometimes do is load the raw data in Excel, line by line, then write a formula that "calculates" the appropriate SQL statement for the first line, and copy this formula for all lines. Then throw all these statements at the database. Even if there are tens of thousands of them, it's not much effort.
Personally I'd do the insert based on a select:
INSERT INTO user_country
SELECT 'user001', countryid
FROM countries
WHERE countryid IN ('US', 'PH', 'KR', 'JP');
You will need to adapt this to your column names.
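For example, with explicit column names, assuming user_country(userid, countrycode) as in your INSERT and a countries table keyed by countrycode:
INSERT INTO user_country (userid, countrycode)
SELECT 'user001', countrycode
FROM countries
WHERE countrycode IN ('US', 'PH', 'KR', 'JP');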
The alternative, storing the list of countries in a single column such as usercountries varchar(255) holding US,FR,KR and so on, would be possible as well, but you'd hurt the ability to select users based on the country they support. Strictly speaking you don't lose it, but
SELECT * FROM users WHERE usercountries like '%KR%';
is not a good query in terms of index usage, because the leading wildcard rules out an index. Still, as you only have 1000 users, a table scan will be mighty quick as well.
I want to try and keep this as one query and not use PHP, but it's proving to be tough.
I have a table called applications, that stores all the applications and some basic information about them.
Then I have a table with all the types of applications in it, and that table contains a reference to another table which stores more detailed data about that particular type of application.
select applications.id as appid, applications.category, type.title as type, type.id as tid, type.valuefld, type.tablename
from applications
left join type on applications.typeid=type.id
left join department on type.deptid=department.id
where not isnull(work_cat)
and work_cat != ''
and applications.deleted=0
and datei between '10-04-14' and '11-04-14'
order by type, work_cat
Now, in the old version, there is another query on every single result. Over hundreds of results... that sucks.
This is the query I'd like to integrate so I can get all the data in one result row. (Old is ASP, I'm re-writing it in PHP)
query = "select sum("&adors.fields("valuefld")&") as cost, description from "&adors.fields("tablename")&" where appid = '"&adors.fields("tablename")&"'"
Prepared statements, I'm aware, are the best solution, but for now they are not an option.
You can't do this with a plain SQL query - you need to have a defined set of tables that your query is based on. The fact that your current implementation queries from whatever table is named by tablename from the first result-set means that to get this all in one query, you will have to restructure your data. You have to know what tables you're querying from rather than having it dynamic.
If the reason for these different tables is that each application type stores different information requiring a different record (column) structure, you might want to look into key/value pair storage in one large table. Once you combine the dynamically named tables into a single location, you can integrate your two queries.
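As a rough sketch of the key/value idea (the table and column names here are made up, not from your schema):
CREATE TABLE application_details (
    appid INT NOT NULL,
    field VARCHAR(64) NOT NULL,   -- e.g. 'cost' or 'description'
    value VARCHAR(255) NOT NULL,
    PRIMARY KEY (appid, field)
);
The per-row lookups then collapse into one join:
SELECT a.id AS appid, d.field, d.value
FROM applications a
JOIN application_details d ON d.appid = a.id;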
I am a bit rusty with MySQL and trying to jump in again, so sorry if this is too easy a question.
I basically created a data model that has a table called "Master" with required fields of a name and an IDcode, and then a "Details" table with a foreign key of IDcode.
Now here's where it's getting tricky. I am entering:
INSERT INTO Details (Name, UpdateDate) VALUES (name, updateDate)
I get an error saying IDcode on Details doesn't have a default value; when I add one, it then complains that Field 'Master_IDcode' doesn't have a default value.
It all makes sense, but I'm wondering if there's an easy way to do what I am trying to do. I want to add data into Details, and if no IDcode exists, I want to add an entry to the Master table. The problem is that I first have to add the name to the Master table, wait for a unique ID to be generated (for IDcode), look that value up, and only then add it to the query that inserts into Details. As you can imagine, the queries are probably going to get quite long, since I have many tables.
Is there an easier way, where every time I add something it checks by name whether the foreign key exists and, if not, adds it to all the tables it is linked to? Is there a standard way people do this? I can't imagine that, with all the complex databases out there, people haven't figured out an easier way.
Sorry if this question doesn't make sense. I can add more information if needed.
P.S. This may be a different question, but I have heard of Django for Python and that it helps create queries. Would it help my situation?
Thanks so much in advance :-)
(decided to expand on the comments above and put it into an answer)
I suggest creating a set of staging tables in your database (one for each data set/file).
Then use LOAD DATA INFILE (or insert the rows in batches) into those staging tables.
Make sure you drop indexes before the load, and re-create what you need after the data is loaded.
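For example (the file path and column list are placeholders for your actual data):
LOAD DATA INFILE '/tmp/import.csv'
INTO TABLE staging_table
FIELDS TERMINATED BY ',' ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 LINES
(country_code, colA, colB, colC);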
You can then make a single pass over the staging table to create the missing master records. For example, let's say that one of your staging tables contains a country code that should be used as a masterID. You could add the missing master records by doing something along the lines of:
insert
into master_table(country_code)
select distinct s.country_code
from staging_table s
left join master_table m on(s.country_code = m.country_code)
where m.country_code is null;
Then you can proceed and insert the rows into the "real" tables, knowing that all detail rows reference a valid master record.
If you need to get reference information along with the data (such as translating some code) you can do this with a simple join. Also, if you want to filter rows by some other table this is now also very easy.
insert
into real_table_x(
     `key`   -- KEY is a reserved word in MySQL, hence the backticks
    ,colA
    ,colB
    ,colC
    ,computed_column_not_present_in_staging_table
    ,understandableCode
)
select x.`key`
      ,x.colA
      ,x.colB
      ,x.colC
      ,(x.colA + x.colB) / x.colC
      ,c.understandableCode
  from staging_table_x x
  join code_translation c on(x.strange_code = c.strange_code);
This approach is a very efficient one and it scales very nicely. Variations of the above are commonly used in the ETL part of data warehouses to load massive amounts of data.
One caveat with MySQL is that it doesn't support hash joins, a join mechanism very well suited to fully joining two tables. MySQL uses nested loops instead, which means that you need to index the join columns very carefully.
InnoDB tables with their clustering feature on the primary key can help to make this a bit more efficient.
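For example, for the country-code staging pass above, something like:
CREATE INDEX staging_country_code ON staging_table (country_code);
together with a primary key or unique key on master_table(country_code), keeps the nested-loop join cheap.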
One last point. When you have the staging data inside the database, it is easy to add some analysis of the data and put aside "bad" rows in a separate table. You can then inspect the data using SQL instead of wading through csv files in your editor.
I don't think there's a one-step way to do this.
What I do is issue a
INSERT IGNORE INTO master (..) VALUES (..)
to the master table, which will either create the row if it doesn't exist or do nothing, and then issue a
SELECT id FROM master WHERE someUniqueAttribute = ..
The other option would be stored procedures/triggers, but they are still pretty new in MySQL and I doubt whether this would help performance.
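Putting the two steps together for the schema in the question (a sketch; it assumes Master.Name has a UNIQUE constraint, which is what lets INSERT IGNORE skip duplicates):
INSERT IGNORE INTO Master (Name) VALUES ('Some Fund');
SELECT IDcode INTO @id FROM Master WHERE Name = 'Some Fund';
INSERT INTO Details (Master_IDcode, Name, UpdateDate)
VALUES (@id, 'Some Fund', NOW());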