This question already has answers here:
SQL split values to multiple rows
(12 answers)
Closed 4 months ago.
I have data stored as text within a MySQL database that has values separated by newlines which I need to split into rows. A single text field might look like this:
---
'2022-06-19': no_capacity
'2022-06-20': no_capacity
'2022-06-21': available
'2022-06-22': available
...
Yes, this is all one single text field. There can be anywhere between 100 and 500 newlines, and the dates themselves are unpredictable.
I need each of these 'rows' within the entry to be returned as an actual row in a result set (and then the columns split on the colon, but that is less important). Can this be done using a MySQL query?
You should be able to adapt something similar to this answer, but splitting on a newline character instead. This can't be done with a plain MySQL query, but rather with a stored procedure similar to the answers in that post, which I'm not familiar with. Not sure if this helps; other answers may be necessary.
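For what it's worth, on MySQL 8+ you can avoid the stored procedure entirely with a recursive CTE. A sketch, assuming MySQL 8+, where `bookings` and `availability` are placeholder table/column names:

```sql
-- Peel one line off the front of the text field per recursion step.
WITH RECURSIVE split AS (
  SELECT id,
         SUBSTRING_INDEX(availability, '\n', 1) AS line,
         SUBSTRING(availability,
                   CHAR_LENGTH(SUBSTRING_INDEX(availability, '\n', 1)) + 2) AS rest
  FROM bookings
  UNION ALL
  SELECT id,
         SUBSTRING_INDEX(rest, '\n', 1),
         SUBSTRING(rest, CHAR_LENGTH(SUBSTRING_INDEX(rest, '\n', 1)) + 2)
  FROM split
  WHERE rest <> ''
)
SELECT id,
       -- split each line on the colon and strip the quotes around the date
       TRIM(BOTH '''' FROM SUBSTRING_INDEX(line, ':', 1)) AS date_key,
       TRIM(SUBSTRING_INDEX(line, ':', -1))               AS status
FROM split
WHERE line <> '';
```

With 100-500 lines per field this stays well under the default `cte_max_recursion_depth` of 1000; raise that session variable if your fields grow larger.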
This question already has answers here:
Making changes to multiple records based on change of single record with SQL
(3 answers)
Closed 5 years ago.
I'd like to save some data in a MySQL table. The rows have an order which I can modify at will. Say I have 10 rows: I might want to move the 10th row to the 5th position, or insert some more rows between the 2nd and 3rd positions. Then, with a viewer, I can fetch the data in the order I set. How can I implement such a table?
My idea was to save the order as a floating-point number in a new column. Each time I change the order, say move the 10th row between the 5th and 6th, I would take the order numbers of the 5th and 6th rows, compute their average, and update the order column of the 10th row. Then I can fetch the data with ORDER BY on that column. But I don't think it's a good idea. Any help with this problem?
You don't order the table itself; that makes no sense. You are actually not interested in the order the entries are physically stored in. Consider it random.
You are interested in the order you want to see the entries in. For that you create an additional column, call it "select_order" or "priority", whatever you like. In it you store simple integers which describe the order you want to see the entries in.
Now you can "re-order" the entries however you like by changing those numbers in that order column. At query time you add an ORDER BY select_order clause to your SELECT query and will receive the entries in exactly the order you want.
This is the standard approach for relational database models. Which does not mean that there are no other approaches that might be interesting to look into for very special situations:
a priority table instead of a column, joined in during the SELECT query. This might make sense for situations with many more write operations than reads. Note the many, however.
a multiple-column approach for situations where you can group entries and only re-order inside such groups. That dramatically reduces the number of entries you have to update in case of re-ordering.
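A minimal sketch of the integer-column approach, using a hypothetical `items` table with an `id` primary key:

```sql
-- Add the ordering column (one-time setup).
ALTER TABLE items ADD COLUMN select_order INT NOT NULL DEFAULT 0;

-- Move the row with id = 10 between positions 5 and 6:
-- shift everything at position 6 and later down by one, then take slot 6.
UPDATE items SET select_order = select_order + 1 WHERE select_order >= 6;
UPDATE items SET select_order = 6 WHERE id = 10;

-- Read the entries back in the chosen order.
SELECT * FROM items ORDER BY select_order;
```

The shift-then-assign pattern touches more rows than the float-averaging idea, but it keeps the column as plain integers and never runs out of precision between two neighbors.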
I have job in Talend that is designed to bring together some data from different databases: one is a MySQL database and the other a MSSQL database.
What I want to do is match a selection of loan numbers from the MySQL database (about 82,000 loan numbers) to the corresponding information we have housed in the MSSQL database.
However, the tables in MSSQL to which I am joining the data from MySQL are much larger (~ 2 million rows), are quite wide, and thus cost much more time to query. Ideally I could perform an inner join between the two tables based on the loan number, but since they are in different databases this is not possible. The inner join that is performed inside a tMap occurs after the Lookup input has already returned its data set, which is quite large (especially since this particular MSSQL query will execute a user-defined function for each loan number).
Is there any way to create a global variable out of the output from the MySQL query (namely, the loan numbers selected by the MySQL query) and use that global variable as an IN clause in the MSSQL query?
This should be possible. I'm not working in MySQL but I have something roughly equivalent here that I think you should be able to adapt to your needs.
I've never actually answered a Stack Overflow question before, and while I was typing this the page told me I need at least 10 reputation to post more than 2 pictures/links, and I think I need 4 pictures. So I'm just going to write it out in words here and post the whole thing, complete with illustrations, on my blog in case you need more info (quite likely, I should think!).
As you can see, I've got some data coming out of the table and getting filtered by tFilterRow_1 to only show the rows I'm interested in.
The next step is to limit the flow to just the field I want to use in the variable. I've used tMap_3 rather than a tFilterColumns because the field I'm using is a string and I wanted to be able to concatenate single quotes around it; if you're using an integer you might not need to do that. And of course, if you have a lot of repetition you might also want to get a tUniqRow in there as well, to save a lot of unnecessary duplication.
The next step is the one that does the magic. I've got a list like this:
'A1'
'A2'
'B1'
'B2'
etc, and I want to turn it into 'A1','A2','B1','B2' so I can slot it into my where clause. For this, I've used tAggregateRow_1, selecting "list" as the aggregate function to use.
Next up, we want to take this list and put it into a context variable (I've already created the context variable in the metadata; you know how to do that, right?). Use another tMap component, feeding into a tContextLoad. tContextLoad always has two columns in its schema, so map the output of tAggregateRow_1 to the "value" column and enter the name of the variable in the "key" column. In this example, my context variable is called MyList.
Now your list is loaded as a single text string and stored in the context variable, ready for retrieval. So open up a new database input component and embed the variable in the SQL code like this:
"SELECT DISTINCT MY_COLUMN
 FROM MY_SECOND_TABLE
 WHERE the_selected_row IN (" + context.MyList + ")"
It should be as easy as that, and when I whipped it up it worked first time, but let me know if you have any trouble and I'll see what I can do.
I have a MySQL table with 4 million records containing a field with values like "hello#xyz.com22-03-2015". The concatenated date is not fixed across the 4 million records. I am wondering how I can remove the numbers, or any other string, after #xyz.com using MySQL. One possible solution would involve regular expressions, but I know that MySQL does not allow replace using regex, so I am wondering how this particular task can be completed. I want to remove everything after #xyz.com.
Many thanks
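One possible approach, sketched with placeholder names (a `users` table with an `email` column): since the part you want to keep always ends in '.com', SUBSTRING_INDEX can cut at the first occurrence, with no regex needed and no MySQL 8 requirement:

```sql
-- Keep everything up to and including the first '.com'; discard the rest.
-- The WHERE clause ('_' matches exactly one character) limits the update
-- to rows that actually have trailing junk after '.com'.
UPDATE users
SET email = CONCAT(SUBSTRING_INDEX(email, '.com', 1), '.com')
WHERE email LIKE '%.com_%';
```

Run it inside a transaction (or against a copy of the table) first, given the 4 million rows; on MySQL 8+ a REGEXP_REPLACE would also work, but the SUBSTRING_INDEX form is simpler and faster here.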
This question already has answers here:
Find and replace entire MySQL database
(12 answers)
Closed 9 years ago.
I have a WordPress website and I'm going to change its URL. The problem is that when I search for the current URL in the database, I get about 200 results.
I would like to search the whole database for the current URL and replace it with the new one.
I know that if I have the exact table and exact column, I can do :
UPDATE Table
SET Column = REPLACE(Column, 'find value', 'replacement value')
But how can I generalize this code to my whole database?
Thank you!
Perhaps not the best solution, but one option is to export the whole database, open the SQL file in a text editor, perform the search-and-replace there, and import it again.
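Alternatively, a sketch that stays inside MySQL: let information_schema generate one UPDATE per text column, then review and run the generated statements (`my_db` and the two URLs are placeholders):

```sql
-- Emit an UPDATE statement for every string-typed column in the schema.
SELECT CONCAT(
  'UPDATE `', TABLE_NAME, '` SET `', COLUMN_NAME,
  '` = REPLACE(`', COLUMN_NAME,
  '`, ''http://old.example.com'', ''http://new.example.com'');'
) AS stmt
FROM information_schema.COLUMNS
WHERE TABLE_SCHEMA = 'my_db'
  AND DATA_TYPE IN ('char', 'varchar', 'tinytext', 'text', 'mediumtext', 'longtext');
```

One WordPress-specific caveat: some options are stored as serialized PHP arrays that embed string lengths, so a blind REPLACE can corrupt them; tools like WP-CLI's `wp search-replace` handle the serialized case safely.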
This question already has an answer here:
How to get Available lookup columns in Lookup transformation into the Lookup no match output path in SSIS? [closed]
(1 answer)
Closed 9 years ago.
Can anyone suggest how to get the Available Lookup Columns in a Lookup transformation into the Lookup No Match Output path?
Please suggest any way I can get those columns into the No Match output.
Thanks in advance.
I don't think you can. Perhaps use a Derived Column transform before the Lookup to add the lookup columns, then do a "replace" on the Lookup matches to overwrite those columns where there is data.
The No Match output will then have the columns, because they are already in the data flow.
Hope this makes sense.