MySQL UPDATE adding row instead of updating

I have a shell script that issues a SQL query to replace a filename in the database with another filename. It also replaces the full path by concatenating the name of the file with the name of the directory stored in a different table. It works for me, but when another user tries it on another database (identical structure, different data), it adds a new record instead of updating the existing one. I've searched but can't figure out why it works for me and not for them; all I can think of is that it might be a permission issue. Here is the query:
UPDATE My.files
SET strFilename = "'$NFescaped'"
WHERE strFilename = "'$DFescaped'";

UPDATE My.episode
SET c18 =
    (SELECT concat(
        (SELECT strPath
         FROM My.path
         WHERE path.idpath =
             (SELECT idpath
              FROM My.files
              WHERE strFilename = "'$NFescaped'")), "'$NFescaped'"))
WHERE c18 =
    (SELECT concat(
        (SELECT strPath
         FROM My.path
         WHERE path.idpath =
             (SELECT idpath
              FROM My.files
              WHERE strFilename = "'$NFescaped'")), "'$DFescaped'"));

Related

SELECT * FROM db1 WHERE db1.table.value = db2.table.value

I'm working with a MySQL db and trying to display the correct data for the user. To do that, I check whether the data I fetch from one backend equals the username from another backend, like so:
SELECT * FROM db1 WHERE db1.table.value = db2.table.value
The names of the databases are A and B.
SELECT *
FROM `A.onboardings`
, `B.loginsystem`
WHERE onboardings.sales_email = loginsystem.username
The problem is I get the errors "A.A.onboardings doesn't exist" and "A.B.loginsystem doesn't exist". Please help :(
You must use this form: FROM `A`.`onboardings`
You have to put the backticks in the right place, or else MySQL thinks your table is literally called "A.onboardings".
As seen below, the backticks need to go around the database name and the table name separately.
Using aliases also helps to keep a good overview even in big queries, and you have to write less:
"SELECT * FROM `A`.`onboardings` a1, `B`.`loginsystem` b1 WHERE a1.sales_email = b1.username"
Try this one (change the query according to your DB names, tables, and matching column names):
SELECT * FROM mydatabase1.tblUsers INNER JOIN mydatabase2.tblUsers ON mydatabase1.tblUsers.UserID = mydatabase2.tblUsers.UserID
The problem is that
`A.onboardings`
is not the same as
A.onboardings
The first is a table reference where the table name itself contains a period. The second refers to the onboardings table in database A.
In addition, you should be using JOIN!!!
SELECT *
FROM A.onboardings o JOIN
B.loginsystem ls
ON o.sales_email = ls.username;
If you feel compelled to escape the identifiers -- which I do not recommend -- then:
SELECT *
FROM `A`.`onboardings` o JOIN
`B`.`loginsystem` ls
ON o.sales_email = ls.username;

Getting the input List of a WHERE IN Filter for SQL QUERY From A File or a Local Table

I have a simple SQL query. However, the query has a WHERE filter that takes a list.
The list contains at least 2000 items, and it is becoming extremely inconvenient to put the long list into the query itself.
I was trying to find out whether I can create a table/file and reference that in the query instead.
EXAMPLE CODE:
Select * from Table_XXXX where aa = 'yy' and date > zzz and mylist = [..............]
So instead of the list above, I would like to reference a file (local) in which the elements of the list reside, or a table (local, not in the database) in which the elements are in a column...
Any help will be appreciated.
First you would store the contents of the file/list in a table. After that you can use an IN condition:
create table mylist(x int);
insert into mylist values(<all values in your file>);
select *
from Table_XXXX tt
where tt.aa = 'yy'
and tt.date > zzz
and tt.mylist IN (select x from mylist)
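To get the file contents into that table in the first place, a small script can generate the INSERT from a one-id-per-line file. This is only a sketch: the file name "mylist.txt" and the table/column names are illustrative stand-ins for whatever you actually use.

```ruby
# Sketch: build the INSERT for the helper table from a one-id-per-line file.
# "mylist.txt" and the mylist(x) table are illustrative names.
File.write("mylist.txt", "101\n102\n103\n")  # stand-in for the real 2000-item file

ids = File.readlines("mylist.txt", chomp: true).reject(&:empty?)
insert_sql = "INSERT INTO mylist (x) VALUES " +
             ids.map { |v| "(#{Integer(v)})" }.join(", ") + ";"
puts insert_sql
```

If the file lives on the database server and file access is permitted, MySQL's LOAD DATA INFILE can load such a file into the table directly instead.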

Query mysql table using ruby by reading filter data from file

I have to run a query against a table (tableA) and fetch 10 million or so ids (varchar), then use those ids in the WHERE clause of a query against a different table (tableB) on a different server. I am doing this using Ruby.
Here is what I currently have:
I fetch the required ids from tableA using the code below and write them to a txt file:
system("#{queryToExecute} -e \"#{queryFromSource} \"> #{csvfilename}.txt")
queryToExecute is a straightforward SELECT statement on tableA:
Sample rows from the txt file
7fa6530f-309e-435f-a64b-099514bccfb3
b408db72-2929-4121-a6fb-5589520914fb
942eb682-2ce0-4462-b773-144e9255e37c
ed3adc78-7fa3-423c-932b-b97a46b2fa07
I then do some cleanup on the text file with results using:
system("sed '1d' #{csvfilename}.txt > #{csvfilename}_temp.txt;mv #{csvfilename}_temp.txt #{csvfilename}.txt")
system("paste -d',' -s #{csvfilename}.txt > #{csvfilename}_temp.txt;mv #{csvfilename}_temp.txt #{csvfilename}.txt")
filedata = File.read(csvfilename + ".txt")
filedata = filedata.gsub(",", "','")  # gsub, not gsub!, which returns nil when nothing matches
filedata = filedata.gsub("\n", "")
filedata = "'" + filedata + "'"
@ids = filedata
After cleanup, this is what I have in @ids:
'7fa6530f-309e-435f-a64b-099514bccfb3','b408db72-2929-4121-a6fb-5589520914fb','942eb682-2ce0-4462-b773-144e9255e37c','ed3adc78-7fa3-423c-932b-b97a46b2fa07'
Now I substitute @ids into my new query and execute it the same way as above.
My final query looks something like this:
SELECT col1, col2 FROM Table2 WHERE ids IN (#{@ids})
The problem I am facing is that when there are millions of ids to search, it does not return the complete result set. However, if I limit the ids from tableA, I get the required results.
Since the query is huge, I am not able to print and validate it.
Is there a better way to do this?
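One way to sidestep the single giant IN list, sketched under the question's own names (Table2, col1, col2) and untested against the real tables, is to issue the query in fixed-size batches and combine the results afterwards:

```ruby
# Sketch: split the id list into fixed-size batches and build one query per
# batch, instead of interpolating millions of ids into a single statement.
ids = %w[7fa6530f b408db72 942eb682 ed3adc78]  # sample ids; the real list is huge
queries = ids.each_slice(2).map do |batch|     # use e.g. 10_000 per batch in practice
  quoted = batch.map { |id| "'#{id}'" }.join(",")
  "SELECT col1, col2 FROM Table2 WHERE ids IN (#{quoted})"
end
# queries.each { |sql| system(%Q{mysql -e "#{sql}" >> results.txt}) }  # run each batch
```

Each statement then stays well below any client or server limits on query length, which is one plausible reason the single huge query returns incomplete results.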

Update multiple rows with corresponding values from another table MySQL

Using MySQL, I'm working on a script that imports data from a CSV file. I've gotten to the point where the script works for a single user; now I want to extend it to all users. A statement I currently have is the following:
UPDATE werte
SET werte=(SELECT Date_Enrollment
FROM THKON01.data
WHERE auto_patient_id = 1020)
WHERE folder_id = 1525
AND number=4;
Now what I want is to use the enrollment dates of all users (so I would omit the "WHERE auto_patient_id ..." clause) and insert them into all corresponding rows. Here lies the problem. I tried it for two users at once with the statement
UPDATE werte
SET werte=(SELECT Date_Enrollment
FROM THKON01.data
WHERE auto_patient_id = 1020
OR auto_patient_id = 1051)
WHERE folder_id between 1524 AND 1525
AND number=4;
However, this gave me an error saying the query returned multiple rows, referring to the inner SELECT Date_Enrollment subquery.
Note that the auto_patient_id's are not sequentially numbered, so I can't use a "between" there.
EDIT: For clarification
I have two tables. One, werte, is where I want the values to be stored to. THKON01.data is the table I want to read the values from. In case of this example, I want the Date_Enrollment values to be written into the werte table. Let's say I have 3 users I want to do this for, then the structure for THKON01.data looks like this:
auto_patient_id | Date_Enrollment
1020 | 01.01.1911
1050 | 02.01.1912
1073 | 03.01.1913
... | ...
Now I want to insert this into the werte table which looks like this:
folder_id | werte
1525 | <empty>
1526 | <empty>
1527 | <empty>
... | ...
I want them to be inserted so that the first value of THKON01.data (01.01.1911) is copied to the first row in werte (folder_id 1525), the second (02.01.1912) goes to the second (folder_id 1526), and so forth. folder_id is sequentially numbered; auto_patient_id is not. I hope that clarifies this a little.
If you have some link between the fields auto_patient_id and folder_id, you can try something like this:
UPDATE werte
SET werte=(SELECT Date_Enrollment
FROM THKON01.data
WHERE (your_link))
WHERE number=4;
Here your_link can be something like THKON01.data.auto_patient_id = werte.somefield or THKON01.data.auto_patient_id = somefunction(werte.folder_id).
It will select only one record at a time and update all the records that fall under the outer WHERE condition.
Update:
If you want to use a bash script, you can use something like this:
folder_id=1  # or some other start number
# -N suppresses the column-header line so only dates are read
mysql -N -e "SELECT Date_Enrollment FROM THKON01.data" | while read Date_Enrollment; do
  mysql -e "UPDATE werte SET werte = '$Date_Enrollment' WHERE folder_id = $folder_id"
  folder_id=$((folder_id + 1))
done
You said that your folder ids are in order, so we can just add 1 each time instead of fetching them from the result.
I'm absolutely not proficient with bash scripting, so this script may not work as-is, but I hope the idea is clear.
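The same idea in Ruby, for reference. This is only a sketch: the table and column names are taken from the question, and the dates are sample values standing in for the output of SELECT Date_Enrollment FROM THKON01.data.

```ruby
# Sketch: pair each enrollment date (in auto_patient_id order) with a
# sequential folder_id and emit one UPDATE per date.
dates = ["01.01.1911", "02.01.1912", "03.01.1913"]  # sample query output
first_folder = 1525
updates = dates.each_with_index.map do |d, i|
  "UPDATE werte SET werte = '#{d}' WHERE folder_id = #{first_folder + i} AND number = 4;"
end
# updates.each { |sql| system(%Q{mysql -e "#{sql}"}) }  # run the statements
```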

How can I sanitize my DB from these duplicates

I have a table with the following fields:
id | domainname | domain_certificate_no | keyvalue
An example for the output of a select statement can be as:
'57092', '02a1fae.netsolstores.com', '02a1fae.netsolstores.com_1', '55525772666'
'57093', '02a1fae.netsolstores.com', '02a1fae.netsolstores.com_2', '22225554186'
'57094', '02a1fae.netsolstores.com', '02a1fae.netsolstores.com_3', '22444356259'
'97168', '02aa6aa.netsolstores.com', '02aa6aa.netsolstores.com_1', '55525772666'
'97169', '02aa6aa.netsolstores.com', '02aa6aa.netsolstores.com_2', '22225554186'
'97170', '02aa6aa.netsolstores.com', '02aa6aa.netsolstores.com_3', '22444356259'
I need to sanitize my db as follows: I want to remove the domain names whose first certificate (domain_certificate_no ending in _1) has a repeated keyvalue. In this example, I look at the row with domain_certificate_no 02aa6aa.netsolstores.com_1; since it is number 1 and its keyvalue is repeated, I want to remove the whole chain (02aa6aa.netsolstores.com_2 and 02aa6aa.netsolstores.com_3) by deleting the domain name the chain belongs to, which is 02aa6aa.netsolstores.com.
How can I automate the checking process for the whole db? So I need a query that checks every domain name matching the pattern '%.%.%' EDIT: AND sharing a domain name (in this example, netsolstores.com); if it finds that certificate no. 1 for that domain name has a repeated keyvalue, then delete; otherwise, no. Please note that it is OK for a domain_certificate_no to have a repeated keyvalue if it is not number 1.
EDIT: I only compare the repeated values for the same second-level domain name. E.g., in this question I compare the values that share the domain name .netsolstores.com. If I have another domain name with sub-level domains, I do the same. The point is that I don't need to compare the whole db, only the values with a shared domain name (but different subdomains).
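To make the rule concrete, here is a small sketch of the keep/delete decision applied to the sample rows above (illustrative data only; this only demonstrates the intended logic, not the final SQL):

```ruby
# Among the "_1" certificates, group by keyvalue; the domain with the lowest
# id keeps its chain, and every other domain sharing that keyvalue is
# deleted whole (all its _1, _2, _3, ... rows).
rows = [
  { id: 57092, domain: "02a1fae.netsolstores.com", cert: "02a1fae.netsolstores.com_1", key: "55525772666" },
  { id: 97168, domain: "02aa6aa.netsolstores.com", cert: "02aa6aa.netsolstores.com_1", key: "55525772666" },
]
firsts = rows.select { |r| r[:cert].end_with?("_1") }
keep = firsts.group_by { |r| r[:key] }
             .map { |_, rs| rs.min_by { |r| r[:id] }[:domain] }
delete_domains = firsts.map { |r| r[:domain] } - keep
# delete_domains lists the domains whose whole chains should be removed
```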
I'm not sure what happens with '02aa6aa.netsolstores.com_1' in your example.
The following keeps only the minimum id for any repeated key:
with t as (
select t.*,
substr(domain_certificate_no,
instr(domain_certificate_no, '_') + 1, 1000) as version,
left(domain_certificate_no, instr(domain_certificate_no, '_') - 1) as dcn
from t
)
select t.*
from t join
(select keyvalue, min(dcn) as mindcn
from t
group by keyvalue
) tsum
on t.keyvalue = tsum.keyvalue and
t.dcn = tsum.mindcn
For the data you provide, this seems to do the trick. This will not return the "_1" version of the repeats. If that is important, the query can be pretty easily modified.
Although I prefer to be more positive (thinking about the rows to keep rather than delete), the following should delete what you want:
with tt as (
      select t.*,
             substr(domain_certificate_no,
                    instr(domain_certificate_no, '_') + 1, 1000) as version,
             left(domain_certificate_no, instr(domain_certificate_no, '_') - 1) as dcn
      from t
     ),
     tokeep as (
      select tt.*
      from tt join
           (select keyvalue, min(dcn) as mindcn
            from tt
            group by keyvalue
           ) tsum
           on tt.keyvalue = tsum.keyvalue and
              tt.dcn = tsum.mindcn
     )
delete from t
where t.id not in (select id from tokeep)
There are other ways to express this that are possibly more efficient (depending on the database). This, though, keeps the structure of the original query.
By the way, when trying new DELETE code, be sure that you stash a copy of the table. It is easy to make a mistake with DELETE (and UPDATE). For instance, if you leave out the WHERE clause, all the rows will disappear, after the long painful process of logging all of them. You might find it faster to simply select the desired results into a new table, validate them, then truncate the old table and re-insert them.