Extract character from db to another db with all items - mysql

I tried to copy the data related to the guid of a character to another db (same account), but it always appears without any items.
I am using DELETE/INSERT.
This is from the inventory table, for example:
DELETE FROM `character_inventory` WHERE `item`=item;
INSERT INTO `character_inventory` VALUES (guid, bag, slot, item);
After every export/import the character appears without items: no equipment and no inventory.
PS: I executed the queries with the server turned off and on, with the same result.

You can use this query to import the table content from another database.
INSERT INTO db1.`character_inventory` SELECT * FROM db2.`character_inventory` WHERE guid=XXX;
You probably also need to copy the item_instance table, so use:
INSERT INTO db1.`item_instance` SELECT * FROM db2.`item_instance`;
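If your core's item_instance table has an owner column (often called owner_guid, but that column name is an assumption here; check your schema), you can narrow the copy to only the items belonging to that character:
-- owner_guid is assumed; replace it with whatever your schema actually uses.
INSERT INTO db1.`item_instance` SELECT * FROM db2.`item_instance` WHERE `owner_guid`=XXX;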

INSERT INTO for JSON array

I'm getting text files where some columns are populated with a JSON array. I import the file into a staging table. I want to use INSERT INTO to get the file into another table with the JSON array column parsed into two columns.
I need one column named 'military_focus' and another named 'health_insurance_focus' that will be populated with 'true' or 'false'. Using this SELECT statement presents the data as I need it.
SELECT
[network_id]
[network_name],
[network_type],
[service_type_ids],
JSON_VALUE(focus,'$.military_focus') AS military_focus,
JSON_VALUE(focus,'$.health_insurance_focus') AS health_insurance_focus,
[created_at],
[updated_at],
[LoadDt],
[FileNM]
FROM
[Med_Stage].[Provider].[networks]
I'm trying to use that with an INSERT INTO to get it into another table with the appropriate columns. I get an error that the SELECT values do not match the number of INSERT columns since I'm going from one 'focus' column in the Staging table to two columns in the destination table.
INSERT INTO [Med].[Provider].[networks]
(
[network_id],
[network_name],
[network_type],
[service_type_ids],
[military_focus],
[health_insurance_focus],
[created_at],
[updated_at],
[LoadDt],
[FileNM]
)
SELECT
[network_id]
[network_name],
[network_type],
[service_type_ids],
JSON_VALUE(focus,'$.military_focus') AS military_focus,
JSON_VALUE(focus,'$.health_insurance_focus') AS health_insurance_focus,
[created_at],
[updated_at],
[LoadDt],
[FileNM]
FROM
[Med_Stage].[Provider].[networks]
Yes, the fiddle is sufficient; being able to test immediately highlights the issue.
You're just missing a comma after the first column. Without it, [network_name] is read as an alias for [network_id], so the SELECT returns one column fewer than the INSERT list expects.
SELECT
[network_id], /* <-- this comma was missing */
[network_name],
[network_type],
[service_type_ids],
JSON_VALUE(focus,'$.military_focus') AS military_focus,
JSON_VALUE(focus,'$.health_insurance_focus') AS health_insurance_focus,
[created_at],
[updated_at],
[LoadDt],
[FileNM]
FROM
[Med_Stage].[Provider].[networks]
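For context, the original query does not fail with a syntax error because T-SQL treats an identifier that follows a column expression without a comma as a column alias; this stripped-down example (not from the original post) returns a single column named network_name:
SELECT [network_id] [network_name] /* one column, aliased as network_name */
FROM [Med_Stage].[Provider].[networks]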

SQL filling a table importing data from another table and math

I am trying to develop software for one of my classes.
It is supposed to create a table contrato where I fill in the clients' info, how much they are going to pay, and how many payments they will make to settle the contract.
On the other hand, I have another table cuotas that should be filled by importing some info from the first table; I'm trying to perform the math and save the payment info directly in SQL, but it keeps telling me I can't save it because of error #1241.
I'm using phpMyAdmin and XAMPP.
Here is my SQL code
INSERT INTO `cuotas`(`Ncontrato`, `Vcontrato`, `Ncuotas`) SELECT (`Ncontrato`,`Vcontrato`,`Vcuotas`) FROM contrato;
SELECT `Vcuotaunit` = `Vcontrato`/`Ncuotas`;
SELECT `Vcuotadic`=`Vcuotaunit`*2;
Can you please help me out and fix whatever I'm doing wrong?
Those last two SELECTs are missing a FROM clause, so it's unknown from which table or view they should take the columns. (The parentheses around the column list in your INSERT ... SELECT are also a problem: MySQL reads (`Ncontrato`,`Vcontrato`,`Vcuotas`) as a single row expression, which is what raises error #1241.)
You could use an UPDATE after that INSERT.
INSERT INTO cuotas (Ncontrato, Vcontrato, Ncuotas)
SELECT Ncontrato, Vcontrato, Vcuotas
FROM contrato;
UPDATE cuotas
SET Vcuotaunit = (Vcontrato/Ncuotas),
    Vcuotadic = (Vcontrato/Ncuotas)*2
WHERE Vcuotaunit IS NULL;
Or use 1 INSERT that also does the calculations.
INSERT INTO cuotas (Ncontrato, Vcontrato, Ncuotas, Vcuotaunit, Vcuotadic)
SELECT Ncontrato, Vcontrato, Vcuotas,
       (Vcontrato/Vcuotas) as Vcuotaunit,
       (Vcontrato/Vcuotas)*2 as Vcuotadic
FROM contrato;

Inserting into MySQL tables through SparkSQL, by querying from the same table

I have a MySQL table that was created in MySQL like that:
create table nnll (a integer, b integer)
I've initialized pyspark (2.1) and executed the code:
sql('create table nnll using org.apache.spark.sql.jdbc options (url "jdbc:mysql://127.0.0.1:3306", dbtable "prod.nnll", user \'user\', password \'pass\')')
sql('insert into nnll select 1,2')
sql('insert into nnll select * from nnll')
For some reason, I get the exception:
AnalysisException: u'Cannot insert overwrite into table that is also being read from.;;\nInsertIntoTable Relation[a#2,b#3] JDBCRelation(prod.nnll) [numPartitions=1], OverwriteOptions(false,Map()), false\n+- Project [a#2, b#3]\n +- SubqueryAlias nnll\n +- Relation[a#2,b#3] JDBCRelation(prod.nnll) [numPartitions=1]\n'
It seems like my INSERT statement is translated into an INSERT OVERWRITE statement by Spark, because I'm trying to insert into the same table that I'm querying (and it's all on the same partition; I have only one).
Is there any way to avoid this, and make Spark translate this query into a regular insert?
Thank you very much!
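One possible workaround (a sketch only, not from the original thread, assuming a SparkSession named spark and data small enough to collect to the driver) is to materialize the rows first and append them through the DataFrame JDBC writer instead of an INSERT statement, so Spark never sees a plan that reads and writes the same relation:
# Sketch: url, table name and credentials mirror the question; the driver class is an assumption.
props = {"user": "user", "password": "pass", "driver": "com.mysql.jdbc.Driver"}
df = spark.read.jdbc("jdbc:mysql://127.0.0.1:3306", "prod.nnll", properties=props)
rows = df.collect()  # materialize the current contents so the read finishes before the write
spark.createDataFrame(rows, df.schema).write.jdbc(
    "jdbc:mysql://127.0.0.1:3306", "prod.nnll", mode="append", properties=props)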

Update multiple mysql rows with 1 query?

I am porting a client's DB to a new one with different post titles and row IDs, but he wants to keep the hits from the old website.
He has over 500 articles in the new DB, and updating one is not an issue with this query:
UPDATE blog_posts
SET hits=8523 WHERE title LIKE '%slim charger%' AND category = 2
But how would I go about doing this for all 500 articles with one query? I already have an export query from the old db with post titles and hits, so we can find the new ones more easily:
INSERT INTO `news_items` (`title`, `hits`) VALUES
('Slim charger- your new friend', 8523 )...
The only reference in both tables is the product name word within the title; everything else is different (id, full title, ...).
Make a temporary table old_posts for the old data, then:
UPDATE new_posts LEFT JOIN old_posts ON new_posts.title = old_posts.title SET new_posts.hits = old_posts.hits;
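Since only the product name is shared between the titles, a join on full title equality may not match anything; if the old table has (or you add) a column holding just the product name (product_name here is a hypothetical column), a LIKE-based join along these lines could work:
UPDATE new_posts
JOIN old_posts ON new_posts.title LIKE CONCAT('%', old_posts.product_name, '%')
SET new_posts.hits = old_posts.hits;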
Unfortunately that's not how it works; you will have to write a script/program that does a loop.
DECLARE
  CURSOR articles IS SELECT * FROM articlesTable;
BEGIN
  FOR selection IN articles LOOP
    INSERT INTO newTable VALUES selection;
  END LOOP;
END;
How you bridge it is up to you, but that's the basic cursor loop (PL/SQL-style).
The APIs for selecting from one DB and putting into another vary by DBMS, so you will need a common intermediate format. Basically, take the record from the first DB, stick it into a struct in the programming language of your choice, and perform an insert using those struct values with the APIs for the other DBMS.
I'm not 100% sure that you can update multiple records at once, but I think what you want to do is use a loop in combination with the update query.
However, if you have two tables with absolutely no relationship or common identifiers between them, you are kind of in a hard place. The hard place in this instance would mean you have to do them all manually :(
The last idea that might save you is that the IDs might be different but still have the same order. If that is the case, you can still loop through the old table and update the new table as I described above.
You can build a procedure that'll do it for you:
DELIMITER //
CREATE PROCEDURE insert_news_items()
BEGIN
  -- Variable types are assumptions; adjust them to match your columns.
  DECLARE done BOOLEAN DEFAULT FALSE;
  DECLARE v_title VARCHAR(255);
  DECLARE v_hits INT;
  DECLARE news_items_cur CURSOR FOR
    SELECT title, hits
    FROM blog_posts
    WHERE title LIKE '%slim charger%' AND category = 2;
  DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = TRUE;
  OPEN news_items_cur;
  read_loop: LOOP
    FETCH news_items_cur INTO v_title, v_hits;
    IF done THEN
      LEAVE read_loop;
    END IF;
    INSERT INTO `news_items` (`title`, `hits`) VALUES (v_title, v_hits);
  END LOOP;
  CLOSE news_items_cur;
END//
DELIMITER ;
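Once the procedure is created (note the DELIMITER change around its body), you run it with a plain call:
CALL insert_news_items();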

SQL Server 2008: insert into table in batches

I have a linked server (Sybase) set up in SQL Server from which I need to draw data. The Sybase server sits on the other side of the world, and connectivity is pretty shoddy. I would like to insert data into one of the SQL Server tables in manageable batches (e.g. 1000 records at a time). I.e. I want to do:
INSERT INTO [SQLServerTable] ([field])
SELECT [field] from [LinkedServer].[DbName].[dbo].[SybaseTable]
but I want to fetch 1000 records at a time and insert them.
Thanks
Karl
I typically use Python with the pyodbc module to perform batches like this against a SQL Server. Take a look and see if it is an option; if so, I can provide you an example.
You will need to modify a lot of this code to fit your particular situation; however, you should be able to follow the logic. You can comment out the cnxn.commit() line to roll back the transactions until you get everything working.
import pyodbc

# This is an MS SQL 2008 connection string
conn = 'DRIVER={SQL Server};SERVER=SERVERNAME;DATABASE=DBNAME;UID=USERNAME;PWD=PWD'
cnxn = pyodbc.connect(conn)
cursor = cnxn.cursor()
rowCount = cursor.execute('SELECT Count(*) FROM RemoteTable').fetchone()[0]
cnxn.close()

count = 0
lastID = 0
while count < rowCount:
    # You may want to close the previous connection and start a new one in this loop.
    # Otherwise the connection will be open the entire time, defeating the purpose of
    # performing the transactions in batches.
    cnxn = pyodbc.connect(conn)
    cursor = cnxn.cursor()
    # ORDER BY ID so TOP 1000 reliably returns the next batch after lastID.
    rows = cursor.execute('SELECT TOP 1000 ID, Field1, Field2 FROM RemoteTable '
                          'WHERE ID > %s ORDER BY ID' % (lastID)).fetchall()
    for row in rows:
        cursor.execute('INSERT INTO LOCALTABLE (FIELD1, FIELD2) VALUES (%s, %s)'
                       % (row.Field1, row.Field2))
    cnxn.commit()
    cnxn.close()
    # The [0] assumes the id is the first field in the select statement.
    lastID = rows[len(rows) - 1][0]
    count += len(rows)
    # Pause after each batch to see if the user wants to continue.
    raw_input("%s down, %s to go! Press enter to continue." % (count, rowCount - count))
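As a side note (not part of the original answer), pyodbc also supports ? parameter markers, which avoids building the SQL strings by hand and handles quoting for you; the two statements inside the loop could look roughly like this:
# Hypothetical variant of the two execute() calls above, using parameter markers.
rows = cursor.execute('SELECT TOP 1000 ID, Field1, Field2 FROM RemoteTable '
                      'WHERE ID > ? ORDER BY ID', lastID).fetchall()
cursor.execute('INSERT INTO LOCALTABLE (FIELD1, FIELD2) VALUES (?, ?)',
               row.Field1, row.Field2)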