I'm trying to insert some initial data into a Django table.
I've tried sqlcustom and sqlall, but they don't seem to work...
I followed the instructions and created a .sql file with some INSERT statements in it, like this:
INSERT INTO charts_wave (id, ...) VALUES (...);
This file is at the following path:
project/charts/sql/wave.sql
Maybe I didn't understand the purpose of sqlcustom and sqlall.
After running those commands, I run the syncdb command one final time.
Next I test it by calling Wave.objects.all() in the Python shell, and it returns an empty list.
What am I doing wrong? How can I insert this data with the SQL file?
The fixtures approach seems like a lot of hand-coding for the really extensive data I have to insert.
Thanks for now.
To me it seems you are looking for Django fixtures. They provide a way to import initial data into your Django apps. So instead of importing the data yourself, you include a fixture in your app, and when you run syncdb the data is imported automatically.
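For example, a minimal JSON fixture for the Wave model might look like the following (the field names here are invented, since the question doesn't show them; adjust them to your model). In syncdb-era Django, a fixture named initial_data.json placed in charts/fixtures/ is loaded automatically on every syncdb, and any fixture can also be loaded explicitly with manage.py loaddata:

[
  {"model": "charts.wave", "pk": 1, "fields": {"name": "first wave"}},
  {"model": "charts.wave", "pk": 2, "fields": {"name": "second wave"}}
]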
I am trying to copy the data from one table to another table. Normally, using a SELECT we can read the whole table, and using an INSERT we can insert the data into another table. But I don't want to use raw SQL; I want to use the SQLAlchemy ORM to copy and insert. Is there any way to do it?
Are you just trying to add an entry to a database, or are you trying to duplicate an entry?
Adding would be done by simply doing:
ed_user = User(name='ed', fullname='Ed Jones', nickname='edsnickname')
session.add(ed_user)
session.commit()
The example was taken from the official documentation. The commit actually writes the data added to the session to the database.
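For the copying case in the question, a rough sketch with the ORM could look like this (SourceUser and ArchiveUser are hypothetical mapped classes with the same columns, and session is an existing Session as in the example above):

# Copy every row from one mapped table to another via the ORM.
# SourceUser and ArchiveUser are made-up mapped classes with identical columns.
for src in session.query(SourceUser).all():
    session.add(ArchiveUser(name=src.name,
                            fullname=src.fullname,
                            nickname=src.nickname))
session.commit()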
EDIT:
You'll have to write something that parses the file into objects and adds those objects to the database. It depends on what kind of file it is: if it's a database export, you can just import it with your preferred database tool. You can have a look at this blog post as well. The bottom line is that if you want to import from CSV / Excel / TXT, you'll have to write something for it.
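For the CSV case, a minimal sketch could be something like the following (it reuses the User class and session from the example above; the file name and column names are made up):

import csv

# Read users.csv and add one User per row; commit once at the end.
with open('users.csv', newline='') as f:
    for row in csv.DictReader(f):
        session.add(User(name=row['name'],
                         fullname=row['fullname'],
                         nickname=row['nickname']))
session.commit()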
I have a .sql file from Oracle which contains CREATE TABLE/INDEX statements and a lot of INSERT statements (around 1M inserts).
I can manually modify the CREATE TABLE/INDEX part (there isn't too much of it), but the INSERT part uses some Oracle functions like TO_DATE.
I know MySQL has a similar function, STR_TO_DATE, but its format parameters are different.
I can connect to MySQL, but the .sql file is the only thing I got from Oracle.
Is there any way I can import this Oracle .sql file into MySQL?
Thanks.
Although the above job can be done by manually editing the script appropriately, there are products available that can be of use. Refer to the link for more information on one such product.
P.S. I am not affiliated in any way to the product
Since you mention an insert script, I assume you are mainly loading data. For that you can use any ETL tool, for example the open-source Pentaho Data Integration; it is pretty simple to do. Just search YouTube for a table-to-table transformation between different database connections to learn how. Note that you need to be able to connect to both the MySQL and the Oracle databases, otherwise this won't help. You should create all the table structures manually in the target database; the data itself can then be loaded with the ETL tool, so there is no need to edit every single INSERT line, which would be very painful with more than a hundred of them.
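If you would rather transform the dump file directly instead of using an ETL tool, a small script can rewrite the TO_DATE(...) calls into STR_TO_DATE(...). A rough sketch (it only maps a handful of common format tokens, assumes literals of the form TO_DATE('value', 'format'), and uses made-up file names; extend it to whatever your dump actually contains):

import re

# Map a few common Oracle date-format tokens to their MySQL equivalents.
FORMAT_MAP = [('YYYY', '%Y'), ('MM', '%m'), ('DD', '%d'),
              ('HH24', '%H'), ('MI', '%i'), ('SS', '%s')]

def oracle_to_mysql_format(fmt):
    for ora, my in FORMAT_MAP:
        fmt = fmt.replace(ora, my)
    return fmt

def convert(match):
    value, fmt = match.group(1), match.group(2)
    return "STR_TO_DATE('%s', '%s')" % (value, oracle_to_mysql_format(fmt))

pattern = re.compile(r"TO_DATE\('([^']*)'\s*,\s*'([^']*)'\)", re.IGNORECASE)

with open('oracle_dump.sql') as src, open('mysql_dump.sql', 'w') as dst:
    for line in src:
        dst.write(pattern.sub(convert, line))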
How can I import an SQL file (yes, SQL, not CSV) with phpMyAdmin so that it replaces or updates the existing data while importing?
I did not find an option for that. I also created a temporary database where I imported the SQL file in question (containing only INSERT lines - data only, no structure), and then went to the export screen hoping to find a suitable option such as INSERT ... SELECT ... ON DUPLICATE KEY UPDATE, but I did not find one, or anything else that would help in this situation.
So how can I achieve that? If not with phpMyAdmin, is there a program that transforms an "insert" SQL file into "update on duplicate key", or even from "insert" into "delete", after which I could then re-import the original file?
How I got here, in case it helps with the above or someone has better solutions to the earlier steps:
I have a semi-large (1 GB) DB file to import, which I divided into multiple smaller files to get it imported: one of them is the structure SQL dump and the rest hold the data. While trying to get the large file through, adjusting the timeout settings via .htaccess or the phpMyAdmin import options did not help - I always hit the timeout anyway. Since that did not work, I found a program by Janos Rusiczki (https://rusiczki.net/2007/01/24/sql-dump-file-splitter/) to split the SQL file into smaller ones (good program, thanks Janos!). It also separated the structure from the data.
However, after 8 successful imports I got a timeout again, after phpMyAdmin had already imported part of the file. Thus I ended up in the current situation. I know I can always delete everything and start over with even smaller partial files, but I am sure there is a better way to do this. There has to be a way to replace the data on import, or some other way as described above.
Thanks for any help! :)
Cribbing from INSERT ... ON DUPLICATE KEY (do nothing), you can use a regular expression to turn every INSERT in your SQL file into an INSERT IGNORE, and the import will then pass over all the entries that have already been imported.
Note that this will also ignore other errors, but errors other than timeouts don't seem likely in this context.
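A minimal sketch of that rewrite (the file names are made up, and it assumes each INSERT starts at the beginning of a line in the dump):

import re

# Turn every "INSERT INTO" at the start of a line into "INSERT IGNORE INTO",
# so rows that already exist are skipped instead of aborting the import.
with open('data.sql') as src, open('data_ignore.sql', 'w') as dst:
    for line in src:
        dst.write(re.sub(r'^INSERT INTO', 'INSERT IGNORE INTO',
                         line, flags=re.IGNORECASE))

If you want existing rows to be overwritten rather than skipped, rewriting INSERT INTO to REPLACE INTO in the same way is another option.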
OK, let me start off by saying that I don't have the slightest clue how to start with this. I have an SQLite database. For simplicity, let's just say that the table I want to read is 'data', and that it contains two fields, say (id, name). How could I go about creating a shell script to read the information from the 'data' SQLite table and insert it into a MySQL table with the exact same structure? I realise it would be simpler to insert the data into MySQL to begin with and cut out the SQLite step altogether, but that is unfortunately not possible. I really appreciate any help!
http://web.archive.org/web/20121018070614/http://sqlite.phxsoftware.com/forums/p/941/4725.aspx
[Corrected dead link; there is a C# script there which accomplishes the objective.]
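If a Python script is acceptable in place of pure shell, a minimal sketch of the copy could look like this (the connection details and the mysql-connector-python driver are assumptions; any MySQL driver will do):

import sqlite3
import mysql.connector  # assumes the mysql-connector-python package is installed

# Read all rows from the SQLite 'data' table.
lite = sqlite3.connect('source.db')
rows = lite.execute('SELECT id, name FROM data').fetchall()
lite.close()

# Insert them into an identically structured MySQL table.
conn = mysql.connector.connect(host='localhost', user='me',
                               password='secret', database='target')
cur = conn.cursor()
cur.executemany('INSERT INTO data (id, name) VALUES (%s, %s)', rows)
conn.commit()
conn.close()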
I wonder if there is a (native) way to create a MySQL table from an .xls or .xlsx spreadsheet. Note that I do not want to import a file into an existing table with LOAD DATA INFILE or INSERT INTO, but to create the table from scratch, i.e. using the header row as column names (with some default field type, e.g. INT) and then inserting the data in one step.
So far I have used a Python script to build a CREATE statement and imported the file afterwards, but that approach feels somewhat clumsy.
There is no native MySQL tool that does this, but MySQL's PROCEDURE ANALYSE might help by suggesting suitable column types.
With a VBScript you could do that. At my client's we have a script which takes the worksheet name, the heading names, and the field formats, and generates a SQL script containing a CREATE TABLE and the INSERT INTO statements. We use Oracle, but for MySQL the principle is the same.
Of course you could make it even more sophisticated by accessing MySQL from Excel via ODBC and issuing the CREATE TABLE and INSERT INTO statements that way.
I cannot provide you with the script itself, as it belongs to my client, but I can answer your questions on how to write such a script if you want to write one.
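As a starting point, here is a rough sketch of such a script in Python rather than VBScript (it assumes an .xlsx file read with openpyxl, defaults every column to INT as in the question, and does no quoting or escaping of values, so as written it is only suitable for numeric data; the file and table names are made up):

from openpyxl import load_workbook

ws = load_workbook('input.xlsx').active
rows = ws.iter_rows(values_only=True)
headers = next(rows)

# Build the CREATE TABLE statement from the header row, defaulting every column to INT.
print('CREATE TABLE `imported` (%s);' % ', '.join('`%s` INT' % h for h in headers))

# Emit one INSERT per data row.
column_list = ', '.join('`%s`' % h for h in headers)
for row in rows:
    values = ', '.join(str(v) for v in row)
    print('INSERT INTO `imported` (%s) VALUES (%s);' % (column_list, values))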