I have a file with a large number of columns and I want to load this file into a MySQL table.
The thing is, if we have a file with, say, 8 columns, then we would first create the table by -
CREATE TABLE `input` (
`idInput` varchar(45) DEFAULT NULL,
`row2` varchar(45) DEFAULT NULL,
`col3` varchar(45) DEFAULT NULL,
`col4` varchar(45) DEFAULT NULL,
`col5` varchar(45) DEFAULT NULL,
`col6` varchar(45) DEFAULT NULL,
`col7` varchar(45) DEFAULT NULL,
`col8` varchar(45) DEFAULT NULL
);
then we would load the file with -
LOAD DATA INFILE "FILE" INTO TABLE input;
But the thing is, I have a file with 150 columns and I want to load this file into a MySQL table automatically (so that I don't have to create the table first). The first row of my file is a header and it should become the column names of the table, and the columns hold different data types.
So is there any easy way to do this, so that afterwards I can do different things with this table?
I am using the MySQL command-line client, version 5.5.20, on Windows 7.
You can try the Sequel Pro MySQL client.
With this tool you can use "File -> Import", and in the "CSV Import Field Mapping" window, instead of selecting to import into an existing table, you can choose the "New" button.
It's better if your CSV has a header line describing the column names, so it picks the right column names. The tool is also good at guessing the types of the columns from the content.
You may run into problems if VARCHAR(255) is set as the default type for text fields. If that is the case, change the type of those fields to TEXT.
Use phpMyAdmin. It has the ability to create the table based on the first line of the file and guess the table structure. Click the "Import" link and select your file. Don't forget to select the Format that fits your file, usually CSV.
If the file is too big to fit into phpMyAdmin, I sometimes "head" the file and use that smaller file in phpMyAdmin to create the table, then import the full file using the LOAD DATA command (see the sketch below).
It makes my life easier.
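For reference, once the table exists (however it was created), the header row can be skipped during the load. A minimal sketch, assuming a comma-separated file with Windows line endings; the path is only an example:

-- Skip the header line so column names are not loaded as data.
LOAD DATA INFILE 'C:/data/input.csv'
INTO TABLE input
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\r\n'
IGNORE 1 LINES;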
I don't think this is possible using straight-up MySQL. Somehow the column definitions would have to be guessed. You'll probably have to go with a secondary language to read out the first row, create the table and then import.
You can do this using mysqldump, though.
As I understand it, you have a generated text file with different data types ready to load from the command line. Here are instructions from MySQL.
to create:
https://dev.mysql.com/doc/refman/5.7/en/mysql-batch-commands.html
to alter:
https://dev.mysql.com/doc/refman/5.7/en/alter-table.html
These are all command-line approaches, which is what I use as well. (If someone has a handy video that describes every step of using one of those MySQL developer environments, that might be nice, one where it doesn't take 20 steps to load a table, though it will probably always be faster to type it in by hand in one step or edit a dump.)
I am receiving the following error message:
Error
Static analysis:
1 errors were found during analysis.
This option conflicts with "AUTO_INCREMENT". (near "AUTO_INCREMENT" at position 692)
SQL query:
-- phpMyAdmin SQL Dump
-- version 2.8.2.4
-- http://www.phpmyadmin.net
--
-- Host: localhost:3306
-- Generation Time: Mar 23, 2020 at 03:58 PM
-- Server version: 5.0.45
-- PHP Version: 5.2.3
--
-- Database: weir-jones
--
-- --------------------------------------------------------
--
-- Table structure for table categories
--

CREATE TABLE categories (
  number int(11) NOT NULL auto_increment,
  section varchar(255) NOT NULL,
  parent_id varchar(10) NOT NULL,
  title varchar(200) NOT NULL,
  type varchar(255) NOT NULL,
  content text NOT NULL,
  display_order int(11) NOT NULL,
  PRIMARY KEY (number)
) ENGINE=MyISAM AUTO_INCREMENT=126 DEFAULT CHARSET=utf8 AUTO_INCREMENT=126
MySQL said: Documentation
1046 - No database selected
============================================
I have tried importing with all compatibility modes, with no luck.
The old database is gone, so I cannot export it again.
Any help would be appreciated.
Brendan
As for the 1046 No database selected error, it means just what it says: you exported a table from a database without a USE xxx statement.
So I would suggest importing this within a database, or adding the USE clause at the top of your SQL file.
Another thing:
If you ask a question on Stack Overflow, make sure to read the formatting rules, which show you how to organize your question.
It is actually quite hard to read what error you have. Use emphasis, code blocks and such things, like:
CREATE TABLE table_blub (
  col1 CHAR(120) NOT NULL,
  col2 INT(5) ...
);
That way, others can better tell what is code, what is the error and, of course, what the actual question is.
Eurobetics is correct: this happens because the .sql file doesn't specify which database to work with. That's no problem, you can just create the database on your new server and import the file into that. Since you're importing through phpMyAdmin, first use the "New" link in the left-hand navigation area to create a new database (you don't need to put any tables in it). Once the database is created, phpMyAdmin puts you on the database structure page. (If you happen to navigate away, or are coming back after you've already created the database, just click the existing database in the navigation pane.) Look at the tabs along the top row and use the "Import" tab there (or drag and drop your .sql file onto it).
Being inside that database page tells phpMyAdmin that you want to import into that database specifically, whereas the Import button on the main home page isn't attached to any particular database, which leads to your error.
You could technically add the SQL commands to create the database and USE the database to the .sql file, but in this case that doesn't seem necessary and would just be making more work for you.
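If you did want to go that route, the lines at the top of the .sql file would look roughly like this (the database name is taken from the dump header and is only an assumption):

-- Create and select the target database before the table definitions.
CREATE DATABASE IF NOT EXISTS `weir-jones` DEFAULT CHARACTER SET utf8;
USE `weir-jones`;

-- ... the rest of the dump (CREATE TABLE categories ...) follows unchanged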
I am trying to move my file-based organization of JSON files to MariaDB. There are approximately 2,000,000 JSON files, which in my file-based system are stored zipped. The total storage space for the zipped JSON files is 7 GB.
When I inserted all the records into MariaDB, the table storage became 35 GB.
I altered my table to be compressed, and the table size is now 15 GB.
Is there a way to reduce the table size even more?
Is it normal for the storage to double when data is added to MariaDB?
This is my table:
CREATE TABLE `sbpi_json` (
`fileid` int(11) NOT NULL,
`json_data` longtext COLLATE utf8_bin NOT NULL,
`idhash` char(32) COLLATE utf8_bin NOT NULL,
`sbpi` int(15) NOT NULL,
`district` int(2) NOT NULL,
`index_val` int(2) NOT NULL,
`updated` text COLLATE utf8_bin NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin ROW_FORMAT=COMPRESSED;
ALTER TABLE `sbpi_json`
ADD PRIMARY KEY (`fileid`),
ADD UNIQUE KEY `idhash` (`idhash`),
ADD KEY `sbpi` (`sbpi`);
The JSON column in question is json_data, correct? It averages (uncompressed) about 10KB, correct? In the file implementation, there are multiple 'versions' of each, correct? If so, how do you tell which one you want to deliver to the user?
Most compression techniques give you 3:1; InnoDB compression gives you 2:1. This is partially because it has things that it can't (or won't) compress.
Compressing just the JSON column (in client code) and storing it in a MEDIUMBLOB will probably take less space in InnoDB than using COMPRESSED. (But this will not be a huge savings.)
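As a rough sketch of that idea (using MariaDB's built-in COMPRESS()/UNCOMPRESS() functions to stand in for compressing in client code, and a made-up column name json_zip):

-- Sketch only: store a zlib-compressed copy of the JSON and drop the original.
ALTER TABLE sbpi_json ADD COLUMN json_zip MEDIUMBLOB;
UPDATE sbpi_json SET json_zip = COMPRESS(json_data);
ALTER TABLE sbpi_json DROP COLUMN json_data;

-- Reading it back:
SELECT UNCOMPRESS(json_zip) AS json_data FROM sbpi_json WHERE fileid = 1;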
Focus on how you pick which 'version' of the JSON to deliver to the user. Optimize the schema around that. Then decide how to store the data.
Given that the table can efficiently say which file contains the desired JSON, that will be the best approach. And use some normal, fast-to-uncompress technique; don't focus on maximal compression.
If char(32) COLLATE utf8_bin is a hex string, use ascii, not utf8.
If it is hex, then UNHEX it to shrink it further, to just BINARY(16).
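A sketch of that conversion, assuming idhash always holds 32 hex characters (an MD5-style hash):

-- Convert the hex string to raw bytes; half the size, and the index shrinks too.
ALTER TABLE sbpi_json ADD COLUMN idhash_bin BINARY(16);
UPDATE sbpi_json SET idhash_bin = UNHEX(idhash);
ALTER TABLE sbpi_json DROP COLUMN idhash;
ALTER TABLE sbpi_json CHANGE idhash_bin idhash BINARY(16) NOT NULL;
ALTER TABLE sbpi_json ADD UNIQUE KEY idhash (idhash);

-- Look rows up with UNHEX('...') and display with HEX(idhash).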
When a row is bigger than 8KB, some of the data (probably json_data) is stored "off-record". This implies an extra disk access and disk allocation is a bit more sloppy. Hence, storing that column as a file ends up taking about the same amount of time and space.
The OS probably allocates space in 4KB chunks. InnoDB uses 16KB blocks.
It's the text type that takes too much space.
You can try to replace it with a smaller variant of the text type if you can take it for granted that the shorter length is enough.
Also, replacing char(32) with varchar(32) will help if those values are not always full length.
Or you can go with varchar even for the textual field, but keep an eye on what's in this answer before doing so.
Hope I helped!
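A sketch of those two changes, with made-up sizes; this is only safe if the existing values fit, so check the current maximum lengths first:

-- Check what actually needs to fit before shrinking anything.
SELECT MAX(LENGTH(updated)) AS max_updated, MAX(LENGTH(idhash)) AS max_idhash
FROM sbpi_json;

ALTER TABLE sbpi_json
  MODIFY updated VARCHAR(64) COLLATE utf8_bin NOT NULL,
  MODIFY idhash VARCHAR(32) COLLATE utf8_bin NOT NULL;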
So in this case, I will receive the whole database schema multiple times, but each time the table structure might be slightly different from the previous one. Since I already have data inside, is there a way to write a query that compares with the existing table and just adds the new columns?
For example I already have this table in my database.
CREATE TABLE `Ages` (
`AgeID` int(11) DEFAULT NULL,
`AgeName` varchar(32) DEFAULT NULL,
`AgeAbbreviation` varchar(13) DEFAULT NULL,
`YouthAge` varchar(15) DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
And the new schema that I get has the same table, but with different columns.
CREATE TABLE `Ages` (
`AgeID` int(11) DEFAULT NULL,
`AgeName` varchar(32) DEFAULT NULL,
`AgeAbbreviation` varchar(13) DEFAULT NULL,
`YouthAge` varchar(15) DEFAULT NULL,
`AgeLimit` varchar(20) DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
In this case the column AgeLimit should be added to the existing table.
You should be able to do it by looking at the table definitions in the metadata tables (information_schema).
You can always look into the existing schema using the information_schema database, which holds the metadata.
You can then import your new schema into a temporary database, creating all tables according to the new schema and then again look into the metadata.
You might be able to use dynamic SQL inside a stored procedure to execute ALTER TABLE statements created from those differences at runtime.
But I think this is a lot easier from the backend Node.js server, because you can easily do steps 1 and 2 from Node.js as well (it's in fact just querying a bunch of tables), and you have far more possibilities for calculating the differences and creating and executing the appropriate queries.
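For example, a rough sketch of that comparison, assuming the new dump was imported into a temporary database named new_schema and the live database is named live (both names are made up):

-- Columns that exist in the new schema's Ages table but not in the live one:
SELECT n.COLUMN_NAME, n.COLUMN_TYPE
FROM information_schema.COLUMNS AS n
LEFT JOIN information_schema.COLUMNS AS o
       ON  o.TABLE_SCHEMA = 'live'
       AND o.TABLE_NAME   = n.TABLE_NAME
       AND o.COLUMN_NAME  = n.COLUMN_NAME
WHERE n.TABLE_SCHEMA = 'new_schema'
  AND n.TABLE_NAME   = 'Ages'
  AND o.COLUMN_NAME IS NULL;

-- For the example above this returns AgeLimit, which you would then add:
ALTER TABLE `live`.`Ages` ADD COLUMN `AgeLimit` varchar(20) DEFAULT NULL;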
EDIT 1
If you don't have the possibility of creating a temporary database from the new schema, you will have to find some other way to extract information from it. I suspect you have an SQL script with (among others) a bunch of CREATE TABLE ... statements, because that's typically what mysqldump creates. So you'll have to parse this script. Again, this seems to be way easier in JavaScript, if it is even possible in a MySQL stored procedure. If your schema is as well structured as your examples, it's actually just a few lines of code.
EDIT 2
And maybe you can even get some inspiration from here: Compare two MySQL databases. Some tools are mentioned there which synchronize databases.
I've been dealing with this problem in my MySQL database for several hours now. I work on OS X 10.8.4 and use the tool Sequel Pro to work with my database. The table I have trouble with looks like this:
CREATE TABLE `Descriptions` (
`id` int(11) unsigned zerofill NOT NULL AUTO_INCREMENT,
`company` varchar(200) DEFAULT NULL,
`overview` mediumtext,
`trade` mediumtext,
PRIMARY KEY (`id`))
ENGINE=InnoDB AUTO_INCREMENT=1703911 DEFAULT CHARSET=utf8;
I imported a csv file like this:
LOAD DATA LOCAL INFILE 'users/marc/desktop/descriptions kopie.txt'
INTO TABLE descriptions
FIELDS TERMINATED BY ';'
LINES TERMINATED BY '\n'
(#dummy, company, overview, trade)
When I look at the data in my table now, everything looks the way I expect (SELECT * syntax). But I can't work with the data. When I try to select the company 'SISTERS', which I know exists, it gives me no results. Also, the fields "overview" and "trade" are not NULL when there's no data; they are just empty strings. The other tables in the database work just fine with the imported data. Somehow MySQL just doesn't see the values as something to work with; it doesn't bother to read them.
What I tried so far:
- I used Text Wrangler to convert the CSV to txt (UTF-8) and loaded it into the database, did not work
- I changed the fields to BLOB and back to varchar/mediumtext to force MySQL to do something with the data, did not work
- I tried to use the Sequel Pro import function, did not change anything
- I tried to make a new table and copy the old one into it, did not change anything
- I tried to force MySQL to change the values by using CONCAT (just adding random characters which I could delete again later)
Could it have something to do with the collation settings? Could it have something to do with my regional settings (Switzerland) on OS X? Any other ideas? I would appreciate any help very much.
Kind Regards,
Marc
I was able to solve the problem. When I opened the CSV in Text Wrangler and let it show the invisible characters, the file was full of red reversed question marks. Those sneaky bastards messed up everything. I don't know what they are, but they were the problem. I removed them with the "Zap Gremlins..." option.
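If anyone hits the same symptom, the raw bytes can also be inspected from the MySQL side, and stray carriage returns from Windows line endings (a common culprit on the last loaded column) can be trimmed in place. A small sketch:

-- Show the actual bytes behind a value that looks right but never matches:
SELECT company, HEX(company) FROM Descriptions WHERE company LIKE '%SISTERS%';

-- Strip trailing carriage returns, if that turns out to be the problem:
UPDATE Descriptions SET trade = TRIM(TRAILING '\r' FROM trade);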
I need to store the image and resume of a user in the database.
I am using a MySQL database and PHP 5. I need to know which data types I should use.
And also, how do I set a limit (maximum size) for the uploaded data?
What you need, according to your comments, is a 'BLOB' (Binary Large OBject) for both image and resume.
A perfect answer to your question can be found on the MySQL site itself; refer to their forum (without using PHP):
http://forums.mysql.com/read.php?20,17671,27914
According to them, use the LONGBLOB data type. With that you can only store images smaller than 1 MB by default, although that can be changed by editing the server config file. I would also recommend using MySQL Workbench for ease of database management.
This can be done from the command line. This will create a column for your image with a NOT NULL property.
CREATE TABLE `test`.`pic` (
`idpic` INTEGER UNSIGNED NOT NULL AUTO_INCREMENT,
`caption` VARCHAR(45) NOT NULL,
`img` LONGBLOB NOT NULL,
PRIMARY KEY(`idpic`)
)
ENGINE = InnoDB;
From here
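As for the size limit asked about in the question: the largest value a client can send in one statement is capped by the server's max_allowed_packet setting, so that (together with a check in the PHP upload script) is where the maximum size gets enforced. A small sketch; the file path is only an example, and LOAD_FILE() requires the FILE privilege plus a location permitted by secure_file_priv:

-- See how large a single value (and therefore an uploaded file) may be:
SHOW VARIABLES LIKE 'max_allowed_packet';

-- Insert an image from a file that lives on the database server itself:
INSERT INTO `test`.`pic` (`caption`, `img`)
VALUES ('profile photo', LOAD_FILE('/var/lib/mysql-files/photo.jpg'));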