Importing a CSV file with Cyrillic characters to MySQL - phpMyAdmin - mysql

Hello, hard question here.
I am trying to import a CSV file containing Cyrillic characters (Russian city names; there may be over 40,000 rows). This is how it displays in Notepad++ with the encoding set to UTF-8 without BOM:
RU,101000,Москва,Москва,,,,,,55.7522,37.6156,4
RU,101194,Москва 194,Москва,,,,,,55.7522,37.6156,1
RU,101300,Москва 300,Москва,,,,,,55.7522,37.6156,1
I then import it using the Import tab in phpMyAdmin, but when I browse the imported data I see the following characters:
RU 101000 МоÑква МоÑква
RU 101194 МоÑква 194 МоÑква
RU 101300 МоÑква 300 МоÑква
I've already set the database collation to utf8_general_ci, and I tried utf8_unicode_ci and utf8_lithuanian_ci... I do not know how to force a UTF-8 display in the phpMyAdmin panel.
Is there a solution to import the data through an SQL input instead?
Thanks in advance!!

It looks like the problem is related to the connection phpMyAdmin is using to talk to the MySQL server.
According to this solution, phpMyAdmin uses cpg_db_connect() to connect to your db server, and it needs to be updated to include SET NAMES 'utf8'. They give the following example:
function cpg_db_connect()
{
    global $CONFIG;
    $result = @mysql_connect($CONFIG['dbserver'], $CONFIG['dbuser'], $CONFIG['dbpass']);
    if (!$result) {
        return false;
    }
    if (!mysql_select_db($CONFIG['dbname'])) {
        return false;
    }
    mysql_query("SET NAMES 'utf8'", $result); # <-- see here
    return $result;
}
If you cannot edit your phpMyAdmin install (say you're on a shared server), just install phpMyAdmin on your own site in a virtual/subdirectory and then find and edit this function. phpMyAdmin is just a PHP site like any other.
The cleaner way to do this (especially if you have character concerns) would be to import the data over SSH using mysql (e.g. mysql -u dbuser -p dbname < import.sql).
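If you'd rather script the import than fight phpMyAdmin, here is a minimal sketch in Python, assuming the MySQLdb driver and a hypothetical cities table whose columns match the CSV layout; charset='utf8' makes the driver issue SET NAMES 'utf8' for you:
# Minimal sketch: load the UTF-8 CSV straight into MySQL, bypassing phpMyAdmin.
# The table name `cities` and the credentials are assumptions -- substitute your own.
import MySQLdb

conn = MySQLdb.connect(host='localhost', user='dbuser', passwd='dbpass',
                       db='dbname', charset='utf8')  # issues SET NAMES 'utf8' for you
cur = conn.cursor()
with open('import.csv') as f:
    for line in f:
        # naive split is fine for this sample data; use the csv module
        # if any fields contain quoted commas
        fields = line.rstrip('\n').split(',')
        placeholders = ', '.join(['%s'] * len(fields))
        cur.execute("INSERT INTO cities VALUES (" + placeholders + ")", fields)
conn.commit()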

Related

Windows batch file - connect to remote MySQL database and save resulting text output

I normally work with PHP/MySQL. A client wants to send variables from a .bat file to a remote MySQL database, where I will then manipulate them for display etc. I do not know how to connect and send these variables from a .bat file in Windows.
I have small .bat file on windows, that simply writes a few variables to a text file.
@echo off
@echo Data: > test.txt
@echo VAR_1=777 >> test.txt
@echo VAR_2=245.67 >> test.txt
The result of the .bat file is a text file test.txt created with various details in it.
I would like the .bat file commands to also:
1) connect to a remote MySQL database
connect -> '8580922.hostedresource.com'
2) save to a basic table on a remote MySQL database:
INSERT INTO `My_Database`.`My_Table` (
  `VAR_1`,
  `VAR_2`
)
VALUES (
  '777',
  '245.67'
);
Is this possible?
If so - how?
I don't have MySQL installed and I'm not familiar with it, but here is a crack at something to try, based on info from the linked page.
REM This needs to be set to the right path
set bin=C:\Program Files\MySQL\MySQL Server 5.6\bin
REM Set the host name and db
SET DBHOST=8580922.hostedresource.com
SET DBNAME=MyDatabase
REM Set the variables and the SQL
SET VAR_1=777
SET VAR_2=245.67
SET SQL="INSERT INTO `My_Database`.`My_Table` (`VAR_1`,`VAR_2`) VALUES ('%VAR_1%','%VAR_2%');"
"%bin%\mysql" -e %SQL% --user=NAME_OF_USER --password=PASSWORD -h %DBHOST% %DBNAME%
PAUSE
Please try that and post back the resulting error message. There are many reasons it might not work, but you need to try it to find out.
I'm not sure where test.txt comes into this, but it would be a good idea to export the whole SQL statement to a text file and then use the correct MySQL command-line switch to run that file, instead of generating the SQL inside the batch file.
There's a bit more here.
connecting to MySQL from the command line
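If the batch-file quoting becomes painful, another option (assuming Python and the MySQLdb driver are available on the client machine; the host, credentials, and table names below are the placeholder values from the question) is to do the insert from a short script:
# Sketch of the same INSERT done from Python instead of a batch file.
# Host, credentials, and table names are the placeholder values from the question.
import MySQLdb

conn = MySQLdb.connect(host='8580922.hostedresource.com',
                       user='NAME_OF_USER', passwd='PASSWORD',
                       db='My_Database')
cur = conn.cursor()
cur.execute("INSERT INTO My_Table (VAR_1, VAR_2) VALUES (%s, %s)",
            ('777', '245.67'))
conn.commit()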

Existing MySQL database is not importing to localhost

I am getting this error when I am trying to import my existing database to localhost. The database imports fine on the web host's servers, but not on localhost.
The error is:
Static analysis:
2 errors were found during analysis.
Ending quote ' was expected. (near "" at position 28310)
4 values were expected, but found 3. (near "(" at position 28266)
phpMyAdmin is kinda dumb, since it cannot import what it itself exported. It escapes single quotes as '' instead of \' and then breaks its teeth on strings like this:
''I can''t do this anymore!''
You can either:
replace '' → \' (a sketch follows below), or
import via mysql.exe:
mysql -uuser -ppass dbName < file.sql
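For the first option, here is a blunt sketch of the replacement in Python. Back up the dump first: this is a blind textual replace, and it will also mangle legitimate empty-string literals ('') if your data contains any.
# Rewrite ''-escaped quotes as \' so the dump imports cleanly.
# Caution: blind textual replace -- keep a backup of file.sql.
with open('file.sql') as src, open('file_fixed.sql', 'w') as dst:
    for line in src:
        dst.write(line.replace("''", "\\'"))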
Open your .sql script file in any editor (like Notepad++) and replace \'' with \' (for newer versions of phpMyAdmin), or replace \' with \'' (for older versions of phpMyAdmin).
Once you have replaced it throughout the SQL file, the import will work for you.
ref: https://stackoverflow.com/a/41376791/2298211
This might happen because the database you export is too big.
THE SOLUTION FOR ME WAS:
Choose from Export method:
Custom - display all possible options
Format: SQL
Output:
Under Compression, choose the option "zipped"
Export the database as a zip (ex: database_name.sql.zip).
Import it on local; if it throws an error from time to time for taking too long, you can resume the import by pressing Resume and resubmitting - choose the same database again and it will continue from where it stopped before.
I attached a picture with these settings:

How to pass secure_auth to MySQL login via SQLalchemy

I'm working on the front end of a web app, and my co-developer is using Pyramid and SQLAlchemy. We've just moved from SQLite to MySQL. I installed MySQL 5.6.15 (via Homebrew) on my OS X machine to get the Python MySQLdb install to work (via pip in a virtualenv).
Because secure_auth is now ON by default in MySQL >= 5.6.5, I can only connect to the remote database (which is pre 5.6.5) with the --skip-secure-auth flag, which works fine in a terminal.
However, in the Python Pyramid code, it only seems possible to add this flag as an argument to create_engine(), and I can't find create_engine() in my co-dev's code - only the connection string below in an initialisation config file. He's not available, this isn't my area of expertise, and we launch next week :(
sqlalchemy.url = mysql+mysqldb://gooddeeds:deeds808letme1now@146.227.24.38/gooddeeds_development?charset=utf8
I've tried appending various "secure auth" strings to the above with no success. Am I looking in the wrong place? Has MySQLdb set secure_auth to ON because I'm running MySQL 5.6.15? If so, how can I change that?
If you are forced to use the old passwords (bah!) when using MySQL 5.6, and you are using MySQLdb with SQLAlchemy, you'll have to add skip-secure-auth to an option file and connect using URL:
from sqlalchemy import create_engine
from sqlalchemy.engine.url import URL

dialect_options = {
    'read_default_file': '/path/to/your/mysql.cnf',
}
engine = create_engine(URL(
    'mysql',
    username='..', password='..',
    host='..', database='..',
    query=dialect_options
))
The mysql.cnf would contain:
[client]
skip-secure-auth
For Pyramid, you can do the following. Add a line in your configuration ini-file that holds the connection arguments:
sqlalchemy.url = mysql://scott:tiger@localhost/test
sqlalchemy.connect_args = { 'read_default_file': '/path/to/foo' }
Now you need to change slightly the way the settings are read and used. In the file that launches your Pyramid app, do the following:
def main(global_config, **settings):
    try:
        settings['sqlalchemy.connect_args'] = eval(settings['sqlalchemy.connect_args'])
    except KeyError:
        settings['sqlalchemy.connect_args'] = {}
    engine = engine_from_config(settings, 'sqlalchemy.')
    # rest of code..
The trick is to evaluate the string in the ini file which contains a dictionary with the extra options for the SQLAlchemy dialect.
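As a side note, if you're wary of calling eval on config values, a slightly safer sketch of the same trick uses ast.literal_eval, which only parses Python literals:
# Same trick with ast.literal_eval: it parses dict/str/number literals only,
# so a malformed ini value cannot execute arbitrary code.
import ast
from sqlalchemy import engine_from_config

def main(global_config, **settings):
    try:
        settings['sqlalchemy.connect_args'] = ast.literal_eval(settings['sqlalchemy.connect_args'])
    except KeyError:
        settings['sqlalchemy.connect_args'] = {}
    engine = engine_from_config(settings, 'sqlalchemy.')
    # rest of code..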

How do I configure pyodbc to correctly accept strings from SQL Server using freeTDS and unixODBC?

I cannot get a valid string from an MSSQL server into Python. I believe there is an encoding mismatch somewhere, probably between the ODBC layer and Python, because I am able to get readable results in tsql and isql.
What character encoding does pyodbc expect? What do I need to change in the chain to get this to work?
Specific Example
Here is a simplified python script as an example:
#!/usr/bin/env python
import pyodbc

dsn = 'yourdb'
user = 'import'
password = 'get0lddata'
database = 'YourDb'

def get_cursor():
    con_string = 'DSN=%s;UID=%s;PWD=%s;DATABASE=%s;' % (dsn, user, password, database)
    conn = pyodbc.connect(con_string)
    return conn.cursor()

if __name__ == '__main__':
    c = get_cursor()
    c.execute("select id, name from recipe where id = 4140567")
    row = c.fetchone()
    if row:
        print row
The output of this script is:
(Decimal('4140567'), u'\U0072006f\U006e0061\U00650067')
Alternatively, if the last line of the script is changed to:
print "{0}, '{1}'".format(row.id, row.name)
Then the result is:
Traceback (most recent call last):
File "/home/mdenson/projects/test.py", line 20, in <module>
print "{0}, '{1}'".format(row.id, row.name)
UnicodeEncodeError: 'ascii' codec can't encode characters in position 0-2: ordinal not in range(128)
A transcript using tsql to execute the same query:
root@luke:~# tsql -S cmw -U import -P get0lddata
locale is "C"
locale charset is "ANSI_X3.4-1968"
using default charset "UTF-8"
1> select id, name from recipe where id = 4140567
2> go
id name
4140567 orange2
(1 row affected)
and also in isql:
root@luke:~# isql -v yourdb import get0lddata
SQL> select id, name from recipe where id = 4140567
+----------------------+--------------------------+
| id | name |
+----------------------+--------------------------+
| 4140567 | orange2 |
+----------------------+--------------------------+
SQLRowCount returns 1
1 rows fetched
So I have worked at this all morning, looked high and low, and haven't figured out what is amiss.
Details
Here are version details:
Client is Ubuntu 12.04
freetds v0.91
unixodbc 2.2.14
python 2.7.3
pyodbc 2.1.7-1 (from ubuntu package) & 3.0.7-beta06 (compiled from source)
Server is XP with SQL Server Express 2008 R2
Here are the contents of a few configuration files on the client.
/etc/freetds/freetds.conf
[global]
tds version = 8.0
text size = 64512
[cmw]
host = 192.168.90.104
port = 1433
tds version = 8.0
client charset = UTF-8
/etc/odbcinst.ini
[FreeTDS]
Description = TDS driver (Sybase/MS SQL)
Driver = /usr/lib/x86_64-linux-gnu/odbc/libtdsodbc.so
Setup = /usr/lib/x86_64-linux-gnu/odbc/libtdsS.so
CPTimeout =
CPReuse =
FileUsage = 1
/etc/odbc.ini
[yourdb]
Driver = FreeTDS
Description = ODBC connection via FreeTDS
Trace = No
Servername = cmw
Database = YourDB
Charset = UTF-8
So after continued work I am now getting Unicode characters into Python. Unfortunately, the solution I've stumbled upon is about as satisfying as kissing your cousin.
I solved the problem by installing the python3 and python3-dev packages and then rebuilding pyodbc with python3.
Now that I've done this, my scripts work even though I am still running them with Python 2.7.
I don't know what was fixed by doing this, but it now works and I can move on to the project I started with.
Any chance you're having a problem with a BOM (Byte Order Mark)? If so, maybe this snippet of code will help:
import codecs

if s.startswith(codecs.BOM_UTF8):
    # The byte string s begins with the BOM: do something here,
    # for example decode the string as UTF-8.
    pass

if u[0] == unicode(codecs.BOM_UTF8, "utf8"):
    # The unicode string begins with the BOM: do something here,
    # for example remove the character.
    pass

# Strip the BOM from the beginning of the unicode string, if present
# (note that lstrip returns a new string, so reassign it)
u = u.lstrip(unicode(codecs.BOM_UTF8, "utf8"))
I found that snippet on this page.
If you upgrade pyodbc to version 3, the problem will be solved.

.db file and MySQL

I am having real issues with a .db file. It's around 20 GB in size, with three tables and the rest data.
I am on a Mac, so I am having to use some crappy apps, but it won't open in Access.
Does anyone know what software will produce a .db file, and what software will allow me to open it and export it as a CSV or MySQL file?
Also, if the connection was interrupted during transit, could this affect the file?
Since the Mac is BSD-based now, try opening a terminal and executing the command file /path/to/large/db -- it should tell you at least what file type the DB is, and from there you can determine what program to use to open it. It might be MySQL, might be PostgreSQL, might be SQLite -- file will tell you.
Example:
$ file a.db
a.db: SQLite 3.x database
$ file ~/.kde/share/apps/amarok/mysqle/amarok/tracks.{frm,MYD,MYI}
~/.kde/share/apps/amarok/mysqle/amarok/tracks.frm: MySQL table definition file Version 10
~/.kde/share/apps/amarok/mysqle/amarok/tracks.MYD: data
~/.kde/share/apps/amarok/mysqle/amarok/tracks.MYI: MySQL MISAM compressed data file Version 1
So it's SQLite v3? Then try
sqlite3 /path/to/db
and you can perform pretty much standard SQL from the CLI. At the CLI, you can type .tables to list all the tables in that DB. Or, if you prefer a GUI, there are a few options listed in this question; the accepted answer was SQLite Manager for Firefox.
Then you could drop tables or delete as you see fit.
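If you'd rather poke at it from Python than the CLI, a minimal sketch with the built-in sqlite3 module (the path is a placeholder) lists the tables the same way .tables does:
# List the tables in the SQLite file, like the CLI's .tables command.
import sqlite3

conn = sqlite3.connect('/path/to/db')
for (name,) in conn.execute(
        "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name"):
    print name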
Here's an example of dumping a csv to stdout:
$ sqlite3 -separator ',' -list a.db "SELECT * FROM t"
3,4
3,5
100,200
And to store it to a file -- the > operator redirects output to a file you name:
$ sqlite3 -separator ',' -list a.db "SELECT * FROM t" > a.csv
$ cat a.csv # puts the contents of a.csv on stdout
3,4
3,5
100,200
-separator ',' indicates that fields should be delimited by a comma; -list means to put row data on the same line, using the delimiter; a.db indicates which db to use; and "SELECT * FROM t" is just the SQL command to execute.
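The same dump can be done from Python with the built-in sqlite3 and csv modules, which (unlike the plain -separator ',' approach) also quotes fields that themselves contain commas; table t is the one from the example above:
# Python equivalent of the CLI CSV dump above.
import csv
import sqlite3

conn = sqlite3.connect('a.db')
with open('a.csv', 'wb') as f:
    writer = csv.writer(f)  # quotes fields containing commas
    for row in conn.execute('SELECT * FROM t'):
        writer.writerow(row)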
I'm not a Mac user, but if it's a SQLite file I've heard great things about Base.