mosquitto - disable subscribing without authorization - acl

I am using mosquitto version 1.4.10 with TLS certificates. I am using this plugin https://github.com/mbachry/mosquitto_pyauth to authorize a user, and it works well for mosquitto_pub (as in, when someone tries to publish, they get authorized by the module first).
However, it seems that mosquitto_sub is able to subscribe to anything without being authorized. How do I enforce security when someone is just trying to access a topic in read-only mode?
I went through the mosquitto.conf file and can't seem to find anything related to this.
For example, I am able to subscribe like this:
mosquitto_sub --cafile /etc/mosquitto/ca.crt --cert /etc/mosquitto/client.crt --key /etc/mosquitto/client.key -h ubuntu -p 1883 -t c/# -d
and able to see messages coming from some publisher like this:
mosquitto_pub --cafile /etc/mosquitto/ca.crt --cert /etc/mosquitto/client.crt --key /etc/mosquitto/client.key -h ubuntu -p 1883 -t c/2/b/3/p/3/rt/13/r/123 -m 32 -q 1
What I am trying to do is prevent mosquitto_sub from reading all messages at root level without authorization.
The Python code that does the authorization looks like this (auth data is stored in a Cassandra DB):
import sys
import mosquitto_auth
from cassandra.cluster import Cluster
from cassandra import ConsistencyLevel

## program entry point from mosquitto...
def plugin_init(opts):
    global cluster, session, select_device_query
    conf = dict(opts)
    cluster = Cluster(['192.168.56.102'])
    session = cluster.connect('hub')
    select_device_query = session.prepare('SELECT * from devices where uid=?')
    select_device_query.consistency_level = ConsistencyLevel.QUORUM
    print 'Cassandra cluster initialized'

def acl_check(clientid, username, topic, access):
    device_data = session.execute(select_device_query, [username])
    if len(device_data.current_rows) > 0:
        device_data = device_data[0]
        # sample device data looks like this:
        # Row(uid=u'08:00:27:aa:8f:91', brand=3, company=2, device=15617, property=3, room=490, room_number=u'3511', room_type=13, stamp=datetime.datetime(2016, 12, 12, 6, 29, 54, 723000))
        subscribable_topic = 'c/' + str(device_data.company) \
            + '/b/' + str(device_data.brand) \
            + '/p/' + str(device_data.property) \
            + '/rt/' + str(device_data.room_type) \
            + '/r/' + str(device_data.room) \
            + '/#'
        matches = mosquitto_auth.topic_matches_sub(subscribable_topic, topic)
        print 'ACL: user=%s topic=%s, matches = %s' % (username, topic, matches)
        return matches
    return False
The function acl_check always seems to be called when mosquitto_pub connects, but never when mosquitto_sub connects.
The C code behind this Python module is here: https://github.com/mbachry/mosquitto_pyauth/blob/master/auth_plugin_pyauth.c

Add the following to your mosquitto.conf:
...
allow_anonymous false
...
This will stop users without credentials from logging on to the broker.
You can also add an ACL rule for the anonymous user if there are certain topics you would want unauthenticated clients to be able to see, as sketched below.
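For illustration, a minimal sketch of that built-in approach (the paths and topics are made up, and note that the broker's ACL file works independently of the pyauth plugin):

# mosquitto.conf
allow_anonymous false
acl_file /etc/mosquitto/aclfile

# /etc/mosquitto/aclfile
# Topic rules before the first 'user' line apply to anonymous
# clients (only relevant while allow_anonymous is true).
topic read public/#

# Per-user rules
user 08:00:27:aa:8f:91
topic readwrite c/2/#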

Related

How do I find out what's buffering the communication from qemu to pexpect?

I have a Python2 program that runs qemu with a FreeBSD image.
expect()ing lines of output works.
However, expect()ing output that does not have its line terminated (such as when waiting for a prompt like login:) does not work; it times out.
I suspect something in the communication between qemu and my program is doing line buffering, but how do I find out which of them it is? Candidates that I can think of:
FreeBSD itself. I find that unlikely: it shows prompts when running interactively, and qemu's -nographic option shouldn't make a difference to the emulated VM (but I may be wrong).
Something in the setup of the pty. I have zero experience with ptys. If that's the issue, this would be a bug in pexpect since pexpect is setting the pty up.
A bug in pexpect.
Something in my own script... but I have no clue what that could be.
For reference, here's the stripped-down code (including download and unpack, should anybody want to play with it):
#! /usr/bin/env python2
import os
import pexpect
import re
import sys
import time

def run(cmd):
    '''Run command, log to stdout, no timeout, return the status code.'''
    print('run: ' + cmd)
    (output, rc) = pexpect.run(
        cmd,
        withexitstatus=1,
        encoding='utf-8',
        logfile=sys.stdout,
        timeout=None
    )
    if rc != 0:
        print('simple.py: Command failed with return code: ' + str(rc))
        exit(rc)

download_path = 'https://download.freebsd.org/ftp/releases/VM-IMAGES/12.0-RELEASE/amd64/Latest'
image_file = 'FreeBSD-12.0-RELEASE-amd64.qcow2'
image_file_xz = image_file + '.xz'
if not os.path.isfile(image_file_xz):
    run('curl -o %s %s/%s' % (image_file_xz, download_path, image_file_xz))
if not os.path.isfile(image_file):
    # Reset image file to initial state
    run('xz --decompress --keep --force --verbose ' + image_file_xz)

#cmd = 'qemu-system-x86_64 -snapshot -monitor none -display curses -chardev stdio,id=char0 ' + image_file
cmd = 'qemu-system-x86_64 -snapshot -nographic ' + image_file
print('interact with: ' + cmd)
child = pexpect.spawn(
    cmd,
    timeout=90,  # FreeBSD takes roughly 60 seconds to boot
    maxread=1,
)
child.logfile = sys.stdout

def expect(pattern):
    result = child.expect([pexpect.TIMEOUT, pattern])
    if result == 0:
        print("timeout: %d reached when waiting for: %s" % (child.timeout, pattern))
        exit(1)
    return result - 1

if False:
    # This does not work: the prompt is not visible, then timeout
    expect('login: ')
else:
    # Workaround, tested to work:
    expect(re.escape('FreeBSD/amd64 (freebsd)'))  # Line before prompt
    time.sleep(1)  # MUCH longer than actually needed, just to be safe
child.sendline('root')
# This will always time out, and terminate the script
expect('# ')
print('We want to get here but cannot')

How to store MQTT Mosquitto publish events into MySQL? [duplicate]

This question already has an answer here:
Is there a way to store Mosquitto payload into an MySQL database for history purpose?
(1 answer)
Closed 4 years ago.
I've connected a device that communicates to my mosquitto MQTT server (RPi) and is sending out publications to a specified topic. What I want to do now is to store the messages published on that topic on the MQTT server into a MySQL database. I know how MySQL works, but I don't know how to listen for these incoming publications. I'm looking for a light-weight solution that runs in the background. Any pointers or ideas on libraries to use are very welcome.
I've done something similar in the last few days:
live-collecting weatherstation-data with pywws
publishing with pywws.service.mqtt to mqtt-Broker
python-script on NAS collecting the data and writing to MariaDB
#!/usr/bin/python -u
import mysql.connector as mariadb
import paho.mqtt.client as mqtt
import ssl

mariadb_connection = mariadb.connect(user='USER', password='PW', database='MYDB')
cursor = mariadb_connection.cursor()

# MQTT Settings
MQTT_Broker = "192.XXX.XXX.XXX"
MQTT_Port = 8883
Keep_Alive_Interval = 60
MQTT_Topic = "/weather/pywws/#"

# Subscribe
def on_connect(client, userdata, flags, rc):
    mqttc.subscribe(MQTT_Topic, 0)

def on_message(mosq, obj, msg):
    # Prepare Data, separate columns and values
    msg_clear = msg.payload.translate(None, '{}""').split(", ")
    msg_dict = {}
    for i in range(0, len(msg_clear)):
        msg_dict[msg_clear[i].split(": ")[0]] = msg_clear[i].split(": ")[1]
    # Prepare dynamic sql-statement
    placeholders = ', '.join(['%s'] * len(msg_dict))
    columns = ', '.join(msg_dict.keys())
    sql = "INSERT INTO pws ( %s ) VALUES ( %s )" % (columns, placeholders)
    # Save Data into DB Table
    try:
        cursor.execute(sql, msg_dict.values())
    except mariadb.Error as error:
        print("Error: {}".format(error))
    mariadb_connection.commit()

def on_subscribe(mosq, obj, mid, granted_qos):
    pass

mqttc = mqtt.Client()
# Assign event callbacks
mqttc.on_message = on_message
mqttc.on_connect = on_connect
mqttc.on_subscribe = on_subscribe

# Connect
mqttc.tls_set(ca_certs="ca.crt", tls_version=ssl.PROTOCOL_TLSv1_2)
mqttc.connect(MQTT_Broker, int(MQTT_Port), int(Keep_Alive_Interval))

# Continue the network loop & close db-connection
mqttc.loop_forever()
mariadb_connection.close()
If you are familiar with Python, the Paho MQTT library is simple, light on resources, and interfaces well with Mosquitto. To use it, simply subscribe to the topic and set up a callback to pass the payload to MySQL using peewee, as shown in this answer. Run the script in the background and call it good!
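A minimal sketch of that approach, assuming a broker on localhost and a made-up Message model (the database name, credentials and topic are placeholders):

import paho.mqtt.client as mqtt
from peewee import Model, CharField, TextField, MySQLDatabase

db = MySQLDatabase('mqtt_log', user='user', password='pw', host='localhost')

class Message(Model):
    topic = CharField()
    payload = TextField()

    class Meta:
        database = db

def on_message(client, userdata, msg):
    # one row per received publication
    Message.create(topic=msg.topic, payload=msg.payload.decode('utf-8'))

db.connect()
db.create_tables([Message])

client = mqtt.Client()
client.on_message = on_message
client.connect('localhost', 1883, 60)
client.subscribe('sensors/#', qos=1)
client.loop_forever()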

failing to decrypt blob passwords only once in a while using amazon kms

import os, sys

AWS_DIRECTORY = '/home/jenkins/.aws'
certificates_folder = 'my_folder'
SUCCESS = 'success'

class AmazonKMS(object):
    def __init__(self):
        # making sure boto3 has the certificates and region files
        result = os.system('mkdir -p ' + AWS_DIRECTORY)
        self._check_os_result(result)
        result = os.system('cp ' + certificates_folder + 'kms_config ' + AWS_DIRECTORY + '/config')
        self._check_os_result(result)
        result = os.system('cp ' + certificates_folder + 'kms_credentials ' + AWS_DIRECTORY + '/credentials')
        self._check_os_result(result)
        # boto3 is the amazon client package
        import boto3
        self.kms_client = boto3.client('kms', region_name='us-east-1')
        self.global_key_alias = 'alias/global'
        self.global_key_id = None

    def _check_os_result(self, result):
        if result != 0 and raise_on_copy_error:
            raise FAILED_COPY

    def decrypt_text(self, encrypted_text):
        response = self.kms_client.decrypt(
            CiphertextBlob = encrypted_text
        )
        return response['Plaintext']
When using it:
amazon_kms = AmazonKMS()
amazon_kms.decrypt_text(blob_password)
I get:
E ClientError: An error occurred (AccessDeniedException) when calling the Decrypt operation: The ciphertext refers to a customer master key that does not exist, does not exist in this region, or you are not allowed to access.
The stack trace is:
../keys_management/amazon_kms.py:77: in decrypt_text
CiphertextBlob = encrypted_text
/home/jenkins/.virtualenvs/global_tests/local/lib/python2.7/site-packages/botocore/client.py:253: in _api_call
return self._make_api_call(operation_name, kwargs)
/home/jenkins/.virtualenvs/global_tests/local/lib/python2.7/site-packages/botocore/client.py:557: in _make_api_call
raise error_class(parsed_response, operation_name)
This happens in a script that runs once an hour.
It's only failing 2-3 times a day; after a retry it succeeds.
I tried upgrading from boto3 1.2.3 to 1.4.4.
What is the possible cause of this behavior?
My guess is that the issue is not in anything you described here.
Most likely the login tokens time out or something along those lines. To investigate this further, a closer look at the way the login works here is probably helpful.
How does this code run? Is it running inside AWS, like on Lambda or EC2? Do you run it from your own server (it looks like it runs on Jenkins)? How is the login access established? What are those kms_credentials used for and what do they look like? Do you do something like assuming a role (which would probably work through access tokens that stop working after some time)?
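If role assumption is involved, one way to rule out expired temporary credentials is to re-assume the role and rebuild the client on each run; a sketch of that idea (the role ARN and session name are hypothetical):

import boto3

def fresh_kms_client():
    # Re-assume the role on every call so the temporary STS
    # credentials can never go stale (the ARN is hypothetical).
    sts = boto3.client('sts')
    creds = sts.assume_role(
        RoleArn='arn:aws:iam::123456789012:role/kms-decrypt',
        RoleSessionName='kms-decrypt-session'
    )['Credentials']
    return boto3.client(
        'kms',
        region_name='us-east-1',
        aws_access_key_id=creds['AccessKeyId'],
        aws_secret_access_key=creds['SecretAccessKey'],
        aws_session_token=creds['SessionToken'],
    )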

IRC bot pong doesn't work

I've created a bot, using code from this page.
Everything was fine when I was connecting to irc.rizon.net, but problems arose when I changed the server to irc.alphachat.net.
#!/usr/bin/env python3
import socket

server = 'irc.alphachat.net'
channel = '#somechannel'
NICK = 'somenick'
IDENT = 'somenick'
REALNAME = 'somenick'
port = 6667

def joinchan(chan):
    ircsock.send(bytes('JOIN %s\r\n' % chan, 'UTF-8'))

def ping():  # This is our first function! It will respond to server Pings.
    ircsock.send(bytes("QUOTE PONG \r\n", 'UTF-8'))

def send_message(chan, msg):
    ircsock.send(bytes('PRIVMSG %s :%s\r\n' % (chan, msg), 'UTF-8'))

ircsock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
ircsock.connect((server, port))  # Here we connect to the server using the port 6667
ircsock.send(bytes("USER "+ NICK +" "+ NICK +" "+ NICK +" :This bot\n", 'UTF-8'))  # user authentication
ircsock.send(bytes("NICK "+ NICK +"\n", 'UTF-8'))  # here we actually assign the nick to the bot
joinchan(channel)  # Join the channel using the functions we previously defined

while 1:  # Be careful with these! it might send you to an infinite loop
    ircmsg = ircsock.recv(2048).decode()  # receive data from the server
    ircmsg = ircmsg.strip('\n\r')  # removing any unnecessary linebreaks.
    print(ircmsg)  # Here we print what's coming from the server
    if ircmsg.find(' PRIVMSG ') != -1:
        nick = ircmsg.split('!')[0][1:]
    if ircmsg.find("PING :") != -1:  # if the server pings us then we've got to respond!
        ping()
    if ircmsg.find(":Hello " + NICK) != -1:  # If we can find "Hello Mybot" it will call the function hello()
        hello()
The problem is with the ping command, because I don't know how to answer the server:
:irc-us2.alphachat.net NOTICE * :*** Looking up your hostname...
:irc-us2.alphachat.net NOTICE * :*** Checking Ident
:irc-us2.alphachat.net NOTICE * :*** Found your hostname
:irc-us2.alphachat.net NOTICE * :*** No Ident response
PING :CE661578
:irc-us2.alphachat.net 451 * :You have not registered
The reason it's not working is that you're not replying to PINGs properly. With IRC, you should really split each line up by ' ' (space) into chunks to process it - something like this should work after your print (untested):
chunk = ircmsg.split(' ')
if chunk[0] == 'PING':  # This is a ping
    ircsock.send(bytes('PONG %s\r\n' % (chunk[1]), 'UTF-8'))  # Send a pong! (chunk[1] already carries the leading ':')
if chunk[1] == 'PRIVMSG':  # This is a message
    if chunk[3] == ':Hello':  # Hey, someone said hello!
        send_message(chunk[2], "Hi there!")  # chunk[2] is channel / private!
if chunk[1] == '001':  # We've logged on
    joinchan(channel)  # Let's join!
    send_message(channel, "I've arrived! :-)")  # Announce to the channel
Normally the command / numeric is found in the second parameter (chunk[1]) - The only exception I can think of is PING which is found in the first (chunk[0])
Also note that I moved the joinchan() call - you should only be doing this after you're logged on.
Edit: Didn't realise the age of this post. Sorry!
I believe you just need to make a small change to the string you send in response to the ping request.
try using:
ircsock.send(bytes("PONG pingis\n", "UTF-8"))
This ping response works for me on freenode.

Quick easy way to migrate SQLite3 to MySQL? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 3 years ago.
Anyone know a quick easy way to migrate a SQLite3 database to MySQL?
Everyone seems to start off with a few greps and perl expressions and you sorta kinda get something that works for your particular dataset, but you have no idea whether it's imported the data correctly or not. I'm seriously surprised nobody's built a solid library that can convert between the two.
Here is a list of ALL the differences in SQL syntax that I know about between the two:
The lines starting with:
BEGIN TRANSACTION
COMMIT
sqlite_sequence
CREATE UNIQUE INDEX
are not used in MySQL
SQLite uses CREATE TABLE/INSERT INTO "table_name" and MySQL uses CREATE TABLE/INSERT INTO table_name
MySQL doesn't use quotes inside the schema definition
MySQL uses single quotes for strings inside the INSERT INTO clauses
SQLite and MySQL have different ways of escaping strings inside INSERT INTO clauses
SQLite uses 't' and 'f' for booleans, MySQL uses 1 and 0 (a simple regex for this can fail when you have a string like: 'I do, you don't' inside your INSERT INTO)
SQLite uses AUTOINCREMENT, MySQL uses AUTO_INCREMENT
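To put a few of these side by side (illustrative statements only, not taken from a real dump):

-- SQLite dump style
CREATE TABLE "users" ("id" integer PRIMARY KEY AUTOINCREMENT, "active" text);
INSERT INTO "users" VALUES(1,'t');

-- MySQL equivalent
CREATE TABLE users (id integer PRIMARY KEY AUTO_INCREMENT, active tinyint(1));
INSERT INTO users VALUES(1,1);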
Here is a very basic hacked-up perl script which works for my dataset and checks for many more of these conditions than other perl scripts I found on the web. No guarantees that it will work for your data, but feel free to modify and post back here.
#! /usr/bin/perl

while ($line = <>){
    if (($line !~ /BEGIN TRANSACTION/) && ($line !~ /COMMIT/) && ($line !~ /sqlite_sequence/) && ($line !~ /CREATE UNIQUE INDEX/)){
        if ($line =~ /CREATE TABLE \"([a-z_]*)\"(.*)/i){
            $name = $1;
            $sub = $2;
            $sub =~ s/\"//g;
            $line = "DROP TABLE IF EXISTS $name;\nCREATE TABLE IF NOT EXISTS $name$sub\n";
        }
        elsif ($line =~ /INSERT INTO \"([a-z_]*)\"(.*)/i){
            $line = "INSERT INTO $1$2\n";
            $line =~ s/\"/\\\"/g;
            $line =~ s/\"/\'/g;
        }else{
            $line =~ s/\'\'/\\\'/g;
        }
        $line =~ s/([^\\'])\'t\'(.)/$1THIS_IS_TRUE$2/g;
        $line =~ s/THIS_IS_TRUE/1/g;
        $line =~ s/([^\\'])\'f\'(.)/$1THIS_IS_FALSE$2/g;
        $line =~ s/THIS_IS_FALSE/0/g;
        $line =~ s/AUTOINCREMENT/AUTO_INCREMENT/g;
        print $line;
    }
}
Here is a list of converters (not updated since 2011):
https://www2.sqlite.org/cvstrac/wiki?p=ConverterTools (or snapshot at archive.org)
An alternative method that would work nicely but is rarely mentioned is: use an ORM class that abstracts specific database differences away for you. e.g. you get these in PHP (RedBean), Python (Django's ORM layer, Storm, SqlAlchemy), Ruby on Rails (ActiveRecord), Cocoa (CoreData)
i.e. you could do this (a rough sketch follows the list):
Load data from source database using the ORM class.
Store data in memory or serialize to disk.
Store data into destination database using the ORM class.
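A sketch of that round trip using SQLAlchemy (the connection URLs are placeholders, and the pymysql driver is assumed):

from sqlalchemy import create_engine, MetaData

src = create_engine('sqlite:///sample.db')
dst = create_engine('mysql+pymysql://user:pw@localhost/test')

meta = MetaData()
meta.reflect(bind=src)      # discover the SQLite schema
meta.create_all(bind=dst)   # recreate it on the MySQL side

with src.connect() as sconn, dst.begin() as dconn:
    for table in meta.sorted_tables:
        rows = [dict(r._mapping) for r in sconn.execute(table.select())]
        if rows:
            dconn.execute(table.insert(), rows)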
Here is a python script, built off of Shalmanese's answer and some help from Alex Martelli over at Translating Perl to Python.
I'm making it community wiki, so please feel free to edit and refactor as long as it doesn't break the functionality (thankfully we can just roll back) - it's pretty ugly but works.
Use it like so (assuming the script is called dump_for_mysql.py):
sqlite3 sample.db .dump | python dump_for_mysql.py > dump.sql
which you can then import into mysql.
Note - you need to add foreign key constraints manually, since sqlite doesn't actually support them.
Here is the script:
#!/usr/bin/env python
import re
import fileinput

def this_line_is_useless(line):
    useless_es = [
        'BEGIN TRANSACTION',
        'COMMIT',
        'sqlite_sequence',
        'CREATE UNIQUE INDEX',
        'PRAGMA foreign_keys=OFF',
    ]
    for useless in useless_es:
        if re.search(useless, line):
            return True

def has_primary_key(line):
    return bool(re.search(r'PRIMARY KEY', line))

searching_for_end = False
for line in fileinput.input():
    if this_line_is_useless(line):
        continue
    # this line was necessary because '');
    # would be converted to \'); which isn't appropriate
    if re.match(r".*, ''\);", line):
        line = re.sub(r"''\);", r'``);', line)
    if re.match(r'^CREATE TABLE.*', line):
        searching_for_end = True
    m = re.search('CREATE TABLE "?(\w*)"?(.*)', line)
    if m:
        name, sub = m.groups()
        line = "DROP TABLE IF EXISTS %(name)s;\nCREATE TABLE IF NOT EXISTS `%(name)s`%(sub)s\n"
        line = line % dict(name=name, sub=sub)
    else:
        m = re.search('INSERT INTO "(\w*)"(.*)', line)
        if m:
            line = 'INSERT INTO %s%s\n' % m.groups()
            line = line.replace('"', r'\"')
            line = line.replace('"', "'")
    line = re.sub(r"([^'])'t'(.)", "\1THIS_IS_TRUE\2", line)
    line = line.replace('THIS_IS_TRUE', '1')
    line = re.sub(r"([^'])'f'(.)", "\1THIS_IS_FALSE\2", line)
    line = line.replace('THIS_IS_FALSE', '0')
    # Add auto_increment if it is not there since sqlite auto_increments ALL
    # primary keys
    if searching_for_end:
        if re.search(r"integer(?:\s+\w+)*\s*PRIMARY KEY(?:\s+\w+)*\s*,", line):
            line = line.replace("PRIMARY KEY", "PRIMARY KEY AUTO_INCREMENT")
    # replace " and ' with ` because mysql doesn't like quotes in CREATE commands
    if line.find('DEFAULT') == -1:
        line = line.replace(r'"', r'`').replace(r"'", r'`')
    else:
        parts = line.split('DEFAULT')
        parts[0] = parts[0].replace(r'"', r'`').replace(r"'", r'`')
        line = 'DEFAULT'.join(parts)
    # And now we convert it back (see above)
    if re.match(r".*, ``\);", line):
        line = re.sub(r'``\);', r"'');", line)
    if searching_for_end and re.match(r'.*\);', line):
        searching_for_end = False
    if re.match(r"CREATE INDEX", line):
        line = re.sub('"', '`', line)
    if re.match(r"AUTOINCREMENT", line):
        line = re.sub("AUTOINCREMENT", "AUTO_INCREMENT", line)
    print line,
I usually use the Export/import tables feature of IntelliJ DataGrip.
You can see the progress in the bottom right corner.
If you are using Python/Django it's pretty easy:
create two databases in settings.py (like here https://docs.djangoproject.com/en/1.11/topics/db/multi-db/)
then just do this:
objlist = ModelObject.objects.using('sqlite').all()
for obj in objlist:
    obj.save(using='mysql')
Probably the quickest and easiest way is to use the sqlite .dump command; in this case, create a dump of the sample database.
sqlite3 sample.db .dump > dump.sql
You can then (in theory) import this into the mysql database, in this case the test database on the database server 127.0.0.1, using user root.
mysql -p -u root -h 127.0.0.1 test < dump.sql
I say in theory as there are a few differences between grammars.
In sqlite transactions begin
BEGIN TRANSACTION;
...
COMMIT;
MySQL uses just
BEGIN;
...
COMMIT;
There are other similar problems (varchars and double quotes spring to mind) but nothing find and replace couldn't fix.
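For example, a first pass over the dump with sed (the patterns here are a sketch, not an exhaustive list):

sed -e 's/^BEGIN TRANSACTION;/BEGIN;/' \
    -e 's/AUTOINCREMENT/AUTO_INCREMENT/g' dump.sql > dump-mysql.sql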
Perhaps you should ask why you are migrating; if performance or database size is the issue, perhaps look at reorganising the schema; if the system is moving to a more powerful product, this might be the ideal time to plan for the future of your data.
I've just gone through this process, and there's a lot of very good help and information in this Q/A, but I found I had to pull together various elements (plus some from other Q/As) to get a working solution in order to successfully migrate.
However, even after combining the existing answers, I found that the Python script did not fully work for me as it did not work where there were multiple boolean occurrences in an INSERT. See here why that was the case.
So, I thought I'd post up my merged answer here. Credit goes to those that have contributed elsewhere, of course. But I wanted to give something back, and save others time that follow.
I'll post the script below. But first, here are the instructions for a conversion...
I ran the script on OS X 10.7.5 Lion. Python worked out of the box.
To generate the MySQL input file from your existing SQLite3 database, run the script on your own files as follows,
Snips$ sqlite3 original_database.sqlite3 .dump | python ~/scripts/dump_for_mysql.py > dumped_data.sql
I then copied the resulting dumped_data.sql file over to a Linux box running Ubuntu 10.04.4 LTS where my MySQL database was to reside.
Another issue I had when importing the MySQL file was that some unicode UTF-8 characters (specifically single quotes) were not being imported correctly, so I had to add a switch to the command to specify UTF-8.
The resulting command to input the data into a spanking new empty MySQL database is as follows:
Snips$ mysql -p -u root -h 127.0.0.1 test_import --default-character-set=utf8 < dumped_data.sql
Let it cook, and that should be it! Don't forget to scrutinise your data, before and after.
So, as the OP requested, it's quick and easy, when you know how! :-)
As an aside, one thing I wasn't sure about before I looked into this migration, was whether created_at and updated_at field values would be preserved - the good news for me is that they are, so I could migrate my existing production data.
Good luck!
UPDATE
Since making this switch, I've noticed a problem that I hadn't noticed before. In my Rails application, my text fields are defined as 'string', and this carries through to the database schema. The process outlined here results in these being defined as VARCHAR(255) in the MySQL database. This places a 255 character limit on these field sizes - and anything beyond this was silently truncated during the import. To support text length greater than 255, the MySQL schema would need to use 'TEXT' rather than VARCHAR(255), I believe. The process defined here does not include this conversion.
Here's the merged and revised Python script that worked for my data:
#!/usr/bin/env python
import re
import fileinput

def this_line_is_useless(line):
    useless_es = [
        'BEGIN TRANSACTION',
        'COMMIT',
        'sqlite_sequence',
        'CREATE UNIQUE INDEX',
        'PRAGMA foreign_keys=OFF'
    ]
    for useless in useless_es:
        if re.search(useless, line):
            return True

def has_primary_key(line):
    return bool(re.search(r'PRIMARY KEY', line))

searching_for_end = False
for line in fileinput.input():
    if this_line_is_useless(line): continue
    # this line was necessary because ''); was getting
    # converted (inappropriately) to \');
    if re.match(r".*, ''\);", line):
        line = re.sub(r"''\);", r'``);', line)
    if re.match(r'^CREATE TABLE.*', line):
        searching_for_end = True
    m = re.search('CREATE TABLE "?([A-Za-z_]*)"?(.*)', line)
    if m:
        name, sub = m.groups()
        line = "DROP TABLE IF EXISTS %(name)s;\nCREATE TABLE IF NOT EXISTS `%(name)s`%(sub)s\n"
        line = line % dict(name=name, sub=sub)
        line = line.replace('AUTOINCREMENT', 'AUTO_INCREMENT')
        line = line.replace('UNIQUE', '')
        line = line.replace('"', '')
    else:
        m = re.search('INSERT INTO "([A-Za-z_]*)"(.*)', line)
        if m:
            line = 'INSERT INTO %s%s\n' % m.groups()
            line = line.replace('"', r'\"')
            line = line.replace('"', "'")
        line = re.sub(r"(?<!')'t'(?=.)", r"1", line)
        line = re.sub(r"(?<!')'f'(?=.)", r"0", line)
    # Add auto_increment if it's not there since sqlite auto_increments ALL
    # primary keys
    if searching_for_end:
        if re.search(r"integer(?:\s+\w+)*\s*PRIMARY KEY(?:\s+\w+)*\s*,", line):
            line = line.replace("PRIMARY KEY", "PRIMARY KEY AUTO_INCREMENT")
    # replace " and ' with ` because mysql doesn't like quotes in CREATE commands
    # And now we convert it back (see above)
    if re.match(r".*, ``\);", line):
        line = re.sub(r'``\);', r"'');", line)
    if searching_for_end and re.match(r'.*\);', line):
        searching_for_end = False
    if re.match(r"CREATE INDEX", line):
        line = re.sub('"', '`', line)
    print line,
http://sqlfairy.sourceforge.net/
http://search.cpan.org/dist/SQL-Translator/
aptitude install sqlfairy libdbd-sqlite3-perl
sqlt -f DBI --dsn dbi:SQLite:../.open-tran/ten-sq.db -t MySQL --add-drop-table > mysql-ten-sq.sql
sqlt -f DBI --dsn dbi:SQLite:../.open-tran/ten-sq.db -t Dumper --use-same-auth > sqlite2mysql-dumper.pl
chmod +x sqlite2mysql-dumper.pl
./sqlite2mysql-dumper.pl --help
./sqlite2mysql-dumper.pl --add-truncate --mysql-loadfile > mysql-dump.sql
sed -e 's/LOAD DATA INFILE/LOAD DATA LOCAL INFILE/' -i mysql-dump.sql
echo 'drop database `ten-sq`' | mysql -p -u root
echo 'create database `ten-sq` charset utf8' | mysql -p -u root
mysql -p -u root -D ten-sq < mysql-ten-sq.sql
mysql -p -u root -D ten-sq < mysql-dump.sql
I wrote this simple script in Python3. It can be used as an included class or a standalone script invoked via a terminal shell. By default it imports all integers as int(11) and strings as varchar(300), but all that can be adjusted in the constructor or script arguments respectively.
NOTE: It requires MySQL Connector/Python 2.0.4 or higher
Here's a link to the source on GitHub if you find the code below hard to read: https://github.com/techouse/sqlite3-to-mysql
#!/usr/bin/env python3
__author__ = "Klemen Tušar"
__email__ = "techouse@gmail.com"
__copyright__ = "GPL"
__version__ = "1.0.1"
__date__ = "2015-09-12"
__status__ = "Production"

import os.path, sqlite3, mysql.connector
from mysql.connector import errorcode


class SQLite3toMySQL:
    """
    Use this class to transfer an SQLite 3 database to MySQL.

    NOTE: Requires MySQL Connector/Python 2.0.4 or higher (https://dev.mysql.com/downloads/connector/python/)
    """
    def __init__(self, **kwargs):
        self._properties = kwargs
        self._sqlite_file = self._properties.get('sqlite_file', None)
        if not os.path.isfile(self._sqlite_file):
            print('SQLite file does not exist!')
            exit(1)

        self._mysql_user = self._properties.get('mysql_user', None)
        if self._mysql_user is None:
            print('Please provide a MySQL user!')
            exit(1)

        self._mysql_password = self._properties.get('mysql_password', None)
        if self._mysql_password is None:
            print('Please provide a MySQL password')
            exit(1)

        self._mysql_database = self._properties.get('mysql_database', 'transfer')
        self._mysql_host = self._properties.get('mysql_host', 'localhost')

        self._mysql_integer_type = self._properties.get('mysql_integer_type', 'int(11)')
        self._mysql_string_type = self._properties.get('mysql_string_type', 'varchar(300)')

        self._sqlite = sqlite3.connect(self._sqlite_file)
        self._sqlite.row_factory = sqlite3.Row
        self._sqlite_cur = self._sqlite.cursor()

        self._mysql = mysql.connector.connect(
            user=self._mysql_user,
            password=self._mysql_password,
            host=self._mysql_host
        )
        self._mysql_cur = self._mysql.cursor(prepared=True)
        try:
            self._mysql.database = self._mysql_database
        except mysql.connector.Error as err:
            if err.errno == errorcode.ER_BAD_DB_ERROR:
                self._create_database()
            else:
                print(err)
                exit(1)

    def _create_database(self):
        try:
            self._mysql_cur.execute("CREATE DATABASE IF NOT EXISTS `{}` DEFAULT CHARACTER SET 'utf8'".format(self._mysql_database))
            self._mysql_cur.close()
            self._mysql.commit()
            self._mysql.database = self._mysql_database
            self._mysql_cur = self._mysql.cursor(prepared=True)
        except mysql.connector.Error as err:
            print('_create_database failed creating database {}: {}'.format(self._mysql_database, err))
            exit(1)

    def _create_table(self, table_name):
        primary_key = ''
        sql = 'CREATE TABLE IF NOT EXISTS `{}` ( '.format(table_name)
        self._sqlite_cur.execute('PRAGMA table_info("{}")'.format(table_name))
        for row in self._sqlite_cur.fetchall():
            column = dict(row)
            sql += ' `{name}` {type} {notnull} {auto_increment}, '.format(
                name=column['name'],
                type=self._mysql_string_type if column['type'].upper() == 'TEXT' else self._mysql_integer_type,
                notnull='NOT NULL' if column['notnull'] else 'NULL',
                auto_increment='AUTO_INCREMENT' if column['pk'] else ''
            )
            if column['pk']:
                primary_key = column['name']
        sql += ' PRIMARY KEY (`{}`) ) ENGINE = InnoDB CHARACTER SET utf8'.format(primary_key)
        try:
            self._mysql_cur.execute(sql)
            self._mysql.commit()
        except mysql.connector.Error as err:
            print('_create_table failed creating table {}: {}'.format(table_name, err))
            exit(1)

    def transfer(self):
        self._sqlite_cur.execute("SELECT name FROM sqlite_master WHERE type='table' AND name NOT LIKE 'sqlite_%'")
        for row in self._sqlite_cur.fetchall():
            table = dict(row)
            # create the table
            self._create_table(table['name'])
            # populate it
            print('Transferring table {}'.format(table['name']))
            self._sqlite_cur.execute('SELECT * FROM "{}"'.format(table['name']))
            columns = [column[0] for column in self._sqlite_cur.description]
            try:
                self._mysql_cur.executemany("INSERT IGNORE INTO `{table}` ({fields}) VALUES ({placeholders})".format(
                    table=table['name'],
                    fields=('`{}`, ' * len(columns)).rstrip(' ,').format(*columns),
                    placeholders=('%s, ' * len(columns)).rstrip(' ,')
                ), (tuple(data) for data in self._sqlite_cur.fetchall()))
                self._mysql.commit()
            except mysql.connector.Error as err:
                print('_insert_table_data failed inserting data into table {}: {}'.format(table['name'], err))
                exit(1)
        print('Done!')


def main():
    """ For use in standalone terminal form """
    import sys, argparse
    parser = argparse.ArgumentParser()
    parser.add_argument('--sqlite-file', dest='sqlite_file', default=None, help='SQLite3 db file')
    parser.add_argument('--mysql-user', dest='mysql_user', default=None, help='MySQL user')
    parser.add_argument('--mysql-password', dest='mysql_password', default=None, help='MySQL password')
    parser.add_argument('--mysql-database', dest='mysql_database', default=None, help='MySQL database name')
    parser.add_argument('--mysql-host', dest='mysql_host', default='localhost', help='MySQL host')
    parser.add_argument('--mysql-integer-type', dest='mysql_integer_type', default='int(11)', help='MySQL default integer field type')
    parser.add_argument('--mysql-string-type', dest='mysql_string_type', default='varchar(300)', help='MySQL default string field type')
    args = parser.parse_args()

    if len(sys.argv) == 1:
        parser.print_help()
        exit(1)

    converter = SQLite3toMySQL(
        sqlite_file=args.sqlite_file,
        mysql_user=args.mysql_user,
        mysql_password=args.mysql_password,
        mysql_database=args.mysql_database,
        mysql_host=args.mysql_host,
        mysql_integer_type=args.mysql_integer_type,
        mysql_string_type=args.mysql_string_type
    )
    converter.transfer()

if __name__ == '__main__':
    main()
I recently had to migrate from MySQL to JavaDB for a project that our team is working on. I found a Java library written by Apache called DdlUtils that made this pretty easy. It provides an API that lets you do the following:
Discover a database's schema and export it as an XML file.
Modify a DB based upon this schema.
Import records from one DB to another, assuming they have the same schema.
The tools that we ended up with weren't completely automated, but they worked pretty well. Even if your application is not in Java, it shouldn't be too difficult to whip up a few small tools to do a one-time migration. I think I was able to pull off our migration with less than 150 lines of code.
Get a SQL dump
moose@pc08$ sqlite3 mySqliteDatabase.db .dump > myTemporarySQLFile.sql
Import dump to MySQL
For small imports:
moose@pc08$ mysql -u <username> -p
Enter password:
....
mysql> use somedb;
Database changed
mysql> source myTemporarySQLFile.sql;
or
mysql -u root -p somedb < myTemporarySQLFile.sql
This will prompt you for a password. Please note: If you want to enter your password directly, you have to do it WITHOUT space, directly after -p:
mysql -u root -pYOURPASS somedb < myTemporarySQLFile.sql
For larger dumps:
mysqlimport or other import tools like BigDump.
BigDump gives you a progress bar.
Based on Jims's solution:
Quick easy way to migrate SQLite3 to MySQL?
sqlite3 your_sql3_database.db .dump | python ./dump.py > your_dump_name.sql
cat your_dump_name.sql | sed '1d' | mysql --user=your_mysql_user --default-character-set=utf8 your_mysql_db -p
This works for me. I use sed just to throw away the first line, which is not mysql-like, but you might as well modify the dump.py script to throw this line away.
There is no need for any script, command, etc...
You only have to export your sqlite database as a .csv file and then import it into MySQL using phpMyAdmin.
I used it and it worked amazingly...
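For reference, the CSV export itself can be done from the sqlite3 shell (mytable is a placeholder):

sqlite3 sample.db <<'EOF'
.headers on
.mode csv
.output mytable.csv
SELECT * FROM mytable;
EOF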
Ha... I wish I had found this first! My response was to this post... script to convert mysql dump sql file into format that can be imported into sqlite3 db
Combining the two would be exactly what I needed:
When the sqlite3 database is going to be used with ruby you may want to change:
tinyint([0-9]*)
to:
sed 's/ tinyint(1*) / boolean/g ' |
sed 's/ tinyint([0|2-9]*) / integer /g' |
alas, this only half works because even though you are inserting 1's and 0's into a field marked boolean, sqlite3 stores them as 1's and 0's so you have to go through and do something like:
Table.find(:all, :conditions => {:column => 1 }).each { |t| t.column = true }.each(&:save)
Table.find(:all, :conditions => {:column => 0 }).each { |t| t.column = false}.each(&:save)
but it was helpful to have the sql file to look at to find all the booleans.
This script is OK except for this case that, of course, I've met:
INSERT INTO "requestcomparison_stopword" VALUES(149,'f');
INSERT INTO "requestcomparison_stopword" VALUES(420,'t');
The script should give this output:
INSERT INTO requestcomparison_stopword VALUES(149,'f');
INSERT INTO requestcomparison_stopword VALUES(420,'t');
But instead gives this output:
INSERT INTO requestcomparison_stopword VALUES(1490;
INSERT INTO requestcomparison_stopword VALUES(4201;
with some strange non-ASCII characters around the last 0 and 1.
This didn't show up anymore when I commented out the following lines of the code (43-46), but other problems appeared:
line = re.sub(r"([^'])'t'(.)", "\1THIS_IS_TRUE\2", line)
line = line.replace('THIS_IS_TRUE', '1')
line = re.sub(r"([^'])'f'(.)", "\1THIS_IS_FALSE\2", line)
line = line.replace('THIS_IS_FALSE', '0')
This is just a special case, when we want to add a value that is 'f' or 't', but I'm not really comfortable with regular expressions; I just wanted to spot this case so it can be corrected by someone.
Anyway, thanks a lot for that handy script!!!
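For what it's worth, the garbled output is almost certainly because the replacement strings on those lines are not raw strings: in a plain Python literal, "\1" is the octal escape for chr(1), so a control character, rather than the captured group, ends up in the output. A quick demonstration:

import re

line = "INSERT INTO requestcomparison_stopword VALUES(420,'t');"

# Broken: "\1" and "\2" are octal escapes (chr(1), chr(2)), not backreferences
print(re.sub(r"([^'])'t'(.)", "\1THIS_IS_TRUE\2", line).replace('THIS_IS_TRUE', '1'))

# Fixed: raw replacement strings keep the backreferences intact
print(re.sub(r"([^'])'t'(.)", r"\1THIS_IS_TRUE\2", line).replace('THIS_IS_TRUE', '1'))
# -> INSERT INTO requestcomparison_stopword VALUES(420,1);

The merged script above sidesteps the problem by using lookarounds instead of capture groups.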
This simple solution worked for me:
<?php
$sq = new SQLite3( 'sqlite3.db' );

$tables = $sq->query( 'SELECT name FROM sqlite_master WHERE type="table"' );

while ( $table = $tables->fetchArray() ) {
    $table = current( $table );
    $result = $sq->query( sprintf( 'SELECT * FROM %s', $table ) );

    if ( strpos( $table, 'sqlite' ) !== false )
        continue;

    printf( "-- %s\n", $table );
    while ( $row = $result->fetchArray( SQLITE3_ASSOC ) ) {
        $values = array_map( function( $value ) {
            return sprintf( "'%s'", mysql_real_escape_string( $value ) );
        }, array_values( $row ) );
        printf( "INSERT INTO `%s` VALUES( %s );\n", $table, implode( ', ', $values ) );
    }
}
echo ".dump" | sqlite3 /tmp/db.sqlite > db.sql
watch out for CREATE statements