MySQL server lost power, partial data loss

I had a server running MySQL. Power to the machine was cut, and MySQL, I assume, was forced to terminate.
Now, when I try to connect to the database again, the tables exist, but there doesn't appear to be anything in them. Is there any chance of a way to fix this?
When I use the SHOW TABLES command, it lists the corrupted table, but when I use SELECT * FROM [corrupted table], it says the table does not exist.
I understand if it seems impossible (please let me know if so).
Thanks!

In such cases you can retrieve the data from the log files:
1) [This is first link][1],
2) [This is main URL][2]
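For example, if binary logging was enabled before the crash, committed changes can often be recovered by replaying the binary log with mysqlbinlog; a rough sketch (log file name and paths are placeholders):
mysqlbinlog /var/lib/mysql/binlog.000001 > recovered.sql
# inspect recovered.sql first, then replay it against the server:
mysql -u root -p DBNAME < recovered.sql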


Is "Database Synchronization" possible?

I have a problem and I'm not sure if this is possible. My web application has a database; I'm using MySQL Workbench and WAMP server.
My web app has a database named healthcare. If I import another database with the same tables but additional data, I want the first database to be updated with the new values only, not replaced.
Is it possible?
Edit: I searched the net and other related sources and managed to set phpMyAdmin to "Ignore multiple statement errors". When I import the second database (a .sql file with the same tables but new data), it does not update the first database, although the message says the import was successful. I'd appreciate any help...
In the past I've searched for tools to do similar database sync tasks; in my experience, none are free and reliable.
Have you tried writing some queries to do this manually?
The first thing that comes to mind would be figuring out a key you can use to evaluate each row and determine whether you should copy a given record from database A to database B.
Afterwards you could simply do an INSERT ... SELECT:
INSERT INTO healthcare_DESTINATION.mytable SELECT * FROM healthcare_SOURCE.mytable WHERE some_condition = 1;
Obviously this is the simplified version, but I've done something very similar utilizing timestamps (e.g. only copy rows newer than the newest row in the destination table); a sketch of that follows.
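For illustration, a rough sketch of that timestamp variant (table and column names are hypothetical; both databases are assumed to have an updated_at column):
-- grab the newest timestamp already present in the destination:
SELECT @latest := COALESCE(MAX(updated_at), '1970-01-01') FROM healthcare_DESTINATION.patients;
-- copy only source rows newer than that:
INSERT INTO healthcare_DESTINATION.patients
SELECT * FROM healthcare_SOURCE.patients WHERE updated_at > @latest;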
Hope this helps.

How to make MySQL Workbench forget I renamed a table?

Early on in my design phase I made the mistake of renaming a table and then creating a new table with the old name. For some reason, MySQL Workbench decided to remember this fact very well. Now every time I try to synchronize with a database, MySQL Workbench wants to rename the (new) table back to its old name.
I did this:
Rename my_table => my_new_table
Create new table called 'my_table'
And now MySQL Workbench insists this would be the right way to proceed:
Rename my_new_table => my_table
Drop my_new_table
That's clearly a bug in its own right, but how is MySQL Workbench keeping track of what it has done with the tables of a random DB? Is there some metadata somewhere I can clear to make MySQL Workbench believe that this change has already taken place and need not be repeated? Clearly Workbench is doing more than just a diff between the model and the existing database...
(I realize I could just ignore the particular step in the synchronization process, but I don't want to do that every time I synchronize, and of course any changes to these two particular tables would then never be carried out...)
Managed to work around the issue by running synchronization once with "skip DB changes and update model only". It is unclear whether bits of the schemata are still out of sync, but at least new changes now carry over.
Renaming an object and creating a new one with the same name is a special case that is quite difficult for the synchronization process to handle. The reason is that Workbench uses names to find objects (as there is no other identification mechanism like UUIDs or similar). So extra steps are taken to detect such a rename situation. Workbench explicitly keeps the old name so it has a way to determine what to sync and how.

MySQL Error 2013 after a failed attempt to import

I was importing one table into a MySQL server when the power went down. After this event I tried to query the table I was importing, but got error 2013, and only when querying this table (the others work just fine).
I have physical access to the server and tried to execute queries from there (SELECT, and even DROP TABLE), but still got the same error.
Does anybody know a solution where I can rebuild only this table (without rebuilding the whole schema from scratch)?
I'm adding this as an answer rather than having lots of comments underneath. I must state in advance that I've not used MySQL, but I have used SQL Server a lot, so I'm hoping that something I say may help.
You say the table is still there. Was it created as part of the operation you were doing or had it been there for a while?
What happens if someone else or a different account tries to access this table?
Is there anything on this page that is relevant to your problem?
http://dev.mysql.com/doc/refman/5.0/en/gone-away.html
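For reference, a couple of MySQL-side checks that might be worth trying (a rough sketch; mydb.mytable is a placeholder, and the table is assumed to be InnoDB):
CHECK TABLE mydb.mytable;
-- if the server itself crashes whenever the table is touched, adding the line below to
-- my.cnf under [mysqld] and restarting may let you dump or drop the table (try values 1-6):
-- innodb_force_recovery = 1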

MSACCESS 2002 Linked Table Performance Really Slow

I am trying to import some selective data and create a table in an MS Access 2002 db from a linked table. For some odd reason, the performance became really bad all of a sudden when importing the data.
I tried Googling and tried various methods like repairing/compacting the db and changing the SubDataSheet Name from [Auto] to [None], but neither worked.
Can anyone please give me some examples of how to increase the performance of linked tables?
Thank you.
Rather than selecting information from a linked table and trying to make a local table, when using a database server like MS SQL Server you would be better off creating a "Pass-Through Query" to do the SELECT work on the server side, and then carrying out a simple SELECT * on this pass-through to get your data into a local table; a sketch follows. This gives the best results if your first SELECT statement is complex and takes a while for Access to run on a linked table. If that is not the issue, then you will need to look at the speed of the network connecting you to your MS SQL server.
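As a sketch (all names are hypothetical; qryPassThrough is a saved pass-through query whose SQL executes on the server):
-- SQL stored in the pass-through query qryPassThrough (runs on SQL Server):
SELECT col1, col2 FROM dbo.BigTable WHERE some_condition = 1;
-- local Access make-table query that pulls the result into a local table:
SELECT * INTO LocalTable FROM qryPassThrough;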

Set up large database in MySQL for analysis in R

I have reached the limit of RAM in analyzing large datasets in R. I think my next step is to import these data into a MySQL database and use the RMySQL package. Largely because I don't know database lingo, I haven't been able to figure out how to get beyond installing MySQL, despite hours of Googling and RSeeking (I am running MySQL and MySQL Workbench on Mac OS X 10.6, but can also run Ubuntu 10.04).
Is there a good reference on how to get started with this usage? At this point I don't want to do any sort of relational databasing; I just want to import .csv files into a local MySQL database and do the subsetting with RMySQL.
I appreciate any pointers (including "You're way off base!", as I'm new to R and newer to large datasets... this one's around 80 MB).
The documentation for RMySQL is pretty good - but it does assume that you know the basics of SQL. These are:
creating a database
creating a table
getting data into the table
getting data out of the table
Step 1 is easy: in the MySQL console, simply "create database DBNAME". Or from the command line, use mysqladmin, or there are often MySQL admin GUIs.
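For example (DBNAME and DBUSERNAME are placeholders, as elsewhere in this answer):
CREATE DATABASE DBNAME;
-- or equivalently, from a shell prompt:
-- mysqladmin -u DBUSERNAME -p create DBNAME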
Step 2 is a little more difficult, since you have to specify the table fields and their type. This will depend on the contents of your CSV (or other delimited) file. A simple example would look something like:
use DBNAME;
create table mydata(
id INT(11) NOT NULL AUTO_INCREMENT PRIMARY KEY,
height FLOAT(5,2)
);
Which says: create a table with 2 fields: id, which will be the primary key (so it has to be unique) and will auto-increment as new records are added; and height, which here is specified as a FLOAT (a numeric type) with 5 digits in total and 2 after the decimal point (e.g. 100.27). It's important that you understand data types.
Step 3 - there are various ways to import data into a table. One of the easiest is to use the mysqlimport utility. In the example above, assuming that your data are in a file with the same name as the table (mydata), with the two columns separated by a tab character (id first, then height) and no header row, this would work:
mysqlimport -u DBUSERNAME -pDBPASSWORD DBNAME mydata
Step 4 - requires that you know how to run MySQL queries. Again, a simple example:
select * from mydata where height > 50;
Means "fetch all rows (id + height) from the table mydata where height is more than 50".
Once you have mastered those basics, you can move to more complex examples such as creating 2 or more tables and running queries that join data from each.
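For instance, a sketch with a hypothetical second table (subjects) that shares the id key with mydata:
use DBNAME;
create table subjects(
id INT(11) NOT NULL PRIMARY KEY,
name VARCHAR(50)
);
-- join the two tables on their shared key:
SELECT s.name, m.height FROM mydata m JOIN subjects s ON s.id = m.id WHERE m.height > 50;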
Then you can turn to the RMySQL manual. In RMySQL, you set up the database connection, then use SQL query syntax to return rows from the table as a data frame. So it really is important that you get the SQL part; the RMySQL part is easy.
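A minimal sketch of the RMySQL side (credentials and names are placeholders):
library(RMySQL)
# open a connection, run a query that comes back as a data frame, then clean up
con <- dbConnect(MySQL(), user = "DBUSERNAME", password = "DBPASSWORD", dbname = "DBNAME", host = "localhost")
df <- dbGetQuery(con, "select * from mydata where height > 50")
dbDisconnect(con)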
There are heaps of MySQL and SQL tutorials on the web, including the "official" tutorial at the MySQL website. Just Google search "mysql tutorial".
Personally, I don't consider 80 MB to be a large dataset at all; I'm surprised that this is causing a RAM issue, and I'm sure that native R functions can handle it quite easily. But it's good to learn new skills such as SQL, even if you don't need them for this problem.
I have a pretty good suggestion: for 80 MB, use SQLite. SQLite is a public-domain, lightweight, super-fast, file-based database that works (almost) just like a SQL database.
http://www.sqlite.org/index.html
You don't have to worry about running any kind of server or permissions, your database handle is just a file.
Also, it uses dynamic typing rather than rigid column types, so you don't have to worry much about declaring types (since all you need to do is emulate a single text table anyway).
Someone else mentioned sqldf:
http://code.google.com/p/sqldf/
which does interact with SQLite:
http://code.google.com/p/sqldf/#9._How_do_I_examine_the_layout_that_SQLite_uses_for_a_table?_whi
So your SQL create statement would look like this:
create table tablename (
id INTEGER PRIMARY KEY,
first_column_name TEXT,
second_column_name TEXT,
third_column_name TEXT
);
Otherwise, neilfws' explanation is a pretty good one.
P.S. I'm also a little surprised that your script is choking on 80 MB. Isn't it possible in R to just read through the file in chunks without loading it all into memory?
The sqldf package might give you an easier way to do what you need: http://code.google.com/p/sqldf/. Especially if you are the only person using the database.
Edit: Here is why I think it would be useful in this case (from the website); see the sketch after this list:
With sqldf the user is freed from having to do the following, all of which are automatically done:
database setup
writing the create table statement which defines each table
importing and exporting to and from the database
coercing of the returned columns to the appropriate class in common cases
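To illustrate (a tiny sketch; the data frame and file names are made up):
library(sqldf)
# query an in-memory data frame directly with SQL:
mydata <- data.frame(id = 1:3, height = c(49.5, 62.1, 80.3))
tall <- sqldf("select * from mydata where height > 50")
# or filter a large CSV while reading it, without loading the whole file into R first:
# tall <- read.csv.sql("mydata.csv", sql = "select * from file where height > 50")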
See also here: Quickly reading very large tables as dataframes in R
I agree with what's been said so far, though I guess getting started with MySQL (and databases in general) is not a bad idea for the long run if you are going to deal with data. I mean, I checked your profile, which says finance PhD student. I don't know if that means quant finance, but it is likely that you will come across really large datasets in your career. If you can afford some time, I would recommend learning something about databases. It just helps.
The documentation of MySQL itself is pretty solid, and you can find a lot of additional (specific) help here on SO.
I run MySQL with MySQL Workbench on Mac OS X Snow Leopard too, so here's what helped me get it done comparatively easily.
I installed MAMP, which gives me a local Apache web server with PHP, MySQL, and the MySQL tool phpMyAdmin, which can be used as a nice web-based alternative to MySQL Workbench (which is not always super stable on a Mac :). You get a little widget to start and stop the servers and can access some basic configuration settings (such as ports) through your browser. It's really a one-click install.
Install the R package RMySQL. I will put my connection string here, maybe that helps:
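Something along these lines worked for me (the database name is a placeholder; port and credentials are MAMP's defaults, so adjust them if you changed your setup):
library(RMySQL)
# MAMP's bundled MySQL listens on port 8889 with user/password root/root by default
con <- dbConnect(MySQL(), user = "root", password = "root", dbname = "DBNAME", host = "127.0.0.1", port = 8889)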
Create your databases with MySQL workbench. INT and VARCHAR (for categorical variables that contain characters) should be the field types you basically need at the beginning.
Try to find the import routine that works best for you. I don't know if you are a shell/terminal guy; if so, you'll like what was suggested by neilfws. You could also use LOAD DATA INFILE, which I prefer since it's only one query, as opposed to INSERT INTO (line by line). For example:
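A sketch (the file path, table name, and CSV layout are assumptions):
LOAD DATA LOCAL INFILE '/path/to/mydata.csv'
INTO TABLE mydata
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n'
IGNORE 1 LINES; -- skip the header row, if there is one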
If you specify the problems that you have more accurately, you'll get some more specific help – so feel free to ask ;)
I assume you have to work a lot with time series data; there is a project (TSMySQL) around that uses R and relational databases (such as MySQL, but it also works with other DBMSs) to store time series data. Besides, you can even connect R to FAME (which is popular among financiers, but expensive). This last paragraph is certainly nothing basic, but I thought it might help you consider whether it's worth the hassle to dive in a little deeper.
Practical Computing for Biologists is a nice (though subject-specific) introduction to SQLite; see Chapter 15, "Data Organization and Databases".