I truncated two tables in my MySQL database (for a Rails project) so that I could repopulate them with test data. But for some reason the application is still counting how many entries there used to be (250), even though there are only 9 entries now.
I even went into the Rails console using ruby script/rails console, then truncated with:
ActiveRecord::Base.connection.execute("TRUNCATE TABLE bars;")
but that didn't do anything different than running the query through MySQL. I am pretty confused; the only thing I can think of doing is restarting the server. I am just wondering if there is another way to do this without having to reboot everything.
Printing the search results to the logger, I can see that the bars come back as a bunch of nil values where there used to be a bar_profile, even though I have truncated the tables that reference bars and bar_profiles.
So I don't get why it would just be returning what the results would have been before the tables were truncated, except that now, instead of returning actual results, they are just nil.
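One cheap way to split this problem in half is to confirm from the MySQL side that the tables really are empty; if they are, the stale count of 250 has to be coming from a cache on the Rails side, not from the database. A minimal check (the bars and bar_profiles names are taken from the question):
-- Run directly in the mysql client to confirm the truncate took effect
SELECT COUNT(*) FROM bars;
SELECT COUNT(*) FROM bar_profiles;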
I have a database in SQL Server that I am trying to convert into a MySQL database, so I can host it on AWS and move everything off-premises. From this link, it seems like normally this is no big deal, although that link doesn't seem to migrate from a .bak file so much as from your local instance of SQL Server that is running and contains the database in question. No big deal, I can work with that.
However when I actually use MySQL Workbench to migrate using these steps, it gets to the Bulk Data Transfer step, and then comes up with odd errors.
I get errors like the following:
ERROR: OptionalyticsCoreDB-Prod.UserTokens:Inserting Data: Data too long for column 'token' at row 1
ERROR: OptionalyticsCoreDB-Prod.UserTokens:Failed copying 6 rows
ERROR: OptionalyticsCoreDB-Prod.UserLogs:Inserting Data: Data too long for column 'ActionTaken' at row 1
ERROR: OptionalyticsCoreDB-Prod.UserLogs:Failed copying 244 rows
However, the data should not be "too long." These columns are nvarchar(MAX) in SQL Server, and the data in them is often very short in the specified rows, nothing that approaches the maximum length of an nvarchar.
Links like this and this show that there used to be bugs with nvarchar formats almost a decade ago, but they have been fixed for years now. I have checked and even updated and restarted my software and then my computer; I have up-to-date versions of MySQL and MySQL Workbench. So what's going on?
What is the problem here, and how do I get my database successfully migrated? Surely it's possible to migrate from SQL Server to MySQL, right?
I have answered my own question... Apparently there IS some sort of bug in Workbench when translating SQL Server nvarchar(MAX) columns. I output the schema migration to a script and examined it: it was translating those columns as varchar(0). After replacing all of them with TEXT columns, the migration completed and worked.
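For anyone hitting the same bug, the hand-edit amounts to swapping the broken column definitions in the generated script for TEXT. A minimal sketch using the two columns from the error log above (if the tables already exist, the same fix can be applied with ALTER TABLE; exact definitions will vary):
-- Workbench translated the SQL Server nvarchar(MAX) columns as, e.g.:
--   `token` VARCHAR(0)
-- Replace them with TEXT before re-running the bulk data transfer:
ALTER TABLE `UserTokens` MODIFY `token` TEXT;
ALTER TABLE `UserLogs` MODIFY `ActionTaken` TEXT;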
Frustrating lesson.
I have tried relentlessly to drop a table, but it doesn't seem to work. It doesn't give me any error message, but it seems like it's taking a long time (6+ hours). The table has around 800K rows and two columns.
Additionally, I can't even access the table using Sequel Pro or through the command-line interface. Sequel Pro says "Loading" but never loads, and the command line gets stuck in the command without any messages.
However, the DESCRIBE command works on the table.
I have a similar problem with another table that has far fewer rows, at around 1,000.
However, other tables in the database work perfectly fine.
Are there any issues with this version of MySQL (see below), or are there any other methods that will let me delete the table?
I am using MySQL 5.1.73 Source distribution
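A DROP TABLE that hangs for hours while DESCRIBE still answers is usually waiting on a lock held by another connection rather than doing real work. A minimal diagnostic sketch, assuming you can open a second session with sufficient privileges:
-- List every connection and what it is doing; a blocked DROP
-- typically sits in a waiting state while another thread holds the lock
SHOW FULL PROCESSLIST;
-- Show which tables are currently open and in use
SHOW OPEN TABLES WHERE In_use > 0;
-- If a stale session is holding the table, terminate it using the Id
-- from the processlist output (42 is a placeholder)
KILL 42;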
What is happening is very strange; I have never seen this before, and I am pretty familiar with MySQL.
When searching a table using the phpMyAdmin table search feature, the result is empty no matter what I put in. For example, searching for 77 in the ID column returns an empty result. However, if I run an SQL query, also in phpMyAdmin, then I get the result. For example: select * from table1 where id = '77';
What is even stranger is that this only happens on one table; on all other tables the search feature works fine.
I tried repairing the table, but the empty result still occurs.
I couldn't find this happening to anyone anywhere online.
I also restarted the MySQL server.
Are you using cPanel? If yes, I just described how to fix the problem on the cPanel forums:
http://forums.cpanel.net/f5/unable-use-phpmyadmin-search-users-table-313381.html
If your table has a large number of fields, an update via the phpMyAdmin interface can exceed the value of the PHP setting 'max_input_vars'. When this happens, some of the internal form fields that phpMyAdmin expects to receive on the page your update is being posted to are truncated, which causes phpMyAdmin to fail, unfortunately silently, and the page redirects with no warnings back to the blank search form. The default for 'max_input_vars' is 1000. I upped mine in my php.ini file to 5000 with no negative side effects, and it solved this problem for me.
The setting 'max_input_vars' has a mode of PHP_INI_PERDIR, so if you don't have access to your php.ini file, you may be able to set it in an .htaccess file (under mod_php), in your httpd.conf file, or in a .user.ini file (since PHP 5.3) if you have one. Note that a PHP_INI_PERDIR setting cannot be changed at runtime with ini_set(), because PHP parses the request input before your script runs. In an .htaccess file the line would be:
php_value max_input_vars 5000
and in php.ini or .user.ini:
max_input_vars = 5000
Hopefully that should get you started in the right direction.
Very easy: go to the table and raise the number of rows shown to the maximum offered in the dropdown. Then you can search through big pages at a time. The search doesn't fetch text from the whole table; it only works with the currently displayed page of the table.
I've got a user model for my app, and it has effectively used has_secure_password up until this point. has_secure_password necessitates a password_digest column, and herein lies my recent problem.
I wanted to create a "role" column of type string that separates admins from users - but after migrating, my password_digest got corrupted so that I get an invalid hash error whenever I try to use it in my app. In MySQL everything is fine (the password_digest values haven't changed), but in the Rails console the value returned by User.first.password_digest is something along the lines of:
#<BigDecimal:59d0c60,'0.0',9(18)>
Furthermore, unless I change the type of role from string, it gets similarly messed up (although, like password_digest, it's totally fine in MySQL regardless). Rolling back the migration and getting rid of the "role" column causes password_digest to go back to normal as far as the Rails console is concerned.
What is going on here?
Here's my database schema:
Here's the result of an SQL query fed directly to MySQL:
Here's the result of the same query through rails (first time):
Here's the result of the same query through rails (after first time):
Looks like your query is getting auto-explained. See the documentation here http://guides.rubyonrails.org/active_record_querying.html#automatic-explain
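If you want to see what that automatic EXPLAIN produces, you can run it by hand in MySQL; the users table name here is an assumption based on the user model in the question:
-- Roughly what Rails would run behind the scenes for User.first
EXPLAIN SELECT * FROM users LIMIT 1;
The EXPLAIN result set is query metadata (a numeric id, row estimates, and so on) rather than your actual row, which would fit the symptom of numeric-looking values showing up where column data should be.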
I use Kettle for some transformations and ran into a problem:
For one specific row, my DatabaseLookup step hangs. It just doesn't give a result. Trying to stop the transformation results in a never ending "Halting" for the lookup step.
The value given is nothing complicated at all, nor is it different from all the other rows/values. It just won't continue.
If I run the same query directly against the database or in a different database tool (e.g. SQuirreL), it works.
I use Kettle/Spoon 4.1; the database is MySQL 5.5.10. It happens with Connector/J 5.1.14 and with the one bundled with Spoon.
The step initializes flawlessly (it even works for other rows) and I have no idea why it fails. No error message in the Spoon logs, nothing on the console/shell.
Weird. What's the table type? Is it MyISAM? Does your transformation also perform updates to the same table? Maybe you are inadvertently locking the table at the same time somehow?
Or maybe it's a MySQL 5.5 thing. But I've used this step extensively with MySQL 5.0 and every PDI 4.x release, and it's always been fine... Maybe post the transformation?
I just found the culprit:
The lookup takes the id field as its result and gives it a new name, PERSON_ID. This FAILS in some cases! The resulting lookup/prepared statement was something like:
select id as PERSON_ID FROM table WHERE ...
SOLUTION:
Don't use an underscore in the "New name" for the field! With a new name of PERSONID, everything works flawlessly for ALL rows!
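With the underscore-free name, the generated statement should become something like:
select id as PERSONID FROM table WHERE ...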
Stupid error ...