JHipster liquibase - csv

I'm getting started with JHipster and am attempting to initialize my data using Liquibase. I added two entities via the JHipster yo task, placed my two CSV files in the /resources/config/liquibase directory, and added the relevant loadData section to my "added entity" changelog files to point at the CSVs. I had to update the MD5 hash in the databasechangelog table and the app is running, BUT the CSV files don't seem to get picked up by the loadData elements I added to the "added entity" XML files. No data is inserted. Any ideas how to go about running this down?

If you updated the MD5 hashes in the changelog table, I suspect your changelog files will not be run, because Liquibase will think that they have already been run. I would rather set the MD5 hashes to null and restart the app.
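A minimal sketch of that reset, assuming the default changelog table name (which may be upper- or lower-case depending on your database); Liquibase recomputes and stores fresh checksums on the next startup:
-- clear the stored checksums so Liquibase recomputes them on restart
UPDATE DATABASECHANGELOG SET MD5SUM = NULL;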

This was my solution:
1. Delete the changeset's row in the databasechangelog table.
2. Delete the entity table itself.
3. Restart the app.
Liquibase then re-created the table from the changelog and loaded all the CSV data into the database (see the sketch below).
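As a rough sketch, with a hypothetical changeset ID and table name (use the values from your own changelog):
-- forget that the changeset was run
DELETE FROM DATABASECHANGELOG WHERE ID = '20150101000000-1';
-- drop the entity table so the changeset can recreate and reload it
DROP TABLE my_entity;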
I hope I helped you :)

Related

Apache Cassandra: unable to load a CQL file

I'm just starting out with Apache Cassandra. I have some CQL files that define my data. I have Cassandra installed on my machine and I started it as per the Apache Cassandra wiki. Nothing suspicious!
I'm using the CLI to create the namespaces and the tables for which I have some cql files in a specific directory like:
create_tables.cql
load_tables.cql
I was able to successfully run create_tables.cql, but when I tried to run load_tables.cql, I always end up seeing:
/Users/myUser/data/load-test-data.cql:7:Can't open 'test_data.csv' for reading: [Errno 2] No such file or directory: 'test_data.csv'
The load_tables.cql refers to another csv file that contains the test data that I want to populate my database with!
COPY test_table (id, name) FROM 'test_data.csv';
I tried all sorts of permission changes on the data folder where the CQL files are, but I still keep getting this message. Any hints as to what I could do to get this solved?
OK, I got this one sorted! It has to do with absolute and relative paths. I ended up using an absolute path to where my CSV is located, and this solved the issue!
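For illustration, reusing the paths from the question (cqlsh resolves a relative file name against its own working directory, not against the location of the .cql file):
COPY test_table (id, name) FROM '/Users/myUser/data/test_data.csv';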

Hadoop switching to new HDFS "image"

I have new HDFS local storage directories (dfs.namenode.name.dir and dfs.datanode.data.dir), the actual local directories under which HDFS stores data (for the namenode, secondary namenode, and datanode), with all the necessary contents (edits, fsimage, etc.). I would like to switch from my current HDFS to this new HDFS.
In order to do this I stopped the cluster (I run in pseudo-distributed mode), edited the hdfs-site.xml config (modified the paths), and started the cluster.
However, the NameNode fails to start with the following error:
FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: Exception in namenode join
org.apache.hadoop.hdfs.server.common.IncorrectVersionException: Unexpected version of storage directory /path/to/NameNodeDir. Reported: -X. Expecting = -Y.
Why doesn't this work? As I said, I have a whole "image/snapshot" of a properly working new HDFS (the whole thing: name, secondary, data). I thought I could simply "swap" the HDFS config for the new one and it would work.
I cannot format the NameNode, as the new HDFS "image" contains data. What I am trying to achieve is to "plug in" the new HDFS, replacing the old one, without any modifications to the new HDFS "data/meta" files.
One potential problem might be a YARN/HDFS version mismatch. As explained here (http://hortonworks.com/blog/hdfs-metadata-directories-explained), the layoutVersion key in the VERSION file under namenode/current is what differs: to simplify, the new HDFS image has layoutVersion X, while my old HDFS instance had layoutVersion Y, the value the binaries expect in the error above. I will try to upgrade to YARN 2.6 in order to verify this.
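For reference, the VERSION file looks roughly like this (the values below are illustrative, not from my setup; layoutVersion is the field the running binaries check):
# <dfs.namenode.name.dir>/current/VERSION
namespaceID=123456789
clusterID=CID-example-cluster
cTime=0
storageType=NAME_NODE
blockpoolID=BP-example
layoutVersion=-X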
I would be grateful for any help.
Regards
SOLUTION
The problem was an HDFS version mismatch, as I wrote in my comment to user Mikhail Golubtsov. I was trying to run newer HDFS metadata using binaries from an older HDFS version, hence the error. If anybody encounters a similar problem, just update/upgrade your YARN/HDFS version to the appropriate one. This solved the issue for me.

How can I run tests on a pre-populated database in Rails?

So I am writing acceptance tests for a single feature of a large app. I needed a lot of data for this and have a lot of scenarios to test, so I have pre-populated a MySQL database using Sequel Pro.
Obviously the appname_test database is in place for the other tests in the app. I would like to know how I could load a .sql file (a SQL dump of content) into this database at the start of my tests (before the first context statement).
I am new to development so this is completely new to me. Thanks in advance for any help.
Update:
I have used the yaml_db gem to dump the dev db (db:data:dump) and then load it into the test db (db:data:load RAILS_ENV=test). However, as soon as I run my specs the test db is wiped clean! Is there a way to run db:data:load RAILS_ENV=test from inside the spec file? I have tried:
require 'rake'
Rake::Task['bundle exec db:data:load RAILS_ENV=test'].invoke
but it says Don't know how to build task 'bundle exec db:data:load RAILS_ENV=test'
OK, so here is what I did to solve this.
I used the yaml_db gem and rake db:data:dump, which creates db/data.yml (a dump of the dev db).
I then had to write a library and rake task that converted the data.yml file into individual fixture files. I run this new rake task once to create the fixture files.
Then, at the start of my spec, I call fixtures :all, which populates the test database with all the fixtures.

Mercurial (TortoiseHG) won't commit after manually deleting some files

I removed some database files from my project using the search function in Explorer. After that, Mercurial complains that it cannot find the files and refuses to commit. I tried using the shelve tool, but then I ran into a bug report for version 2.5 of TortoiseHG stating that the node holding the database file could not be found.
How do I solve this?
Is it possible you deleted not only the files in your working directory but also some down in the data store itself (.hg/....)? It's possible to do that if you search indelicately in Explorer. Here's the command-line equivalent:
ry4an@four:~/projects/unblog$ find . -name '*.xml*'
./static/attachments/2005-09-22-isle-royale.gpx.xml
./.hg/store/data/static/attachments/2005-09-22-isle-royale.gpx.xml.i
It is entirely safe and okay for me to delete the first of those .gpx.xml files, but if I deleted every file with .gpx.xml in the name, I'd be deleting the file from the store too and corrupting my repository.
Try running hg verify in the repository and see what output you get.

Table is 'read only'

When I try to execute an update query on my table, I get an error saying:
1036 - Table data is read only.
How can I fix that?
File permissions in /var/db/mysql are set to 777.
The 'Repair Table' function doesn't seem to help.
Is there anything I can do about this?
In my case, the MySQL config file had innodb_force_recovery = 1. Commenting that out solved the issue. Hope it helps someone.
Who owns /var/db/mysql, and what group are they in? It should be mysql:mysql. You'll also need to restart MySQL for the changes to take effect.
Also check that the currently logged-in user has been granted UPDATE privileges.
(This answer is related to the headline, but not to the original question.)
In case you (like me) are trying to temporarily alter data via the MySQL Workbench interface:
If the table does not have a primary key, MySQL Workbench has no way of identifying the row you are trying to alter, and therefore you cannot alter it.
The solution in that case is either to alter the data via another route, or simply to add a primary key to the table, as in the sketch below.
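A minimal example of the second option (table and column names here are hypothetical):
-- give Workbench a way to identify individual rows
ALTER TABLE my_table ADD COLUMN id INT NOT NULL AUTO_INCREMENT PRIMARY KEY;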
In any case, I hope it helps someone :)
You should change the owner to mysql:mysql.
Use this command: chown -Rf mysql:mysql /var/lib/mysql/DBNAME
My situation was that every time I needed to force MySQL to start, I had to set innodb_force_recovery = 1 in my.ini, and the error log showed an error saying:
Attempted to open a previously opened tablespace. Previous tablespace mysql/innodb_table_stats uses space ID: 1 at filepath: .\mysql\innodb_table_stats.ibd. Cannot open tablespace profile/profile_commentmeta which uses space ID: 1 at filepath: .\profile\profile_commentmeta.ibd
I didn't know why this file could not be opened, and it caused many other "table read only" problems in other databases too.
So here is how I fixed the problem in a simple way, without hurting other files.
1. First of all, make sure you have added innodb_force_recovery = 1 below [mysqld] in the my.ini file (path: X:\xampp\mysql\bin\my.ini) and that it is working.
2. Next, export all the databases through localhost/phpmyadmin under the Export tab and store them somewhere.
3. Rename the data folder to data-bak, then create a new data folder.
4. Finally, import all the .sql databases back from the phpMyAdmin panel, and also copy the phpmyadmin folder from the old data-bak folder into the new data folder. If any other file is needed, go back to the data-bak folder to copy and paste it.
Now everything is fixed and done, and I don't need to force MySQL to start every time.
Hope this also works for you.
MySQL doesn't have write access to the database file. Check the permissions and the owner of the file.
On Windows I use the XAMPP server. I commented out the line innodb_force_recovery = 1 in my.ini (changing it to #innodb_force_recovery = 1) and the problem was resolved.
I solved the same issue by editing the AppArmor configuration file. I found the answer here: https://stackoverflow.com/a/14563327/31755661
Maybe you are getting the read-only error from your table's storage engine.
Check your storage engine; if it is MRG_MYISAM, change it to MyISAM and try again.
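Something along these lines, with a hypothetical table name:
-- check which engine the table uses
SHOW TABLE STATUS LIKE 'my_table';
-- convert it if it reports MRG_MYISAM
ALTER TABLE my_table ENGINE = MyISAM;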
If you are running SELinux in enforcing mode, then check your /var/log/messages for audit faults. If you see the tell-tale "****" messages about SELinux blocking write access to your table files in /, then you need to relabel those files so that they have this label:
system_u:object_r:mysqld_db_t:s0
What you could have is a broken label from copying those files over from a user directory (such as during a recovery attempt).
There's a great resource for SELinux here:
https://docs.fedoraproject.org/en-US/Fedora/11/html/Security-Enhanced_Linux/sect-Security-Enhanced_Linux-SELinux_Contexts_Labeling_Files-Persistent_Changes_semanage_fcontext.html
Just remember that you will need to do this for all of those files, which could be many. Then you will want to run the restorecon -R -v command to get recursive (-R) application of the new labels. There is no support for -R in the semanage command, as far as I could tell.
For reference, the semanage command to relabel looks like this:
semanage fcontext -a -t mysqld_db_t 'filename'
The quoting of the file name is critical for the command to work.
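Put together, a sketch for the data directory from the question (the path and the file-context pattern are assumptions; adjust them to your layout):
# register a persistent file context for the MySQL data directory
semanage fcontext -a -t mysqld_db_t '/var/db/mysql(/.*)?'
# apply the new labels recursively
restorecon -R -v /var/db/mysql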
In my case there was a table with a read-only state set, and when I tried to restart the MySQL service it would not even start again, with no descriptive error.
The solution was to run fsck on the drive (with many fixes), which was advised after an Ubuntu reboot.
I'm running Ubuntu in VirtualBox under Windows, and it often hangs or has functionality problems.
One other way to get this error is to create your table with a LIKE statement, using a merged (MRG_MYISAM) table as the source. That way the newly created table is read-only and can't "receive" any new records.
so
CREATE TABLE ic.icdrs_kw37 LIKE ic.icdrs ... #<- a merged table.
then:
REPLACE INTO ic.icdrs_kw37 ... # -> "Table is read-only"
bug or feature?
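If that's what you're hitting, one possible workaround is to convert the copy to a plain writable engine right after creating it (a sketch, assuming you want an independent MyISAM copy rather than a new MERGE table):
CREATE TABLE ic.icdrs_kw37 LIKE ic.icdrs;   -- the copy inherits the MRG_MYISAM engine
ALTER TABLE ic.icdrs_kw37 ENGINE = MyISAM;  -- make the new table writable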