Mapping HBase Tables (Namespace) with Apache Phoenix (Schema) Namespaces

I have created an HBase table named test under the namespace np (np:test) and loaded a record as below:
create 'np:test','cf'
put 'np:test','1','cf:c1','99'
I have also created a schema 'np' in Phoenix, and created a Phoenix table to map the HBase table as below:
create schema 'np';
CREATE TABLE "test"(PK VARCHAR PRIMARY KEY, "np"."c1" VARCHAR);
I can scan the inserted data through the HBase table, but not through the Phoenix table; the mapping was not established.
In addition, I also tried creating a view as below:
CREATE VIEW "test"(PK VARCHAR PRIMARY KEY, "np"."c1" VARCHAR);
I am getting:
Error: ERROR 505 (42000): Table is read only. (state=42000,code=505)
org.apache.phoenix.schema.ReadOnlyTableException: ERROR 505 (42000): Table is read only.
at org.apache.phoenix.query.ConnectionQueryServicesImpl.ensureTableCreated(ConnectionQueryServicesImpl.java:1069)
at org.apache.phoenix.query.ConnectionQueryServicesImpl.createTable(ConnectionQueryServicesImpl.java:1434)
at org.apache.phoenix.schema.MetaDataClient.createTableInternal(MetaDataClient.java:2624)
at org.apache.phoenix.schema.MetaDataClient.createTable(MetaDataClient.java:1040)
at org.apache.phoenix.compile.CreateTableCompiler$2.execute(CreateTableCompiler.java:212)
at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:393)
at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:376)
at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
at org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:374)
at org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:363)
at org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1707)
at sqlline.Commands.execute(Commands.java:822)
at sqlline.Commands.sql(Commands.java:732)
at sqlline.SqlLine.dispatch(SqlLine.java:813)
at sqlline.SqlLine.begin(SqlLine.java:686)
at sqlline.SqlLine.start(SqlLine.java:398)
at sqlline.SqlLine.main(SqlLine.java:291)
Please help me resolve the HBase-Phoenix table mapping.
Thanks in advance!
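One thing worth checking (my own observation, not a confirmed fix): in the Phoenix DDL the first part of "np"."c1" names the HBase column family, so it has to match the family used in the put, which is cf, not np. A minimal sketch of the mapping DDL under that assumption, also assuming phoenix.schema.isNamespaceMappingEnabled=true is set in hbase-site.xml on both client and server:
-- "cf" must match the HBase column family from: put 'np:test','1','cf:c1','99'
CREATE SCHEMA IF NOT EXISTS "np";
USE "np";
CREATE TABLE "test" (pk VARCHAR PRIMARY KEY, "cf"."c1" VARCHAR)
    COLUMN_ENCODED_BYTES = 0;  -- on Phoenix 4.10+, disables column-name encoding so existing qualifiers resolve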

Integer display width is deprecated and will be removed in a future release

When I try to create a table with my data, it is created with this warning message. Actually, I am a beginner to SQL and I want to learn; please help me. (Screenshot for reference.)
create schema test;   -- first I created the schema
use test;             -- then I pointed at the test database
create table jail(tid int(100), tname varchar(20), punishment int(20));
I tried to create the jail table under the test schema; the table was created, but with the warning message.
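For context, the warning comes from MySQL 8.0.17+, which deprecates the integer display width syntax such as int(100); the width never limited what an INT can store, it only affected how some clients padded output. A sketch of the same table without display widths, which avoids the warning:
create table jail(
    tid int,            -- display width dropped; INT stores the same range as int(100)
    tname varchar(20),  -- varchar lengths are real limits and are unaffected
    punishment int
);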

How to integrate MySQL table data into KSQL streams or tables?

I am trying to build a data pipeline from MySQL to KSQL.
Use case: the data source is MySQL, where I have created a table.
I am using
./bin/connect-standalone ./etc/schema-registry/connect-avro-standalone.properties ./etc/kafka-connect-jdbc/source-quickstart-sqlite.properties
to start a standalone connector, and it is working fine.
I am starting the consumer with the topic name, i.e.
./bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test1Category --from-beginning
When I insert data into the MySQL table, I see the result in the consumer as well. I have also created a KSQL stream with the same topic name. I expect the same result in my KSQL stream, but I am not getting any result when I run
select * from <streamName>
Connector configuration (source-quickstart-mysql.properties):
name=jdbc_source_mysql
connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
key.converter=io.confluent.connect.avro.AvroConverter
key.converter.schema.registry.url=http://localhost:8081
value.converter=io.confluent.connect.avro.AvroConverter
value.converter.schema.registry.url=http://localhost:8081
connection.url=jdbc:mysql://localhost:3306/testDB?user=root&password=cloudera
#comment=Which table(s) to include
table.whitelist=ftest
mode=incrementing
incrementing.column.name=id
topic.prefix=ftopic
Sample Data
MySQL
1.) Create Database:
CREATE DATABASE testDB;
2.) Use Database:
USE testDB;
3.) create the table:
CREATE TABLE products (
id INTEGER NOT NULL PRIMARY KEY,
name VARCHAR(255) NOT NULL,
description VARCHAR(512),
weight FLOAT
);
4.) Insert data into the table:
INSERT INTO products(id,name,description,weight)
VALUES (103,'car','Small car',20);
KSQL
1.) Create Stream:
CREATE STREAM pro_original (id int, name varchar, description varchar,weight bigint) WITH \
(kafka_topic='proproducts', value_format='DELIMITED');
2.) Select Query:
Select * from pro_original;
Expected output
Consumer: receives the data that is inserted into the MySQL table. Here I am indeed getting the data from MySQL.
KSQL: the stream should be populated with the data that is inserted into the MySQL table and reflected in the Kafka topic. I am not getting the expected result in KSQL.
Please help me with this data pipeline.
Your data is in Avro format, but as the VALUE_FORMAT you've specified DELIMITED instead of AVRO. It is important to tell KSQL the format of the values stored in the topic. The following should do the trick for you:
CREATE STREAM pro_original_v2 \
WITH (KAFKA_TOPIC='products', VALUE_FORMAT='AVRO');
After executing
SELECT * FROM pro_original_v2;
data inserted into the Kafka topic should now be visible in your KSQL console window.
You can have a look at some Avro examples in KSQL here.
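One more note (an addition of mine, not part of the original answer): by default a KSQL SELECT only shows records that arrive after the query starts, so to replay rows already sitting in the topic you can reset the consumer offset first:
SET 'auto.offset.reset' = 'earliest';
SELECT * FROM pro_original_v2;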

Laravel: How can I change the default index key length in a morph table?

I have a table "invoice_relations", which is a morph table.
So in the migration I wrote:
$table->morphs('invoice_relations');
But it gives this error while running the migration:
Syntax error or access violation: 1059 Identifier name 'invoice_relations_invoice_relations_id_invoice_relations_type_index' is too long in /var/www/html/st/sales-tantra/vendor/doctrine/dbal/lib/Doctrine/DBAL/Driver/PDOStatement.php:105
Change your
$table->morphs('invoice_relations');
To this:
$table->morphs('invoice_relations', 'invoice_relations_morph_key');
Or this:
$table->unsignedInteger("invoice_relations_id");
$table->string("invoice_relations_type");
$table->index(["invoice_relations_id", "invoice_relations_type"], "YOUR_INDEX_NAME");
Note also that, by convention, names for polymorphic relations end in 'able', for example relationable:
https://laravel.com/docs/5.6/eloquent-relationships#polymorphic-relations
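Putting it together, a minimal sketch of the migration (the surrounding Schema::create call and extra columns are illustrative assumptions, not from the original question):
Schema::create('invoice_relations', function (Blueprint $table) {
    $table->increments('id');
    // the second argument gives the morph index a short, explicit name,
    // keeping it under MySQL's 64-character identifier limit
    $table->morphs('invoice_relations', 'invoice_relations_morph_key');
    $table->timestamps();
});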

Entity Framework related objects insertion with stored procedure and auto_increment field

I have a problem inserting a related row through Entity Framework 5. I'm using it with RIA Services and .NET Framework 4.5. The database is MySQL 5.6, and the connector version is 6.6.5.
It raises a Foreign Key constraint exception.
I've chosen to simplify the model to expose my issue.
LDM
Provider(id, name, address)
Article(id, name, price)
LinkToProvider(provider_id, article_id, provider_price)
// Ids are auto_increment columns.
First I create a new instance of Article and add an instance of LinkToProvider to the article's LinkToProvider collection. In this LinkToProvider object the article itself is referenced, along with an existing provider.
Then I submit the changes.
Sample code from the DataViewModel
this.CurrentArticle = new Article();
...
this.CurrentArticle.LinkToProvider.Add(
    new LinkToProvider {
        Article = this.CurrentArticle,
        Provider = this.ProviderCollection.CurrentItem
    });
...
this.DomainContext.articles.Add(this.CurrentArticle);
this.DomainContext.SubmitChanges();
NOTE:
At the beginning, Entity Framework inserts the product fine. Then it fails, because it tries to insert a row into the LinkToProvider table with an unknown product id, like the following:
INSERT
INTO LinkToProvider
VALUES(5, 0, 1.2)
It puts 0 instead of the generated id.
But if I insert a product alone, without any relations, the product id is generated in the database correctly.
Any help will be much appreciated!
Thank you.
I found the answer.
You need to bind the result from the stored procedure to the id column in the edmx model.
So I had to modify my stored procedure to return the last inserted id for the article table as a one-row result set:
SELECT LAST_INSERT_ID() AS NewArticleId;
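For illustration, a minimal sketch of such an insert procedure (the procedure and parameter names are my own assumptions; only the final SELECT comes from the answer):
DELIMITER //
CREATE PROCEDURE insert_article(IN p_name VARCHAR(255), IN p_price DECIMAL(10,2))
BEGIN
    INSERT INTO Article(name, price) VALUES (p_name, p_price);
    -- returning the generated key is what lets EF bind it back to the id column
    SELECT LAST_INSERT_ID() AS NewArticleId;
END //
DELIMITER ;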
Then, in the edmx model, I added the binding using the name of the column returned by the stored procedure; here it's NewArticleId.
It's explained here: http://learnentityframework.com/LearnEntityFramework/tutorials/using-stored-procedures-for-insert-update-amp-delete-in-an-entity-data-model/.

SQLAlchemy and generating ALTER TABLE statements

I want to programmatically generate ALTER TABLE statements in SQLAlchemy to add a new column to a table. The column to be added should take its definition from an existing mapped class.
So, given a SQLAlchemy Column instance, can I generate the SQL schema definition(s) I would need for ALTER TABLE ... ADD COLUMN ... and CREATE INDEX ...?
I've played at a Python prompt and been able to see a human-readable description of the data I'm after:
>>> DBChain.__table__.c.rName
Column('rName', String(length=40, convert_unicode=False, assert_unicode=None, unicode_error=None, _warn_on_bytestring=False), table=<Chain>)
When I call engine.create_all() the debug log includes the SQL statements I'm looking to generate:
CREATE TABLE "Chain" (
...
"rName" VARCHAR(40),
...
)
CREATE INDEX "ix_Chain_rName" ON "Chain" ("rName")
I've heard of sqlalchemy-migrate, but that seems to be built around static changes, and I'm looking to generate schema changes dynamically.
(I'm not interested in defending this design, I'm just looking for a dialect-portable way to add a column to an existing table.)
After tracing engine.create_all() with a debugger I've discovered a possible answer:
>>> engine.dialect.ddl_compiler(
... engine.dialect,
... DBChain.__table__.c.rName ) \
... .get_column_specification(
... DBChain.__table__.c.rName )
'"rName" VARCHAR(40)'
The index can be created with:
sColumnElement = DBChain.__table__.c.rName
if sColumnElement.index:
    # rTableName is assumed to hold the table's name (e.g. DBChain.__table__.name)
    sIndex = sa.schema.Index(
        "ix_%s_%s" % (rTableName, sColumnElement.name),
        sColumnElement,
        unique=sColumnElement.unique)
    sIndex.create(engine)