I'm using ARA in my Ansible project to store playbook output in a database (MySQL). Some tables are not readable, and I would like to know how to convert the data so that I can develop a PHP page to display those values.
Here is my table description:
mysql> desc data;
+-------------+--------------+------+-----+---------+-------+
| Field       | Type         | Null | Key | Default | Extra |
+-------------+--------------+------+-----+---------+-------+
| id          | varchar(36)  | NO   | PRI | NULL    |       |
| playbook_id | varchar(36)  | YES  | MUL | NULL    |       |
| key         | varchar(255) | YES  |     | NULL    |       |
| value       | longblob     | YES  |     | NULL    |       |
| type        | varchar(255) | YES  |     | NULL    |       |
+-------------+--------------+------+-----+---------+-------+
As you can see, the value column is a longblob, so the output is not readable:
mysql> select value from data;
+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| value |
+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| xœmŽË
ƒ0å"¸kMÄG;àÊU«#±5b &!7
ýû&R
¥Åp3Ì$§'fåc’Â!{©” ¸x™ÁQ¢Í"¦ÐíùB©`€‹ãš
b%sopäjTëÌb÷j½9c<×ð_yÑ”»2øaó¢Ipíg;âOºd¬Û€~˜†xÆi~_À¡Ï¿[M“u¼`‘ó*´îáWòìI=N |
| xœmŽË
ƒ0å"¸³&â£f_påªU Ø1““R
¥Åp3Ì$Çæ0
˜ä}–Â!©” 8{™ÃA¢Í#¦Ð©`€«ãšŒb#Ë`päbTçÌjwj»:c<×ð_EÙTY|ŸUÁË6µ_ì„?銱þôÃ4Äã0ÎËûŽCñÝjë˜lšà%‹\Ô¡u
¿’'ìÂ=O
I tried to convert those values to UTF-8, but it gives me NULL:
SELECT CONVERT(value USING utf8) FROM data;
+---------------------------+
| CONVERT(value USING utf8) |
+---------------------------+
| NULL                      |
| NULL                      |
| NULL                      |
| NULL                      |
| NULL                      |
| NULL                      |
| NULL                      |
| NULL                      |
| NULL                      |
| NULL                      |
| NULL                      |
| NULL                      |
| NULL                      |
| NULL                      |
| NULL                      |
| NULL                      |
| NULL                      |
+---------------------------+
17 rows in set, 18 warnings (0,00 sec)
I helped design some of those models :) (although I am no longer an active developer on the project).
Have you considered just using the ARA web interface as the UI for this data? It's generally a bad idea to poke directly at the database like this, because it typically hasn't been designed as a stable API: there's an excellent chance that some future update will break your code, because the assumption is that only ARA is accessing the database.
In any case:
In order to save space, many of the values stored in the database are compressed using Python's zlib.compress method. This is handled by the CompressedData type, which looks like this:
import zlib

from oslo_serialization import jsonutils
from oslo_utils import encodeutils
from sqlalchemy import types


class CompressedData(types.TypeDecorator):
    """
    Implements a new sqlalchemy column type that automatically serializes
    and compresses data when writing it to the database and decompresses
    the data when reading it.

    http://docs.sqlalchemy.org/en/latest/core/custom_types.html
    """
    impl = types.LargeBinary

    def process_bind_param(self, value, dialect):
        return zlib.compress(encodeutils.to_utf8(jsonutils.dumps(value)))

    def process_result_value(self, value, dialect):
        if value is not None:
            return jsonutils.loads(zlib.decompress(value))
        return value

    def copy(self, **kwargs):
        return CompressedData(self.impl.length)
You will need to use the zlib.decompress method -- or the PHP equivalent -- to read those values. I am not a PHP developer, but it looks as if PHP has a zlib module.
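For illustration, here is a minimal sketch in Python of the round trip those values go through, using the standard json and zlib modules instead of ARA's oslo helpers. In PHP, gzuncompress() followed by json_decode() should be the equivalent read path, since zlib.compress produces zlib-format (not gzip-format) data:

```python
import json
import zlib

# What CompressedData does on write: JSON-encode the value, then zlib-compress it.
original = {"changed": 2, "failed": 0}
blob = zlib.compress(json.dumps(original).encode("utf-8"))

# Reading the longblob back: decompress first, then parse the JSON.
decoded = json.loads(zlib.decompress(blob).decode("utf-8"))
print(decoded)  # → {'changed': 2, 'failed': 0}
```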
JPA Order By with Character Set Issue.
I use spring-data-jpa and MySQL 5.7. My database character set is utf8mb4, because I need to store emoji in tables.
Here is my table:
+--------------------+--------------+------+-----+-------------------+-----------------------------+
| Field              | Type         | Null | Key | Default           | Extra                       |
+--------------------+--------------+------+-----+-------------------+-----------------------------+
| id                 | varchar(50)  | NO   | PRI | NULL              |                             |
| name               | varchar(100) | NO   |     | NULL              |                             |
| content            | varchar(100) | NO   |     | NULL              |                             |
| status             | varchar(20)  | NO   | MUL | DEFAULT           |                             |
| type               | varchar(50)  | NO   | MUL | DEFAULT           |                             |
+--------------------+--------------+------+-----+-------------------+-----------------------------+
I want to select items and order them by name in Chinese character order. An item name can contain emoji and Chinese strings.
I can do this if I use native SQL:
select * from item where name like ? order by convert(name using gbk);
Is it possible to use convert(name using gbk) in JPA Specification?
I have created a table as below:
+------------------+--------------+------+-----+------------+----------------+
| Field            | Type         | Null | Key | Default    | Extra          |
+------------------+--------------+------+-----+------------+----------------+
| category         | varchar(20)  | NO   |     | NULL       |                |
| report_id        | int(5)       | NO   | PRI | NULL       | auto_increment |
| name             | varchar(255) | NO   | UNI | NULL       |                |
| URL              | varchar(200) | NO   |     | NULL       |                |
| refresh_type     | varchar(30)  | NO   |     | On Request |                |
| create_dt        | date         | NO   |     | 9999-01-01 |                |
| modified_dt      | date         | NO   |     | 9999-01-01 |                |
| project_type     | varchar(60)  | YES  |     | NULL       |                |
| project_name     | varchar(60)  | YES  |     | NULL       |                |
| project_location | varchar(255) | YES  |     | NULL       |                |
+------------------+--------------+------+-----+------------+----------------+
I have inserted records into this as well. I am trying to automate the process of inserting and maintaining the data, and I am getting only a few columns (category, name and URL) from a data feed.
According to the process, a user can update the records and change the other fields to make them useful, and the next time I insert records I want to perform an upsert based on name. I am performing the process using Python. Here are the steps I tried:
dash= df.loc[:,['folder','name','url','url']].values.tolist()
dash_insert_sql= ("""insert into dashboards (category
,name
,URL
) values (%s,%s,%s)
on duplicate key update URL = values(%s) """)
cur.executemany(dash_insert_sql, dash)
When I try this, I get:
Traceback (most recent call last):
File "C:\Python\Mysql\report_uri.py", line 65, in <module>
cur.executemany(dash_insert_sql, dash)
File "C:\Python\lib\site-packages\MySQLdb\cursors.py", line 228, in executemany
q_prefix = m.group(1) % ()
TypeError: not enough arguments for format string
Here is the example data:
My input is a list of lists.
[['Student Information', 'Active Students not Scheduled', 'https://example.com/SASReportViewer/?reportUri=/reports/reports/af4f7325-860f-4958-ad83-bb900f726b32&page=vi6', 'https://example.com/SASReportViewer/?reportUri=/reports/reports/af4f7325-860f-4958-ad83-bb900f726b32&page=vi6'], ['Student Information', 'Admissions Statistical Comparison from Snapshots', 'https://example.com/SASReportViewer/?reportUri=/reports/reports/6150909f-3ab4-4ec7-8ef0-7efdb1f09300&page=vi6', 'https://example.com/SASReportViewer/?reportUri=/reports/reports/6150909f-3ab4-4ec7-8ef0-7efdb1f09300&page=vi6']]
Please let me know how to proceed or where I am going wrong. Thank you.
The problem is the extra %s placeholder. MySQLdb's executemany() locates the VALUES (...) placeholder list with a regular expression, and the values(%s) in the ON DUPLICATE KEY UPDATE clause confuses that match; the leftover %s placeholders are then formatted with an empty argument tuple (the q_prefix = m.group(1) % () line in your traceback), which raises the TypeError. VALUES(URL) has to reference the column name instead, and the duplicated url column can be dropped from the list:
dash = df.loc[:, ['folder', 'name', 'url']].values.tolist()
dash_insert_sql = ("""insert into dashboards (category
                     ,name
                     ,URL
                     ) values (%s,%s,%s)
                     on duplicate key update URL = values(URL) """)
cur.executemany(dash_insert_sql, dash)
I tested it by changing the URL of a report to that of another one: it updated the URL and no new rows were added.
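The failure mechanism can be reproduced without a database. Anything MySQLdb does not recognize as part of the VALUES (...) list gets %-formatted with an empty tuple, so a stray %s outside it fails; a simplified sketch:

```python
# A %s left outside the recognized VALUES (...) list ends up being
# %-formatted with an empty argument tuple, which raises the same TypeError
# seen in the traceback.
suffix = "on duplicate key update URL = values(%s)"
try:
    suffix % ()
except TypeError as exc:
    print(exc)  # → not enough arguments for format string
```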
I'm still learning MySQL. While working on a new project that requires multi-language content, I have stumbled upon the question of the most practical way to design a database that supports this functionality while staying efficient.
Table content_quote:
+--------------+-----------------------+------+-----+---------------------+-----------------------------+
| Field        | Type                  | Null | Key | Default             | Extra                       |
+--------------+-----------------------+------+-----+---------------------+-----------------------------+
| quote_id     | int(11) unsigned      | NO   | PRI | NULL                | auto_increment              |
| url_slug     | varchar(255)          | NO   |     | NULL                |                             |
| author_id    | mediumint(8) unsigned | NO   |     | NULL                |                             |
| quote        | mediumtext            | NO   |     | NULL                |                             |
| category     | varchar(15)           | NO   |     | NULL                |                             |
| likes        | int(11) unsigned      | NO   |     | 0                   |                             |
| publish_time | datetime              | NO   |     | 0000-00-00 00:00:00 | on update CURRENT_TIMESTAMP |
| locale       | char(5)               | NO   |     | NULL                |                             |
+--------------+-----------------------+------+-----+---------------------+-----------------------------+
Now, here I can just put a standard locale value like en-US in the locale field, but I have quite a few tables like that and I'm not sure which is the correct path: either leave it like that, OR create a locale table to store all the locales and change the current locale field to a tinyint(2) foreign key pointing to the new table.
Example:
+-----------+---------------------+------+-----+---------+----------------+
| Field     | Type                | Null | Key | Default | Extra          |
+-----------+---------------------+------+-----+---------+----------------+
| locale_id | tinyint(2) unsigned | NO   | PRI | NULL    | auto_increment |
| locale    | char(50)            | NO   |     | NULL    |                |
+-----------+---------------------+------+-----+---------+----------------+
More than the answer itself, I'm interested to know what are the advantages/disadvantages of both approaches.
Advantages and disadvantages of a separate locales table (they are reversed when no locales table is used):
Advantages
You keep a list of available locales even when some of them are not used yet, which lets you build a dropdown of available locales in a form.
You prevent typos, since there will be only one en_US value available.
Disadvantages
You have to JOIN on the new table every time just to get a string like en_US.
Keep in mind that space will not be an issue here; don't make the decision based on 5 chars vs. a tinyint.
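As a small illustration of the trade-off, here is a sketch of the lookup-table approach using Python's sqlite3 module (sqlite and the simplified columns are used here only for illustration; the idea is the same in MySQL):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE locales (
        locale_id INTEGER PRIMARY KEY,
        locale    TEXT NOT NULL UNIQUE   -- e.g. 'en_US'; UNIQUE prevents typo duplicates
    );
    CREATE TABLE content_quote (
        quote_id  INTEGER PRIMARY KEY,
        quote     TEXT NOT NULL,
        locale_id INTEGER NOT NULL REFERENCES locales (locale_id)
    );
""")
conn.execute("INSERT INTO locales (locale) VALUES ('en_US')")
conn.execute("INSERT INTO content_quote (quote, locale_id) VALUES ('hello', 1)")

# The cost of normalizing: every read needs a JOIN just to get the string back.
row = conn.execute("""
    SELECT q.quote, l.locale
    FROM content_quote AS q
    JOIN locales AS l ON l.locale_id = q.locale_id
""").fetchone()
print(row)  # → ('hello', 'en_US')
```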
I am getting this error
javax.servlet.ServletException: com.mysql.jdbc.NotUpdatable: Result
Set not updatable.
I know this error is usually related to a missing primary key, but I always create my tables with a primary key, so this table has one too. I am posting part of my code:
Statement st = con.createStatement(ResultSet.TYPE_SCROLL_INSENSITIVE, ResultSet.CONCUR_UPDATABLE);
ResultSet rs = st.executeQuery("Select * from test3 order by rand() limit 5");
List arrlist = new ArrayList();
while (rs.next()) {
    String xa = rs.getString("display");
    if (xa.equals("1")) {
        arrlist.add(rs.getString("question_text"));
    }
    rs.updateString("display", "0");
    rs.updateRow();
}
Please tell me if something is going wrong in this code.
This is my database
+----------------+---------------+------+-----+---------+----------------+
| Field          | Type          | Null | Key | Default | Extra          |
+----------------+---------------+------+-----+---------+----------------+
| id             | int(11)       | NO   | PRI | NULL    | auto_increment |
| index_question | varchar(45)   | YES  |     | NULL    |                |
| question_no    | varchar(10)   | YES  |     | NULL    |                |
| question_text  | varchar(1000) | YES  |     | NULL    |                |
| file_name      | varchar(128)  | YES  |     | NULL    |                |
| attachment     | mediumblob    | YES  |     | NULL    |                |
| display        | varchar(10)   | YES  |     | NULL    |                |
+----------------+---------------+------+-----+---------+----------------+
You have to update the row immediately after you have fetched it (SELECT ... FOR UPDATE and rs.updateRow()),
OR
you have to write an UPDATE tablename SET ... WHERE ... statement to update a row at any time.
Also, the query behind an updatable result set cannot use functions: try removing rand() from the SQL query string.
See the JDBC 2.1 API Specification, section 5.6, for more details.
I am designing a DB for a possible PHP MySQL project I may be undertaking.
I am a complete novice at relational DB design, and have only worked with single table DB's before.
This is a diagram of the tables:
So, 'Cars' contains each model of car, and the other 3 tables contains parts that the car can be fitted with.
So each car can have different parts from each of the three tables, and each part can be fitted to different cars from the parts table. In reality, there will be about 10 of these parts tables.
So, what would be the best way to link these together? Do I need another table in the middle, etc.? And what would I need to do with the keys in terms of linking?
There is some inheritance in your parts. The common attributes seem to be:
part_number
price
and there are some specifics for your part types: exhaust, software and intake.
There are two strategies:
- have three tables and one view over the three tables
- have one table with a parttype column and maybe three views on that table
If you'd like to play with your design, you might want to look at my company's website http://www.uml2php.com. UML2PHP will automatically convert your UML design to a database design and let you "play" with the result.
At:
http://service.bitplan.com/uml2phpexamples/carparts/
you'll find an example application along the lines of your design. The menu does not allow you to access all tables yet.
via:
http://service.bitplan.com/uml2phpexamples/carparts/index.php?function=dbCheck
the table definitions are accessible:
mysql> describe CP01_car;
+-------------+---------------+------+-----+---------+-------+
| Field       | Type          | Null | Key | Default | Extra |
+-------------+---------------+------+-----+---------+-------+
| oid         | varchar(32)   | NO   |     | NULL    |       |
| car_id      | varchar(255)  | NO   | PRI | NULL    |       |
| model       | varchar(255)  | YES  |     | NULL    |       |
| description | text          | YES  |     | NULL    |       |
| model_year  | decimal(10,0) | YES  |     | NULL    |       |
+-------------+---------------+------+-----+---------+-------+
mysql> describe CP01_part;
+-------------+--------------+------+-----+---------+-------+
| Field       | Type         | Null | Key | Default | Extra |
+-------------+--------------+------+-----+---------+-------+
| oid         | varchar(32)  | NO   |     | NULL    |       |
| part_number | varchar(255) | NO   | PRI | NULL    |       |
| price       | varchar(255) | YES  |     | NULL    |       |
| car_car_id  | varchar(255) | YES  |     | NULL    |       |
+-------------+--------------+------+-----+---------+-------+
mysql> describe cp01_exhaust;
+-------------+--------------+------+-----+---------+-------+
| Field       | Type         | Null | Key | Default | Extra |
+-------------+--------------+------+-----+---------+-------+
| oid         | varchar(32)  | NO   |     | NULL    |       |
| type        | varchar(255) | YES  |     | NULL    |       |
| part_number | varchar(255) | NO   | PRI | NULL    |       |
| price       | varchar(255) | YES  |     | NULL    |       |
+-------------+--------------+------+-----+---------+-------+
mysql> describe CP01_intake;
+-------------+--------------+------+-----+---------+-------+
| Field       | Type         | Null | Key | Default | Extra |
+-------------+--------------+------+-----+---------+-------+
| oid         | varchar(32)  | NO   |     | NULL    |       |
| part_number | varchar(255) | NO   | PRI | NULL    |       |
| price       | varchar(255) | YES  |     | NULL    |       |
+-------------+--------------+------+-----+---------+-------+
mysql> describe CP01_software;
+-------------+---------------+------+-----+---------+-------+
| Field       | Type          | Null | Key | Default | Extra |
+-------------+---------------+------+-----+---------+-------+
| oid         | varchar(32)   | NO   |     | NULL    |       |
| power_gain  | decimal(10,0) | YES  |     | NULL    |       |
| part_number | varchar(255)  | NO   | PRI | NULL    |       |
| price       | varchar(255)  | YES  |     | NULL    |       |
+-------------+---------------+------+-----+---------+-------+
The above tables have been generated from the UML model, and the result does not fit your needs yet, especially if you think of having 10 or more tables like this. The field car_car_id that links your parts to the car table should be available in all the part tables. And according to the design proposal, the base "table" for the parts should be a view like this:
mysql> create view partview as
       select oid, part_number, price from CP01_software
       union select oid, part_number, price from CP01_exhaust
       union select oid, part_number, price from CP01_intake;
Of course, the car_car_id column would also need to be selected.
Now you can edit every table by itself and the partview will show all parts together.
To be able to distinguish the part types, you might want to add another column, "part_type".
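That combined view can be sketched with Python's sqlite3 module (illustrative names and data; sqlite stands in for MySQL, and the extra part_type column is filled in with a literal per source table):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE CP01_exhaust  (oid TEXT, part_number TEXT PRIMARY KEY, price TEXT);
    CREATE TABLE CP01_intake   (oid TEXT, part_number TEXT PRIMARY KEY, price TEXT);
    CREATE TABLE CP01_software (oid TEXT, part_number TEXT PRIMARY KEY, price TEXT);
    -- a literal column tags each row with its source table,
    -- so the part types stay distinguishable in the combined view
    CREATE VIEW partview AS
        SELECT oid, part_number, price, 'software' AS part_type FROM CP01_software
        UNION SELECT oid, part_number, price, 'exhaust' FROM CP01_exhaust
        UNION SELECT oid, part_number, price, 'intake'  FROM CP01_intake;
""")
conn.execute("INSERT INTO CP01_exhaust VALUES ('a1', 'EX-100', '199')")
conn.execute("INSERT INTO CP01_intake  VALUES ('a2', 'IN-200', '99')")

rows = conn.execute(
    "SELECT part_number, part_type FROM partview ORDER BY part_number"
).fetchall()
print(rows)  # → [('EX-100', 'exhaust'), ('IN-200', 'intake')]
```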
I would do it like this. Instead of having three different tables for car parts:
- table cars
- table parts (this would have only an id, a part number and maybe a type)
- table part_connections (connecting cars with parts)
- table part_options (with all the options which aren't in the parts table, like "power gain")
- table part_option_connections (which connects the parts to the various part options)
This way it is much easier to add new parts (because you won't need a new table), and it's closer to being normalized as well.
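A minimal sketch of this normalized layout using Python's sqlite3 module (sqlite and all names and data are illustrative, following the list above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE cars  (car_id  INTEGER PRIMARY KEY, model TEXT NOT NULL);
    CREATE TABLE parts (part_id INTEGER PRIMARY KEY, part_number TEXT NOT NULL, type TEXT);
    -- junction table: one row per (car, part) fitment, making the relation many-to-many
    CREATE TABLE part_connections (
        car_id  INTEGER NOT NULL REFERENCES cars (car_id),
        part_id INTEGER NOT NULL REFERENCES parts (part_id),
        PRIMARY KEY (car_id, part_id)
    );
""")
conn.execute("INSERT INTO cars (model) VALUES ('Golf GTI')")
conn.execute("INSERT INTO parts (part_number, type) VALUES ('EXH-1', 'exhaust')")
conn.execute("INSERT INTO parts (part_number, type) VALUES ('INT-1', 'intake')")
conn.executemany("INSERT INTO part_connections VALUES (?, ?)", [(1, 1), (1, 2)])

# All parts fitted to a given car: join through the junction table.
rows = conn.execute("""
    SELECT p.part_number, p.type
    FROM parts AS p
    JOIN part_connections AS pc ON pc.part_id = p.part_id
    JOIN cars AS c ON c.car_id = pc.car_id
    WHERE c.model = 'Golf GTI'
    ORDER BY p.part_number
""").fetchall()
print(rows)  # → [('EXH-1', 'exhaust'), ('INT-1', 'intake')]
```

Adding a new part type then only means inserting rows, never creating a new table.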