I have a MySQL DB with multiple tables and views on those tables. A view limits what can be seen to a single customer's data (create view ... where customer_id = X). The Catalyst app will be talking to these views, not to the actual tables. The only difference between the view's columns and the underlying tables' ones is that the view lacks the customer_id column (i.e. to the application it seems like the current customer is the only one in the system).
The problem is, I cannot use the DBIC Schema Loader on the views, as they lack all the relations and keys, so I have to load the schema from the base tables and then use it against the views. But then I am stuck with the customer_id column, and I need to get rid of it, because it is not present in the view the application will be talking to.
I ended up using the filter_generated_code option to strip the unneeded bits away from the generated code, but then I get the following error during generation:
DBIx::Class::Schema::Loader::make_schema_at(): No such column customer_id
at /opt/merp/perl/lib/perl5/Catalyst/Helper/Model/DBIC/Schema.pm line 635
How can I have the loader skip certain columns at load time?
I'm not sure how you can get the loader to skip columns at load time, but you can remove them after load. For example, you can add something like this to any class which needs a column removed:
__PACKAGE__->remove_column('customer_id');
I'm not sure if there is an option for that in DBIx::Class::Schema::Loader; the docs will tell you. If there isn't, just generate the schema and then remove the column definition.
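With a static dump, the remove_column call can live in the editable tail of each generated Result class, so it survives regeneration. A minimal sketch (file and class names are illustrative):
# In lib/Proj/Schema/Result/Order.pm, in the editable section below the
# "DO NOT MODIFY THIS OR ANYTHING ABOVE!" checksum line, just before
# the final "1;" -- code placed here is preserved on regeneration.
__PACKAGE__->remove_column('customer_id');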
But besides that you seem to be missing a major feature of DBIC: ResultSet chaining.
If you're using e.g. Catalyst, you'd have an action that filters your ResultSet on the stash based on the customer id, and all chained sub-actions would only ever see the allowed rows.
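A minimal sketch of that pattern, assuming a Catalyst controller and a DBIC result class named DB::Order with a customer_id column (all names illustrative):
sub base : Chained('/') PathPart('orders') CaptureArgs(0) {
    my ($self, $c) = @_;
    # Restrict everything downstream to the current customer's rows.
    $c->stash->{orders_rs} = $c->model('DB::Order')->search(
        { customer_id => $c->user->customer_id }
    );
}

sub list : Chained('base') PathPart('list') Args(0) {
    my ($self, $c) = @_;
    # Chains onto the already-filtered ResultSet, so it can only
    # ever see the allowed rows.
    $c->stash->{open_orders} = $c->stash->{orders_rs}->search(
        { status => 'open' }
    );
}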
I ended up just leaving the column as it is, so it is visible to the application code. The DB views and triggers ensure the application can only insert and select the currently set customer id. The only trick I employed was using filter_generated_code to replace the underlying table name with the view name (just stripping a leading underscore). This way I now have a script that does a SHOW TABLES, filters out the views, and dumps the structure into the DBIC classes, replacing each table name with the view name. It looks somewhat like this:
exclude=`mysql -u user -ppassword -D db --execute='show tables' \
--silent --skip-column-names | egrep "^_" | sed "s/^_//g" | \
sed ':a;N;$!ba;s/\n/|/g'`
perl script/proj_create.pl model DB DBIC::Schema Proj::Schema \
create=static components=TimeStamp filter_generated_code=\
'sub { my ($type,$class,$text) = @_; $text =~ s/([<"])_/$1/g; return $text; } ' \
exclude="^($exclude)$" dbi:mysql:db 'user' 'password' quote_names=1 '{AutoCommit => 1}'
I have transferred my project from MySQL to PostgreSQL and tried to drop a column as a result of a previous issue (Integer error transferring from MySQL to PostgreSQL); even after I removed the problematic column from models.py and saved, the error didn't disappear.
Tried both with and without quotes.
ALTER TABLE "UserProfile" DROP COLUMN how_many_new_notifications;
Or:
ALTER TABLE UserProfile DROP COLUMN how_many_new_notifications;
Getting the following:
ERROR: relation "UserProfile" does not exist
Here's the model, if it helps:
class UserProfile(models.Model):
    user = models.OneToOneField(User)
    how_many_new_notifications = models.IntegerField(null=True, default=0)

User.profile = property(lambda u: UserProfile.objects.get_or_create(user=u)[0])
I suppose it might have something to do with mixed case, but I have found no solution in any of the similar questions.
Yes, PostgreSQL is a case-aware database, but Django is smart enough to know that. It converts all field names, and generally the model name, to a lower-case table name. However, the real problem here is that your model name will be prefixed by the app name. Django table names generally look like:
<appname>_<modelname>
You can find out what exactly it is by:
from myapp.models import UserProfile
print(UserProfile._meta.db_table)
Obviously this needs to be typed into the Django shell, which is invoked by ./manage.py shell. The result of this print statement is what you should use in your query.
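For example, if the app were named accounts (a hypothetical name), the print would show accounts_userprofile, and the working statement would be:
ALTER TABLE accounts_userprofile DROP COLUMN how_many_new_notifications;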
Client: DataGrip
Database engine: PostgreSQL
For me, opening a new console worked; apparently, because of the IDE's cache, it was not recognizing the table I had created.
Steps to operate with the tables of a database:
Database (left side panel of the IDE) >
Double-click on PostgreSQL - @localhost >
Double-click on the name of the database >
Right-click on the public schema >
New > Console
Good luck!
Are there tutorials online that teach you how to create tables and input values in InfluxDB? How would you create a table and insert values into it?
InfluxDB doesn't really have the concept of a table. Data is structured into series, which are composed of measurements, tags, and fields.
Measurements are like buckets.
Tags are indexed values.
Fields are the actual data.
Data is written into InfluxDB via line protocol. The structure of line protocol is as follows:
<measurement>,<tag>[,<tags>] <field>[,<field>] <timestamp>
An example of a point in line protocol:
weather,location=us-midwest temperature=82 1465839830100400200
To insert data into the database you'll need to issue an HTTP POST request to the /write endpoint, specifying the db query parameter.
For example:
curl -XPOST http://localhost:8086/write?db=mydb --data-binary "weather,location=us-midwest temperature=82 1465839830100400200"
For more information see the Getting Started section of the InfluxDB docs.
I just want to quote the moderator of the influxdata community here:
You can think of
measurements as tables in SQL,
tags as indexed columns,
and fields as unindexed columns
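To make that analogy concrete, here is the line-protocol point from the earlier answer expressed against a hypothetical SQL table (illustrative only; InfluxDB does not actually accept SQL INSERTs):
INSERT INTO weather (location, temperature, time) VALUES ('us-midwest', 82, 1465839830100400200);
Here location plays the role of an indexed column (a tag), temperature an unindexed column (a field), and time the timestamp.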
Also, there is no "create table" statement; a measurement is created the first time you insert into it. The web call was shown above. If you have the "influx" command line interpreter, you can do:
export INFLUX_PASSWORD="BlahBlahBlah"
influx -host <hostname> -username <username> -database <database>
insert my_influx_test_measurement,index1=aaa value1="bbb"
Note that "insert" is only a command line (aka "influx") thing. Doesn't work with http calls.
It's to bad they named the command line interpreter "influx". Now when anyone refers to "influx", it's not clear if it's the database or the CLI.
I am trying to query my MySQL database. I am using the database cookbook and can set up a connection with my database. I am trying to query my database for information, so now the question is how do I store that information so I can access it in another resource? Where do the results of the query get stored? This is my recipe:
mysql_database "Get admin users" do
connection mysql_connection_info
sql "Select * from #{table_name}"
action :query
end
Thanks in advance
If you don't have experience with Ruby, this might be really confusing. There's no way to "return" the result of a provider from a Chef resource. The mysql_database is a Chef::Recipe DSL method that gets translated to Chef::Provider::Database::Mysql at runtime. This provider is defined in the cookbook.
If you take some time to dive into that provider, you can see how it executes queries using the db object. In order to get the results of a query, you'll need to create your own connection object in the recipe and execute a command against it. For example:
require 'mysql'
db = ::Mysql.new('host', 'username', 'password', nil, 'port', 'socket') # varies with setup
users = db.query('SELECT * FROM users')
#
# You might need to manipulate the result into a more manageable data
# structure by splitting on a carriage return, etc...
#
# Assume the new object is an Array where each entry is a username.
#
file '/etc/group' do
  content users.join("\n")
end
I find good old Chef::Mixin::ShellOut / shell_out() fairly sufficient for this job, and it's DB-agnostic (assuming you know your SQL :) ). It works particularly well if all you are querying is one value; for multiple rows/columns you will need to parse the SQL query results, hiding row counts and column headers and eating preceding whitespace to get just the query results you want. For example, the below works on SQL Server:
Single item
so = shell_out!("sqlcmd ... -Q \"set nocount on; select file_name(1)\" -h-1 -W")
db_logical_name = so.stdout.chop
Multiple rows/columns (0-based position of a value within a row tells you what this column is)
so = shell_out!("sqlcmd ... -Q \"set nocount on; select * from my_table\" -h-1 -W")
rows_column_data = so.stdout.chop
# columns within rows are space separated, so can be easily parsed
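A rough Ruby parse of that multi-row output might look like this (illustrative; assumes column values contain no embedded spaces):
rows = rows_column_data.split("\n").map { |line| line.strip.split(/\s+/) }
first_value = rows[0][1] # 0-based: column 1 of row 0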
I'd like to dump my databases to a file.
Certain website hosts don't allow remote or command line access, so I have to do this using a series of queries.
All of the related questions say "use mysqldump" which is a great tool but I don't have command line access to this database.
I'd like the CREATE and INSERT commands to be created at the same time; basically, the same output as mysqldump. Is SELECT INTO OUTFILE the right road to travel, or is there something else I'm overlooking? Or maybe it's not possible?
Use mysqldump-php, a pure-PHP solution that replicates the function of the mysqldump executable for basic to medium-complexity use cases. I understand you may not have remote CLI and/or direct MySQL access, but as long as you can execute via an HTTP request on a httpd on the host, this will work:
You should be able to run the following pure-PHP script straight from a secure directory under /www/, have the output file written there, and grab it with wget.
mysqldump-php - Pure PHP mysqldump on GitHub
PHP example:
<?php
require('database_connection.php');
require('mysql-dump.php');

$dumpSettings = array(
    'include-tables' => array('table1', 'table2'),
    'exclude-tables' => array('table3', 'table4'),
    'compress' => CompressMethod::GZIP, /* CompressMethod::[GZIP, BZIP2, NONE] */
    'no-data' => false,
    'add-drop-table' => false,
    'single-transaction' => true,
    'lock-tables' => false,
    'add-locks' => true,
    'extended-insert' => true
);

$dump = new MySQLDump('database', 'database_user', 'database_pass', 'localhost', $dumpSettings);
$dump->start('forum_dump.sql.gz');
?>
With your hands tied by your host, you may have to take a rather extreme approach. Using any scripting option your host provides, you can achieve this with only a little difficulty. You can create a secure web page or straight-text dump link, known only to you and sufficiently secured to prevent all unauthorized access. The script that builds the page/text contents could follow these steps:
For each database you want to back up:
Step 1: Run SHOW TABLES.
Step 2: For each table name returned by the above query, run SHOW CREATE TABLE to get the create statement that you could run on another server to recreate the table, and output the results to the web page. You may have to prepend "DROP TABLE IF EXISTS x;" before each create statement generated by the results of these queries (not in your query input!).
Step 3: For each table name returned from step 1 again, run a SELECT * query and capture the full results. You will need to apply a bulk transformation to this query result before outputting it, converting each row into an INSERT INTO tblX statement, and output the final transformed results to the web page/text file download.
The final web page/text download would contain all create statements with "DROP TABLE IF EXISTS" safeguards, plus the insert statements; a minimal sketch of these steps follows below. Save the output to your own machine as a ".sql" file, and execute it on any backup host as needed.
I'm sorry you have to go through this. Note that preserving the MySQL user accounts you need is something else entirely.
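Here is a minimal PHP sketch of those steps (a sketch only: it assumes the mysqli extension, illustrative credentials, and that backtick-quoted identifiers are safe for your table names):
<?php
// Illustrative credentials -- replace with your own.
$db = new mysqli('localhost', 'user', 'password', 'database');

$tables = $db->query('SHOW TABLES');
while ($row = $tables->fetch_row()) {
    $table = $row[0];

    // Step 2: emit a guarded CREATE TABLE statement.
    $create = $db->query("SHOW CREATE TABLE `$table`")->fetch_row();
    echo "DROP TABLE IF EXISTS `$table`;\n" . $create[1] . ";\n\n";

    // Step 3: turn each row of data into an INSERT statement.
    $result = $db->query("SELECT * FROM `$table`");
    while ($data = $result->fetch_row()) {
        $values = array_map(function ($v) use ($db) {
            return $v === null ? 'NULL' : "'" . $db->real_escape_string($v) . "'";
        }, $data);
        echo "INSERT INTO `$table` VALUES (" . implode(', ', $values) . ");\n";
    }
    echo "\n";
}
?>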
Use / install phpMyAdmin on your web server and click export. Many web hosts already offer this as a pre-configured service, and it's easy to install (pure PHP) if you don't already have it: http://www.phpmyadmin.net/
This lets you export your database(s), and perform other otherwise tedious database operations, very quickly and easily; it also works for older versions of PHP < 5.3 (unlike the mysqldump-php offered as another answer here).
I am aware that the question says "using query", but I believe the point here is that any means necessary is sought when shell access is not available; that is how I landed on this page, and phpMyAdmin saved me!
I have a MySQL query that looks like this: $query = "SELECT * FROM #__content".
What does the #__ at the start of the table name mean?
#__ is a prefix of your tables
#__ is simply the database table prefix and is defined in your configuration.php.
If it weren't defined, people would have to manually input their prefix into every extension that requires access to the database, which you can imagine would be annoying.
So for example, if your database table prefix is j25, then:
#__content = j25_content
As others have said, the hash-underscore sequence '#_' is the prefix placeholder used for table names by Joomla!'s JDatabase class. (N.B. there is only one underscore in the placeholder itself; the second underscore is maintained for readability in table names.)
When you first setup Joomla! you are given the option of setting a prefix or using the one randomly generated at the time. You can read about how to check the prefix here.
When you access the database using the JDatabase class it provides you with an abstraction mechanism so that you can interact with the database that Joomla is using without you having to code specifically for MySQL or MSSQL or PostgreSQL etc.
When JDatabase prepares a query prior to executing it, it replaces any occurrence of #_ in the FROM segment of the query with the prefix set up when Joomla! was installed, e.g.:
// Get the global DB object.
$db = JFactory::getDBO();
// Create a new query object.
$query = $db->getQuery(true);
// Select some fields
$query->select('*');
// Set the FROM segment
$query->from('#__myComponents_Table');
Later, when you execute the query, JDatabase will change the FROM segment of the SQL from
from #__myComponents_Table
to
from jp25_myComponents_Table
(if the prefix is jp25) prior to executing it.