/Customers?$skip=30&$top=10
Is there a reason why you need both '?' (or '&') and '$' to identify a query parameter?
Is this a case of the implementation leaking into the interface? I don't necessarily want to expose to users the fact that I'm using ADO.NET Data Services, especially if at a later date I want to change the implementation to another technology...
Or, is there an easy way to disable the need for the '$' to identify a query option?
So it looks much more presentable:
/Customers?skip=30&top=10
Thanks
Query string options that start with the $ character are known as System Query Options and denote actions supported by ADO.NET Data Services. Basically, this is done to distinguish system-wide "keywords" from model property names.
To solve this issue, you may try rewriting your URLs from /Customers?skip=30&top=10 to /Customers?$skip=30&$top=10 on the server, or even transferring this system information in HTTP headers (if that is an option).
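For example, if the service is hosted under ASP.NET, a small IHttpModule could perform that rewrite before the request reaches Data Services. This is only a sketch under my own assumptions (the module name and the option list are made up):

    using System;
    using System.Linq;
    using System.Web;

    // Sketch only: rewrites friendly options like ?skip=30&top=10 into the
    // $-prefixed System Query Options before Data Services sees the request.
    public class QueryOptionRewriteModule : IHttpModule
    {
        private static readonly string[] SystemOptions =
            { "skip", "top", "filter", "orderby", "expand", "select" };

        public void Init(HttpApplication app)
        {
            app.BeginRequest += (sender, e) =>
            {
                var ctx = ((HttpApplication)sender).Context;
                string query = ctx.Request.Url.Query.TrimStart('?');
                if (query.Length == 0) return;

                // Prefix each known keyword with '$'; leave other pairs alone.
                string rewritten = string.Join("&", query.Split('&')
                    .Select(pair => SystemOptions.Contains(pair.Split('=')[0])
                        ? "$" + pair
                        : pair)
                    .ToArray());

                ctx.RewritePath(ctx.Request.Path, null, rewritten);
            };
        }

        public void Dispose() { }
    }

The module would still have to be registered in web.config, and the matching above is deliberately naive (it ignores casing and URL encoding), but it keeps the public URLs free of implementation details.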
I'm new to MySQL, and am wondering: is it possible to make a table check attempted inserts against some sort of pattern match and reject any inserts that fail to match the pattern, or must all this checking be done on the PHP (or whatever-server-side-language) side?
I'm thinking specifically about confining an email column in a user table to only be able to contain email addresses using some sort of regex-like pattern matching.
Just because you can do something with a DBMS stored procedure doesn't make it a good idea. I strongly suggest you do this kind of validation in PHP rather than in the DBMS.
PHP's install kit contains a validator for email. See here: http://php.net/manual/en/filter.filters.validate.php
If you do this in your DBMS you have to reinvent the flat tire, er, wheel.
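For instance, a minimal sketch of that built-in validator (available since PHP 5.2):

    <?php
    $email = 'user@example.com';

    // filter_var() returns the filtered value on success and false on failure.
    if (filter_var($email, FILTER_VALIDATE_EMAIL) !== false) {
        // Safe to hand to the INSERT query.
        echo "valid\n";
    } else {
        echo "invalid\n";
    }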
Although an answer was already accepted, I believe there are use cases for doing validation on the db side.
It's an extra layer of security. While there is such a thing as overkill, I think the case could be made that no matter how well you validate on the application side and use parameter binding with PDO, there is always a possibility someone figures out a way to get around it.
If the same data might be used by more than one application, it enforces rules in case a developer for a different app fails to validate correctly.
Here is an example of validation using triggers in MySQL: http://cvuorinen.net/2013/05/validating-data-with-triggers-in-mysql/
And here you can see how you might replace the code in those IF statements with a regex: "regular expressions inside a SQL IF statement".
I haven't tried it, so I don't know if it works, but there is a link to the MySQL docs on regex:
http://dev.mysql.com/doc/refman/5.1/en/regexp.html
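To give a flavour of the trigger approach, here is a minimal sketch (the table and column names are made up, and SIGNAL requires MySQL 5.5+; on older versions you need the workarounds from the article above):

    -- Reject rows whose email column doesn't look like an address.
    DELIMITER //
    CREATE TRIGGER users_email_check BEFORE INSERT ON users
    FOR EACH ROW
    BEGIN
        IF NEW.email NOT REGEXP '^[^@]+@[^@]+\\.[A-Za-z]{2,}$' THEN
            SIGNAL SQLSTATE '45000'
                SET MESSAGE_TEXT = 'Invalid email address';
        END IF;
    END//
    DELIMITER ;

A matching BEFORE UPDATE trigger would be needed to cover updates as well.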
And of course you should always validate on the application side too.
I work on a large Perl website that currently stores all its configuration in a Perl module. I need to move these settings into MySQL. The problem is that the settings are defined in lots of variables, and most of them are complex structures (like hashes of hashes and arrays of hashes).
My first idea was to use XML, YAML, or the Storable Perl module to easily write and read the variables from a simple file, but my boss doesn't want any of these solutions. He wants the settings stored in MySQL, so other solutions are not an option.
My question is: does anybody know of any CPAN module(s) that will help me with that task? What I basically need is a way to map all the complex Perl structures I have to MySQL tables.
Any suggestion would be really appreciated. Thanks!
Option 1: Store the data in serialized form (Data::Dumper, Storable, JSON, etc.) in a MySQL TEXT/MEDIUMTEXT/LONGTEXT field (65KB/16MB/4GB max sizes, respectively).
Option 2: Use the DBIx::Class ORM (object-relational mapping), which is the way to automatically map Perl data to DB tables (similar to Java's Hibernate). As far as I'm aware you'll need to convert your data structures to objects, though there may be a DBIx::Class extension that can deal with non-blessed data structures.
Frankly, unless you need to manipulate the config data in detail within MySQL piece by piece, option #1 is dramatically simpler. However, if your boss's goal is to be able to either query details of configuration, or manipulate its individual elements one by one, you will have to go with #2.
Why don't you want to use Storable qw(freeze thaw) with MySQL?
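For instance, a minimal sketch along those lines, assuming a hypothetical config table with a name key and a MEDIUMBLOB value column (Storable's output is binary, so a BLOB type is safer than TEXT):

    #!/usr/bin/perl
    use strict;
    use warnings;
    use DBI;
    use Storable qw(freeze thaw);

    my $dbh = DBI->connect('dbi:mysql:database=myapp', 'user', 'pass',
                           { RaiseError => 1 });

    my %settings = ( hosts => [ 'a', 'b' ], limits => { max => 10 } );

    # Serialize the whole structure into a single row.
    $dbh->do('REPLACE INTO config (name, value) VALUES (?, ?)',
             undef, 'site_settings', freeze(\%settings));

    # Read it back and restore the original structure.
    my ($blob) = $dbh->selectrow_array(
        'SELECT value FROM config WHERE name = ?', undef, 'site_settings');
    my $restored = thaw($blob);
    print $restored->{limits}{max}, "\n";   # prints 10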
This is probably a noob question, so I apologize in advance.
The HBase console, as far as I understand, is an extension of (or a script running over) jirb. It also comes with several HBase-specific commands, one of which is 'get', for retrieving columns/values from a table.
However, it seems like 'get' only writes to the screen and doesn't return values at all.
Is there any native HBase console command that will allow me to retrieve values (e.g. a set of rows/columns), put them into a variable, and work with them?
Thanks
No, there is not a native console command in 0.92. If you dig into the source code, there is a class Hbase::Table that could be used to do what you want. I believe this is going to be more exposed in 0.96. At this point, I have resorted to adding my own Ruby to my shell to handle a variety of common tasks (like using SingleColumnValueFilters on scans).
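As a stopgap, because the shell is just JRuby, you can also drive the regular Java client API directly and capture the result in a variable. A sketch (table, row, and column names are made up):

    # Run inside the HBase shell, which is JRuby, so Java classes are available.
    import org.apache.hadoop.hbase.HBaseConfiguration
    import org.apache.hadoop.hbase.client.HTable
    import org.apache.hadoop.hbase.client.Get
    import org.apache.hadoop.hbase.util.Bytes

    table = HTable.new(HBaseConfiguration.create, 'mytable')
    result = table.get(Get.new(Bytes.toBytes('row1')))

    # The cell value is now in an ordinary Ruby variable.
    value = Bytes.toString(result.getValue(Bytes.toBytes('cf'), Bytes.toBytes('col')))
    puts value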
I'm currently in the planning phase of a rather large project that I'll develop in the Zend Framework. One of the problems I'm facing is that the customers will want to translate not only the content but also the interface. I'm currently using gettext and Poedit to manage my language files, but this is not an option for the customers as, for one, they won't have FTP access to the site.
Hence, I'm thinking of a MySQL back end with an interface on the front end for the customer to manage his own translations of the interface. There is, however, still no MySQL adapter for Zend_Translate.
So, does anybody know of an adapter script for Zend_Translate so it can work with a MySQL table? Or any arguments against using MySQL, and possible other solutions for this problem?
You could solve this problem in different ways:
Extend Zend_Translate_Adapter to create your own. New adapters are only responsible for getting the translations out of their source; that is, you would only need to fetch the translations from the database. Look at the other adapters to see how they are implemented (a minimal skeleton is sketched after this list).
Fetch the data from the database and pass it to Zend_Translate_Adapter_Array
Use Zend_Translate_Adapter_Csv or Ini. As there would be more reading than writing of the translations, this solution would cut down the number of queries to the database. When the client adds a new language or changes an existing one, simply write it to a file, not the database.
If you decide to go with the database adapter, maybe you could "tag" somehow the translations, so that on the home page you fetch only the translations for the home page, on the contact page only the translations for the contact page...
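For the first option, a minimal skeleton could look like the sketch below; the translations table and its columns are assumptions, and the real work is just returning the messages for the requested locale from _loadTranslationData():

    <?php
    class My_Translate_Adapter_Db extends Zend_Translate_Adapter
    {
        // Zend_Translate calls this to load the messages for a locale.
        protected function _loadTranslationData($data, $locale, array $options = array())
        {
            $db = Zend_Db_Table::getDefaultAdapter();

            // fetchPairs() maps the first selected column to the second one.
            $messages = $db->fetchPairs(
                'SELECT msgid, msgstr FROM translations WHERE locale = ?',
                $locale
            );

            return array($locale => $messages);
        }

        public function toString()
        {
            return 'Db';
        }
    }

Combined with Zend_Translate's caching support (setCache()), the per-request query cost should stay low.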
HTH!
Default Zend adapters handle caching well, so I'd stick to them, unless you really need database.
Instead of storing the translation data in the database, you may directly operate on the translation files (e.g. po templates). This would be the best choice if you just needed to add (append to file) new translation strings.
You may use Zend_Translate's option to log untranslated messages (to a file or any log adapter, including a database), and then handle the logs, or even create a listener that translates the saved strings.
Here's how: http://cloetensbrecht.be/zend_translate_mysql.html
I'd love to do this:
UPDATE table SET blobCol = HTTPGET(urlCol) WHERE whatever LIMIT n;
Is there code available to do this? I know this should be possible, as the MySQL docs include an example of adding a function that does a DNS lookup.
MySQL / Windows / preferably without having to compile stuff, but I can.
(If you haven't heard of anything like this but would expect to have if it did exist, a "proly not" would be nice.)
EDIT: I know this would open a whole can o' worms re security; however, in my case the only access to the DB is via the mysql console app. It is not a world-accessible system. It is not a web back end. It is only a local data-logging system.
No, thank goodness — it would be a security horror. Every SQL injection hole in an application could be leveraged to start spamming connections to attack other sites.
You could, I suppose, write it in C and compile it as a UDF. But I don't think it really gets you anything in comparison to just SELECTing in your application layer and looping over the results doing HTTP GETs and UPDATEing. If we're talking about making HTTP connections, the extra efficiency of doing it in the database layer will be completely dwarfed by the network delays anyway.
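For illustration, the application-layer loop can be very small. A sketch in PHP (the DSN, credentials, and table/column names are placeholders):

    <?php
    $pdo = new PDO('mysql:host=localhost;dbname=logdb', 'user', 'pass',
                   array(PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION));

    $rows = $pdo->query('SELECT id, urlCol FROM logs WHERE blobCol IS NULL LIMIT 10');
    $update = $pdo->prepare('UPDATE logs SET blobCol = ? WHERE id = ?');

    foreach ($rows as $row) {
        // file_get_contents() returns false on failure; skip those rows.
        $body = @file_get_contents($row['urlCol']);
        if ($body !== false) {
            $update->execute(array($body, $row['id']));
        }
    }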
I don't know of any function like that as part of MySQL.
Are you just trying to retrieve HTML data from many URLs?
An alternative solution might be to use Google spreadsheet's importHtml function.
Google Spreadsheets Lets You Import Online Data
Proly not. Best practice in a web environment is to have database servers isolated from the outside, both ways, meaning that the DB server wouldn't be allowed to fetch stuff from the internet.
Proly not.
If you're absolutely determined to get web content from within an SQL environ, there are as far as I know two possibilities:
Write a custom MySQL UDF in C (as bobince mentioned). This could potentially be a huge job, depending on your experience with C, how much security you want, and how complete you want the UDF to be: e.g. just GET requests? What about POST? HEAD? etc.
Use a different database which can do this. If you're happy with SQL, you could probably do this with PostgreSQL and one of the snap-in languages such as Python or PHP (a sketch follows below).
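For example, with PL/Python (the untrusted variant, so it can reach the network and is superuser-only to install), a fetch function is only a few lines. A sketch from the Python 2 era, not production code:

    -- Requires the plpythonu language to be installed in the database.
    CREATE FUNCTION http_get(url text) RETURNS text AS $$
        import urllib2
        return urllib2.urlopen(url).read()
    $$ LANGUAGE plpythonu;

    -- Then something close to the original wish becomes possible:
    -- UPDATE pages SET body = http_get(url) WHERE body IS NULL;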
If you're not too fussed about sticking with SQL, you could use something like eXist. You can do this type of thing relatively easily with XQuery, and you would benefit from being able to easily modify the results to fit your schema (rather than just lumping them into a blob field) or store the page "as is" as an XHTML doc in the DB.
Then you can run queries very quickly across all documents to, for instance, get all the links or quotes or whatever. You could even apply XSL to such a result with very little extra work. Great if you're storing the pages for reference and want to adapt the results into a personal "intranet"-style app.
Also, since eXist is document-centric it has lots of great methods for fuzzy-text searching and near-word searching, and it has a great full-text index (much better than MySQL's). Perfect if you're after doing some data mining on the content, e.g.: find me all documents where a word like "burger" appears within 50 words of "hotdog" and the word isn't in a UL list. Try doing that natively in MySQL!
As an aside, and with no malice intended: I often wonder why eXist is overlooked when people build CMSs. It's a database that can store content in its native format (XML, or its subset, (X)HTML), query it with ease in its native format, and transform it from its native format with a powerful templating language that looks and acts like its native format. Sometimes SQL is just plain wrong for the job!
Sorry. Didn't mean to waffle! :-$