I have a database which contains a huge number of tables and stored procedures. How can I get specific objects, like tables and stored procedures, in a single query for a specific database?
SELECT
[schema] = s.name,
[object] = o.name,
o.type_desc
FROM sys.objects AS o
INNER JOIN sys.schemas AS s
ON o.[schema_id] = s.[schema_id]
WHERE o.[type] IN ('P','U');
Some other answers you'll find on this or other sites might suggest some or all of the following:
sysobjects - stay away, this is a backward compatibility view that has been deprecated, and shouldn't be used in any version > SQL Server 2000. See a thorough but not exhaustive replacement map here.
built-in functions like OBJECT_NAME(), SCHEMA_NAME() and OBJECT_SCHEMA_NAME() - I've recommended these myself over the years, until I realized they are blocking functions and don't observe the transaction's isolation semantics. So if you want to grab this information under read uncommitted while there are underlying changes happening, you can't, and you'll have to wait. Which may be what you want to do, but not always.
INFORMATION_SCHEMA - these views are there to satisfy the standards, but aren't complete, are warned to be inaccurate, and aren't updated to reflect new features (I blogged about several specific problems here). So for very basic information (or when you need to write cross-platform metadata code), they may be ok, but in almost all cases I suggest just always using a method you can trust instead of picking and choosing.
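For comparison, here is roughly what the same basic listing looks like through INFORMATION_SCHEMA (a sketch only; tables and procedures come from two separate views, and the type_desc strings below are just labels added to mimic the catalog view output above):
-- Rough INFORMATION_SCHEMA equivalent, for comparison only;
-- the catalog views above remain the recommended approach.
SELECT TABLE_SCHEMA AS [schema],
       TABLE_NAME   AS [object],
       'USER_TABLE' AS type_desc
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_TYPE = 'BASE TABLE'
UNION ALL
SELECT ROUTINE_SCHEMA,
       ROUTINE_NAME,
       'SQL_STORED_PROCEDURE'
FROM INFORMATION_SCHEMA.ROUTINES
WHERE ROUTINE_TYPE = 'PROCEDURE';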
I am trying to find a way to merge a select query and an update within the same instruction on a MySQL server. This might sound like a repeated question, but my need is different from my predecessors'.
I am actually looking for a single SQL instruction, as I cannot use transactions or split it in two. The goal is to bypass a security measure that only allows one select query to pass through. This is not for anything illegal; it is for a security class at my university, and the goal is to attack a tailored system that was deliberately made vulnerable to SQL injection.
I can perform the injections, make any select, log in with injections and so on, but this part with the update was left as a challenge.
I tried everything I could imagine to find a way to mix them; I even thought about putting an UPDATE statement in an inner query, but the syntax was obviously wrong.
Any thoughts? If not possible, suggestions on how to attack the target and produce an update are more than welcome.
Here is a long shot, it is obviously wrong, but I thought it might help to understand what I am trying to achieve:
SELECT *
FROM user
WHERE (name = 'admin') and exists (
UPDATE user
SET pass='test'
WHERE name='peter');-- OR email = 'admin') AND pass = 't'..
Target:
$sel1 = mysql_query ("SELECT ID, name, locale, lastlogin, gender
FROM USERS_TABLE
WHERE (name = '$user' OR email = '$user') AND pass = '$pass'");
Update: I accepted the answer that was closer to 'not possible'. But further research on the matter led to the conclusion that this is less about a DBMS security feature and more about the API used by the connector, i.e. which usage and syntax the DBMS and its driver will accept.
On the question of embedding an UPDATE statement in a SELECT, I found this to be not possible - at least to the extent of my knowledge.
About the attack, it could be possible to use stacked statements when the programmer uses an API that allows such a thing - which is rare, but does exist. In conclusion, the whole thing seems hard to accomplish.
I am not familiar with MySQL, but from my SQL Server experience I can tell you that you cannot combine SELECT and UPDATE statements in a single query.
Moreover, any modern database system should be smart enough to stop you if you try to sneak in an UPDATE through a SELECT statement and thus circumvent your DB permissions.
I am sure MySQL will not be dumb enough to allow an update if you bundle it with a SELECT query - and that is assuming it is even possible.
So, from my point of view, you may be chasing a dead end here; it is simply not allowed/possible.
I have three tables in my SQL schema: clients (with address and so on), orders (with order details), and files (which stores uploaded files). Both the files table and the orders table contain foreign keys referencing the clients table.
How would I do that in IndexedDB? I'm new to this whole key/index way of thinking and would just like to understand how the same thing would be done with IndexedDB.
Now I know there is a shim.js file, but I'm trying to understand the concept itself.
Help and tips highly appreciated!
EDIT:
So I would really have to think about which queries I want to allow and then optimize my IndexedDB implementation for those queries - is that the main point here? Basically, I want to store a customer once, then many orders for that customer, and then be able to upload small files (preferably PDFs) for that customer, not even necessarily for each order (although if that's easy to implement, I may do it)... I see every customer as a separate entity; I won't have things like "give me all customers who ordered xy". I only need to have each customer once and then store all the orders and all the files for that customer. I want to be able to go: search for a customer with the name XY - which then gives me a list of all orders and their dates and a list of the files uploaded for that customer (maybe associated with the order).
This question is a bit too broad to answer precisely. Nevertheless, the major concept to learn when transitioning from SQL to NoSQL (indexedDB) is the concept of object stores. Most SQL databases are relational and perform much of the work of optimizing queries for you; indexedDB does not, so the concepts of normalization and denormalization work a bit differently. The focal point is to explicitly plan your own queries. Unlike an app/system designed around simple ad-hoc SQL queries that can be written, and easily added or changed, at a later point in time, indexedDB requires you to do a lot of the planning up front.
So it is not quite safe to say that the transition is simply a matter of creating three object stores to correspond to your three relational tables. For one, there is no concept of joining in indexedDB so you cannot join on foreign keys.
It is not clear from your question but your 3 tables are clients, orders, and files. I will go out on a limb here and make some guesses. I would bet you could use a single object store, clients. Then, for each client object, store the normal client properties, store an orders array property, and store a files array property. In the orders array, store order objects.
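As a rough illustration of that single-store approach (all property names here are invented for the example), one record in the clients store might look something like this:
// Hypothetical shape of one record in a single "clients" object store;
// the property names are invented for illustration only.
var client = {
  id: 1,
  name: 'ACME Corp',
  address: '123 Example Street',
  orders: [
    { orderId: 101, date: '2014-01-15', items: ['widget'] },
    { orderId: 102, date: '2014-02-02', items: ['gadget'] }
  ],
  files: [
    { fileName: 'invoice-101.pdf', content: null } // Blob data would go here, if supported
  ]
};
// A single put() call on the clients store then writes the whole nested object in one shot.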
If your files are binary, this won't work, you will need to use blobs, and may even encounter issues with blob support in various browser indexedDB implementations (Chrome sort of supports it, it is unclear from version to version).
This assumes your typical query plan is that you need to do something like list the orders for a client, and that is the most frequently used type of query.
If you needed to do something across orders, independent of which client an order belongs to, this would not work so well and you would have to iterate over the entire store.
If the clients-orders relation is many to many, then this also would not work so well, because of the need to store the order info redundantly per client. However, one note here: this redundant storage is quite common in NoSQL-style databases like indexedDB. The goal is not to perfectly model the data, but to store the data in such a way that your most frequently occurring queries complete quickly (while still maintaining correctness).
Edit:
Based on your edit, I would suggest a simple prototype that uses three object stores. In your client view page where you display client details, simply run three separate queries.
1. Get the one entity from the client object store based on client id.
2. Open a cursor over the orders and get all orders for the client. In the orders store, use a client-id property. Create an index on this client-id property. Open the cursor over the index for a specific client id.
3. Open a cursor over the files store using a similar tactic as #2.
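A very rough sketch of those three lookups (the store and index names are assumptions for the example, and error handling is omitted):
// Minimal sketch, assuming object stores named 'clients', 'orders' and 'files',
// where 'orders' and 'files' each have an index named 'clientId'.
var tx = db.transaction(['clients', 'orders', 'files'], 'readonly');

// 1. Get the single client entity by its id
tx.objectStore('clients').get(clientId).onsuccess = function (e) {
  console.log('client', e.target.result);
};

// 2. Open a cursor over the orders index for this client id
tx.objectStore('orders').index('clientId')
  .openCursor(IDBKeyRange.only(clientId)).onsuccess = function (e) {
    var cursor = e.target.result;
    if (cursor) {
      console.log('order', cursor.value);
      cursor.continue();
    }
  };

// 3. Same tactic for the files store
tx.objectStore('files').index('clientId')
  .openCursor(IDBKeyRange.only(clientId)).onsuccess = function (e) {
    var cursor = e.target.result;
    if (cursor) {
      console.log('file', cursor.value);
      cursor.continue();
    }
  };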
In your bizlogic layer, enforce your data constraints. For example, when deleting a client, first delete all the files from the files store, then delete all the orders from the orders store, and then delete the single client entity from the client store.
What I am suggesting is to not overthink it. It is not that complicated. So far you have not described something that sounds like it will have performance issues so there is no need for something more elegant.
I will go with Josh's answer, but if you are still finding it hard to use IndexedDB and want to continue using SQL, you can use sqlweb. It will let you perform operations inside IndexedDB by using SQL queries.
e.g.:
var connection = new JsStore.Instance('jsstore worker path');
connection.runSql("select * from Customers").then(function(result) {
console.log(result);
});
Here is the link - http://jsstore.net/tutorial/sqlweb/
Is it possible to get the equivalent of MySQL's HANDLER in PostgreSQL?
A cursor is nearly the tool I am looking for.
It permits moving quickly forward and backward, row by row, without fetching a big result set.
DECLARE mycursor CURSOR FOR SELECT * FROM mytable ORDER BY "name";
But how do I position the cursor at a certain starting row?
For example, to start the list at the first name beginning with "M".
If I use this cursor:
DECLARE mycursor CURSOR FOR
SELECT *
FROM mytable
WHERE "name" LIKE 'M%' ORDER BY "name";
I can only move forward and backward through the "M" records, but I can no longer step backward to the "A" records or forward to the "Z" records.
The only solution I found to get the "first M record" was via its absolute row number using ROW_NUMBER() OVER () on the whole sorted result set,
then creating the cursor on the whole result set (A to Z) and moving the cursor to the first "M" occurrence with:
MOVE FORWARD nr_of_first_m FROM mycursor;
Is there a better solution? Because it takes over 1000ms to perform these queries.
Per your link, HANDLER seems to be a MySQL extension that exposes low-level ISAM-like access to applications. I've worked with similar interfaces in old direct-access shared-file ISAM database products in the past, and I'm pleasantly surprised to see it in a client/server SQL database. (If I'd known about it four years ago it would've made writing a replacement interpreter for a 1983 4GL my previous job used for a business application a lot easier.)
PostgreSQL does not have any equivalent feature exposed at the SQL level. The closest it comes is a scrollable cursor - but because of visibility rules and transaction isolation, this may require materializing a sorted copy of the data set (though not generally for a simple cursor over a SELECT from a single table with no aggregation, window functions, etc., that is not WITH HOLD). As you have already noted, however, PostgreSQL's FETCH and MOVE do not support value-based scrolling, only row-count-based scrolling, which appears to make them unsuitable for your requirements.
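For reference, the row-count-based scrolling that is available looks roughly like this (a minimal sketch against the mytable cursor from the question):
-- Minimal sketch of row-count-based scrolling with a scrollable cursor
BEGIN;
DECLARE mycursor SCROLL CURSOR FOR
    SELECT * FROM mytable ORDER BY "name";
FETCH FORWARD 1 FROM mycursor;    -- next row
FETCH BACKWARD 1 FROM mycursor;   -- previous row
MOVE FORWARD 100 FROM mycursor;   -- skip ahead by a row count, never by value
COMMIT;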
The usual solution in PostgreSQL for "get me the prior row" or "get me the next row" is to work in REPEATABLE READ (snapshot) or SERIALIZABLE isolation and use queries like:
SELECT * FROM my_table WHERE the_key > 'last_seen_key_value' ORDER BY the_key ASC LIMIT 1;
e.g. if you last saw 'Matthew' and want the next name:
SELECT * FROM my_table WHERE "name" > 'Matthew' ORDER BY "name" ASC LIMIT 1;
or the previous name:
SELECT * FROM my_table WHERE "name" < 'Matthew' ORDER BY "name" DESC LIMIT 1;
This strategy works very well so long as you have a suitable index on the key - for a utf-8 db, you'll want a text_pattern_ops b-tree index on "name" in this case. It's still nowhere near as fast as raw access to an ISAM table (like MyISAM), but it's probably pretty similar to what MySQL is doing internally when you use a handler on an InnoDB table, because it has to solve similar problems to PostgreSQL. There's some parsing and planning overhead, but you can get rid of some of that by keeping a pair of prepared statements and re-using them.
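A minimal sketch of the supporting index and the reusable prepared statements (the table, column and statement names are just the ones from the example above; a plain b-tree index is shown for simplicity):
-- Supporting b-tree index for the keyset queries above
CREATE INDEX my_table_name_idx ON my_table ("name");

-- Reusable prepared statements to cut down the parse/plan overhead per step
PREPARE next_row (text) AS
    SELECT * FROM my_table WHERE "name" > $1 ORDER BY "name" ASC LIMIT 1;
PREPARE prev_row (text) AS
    SELECT * FROM my_table WHERE "name" < $1 ORDER BY "name" DESC LIMIT 1;

-- Usage: EXECUTE next_row('Matthew');  EXECUTE prev_row('Matthew');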
It's possible that you could implement something like HANDLER in PostgreSQL using low-level C code to access the heap and indexes, but getting it right in the face of concurrent activity, vacuum, etc. would be challenging. Considerable experience with PostgreSQL's innards would be required, especially with the index access methods. If you're prepared to put in the couple of months of work required to learn and implement it, you could study the sources and then post a preliminary proposal on pgsql-hackers. Or, if this is business-critical functionality and you need PostgreSQL for other purposes, you could contact someone who does professional PostgreSQL development - but don't expect a low quote for something like this.
Otherwise, if you need such direct, low-level access it may be best to stick to a database product that directly supports what you need.
Does anyone know an ORM that can abstract JOINs? I'm using PHP, but I would take ideas from anywhere. I've used Doctrine ORM, but I'm not sure if it supports this concept.
I would like to be able to specify a relation that is actually a complicated query, and then use that relation in other queries. Mostly this is for maintainability, so I don't have a lot of replicated code that has to change if my schema changes. Is this even possible in theory (at least for some subset of "complicated query")?
Here's an example of what I'm talking about:
ORM.defineRelationship('Message->Unresponded', '
LEFT JOIN Message_Response
ON Message.id = Message_Response.Message_id
LEFT JOIN Message AS Response
ON Message_Response.Response_id = Response.id
WHERE Response.id IS NULL
');
ORM.query('
SELECT * FROM Message
SUPER_JOIN Unresponded
');
Sorry for the purely invented syntax. I don't know if anything like this exists. It would certainly be complicated if it did.
One possibility would be to write this join as a view in the database. Then you can use any query tools on the view.
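As a rough sketch (using the invented table and column names from the question), the view might look like this:
-- Hypothetical view capturing the "unresponded" join from the question
CREATE VIEW Message_Unresponded AS
SELECT Message.*
FROM Message
LEFT JOIN Message_Response
    ON Message.id = Message_Response.Message_id
LEFT JOIN Message AS Response
    ON Message_Response.Response_id = Response.id
WHERE Response.id IS NULL;

-- Other queries (and the ORM) can then select from the view instead of
-- repeating the join logic:
SELECT * FROM Message_Unresponded;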
Microsoft's Entity Framework also supports very complex mappings between code entities and the database tables, even crossing databases. The query you've given as an example would be easily supported in terms of mapping from that join of tables to an entity. You can then execute further queries against the resulting joined data using LINQ. Of course, if you're using PHP this may not be a huge amount of use to you.
However I'm not aware of a product that wraps up the join into the syntax of further queries in the way you've shown.
In a controversial blog post today, Hackification pontificates on what appears to be a bug in the new LINQ To Entities framework:
Suppose I search for a customer:
var alice = data.Customers.First( c => c.Name == "Alice" );
Fine, that works nicely. Now let's see if I can find one of her orders:
var order = ( from o in alice.Orders
where o.Item == "Item_Name"
select o ).FirstOrDefault();
LINQ-to-SQL will find the child row. LINQ-to-Entities will silently return nothing.
Now let's suppose I iterate through all orders in the database:
foreach( var order in data.Orders ) {
    Console.WriteLine( "Order: " + order.Item );
}
And now repeat my search:
var order = ( from o in alice.Orders
where o.Item == "Item_Name"
select o ).FirstOrDefault();
Wow! LINQ-to-Entities is suddenly telling me the child object exists, despite telling me earlier that it didn't!
My initial reaction was that this had to be a bug, but after further consideration (and backed up by the ADO.NET Team), I realized that this behavior was caused by the Entity Framework not lazy loading the Orders subquery when Alice is pulled from the datacontext.
This is because order is a LINQ-To-Object query:
var order = ( from o in alice.Orders
where o.Item == "Item_Name"
select o ).FirstOrDefault();
And is not accessing the datacontext in any way, while his foreach loop:
foreach( var order in data.Orders )
Is accessing the datacontext.
LINQ-To-SQL actually creates lazy-loaded properties for Orders, so that when accessed they perform another query; LINQ to Entities leaves it up to you to manually retrieve related data.
Now, I'm not a big fan of ORMs, and this is precisely the reason. I've found that in order to have all the data you want ready at your fingertips, they repeatedly execute queries behind your back; for example, that LINQ-to-SQL query above might run an additional query per row of Customers to get Orders.
However, the EF not doing this seems to majorly violate the principle of least surprise. While it is a technically correct way to do things (You should run a second query to retrieve orders, or retrieve everything from a view), it does not behave like you would expect from an ORM.
So, is this good framework design? Or is Microsoft over thinking this for us?
Jon,
I've been playing with linq to entities also. It's got a long way to go before it catches up with linq to SQL. I've had to use linq to entities for the Table per Type Inheritance stuff. I found a good article recently which explains the whole 1 company 2 different ORM technologies thing here.
However you can do lazy loading, in a way, by doing this:
// Lazy Load Orders
var alice2 = data.Customers.First(c => c.Name == "Alice");
// Should Load the Orders
if (!alice2.Orders.IsLoaded)
alice2.Orders.Load();
or you could just include the Orders in the original query:
// Include Orders in original query
var alice = data.Customers.Include("Orders").First(c => c.Name == "Alice");
// Should already be loaded
if (!alice.Orders.IsLoaded)
alice.Orders.Load();
Hope it helps.
Dave
So, is this good framework design? Or is Microsoft over thinking this for us?
Well, let's analyse that: all the thinking that Microsoft does so we don't have to really makes us lazier programmers. But in general, it does make us more productive (for the most part). So are they overthinking, or are they just thinking for us?
If LINQ-to-Sql and LINQ-to-Entities came from two different companies, it would be an acceptable difference - there's no law stating that all LINQ-To-Whatevers have to be implemented the same way.
However, they both come from Microsoft - and we shouldn't need intimate knowledge of their internal development teams and processes to know how to use two different things that, on their face, look exactly the same.
ORMs have their place, and do indeed fill a gap for people trying to get things done, but the ORM user must know exactly how their ORM gets things done - treating it like an impenetrable black box will only lead you to trouble.
Having lost a few days to this very problem, I sympathize.
The "fault," if there is one, is that there's a reasonable tendency to expect that a layer of abstraction is going to insulate from these kinds of problems. Going from LINQ, to Entities, to the database layer, doubly so.
Having to switch from MS-SQL (using LinqToSQL) to MySQL (using LinqToEntities), for instance, one would figure that the LINQ, at least, would be the same, if only to save the cost of having to re-write program logic.
Having to litter code with .Load() and/or LINQ with .Include() simply because the persistence mechanism under the hood changed seems slightly disturbing, especially with a silent failure. The LINQ layer ought to at least behave consistently.
A number of ORM frameworks use a proxy object to dynamically load the lazy object transparently, rather than just return null, though I would have been happy with a collection-not-loaded exception.
I tend not to buy into the they-did-it-deliberately-for-your-benefit excuse; other ORM frameworks let you annotate whether you want eager or lazy-loading as needed. The same could be done here.
I don't know much about ORMs, but as a user of LinqToSql and LinqToEntities I would hope that when you try to query Orders for Alice it does the extra query for you when you make the linq query (as opposed to not querying anything or querying everything for every row).
It seems natural to expect
from o in alice.Orders where o.Item == "Item_Name" select o
to work given that's one of the reasons people use ORM's in the first place (to simplify data access).
The more I read about LinqToEntities, the more I think LinqToSql fulfills most developers' needs adequately. I usually just need a one-to-one mapping of tables.
Even though you shouldn't have to know about Microsoft's internal development teams and processes, fact of the matter is that these two technologies are two completely different beasts.
The design decision for LINQ to SQL was, for simplicity's sake, to implicitly lazy-load collections. The ADO.NET Entity Framework team didn't want to execute queries without the user knowing so they designed the API to be explicitly-loaded for the first release.
LINQ to SQL has been handed over to the ADO.NET team, so you may see a consolidation of APIs in the future, or LINQ to SQL folded into the Entity Framework, or you may see LINQ to SQL atrophy from neglect and eventually become deprecated.