I was reviewing some code that a consultant checked in and noticed they were using SQLCLR. I don't have any experience with it, so I thought I would research what it was about. I noticed that they used
Dim cn As New SqlConnection("server=LOCALHOST;integrated security=yes;database=" & sDb)
instead of
Dim conn As New SqlConnection("context connection=true")
I'm wondering what the difference is, since it's localhost in the first one?
The context connection uses the user's already established connection to the server. So you inherit things like their database context, connection options, etc.
Using localhost will connect to the server over a normal shared memory connection. This can be useful if you don't want to use the user's connection (e.g., if you want to connect to a different database, or with different options, etc.).
In most cases you should use the context connection, since it doesn't create a separate connection to the server.
Also, be warned that using a separate connection means you are not part of the user's transaction and are subject to normal locking semantics.
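For illustration, here is a minimal sketch of a SQLCLR procedure using the context connection (shown in C#; the procedure name and query are made up, but "context connection=true" is the standard way to piggyback on the caller's connection):

using System.Data.SqlClient;
using Microsoft.SqlServer.Server;

public partial class StoredProcedures
{
    [SqlProcedure]
    public static void CountObjects()
    {
        // Runs on the caller's existing connection, so it inherits their
        // database context and participates in their transaction.
        using (var conn = new SqlConnection("context connection=true"))
        {
            conn.Open();
            using (var cmd = new SqlCommand("SELECT COUNT(*) FROM sys.objects", conn))
            {
                SqlContext.Pipe.Send(cmd.ExecuteScalar().ToString());
            }
        }
    }
}

Swapping that string for "server=LOCALHOST;integrated security=yes;database=..." would instead open a brand new connection, outside the caller's transaction.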
Consider a big office phone system:
My office has an internal phone system. But every phone also has an external phone number (virtual numbers that utilize one of a group of real TELCO lines). I can call another office by dialing their phone extension directly and the call will route through our internal phone system (one hop). Alternatively I could dial that phone's public number and the call routes out from the building's system to the TELCO switching office, then back through the building's system then to the office extension (3 hops).
The first SQL connection behaves as any standard SQL connection would when connecting to the server specified in the connection string. A new connection is created using the standard native SQL connectivity. This behaves like dialing the full public phone number of another office phone. Sure, you are connecting to the local machine, but the connection is routed differently.
With the context connection, the new SqlConnection instance uses the existing connection that is executing the SQLCLR object. It's using the existing/local context. This is like dialing my office mate's extension directly. Local context and more efficient.
Although I'm not positive, I believe that when using the context connection, the calls to the SQLCLR objects also then participate in the context's transaction. Someone please correct me if I'm wrong.
Peter
Related
I'm developing a WPF + EF Core desktop application for multiple users. I have to connect to a MySQL server with a limited number of connections. Testing with a single desktop client, I see my connection count grow to 3-4 instances, so I'm worried about it.
I really don't understand why, because my code only uses one instance at a time.
How could I decrease these numbers?
Maybe MySQL maintains a minimum pool of open connections?
Can I force EF Core to use only one connection per desktop application instance?
Edit:
It's an Azure MySQL database (limited number of open connections per instance). I've attached a graph of active connections. The first part of the graph (values ranging between 4-7) is while I'm running a single desktop-user test; then I stop and the connections come back down to 4.
All my calls are synchronous and follow this structure:
using (var context = database.getContext())
{
    // Calls to the database
    context.SaveChanges(); // if needed
}
Have you tried adding the pooling option to your connection string: Pooling=false?
var connectionString = "Server=server;Database=database;User ID=user;Password=pass;Pooling=false;";
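If it helps, this is roughly where that string would go in EF Core (a sketch assuming the Pomelo MySQL provider; the class name and connection values are placeholders, and newer provider versions also want a ServerVersion argument):

using Microsoft.EntityFrameworkCore;

public class AppDbContext : DbContext
{
    // Pooling=false disables ADO.NET connection pooling for this context.
    private const string ConnectionString =
        "Server=server;Database=database;User ID=user;Password=pass;Pooling=false;";

    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
    {
        // Newer Pomelo versions: UseMySql(ConnectionString, ServerVersion.AutoDetect(ConnectionString))
        optionsBuilder.UseMySql(ConnectionString);
    }
}

Note that disabling pooling trades a lower idle-connection count for the cost of opening a fresh connection every time a context is created.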
Our MySQL hoster has a limit on concurrent DB connections. As it is rather pricey to expand that limit, the following question came up:
Info:
I have a web app (developed by an external coder, in case you are wondering about this question).
The web app is distributed and installed on many machines (every user installs it on their PC). These satellites send data into a MySQL DB. Right now the satellites post to the DB directly. To improve security and error handling, I would like to have the satellites post to an XML-RPC endpoint (WordPress API), which then posts into the DB.
Question:
Would such an API reduce the number of concurrent connections or not?
(Right now every satellite connects directly, so it is 1 user = 1 connection.)
If 10 satellites post to one script, and that script then processes the data and posts it into the DB, does that count as one connection? (Or as many connections as there are data sets to process?)
What if the API throttles a little, so there is only one post at a time? Would this lead to just one connection or not?
Any pointers are well appreciated!
Thank you in advance!
If you want to improve how concurrent connections to the database are handled (because the fact is, creating a connection to the database is "expensive"), you should look into using a connection pool (example with Java).
How are concurrent database connections counted?
[Diagram illustrating concurrent database connections (source: iforce.co.nz)]
A connectionless server uses a connectionless IPC API (e.g., a connectionless datagram socket); sessions with concurrent clients can be interleaved.
A connection-oriented server uses a connection-oriented IPC API (e.g., a stream-mode socket); sessions with concurrent clients can only be sequential unless the server is threaded.
(Client-server distributed computing paradigm, N.A.)
Design and Performance Issues
Database connections can become a bottleneck. This can be addressed by using connection pools.
Compiled SQL statements can be re-used by using PreparedStatements instead of plain statements. These statements can be parameterized (see the sketch after the exception-handling example below).
Connections are usually not created directly by the servlet but are either created using a factory (DataSource) or obtained from a naming service (JNDI).
It is important to release connections (close them or return them to the connection pool). This should be done in a finally clause (so that it is done in any case). Note that close() also throws an exception!
try
{
    Console.WriteLine("Executing the try statement.");
    throw new NullReferenceException();
}
catch (NullReferenceException e)
{
    Console.WriteLine("{0} Caught exception #1.", e);
}
catch
{
    Console.WriteLine("Caught exception #2.");
}
finally
{
    Console.WriteLine("Executing finally block.");
}
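Tying the parameterization and connection-release points above back to database code, here is a minimal C# sketch (the connection string and table name are hypothetical); the using blocks play the role of the finally clause, closing the command and connection even when an exception is thrown:

using System.Data.SqlClient;

static int CountUsersNamed(string name)
{
    // 'using' guarantees Dispose()/Close() runs, just like a finally clause would.
    using (var conn = new SqlConnection("Server=server;Database=db;Integrated Security=True"))
    using (var cmd = new SqlCommand("SELECT COUNT(*) FROM Users WHERE Name = @name", conn))
    {
        // Parameterized statement: the value is bound, not concatenated into the SQL.
        cmd.Parameters.AddWithValue("@name", name);
        conn.Open();
        return (int)cmd.ExecuteScalar();
    }
}

With pooling enabled (the default for SqlConnection), Close()/Dispose() returns the connection to the pool rather than tearing it down.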
There are various problems when developing interfaces between OO and RDBMS. This is called the "paradigm mismatch". The main problem is that databases use reference by value while OO languages use reference by address. So-called middleware / object persistence framework software tries to ease this.
Dietrich, R. (2012). Web Application Architecture, Server Side Scripting Servlets. Palmerston North: Massey University.
It depends on the way you implement the centralized service.
If the service posts the data to MySQL immediately after receiving a request, you may have many connections when there are simultaneous requests. But using connection pooling you can control precisely how many open connections you will have. In the limit, you can have just one connection open. This might cause contention when there are many concurrent requests, as each request has to wait for the connection to be released.
If the service receives requests, stores them somewhere (other than the database), and processes them in chunks, you can also get by with just one connection. But this case is more complex to implement because you have to control access (reading and writing) to the temporary data buffer.
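To make the second option concrete, here is a rough sketch of the idea (in C# purely for illustration, using MySql.Data as the ADO.NET provider; the table and column names are made up):

using System.Collections.Concurrent;
using MySql.Data.MySqlClient;

class BufferedWriter
{
    // Incoming posts land here instead of going straight to the database.
    private readonly BlockingCollection<string> _buffer = new BlockingCollection<string>();

    public void Enqueue(string payload) => _buffer.Add(payload);

    // A single background loop drains the buffer over one connection.
    public void FlushLoop(string connectionString)
    {
        using (var conn = new MySqlConnection(connectionString))
        {
            conn.Open(); // the only connection held against the database
            foreach (var payload in _buffer.GetConsumingEnumerable())
            {
                using (var cmd = new MySqlCommand(
                    "INSERT INTO satellite_data (payload) VALUES (@p)", conn))
                {
                    cmd.Parameters.AddWithValue("@p", payload);
                    cmd.ExecuteNonQuery();
                }
            }
        }
    }
}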
I have an encrypted connection from my iOS app to my MySQL database. My question is whether or not they would be able to intercept the connection from the iOS app and find the domain, with or without encryption.
whether or not they would be able to intercept the connection from the iOS app
Yes, they would be able to do so, at least on a jailbroken device; for jailbroken devices, there are a couple of factors that make hacking easier.
On the one hand, on a jailbroken system it is possible to defeat Apple's encryption of the app executable (by dumping the unencrypted program code from memory to disk) and run a utility called "class-dump" to obtain the Objective-C class information (it is also possible to use the GDB debugger on the device, or IDA Pro, in order to reverse engineer the application logic).
On the other hand, the same MobileSubstrate library that is used for making iOS tweaks can be used to alter the behavior of any given application (I have successfully used this technique for circumventing some code obfuscation at runtime), so in theory an attacker could alter the communication logic of your application and dump your and your users' unencrypted data.
On the gripping hand, most standard and less common Unix utilities usable for this kind of hacking have been ported/compiled for jailbroken iOS - including the popular network scanner nmap, the "John the Ripper" password cracker, the infamous aircrack-ng WEP/WPA key cracker, the GNU debugger (GDB), etc. These are also useful for executing an attack like the one you described.
If the connection itself is encrypted then, in theory, your data should be safe while on the wire. This still doesn't prevent the MobileSubstrate-based approach to exploitation. It is also true that the IP address of the server you're connecting to can be found relatively easily (and even the domain it maps to, since there are known techniques for obtaining reverse-DNS information from a known IP address).
I'm not sure if this is possible without a jailbreak, but a similar man-in-the-middle attack was performed against Apple's in-app purchases by a Russian hacker (effectively bypassing the underlying payment system and allowing purchases to be downloaded for free), merely by having users install SSL certificates and profiles and use the hacker's own proxy server, so I'd suspect it is possible even without a jailbreak. Note that in this case the connection was also encrypted, and it was not the encryption that mattered.
IMO you should not create a direct connection to the MySQL database, but instead connect to a server program/API which in turn connects to the database in question. To answer the question more directly: users should not be able to intercept the connection from the iOS app if it is encrypted correctly, but still, is it worth the risk?
If the connection is encrypted, the data are secure. But not the domain: the iPhone is connecting to an IP address, and that IP address is obviously not encrypted.
Create a PHP interface between your app and MySQL. That way they will only be able to hack app accounts, not the entire database! Your MySQL credentials will be stored on the remote host where the PHP code runs.
I am writing my first .NET MVC application and I am using the Code-First approach. I have recently learned how to configure two SQL Server installations for high availability using a mirror database and a witness (not to be confused with failover clusters) to handle the failover process. I think this would be a great time to practice both things by mounting my web app on a highly available DB.
Now, from what I've learned (correct me if I'm wrong), in the mirror configuration the witness triggers failover to the secondary DB if the first one goes down... but your application will also need to change its connection string to reference the secondary server.
What is the best approach to have both addresses in the Web.config (or somewhere else) and choosing the right connection string?
I have zero experience with connecting to mirrored databases, so this is all hearsay! :)
The short of it is that you may not have to do anything special, as long as you pass along the FailoverPartner attribute in your connection string. The long of it is that you may need additional error handling to attempt a new connection so the data provider will actually use the FailoverPartner name for the new connection.
There seems to be some good information with Connecting Clients to a Database Mirroring Session to get started. Have you had a chance to check that out?
If not, it's there under Making the Initial Connection, where they introduce the FailoverPartner attribute of the ConnectionString property.
Reconnecting to a Database Mirroring Session suggests that on any client disconnect due to failover, the client will need to trap this exception and be prepared to reconnect:
The application must become aware of the error. Then, the application needs to close the failed connection and open a new connection using the same connection string attributes.
If the FailoverPartner attribute is available, this process should be relatively transparent to the client.
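For illustration only (server and database names are made up, and I haven't verified this against a live mirroring setup), the connection string plus the reconnect pattern described above might look roughly like this:

using System.Data.SqlClient;

static class MirroredDb
{
    // "Failover Partner" tells the provider which mirror to try if the principal is unavailable.
    private const string ConnectionString =
        "Server=PrimaryServer;Failover Partner=MirrorServer;Database=MyAppDb;Integrated Security=True";

    public static SqlConnection OpenWithRetry()
    {
        try
        {
            var conn = new SqlConnection(ConnectionString);
            conn.Open();
            return conn;
        }
        catch (SqlException)
        {
            // After a failover the first attempt can fail; per the docs quoted above,
            // discard it and open a new connection with the same attributes.
            var conn = new SqlConnection(ConnectionString);
            conn.Open();
            return conn;
        }
    }
}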
If the above doesn't work, then you might need to introduce some logic at the application tier to track which node is the primary, which is the failover, and the connection strings for each, and be prepared to persist that information somewhere - much like the data access provider should be doing for us (eyes wide open).
There is also this ServerFault post on database mirroring with SQL Server that might be of interest from an operational viewpoint and has additional reference information.
Hopefully someone with actual experience will back up any of this!
This may be totally off base, but what if you had a load balancer between your web server and the database servers?
The load balancer would have both databases in its pool, using basic health check techniques (e.g., ping).
Your configuration would then only need to point to the IP of the Load Balancer, and wouldn't need to change.
This is what these network devices are good for. It's not the job of the programming framework (ASP.NET) to make decisions on the health of servers.
I've been thinking, why does Apache start a new connection to the MySQL server for each page request? Why doesn't it just keep ONE connection open at all times and send all sql queries through that one connection (obviously with client id attached to each req)?
It would cut down on the handshake overhead, along with a couple of other advantages that I can see.
It's like plugging in a computer every time you want to use it. Why go to the outlet each time when you can just leave it plugged in?
MySQL does not support multiple sessions over a single connection.
Oracle, for instance, allows this, and you can set up Apache to multiplex several logical sessions over a single TCP connection.
This is a limitation of MySQL, not of Apache or the scripting languages.
There are modules that can do session pooling:
Precreate a number of connections
Pick a free connection on demand
Create additional connections if no free connection is available.
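As a rough conceptual sketch of those three steps (in C#, not tied to any particular Apache module; all names here are made up):

using System;
using System.Collections.Concurrent;
using System.Data;

class SimpleConnectionPool
{
    private readonly ConcurrentBag<IDbConnection> _free = new ConcurrentBag<IDbConnection>();
    private readonly Func<IDbConnection> _factory;

    public SimpleConnectionPool(Func<IDbConnection> factory, int initialSize)
    {
        _factory = factory;
        // Precreate a number of connections.
        for (int i = 0; i < initialSize; i++)
            _free.Add(Open());
    }

    public IDbConnection Acquire()
    {
        // Pick a free connection on demand; create an additional one if none is free.
        return _free.TryTake(out var conn) ? conn : Open();
    }

    public void Release(IDbConnection conn) => _free.Add(conn);

    private IDbConnection Open()
    {
        var conn = _factory();
        conn.Open();
        return conn;
    }
}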
The reason is: it's simpler.
To re-use connections, you have to invent and implement connection pooling. This adds another almost-layer of code that has to be developed, maintained, etc.
Plus, pooled connections invite a whole other class of bugs that you have to watch out for while developing your application. For example, if you define a user variable, but the next user of that connection goes down a code path that branches on whether that variable exists, then that user runs the wrong code. Other problems include temporary tables, transaction deadlocks, session variables, etc. All of these become very hard to reproduce because they depend on the subsequent actions of two different users that appear to have no ties to each other.
Besides, the connection overhead on a MySQL connection is tiny. In my experience, connection pooling does not increase the number of users a server can support by very much.
Because that's the purpose of the mod_dbd module.