Accessing an in-memory sqlite database via ODBC - mysql

I have an in-memory database embedded in my application and I want other applications to be able access it via ODBC.
After searching for an hour, I think I have two options:
Write my own ODBC driver which knows how to talk to my application over the network.
Implement the MySQL or Postgres wire protocol in my application so that client applications can use the standard MySQL/Postgres drivers.
According to Creating a custom ODBC driver, writing my own driver is a huge task and should not be taken lightly.
So here are my questions.
If I have to implement the wire protocol, am I correct in assuming that I can just create a thread which talks to the database and communicates with the clients over the network? That sounds fairly simple to me.
Do you know of any reasons why I should choose one of MySQL/Postgres/Some other wire protocol over the others? My queries will be basic SELECT statements.
Is there any simpler way to access my in-memory sqlite DB via ODBC?
I'm new to the world of databases and would appreciate any pointers for further reading.

You should create your own ODBC driver. Maybe you don't need to implement all ODBC functionality, and a subset will do.
Trying to emulate a different database with its quirks and idiosyncrasies will probably be more difficult.
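Regarding the first question (a thread that owns a connection to the in-memory database and answers network requests): that layout is workable. One detail worth knowing is that an in-memory SQLite database is normally private to the single connection that opened it; if more than one connection or thread in your process needs to see the same data, shared-cache URI filenames are the usual answer. A minimal sketch in C, assuming your SQLite build accepts URI filenames (table and query are just placeholders, error checking omitted):

    #include <stdio.h>
    #include <sqlite3.h>

    /* Open (or attach to) the same in-memory database from any thread/connection
       in this process via a shared-cache URI filename. */
    static int open_shared_memdb(sqlite3 **db)
    {
        return sqlite3_open_v2("file:memdb1?mode=memory&cache=shared", db,
                               SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE |
                               SQLITE_OPEN_URI, NULL);
    }

    int main(void)
    {
        sqlite3 *writer, *reader;
        sqlite3_stmt *stmt;

        open_shared_memdb(&writer);   /* e.g. the application's own connection */
        sqlite3_exec(writer, "CREATE TABLE t(x TEXT); INSERT INTO t VALUES('hello');",
                     NULL, NULL, NULL);

        open_shared_memdb(&reader);   /* e.g. a network-serving thread */
        sqlite3_prepare_v2(reader, "SELECT x FROM t", -1, &stmt, NULL);
        while (sqlite3_step(stmt) == SQLITE_ROW)
            printf("%s\n", (const char *)sqlite3_column_text(stmt, 0));
        sqlite3_finalize(stmt);

        sqlite3_close(reader);
        sqlite3_close(writer);
        return 0;
    }

The network-facing thread would then simply prepare and step statements on its own connection, and SQLite handles the locking between the two connections.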

Related

Delphi existing application connect to cloud

I have my application in Delphi with MySQL as a database. This is a Desktop application with local Database connected using ADO components.
I have another web application done in PHP and MYSQL.
I want to merge both databases and connect the Delphi application to the cloud MySQL database.
Do I need to put all my logic in PHP scripts and access them from Delphi?
How can the Delphi-to-cloud connection be established?
You can use FireDAC to connect to a database located in the cloud, as long as your provider allows that connectivity.
But exposing your database to the internet is not a secure architecture. As you suggest yourself by naming it, a much better architecture is REST. The idea is to write server-side software - it could be PHP - that accepts REST requests from a client, executes them (accesses the database) and sends a reply to the client.
Today, REST requests frequently use JSON to pass requests and receive replies. JSON is supported by Delphi. In short, it is a text representation of an object's properties.
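Just to illustrate the shape of such an exchange (in C rather than Delphi, with a made-up endpoint and payload), a REST call is little more than an HTTP POST carrying a JSON body; the PHP side parses the JSON, runs the SQL against MySQL, and returns a JSON reply:

    #include <stdio.h>
    #include <curl/curl.h>

    int main(void)
    {
        curl_global_init(CURL_GLOBAL_DEFAULT);

        CURL *curl = curl_easy_init();
        struct curl_slist *hdrs = curl_slist_append(NULL, "Content-Type: application/json");

        /* Hypothetical endpoint: a PHP script that inserts an order and returns JSON. */
        curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/api/orders.php");
        curl_easy_setopt(curl, CURLOPT_HTTPHEADER, hdrs);
        curl_easy_setopt(curl, CURLOPT_POSTFIELDS,
                         "{\"customer\":\"Smith\",\"total\":19.95}");

        CURLcode rc = curl_easy_perform(curl);   /* reply body goes to stdout by default */
        if (rc != CURLE_OK)
            fprintf(stderr, "request failed: %s\n", curl_easy_strerror(rc));

        curl_slist_free_all(hdrs);
        curl_easy_cleanup(curl);
        curl_global_cleanup();
        return 0;
    }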
If accessing the database directly is what you really want, look at this video by Stephen Ball showing how to access an MS-SQL database in the Azure cloud. It would be pretty much the same with MySQL.

How to integrate database into software

I am practicing writing an app which uses MySQL to manipulate data.
My concern is that if my client's machine doesn't have MySQL pre-installed, it won't be able to run my app, will it? So is there any way to embed the database server right into the app, or to run the app without a database server? I wonder how all the software out there manipulates data. It's not like we need to install some kind of database server before installing each app.
MySQL is a client/server database engine, which means that you must install the client and server separately from each other, and they communicate over some kind of network protocol.
If you want to deploy a stand-alone application, you are probably better off using a library like SQLite, which gives you as much of the functionality of a SQL database as you are likely to need in such an app, but instead operates on local files and doesn't require installation of a separate server.
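A minimal sketch of that approach in C (file name, table and data are placeholders): the application links the SQLite library, opens a file, and there is nothing separate to install on the client machine.

    #include <stdio.h>
    #include <sqlite3.h>

    int main(void)
    {
        sqlite3 *db;
        sqlite3_stmt *stmt;

        /* Creates app.db in the working directory if it does not exist yet. */
        if (sqlite3_open("app.db", &db) != SQLITE_OK) {
            fprintf(stderr, "open failed: %s\n", sqlite3_errmsg(db));
            return 1;
        }

        sqlite3_exec(db,
                     "CREATE TABLE IF NOT EXISTS notes(id INTEGER PRIMARY KEY, body TEXT);"
                     "INSERT INTO notes(body) VALUES('first run');",
                     NULL, NULL, NULL);

        sqlite3_prepare_v2(db, "SELECT id, body FROM notes", -1, &stmt, NULL);
        while (sqlite3_step(stmt) == SQLITE_ROW)
            printf("%d: %s\n", sqlite3_column_int(stmt, 0),
                   (const char *)sqlite3_column_text(stmt, 1));

        sqlite3_finalize(stmt);
        sqlite3_close(db);
        return 0;
    }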
You can embed MySQL in your application, see MySQL as an Embedded Database for details.
Your application could also work with a remote database: when configuring the database connection you set your DB server's IP address (host), port and login credentials. So in order to write an application that deals with data manipulation, you need to connect to some database instance.
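For example, with the MySQL C API the host, port and login credentials are simply parameters of the connect call (server name, account and query below are placeholders):

    #include <stdio.h>
    #include <mysql.h>

    int main(void)
    {
        MYSQL *conn = mysql_init(NULL);

        /* Host, user, password, schema and port identify the remote server. */
        if (!mysql_real_connect(conn, "db.example.com", "appuser", "secret",
                                "appdb", 3306, NULL, 0)) {
            fprintf(stderr, "connect failed: %s\n", mysql_error(conn));
            return 1;
        }

        if (mysql_query(conn, "SELECT id, name FROM customers") == 0) {
            MYSQL_RES *res = mysql_store_result(conn);
            MYSQL_ROW row;
            while ((row = mysql_fetch_row(res)))
                printf("%s %s\n", row[0], row[1]);
            mysql_free_result(res);
        }

        mysql_close(conn);
        return 0;
    }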
If you are working on a client-server application, the MySQL database may be accessed either directly by means of MySQL (this solution may be suitable for internal networks), or through some database-side service which provides an API and which can be accessed from the client via some application-level protocol (for example, XML-RPC).
If you are working on a stand-alone client application, there are other database solutions which can be used: SQLite, Derby. As an alternative to the database approach, you may consider storing data in XML / YAML format.
I suggest wrapping the db layer in your application with a simple interface that covers all operations performed on the database. This way you will not have to go into the details of the atomic operations on the database, and through the unified interface you can create several different classes which access different databases in the same way (the same interface). These classes should implement the interface and all the necessary methods inside (for example using ADO). Now your db layer is visible throughout the program in the same way. The wrapper class can follow the singleton design pattern and be used as a single instance anywhere in your application. A universal class design (interface) gives you many benefits, such as the possibility of substituting another layer if necessary (switching to a totally different db).
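A sketch of that idea in C (all names are made up for illustration): the rest of the program codes against the interface, and a MySQL-, SQLite- or ADO-backed implementation can be swapped in behind it.

    #include <stdio.h>

    /* A unified interface the rest of the application codes against. */
    typedef struct db_api {
        void *ctx;                                         /* backend-specific state */
        int  (*open)(void *ctx, const char *conn_string);
        int  (*exec)(void *ctx, const char *sql);
        void (*close)(void *ctx);
    } db_api;

    /* A stub backend that only logs; a real one would call MySQL, SQLite, ADO, ... */
    static int  stub_open(void *ctx, const char *cs) { (void)ctx; printf("open %s\n", cs); return 0; }
    static int  stub_exec(void *ctx, const char *sql){ (void)ctx; printf("exec %s\n", sql); return 0; }
    static void stub_close(void *ctx)                { (void)ctx; printf("close\n"); }

    static db_api stub_backend = { NULL, stub_open, stub_exec, stub_close };

    /* Application code never mentions a concrete database. */
    int main(void)
    {
        db_api *db = &stub_backend;   /* could be a singleton returned by a factory */
        db->open(db->ctx, "app.db");
        db->exec(db->ctx, "INSERT INTO orders(total) VALUES(19.95)");
        db->close(db->ctx);
        return 0;
    }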

Is there a standard mysql connection pooling library for C?

I have a C application using the MySQL library for database access.
Is there a standard way to implement database connection pooling for such an application?
The C connector doesn't appear to support it.
The Zild Database Library, "a thread-safe high level multi-database connection pool library", looks very promising.
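Going by the library's documented API, usage looks roughly like this (connection URL and query are placeholders; error handling via its TRY/CATCH macros is omitted):

    #include <stdio.h>
    #include <zdb.h>

    int main(void)
    {
        /* One pool for the whole application; threads borrow and return connections. */
        URL_T url = URL_new("mysql://appuser:secret@localhost:3306/appdb");
        ConnectionPool_T pool = ConnectionPool_new(url);
        ConnectionPool_start(pool);

        Connection_T con = ConnectionPool_getConnection(pool);
        ResultSet_T rs = Connection_executeQuery(con, "SELECT id, name FROM customers");
        while (ResultSet_next(rs))
            printf("%d %s\n", ResultSet_getInt(rs, 1), ResultSet_getString(rs, 2));
        Connection_close(con);   /* returns the connection to the pool */

        ConnectionPool_stop(pool);
        ConnectionPool_free(&pool);
        URL_free(&url);
        return 0;
    }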
Previously I suggested that SQL Relay could be used to do this, amongst many other useful things, such as:
client-side caching
load-balancing across database instances
translating between different database access APIs
If the MySQL library is dynamically linked this can be done without recompiling the application.
When I last looked in 2009, the mailing list suggested SQL Relay might not be fully ready for production use, but that appears to have changed.

Is Microsoft Access a mere file or a database server?

A database server serves all requests, whether coming from localhost or from a remote client, and to listen for any request, a database server must run on a port.
As far as I know, Microsoft Access doesn't run on any port, and it is not possible to connect to Microsoft Access on a remote machine using
DriverManager.getConnection("URL", "user", "password");
but it is possible if your data source is MySQL, Oracle, etc., using
DriverManager.getConnection("jdbc:mysql://ipAddress:portNo/schemaName", "user", "password");
(if I am wrong, please correct me).
Please help me with the concept: is Microsoft Access a mere file for storing data (because it doesn't run on any port), or a database server (because a Type-1 driver is available for Microsoft Access, which means it must be a data source, since drivers are only available for data sources)?
Access does not provide networked connectivity beyond a file share. There is no "Access" port.
Access is not a database to begin with.
It is an application development environment that ships with a default database engine, Jet (or ACE in A2007, which is just an updated version of Jet), and that uses Jet MDBs or ACE ACCDBs for storing its application objects.
Your question is not about Access. It is about the Jet database engine.
Jet is not a server database. There is no process running on the server through which all communication with the Jet data store is managed.
Instead, Jet is a file-based database system. Each user runs Jet locally in memory, and opens the database file in shared mode. Locking of the database file is managed via the LDB file.
ODBC does not provide server functionality to Jet data. It is simply another user of a file.
Microsoft Access databases can be used over ODBC or using a shared file system, so from that standpoint they can be considered multiuser databases.
It is not really a database server from the standpoint that there is not one location that serves queries up to clients. Unless you are using ODBC, each "client" has its own copy of the database engine.
Access is not designed for many users, and does not have many of the properties that you normally think of when talking about database servers, including scalability and robustness.
MS Access is a file-based database system, but technically speaking, so are many other database systems. SQL Server, for example, will store its data in a single file and can behave in a way that's very similar to Access. Then again, SQL Server has many more features on top of that.
But is Access a database server? Well, that depends on your definition of what a server should do. It is possible to create an Access database and give it some server-like functionality by writing some code to "serve" your data to some client application. Been there, done that. And actually, Access has been popular in the past on cheaply hosted websites as the database to run e.g. a forum or guestbook on.
To make things more interesting, Access databases can be accessed through COM. And COM objects can be created on a remote system. So theoretically, through ADO you can already access an Access database on another machine.
Access is also reasonably able to handle multiple users and offers some basic security, if need be.
MS Access is also more than just a database file format, although most people tend to forget this. MS Access is part of MS Office and as such it provides much more functionality than just a file-based database system. (Then again, even Paradox is more than just a file-based database if you buy the complete product from Corel instead of just using the database files plus drivers.)
Btw, the term "server" can be confusing. You don't need to run something on a port to make it a server. Basically, a database server is just some program that provides database services to other programs and computers. With Access, you can technically do both, so yes: Access is a database server. (Albeit a very primitive one.)
In determining whether something is a server or not, the issue of whether it has ports is a red herring. Ports are simply one means of interprocess communication. As others have already noted, other servers use named pipes or shared memory to communicate with clients.
The architectural feature that really makes a server is process isolation. This is true whether you are talking about web servers, database servers, or display servers like X Windows. In each case you have some important resource that you want to guard very carefully. Therefore you don't let anything but a few select processes touch it. If another process wants access to that resource, they don't get to work with it directly. They have to send the server process a message, "Hey server please perform operation X on Y and send me the results". The channel used for sending the message is relatively unimportant, the key point is that some independent process is charged with managing the resource. Contrast this with Access (or as somebody pointed out more correctly the Jet database engine). If your application uses an Access database, then your process open file handles on the database, performs the record locking, and does the index lookups. This is all conveniently hidden by many layers of library calls, and it probably involves many switches to kernel space, but in the end it is still your process that is getting all the CPU cycles and doing all the work. This is true even if you are accessing the Access database via ODBC, which is really just another layer of library calls.
AFAIK, MS Access is a database and you can connect to it through ODBC etc, but it is not a database server in the way SQLServer, MySQL, Postgres or Oracle are database servers.
Access is a file that can be attached to via the Jet engine or many others. But it is a file. This means that if too many people attempt to connect to it, there have been stories of it becoming corrupt and the whole db being lost! It is nowhere near as powerful as the other database engines you mentioned.
It does not run on a port. It's just a file.
If you put the file on a Windows file share, then the protocol is SMB and the port is 445. The machine with the file is called a file server, so in a sense it is a server app, but MS Access isn't the server, the SMB bits are. What SMB doesn't do that a real SQL server would do is manage the concurrent access.

ODBC vs. newer methods for database management over the internet

I am taking on a legacy project in which database management was handled over the internet using an ODBC connection. The legacy program has recently been rewritten in C#. We are currently discussing how to improve the program and I am a bit uncomfortable with using ODBC to connect to the database. I have written routines to connect to a server using sockets and POST, PUT, and GET commands combined with cgi or php scripts and have read extensively about the AJAX paradigm which I see as the way forward. My colleague insists on using ODBC. What are the pros and cons of using an ODBC connection vs. a more modern approach?
Database-to-application protocols were never designed to be used over the internet. They are too chatty and difficult to secure. If you have the opportunity to do so, then you should consider encapsulating the database behind a properly-secured web service.
Those who don't know networking are doomed to reinvent it on port 80.
There's nothing more 'modern' about HTTP than about ODBC. Just be sure to wrap it in SSL and/or a VPN and use sensible access controls.
It will be a lot more efficient than HTTP, which wasn't designed for this. At the very least, HTTP commands add a lot of overhead for each operation. ODBC will get you far better latency (which is critical in client-driven DB designs).
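For reference, the direct ODBC path from C looks like the sketch below (DSN, credentials and query are placeholders; most error checking omitted). Whether to expose this directly or to put a web service in front of the database is exactly the trade-off being discussed here.

    #include <stdio.h>
    #include <sql.h>
    #include <sqlext.h>

    int main(void)
    {
        SQLHENV  env;
        SQLHDBC  dbc;
        SQLHSTMT stmt;
        SQLCHAR  name[256];
        SQLLEN   len;

        SQLAllocHandle(SQL_HANDLE_ENV, SQL_NULL_HANDLE, &env);
        SQLSetEnvAttr(env, SQL_ATTR_ODBC_VERSION, (SQLPOINTER)SQL_OV_ODBC3, 0);
        SQLAllocHandle(SQL_HANDLE_DBC, env, &dbc);

        /* The DSN would point at the remote server, ideally reachable only
           through a VPN/SSL tunnel as suggested above. */
        SQLDriverConnect(dbc, NULL,
                         (SQLCHAR *)"DSN=LegacyDb;UID=appuser;PWD=secret;", SQL_NTS,
                         NULL, 0, NULL, SQL_DRIVER_NOPROMPT);

        SQLAllocHandle(SQL_HANDLE_STMT, dbc, &stmt);
        SQLExecDirect(stmt, (SQLCHAR *)"SELECT name FROM customers", SQL_NTS);
        while (SQL_SUCCEEDED(SQLFetch(stmt))) {
            SQLGetData(stmt, 1, SQL_C_CHAR, name, sizeof name, &len);
            printf("%s\n", (char *)name);
        }

        SQLFreeHandle(SQL_HANDLE_STMT, stmt);
        SQLDisconnect(dbc);
        SQLFreeHandle(SQL_HANDLE_DBC, dbc);
        SQLFreeHandle(SQL_HANDLE_ENV, env);
        return 0;
    }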
How about using ODBC with a modern approach: web services? There are many advantages to this approach. For example:
Multiple client programs can use a single instance of the web service to access data. There is no need to write database-related code in each individual application.
Users need to install ODBC drivers and configure ODBC data sources only on the server machine that hosts the web service. Client programs can run on other machines over the network.
Client programs are not limited to .NET or the Windows platform. All they have to do to access the database is call a web service.
Database connections can be shared among different client programs.
Access to databases can be controlled and monitored from a central location (the web service).
Of course, there are some security issues and limitations to the complexity of your queries.
I had something similar in my office. They had lots of machines with VB.NET apps hitting the local database (regularly got it stuck with too many unused connections) and some web services that contacted an external database through an SSH/SSL tunnel.
We didn't really have a lot of problems with the external database unless the tunnel went down which was rare. You can probably also set up a VPN.
If you are interested in using AJAX/JSON/REST technologies to communicate with a database, you can use an abstraction layer like DBSlayer.
Using a Type IV "direct" database driver, like the System.Data.SqlClient namespace for C# or a JDBC driver for Java, is 2-3 times more efficient (better performance) than going through the ODBC layer.
I would avoid ODBC because it's slower, and I don't think it's any easier.