What is Client/Server Technology in DBMS? - mysql

I really want to know what Client/Server means in a DBMS, from the hardware, software, and architectural points of view.
What's the difference between Client/Server technology and a file handling system?

A Client and a Server are two separate entities -- hardware and/or software. The Client asks a question; the Server sits around waiting for questions and provides answers.
The "separate entities" is to emphasize they are logically separate, even though you may put them on the same hardware.
In databases, the Client says "SELECT ..."; the Server says "here's the result set for that query". Or it might say "there are no database rows satisfying that query". Or the Client says "Please INSERT ..."; the Server says "OK, that's done". Note, in this latter example, the "result" is more of just an "acknowledgement".
A database Client may, but does not have to, be on a separate physical computer from the database Server.
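A minimal sketch of that exchange in JDBC (assuming MySQL Connector/J on the classpath; the database name, table, and credentials are made up for illustration):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class ClientDemo {
    public static void main(String[] args) throws Exception {
        // The Client opens a connection to the Server (which may well be
        // the same machine, reached via localhost).
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/testdb", "user", "password")) {

            // Client asks a question; Server answers with a result set
            // (possibly empty, if no rows satisfy the query).
            try (PreparedStatement query = conn.prepareStatement(
                        "SELECT id, name FROM users");
                 ResultSet rs = query.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getInt("id") + ": " + rs.getString("name"));
                }
            }

            // Client asks for a change; Server replies with little more than
            // an acknowledgement (here, the number of rows affected).
            try (PreparedStatement insert = conn.prepareStatement(
                        "INSERT INTO users (name) VALUES (?)")) {
                insert.setString(1, "alice");
                int rows = insert.executeUpdate();
                System.out.println("Server acknowledged " + rows + " row(s)");
            }
        }
    }
}
```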

The terms "client" and "server" correspond to roles in a communication by two (or more) software components(like father and son in a family relationship).
Usually the software component that has the data and the logic for operating on that data is called the server, as it serves up data and activity. A software component that connects to that server, communicates with it, and does not hold all the data and logic itself is called the client; it is usually quite passive. Server and client are not bound to hardware: you can have an HTTP server on your workstation as well as a browser (an HTTP client). In practice you apply separation of concerns to hardware as well: you have big data stores with highly responsive hardware dedicated to the server software component, and lots of smaller workstations with a client software component to connect to the servers.
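The same role split shows up outside databases. As a small illustration, here is a sketch of an HTTP client written with Java's built-in java.net.http API (Java 11+); the URL is just a placeholder for whatever server you run:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class HttpRoleDemo {
    public static void main(String[] args) throws Exception {
        // This program plays the client role: it asks and waits for an answer.
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/index.html")) // placeholder
                .GET()
                .build();

        // The server (possibly running on this very machine) holds the
        // document and the logic; the client merely requests and consumes.
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
        System.out.println(response.body());
    }
}
```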
This concept can be applied to most software systems: databases (the server keeps the data, the client knows how to ask for it) or documents (HTTP servers hold the documents, manage them, and can even contain logic components, like PHP scripts or applications, with browsers usually acting as clients). Server and client are not opposites. With an application server, such as a SAP system, the server is usually also a client to other services. The application logic is usually separated from the database, so the application, being a server for the application clients, is (or has) a client to the database.
As the client/server view is a hierarchical division of software communication, you can also have components with equal rights. Some distributed architectures have peer components that communicate with each other, having the same abilities and logic, and possibly holding all or part of the data.
In a client/server separation of software, both components can be on the same hardware, but they can also communicate via networks and run on different hardware. Usually the server does the heavy lifting, so you can have a lot of lightweight clients that only send requests for the data and logic they currently need.
But none of that is a must. When a computer connects to another computer and copies all the logic (programs) and data from it in order to become another server, then during that copying the taking machine is the client and the giving machine is the server.
I'm not sure what you mean by "file handling systems". A file handling system is usually a software component that serves you data from a file system. Usually it is a local affair: the file system operates on the hard disks of one machine. But there are also distributed storages, like NAS (network-attached storage), where you again have client and server components connected through a network.
So, to sum up, the advantages of a client/server architecture are:
separation of concerns (this allows for specialisation)
independent scalability of the server and the clients
concentration of logic and data that work together (follows from separation of concerns); this makes maintenance of the logic on the server much easier (imagine having to update every browser to make a change in your application)

Hardware Component of a Client/Server System
It has three main parts: the clients, the network, and the database server.
Clients may be PCs, laptops, mobile phones, or tablets.
The network is the cabling, communication lines, NICs, hubs, routers, LANs, and WANs.
The server is a computer with enough processing speed, internal RAM, disk storage, and so on.
Software Component of a Client/Server System
It has two types: the client and the database server. Application software runs on the client side; it uses data stored on the server via SQL queries issued through data-access APIs such as JDBC and ADO.NET.
Architectural Component of a Client/Server System
It mainly uses two types: application servers and web servers. Business components are stored in the application server; web servers are used to host web applications and web services.

Related

Risk of data corruption over WAN with shared Access database

I am developing an application which uses an MS Access database (.mdb, not my decision) as a back-end. Recently I came across someone suggesting that using the JET engine over a WAN is not really a good idea, with a high risk of data corruption. Since my application should be doing just that (connecting to a database on a NAS (EDIT: not NAS, a shared network drive)), I got worried. Is it really that risky? If so, is there any workaround, or is an MS Access database just unusable for that kind of application?
EDIT
The front end is a .NET desktop application in C# (WPF). The system does not have many users, at most 10. Most of the time they will approach the database from the LAN, and 99% of the writing to the database will be done within the LAN (from the company's premises). However, there are some cases where they will connect to the NAS (EDIT: not NAS, a shared network drive) from outside the company over the network (from their homes).
If you have a 100 Mb/s fibre, it will be OK, but if your line is, say, an xDSL line, it is generally an absolute no-no.
Convince the powers that be to move the backend to a server engine like SQL Server where the Express version is free.
The scenario you describe is not a good fit for having an Access database as the back-end. The WAN users could very well find the application slow, but the NAS is the real cause for concern regarding corruption, and that would affect both LAN and WAN users.
Many (most?) NAS devices run on Linux and use Samba to provide Windows file-sharing services. The Access Database Engine apparently uses some low-level features of "real" Windows file sharing that Samba does not always fully implement (ref: here).
In fact, the only time I've seen repeated corruption problems with a shared Access back-end (and a properly distributed front-end) was when a client moved their file shares from an older Windows server to a newer NAS device. The Access application continued to work for the most part, but every few months they would find that the primary keys of some tables would disappear after they did a Compact and Repair on the back-end database file. That never happened while their file share was on the Windows server.
Splitting a front-end from a back-end removes the majority of the risk of corruption. Of course, with Access there's always some possibility of corruption, and if you're looking for something that reduces the risk to close to nil then you might want to consider SQL Server or MySQL. However, using Access is fine as long as you take proper precautions.
For example, you might want to look into record-locking on tables that will get edited, to prevent multiple simultaneous writes. Backing up your DB on a regular basis is always good, too.

Web servers vs application servers, Open source database Security vs Enterprise Database security

I am working on a spec for a startup building a financial broker-check website. It involves storing information about financial advisers and users' payment details (so it obviously needs a lot of security). What kinds of databases are best suited for the application? Is MySQL or one of its open source variants enough, or is it better to go with Oracle Enterprise, etc.? Also, any info about the usefulness of application servers over traditional web servers (cloud-based or not) in this scenario, and about the preferred scripting language (PHP, Ruby, Python) for secure web applications, would be welcome.
Your choice of language, database, etc. has a relatively small impact on the security of your application. The developer's understanding of how to write secure code and the developer's understanding of the features provided by their tools is far more important. It is entirely possible to write a secure application on an open source LAMP stack. It is entirely possible to write a secure application on a completely closed source stack. It is also very easy to write insecure applications on any stack.
An enterprise database like Oracle will (depending on the edition, the options that are licensed, and the add-ons that are purchased) provide a host of security functions that may be useful. You can transparently encrypt the data at rest, you can encrypt the data when it flows over the network to the app server, you can prevent the DBA from viewing sensitive data, you can audit the actions of the DBA and other users, etc. But these sorts of things really only come into play when you've written a reasonably secure application to begin with. It does you little good to encrypt all the data if your application is vulnerable to SQL injection attacks and can be easily hacked to present all the decrypted data to the attacker, for example.
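To make the SQL injection point concrete, here is a hedged Java/JDBC sketch; the advisers table and its columns are hypothetical:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Statement;

public class InjectionDemo {

    // VULNERABLE: user input is concatenated straight into the SQL text.
    // Input such as  ' OR '1'='1  returns every row, no matter how well
    // the data was encrypted at rest.
    static ResultSet findAdviserUnsafe(Connection conn, String name)
            throws Exception {
        Statement st = conn.createStatement();
        return st.executeQuery(
                "SELECT * FROM advisers WHERE name = '" + name + "'");
    }

    // SAFER: a parameterized query. The driver ships the value separately
    // from the SQL, so it can never be reinterpreted as SQL syntax.
    static ResultSet findAdviserSafe(Connection conn, String name)
            throws Exception {
        PreparedStatement ps = conn.prepareStatement(
                "SELECT * FROM advisers WHERE name = ?");
        ps.setString(1, name);
        return ps.executeQuery();
    }
}
```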

How does a server farm handle a database?

I have been doing some research on servers for a website I want to launch. I thought of a configuration with a server using RAID 10, backed up to a NAS which also has a RAID 10 configuration. This should keep data safe in 99.99%+ of cases.
My problem appeared when I thought about the need for a second server. If I ever require more processing power, and thus more storage for users, how can I connect a second server to my primary one and make them act as one as far as the database (MySQL) is concerned?
I mean, I don't want to replicate my first DB on the second server and load-balance the requests - I want to use just one DB (maybe external) and let both servers use it at the same time. Is this possible? And is the option of backing up MySQL data on a NAS viable?
The most common configuration (once you scale up from a single box) is to put the database on its own server. In many web applications the database, rather than the web server, is the bottleneck, so that tends to be the first hardware scale-up step.
This also allows you to put additional security between the database and web server - firewalls are common; different user accounts etc. are pretty much standard.
You can then add web servers to the load balancer, all talking to the same database, as long as your database can keep up.
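As a sketch of what "all talking to the same database" means in practice, each web server is configured with the address of the one dedicated database box instead of localhost. Assuming MySQL Connector/J, it might look like this (the host name and credentials are placeholders):

```java
import com.mysql.cj.jdbc.MysqlDataSource;
import javax.sql.DataSource;

public class SharedDbConfig {
    // Every web server in the load-balanced pool builds its DataSource the
    // same way, pointing at the one dedicated database box, not at localhost.
    static DataSource sharedDatabase() {
        MysqlDataSource ds = new MysqlDataSource();
        ds.setServerName("db.internal.example.com"); // the dedicated DB server
        ds.setPort(3306);
        ds.setDatabaseName("appdb");
        ds.setUser("webapp");
        ds.setPassword("secret"); // in practice, read from configuration
        return ds;
    }
}
```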
Having more than one web server also helps with resilience - you can have a catastrophic hardware event on one webserver and the load balancer will direct the traffic to the remaining machines.
Scaling the database server performance is a whole different story - though typically you use very beefy machines for the database, and relative lightweights for the web servers.
To add resilience to the database layer, you can introduce clustering - this is a fairly complex thing to keep running, but protects you against catastrophic failure of a single machine.
Yes, you can back up MySQL to a NAS.

AS3 with mysql connection with sockets or PHP?

So, we want to move away from AIR (Adobe is dropping support, and the SQLite API implementation is really bad, among other things).
I want to do 3 things:
Connect from a Flash (not web) application to a local MySQL database.
Connect from a Flash (not web) application to a remote MySQL database.
Connect from a Flash (web) application to a remote MySQL database.
All of this can be done without any problem, however:
1 and 2 can be done (WITHOUT using a web server) using, for example, this:
http://code.google.com/p/assql/
3 can also be done using the above, as far as I understand.
Questions are:
If you can connect to the MySQL server with a socket, why use a web server (for example with PHP) as an intermediary? Why not connect directly?
I have done this a lot of times, using AMFPHP for example, but wouldn't it be faster to go directly?
In the case of accessing the local machine, it would make for a simpler deployment: only the Flash application plus the MySQL server are required, with no need to also install a web server.
Is this assumption correct?
Thanks a lot in advance.
The necessity of a separate data access layer usually stems from the way people build applications: the layered architecture, the distribution of the workload, etc. SQL servers usually don't provide a very robust API for user management, session management, and so on, so one uses an intermediate layer between the database and the client application that can handle the issues not related directly to storing the data. Security plays a significant role here too. There are other concerns as well: for example, sometimes you would like to close all access to the database for maintenance, but if you don't have an intermediate layer to notify users of your intention, you'd leave them wondering whether your application is still alive. The data access layer can also do a lot of caching, actually saving the trips to the database you would otherwise make from the client (of course, the client can cache too, but YMMV).
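As a toy illustration of that caching idea (all names here are hypothetical, and a real layer would also handle invalidation and expiry):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class CachingDataAccessLayer {
    private final Map<Integer, String> cache = new ConcurrentHashMap<>();

    // The intermediate layer answers repeated requests from its cache,
    // saving the client a round trip to the database.
    public String loadUserName(int userId) {
        return cache.computeIfAbsent(userId, this::queryDatabase);
    }

    // Stand-in for the real database query (hypothetical).
    private String queryDatabase(int userId) {
        // ... SELECT name FROM users WHERE id = ? ...
        return "user-" + userId;
    }
}
```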
However, in some simple cases, an intermediate layer is overhead. More than that, I'd say that if you can, do without an intermediate layer - less code makes better programs - but chances are that you will find yourself needing that layer for one reason or another.
Because connecting remotely over the internet poses huge security problems. You should never deploy an application that connects directly over the internet to a database. That's why AIR and Flex don't have remote MySQL drivers: they should never be used except for building development-type tools. And even if you did build a tool that could connect directly, any decent network admin is going to block access to the database from anywhere outside the DMZ and the internal network.
First, in order for your application to connect to the database, the database port has to be exposed to the world. That means I won't have to hack your application to get your data; I just need to hack your database, and I can cut your application out of the picture entirely, because you were stupid enough to leave your database port open to me.
Second, most databases don't encrypt credentials or data travelling over the wire. While most databases support SSL connections, most people don't turn it on because applications want super-fast data access and don't want to pay the SSL encryption overhead, blah blah blah. Furthermore, most applications sit in the DMZ with their database behind a firewall, so it's unlikely anything between the server and the database could eavesdrop on their conversation. However, if you connected directly from an AIR app to the database, it would be very easy for me to insert myself in the middle and watch the traffic coming out of your database, because you're not using SSL.
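For what it's worth, turning encryption on is often just a connection option. With MySQL Connector/J 8.x, for example, something along these lines should work (the property name is driver-specific, so verify it against your driver's documentation):

```java
import java.sql.Connection;
import java.sql.DriverManager;

public class TlsConnectionDemo {
    public static void main(String[] args) throws Exception {
        // Connector/J 8.x: sslMode=REQUIRED makes the driver refuse to
        // connect without TLS. (Older 5.x drivers used useSSL=true instead.)
        String url = "jdbc:mysql://db.example.com:3306/appdb?sslMode=REQUIRED";
        try (Connection conn = DriverManager.getConnection(url, "user", "password")) {
            System.out.println("Connected, and the driver required TLS");
        }
    }
}
```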
There is a whole host of privacy and data-integrity problems with what you are suggesting that you cannot guard against if you allow an RIA direct access to the database it's using.
Then there are some smaller nagging issues: modern features like publishing reports to a central server (so users don't have to install your software to see them), sending email, social features, web service integration, cloud storage, collaboration, or real-time messaging are things you don't get if you don't use a web application. Middleware also gives you control over your database, so you can pool connections to handle a larger load. Using a web application brings more to the table than just security.
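A minimal sketch of the connection-pooling idea that middleware gives you - a fixed set of reusable connections shared by many clients (simplified; production pools such as HikariCP also handle validation, timeouts, and leak detection):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class TinyConnectionPool {
    private final BlockingQueue<Connection> pool;

    // Open a fixed number of connections up front; many clients share them.
    public TinyConnectionPool(String url, String user, String pass, int size)
            throws Exception {
        pool = new ArrayBlockingQueue<>(size);
        for (int i = 0; i < size; i++) {
            pool.add(DriverManager.getConnection(url, user, pass));
        }
    }

    // Borrow a connection; blocks if all are in use, which caps the load
    // the middleware can put on the database.
    public Connection borrow() throws InterruptedException {
        return pool.take();
    }

    // Hand the connection back for reuse instead of closing it.
    public void giveBack(Connection conn) {
        pool.offer(conn);
    }
}
```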

How do you build and deploy a scalable web services infrastructure?

I have a client asking for this as a requirement, and I haven't done this before. What does he mean by web service infrastructure?
That phrase encompasses a wide variety of technical aspects. Your infrastructure is all of the components that make up the systems that run a web business or application, including hardware. So it refers to your server and network setup, your bandwidth and connections in and out, your database setup, backup solutions, web server software, code deployment methods, and anything else used to successfully run a web business with high reliability and uptime and low error and bug incidents.
In order to make such a thing scalable, you have to architect all these components together into something that will work smoothly with growth over time. A scalable architecture should be flexible enough to handle sudden traffic spikes.
Methods used to facilitate scalability include replicated databases, clustered web servers, load balancers, RAID disk striping, and network switching. Your code has to take much of this into account.
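To give a flavour of the load-balancing piece, a round-robin picker over a set of web servers could be as small as this (a toy sketch, not production code; the host names are invented):

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

public class RoundRobinBalancer {
    private final List<String> backends;
    private final AtomicInteger next = new AtomicInteger();

    public RoundRobinBalancer(List<String> backends) {
        this.backends = backends;
    }

    // Hand out backends in rotation so no single web server takes all traffic.
    public String pick() {
        int i = Math.floorMod(next.getAndIncrement(), backends.size());
        return backends.get(i);
    }

    public static void main(String[] args) {
        RoundRobinBalancer lb = new RoundRobinBalancer(
                List.of("web1.internal", "web2.internal", "web3.internal"));
        for (int k = 0; k < 5; k++) {
            System.out.println(lb.pick()); // web1, web2, web3, web1, web2
        }
    }
}
```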
It's a tough service to provide.
The first thing that comes to mind is an enterprise service bus.
He probably means some sort of "infrastructure" to run a lot of complex interacting web services.
It could be an enterprise application that you call via a web service and that can run on many instances of a web application server; or a single instance that is very nicely multithreaded and scales to many CPUs; or loads of different web services that all talk to each other, often via message queues, until you have something that breaks all the time and requires a huge team of people to maintain. You might as well throw in a load of virtual machines to have a virtualised, scalable, re-deployable web service infrastructure (i.e., loads of Tomcats or JBosses in Linux VMs, ready to deploy as needed, one app per VM).
Then there is physical scalability. Is there enough CPU power for your needs? Is there enough bandwidth between physical nodes to carry all these messages and SOAP transactions between machines? Is there enough storage? Is the storage available over a fast, low-latency interconnect? Is the database nicely fed with CPU power, bandwidth, and a disk system that doesn't lag? Is there a database backup? And when a single machine can't handle the load of a particular function, you need load balancers - though these are good for redundancy and for applying software updates while remaining live as well.
Is there a site backup? Or are you scaling globally - will there be multiple data centres around the globe? Do you have redundant links to the internet from each data centre? What happens when a site goes down? How is data replicated between sites, to reduce inter-site communications, and how do these data caches and pushes work?
And so on and so forth. But your client probably just wants a web service that can be load balanced without thrashing (i.e., two or more instances can share data/sessions/etc.; it depends on the application really), with easy database configuration and backup. Ease of deployment is desirable, so make the install simple. Or even provide a Linux VM for them to add to their VM infrastructure. Talk to their sysadmin to see what they currently do.
This phrase is often used as a marketing term by companies who sell some part of what they'll call a "scalable web services infrastructure".
Try to find out from the client exactly what they need. Do they have existing web services? Do they have existing business logic they've decided to expose as web services? Do they have customers who are asking to be able to access your client's systems through web services?
Does your client even know what a web service is?