So I'm going to attempt to create a basic monitoring tool in VB.NET. I'd like some advice on how to tackle the logging and reporting side of things, so I'd appreciate responses from users who have a better idea than me and can point out far more efficient ways of doing things.
So my plan is to have a client tool which reads values from a MySQL database and refreshes every x interval; I'm thinking 10-15 minutes at the moment. This side of the application is quite easy: I can get something to read a database every x amount of time and then change labels and display alerts based on the results. This is all well documented, and I'm probably okay with that.
The second part is a client that sits in the system tray of the server gathering the required information. The system tray part will probably be the trickiest bit of this, but that's not really part of my question.
So I assume I can use the normal information-gathering commands, store the results (perhaps as strings), then connect to the same database and write them to the relevant fields. For example, with a MySQL table called "server" and a column titled "Connection", I could check whether the server has an internet connection, store the result as 1 for yes or 0 for no, and then send a MySQL UPDATE to set the "Connection" value to 0 or 1.
Then, in the monitoring tool, I assume I can run a MySQL query to check the "Connection" column: if the value is 0, change a label or flag an error, and if it's 1, report that connectivity is okay?
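To make that concrete, here is a rough sketch of both halves in Java/JDBC rather than VB.NET (the connection details and the id column are made up; the server table and Connection column are the ones described above):

import java.sql.*;

public class ConnectionCheck {
    // Made-up connection details; substitute your own.
    static final String DB_URL = "jdbc:mysql://dbhost/monitoring";

    // Agent side: record whether the server currently has connectivity.
    static void reportStatus(boolean online) throws SQLException {
        try (Connection db = DriverManager.getConnection(DB_URL, "user", "pass");
             PreparedStatement ps = db.prepareStatement(
                     "UPDATE server SET `Connection` = ? WHERE id = 1")) {
            ps.setInt(1, online ? 1 : 0);
            ps.executeUpdate();
        }
    }

    // Monitor side: read the flag back and decide what to display.
    static boolean isOnline() throws SQLException {
        try (Connection db = DriverManager.getConnection(DB_URL, "user", "pass");
             Statement st = db.createStatement();
             ResultSet rs = st.executeQuery(
                     "SELECT `Connection` FROM server WHERE id = 1")) {
            return rs.next() && rs.getInt(1) == 1;
        }
    }
}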
My main questions about the above are listed below.
Is using a MySQL database the most efficient way of doing something like this?
Obviously if my database goes down there's no more reporting; I still think that's a con I'll have to live with, though.
Is storing everything as values within the code the best way to store my data?
Is there any particular format I should use for the MySQL column? I was thinking maybe tinyint(9)?
Is the above method redundant and pointless?
I assume all these database connections could cause some unwanted server load; however, the 15-minute refresh time should combat that.
Is there a way to properly guard against timing delays, where the client hasn't updated in time and the reporter picks up stale data? Perhaps a fail-safe such as a column containing the last-updated time?
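As a sketch of that last fail-safe (assuming you add the last_updated column yourself and the agent sets it on every write), the monitor could treat any row older than a couple of refresh intervals as stale rather than trusting its 0/1 flag:

import java.sql.*;

// Fail-safe read: ignore the 0/1 flag when the row is stale.
// Assumes the agent also runs:
//   UPDATE server SET `Connection` = ?, last_updated = NOW() WHERE id = 1
static String connectivityStatus(Connection db) throws SQLException {
    try (Statement st = db.createStatement();
         ResultSet rs = st.executeQuery(
             "SELECT `Connection`, last_updated < NOW() - INTERVAL 30 MINUTE AS stale " +
             "FROM server WHERE id = 1")) {
        if (!rs.next() || rs.getBoolean("stale")) {
            return "NO DATA (agent missed its check-in)";
        }
        return rs.getInt("Connection") == 1 ? "OK" : "DOWN";
    }
}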
You probably don't need the tool that gathers information per se. The web app (real-time monitor) can do that, since the clients are storing their information in the same database. The web app can access the database every 15 minutes and display the data, without the intermediate step of saving it again. This will provide the web app with the latest information instead of a potential 29-minute delay.
In other words, the clients are saving the connection information once. Don't duplicate it in the database.
MySQL should work just about as well as anything.
It's a bad idea to hard-code "everything". You can use application settings or a MySQL table if you need to store IPs, etc.
In an application like this, the conversion will more than offset the data savings of a tinyint. I would use the most convenient data type.
Related
I'm new to Java and just started writing some JavaFX applications.
My current project is to write an application for a consulting company that stores a list of customers, adds them to a queue and serves them one by one. There are a few staff members, and each of them will run a copy of the application on their PC.
What I've done so far:
create Customer.class to handle personal info and store it in a MySQL db
create Staff.class to handle staff info
create Service.class to handle the kinds of services available to customers
create Consultation.class to handle the info of a particular consultation, such as the date, the customer being served, which staff member is providing the service, the services offered and the outcome
create an ObservableArrayList, store the data in the MySQL db, and display the data in a TableView on each client PC (a sketch of this wiring follows the list)
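For reference, a minimal sketch of that ObservableList-to-TableView wiring (the Customer property name and the DAO call are made up):

import javafx.collections.FXCollections;
import javafx.collections.ObservableList;
import javafx.scene.control.TableColumn;
import javafx.scene.control.TableView;
import javafx.scene.control.cell.PropertyValueFactory;

// Inside the controller, e.g. in initialize(). The TableView redraws
// automatically whenever this ObservableList changes on *this* client;
// the hard part is learning about changes made on other clients.
ObservableList<Customer> customers = FXCollections.observableArrayList();
TableView<Customer> table = new TableView<>(customers);

TableColumn<Customer, String> nameCol = new TableColumn<>("Name");
nameCol.setCellValueFactory(new PropertyValueFactory<>("name")); // assumes Customer.getName()
table.getColumns().add(nameCol);

customers.setAll(loadCustomersFromDb()); // hypothetical DAO call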
What I want is this: after a staff member edits the data in the list, the changes are updated on the TableViews of the other client PCs automatically.
The possible solutions I can think of include:
Option 1
Program the application to query the db regularly for an update.
This method is simpler to implement, but I don't want to keep the MySQL server busy with non-stop queries from a number of clients, and I don't want any delay between a data write and the update on the other clients. There are more than 10 clients; if each client polls once a second, that means at least 10 queries per second and the server never rests. I don't want to put that stress on the server's hard disk.
Option 2
Program the applications to broadcast a message every time they write data to the db, and have the other clients query the database whenever they receive a broadcast. I'd prefer to do it this way, but I'm not familiar with network programming, which means I'll have to spend some time on it before I can continue the project.
Which of the above is the better choice? Is there another way to keep the TableViews on the clients synchronized?
Before choosing, you may want to consider optimizing them:
Option 1 seems quite expensive, as it has to poll frequently. But you can optimize it by using a connection pool and specifying a sensible time interval (minimum 10 sec) for fetching the data.
Option 2 is more convincing, as it applies a lazy-loading concept. You may consider looking into socket programming to notify all clients to fetch data (see the sketch after the links below).
It's quite hard to say which one is the better option. On balance, I'd go with the first approach if your application inserts data frequently; otherwise go with the second one.
An alternative solution: listening for data changes.
Here are some Q&As whose solutions may help you implement this:
How to implement a DB listener in Java
How to make a database listener with java?
How to listen to new DB records through java
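If you go with option 2, a minimal notify-then-fetch sketch using Java multicast could look like the following; the group address and port are invented, and every client simply re-queries the database when it hears a ping:

import java.net.DatagramPacket;
import java.net.InetAddress;
import java.net.MulticastSocket;
import java.nio.charset.StandardCharsets;

public class ChangePing {
    // Invented multicast group/port; pick your own on a real LAN.
    static final String GROUP = "230.0.0.1";
    static final int PORT = 4446;

    // Writer side: after a successful DB write, tell every client to refresh.
    static void notifyChange() throws Exception {
        try (MulticastSocket sock = new MulticastSocket()) {
            byte[] msg = "refresh".getBytes(StandardCharsets.UTF_8);
            sock.send(new DatagramPacket(msg, msg.length,
                    InetAddress.getByName(GROUP), PORT));
        }
    }

    // Reader side: block until a ping arrives, then re-query the db
    // and update the TableView (on the JavaFX thread).
    static void listenForChanges(Runnable reQuery) throws Exception {
        try (MulticastSocket sock = new MulticastSocket(PORT)) {
            sock.joinGroup(InetAddress.getByName(GROUP));
            byte[] buf = new byte[64];
            while (true) {
                sock.receive(new DatagramPacket(buf, buf.length));
                reQuery.run(); // e.g. Platform.runLater(() -> reloadTable())
            }
        }
    }
}

Note the ping carries no data; clients still read the authoritative state from the database, so a lost packet only delays an update until the next write.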
I have an app that stores some data locally in sqlite,
I have a server that stores the same data in a mysql database,
Both tables have a timestamp column that indicates the time the row was edited.
What I want to do is sync the data so it matches,
So if another device changes the central data on the server, it is pushed down to all devices.
Currently I achieve this by ...
In the app I store the time I last made a server read.
I ask the server for all data that has changed since ...
I make a read about every 30 seconds.
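A minimal sketch of that "changed since" read (assuming an open JDBC Connection; the table and column names are invented):

import java.sql.*;

// Pull only the rows edited since the last read; returns the newest
// server-side timestamp seen, so the next read keys off the server's
// clock rather than the device's. `items` / `edited_at` are invented names.
static Timestamp pullChanges(Connection db, Timestamp lastSync) throws SQLException {
    try (PreparedStatement ps = db.prepareStatement(
            "SELECT id, payload, edited_at FROM items " +
            "WHERE edited_at > ? ORDER BY edited_at")) {
        ps.setTimestamp(1, lastSync);
        try (ResultSet rs = ps.executeQuery()) {
            while (rs.next()) {
                // ...upsert the row into the local sqlite copy here...
                lastSync = rs.getTimestamp("edited_at");
            }
        }
    }
    return lastSync;
}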
My issue: what happens when the clocks change? (This will potentially cause issues.)
What is the standard way of achieving what I want? The project is very early in development, so I can change approach if there is a much better way of achieving what I want.
Thanks
I want to have ONE single mysql connection, used by EVERY user, that selects the data all the time and updates it if specific conditions are met (like a placed bid), preferably even when no user is visiting the website, if that's even possible?
So, for the last few days I've been googling all the time, trying hard to figure out how to solve my issue, but it seems there's nobody with enough knowledge of this to help me with my problem. So I'll try to ask my question as simply as possible without confusing you with my code. (But if you're interested in seeing the code: http://pastebin.com/dRFzWtEH)
However, this is all about an auction website with a live countdown timer, and I just want to run a node.js server that SELECTs the data every second and sends it over a WebSocket, showing all users visiting the website the countdown and price updates (on bids) in real time.
I accomplished this whole task using single mysql queries, but then I ran into errors. The author of the GitHub node-mysql module then suggested that I use a MySQL pool. But there is next to nothing to be found about the specific aim stated in the first sentence of this question.
Now I want to ask in general: how could I accomplish this, and is it even possible, or does at least one user have to be on my website?
What would the code/code-structure/logical process look like?
And I guess I don't need to close the connection at all, so I won't need functions like connection.end()?
No, don't worry about connection pooling. It is not a big deal in MySQL.
Furthermore a "pool" has a problem -- it must clear out all settings, #variables, transaction state, etc, etc, before allowing the next 'client' to use the pooled connection. This can take time, especially if the client is far from the server.
MySQL's connection/disconnection time is very low, unlike competing products.
If you are developing a Web product, then keep in mind that HTTP is "stateless". That is, you cannot hang onto a connection from one 'page' to the next 'page'. Hence, no 'state' can be saved.
Edit
If you have "across the pond" latency problems (100-200 ms between the US and Europe), a client-side connection pool could be very useful. However, if the pool software is injecting commands to reset things, that could totally defeat the pooling.
If you can turn on the 'general log' (in a hosted service, you may have to use log_output=TABLE), do so to see what extra commands are injected.
Also, consider combining multiple client SQL statements into Stored Procedures to cut down on back-and-forth.
Also consider either moving the MySQL server closer to the client, or moving the client closer to the MySQL server, depending on how the end-user-to-client back-and-forth compares to the client-to-MySQL traffic.
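As a sketch of the stored-procedure point above (the procedure name and parameters are invented for the auction scenario), several round trips collapse into one CALL:

import java.math.BigDecimal;
import java.sql.*;

// One round trip instead of several: `place_bid` is a hypothetical stored
// procedure that validates the auction, inserts the bid and returns the
// new price in a single server-side call.
static BigDecimal placeBid(Connection db, int auctionId, int userId,
                           BigDecimal amount) throws SQLException {
    try (CallableStatement cs = db.prepareCall("{CALL place_bid(?, ?, ?, ?)}")) {
        cs.setInt(1, auctionId);
        cs.setInt(2, userId);
        cs.setBigDecimal(3, amount);
        cs.registerOutParameter(4, Types.DECIMAL);
        cs.execute();
        return cs.getBigDecimal(4);
    }
}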
OK so I'm kinda new to databases in general. I understand the basic theory behind them and have knocked up the odd Access DB here and there.
One thing I'm struggling to learn about is the specifics of how e.g. an SQL query accesses a database.
So say you have a scenario where there's a database on a LAN server (let's say it's MS Access for argument's sake). You run some SQL query or other on it from a client machine. Does the client machine have to download the entire database to run said query (even if the result of the query is just one line)? Or does it somehow manage to get just the data it wants down the ol' CAT5? Does the server have to be running anything to do that? I can't quite understand how the client could get JUST the query results without the server having to do some of the work...
I'm seeing two conflicting stories on this matter when googling stuff.
And so this leads on to the next question (which may already be answered): if you CAN query a DB without having to get the whole damn thing, and without the server running any other software, can the same be done with a CSV? If not, why not?
Reason I ask is I'm developing an app for a mobile device that needs to talk to a db or CSV file of some kind, and it'll be updating records at a pretty high rate (barcode scanning), so I don't want the network to grind to a halt (it's a slow bag of [insert relevant insult] as it is). The less data travelling between device and server, the better.
Thanks in advance
The various SQL servers are just that: a server. It's a program that listens for client queries and sends back a response. It is more than just its data.
A CSV file, or "flat file" is just data. There is no way for it to respond to a query by itself.
So, when you are on a network, your query is sent to the server, which does the work of finding the appropriate results. When you open a flat file, you're using the network and/or file system to read/write the entire file.
Edit to add a note about your specific usage. You'll probably want to use a database engine, as the queries are going to be the least amount of network traffic. For example, when you scan a barcode, your query may be as simple as the following text:
INSERT INTO barcode_table (`code`, `scan_date`, `user`) VALUES ('1234567890', '2011-01-24 12:00:00', '1');
The above string is handled by the database engine and the code (along with whatever relevant support data) is stored. No need for your application to open a file, append data to it, and close it. The latter becomes very slow once files get to a large size, and concurrency can become a problem with many users accessing it.
If your application needs to display some data to your user, it would request specific information the same way, and the server would generate the relevant results. So, imagine a scenario in which the user wants a list of products that match some filter. If your products were books, suppose the user requested a list by a specific author:
SELECT products.title, barcode_table.code
FROM products
JOIN barcode_table ON barcode_table.product_id = products.id -- hypothetical join key
WHERE products.author = 'Anders Hejlsberg'
ORDER BY products.title ASC;
In this example, only those product titles and their barcodes are sent from the server to the mobile application.
Hopefully these examples help make a case for using a structured database engine of some kind, rather than a flat file. The specific flavor and implementation of database, however, is another question unto itself.
Generally speaking, relational databases are stored on a remote server, and you access them via a client interface. Each database vendor has client software that you'd install on your own computer to access the database on a server. The entire DB is not sent back to the client when a query is executed, although the server can send very large result sets if you are not careful about how you structure your query. Generally speaking, the flow is like this:
1. A database server listens for clients to connect
2. A client connects and issues a SQL command to the database
3. The database builds a query plan to figure out how to get the result
4. The plan is executed and the results are sent back to the client.
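In client code, that whole round trip is only a few lines; the query text goes over the wire and just the matching rows come back (the connection details are made up, the table is the one from the earlier example):

import java.sql.*;

public class RoundTrip {
    public static void main(String[] args) throws SQLException {
        // Step 1: connect to the listening server. Steps 2-4: the SQL text
        // goes over the wire; the server plans and executes it, and only
        // the matching rows are streamed back.
        try (Connection db = DriverManager.getConnection(
                "jdbc:mysql://server/stock", "user", "pass"); // made-up details
             PreparedStatement ps = db.prepareStatement(
                     "SELECT code, scan_date FROM barcode_table WHERE code = ?")) {
            ps.setString(1, "1234567890");
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getString("code"));
                }
            }
        }
    }
}

Compare that with the flat-file case, where the client has to pull the whole file across the network just to find those rows.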
CSV is simply a file format, not a fully functional platform like a relational database.
I'm building a web app. This app will use MySQL to store all the information associated with each user. However, it will also use MySQL to store sys admin type stuff like error logs, event logs, various temporary tokens, etc. This second set of information will probably be larger than the first set, and it's not as important. If I lost all my error logs, the site would go on without a hiccup.
I am torn on whether to have multiple databases for these different types of information, or stuff it all into a single database, in multiple tables.
The reason to keep it all in one, is that I only have to open up one connection. I've noticed a measurable time penalty for connection opening, particularly using remote mysql servers.
What do you guys do?
First, I must say, I think storing all your event logs and error logs in the db is a very bad idea; you may want to store them on the filesystem instead.
You will only need error logs or event logs when something in your web app goes unexpectedly wrong. Then you download the file and examine it, that's all. There's no need to store them in the db; it will slow down your db and your web app.
As an answer to your question: if you really want to do that, you should separate them, and you should find a way to keep your pages running even when your event log and error log databases are under load and responding slowly.
Going with two distinct databases (one for your application's "core" data, and another one for "technical" data) might not be a bad idea, at least if you expect your application to have a lot of users:
it'll allow you to put one DB on one server, and the other DB on a second server
and you can think about scaling a bit more later: more servers for the "core" data, and still only one for the "technical" data, or the opposite
if the "technical" data is not as important, you can (more easily) have two distinct backup processes / policies
having two distinct databases, and two distinct servers, also means you can run heavy calculations on the technical data without impacting the DB server that hosts the "core" data -- and those calculations can be useful, on logs and stuff like that.
as a side note: if you don't need that kind of "reporting" calculation, maybe storing this data in a DB is not useful, and files would do perfectly?
Maybe opening two connections takes a bit more time, but that difference is probably rather negligible, is it not?
I've worked a couple of times on applications that used two databases:
one "master" / "write" database, used only for writes,
and one "slave" database (a replication of the first one, to several slave servers), used for reads.
This way, yes, we sometimes opened two connections, but one server alone would not have been able to handle the load...
Use connection pooling anyway, so the time to get a connection is not a problem. But if you have 2 connections, transaction handling becomes more complicated. On the other hand, sometimes it's handy to have 2 connections: if something goes wrong in the business transaction, you can roll it back and still log the failure on the admin connection. But I would still stick to one database.
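That failure-logging pattern could look roughly like this (a sketch with two JDBC connections; the error_log table is invented):

import java.sql.*;

// Roll back the business work but still record the failure through a
// separate admin/log connection, whose write commits independently.
static void runBusinessWork(Connection business, Connection admin) throws SQLException {
    business.setAutoCommit(false);
    try {
        // ... business statements on `business` here ...
        business.commit();
    } catch (SQLException e) {
        business.rollback(); // undo the business transaction
        try (PreparedStatement log = admin.prepareStatement(
                "INSERT INTO error_log (message) VALUES (?)")) {
            log.setString(1, e.getMessage());
            log.executeUpdate(); // autocommit on `admin` keeps the log entry
        }
        throw e;
    }
}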
I would only use one database, mostly for the reason you supply: you only need one connection to reach both the logging and the user data.
Depending on your programming language, some frameworks (J2EE as an example) provide connection pooling; with two databases you would need two pools. In PHP, on the other hand, the performance cost shows up each time you set up a connection (or two).
I see no reason for two databases. It'd be perfectly acceptable to have tables that are devoted to "technical" and "business" data, but the logical separation should be sufficient.
Physical separation doesn't seem necessary to me, unless you mean an application and data warehouse star schema. In that case, it's either real-time updates or, more typically, a nightly batch ETL.
It makes no difference to mysql in any way whether you use separate "databases"; they are simply catalogues.
It may make setting permissions easier; this is a legitimate reason to do it. Other than that, it is exactly the same as keeping the tables in the same db (except you can have several tables with the same name ... but please don't).
Putting them on separate servers might be a good idea however, as you probably don't want your core critical (user info, for example) data mixed in with your high-volume, unimportant data. This is particularly true for old audit data, debug logs etc.
Also short-lived data, such as search results, sessions etc, could be placed on a different server - it presumably has no high availability[1] requirement.
Having said that, if you don't need to do this, dump it all on one server where it's easier to manage (backup, provide high availability, manage security, etc).
It is not generally possible to take a consistent snapshot of data on >1 server. This is a good reason to only have one (or one that you care about for backup purposes)
[1] Of the data, not the database.
In MySQL, InnoDB has an option of storing all tables of a certain database in one file, or having one file per table.
Having one file per table is somewhat recommended anyway, and if you do that, it makes a difference at the database storage level whether you have one database or several.
With connection pooling, one database or several is probably not going to matter either.
So, in my opinion, the question is if you'd ever consider separating the "other half" of the database to a separate server - with the separate server having perhaps a very different hardware configuration, such as no RAID. If so, consider using separate databases. If not, use a single database.