Good Day,
I am trying to learn how to save my data to a MySQL database using C in real time.
I am using a Raspberry Pi and an external web server where the data will be saved. I am using C to read the data from the sensors and would like to save it to my external database, but I do not know how to proceed, as I am not that familiar with using C and MySQL together. My main concern is how to make sure that my data is real-time: as soon as my sensors produce data, it should be saved to the database.
I'm thinking of running an infinite loop inside main with an if statement that serves as a trigger: whenever there is data from the sensors, it is saved to the MySQL server.
But I am not sure that is the most efficient way of doing this, so any better ideas on how to capture my data in real time using C and save it to MySQL would be greatly appreciated.
In PHP I would simply have made a cron job for this, but since I will be doing this in C, I am at a loss as to how to proceed, or whether my idea is correct.
You are looking at two independent problems:
Retrieving the data at a fixed interval.
Saving the data to a database.
For the first, there are two common methods. The first is polling, which means staying in a loop and periodically checking whether new data is available. The second is using interrupts. Choose whichever is most appropriate for your problem, but to begin with you can use polling, and once the program works, move to interrupts.
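A minimal sketch of the polling approach (read_sensor here is a hypothetical stand-in for whatever driver actually reads your sensor):

    #include <stdio.h>
    #include <unistd.h>

    /* Hypothetical stand-in for your real sensor driver: returns 1 and
     * fills *value when a fresh reading is available, 0 otherwise. */
    static int read_sensor(double *value)
    {
        *value = 23.5;  /* pretend we measured something */
        return 1;
    }

    int main(void)
    {
        double value;

        for (;;) {                      /* poll forever */
            if (read_sensor(&value)) {  /* trigger: new data arrived */
                printf("got reading: %f\n", value);
                /* this is where the MySQL insert would go */
            }
            usleep(100 * 1000);         /* sleep 100 ms between polls */
        }
    }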
For the second, install MySQL and the MySQL C connector (libmysqlclient): go to the MySQL site, download it, and install it. The connection API is pretty simple, and there are a lot of examples online, both for the connection code and for the SQL syntax.
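Here is a minimal connection sketch using the MySQL C API; the host, credentials, database, and table names are placeholders for your own:

    /* Build with: gcc insert.c $(mysql_config --cflags --libs) */
    #include <stdio.h>
    #include <mysql/mysql.h>

    int main(void)
    {
        MYSQL *conn = mysql_init(NULL);
        if (conn == NULL) {
            fprintf(stderr, "mysql_init failed\n");
            return 1;
        }

        if (!mysql_real_connect(conn, "db.example.com", "user", "password",
                                "sensors", 0, NULL, 0)) {
            fprintf(stderr, "connect error: %s\n", mysql_error(conn));
            mysql_close(conn);
            return 1;
        }

        /* In the real program the value would come from the sensor;
         * prefer mysql_stmt_* prepared statements for untrusted input. */
        if (mysql_query(conn, "INSERT INTO readings (value) VALUES (23.5)"))
            fprintf(stderr, "insert error: %s\n", mysql_error(conn));

        mysql_close(conn);
        return 0;
    }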
An efficient way to do such things is the hardware interrupt. Read your board's documentation to check whether the hardware supports it.
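On the Raspberry Pi, one common way to handle a hardware interrupt from C is the wiringPi library. A sketch, assuming your sensor raises a GPIO line (the pin number here is an assumption; use your own wiring):

    /* Build with: gcc isr.c -lwiringPi */
    #include <stdio.h>
    #include <unistd.h>
    #include <wiringPi.h>

    /* Called by wiringPi whenever the pin sees a rising edge. Keep ISRs
     * short: set a flag or queue the event here, and do the MySQL
     * insert in the main loop. */
    static void sensor_isr(void)
    {
        printf("sensor fired\n");
    }

    int main(void)
    {
        if (wiringPiSetup() == -1) {
            fprintf(stderr, "wiringPiSetup failed\n");
            return 1;
        }

        if (wiringPiISR(0, INT_EDGE_RISING, &sensor_isr) < 0) {
            fprintf(stderr, "failed to register ISR\n");
            return 1;
        }

        for (;;)
            sleep(1);  /* main thread idles; events arrive via the ISR */
    }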
Related
I am using SSIS to move data between a local MSSQL server table and a remote MySQL table (data flow, OLE DB source and ODBC destination). This works fine if I am only moving 2 rows of data, but it is very slow with the table I actually want, which has 5,000 rows and fits into a CSV of about 3 MB: moving it currently takes about 3 minutes using SSIS's options, whereas performing the steps below takes 5 seconds at most.
I can export the data to a CSV file, copy it to the remote server, and then run a script to import it straight into the DB, but this requires more steps than I would like, as I have multiple tables I want to do this for.
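(For reference, the fast manual import essentially boils down to a LOAD DATA LOCAL INFILE statement; a rough sketch via the MySQL C API, where the host, file, and table names are placeholders and the server must have local_infile enabled:)

    /* Build with: gcc bulkload.c $(mysql_config --cflags --libs) */
    #include <stdio.h>
    #include <mysql/mysql.h>

    int main(void)
    {
        unsigned int enable = 1;
        MYSQL *conn = mysql_init(NULL);

        /* Allow the client to send a local file to the server. */
        mysql_options(conn, MYSQL_OPT_LOCAL_INFILE, &enable);

        if (!mysql_real_connect(conn, "remote.example.com", "user",
                                "password", "mydb", 0, NULL, 0)) {
            fprintf(stderr, "connect error: %s\n", mysql_error(conn));
            return 1;
        }

        /* One round trip loads the whole CSV, which is why this path
         * is so much faster than row-by-row inserts. */
        if (mysql_query(conn,
                "LOAD DATA LOCAL INFILE 'table.csv' INTO TABLE mytable "
                "FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '\"' "
                "LINES TERMINATED BY '\\n'"))
            fprintf(stderr, "load error: %s\n", mysql_error(conn));

        mysql_close(conn);
        return 0;
    }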
I have tried row-by-row and batch processing, but both are very slow in comparison.
I know I can use the steps above, but I like using the SSIS GUI and would have thought there was a better way of tackling this.
I have googled multiple times but have not found anything that fits the bill, so I am calling on outside opinions.
I understand SSIS has its limitations, but I would hope there is a better and faster way of achieving what I am trying to do. If SSIS is really that bad, I may as well just rewrite everything as a script and be done with it; but I like the look and feel of the GUI and would prefer to move my data in this nice, friendly way of seeing things happen.
Any suggestions or opinions would be appreciated.
Thank you for your time.
As noted above, I have tried the SSIS options, including a third-party one, CozyRoc, but now and again it sent some data with errors (the column delimiting seemed off) or copied a different number of rows; enough problems to make me not trust the data.
So I'm moving from MS Access to MySQL:
In MS Access you can store certain INSERT, DELETE, and UPDATE queries as objects alongside your tables. That way, anyone who doesn't understand computers very well can click on the objects and automatically run the queries that alter the master table for various business functions.
In MySQL, where and how do you store these queries? I seem to be able to create only tables. When I write a piece of code in the SQL editor, I can only save it somewhere like my local desktop, not onto the MySQL database itself, where it would be accessible to my coworkers.
If you can't save it onto the server, how would I write a piece of code and execute it within the database in a way that is easily usable by others?
Thanks
The answer to this question is going to depend on your environment, your users, and your bandwidth to support any given solution. You are gaining a lot by making the switch from Access to MySQL, but you are losing some of the WYSIWYG features (e.g., Access forms that can bind directly to your data source).
There are many approaches:
If your users are more advanced, simply having access to the database through MySQL Workbench may suffice. From there they can run views and stored procedures, or create their own custom queries (a sketch of the stored-procedure route follows this list).
Another option would be to script your objects in Python and provide a simple GUI using Tkinter. Python is generally considered an easy-to-use language, with readily available MySQL connectors, and Tkinter is its de facto default GUI toolkit.
Using the LAMP stack (Linux, Apache, MySQL, PHP) is another hugely popular approach, with MySQL as the backend database.
There is also nothing stopping you from continuing to use Access as a front end, linking to your MySQL DB as an external data source.
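To sketch the stored-procedure route mentioned above: the CREATE PROCEDURE statement is executed once and the query then lives on the server itself, where coworkers can run it with CALL from any client. The procedure, table, and credentials here are hypothetical, and the same statements could just as easily be run from MySQL Workbench instead of the C API:

    /* Build with: gcc proc.c $(mysql_config --cflags --libs) */
    #include <stdio.h>
    #include <mysql/mysql.h>

    int main(void)
    {
        MYSQL *conn = mysql_init(NULL);
        if (!mysql_real_connect(conn, "localhost", "user", "password",
                                "business", 0, NULL, 0)) {
            fprintf(stderr, "connect error: %s\n", mysql_error(conn));
            return 1;
        }

        /* One-time setup: store the query on the server. */
        mysql_query(conn, "DROP PROCEDURE IF EXISTS archive_old_orders");
        if (mysql_query(conn,
                "CREATE PROCEDURE archive_old_orders() "
                "DELETE FROM orders "
                "WHERE order_date < NOW() - INTERVAL 1 YEAR"))
            fprintf(stderr, "create error: %s\n", mysql_error(conn));

        /* What coworkers then run, from any client: */
        if (mysql_query(conn, "CALL archive_old_orders()"))
            fprintf(stderr, "call error: %s\n", mysql_error(conn));

        mysql_close(conn);
        return 0;
    }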
I hope this provides enough info to help you begin whittling down your options.
I'm building a simple commenting system using Node.js, and I need to integrate it with a PHP project running on an Apache server. I need to trigger Node.js whenever changes are made to a MySQL database table used by that project. Is it possible to do this with an Apache server, and if so, how? Any ideas or suggestions are greatly welcome.
I guess there are a few options you could take, but I don't think you can get any sort of triggered action from within MySQL or Apache themselves. IMHO, these are the approaches you can take:
You can expose an HTTP API from Node: every time you need to notify the Node app, insert the data into MySQL using PHP and then issue a simple GET request to trigger Node (see the sketch after this list).
You could use some sort of queuing system (RabbitMQ, Redis, etc.) to manage the messages to and from the two applications, orchestrating the flow of data between them (and on to the DB).
You could poll the database from Node and check for new rows. This is fairly inefficient and quite tricky, but it sounds closest to what you want.
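A rough sketch of the first option's insert-then-notify flow, shown in C with libmysqlclient and libcurl (in your setup the PHP side would perform the equivalent INSERT and GET; the table and the Node endpoint URL are hypothetical):

    /* Build with: gcc notify.c $(mysql_config --cflags --libs) -lcurl */
    #include <stdio.h>
    #include <mysql/mysql.h>
    #include <curl/curl.h>

    int main(void)
    {
        /* 1. Insert the new comment. */
        MYSQL *conn = mysql_init(NULL);
        if (!mysql_real_connect(conn, "localhost", "user", "password",
                                "comments_db", 0, NULL, 0)) {
            fprintf(stderr, "connect error: %s\n", mysql_error(conn));
            return 1;
        }
        if (mysql_query(conn,
                "INSERT INTO comments (body) VALUES ('hello')"))
            fprintf(stderr, "insert error: %s\n", mysql_error(conn));
        mysql_close(conn);

        /* 2. Ping the Node app so it knows something changed. */
        curl_global_init(CURL_GLOBAL_DEFAULT);
        CURL *curl = curl_easy_init();
        if (curl) {
            curl_easy_setopt(curl, CURLOPT_URL,
                             "http://localhost:3000/new-comment");
            if (curl_easy_perform(curl) != CURLE_OK)
                fprintf(stderr, "notify failed\n");
            curl_easy_cleanup(curl);
        }
        curl_global_cleanup();
        return 0;
    }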
I was given access to a cluster today, along with a front end. The person who gave me access tells me I cannot start anything on the front end and that I should submit everything as a job. I have no idea what that means, but I'm thinking I am not supposed to start MySQL on the front end. If that is the case, how can I even use the database?
Is there a way I can add indexes without using the client-server setup? Or is it even possible to use a database on a cluster where I can only submit jobs?
I am guessing the person means that the client is actually executed as a "job". You might want to find out how they program the cluster.
For security reasons I'm in an environment where third-party apps can't access my DB. For this reason I need some service/tool/script (I don't know which yet; I'm open to the best option and still reading up before deciding what I'm going to do)
that lets me generate, on a regular basis (daily, weekly, monthly), a CSV file with all new/modified records for a certain application.
I should be able to automate this process and also export a new file at any time.
So it should keep track, for each application, of which records that application still needs.
Each application will need the data in a different format (CSV/XLS/SQL), and some fields will be needed by one application but not by another, so it should be fairly flexible.
What is the best option for me? Creating custom tables for each application, and extracting the modified data based on those?
I think the best thing here, assuming you have access to the server to set this up, is to make a small command-line program that does the relatively simple task you need; languages like Perl are good for this sort of thing, I believe (a rough sketch of such a tool appears below).
Once you have that tool made, you can schedule it through the server's OS to run every set amount of time: a Scheduled Task on a Windows server, or a cron job on a Linux server.
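A rough sketch of such a tool in C (the table, columns, credentials, and hard-coded cut-off timestamp are all hypothetical; a real version would persist the last-run time between invocations and escape the CSV fields properly):

    /* Build with: gcc export.c $(mysql_config --cflags --libs) */
    #include <stdio.h>
    #include <mysql/mysql.h>

    int main(void)
    {
        MYSQL *conn = mysql_init(NULL);
        if (!mysql_real_connect(conn, "localhost", "export_user",
                                "password", "appdb", 0, NULL, 0)) {
            fprintf(stderr, "connect error: %s\n", mysql_error(conn));
            return 1;
        }

        /* Pick up everything created or changed since the last export. */
        if (mysql_query(conn,
                "SELECT id, name, modified_at FROM records "
                "WHERE modified_at > '2015-01-01 00:00:00'")) {
            fprintf(stderr, "query error: %s\n", mysql_error(conn));
            return 1;
        }

        MYSQL_RES *res = mysql_store_result(conn);
        MYSQL_ROW row;
        FILE *csv = fopen("records_export.csv", "w");
        if (csv == NULL) {
            fprintf(stderr, "cannot open output file\n");
            return 1;
        }

        fprintf(csv, "id,name,modified_at\n");
        while ((row = mysql_fetch_row(res)) != NULL)
            fprintf(csv, "%s,%s,%s\n",
                    row[0] ? row[0] : "",
                    row[1] ? row[1] : "",
                    row[2] ? row[2] : "");

        fclose(csv);
        mysql_free_result(res);
        mysql_close(conn);
        return 0;
    }

Scheduled on a Linux server, the crontab entry would then look something like 0 6 * * * /usr/local/bin/export_tool to run it daily at 06:00.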
You can also (without having to set up the scheduled task, if you don't want to or can't) enable this small command-line application to be called via CGI, a mechanism that lets applications on the server be executed on demand by a web user. If you do enable this, though, I suggest you add some sort of locking so that it can only be run every so often and cannot be run five times at once.
EDIT
You might also want to look into database replication or adding read-only users; this saves a whole lot of messing around. Try to find a solution that does not split or duplicate data. You can set up users that are only able to access certain parts of the database in certain ways, for example SELECT-only access to particular tables.