How to git a MySQL database made in phpMyAdmin [closed]

I would like to use git on a database made with phpMyAdmin so I can share it with my collaborators. Is there a way to use git together with phpMyAdmin? If so, I have not found it on the web, so I would like to init git in the folder where the database is stored, but I can't find that directory. Where could it be? I'm working on Ubuntu 13.10.
Thank you for the help.

I would recommend exporting the database (under the Export tab in phpMyAdmin) and storing the exported form of your database as a flat file in a directory you create as a git repo. Re-export whenever you want to update it.
This also gives you the flexibility to export a specific database instead of the whole system. Just highlight the database you want to export before you click on the Export tab.
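For example, a minimal command-line sketch of that workflow (assuming the mysqldump client is available; the database, user and path names are placeholders):

# dump the database as plain SQL into the repo directory
# (--single-transaction gives a consistent snapshot for InnoDB tables)
mysqldump -u youruser -p --single-transaction yourdb > ~/db-repo/yourdb.sql
cd ~/db-repo
git add yourdb.sql
git commit -m "Snapshot of yourdb"

Because the dump is plain text, git can diff successive snapshots, which it cannot do usefully with the binary files in the data directory.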
It's really not the intended use of git (or any distributed source control system) to store binary files. There's no way to merge them, so they simply overwrite rather than merge. Add to this pushes from other people's local repos, and it just becomes a mess of people clobbering each other's work.
Also the data files in your data directory are updating continually. They would not update the committed copy in git until you do a git commit. There's no guarantee that you'd commit these files in a safe manner. That is, you could save the last table a few seconds later than the first table, and then you'd have an inconsistent snapshot of the data.
To ensure a consistent export, you'd have to make sure no applications are making any changes, or else lock all tables.
But to answer your question, you can find out the location of a given MySQL instance's data directory in phpMyAdmin, by clicking the Variables tab, and searching for the variable datadir. The value of that variable is the location of your data directory on the MySQL server.
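The same value can also be read from the command line, if that's more convenient (a one-line sketch; adjust the user as needed):

mysql -u root -p -e "SHOW VARIABLES LIKE 'datadir';"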

Related

Git Repository And Database Schemas

My company uses Git for “version control”, etc. Currently it is used for C, C# and Python. I have been asked to add the database schemas, together with the more “complex” SQL (no idea when it becomes “complex”), to the repository. Currently the database is backed up after changes have been made to the schemas or after data has been added (at the moment it is purely a development environment). Having looked at Git, database schemas and the like do not really seem (to me) to map onto it. Should I be considering another package for “source control” to complement the existing MySQL backups?
Thank you...
Assuming you just want to store the SQL scripts that can recreate your DB schema without any data in it (CREATE TABLE, VIEW, INDEX, etc.), then Git seems like a perfectly good option. Git is generally good for version control of textual data, such as SQL scripts.
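As a rough sketch, such schema-only scripts can be produced with mysqldump and committed like any other source file (the database and path names here are placeholders):

# --no-data dumps only the CREATE statements, no INSERTs; --routines includes stored procedures
mysqldump -u youruser -p --no-data --routines yourdb > schema/yourdb.sql
git add schema/yourdb.sql
git commit -m "Update yourdb schema"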
The rule of thumb is not to store large, frequently modified files in git, for several reasons (outside the scope of this answer: delta heuristics, snapshots, etc.), so I would suggest not adding them to git directly and instead storing them in a submodule as a standalone repository.
This way you can still use git to track changes, but your git repository will not grow to a huge size (pack files) and you can manage it inside your project.
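A minimal sketch of that layout, assuming the large dumps live in their own repository (the URL and paths are hypothetical):

# inside the main project repository
git submodule add https://example.com/yourteam/db-dumps.git db-dumps
git commit -m "Track database dumps as a submodule"
# collaborators clone the project together with the submodule
git clone --recurse-submodules https://example.com/yourteam/project.git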
If you only want to store the SQL scripts, git is a good choice since it will handle them like any other text file.

Software Engineering: Combining many modular programs in Unix [closed]

Upfront my question is: Are there any standard/common methods for implementing a software package that maintains and updates a MySQL database?
I'm an undergraduate research assistant and I've been tasked with creating a cron job that updates one of our university's in-house bioinformatics databases.
Instead of building one monolithic binary that does every aspect of the work, I've divided the problem into subtasks and written a few Python/C++ modules to handle the different tasks, as listed in the pipeline below:
1. Query the remote database for a list of updated files for the given time interval (monthly / weekly / daily) - implemented as a Python module; URLs of the updated file(s) are written to stdout.
2. Read in the relative URLs of updated files and download them to a local directory - Python module.
3. Unzip each archive of new files - bash script.
4. Parse the files into CSV format - C++ module.
5. Run a MySQL query to insert the CSV files into the database - obviously just a bash script.
I'm not sure how to go about combining these modules into one package that can be easily moved to another machine, say if our current servers run out of space and the DB needs to be copied to another file-system (it's already happened once before).
My first thought is to create a bash script that pipes all of these modules together given that they all operate with stdin/stdout anyway, but this seems like an odd way of doing things.
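For concreteness, a rough sketch of what that wrapper might look like (the module names below are made up; substitute your own):

#!/bin/bash
# update_db.sh - hypothetical wrapper chaining the pipeline stages via stdin/stdout
set -euo pipefail
./query_updates.py --interval weekly | ./download_files.py --dest ./downloads
./unzip_archives.sh ./downloads
./parse_to_csv ./downloads ./csv
./load_csv.sh ./csv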
Alternatively, I could write my C++ code as a python extension, package all of these scripts together and just write one python file that does this work for me.
Should I be using a package manager so that my code is easily installed on different machines? Does a simple zip archive of the entire updater with an included makefile suffice?
I'm extremely new to database management, and I don't have a ton of experience with distributing software, but I want to do a good job with this project. Thanks for the help in advance.
Inter-process communication (IPC) is a standard mechanism of composing many disparate programs into a complex application. IPC includes piping the output of one program to the input of another, using sockets (e.g. issuing HTTP requests from one application to another or sending data via TCP streams), using named FIFOs, and other mechanisms. In any event, using a Bash script to combine these disparate elements (or similarly, writing a Python script that accomplishes the same thing with the subprocess module) would be completely reasonable. The only thing that I would point out with this approach is that, since you are reading/writing to/from a database, you really do need to consider security/authentication with this approach (e.g. can anyone who can invoke this application write to the database? How do you verify that the caller has the appropriate access).
Regarding distribution, I would say that the most important thing is to ensure that you can find -- for any given release, and earlier ones -- a snapshot of all components and their dependencies at the versions they were at when that release was made. You should set up a code repository (e.g. on GitHub or some other service that you trust), and create a release branch at the time of each release that contains a snapshot of all the tools at that point. That way if, God forbid, the one and only machine on which you have installed the tools fails, you will still be able to instantly grab a copy of the code and install it on a new machine (and if something breaks, you will be able to go back to an earlier release and binary search until you find out where the breakage happened).
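In git terms that can be as little as a branch or an annotated tag per release (the names below are only examples):

git checkout -b release/2014-06          # branch frozen at this release
git tag -a v1.2 -m "June 2014 release"   # or an annotated tag
git push origin release/2014-06 v1.2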
In terms of installation, it really depends on how many steps are involved. If it is as simple as unzipping a folder and ensuring that the folder is in your PATH environment variable, then it is probably not worth the hassle to create any special distribution mechanism (unless you are able to do so easily). What I would recommend, though, is clearly documenting the installation steps in the INSTALL or README documentation in the repository (so that the instructions are snapshotted) as well as on the website for your repository. If the number of steps is small and easy to accomplish, then I wouldn't bother with much more. If there are many steps involved (like downloading and installing a large number of dependencies), then I would recommend writing a script that can automate the installation process. That being said, it's really about what the University wants in this case.
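If you do automate it, even a short, well-commented script is usually enough; a purely hypothetical sketch:

#!/bin/bash
# install.sh - hypothetical installation sketch; adjust paths and dependencies to your project
set -e
pip install --user -r requirements.txt      # Python dependencies for the query/download modules
make -C parser                              # build the C++ CSV parser
mkdir -p "$HOME/db-updater"
cp -r scripts/ parser/parse_to_csv "$HOME/db-updater/"
# append a weekly cron entry that runs the updater (keeps existing entries)
(crontab -l 2>/dev/null; echo "0 3 * * 0 $HOME/db-updater/update_db.sh") | crontab -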

Keep MySQL files as 'per-project'?

Currently I work at two places - at work and at home. I have a problem with keeping things up to date. For files I've solved my problem (I use a private SVN and commit from phpStorm), but I still have no idea about MySQL. Currently, I just export my tables when I'm leaving, but it isn't a very good way of doing it (I know myself; sooner or later I'll forget to do it).
My question is: can I store MySQL data files on per-project basis, so I could commit it into SVN along with other files?
You could make use of a post-commit hook that dumps the database, and a hook on update that imports the dump.
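A rough sketch of the two halves, assuming you wire them into whatever hook or run-before/run-after mechanism your SVN client offers (all names are placeholders):

# dump_db.sh - run before committing; writes the dump into the working copy
mysqldump -u youruser -p --single-transaction projectdb > db/projectdb.sql
svn add db/projectdb.sql 2>/dev/null || true   # no-op if the file is already versioned

# load_db.sh - run after svn update on the other machine
mysql -u youruser -p projectdb < db/projectdb.sql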

Set site's /image folder to be protected [closed]

I have a website which uses many files inside /images and other directories that the HTML pages reference for display.
What I'd like to do is protect the images so that a user can't go to the root (/images) in this case, and see a directory listing of all of the files in the folder.
I only want the website to display the photo.
I found a perfect example:
http://edge2.mobafire.com/images/champion/icon/tryndamere.png
The image is used by the HTML page and displays perfectly when directly accessing the image in the URL, but the following links are all protected:
Example 1: http://edge2.mobafire.com/images/champion/icon
Example 2: http://edge2.mobafire.com/images
Thanks!
That is typically a webserver configuration parameter. I believe you can achieve the same thing with .htaccess rules (which I do not know off the top of my head).
Knowing what kind of webserver your website runs on and also if you are running the server yourself, or if you are using a shared hosting account will be necessary to further answer the question.
Assuming you are cool, and you run your own server using Apache2 -- you would edit your virtual host (or httpd.conf if you do not use virtual hosts) and find the definition for your root directory like so:
example vhost configuration /etc/apache2/vhost/mywebsite.com.conf:
<Directory "/var/www/website.com/htdocs">
Options Indexes FollowSymLinks
</Directory>
The Option "Indexes" is what tells the webserver that it's OK to view all files within a directory that has no index file. To disable this functionality like you are asking you would remove the word "Indexes" or prepend a hyphen (-) in front of the work Indexes.
If you do not use Apache or do not have root access to your server -- good keywords to ask your hosting provider are "How do I disable indexing in folders that have no index file" or similar.
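On shared hosting where per-directory overrides are allowed, the usual equivalent is a one-line .htaccess file in the images directory (assuming the host's AllowOverride setting permits the Options directive):

# /images/.htaccess
Options -Indexes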
Drop a blank index.html file in the folder.

How to export an entire Access database to SQL Server? [closed]

I've just got a lovely Access database, so the first thing I want to do is to move it over to a normal database management system (sqlexpress), but the only solution I've found sounds like craziness.
Isn't there an "export database to .sql" button somewhere? I have around 50 tables and this export might run more than once, so it would be great if I didn't have to export all the tables manually. Generating a .sql file (with table creation statements and inserts) would also be great, since it would allow me to keep it under version control.
I guess if it's not possible to do something simple like this I'd appreciate any pointers to do something similar.
Is there a reason you don't want to use Management Studio and specify Microsoft Access as the data source for your Import Data operation? (Database->Tasks->Import, Microsoft Access as data source, mdb file as parameter). Or is there a reason it must be done from within Microsoft Access?
There is a tool from the SQL Server group - SQL Server Migration Assistant for Access (SSMA for Access). There have been comments stating it's a better tool than the Upsizing Wizard included in Access.
A quick-and-dirty way to upsize Jet/ACE tables to any ODBC-accessible database engine:
1. Create an ODBC DSN for your target database.
2. In Access, select a table and choose EXPORT from the File menu. Choose ODBC as the type and then select your DSN.
This will export the table and its data with data types that your ODBC driver indicates are most compatible with Jet/ACE's data types. It won't necessarily guess right, and that's why you likely wouldn't do this with SQL Server (for which there are tools that do better translating). But with non-SQL Server databases, this can be an excellent starting place.