I have used MySQL for a few months and have really liked it. I have a question about using procedures between different schemas.
To give some context, I am working with a local copy of a database from my job. When I create procedures for the database, some of them I want to upload to the server, but others I'd rather keep only on my local machine. However, the ones I keep locally get deleted whenever I load a fresh backup copy of the production database.
Where would be a safe place or way to save these procedures on my computer? Should I keep a separate schema for my local procedures, and if so, will I be able to call them from the backed-up schema? Is there another way to do this?
Yes, you can put the procedures in a separate schema. Any table reference inside a procedure should then be qualified with the schema name:
BEGIN
    SELECT ... FROM dbname.tablename;
END
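For example, you could keep the procedures in a schema of their own, which survives a restore of the production copy. A minimal sketch, where "local_procs" and "proddb" are made-up names standing in for your own schema and the restored production database:

-- One-time setup: a schema that the production restore never touches.
CREATE SCHEMA IF NOT EXISTS local_procs;

DELIMITER //
CREATE PROCEDURE local_procs.customer_report()
BEGIN
    -- Qualify every table with its schema so the procedure works
    -- regardless of the current default database.
    SELECT * FROM proddb.customers;
END //
DELIMITER ;

-- Callable from anywhere, including while USEing the restored schema:
CALL local_procs.customer_report();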
I'm fairly new to this kind of problem. I'm developing a web app, and I keep changing the DB design to improve it and add new tables.
Until a few days ago we hadn't published the app, so I would simply dump all the tables on the server and import my local version. But now we've passed version 1 and users are starting to use it, so I can't overwrite the server database anymore, yet I still need to update its design whenever I publish a new version. What are the best practices here?
How can I manage the differences between the local and server databases in MySQL? I need to preserve the data on the server and change only the design; the data in the local DB is just for testing.
All my previous apps were small, and I would change a single table or column by hand, but I can't keep track of all the changes that way now, since I might revert many of them later, and coordinating every team member like this is impossible.
Assuming you are not using a framework that provides a database migration tool, you need to keep track of the changes manually.
Create a folder sql_upgrades (or whatever name you like) in your code repository.
Whenever a team member updates the SQL schema, they create a file in this folder with the corresponding ALTER statements, and possibly UPDATE, CREATE TABLE, etc. Basically, the file contains all the statements used to update the dev database.
Name the files so that they are easy to manage and statements for the same feature stay grouped together. I suggest something like YYYYMMDD-description.sql, e.g. 20150825-queries-for-feature-foobar.sql (a sample file is sketched after these steps).
When you push to production, execute the files to upgrade your SQL schema in production. Only execute the files that have been created since your last deployment, and execute them in the order they were created.
Should you need to roll back a file, check the queries it contains and write queries to undo what was done (drop added columns, re-create dropped columns, etc.). Note that this is "non-trivial", as many changes cannot be rolled back fully (e.g. you can re-create a dropped column, but you will have lost the data inside).
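For example, a migration file in that folder might look like the following; the table and column names are invented purely for illustration:

-- 20150825-queries-for-feature-foobar.sql
ALTER TABLE users ADD COLUMN foobar_enabled TINYINT(1) NOT NULL DEFAULT 0;

CREATE TABLE foobar_settings (
    id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    user_id INT UNSIGNED NOT NULL,
    setting_value VARCHAR(255) NOT NULL
);

-- Backfill existing rows so the new feature is enabled for current users.
UPDATE users SET foobar_enabled = 1;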
Many web frameworks (such as Ruby on Rails) have tools that do exactly this process for you. They usually work together with the ORM provided by the framework. Keeping track of the changes manually in SQL works just as well.
I've exported a database via SSH, and I didn't add the --routines option, so the routines weren't exported.
Now I no longer have any access to that database, and I have only the one .sql file. Is there any way to restore or reconstruct the routines from the PHP code or the database structure?
No, sorry, in this case I think you're out of luck. Looking at the database structure, you won't be able to figure out what a routine might have done. Likewise, looking at the PHP code is probably not going to help. If you know what the routines did (for instance, manipulate data on insert, maintenance by deleting some rows, or some such) you can work through recreating it, but that's basically reverse engineering it based on what breaks when you try to run your application.
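For future dumps, note that mysqldump only exports routines when asked. A minimal example, where "dbname" is a placeholder for your database:

# --routines exports stored procedures/functions; --events exports scheduled events.
mysqldump --routines --events dbname > dbname_dump.sql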
I have a database on my local system and also a copy of it on a web server, so when I update data in my local database I then have to replace or add that data in the web database.
The problem is that I frequently change some specific records in the database for testing purposes.
So I want some mechanism to export specific records to an .sql file as INSERT statements.
Suppose I have made changes in table tbl1 and added 10 records to it. Right now I am manually adding the records or replacing the whole table on the web database.
So, is there any mechanism in MySQL, or in Workbench, that I can use to export specific records?
Any help with that?
The only automatic solution is to use replication, but that is probably not a good fit for your scenario. So what remains is some manual process. Here are some ideas:
Write a script that writes the specific records into a dump file, then use a different script to load this dump file into your target server.
If you frequently change the same records, you could create a script with INSERT statements that you edit for each new value and run against both your local and your remote (web) server.
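A sketch of the first idea, using mysqldump's --where option; "mydb", "tbl1", the host name, and the row condition are placeholders to adapt:

# Export only the matching rows of tbl1 as REPLACE statements,
# without CREATE TABLE, so existing rows on the target are overwritten.
mysqldump --no-create-info --replace --where="id >= 100" mydb tbl1 > tbl1_changes.sql

# Load the file into the web server's copy of the database:
mysql -h web.example.com -u webuser -p mydb < tbl1_changes.sql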
I need to create several databases at once. I have the .BAK files for the databases, and I would like to loop through those files and have SQL Server create the databases based on the names of the .BAK files.
I already have a query to create a database, but I seem to be having trouble with the loop.
How would I make SQL Server check my .BAK files and create the DBs accordingly?
Thanks!
I would take a different approach with this and leave the actual looping to a small program.
Let the file system handle the files, and issue a (stored) procedure call from your application to do the restore directly.
I know it's not the answer you are after, just giving you additional ideas...
I would use an external tool to do something like this.
Use client-side scripting to browse the directory (PowerShell, perhaps?) and then pass the .BAK file names to the SQL command line to create the databases.
You can also do this with xp_cmdshell, but it's not recommended, for the reasons listed in the article:
http://msdn.microsoft.com/en-us/library/ms175046.aspx
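That said, if xp_cmdshell is acceptable in your environment, a rough T-SQL sketch might look like this. The C:\Backups path is made up, xp_cmdshell must be enabled first, and you may need WITH MOVE clauses if the data/log file paths stored inside the backups collide:

-- List the .bak files (xp_cmdshell is disabled by default).
DECLARE @files TABLE (fname NVARCHAR(260));
INSERT INTO @files EXEC master..xp_cmdshell 'dir /b C:\Backups\*.bak';

DECLARE @fname NVARCHAR(260), @sql NVARCHAR(MAX);
DECLARE c CURSOR FOR SELECT fname FROM @files WHERE fname LIKE '%.bak';
OPEN c;
FETCH NEXT FROM c INTO @fname;
WHILE @@FETCH_STATUS = 0
BEGIN
    -- Name each database after its file, minus the .bak extension.
    SET @sql = N'RESTORE DATABASE ' + QUOTENAME(LEFT(@fname, LEN(@fname) - 4)) +
               N' FROM DISK = N''C:\Backups\' + @fname + N'''';
    EXEC (@sql);
    FETCH NEXT FROM c INTO @fname;
END
CLOSE c; DEALLOCATE c;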
I am looking into ways to encrypt MySQL stored procedure source code when it is installed in a client's local environment.
I did a lot of research on this topic and had no luck, except for one promising reply from gazzang.com.
Here is the reply from Gazzang. Let me know if someone has already tried this out.
We should be able to encrypt the table where stored procs and functions are stored - mysql.proc.
Thus OS users won't be able to read the contents of the stored procedures or functions.
I can't remember which internal table views are stored in, but the same should apply to them.
I am not sure we could come up with a solution to encrypt the routines internal to MySQL.
Other databases that do this really implement "obfuscation" internally - I think PostgreSQL does that, for example.
You cannot encrypt stored procedures in a really useful way, because the MySQL server has to decrypt them anyway when it reads them from its tables. If you encrypt the table file, your customer can log in as root and dump the mysql.proc table using native MySQL statements. And if you change the root password, they always have the option of starting MySQL with the --skip-grant-tables switch to get around that.
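To illustrate the point, anyone with sufficient privileges can read the routine source in plain text; "yourdb" and "yourproc" below are placeholders:

-- The routine body is stored readably in the mysql schema:
SELECT name, body FROM mysql.proc WHERE db = 'yourdb';

-- Or, for a single routine:
SHOW CREATE PROCEDURE yourdb.yourproc;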