I am just starting my adventure with writing an app that has to work with a database. I started by designing the database in MySQL Workbench and exporting the design to the server, and now I want to try writing some code that can talk to that server.
I've decided to try NHibernate, but I'm stuck on writing the mapping for the Id/primary key property/column. I've read that I must specify a generator, but does that mean I have to remove auto-increment from this column in the database?
If the answer to the previous question is no, then which NH generator will work correctly with a server-driven auto-increment column? If it is yes, then which generator is preferable? Or maybe, even if NH can work with the server's auto-increment, you would still recommend removing auto-increment from the columns and using a client-based generator (and again, which one?).
OK, since there was no direct answer to this question (a comment gave me the answer instead) and I want to close it, I'll write one.
My problem here was understanding the name of the feature: GENERATOR. When you already have a server-based generator (auto-increment), why would you need another one in your app? That was the source of my mistake. In fact, the in-app generator does not necessarily have to generate anything itself; it can fetch the value from the database, but from the app's point of view that makes no difference.
So, TL;DR
As @RadimKöhler wrote in his comment to my question, both native and identity work well with an auto-increment/identity column (or whatever name your database engine uses) in the database.
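To make this concrete, here is a minimal SQL sketch (MySQL syntax; the table and column names are made up) of what native/identity delegate to: the key comes from the column's own auto-increment, so it only exists after a real INSERT has reached the database.

```sql
-- Hypothetical table: native/identity rely on the column's AUTO_INCREMENT.
CREATE TABLE person (
    id   INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
    name VARCHAR(100) NOT NULL
);

-- The id is only known after a real INSERT has been executed:
INSERT INTO person (name) VALUES ('Alice');
SELECT LAST_INSERT_ID();  -- NHibernate reads the generated key back roughly like this
```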
Now, time for a quote:
NOTE: I for years live with identity.. but if I could choose, sequence or HILO would be my choice. It does not require real INSERT to let DB generate the key...
... think about it :) I haven't gone this way, because I am writing a small app which will be used by a single user most of the time. Using an actual in-app generator lets you work with data kept in memory, which can then be flushed to the database in one go.
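For contrast, a rough sketch of what a hilo-style generator keeps in the database (hibernate_unique_key/next_hi are NHibernate's default names, but treat this SQL as illustrative): the app reserves a whole block of ids in one round trip and then assigns keys purely in memory, with no INSERT required to learn a key.

```sql
-- Illustrative hi-value table for a hilo-style generator.
CREATE TABLE hibernate_unique_key (
    next_hi INT NOT NULL
);
INSERT INTO hibernate_unique_key (next_hi) VALUES (1);

-- The app claims one "hi" value per block (atomically, in practice)...
SELECT next_hi FROM hibernate_unique_key;
UPDATE hibernate_unique_key SET next_hi = next_hi + 1;

-- ...and then hands out ids entirely in memory:
--   id = hi * max_lo + lo,  where lo runs from 0 to max_lo - 1
```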
Related
I now have to deal with a program called FDT. The company I work for no longer has a support contract for it, but we are still using it. I need to insert new orders into the program from our site; I can get them from Magento in XML, CSV or some other format, and I am trying to automate this process. All work in the office is done through this FDT software: checking what is out of stock, printing bills, and so on.
I am now thinking of using Profiler to trace events. I would like to know what processing the program does when we place an order in it. I am not an experienced Profiler user, so I would like some suggestions on whether it is possible to find out what tables it affects and what columns it updates or writes to.
Above is a new order number that the program generates; it is a unique integer id, and I am not able to work out the pattern. I do have a test server where I can make changes, so trial and error is no problem.
Some suggestions on how I should proceed, or at least where to start, would be appreciated.
I think the most important thing would be to trace the T-SQL, but again, which events and what filters should I use?
I am sorry if this is a stupid question; I am trying to learn. Source code and vendor support are not options.
This question has too many parts: how to run a trace, how to deal with an application after the support contract has ended, how to reverse engineer an app, and even whether that is a good idea (sometimes it's the only idea available). I'd re-ask this as a series of narrow technical questions, or ask it on Programmers (after reading their FAQ; they only like certain questions).
Yup, been there, done that. In large organizations these tasks normally fall to techies who don't wield the awesome power of the budget and can't personally go negotiate a new contract with the original vendor. I assume you have food bills to pay and can't tell your supervisor, "well, I ain't doing nothing until we get a support contract".
Step 0 Diagram the tables: work out the entity relationships and assemble a data dictionary (one that explains the motivation of each table and column, not just the name and data type).
Step 1 Attach Profiler to an active instance of SQL Server 2008. If you have a specific question about SQL Profiler, open a new question. A couple of hints: the SQL:BatchCompleted and RPC:Completed events are usually enough to see the statements an application sends, and if you are attached to a multi-user instance, filter down to just the application's own user (the one in the connection string).
http://blog.sqlauthority.com/2009/08/03/sql-server-introduction-to-sql-server-2008-profiler-2/
Step 2
Do an action in the application and watch what SQL is emitted. If it is plain SQL, you can copy and paste it into Management Studio so you can diagram the query and run your own test executions. If it is a stored procedure, go read the source code of the stored procedure. If the stored procedure is encrypted, it may or may not be possible to decrypt it. A scenario where decrypting the code is fairly defensible is when you aren't redistributing it and the supporting company is no longer around.
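If the main question is just which tables an action touched, one rough cross-check (a sketch assuming SQL Server 2008; 'FdtDatabase' is a placeholder name) is to look at the index usage statistics right after performing the action in the app:

```sql
-- Tables most recently written to, newest first.
USE FdtDatabase;
SELECT OBJECT_NAME(us.object_id) AS table_name,
       us.last_user_update
FROM sys.dm_db_index_usage_stats AS us
WHERE us.database_id = DB_ID()
  AND us.last_user_update IS NOT NULL
ORDER BY us.last_user_update DESC;
```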
Step 3
Once you understand the app, you can write reports, or, more likely, you will want to record new transactions, or record old transactions differently.
If the app is written in .NET or Java, you can decompile it and read the code. Creating a custom build from that source isn't going to be fun, though. More likely, you will create an application that targets the same tables, or possibly export all the data out of the original app into a new bespoke one.
I'm working on a project which uses MySQL as the database. The application is hosted for many clients, and we frequently do upgrades of the live systems.
There are some instances where a client has changed the database structure (adding new tables, for example), causing some unexpected DB crashes.
I need to log all the structural changes made to that database, so we can find the correct root cause. We can't do it 100% correctly with a diff tool, because a diff will not show the intermediate changes.
I found the http://www.liquibase.org/ tool, but it seems a little bit complex.
Is there any well-known technique or tool to track database structural changes only?
Well, from MySQL Studio you can generate every object's schema definition and compare it against your standard schema definition; this way you can compare two database schemas...
Generating scripts of both databases (one the client's database, the other the master-copy database) and then comparing them with a file-compare tool would be the best practice, in my opinion, because this way you can track which column was added, which column was deleted, which index was added, and so on, without downloading any tool.
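As a concrete example of "generating a script" with nothing but the server itself, you can snapshot each schema from information_schema and diff the two result sets ('mydb' is a placeholder for the schema name):

```sql
-- Column-level snapshot of one schema, suitable for a plain text diff.
SELECT table_name, column_name, ordinal_position,
       column_type, is_nullable, column_key, extra
FROM information_schema.columns
WHERE table_schema = 'mydb'
ORDER BY table_name, ordinal_position;
```

Running this against the client's database and the master copy, saving each result to a file, and diffing the files shows added, dropped, and altered columns at a glance.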
Possible duplicate of Compare two MySQL databases?
Hope this helps.
If you have an application for your clients to manage these schema changes, you can use a mechanism at the application level. If you have a Python/Django-based solution, you could probably use South, which provides schema-change tracking and rollbacks.
How can I track changes to a development database and apply those changes to a production database (SQL Server 2008)?
I keep a local copy of a database on my development server, and as I'm adding new features, I may add new tables or change field and table names in the database. What's a good way to track such changes and then apply them to the main database?
Is there some way to do a "diff"-like operation between two databases and merge definition changes?
I considered merge replication, but I'm not sure how well that handles schema changes. For example, here: http://technet.microsoft.com/en-us/library/ms151870.aspx it mentions that I basically cannot use SSMS to make definition changes, because it drops and recreates tables, which is not allowed for published objects.
A smart piece of software could compare column counts, types, positions, and apply other fuzzy matching/logical deduction methods to figure out that a table was renamed or a new table was added or a column name changed, after which it could present the differences to the user for confirmation and automatic application.
Does anything like what I've described above exist, or am I stuck remembering to save DDL statements in SSMS and running them manually in the production database?
Maybe you need a migration tool, for example FluentMigrator, which helps you track database changes in source code.
Here is a tutorial from the original author of Fluent Migrator, explaining what Fluent Migrator is, why you might need it and how it works.
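If you just want a feel for what such a tool automates, the core idea is a version table plus numbered scripts applied in order. A hand-rolled sketch in T-SQL (all object names are illustrative):

```sql
-- Records which migrations have already been applied to this database.
IF OBJECT_ID('dbo.schema_version') IS NULL
    CREATE TABLE dbo.schema_version (
        version    INT PRIMARY KEY,
        applied_at DATETIME NOT NULL DEFAULT GETDATE()
    );

-- Migration 001: the schema change itself, then a record that it ran.
CREATE TABLE dbo.customer_note (
    id          INT IDENTITY(1,1) PRIMARY KEY,
    customer_id INT NOT NULL,
    note        NVARCHAR(MAX) NULL
);
INSERT INTO dbo.schema_version (version) VALUES (1);
```

A tool like FluentMigrator wraps exactly this bookkeeping in source code, and adds Down() methods so changes can be rolled back.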
Another alternative would be what you already mentioned:
A smart piece of software could compare column counts, types,
positions, and apply other fuzzy matching/logical deduction methods to
figure out that a table was renamed or a new table was added or a
column name changed, after which it could present the differences to
the user for confirmation and automatic application.
I never tried it myself, but I've seen lots of recommendations for Redgate SQL Compare (which apparently does exactly what you asked for) here at Stack Overflow.
Right now, in my internship, I'm assigned to create a system that holds employee information such as personal info, education, salary, etc.
All this stuff is kept in a few spreadsheets right now. I need a basic program, but I feel like I should be using MySQL or another database solution to hold the data. I have used MySQL before, but that was a PHP/MySQL assignment for which I used WampServer to build the whole system.
Edit: The system will be used from a few computers across the network. When someone makes a change, it will (obviously) become visible to the other computers as well. (Before the edit, I thought it was going to be used on a single computer.)
I'm confused right now. Should I create a PHP/MySQL web page with WampServer (or similar) to hold the information, or not?
Would it be easier or better to combine MySQL with some other programming language (such as Java/C++) and build a GUI? (I doubt it)
Should I come up with a different solution? Without database usage?
Using a database would be the best option. In the end it will come down to what you are more comfortable using, Java/C++ or PHP; for what you want to do, either can work. But remember that the database will need to be live at all times, and using WampServer won't cut it. You need to learn how to run a MySQL server without WAMP, which is easy (Google is awesome). Personally, I would use Java, because Java is easy to link with MySQL (google it a bit), and a Java client doesn't need to run on a web server, so no WAMP is needed as it would be for PHP.
EDIT:
OK, if I understand you correctly, what you want to do is the following:
1. Identify a PC to be used as the server and assign it a static IP.
2. This must also be the PC that is turned on first every day and turned off last.
3. Create a front-end client application that connects to the MySQL server you will be running on the server machine.
Now, I am assuming this network is rather small, so you won't need a dedicated computer to act purely as the server; the server can also be one of the client machines.
The best approach would be to set up a MySQL server and make sure the firewall is not blocking it. Then create a client application in Java that can access the database over the network. I find this easier than setting up a PHP server for the users, because port forwarding for an Apache server is time-consuming; I did it once and never again. Java will make it easiest to get the application working over the network. Use NetBeans for development; it's an awesome IDE and makes life easier when setting up the database connection.
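On the MySQL side, letting the other machines in means opening the server's port (3306 by default) in the firewall, making sure MySQL is not bound to localhost only, and creating an account that may connect from the LAN. A sketch (user name, password, subnet and database name are all placeholders):

```sql
-- Hypothetical account for client machines on the local subnet.
CREATE USER 'hr_app'@'192.168.1.%' IDENTIFIED BY 'change-me';
GRANT SELECT, INSERT, UPDATE, DELETE ON hr_db.* TO 'hr_app'@'192.168.1.%';
FLUSH PRIVILEGES;
```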
If you have any more questions, please ask in a comment and I will elaborate, since this might be a bit vague, lol.
Of course you should use a database for this type of work. That is the best way to organize, search, sort and filter your data without having to reinvent the wheel.
As to the other questions, the choice of language and environment is up to you to decide after evaluating the needs of your application.
Your solution should use a database to store the data and a front-end application to manage the data.
The database and front-end should be seen as two separate layers. In other words, create the database using whatever database you are familiar with (e.g. MySQL), and likewise create the front-end using whatever technology you're familiar with (e.g. PHP).
Personally, for this type of requirement I would typically use MySQL or SQL Express with an ASP.NET MVC 3 front-end.
Hope this is helpful.
Basically, I have many huge delimited files that I know I can import as a table, but I need to map that data into an existing relational multi-table MySQL database. There should not be any datatype conflicts, but I'm super new to this, so please point out anything I should be watching for. Clearly I'm not going to run this in production until I know it works.
Not 100% sure Stack Overflow is the right place to ask a database question, but I couldn't find any other Stack Exchange site that was a better fit.
I posted this question on Super User looking for a GUI to do this, but I am up for coding it if that gets the job done. As such, there is no target language, just the requirement that the database be MySQL.
Also, I found this Stack Overflow Q&A that deals with MS SQL's SSIS (which I'm not planning on using due to cost, but the content and issues faced appear to be of the same nature):
Loading Multiple Tables using SSIS keeping foreign key relationships
I'd suggest using the ETL (extract, transform, load) tool from the Pentaho Business Intelligence package. It has a bit of a learning curve, but it'll do exactly what you're looking for. Their ETL tool is called Kettle, and it's extremely powerful once you get the hang of it.
There are two versions of Pentaho: an enterprise version that has a free trial, and a free community version. The community version is more than capable, but you might give the enterprise version a test ride too.
Here are some links:
Pentaho Community Edition Site
Kettle Site
Pentaho Enterprise Site
Update: Multiple table outputs
One of the key steps in your transformation is going to be a Combination lookup/update. This step checks a given table to see whether a record from your data stream already exists, and inserts a new record if it does not. Whether the record is new or existing, the step appends that record's key field to your data stream. As you keep going, you'll use these keys as foreign keys when importing data into the related tables.
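In plain SQL terms, the step behaves roughly like the following for each incoming row (the table and column names are made up; Kettle wires this logic up for you):

```sql
-- 1) Look for an existing surrogate key for this row's natural key.
SELECT id FROM customer WHERE natural_key = 'ACME-001';

-- 2) If nothing came back, insert the record and grab the new key.
INSERT INTO customer (natural_key, name) VALUES ('ACME-001', 'Acme Corp.');
SELECT LAST_INSERT_ID();

-- 3) Either way, the id now travels along with the row in the stream
--    and becomes the foreign key when dependent tables (orders, order
--    lines, ...) are loaded in later steps.
```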