Synchronize multiple web servers when using LINQ - linq-to-sql

Today I'm running a single ASP.NET 4 IIS web server and a single SQL Server. Now I need to add a second web server to handle the load. The problem is that when I use LINQ, the framework caches data from the database on the web server, and the servers end up out of sync. I always run (for example) DatabaseDataContext.MyTable.First(some condition), and I always submit the data to the database after changing it, but I still get old data that was cached on the web server that did not make the change.
How do I solve this? Is there a setting that forces LINQ to check a last-modified value, or some other check on the tables involved?
Any ideas or tutorials on how to manage multiple servers while using LINQ would be appreciated.
BR
Andreas
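A minimal sketch of the usual pattern, assuming LINQ to SQL (DatabaseDataContext and MyTable come from the sample above; the Id and Value members are hypothetical): the cache in question lives in each DataContext instance, so a short-lived context per unit of work always reads current data, and Refresh() can force a reload when a context must be kept alive.

    using System.Data.Linq;   // for RefreshMode
    using System.Linq;        // for First()

    public static class MyTableUpdater
    {
        // A fresh DataContext per unit of work avoids serving stale entities,
        // because LINQ to SQL's identity-map cache is per context instance.
        public static void UpdateValue(int id, string newValue)
        {
            using (var db = new DatabaseDataContext())
            {
                var row = db.MyTable.First(t => t.Id == id); // fresh read
                row.Value = newValue;
                db.SubmitChanges();                          // write back
            }
        }

        // If a long-lived context is unavoidable, discard its cached copy:
        public static void Reload(DatabaseDataContext db, object entity)
        {
            db.Refresh(RefreshMode.OverwriteCurrentValues, entity);
        }
    }

Per-request contexts pair with LINQ to SQL's default optimistic concurrency, so two servers cannot silently overwrite each other's changes.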

Related

Connecting an already existing database in a local environment

I have an Access database that connects to a VB6 application, and the whole thing is shared between two computers via a network share, one running Windows 8 and the other Windows 7. There is no internet involved in any sort of way, nor should there be; in fact, that is a requirement.
Sorry in advance: I have tried researching this on the net, but time is really short and there is a lot of confusing material online.
I am creating a WPF app connected to a MySQL DB.
I have copied the Access file and imported the contents of the DB into MySQL.
Things are a real mess in the imported DB, so I am fixing it.
What I am confused about is how I am going to make it work on the other machine. Do I:
go and install MySQL there and do the whole process manually, repeating all the steps and changes I made;
make a document that contains the code/script for all the changes I have made and run the data through it (and is there even a way to apply that as a whole in a single go);
or connect both databases together? I don't even know if that is possible.
Yes: in place of a simple "file share" of the Access file, you are now going to run some kind of SQL server system, in this case MySQL, but it could be PostgreSQL or any other "server" database.
That server instance has to be installed and set up, and you have to ensure that the box running MySQL allows external connections (by default, the computer's firewall settings often prevent this).
At that point, 2 or 10 different computers on that same network can simply connect to the SQL server. The code is going to be very similar. You almost certainly used the OleDb provider with Access; however, you can use the ODBC provider, or even the provider from MySQL. Changing providers means changing the connection object, data reader object, and so on, but the base .NET types such as DataRow, DataTable, and DataSet can remain as before (so you only change the provider). If you have a lot of code based on OleDb, you could well consider continuing to use that OleDb provider code in .NET and simply changing the connection strings to point to MySQL.
If you don't have a lot of code, then by all means adopt the MySQL provider for .NET. But as noted, the smallest change is to continue using an OleDb provider against MySQL, which means the least amount of code to be changed.
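To make the provider swap concrete, here is a rough sketch (the connection string, table, and column names are placeholders, and the commented-out line shows where the native MySql.Data provider would slot in): only the provider-specific types change, while the provider-neutral DataTable code carries over untouched.

    using System;
    using System.Data;
    using System.Data.Odbc;            // MySQL through an installed ODBC driver
    // using MySql.Data.MySqlClient;   // alternative: the native MySQL provider

    class ProviderSwapDemo
    {
        static void Main()
        {
            // Placeholder connection string; the driver name depends on the
            // MySQL ODBC driver version installed on the machine.
            var cs = "Driver={MySQL ODBC 8.0 Unicode Driver};" +
                     "Server=myhost;Database=mydb;Uid=user;Pwd=secret;";

            var table = new DataTable();
            using (var cn = new OdbcConnection(cs))
            using (var da = new OdbcDataAdapter("SELECT * FROM customers", cn))
            {
                da.Fill(table);   // same Fill/DataTable pattern as with OleDb
            }

            // Existing row-processing logic is untouched by the provider change.
            foreach (DataRow r in table.Rows)
                Console.WriteLine(r["company_name"]);
        }
    }

If you later switch to the native provider, only the Odbc* types and the connection string change again.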
As for the Access data migration: it's not clear what tools you're using or how you're doing that now. But once you transfer the data to the MySQL server (assuming you have installed and set up MySQL on one computer), it is a simple matter to point the .NET connection(s) in your code at MySQL instead of Access. As a result, most if not all of your code logic for working with the tables can remain as before, but as noted, you have to swap out the provider parts in .NET.
Now, if you're really lucky and the .NET code used the ODBC provider, then all you have to do is change your connection strings. Of course, some SQL syntax in your code may have to be tweaked: Oracle, MS SQL Server, PostgreSQL, and MySQL all have features and syntax that differ, especially in regard to date/time calculations such as DATEDIFF(). But the general SQL you have in your .NET code should continue to run mostly unchanged against the MySQL tables.
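A hypothetical example of such a tweak, where the date functions are the only thing that differs between the two dialects:

    using System;

    class DialectTweak
    {
        static void Main()
        {
            // "Age of the order in days", expressed per dialect. Only the SQL
            // text changes; the surrounding .NET code stays the same.
            var sqlServer = "SELECT DATEDIFF(day, OrderDate, GETDATE()) FROM Orders";
            var mySql     = "SELECT DATEDIFF(CURDATE(), order_date) FROM orders";
            Console.WriteLine(sqlServer);
            Console.WriteLine(mySql);
        }
    }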
As for how to migrate the data: a really good tool for this is MS Access itself. Get MySQL up and running, open the original database in Access, and then add linked tables from Access to the MySQL tables.
At that point, you can run append queries from Access to send the data to MySQL. How hard this is really depends on how many tables, and how many related tables, are in that database. The more complex the database and the greater the number of related tables in Access, the greater the challenge of moving that data up to MySQL.
Transferring Excel data or a single small (or even big) table is a breeze (again, use MS Access and link to the tables on the SQL server). Where things can become messy is when you have, say, 25 tables that are all related, many with cascade deletes and enforced parent-to-child relationships. The more tables, and especially the larger the number of related data tables, the more work such a data migration task becomes.
I think MS Access is a really good tool here, since once you set up a connection to MySQL, you can execute a TransferDatabase command in Access to send a table up to MySQL, and all the columns and the data types for those columns will be created automatically for you. So not only can Access transfer the data; more valuably, it can create the target tables on MySQL for you, which will save you a large amount of time building and setting up the tables on MySQL.
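If you would rather script that step than click through Access, a rough automation sketch follows; the path, DSN, and table name are all hypothetical, and it assumes the Microsoft.Office.Interop.Access assembly plus a MySQL ODBC DSN are available:

    using Access = Microsoft.Office.Interop.Access;

    class TransferDemo
    {
        static void Main()
        {
            // Drive Access's TransferDatabase command from .NET.
            var app = new Access.Application();
            app.OpenCurrentDatabase(@"C:\data\legacy.accdb");
            app.DoCmd.TransferDatabase(
                Access.AcDataTransferType.acExport,
                "ODBC Database",
                "ODBC;DSN=MySqlDsn;",            // DSN pointing at the MySQL server
                Access.AcObjectType.acTable,
                "customers",                     // source table in Access
                "customers");                    // target table created on MySQL
            app.CloseCurrentDatabase();
            app.Quit();
        }
    }

From inside Access itself, the equivalent is a one-line TransferDatabase call in VBA.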

SQL Server 2012 Data Integration

I'm writing an intranet application (in a LAMP environment) that uses data from sections of an MSSQL 2012 database (used by another, much larger application).
As I see it my options are to:
Directly query the database from the application.
Create a web service
Use Microsoft SQL Server Integration Services (SSIS) to have the data automatically integrated into my application's database.
I suspect the best solution here would be SSIS; however, I've not done this before and am on a deadline. So if that's the case, could someone let me know:
a) With my limited experience in that area, would I be able to set that up, and
b) What are the pros and cons of the above options?
Any other suggestions outside of the options I've thought of would also be appreciated.
Options:
Directly query the database from the application.
Upside:
Never any stale data
Downside:
Your application now contains application-specific code and is tied to that application
If you are in the common situation where the business buys another application containing the same master data, you now need special code to connect to two applications
The vendor might not like it
There might be performance impacts on the source application
Use Windows Task Scheduler / SQL Agent to run a script or SSIS package that replicates the data at x-minute intervals or so (see the sketch below).
Upside:
Your application is only tied to your local copy of the database, which you can customise as required. If your source app gets moved to the cloud or something, you don't need to make application changes, just integration changes
If another source application appears with the same type of master data, you can now replicate that into your local DB rather than making application changes to connect to 2 databases.
Downside:
Possibility of stale data
Even worse: possibility of stale data without users realising it, with subsequent loss of confidence in the application
Another component to maintain
Whether you write a batch script, a .NET app, or an SSIS package, it is another piece of logic that needs to be scheduled to run
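As a concrete example of the second option, here is a minimal sketch of such a copy job, assuming SQL Server as the source and a local MySQL database for the intranet application; all connection strings, table, and column names are placeholders, and the destination side uses the MySql.Data package. A real job would load incrementally rather than delete-and-reload:

    using System.Data;
    using System.Data.SqlClient;      // source: the application's SQL Server
    using MySql.Data.MySqlClient;     // destination: the local MySQL copy

    class CopyJob
    {
        static void Main()
        {
            var srcCs = "Server=sourceHost;Database=ErpDb;Integrated Security=true;";
            var dstCs = "Server=localhost;Database=intranet;Uid=app;Pwd=secret;";

            // Pull the current snapshot from the source application's database.
            var table = new DataTable();
            using (var cn = new SqlConnection(srcCs))
            using (var da = new SqlDataAdapter(
                       "SELECT Id, Name, UpdatedAt FROM dbo.Customers", cn))
            {
                da.Fill(table);
            }

            // Crude full refresh of the local copy inside one transaction.
            using (var cn = new MySqlConnection(dstCs))
            {
                cn.Open();
                using (var tx = cn.BeginTransaction())
                {
                    new MySqlCommand("DELETE FROM customers", cn, tx).ExecuteNonQuery();

                    var ins = new MySqlCommand(
                        "INSERT INTO customers (id, name, updated_at) " +
                        "VALUES (@id, @name, @upd)", cn, tx);
                    ins.Parameters.Add("@id", MySqlDbType.Int32);
                    ins.Parameters.Add("@name", MySqlDbType.VarChar);
                    ins.Parameters.Add("@upd", MySqlDbType.DateTime);

                    foreach (DataRow r in table.Rows)
                    {
                        ins.Parameters["@id"].Value = r["Id"];
                        ins.Parameters["@name"].Value = r["Name"];
                        ins.Parameters["@upd"].Value = r["UpdatedAt"];
                        ins.ExecuteNonQuery();
                    }
                    tx.Commit();
                }
            }
        }
    }

Scheduled via Task Scheduler or a SQL Agent job, this gives you the x-minute replication described above.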
Another option is database replication: if your source database is Oracle or SQL Server, you can use differential replication to replicate it into another database.
You need to consider where you will be in a few years. The data-copy method probably gives you more flexibility: if something drastic changes in the source system, you only need to change your integration, not your whole app.
You also need to consider whether you will ever be asked to propagate changes back the other way, i.e. update data in your local copy and have it pushed back to the source systems.

Trigger Node.js when changes are made in Apache/MySQL

I'm building a simple commenting system using Node, and I need to hook it into a PHP project running on an Apache server. I need to trigger Node.js when changes are made to a MySQL database table used by that PHP project. Is it possible to do this alongside an Apache server? If so, how? Any ideas or suggestions are greatly welcome. Please help...
I guess there are a few options you could take, but I don't think you can get any sort of triggered action from within MySQL or Apache themselves. IMHO, these are the approaches you can take:
You can expose an HTTP API from Node; every time you need to notify the Node app, insert the data into MySQL using PHP and then issue a simple GET request to trigger Node.
You could use some sort of queuing system (RabbitMQ, Redis, etc.) to manage the messages to and from the two applications, orchestrating the flow of data between the two apps (and later the DB).
You could poll the database from Node and check for new rows. This is fairly inefficient and quite tricky, but it sounds closest to what you want.

What's the appropriate way to test code that uses MySQL-specific queries internally

I am collecting data and storing it in a MySQL database using Java. Additionally, I use Maven for building the project, TestNG as the test framework, and Spring-Jdbc for accessing the database. I've implemented a DAO layer that encapsulates access to the database. Besides adding data using the DAO classes, I want to execute some queries which aggregate the data and store the results in other tables (like materialized views).
Now I would like to write some test cases which check whether the DAO classes work as they should. My idea was to use an in-memory database populated with some test data. Since I am also using MySQL-specific SQL queries for aggregating data, I ran into some trouble:
Firstly, I thought of simply using the embedded-database functionality provided by Spring-Jdbc to instantiate an embedded database, and I decided to use the H2 implementation. There I ran into trouble with the aggregation queries, which use MySQL-specific features (e.g. time-manipulation functions like DATE()). Another disadvantage of this approach is that I need to maintain two DDL files: the actual DDL file defining the tables in MySQL (where I define the encoding and add comments to tables and columns, both MySQL-specific features), and a test DDL file that defines the same tables but without comments and so on, since H2 does not support comments.
I found a description of using MySQL as an embedded database within test cases (http://literatitech.blogspot.de/2011/04/embedded-mysql-server-for-junit-testing.html). That sounded really promising. Unfortunately, it didn't work: a MissingResourceException occurred, "Resource '5-0-21/Linux-amd64/mysqld' not found". It seems the driver is not able to find the database daemon on my local machine, but I don't know what to look for to solve that issue.
Now I am a little bit stuck, and I am wondering whether I should have designed the architecture differently. Does anyone have tips on how to set up an appropriate system? I have two other options in mind:
Instead of using an embedded database, I go with a native MySQL instance and set up a database that is used only for the test cases. This option sounds slow; I might want to set up a CI server later on, and I thought an embedded database would be more appropriate since the tests run faster.
I strip all the MySQL-specific parts out of the SQL queries and use H2 as an embedded database for testing. If this is the right choice, I would need another way to test the SQL queries that aggregate the data into the materialized views.
Or is there a third option that I haven't thought of?
I would appreciate any hints.
Thanks,
XComp
I've created a Maven plugin exactly for this purpose: jcabi-mysql-maven-plugin. It starts a local MySQL server in the pre-integration-test phase and shuts it down in post-integration-test.
If it is not possible to get the in-memory MySQL database to work, I suggest using the H2 database for the "simple" tests and a dedicated MySQL instance for testing the MySQL-specific queries.
Additionally, the tests against the real MySQL database can be configured as integration tests in a separate Maven profile so that they are not part of the regular Maven build. On the CI server you can create an additional job that runs the MySQL tests periodically, e.g. daily or every few hours. With such a setup you can keep and test your MySQL-specific queries without slowing down your regular build, and you can still run a normal build even if the test database is not available.
There is a nice Maven plugin for integration tests called maven-failsafe-plugin. It provides pre- and post-integration-test steps that can be used to set up the test data before the tests and to clean up the database afterwards.
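For illustration, a sketch of the profile wiring described above (the profile id and plugin version are arbitrary; failsafe picks up *IT.java test classes by default):

    <profile>
      <id>mysql-it</id>
      <build>
        <plugins>
          <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-failsafe-plugin</artifactId>
            <version>3.2.5</version>
            <executions>
              <execution>
                <goals>
                  <goal>integration-test</goal>
                  <goal>verify</goal>
                </goals>
              </execution>
            </executions>
          </plugin>
        </plugins>
      </build>
    </profile>

The MySQL-specific tests then run only with mvn verify -Pmysql-it, so the regular build stays fast.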

Update MySQL database from SQL Server in different domain

I am a SQL Server developer, and my current assignment is a little different from what I have done in the past. I found Stack Overflow very promising for my problem. I am working on the SQL Server 2005 database for my client's internal application, and the client also has a public-facing web application with a MySQL database. I do not have any details about this web application, but my assignment is to update the MySQL database (on the public domain) from the SQL Server database (on the internal domain) daily, as an automated process. How can I achieve this through SQL Server?
You might want to try Pentaho Data Integration (Kettle).
http://wiki.pentaho.com/display/EAI/Latest+Pentaho+Data+Integration+%28aka+Kettle%29+Documentation
The product can speak to both technologies (MSSQL + MySQL). You will find it similar to DTS, and you may be able to construct your solution with little to no code.
SSIS will do this just fine. The hard part is determining how you want to transform the data from one structure to the other (I assume they are not exactly alike in terms of table design).
But basically, you create a data flow task, connect to SQL Server for the source data and use a query to define which data you are going to copy, then apply any transformations needed to make the data fit the MySQL structure, and connect to a MySQL destination.
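Hand-rolled outside of SSIS, one such data flow might look roughly like this; every name and the column mapping are hypothetical, the MySQL side uses the MySql.Data package, and the upsert lets the daily job re-run safely:

    using System.Data.SqlClient;      // source: the internal SQL Server 2005
    using MySql.Data.MySqlClient;     // destination: the public MySQL database

    class NightlyPush
    {
        static void Main()
        {
            var srcCs = "Server=internalHost;Database=CrmDb;Integrated Security=true;";
            var dstCs = "Server=publicHost;Database=webapp;Uid=push;Pwd=secret;";

            using (var src = new SqlConnection(srcCs))
            using (var dst = new MySqlConnection(dstCs))
            {
                src.Open();
                dst.Open();

                // The source query doubles as the "transformation": rename and
                // shape columns here so they fit the MySQL table's design.
                var select = new SqlCommand(
                    "SELECT CustomerID, CompanyName FROM dbo.Customer", src);

                // Upsert so the nightly run can be repeated safely.
                var upsert = new MySqlCommand(
                    "INSERT INTO customers (id, name) VALUES (@id, @name) " +
                    "ON DUPLICATE KEY UPDATE name = @name", dst);
                upsert.Parameters.Add("@id", MySqlDbType.Int32);
                upsert.Parameters.Add("@name", MySqlDbType.VarChar);

                using (var reader = select.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        upsert.Parameters["@id"].Value = reader["CustomerID"];
                        upsert.Parameters["@name"].Value = reader["CompanyName"];
                        upsert.ExecuteNonQuery();  // row at a time; batch for volume
                    }
                }
            }
        }
    }

An SSIS data flow does the same thing with better logging, batching, and error handling; the sketch just shows the moving parts.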
Repeat this process for the multiple data sets you want to send to different places.
Once the SSIS package is done, set up configurations so that you can run the package on the production server (you will want to test development-to-development first, of course!), then schedule the package to run at an appropriate time.
Depending on how different the two databases are and how much data you need to move, this can be a relatively simple process or very complicated.