I've been having some issues using SchemaUpdate with MySQL. I seem to have implemented everything correctly, but when I run it, it doesn't update anything. It doesn't generate any errors, and it pauses for about as long as you'd expect it to take to inspect the DB schema, but it simply doesn't update anything. When I try to get it to script the change, it doesn't do anything either - it's as if it can't detect any changes to the DB schema. But I have created a new entity and a new mapping class, so I can't see why it's not picking them up.
var config = Fluently.Configure()
    .Database(() => {
        // build the MySQL configuration from app settings
        var dbConfig = MySQLConfiguration.Standard.ConnectionString(
            c => c.Server(configuration.Get<string>("server", ""))
                  .Database(configuration.Get<string>("database", ""))
                  .Password(configuration.Get<string>("password", ""))
                  .Username(configuration.Get<string>("user", ""))
        );
        return dbConfig; // the lambda must return the configurer
    });
config.Mappings(
    m => m.FluentMappings
        .AddFromAssemblyOf<User>()
        .AddFromAssemblyOf<UserMap>()
        .Conventions.AddFromAssemblyOf<UserMap>()
        .Conventions.AddFromAssemblyOf<PrimaryKeyIdConvention>()
        // .PersistenceModel.Add(new CultureFilter())
);
var export = new SchemaUpdate(config);
export.Execute(false, true);
I don't think there's anything wrong with my config, because it works perfectly well with SchemaExport - it's just SchemaUpdate where I seem to have a problem.
Any ideas would be much appreciated!
Did you try wrapping the SchemaUpdate execution in a transaction? There are some databases that need to run this inside a transaction, AFAIK.
// session, configuration and showBuildScript are assumed to be in scope here
using (var tx = session.BeginTransaction())
{
    var tempFileName = Path.GetTempFileName();
    try
    {
        using (var str = new StreamWriter(tempFileName))
        {
            new SchemaExport(configuration).Execute(showBuildScript, true, false, session.Connection, str);
        }
    }
    finally
    {
        if (File.Exists(tempFileName))
        {
            File.Delete(tempFileName);
        }
    }
    tx.Commit();
}
I figured it out:
The problem is that MySQL doesn't treat databases and schemas as separate concepts - to MySQL they are effectively the same thing. Some parts of MySQL and/or NHibernate work in terms of schemas, and SchemaUpdate seems to be one of them. So when I have
Database=A
in my connection string, and
<class ... schema="B">
in the mapping, then SchemaUpdate seems to think that this class is "for a different database" and doesn't update it.
The only fix I can think of right now is to run a SchemaUpdate for every single schema (calling USE schema; first). But AFAIK NHibernate has no interface for getting a list of all schemas used in the mappings (correct me if I'm wrong). I'm afraid I'll have to iterate through the XML files manually (I use XML-based mappings) and collect them, as sketched below.
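For what it's worth, a minimal sketch of that collection step could look like this (hedged: mappingDir is a hypothetical path, and urn:nhibernate-mapping-2.2 is the standard namespace of hbm.xml files; requires System.Xml.Linq, System.IO and System.Collections.Generic):
// Sketch only: collect the distinct schema="..." attributes from the
// <class> elements of every hbm.xml mapping file under mappingDir.
var mappingDir = @"path\to\mappings"; // placeholder
var schemas = new HashSet<string>();
XNamespace ns = "urn:nhibernate-mapping-2.2";
foreach (var file in Directory.GetFiles(mappingDir, "*.hbm.xml"))
{
    var doc = XDocument.Load(file);
    foreach (var cls in doc.Descendants(ns + "class"))
    {
        var schema = (string)cls.Attribute("schema");
        if (!string.IsNullOrEmpty(schema))
            schemas.Add(schema);
    }
}
// Each schema collected here could then get its own SchemaUpdate pass.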
Related
I am working on a Discord bot written in Node.js; the bot uses a MySQL database server to store information. The problem I have run into is that I cannot seem to retrieve the data from the database in a neat way - every single thing I try seems to run into some issue or another.
The select query returns an object called RowDataPacket. When googling, every single result references this solution: Object.values(JSON.parse(JSON.stringify(rows)))
It suggests that I should get the values back, but I don't - I get back an array that is just as hard to work with as the RowDataPacket object.
This is a snippet of my code:
const kenneledMemberRolesTableName = 'kenneled_member_roles'
const kenneledMemberKey = 'kenneled_member'
const kenneledMemberRoleKey = 'kenneled_member_role_id'
const kenneledStaffMemberKey = 'kenneled_staff_member'
const kenneledDateKey = 'kenneled_date'
const kenneledReturnableRoleKey = 'kenneled_role_can_be_returned'
async function findKenneledMemberRoles(kenneledMemberId) {
    let sql = `SELECT CAST(${kenneledMemberRoleKey} AS Char) FROM ${kenneledMemberRolesTableName} WHERE ${kenneledMemberKey} = ${kenneledMemberId}`
    let rows = await databaseAccessor.runQuery(sql)
    let result = JSON.parse(JSON.stringify(rows)).map(row => {
        return row.kenneled_member_role_id
    })
    return result
}
This seemed to work, until I had to do a type conversion on the value. Now dot notation would require me to reference row.CAST(kenneled_member_role_id AS Char), which cannot work, and I have found no other way to retrieve the data than through dot notation. I swear there must be a better way to work with MySQL RowDataPackets, but the solution eludes me.
I figured out something that works, but I still feel like this is an inelegant solution. I would love to hear from others if I am misunderstanding how to work with MySQL in Node.js, or if this is just a consequence of the library:
let result = JSON.parse(JSON.stringify(rows)).map(row => {
    return row[`CAST(${kenneledMemberRoleKey} AS CHAR)`];
})
So what I did is access the value through bracket notation instead of dot notation. This works, and it at least lets me store part of or the whole expression in a constant, hiding the ugliness.
Using Entity Framework 6 and MySQL, I am trying to archive data from a 'production' database table to an 'archive' database. I have created two DbContexts, one for each database. Each database has the same schema.
I can move an entire table of data from the production database to the archive database using the following code:
using (MyDBContext archiveContext =
    MyDBContext.CreateEntitiesForSpecificDatabaseName("archive_db"))
using (MyDBContext prodContext =
    MyDBContext.CreateEntitiesForSpecificDatabaseName("prod_db"))
{
    if (prodContext.myTable.Any())
    {
        archiveContext.myTable.AddRange(prodContext.myTable.AsNoTracking());
        archiveContext.SaveChanges();
    }
}
However, I don't want to archive the whole table; I only wish to archive data older than a certain date, so I tried the following:
using (MyDBContext archiveContext =
    MyDBContext.CreateEntitiesForSpecificDatabaseName("archive_db"))
using (MyDBContext prodContext =
    MyDBContext.CreateEntitiesForSpecificDatabaseName("prod_db"))
{
    IQueryable<myTable> dataToArchive =
        from mt in prodContext.myTable
        where mt.date < DateTimeSixMonths
        select mt;
    archiveContext.myTable.AddRange(dataToArchive);
    archiveContext.SaveChanges();
}
but I cannot get around the exception I get when I run this:
System.InvalidOperationException: 'An entity object cannot be referenced by multiple instances of IEntityChangeTracker.'
It occurs on this line:
archiveContext.myTable.AddRange(dataToArchive);
Is it possible to somehow remove the tracking from 'dataToArchive'?
Have you tried disposing the first DataContext after retrieving data? Something like this:
List<myTable> dataToArchive;
using (MyDBContext prodContext =
    MyDBContext.CreateEntitiesForSpecificDatabaseName("prod_db"))
{
    dataToArchive = (from mt in prodContext.myTable
                     where mt.date < DateTimeSixMonths
                     select mt).ToList();
}
using (MyDBContext archiveContext =
    MyDBContext.CreateEntitiesForSpecificDatabaseName("archive_db"))
{
    archiveContext.myTable.AddRange(dataToArchive);
    archiveContext.SaveChanges();
}
Using EF to manage archiving data isn't ideal; something like that is better served at the database level, using insert-select + delete for low to moderate data volumes, or detachable partitions (i.e. 3-6 month partition sizes) that can be moved between databases.
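To make the insert-select + delete route concrete, here's a minimal sketch using plain ADO.NET with the MySQL provider, assuming both databases sit on the same MySQL server; the connection string is a placeholder and the table/column names are borrowed from the question:
using MySql.Data.MySqlClient;

// Sketch only: copy rows older than the cutoff server-side, then purge
// them, with both statements in one transaction so nothing is deleted
// unless it was archived first.
var connectionString = "server=...;uid=...;pwd=..."; // placeholder
var cutoff = DateTime.Now.AddMonths(-6);
using (var conn = new MySqlConnection(connectionString))
{
    conn.Open();
    using (var tx = conn.BeginTransaction())
    {
        var copy = new MySqlCommand(
            "INSERT INTO archive_db.myTable SELECT * FROM prod_db.myTable WHERE date < @cutoff",
            conn, tx);
        copy.Parameters.AddWithValue("@cutoff", cutoff);
        copy.ExecuteNonQuery();

        var purge = new MySqlCommand(
            "DELETE FROM prod_db.myTable WHERE date < @cutoff", conn, tx);
        purge.Parameters.AddWithValue("@cutoff", cutoff);
        purge.ExecuteNonQuery();

        tx.Commit();
    }
}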
To do this with EF (only recommended for small and non-complex domain models), you should be able to accomplish it by disabling proxy generation in your read context, loading the data AsNoTracking, then adding it to the new context's DbSet. This example does not handle associated entities; the delete from the prod DbSet is sketched after the code.
using (MyDBContext prodContext =
    MyDBContext.CreateEntitiesForSpecificDatabaseName("prod_db"))
{
    prodContext.Configuration.ProxyCreationEnabled = false;
    var dataToArchive = prodContext.myTable.AsNoTracking()
        .Where(mt => mt.Date < DateTimeSixMonths);
    using (MyDBContext archiveContext =
        MyDBContext.CreateEntitiesForSpecificDatabaseName("archive_db"))
    {
        archiveContext.myTable.AddRange(dataToArchive);
        archiveContext.SaveChanges();
    }
}
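For completeness, a hedged sketch of that delete step might look like this; it re-queries the rows as tracked entities, and since SaveChanges will issue one DELETE per entity marked by RemoveRange, it only suits modest volumes (a raw SQL DELETE is far cheaper at scale):
using (MyDBContext prodContext =
    MyDBContext.CreateEntitiesForSpecificDatabaseName("prod_db"))
{
    // Load the same rows as tracked entities and mark them for deletion.
    prodContext.myTable.RemoveRange(
        prodContext.myTable.Where(mt => mt.Date < DateTimeSixMonths));
    prodContext.SaveChanges();
}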
I am using Mahout to build an item-based CF recommendation engine.
I created a MahoutHelper class which has a constructor:
public MahoutHelper(String serverName, String user, String password,
        String databaseName, String tableName) {
    source = new MysqlConnectionPoolDataSource();
    source.setServerName(serverName);
    source.setUser(user);
    source.setPassword(password);
    source.setDatabaseName(databaseName);
    source.setCachePreparedStatements(true);
    source.setCachePrepStmts(true);
    source.setCacheResultSetMetadata(true);
    source.setAlwaysSendSetIsolation(true);
    source.setElideSetAutoCommits(true);
    DBmodel = new MySQLJDBCDataModel(source, tableName, "userId", "itemId",
            "value", null);
    similarity = new TanimotoCoefficientSimilarity(DBmodel);
}
and the recommend method is:
public List<RecommendedItem> recommendation() throws TasteException {
    Recommender recommender = new GenericItemBasedRecommender(DBmodel, similarity);
    List<RecommendedItem> recommendations = recommender.recommend(userId, maxNum);
    System.out.println("query completed");
    return recommendations;
}
It uses the data source to build the data model, but the problem is that when MySQL holds only a little data (fewer than 100 rows) the program works fine for me, while when the scale grows to over 1,000,000 rows the program gets stuck doing the recommendation and never moves forward. I have no idea how this happens. By the way, I used the same data to build a FileDataModel with a .dat file, and it took only 2-3 seconds to complete the analysis. I am confused.
Using the database directly will only work for tiny data sets, like maybe a hundred thousand data points. Beyond that, the overhead of such a data-intensive application means it will never run quickly; a single recommendation can translate into thousands of SQL queries or more.
Instead you must load the data into memory and re-load it periodically. You can still pull from the database; look at ReloadFromJDBCDataModel as a wrapper.
I want to import my IIS logs into SQL for reporting using Bulk Insert, but the comment lines - the ones that start with a # - cause a problem because those lines do not have the same number of fields as the data lines.
If I manually delete the comments, I can perform a bulk insert.
Is there a way to perform a bulk insert while excluding lines based on a match, such as: any line that begins with a "#"?
Thanks.
The approach I generally use with BULK INSERT and irregular data is to push the incoming data into a temporary staging table with a single VARCHAR(MAX) column.
Once it's in there, I can use more flexible decision-making tools like SQL queries and string functions to decide which rows I want to select out of the staging table and bring into my main tables. This is also helpful because BULK INSERT can be maddeningly cryptic about why and how it fails on a specific file.
The only other option I can think of is using pre-upload scripting to trim comments and other lines that don't fit your tabular criteria before you do your bulk insert.
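As a rough illustration of the staging-table route (hedged: the LogStaging and IisLog table names, the log file path, and the connection string are all hypothetical placeholders):
using System.Data.SqlClient;

// Sketch only: bulk-load raw lines into a single-column staging table,
// then copy across only the lines that are not # comments.
var connectionString = "Server=...;Database=...;Trusted_Connection=True;"; // placeholder
using (var conn = new SqlConnection(connectionString))
{
    conn.Open();
    var load = new SqlCommand(
        @"BULK INSERT LogStaging
          FROM 'C:\logs\u_ex140101.log'
          WITH (ROWTERMINATOR = '\n')", conn);
    load.ExecuteNonQuery();

    var copy = new SqlCommand(
        @"INSERT INTO IisLog (RawLine)
          SELECT Line FROM LogStaging
          WHERE Line NOT LIKE '#%'", conn);
    copy.ExecuteNonQuery();
}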
I recommend using logparser.exe instead. LogParser has some pretty neat capabilities on its own, but it can also be used to format the IIS log to be properly imported by SQL Server.
Microsoft has a tool called "PrepWebLog" (http://support.microsoft.com/kb/296093) which strips out these hash/pound characters; however, I'm running it now (using a PowerShell script for multiple files) and am finding its performance intolerably slow.
I think it'd be faster if I wrote a C# program (or maybe even a macro).
Update: PrepWebLog just crashed on me. I'd avoid it.
Update #2: I looked at PowerShell's Get-Content and Set-Content commands but didn't like the syntax and possible performance. So I wrote this little C# console app:
if (args.Length == 2)
{
    string path = args[0];
    string outPath = args[1];
    Regex hashString = new Regex("^#.+\r\n", RegexOptions.Multiline | RegexOptions.Compiled);
    foreach (string file in Directory.GetFiles(path, "*.log"))
    {
        string data;
        using (StreamReader sr = new StreamReader(file))
        {
            data = sr.ReadToEnd();
        }
        string output = hashString.Replace(data, string.Empty);
        using (StreamWriter sw = new StreamWriter(Path.Combine(outPath, new FileInfo(file).Name), false))
        {
            sw.Write(output);
        }
    }
}
else
{
    Console.WriteLine("Source and Destination Log Path required or too many arguments");
}
It's pretty quick.
Following up on what PeterX wrote, I modified the application to handle large log files, since anything sufficiently large would create an out-of-memory exception. Also, since we're only interested in whether the first character of a line is a hash, we can just use the StartsWith() method on each line as we read it.
class Program
{
    static void Main(string[] args)
    {
        if (args.Length == 2)
        {
            string path = args[0];
            string outPath = args[1];
            string line;
            foreach (string file in Directory.GetFiles(path, "*.log"))
            {
                using (StreamReader sr = new StreamReader(file))
                using (StreamWriter sw = new StreamWriter(Path.Combine(outPath, new FileInfo(file).Name), false))
                {
                    while ((line = sr.ReadLine()) != null)
                    {
                        if (!line.StartsWith("#"))
                        {
                            sw.WriteLine(line);
                        }
                    }
                }
            }
        }
        else
        {
            Console.WriteLine("Source and Destination Log Path required or too many arguments");
        }
    }
}
I have a domain class that needs one of its fields to hold a date no earlier than the day the instance is created.
class myClass {
    Date startDate
    String iAmGonnaChangeThisInSeveralDays

    static constraints = {
        iAmGonnaChangeThisInSeveralDays(nullable: true)
        startDate(validator: {
            def now = new Date()
            def roundedDay = DateUtils.round(now, Calendar.DATE)
            def checkAgainst
            if (roundedDay > now) {
                // round() went up to tomorrow, so step back one day
                Calendar cal = Calendar.getInstance()
                cal.setTime(roundedDay)
                cal.add(Calendar.DAY_OF_YEAR, -1)
                checkAgainst = cal.getTime()
            }
            else checkAgainst = roundedDay
            return (it >= checkAgainst)
        })
    }
}
So several days later, when I change only the string and call save(), the save fails because the validator re-checks the date, which is now in the past. Can I set the validator to fire only on create, or is there some way to detect whether we are creating or updating?
@Rob H: I am not entirely sure how to use your answer. I have the following code causing this error:
myInstance.iAmGonnaChangeThisInSeveralDays = "nachos"
myInstance.save()
if (myInstance.hasErrors()) {
    println "This keeps happening because of the stupid date problem"
}
You can check if the id is set as an indicator of whether it's a new non-persistent instance or an existing persistent instance:
startDate(validator: { date, obj ->
    if (obj.id) {
        // don't check existing instances
        return
    }
    def now = new Date()
    ...
})
One option might be to specify which properties you want validated. From the documentation:
The validate method accepts an optional List argument which may contain the names of the properties that should be validated. When a List is passed to the validate method, only the properties defined in the List will be validated.
Example:
// when saving for the first time:
myInstance.startDate = new Date()
if (myInstance.validate() && myInstance.save()) { ... }

// when updating later:
myInstance.iAmGonnaChangeThisInSeveralDays = 'New Value'
myInstance.validate(['iAmGonnaChangeThisInSeveralDays'])
if (myInstance.hasErrors() || !myInstance.save(validate: false)) {
    // handle errors
} else {
    // handle success
}
This feels a bit hacky, since you're bypassing some built-in Grails goodness. You'll want to be cautious that you aren't bypassing any necessary validation on the domain that would normally happen if you were to just call save(). I'd be interested in seeing others' solutions if there are more elegant ones.
Note: I really don't recommend using save(validate: false) if you can avoid it. It's bound to cause some unforeseen negative consequence down the road unless you're very careful about how you use it. If you can find an alternative, by all means use it instead.