I am maintaining an existing Flex desktop application that uses an SQLite3 database. I am very new to Flex programming.
The database originally had some test values, which I changed by deleting the existing table and importing the new data under the same table name.
Now when I run the application, I still see the old data, and on checking the database table I find the test values back there and the new values gone.
I checked the code for any insert or import statement but could not locate one. Has anyone faced a similar issue? Is it because of database caching, and if so, how do I clear this cache? Any hint on what could be the reason behind this weird issue?
Flex (AIR) creates a location to save data locally on the machine. On my machine the path is under my user profile: AppData\Roaming\ProjectName\Local Store\Database\dbname.db
And the code responsible is the following (note the dbWorkFile variable):
public function SQLCon(strQuery:String):SQLResult
{
    try
    {
        var dbStatement:SQLStatement = new SQLStatement();
        // Template database shipped in the compiled application folder
        var dbFile:File = File.applicationDirectory.resolvePath("DataBase/question.db");
        // Working copy in the application storage (roaming) directory
        var dbWorkFile:File = File.applicationStorageDirectory.resolvePath("DataBase/question.db");
        // The template is copied only when no working copy exists yet,
        // so an existing (old) working copy is never overwritten
        if(!dbWorkFile.exists){
            dbFile.copyTo(dbWorkFile);
        }
        conn = new SQLConnection();
        conn.open(dbWorkFile);
        ....
Leaving this answer for the benefit of anyone who faces a similar ghost-data issue :)
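So the "ghost" data comes from the stale working copy in Local Store: because it already exists, the freshly edited database in the application directory is never copied over it. As a minimal sketch (assuming the same paths as the code above), one dev-only way to pick up the new data is to overwrite the working copy; note this discards any changes users made to it:

// Dev-only: overwrite the stale working copy so the new bundled data is used
var dbFile:File = File.applicationDirectory.resolvePath("DataBase/question.db");
var dbWorkFile:File = File.applicationStorageDirectory.resolvePath("DataBase/question.db");
dbFile.copyTo(dbWorkFile, true); // second argument true = overwrite the existing file

Alternatively, simply delete dbname.db from the Local Store\Database folder by hand and let the application copy it again on the next run.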
I am trying to initialize a new MediaWiki family. I am using this guide, of course. In the Upgrading section of the guide, it is mentioned:
As of MediaWiki 1.21, when upgrading MediaWiki from the web installer, $wgSharedTables must be temporarily cleared during upgrade. Otherwise, the shared tables are not touched at all (neither tables with $wgSharedPrefix, nor those with $wgDBprefix), which may lead to a failed upgrade.
This is indeed the case, because using this setting:
$wgSharedDB = 'wiki_shared';
$wgSharedTables[] = array('user','user_groups','actor');
$wgSharedPrefix = '';
I had no success in setting the db up; no shared tables are created in the wiki_shared db (it remains an empty db).
How should I "clear $wgSharedTables" to avoid facing this issue?
(Even though this is old, just in case someone gets here...)
First of all, this is how to clear $wgSharedTables:
$wgSharedTables = [];
All the other variants just add a new empty array as an element inside the array.
Also, this is not the way to set $wgSharedTables. You essentially added an array inside the array; however, each table is supposed to be its own item. Either use array_merge():
$wgSharedTables = array_merge( $wgSharedTables, [ 'user', 'user_groups', 'actor' ] );
Or set each one separately:
$wgSharedTables[] = 'user';
$wgSharedTables[] = 'user_groups';
$wgSharedTables[] = 'actor';
Goal: to update a Fusion Table by replacing old rows with new ones from a CSV file without headers, using ReplaceRows().
I am using the Google.Apis.Fusiontables.v2 library.
I have read and reread the documentation, but still can't get my code working.
Authentication is working and I am able to perform simple INSERTs without issue:
string sql = "INSERT INTO 11t9VLt3vzb46oGQMaS2LTSPWUyBYNcfi1shkmvag (rpu_id, NO_BAIL, 'Usage (description)', 'Use (description)', 'Sup. louable m2', 'Sup. Utilisable m2', 'SumTotal Lou', 'Percent Lou', 'SumTotal Util', 'Percent Util') VALUES (9999,1111,'Test','Test En',1,2,3,4,5,6)";
Sqlresponse sqlRspnse = service.Query.Sql(sql).Execute();
I have tried ReplaceRowsMediaUpload directly from the TableResource class without luck.
Calling the upload function from the service object doesn't error out, but I'm not sure what to do next to actually replace the rows in the Fusion Table (service is a FusiontablesService):
StreamReader str = new StreamReader(Server.MapPath("~") + @"\sample2.csv");
service.Table.ReplaceRows("1X7JMLFy75uq20UnU6cLrGTTDfp6lLuD1Fc3vYYjQ", str.BaseStream, "text/csv").Upload();
I've tried:
service.Table.ReplaceRows("1X7JMLFy75uq20UnU6cLrGTTDfp6lLuD1Fc3vYYjQ").Execute()
following the upload, but this just puts the Fusion table in "stuck" mode.
Can someone please provide the lines required to make ReplaceRows work? (Explanations would be appreciated, but aren't necessary!).
You should change "text/csv" to "application/octet-stream". (See the accepted MIME type here: https://developers.google.com/fusiontables/docs/v2/reference/table/replaceRows)
StreamReader str = new StreamReader(Server.MapPath("~") + @"\sample2.csv");
service.Table.ReplaceRows("1X7JMLFy75uq20UnU6cLrGTTDfp6lLuD1Fc3vYYjQ", str.BaseStream, "application/octet-stream").Upload();
The call to Upload should be enough.
Also, try creating a new table to test it out, to be sure everything is set up correctly.
You can also use a raw REST API call to replace the rows in your Google Fusion Table directly, instead of going through the client library methods. Here is an example:
POST https://www.googleapis.com/upload/fusiontables/v2/tables/tableId/replace
Please refer to this document for more details; it has a testing environment tool too.
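For illustration, a minimal C# sketch of what that raw call could look like with HttpClient, assuming you already have a valid OAuth 2.0 access token and that the media-upload variant of the endpoint takes uploadType=media with the CSV as the request body (the token and file path below are placeholders):

// Hypothetical sketch: upload a CSV body to the replaceRows media-upload endpoint.
using System.IO;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class ReplaceRowsRestExample
{
    static async Task ReplaceRowsAsync(string tableId, string csvPath, string accessToken)
    {
        using (var client = new HttpClient())
        {
            // Authenticate with a bearer token obtained from your OAuth flow
            client.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Bearer", accessToken);

            // Media-upload endpoint; uploadType=media sends the CSV as the raw request body
            var url = "https://www.googleapis.com/upload/fusiontables/v2/tables/"
                      + tableId + "/replace?uploadType=media";

            using (var content = new StreamContent(File.OpenRead(csvPath)))
            {
                content.Headers.ContentType =
                    new MediaTypeHeaderValue("application/octet-stream");

                var response = await client.PostAsync(url, content);
                response.EnsureSuccessStatusCode(); // throws if the replace request failed
            }
        }
    }
}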
I have used the openAsync() function many times in my application to open an SQLite connection successfully. But lately I added more code that also uses openAsync(), and now I get this error:
Error: Error #3110: Operation cannot be performed while SQLStatement.executing is true.
at Error$/throwError()
at flash.data::SQLStatement/checkReady()
at flash.data::SQLStatement/execute()
at Function/com.lang.SQL:SQLErrorStack/deleteAllRecordsFromErrorStackTable/com.lang.SQL:connOpenHandler()[C:\work\Lang\trunk\actionscript\src\com\lang\SQL\SQLErrorStack.as:466]
It looks like the previous statement had not finished executing when another one started.
My question is: why was the execution on the second connection rejected? I expected that some kind of queuing mechanism would be used, but there isn't one. I have looked everywhere for a way to cope with this problem, but failed. Can you help?
Could a single open DB connection solve the problem? What changes would I then need to make to my code?
This is code similar to what appears a few times in my application:
var SQLquery:String;
SQLquery = "DELETE FROM ErrorStackTable";

var sqlConn:SQLConnection = new SQLConnection();
sqlConn.addEventListener(SQLEvent.OPEN, connOpenHandler);

var dbFile:File = new File();
dbFile.nativePath = FlexGlobals.topLevelApplication.databaseFullPath_conf + "\\" + FlexGlobals.topLevelApplication.databaseName_conf;

sqlConn.openAsync(dbFile); // openDB

sqlSelect = new SQLStatement();
sqlSelect.sqlConnection = sqlConn;
sqlSelect.text = SQLquery;

function connOpenHandler(event:SQLEvent):void
{
    sqlSelect.addEventListener(SQLEvent.RESULT, resultSQLHandler);
    sqlSelect.addEventListener(SQLErrorEvent.ERROR, errorHandler);
    sqlSelect.execute();
}
In big Flex applications, try to avoid openAsync(db) calls because of the reusability of the SQL code: if you have many SQL statements to execute, you end up defining more and more statements, and if you feed in a dynamic result (an Array coming back from a web service / RPC call) you can hit this error even though the execution itself appears to succeed, and the array insertion into the database will fail. See the linked answer for more details.
I just changed conn.openAsync(db); to conn.open(db); and it worked
Thanks
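For reference, a minimal sketch of the synchronous variant of the question's code under that approach (the statement and table names follow the snippet above; treat this as an illustration rather than the exact fix):

var sqlConn:SQLConnection = new SQLConnection();
var dbFile:File = new File();
dbFile.nativePath = FlexGlobals.topLevelApplication.databaseFullPath_conf + "\\" + FlexGlobals.topLevelApplication.databaseName_conf;

// Synchronous open: the call blocks until the connection is ready,
// so a statement can never run while the connection is still opening.
sqlConn.open(dbFile);

var sqlDelete:SQLStatement = new SQLStatement();
sqlDelete.sqlConnection = sqlConn;
sqlDelete.text = "DELETE FROM ErrorStackTable";
sqlDelete.execute(); // completes before returning; no OPEN event listener needed

The trade-off is that synchronous calls block the UI, so for long-running queries the asynchronous route with a result listener (or a queue that only executes the next statement after the previous RESULT event) is still the safer choice.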
I started using SlickGrid recently but have not gotten very deep into it yet. I'm trying to connect it to my own MySQL database but I haven't had any luck doing so.
I want this for display purposes, so all I need (for now) is to show the data from the database in the SlickGrid.
Other than the connection to the database:
var getDB = new ActiveXObject("ADODB.Connection") ;
var cntstring = "DSN=adsn;UID=root;PWD=1234";
getDB.Open(cntstring);
var rset = new ActiveXObject("ADODB.Recordset");
I don't know how to populate each cell of the SlickGrid from the recordset.
Any help is greatly appreciated!
Build a grid with sample data first and get it working the way you want. Then retrieve the data from the DB and format it like the sample data.
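For example, a minimal sketch of a SlickGrid fed with hard-coded sample data (the container id, column names, and fields here are made up for illustration):

// Minimal SlickGrid with hard-coded rows; later, build `data` from your ADODB recordset instead.
var columns = [
  { id: "id",   name: "ID",   field: "id" },
  { id: "name", name: "Name", field: "name" }
];

var data = [
  { id: 1, name: "Alice" },
  { id: 2, name: "Bob" }
];

var options = { enableCellNavigation: true, enableColumnReorder: false };

// "#myGrid" is a div on the page with an explicit width and height.
var grid = new Slick.Grid("#myGrid", data, columns, options);

Once that renders, loop over the recordset, push one plain object per row into data, and refresh the grid (grid.invalidate() plus grid.render(), or simply re-create it).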
Visual Web Developer, Entity Data Source model. I have it creating the new database fine. Example:
It creates SAMPLE1.MDF and SAMPLE1.LDF.
When I run my app, it creates another file, SAMPLE1_LOG.LDF.
When I run CreateDatabase, is there a place I can specify _LOG.LDF for the log file name? SQL Server 2008 R2.
It messes things up when I run the DeleteDatabase functions... 2 log files...
How come it does not create the file SAMPLE1_Log.ldf to start with, if that is what it is looking for...
Thank you for your time,
Frank
// database or initial catalog produce same results...
// strip the .mdf off of newfile and see what happens?
// nope. this did not do anything... still does not create the ldf file correctly!!!
// sample1.mdf, sample1.ldf... but when run, it creates sample1_log.LDF...
newfile = newfile.Substring(0, newfile.Length - 4);
String mfile = "Initial Catalog=" + newfile + ";data source=";
String connectionString = FT_EntityDataSource.ConnectionManager.GetConnectionString().Replace("data source=", mfile);
// String mexclude = @"attachdbfilename=" + "|" + "DataDirectory" + "|" + @"\" + newfile + ";";
// nope. must keep attachdbfilename to create the file in App_Data, otherwise it goes to Documents and Settings, etc. under SQLEXPRESS.
// connectionString = connectionString.Replace(mexclude, "");
Labeldebug2.Text = connectionString;
using (FTMAIN_DataEntities1 context = new FTMAIN_DataEntities1(connectionString))
{
    // try
    // {
    if (context.DatabaseExists())
    {
        Buttoncreatedb.Enabled = false;
        box.Checked = true;
        boxcreatedate.Text = DateTime.Now.ToString();
        Session["zusermdf"] = Session["zusermdfsave"];
        return;

        // Make sure the database instance is closed.
        // context.DeleteDatabase();
        // i have entire diff section for deletedatabase.. not here.
    }

    // View the database creation script.
    // Labeldebug.Text = Labeldebug.Text + " script ==> " + context.CreateDatabaseScript().ToString().Trim();
    // Console.WriteLine(context.CreateDatabaseScript());

    // Create the new database instance based on the storage (SSDL) section
    // of the .edmx file.
    context.CreateDatabaseScript();
    context.CreateDatabase();
}
Took out all the try/catch so I can see anything that might happen...
==========================================================================
Rough code while working out the kinks...
Connection string it creates:
metadata=res://*/FT_EDS1.csdl|res://*/FT_EDS1.ssdl|res://*/FT_EDS1.msl;provider=System.Data.SqlClient;provider connection string="Initial Catalog=data_bac100;data source=.\SQLEXPRESS;attachdbfilename=|DataDirectory|\data_bac100.mdf;integrated security=True;user instance=True;multipleactiveresultsets=True;App=EntityFramework"
In this example, the file to create is "data_bac100.mdf".
It creates data_bac100.mdf and data_bac100.ldf.
When I actually use this file and its tables at run time, it auto-creates data_bac100_log.LDF.
1) I was trying to just not create the .ldf, so that when the system runs it creates the single log file right off the bat...
2) The Initial Catalog and/or Database keywords are ONLY added to the connection string to run CreateDatabase()... the regular connection strings created in web.config only have the attachdbfilename stuff, and work fine.
I have one connection string for unlimited databases, with the main database in the web.config. I use an initialize section based on the user roles (visitor, member, admin, anonymous, or not authenticated), which sets the database correctly with an expression builder and a function that parses the connection string with the correct values for the database to operate on. This all runs fine.
The Entity Framework automatically generates the script. I have tried with and without the .mdf extension, and it makes no difference... I thought maybe there is a setting somewhere that holds naming conventions for ldf files...
Eventually all of this will be for naught when I start trying to deploy somewhere that does not use the App_Data folder anyway...
Here is an example of the connection string created when running the application:
metadata=res://*/FT_EDS1.csdl|res://*/FT_EDS1.ssdl|res://*/FT_EDS1.msl;provider=System.Data.SqlClient;provider connection string="data source=.\SQLEXPRESS;attachdbfilename=|DataDirectory|\TDSLLC_Data.mdf;integrated security=True;user instance=True;multipleactiveresultsets=True;App=EntityFramework"
In this case, it uses the TDSLLC_Data.mdf file...
04/01/2012 follow-up... From the Entity Framework documentation (link below):
Feature: Log files created by the ObjectContext.CreateDatabase method.
Change: When the CreateDatabase method is called, either directly or by using Code First with the SqlClient provider and an AttachDBFilename value in the connection string, it creates a log file named filename_log.ldf instead of filename.ldf (where filename is the name of the file specified by the AttachDBFilename value).
Impact: This change improves debugging by providing a log file named according to SQL Server specifications. It should have no unexpected side effects.
http://msdn.microsoft.com/en-us/library/hh367887(v=vs.110).aspx
I am on Windows XP with .NET 4 (not .NET 4.5)... will hunt some more, but it looks like an issue that cannot be changed.
4/1/2012, 4:30...
OK, more hunting and searching through some of the inconsistencies I have experienced with CreateDatabase and DatabaseExists... so .NET 4.5 is supposed to add the _log.ldf, and not just .ldf files, so they must have addressed this for some reason....
Found others with the same issues, but a different server....
MySQL has a connector for EF4; the current version is 6.3.5 and its main functionalities are working fine, but it still has issues with a few methods, e.g.
•System.Data.Objects.ObjectContext.CreateDatabase()
•System.Data.Objects.ObjectContext.DatabaseExists()
which makes it difficult to fully use the model-first approach. It is possible to work around this by manually editing the MySQL script (available with the CreateDatabaseScript method). The MySQL team doesn't seem eager to solve those bugs; I'm not sure what their commitment level actually is, but it certainly is lower than it once was.
That being said, the same methods fail with SQL CE too (they are not implemented, and I don't see the MS team as likely to tackle that soon).
Ran out of space below... It just becomes a problem when you create a database and it does not create the _log.ldf file, only the .ldf file; then you use the database and it creates a _log.ldf file... now you have two ldf files, one of which becomes invalid. Then, when you are done with the database, you delete it and try to create a new one, and since an ldf already exists, it will not work....
It turns out this is just the way it is with EF4, and they changed it in the EF 4.5 beta to create the _log.ldf file to match what is created when the database is used.
Thanks for your time.
I've never used this "mdf attachment" feature myself and I don't know much about it, but according to the xcopy deployment documentation, you should not create a log file yourself because it will be automatically created when you attach the mdf. The docs also mention naming and say that the new log filename ends in _log.ldf. In other words, this behaviour appears to be by design and you can't change it.
Perhaps a more important question is, why do you care what the log file is called? Does it actually cause any problems for your application? If so, you should give details of that problem and see if someone has a solution.