Overriding a logback log file for every log - logback

I use a file appender to note down the current timestamp when a particular event occurs. Can someone explain how to overwrite the .log file so that only one timestamp is recorded at any given point in time?
If I delete the existing file programmatically and then try to store the value, out of four events I'm able to store only the first and third; for the second and fourth I don't see a .log file, although the console logger says the timestamp was logged (for all four events). This is expected, because I'm deleting the file programmatically somewhere in the middle of the execution.
// timer expired?
if (diffInMin > lagAllowed) {
    file.delete();
    return true;
} else {
    LOGGER.info("Can't record time for next \"{}\" minutes", (lagAllowed - diffInMin));
    return false;
}

One solution would be to use a PrintWriter: if the file already exists, it is truncated to zero size instead of being deleted.
// timer expired?
if (diffInMin > lagAllowed) {
    // Constructing a PrintWriter on an existing file truncates it to zero size
    PrintWriter writer = new PrintWriter(file);
    writer.close();
    return true;
} else {
    LOGGER.info("Can't record time for next \"{}\" minutes", (lagAllowed - diffInMin));
    return false;
}

Related

How to find the flat file which is currently being updated

In SSIS, a folder contains many flat files, and we process them one by one using a Foreach Loop container. If a new file is placed in the folder while it is still being copied, we should not pick it up for further processing; only fully copied files should go on to the next step.
How can we achieve this? Please give your suggestions.
The best way I have done this in the past is to use a C# Script Task and try to open the file exclusively: if the file is still being copied you will get an error (which you catch). You can then set a boolean variable and conditionally process the file only if the open succeeded.
For example:
bool b = true;
try
{
    // Opening with FileShare.None fails while another process is still writing the file.
    using (FileStream f = new FileStream("C:\\Test\\Test.txt", FileMode.Open, FileAccess.ReadWrite, FileShare.None))
    {
        // The open succeeded, so the copy has finished.
    }
}
catch (IOException)
{
    // The file is still locked by the copy; skip it for now.
    b = false;
}

Mallet API - Get consistent results

I am new to LDA and Mallet, and I have the following query.
I tried running Mallet LDA from the command line, and by setting --random-seed to a fixed value I was able to get consistent results across multiple runs of the algorithm.
However, when I tried the Mallet Java API, every time I run the program I get different output.
I googled around and found out that the random seed needs to be fixed, and I have fixed it in my Java code, yet I still get different results.
Could anyone let me know what other parameters I need to consider to get consistent results across multiple runs?
I should add that train-topics, when run multiple times from the command line, yields the same result. However, when I rerun import-dir and then run train-topics, the results do not match the previous ones (probably as expected).
I am OK with running import-dir just once and then experimenting with different numbers of topics and iterations by running train-topics.
Similarly, what needs to be changed or kept constant if I want to replicate this behaviour when using the Java API?
I was able to solve this.
I will respond in detail here:
There are two ways in which Mallet can be run:
a. Command-line mode
b. Using the Java API
To get consistent results across runs, we need to fix the random seed; on the command line we have an option for setting it, so there are no surprises there.
However, when using the API, although we have an option of setting the random seed, we need to know that it has to be done at the proper point, otherwise it does not work (see the code).
I have pasted the code here which creates a model (read: InstanceList) file from the data; we can then use the same model file, set the random seed, and get consistent (read: identical) results on every run.
Creating and saving the model for later use.
Note: follow this link to see the expected format of the input file:
http://mallet.cs.umass.edu/ap.txt
public void getModelReady(String inputFile) throws IOException {
    if (inputFile != null && !inputFile.isEmpty()) {
        // Standard Mallet import pipeline: label, tokenize, lowercase, remove stopwords, map to features
        List<Pipe> pipeList = new ArrayList<Pipe>();
        pipeList.add(new Target2Label());
        pipeList.add(new Input2CharSequence("UTF-8"));
        pipeList.add(new CharSequence2TokenSequence());
        pipeList.add(new TokenSequenceLowercase());
        pipeList.add(new TokenSequenceRemoveStopwords());
        pipeList.add(new TokenSequence2FeatureSequence());

        Reader fileReader = new InputStreamReader(new FileInputStream(new File(inputFile)), "UTF-8");
        CsvIterator ci = new CsvIterator(fileReader, Pattern.compile("^(\\S*)[\\s,]*(\\S*)[\\s,]*(.*)$"),
                3, 2, 1); // data, label, name fields

        InstanceList instances = new InstanceList(new SerialPipes(pipeList));
        instances.addThruPipe(ci);

        // Serialize the InstanceList so the same data can be reloaded for every run
        ObjectOutputStream oos = new ObjectOutputStream(
                new FileOutputStream("Resources\\Input\\Model\\Model.vectors"));
        oos.writeObject(instances);
        oos.close();
    }
}
Once the model file is saved, the following code uses it to generate topics:
public void applyLDA(ParallelTopicModel model) throws IOException {
    InstanceList training = InstanceList.load(new File("Resources\\Input\\Model\\Model.vectors"));
    logger.debug("InstanceList Data loaded.");

    if (training.size() > 0 && training.get(0) != null) {
        Object data = training.get(0).getData();
        if (!(data instanceof FeatureSequence)) {
            logger.error("Topic modeling currently only supports feature sequences.");
            System.exit(1);
        }
    }

    // IT HAS TO BE SET HERE, BEFORE CALLING THE addInstances() METHOD.
    model.setRandomSeed(5);
    model.addInstances(training);
    model.estimate();

    model.printTopWords(new File("Resources\\Output\\OutputFile\\topic_keys_java.txt"), 25, false);
    model.printDocumentTopics(new File("Resources\\Output\\OutputFile\\document_topicssplit_java.txt"));
}
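For completeness, a minimal driver for the two methods above could look like the sketch below; call it from a method that declares throws IOException. The topic count, alpha/beta values, iteration count, input path, and the MalletRunner wrapper class are all assumptions for illustration. Forcing a single worker thread is an extra precaution, since multi-threaded sampling is not guaranteed to be exactly reproducible even with a fixed seed:

// Minimal driver sketch (all concrete values are assumed, not from the original answer)
ParallelTopicModel model = new ParallelTopicModel(20, 50.0, 0.01); // numTopics, alphaSum, beta
model.setNumThreads(1);        // single thread to help keep runs deterministic
model.setNumIterations(1000);  // assumed iteration count

MalletRunner runner = new MalletRunner();          // hypothetical class holding the two methods above
runner.getModelReady("Resources\\Input\\ap.txt");  // assumed input path, formatted like ap.txt
runner.applyLDA(model);                            // sets the random seed before addInstances(), as shown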

Deleted files status unreliably reported in the new Google Drive Android API (GDAA)

This issue has been bugging me since the inception of the new Google Drive Android API (GDAA).
First discussed here, I hoped it would go away in later releases, but it is still there (as of 2014/03/19). The user-trashed files/folders (referring to the 'Remove' action in 'drive.google.com') keep appearing in both the
Drive.DriveApi.query(_gac, query), and
DriveFolder.queryChildren(_gac, query)
as well as
DriveFolder.listChildren(_gac)
methods, even if used with
Filters.eq(SearchableField.TRASHED, false)
query qualifier, or if I use a filtering construct on the results
for (Metadata md : result.getMetadataBuffer()) {
    if ((md == null) || (!md.isDataValid()) || md.isTrashed()) continue;
    dMDs.add(new DrvMD(md));
}
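(For reference, a query with that qualifier is assembled roughly as in the sketch below; the title filter and its value are illustrative assumptions, not taken from the original report.)

// Roughly how a query with the TRASHED qualifier is built (title value assumed for illustration)
Query query = new Query.Builder()
        .addFilter(Filters.and(
                Filters.eq(SearchableField.TITLE, "MYFILE"),   // assumed file title
                Filters.eq(SearchableField.TRASHED, false)))   // should exclude trashed items
        .build();
Drive.DriveApi.query(_gac, query);  // yet the results may still contain user-trashed files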
Using
Drive.DriveApi.requestSync(_gac);
has no impact, and the time elapsed since the removal varies wildly; my last case was over 12 HOURS, and it is completely random.
What's worse, I can't even rely on EMPTY TRASH in 'drive.google.com'; it does not yield any predictable results. Sometimes the file status changes to 'isTrashed()', sometimes it disappears from the result list.
As I kept fiddling with this issue, I ended up with the following superawfulhack:
find file with TRASH status equal FALSE
if (file found and is not trashed) {
    try to write content
    if (write content fails)
        create a new file
}
Not even this helps. The file shows up as healthy even if it is in the trash (and its status was double-filtered by the query and by the metadata test). It can even be happily written into, and when inspected in the trash, it is modified.
The conclusion here is that a fix should get higher priority, since it renders multi-platform use of Drive unreliable. It will be discovered by developers right away in the development / debugging process, steering them away.
While waiting for any acknowledgement from the support team, I devised a HACK that works around this problem. Using the same principle as in SO 22295903, the logic involves falling back to the RESTful API, basically dropping the LIST / QUERY functionality of GDAA.
The high-level logic is:
query the RESTful API to retrieve the ID/IDs of the file(s) in question
use the retrieved ID to get GDAA's DriveId via 'fetchDriveId()'
Here are the code snippets to document the process:
1/ initialize both GDAA's 'GoogleApiClient' and RESTful's 'services.drive.Drive'
GoogleApiClient _gac;
com.google.api.services.drive.Drive _drvSvc;

void init(Context ctx, String email) {
    // build GDAA GoogleApiClient
    _gac = new GoogleApiClient.Builder(ctx).addApi(com.google.android.gms.drive.Drive.API)
        .addScope(com.google.android.gms.drive.Drive.SCOPE_FILE).setAccountName(email)
        .addConnectionCallbacks(ctx).addOnConnectionFailedListener(ctx).build();

    // build RESTful (Drive SDK v2) service to fall back to
    GoogleAccountCredential crd = GoogleAccountCredential
        .usingOAuth2(ctx, Arrays.asList(com.google.api.services.drive.DriveScopes.DRIVE_FILE));
    crd.setSelectedAccountName(email);
    _drvSvc = new com.google.api.services.drive.Drive.Builder(
        AndroidHttp.newCompatibleTransport(), new GsonFactory(), crd).build();
}
2/ a method that queries the Drive RESTful API, returning GDAA's DriveId to be used by the app:
String qry = "title = 'MYFILE' and mimeType = 'text/plain' and trashed = false";

DriveId findObject(String qry) throws Exception {
    DriveId dId = null;
    try {
        final FileList gLst = _drvSvc.files().list().setQ(qry).setFields("items(id)").execute();
        if (gLst.getItems().size() == 1) {
            String sId = gLst.getItems().get(0).getId();
            dId = Drive.DriveApi.fetchDriveId(_gac, sId).await().getDriveId();
        } else if (gLst.getItems().size() > 1) {
            throw new Exception("more than one folder/file found");
        }
    } catch (Exception e) {
        // swallow and return null; the caller treats null as "not found"
    }
    return dId;
}
The findObject() method above (again, I'm using the 'await()' flavor for simplicity) returns the Drive objects correctly, reflecting the trashed status with no noticeable delay (implement it in a non-UI thread).
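Since the 'await()' flavor blocks, the lookup has to stay off the UI thread. A minimal sketch of wrapping it in a worker thread follows; the thread and result handling are my assumptions for illustration, not part of the original hack:

// Run the blocking RESTful lookup on a worker thread; 'qry' is the query string shown above
new Thread(new Runnable() {
    @Override
    public void run() {
        try {
            DriveId dId = findObject(qry);  // blocking call (uses await())
            // if dId is non-null, hand it back to the UI thread here (e.g. via a Handler)
        } catch (Exception e) {
            // findObject() already swallows most errors; log anything that escapes
        }
    }
}).start();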
Again, I would strongly advise AGAINST leaving this in the code longer than necessary, since it is a HACK with unpredictable effects on the rest of the system.

SSIS - In a DFT, how to pass each row as input to an exe and read the output for each row?

In the Data Flow Task, I need to redirect each row to an exe and get the output for each row, similar to a Script Component.
I tried to use Process in a Script Component, but it throws the exception "StandardOut has not been redirected or the process hasn't started yet.".
The code used in the Script Component is:
Process myApp = new Process();
myApp.StartInfo.FileName = @"Someexe.exe";
myApp.StartInfo.Arguments = "param1";
myApp.StartInfo.UseShellExecute = false;
myApp.StartInfo.RedirectStandardOutput = false;
myApp.Start();
while (!myApp.HasExited)
{
    string result = myApp.StandardOutput.ReadToEnd();
}
Any suggestions? Is there a better way to do this?
Your error message is helpful: "StandardOut has not been redirected..."
Since you want to capture the output and redirect it to the destination component, don't you want to change the line:
myApp.StartInfo.RedirectStandardOutput = false;
to be:
myApp.StartInfo.RedirectStandardOutput = true;
Consider the example at this link, on BOL:
ProcessStartInfo.RedirectStandardOutput Property

How to show the song download progress as the song's duration time

What I'm trying to do is show the song download progress in the form of the song's duration time, for example: 00:00, 01:05, 02:14, 03:58, .... 04:13, where 04:13 is the song's total duration. So far I have this code:
var soundClip:Sound;
var sTransform:SoundTransform = new SoundTransform(0.1);

function init() {
    soundClip = new Sound();
    soundClip.load(new URLRequest("magneto.mp3"));
    //soundClip.load(new URLRequest("making.mp3"));
    soundClip.addEventListener(Event.COMPLETE, soundLoaded);
    soundClip.addEventListener(ProgressEvent.PROGRESS, soundLoading);
}
init();

function convertTime(millis:Number):String {
    var displayMinutes:String;
    var displaySeconds:String;
    var Minutes:Number = (millis % (1000*60*60)) / (1000*60);
    var Seconds:Number = ((millis % (1000*60*60)) % (1000*60)) / 1000;
    if (Minutes < 10) {
        displayMinutes = "0" + Math.floor(Minutes);
    } else {
        displayMinutes = Math.floor(Minutes).toString();
    }
    if (Seconds < 10) {
        displaySeconds = "0" + Math.floor(Seconds);
    } else {
        displaySeconds = Math.floor(Seconds).toString();
    }
    return displayMinutes + ":" + displaySeconds;
}

function soundLoaded(e:Event) {
    soundClip.play(0, 0, sTransform);
}

function soundLoading(e:ProgressEvent) {
    trace(convertTime(soundClip.length));
}
As you can see, I'm testing it out with two songs. According to the code above their durations are 03:52 and 11:28, but according to the player window these two songs last 03:52 and 05:44. Here is the code and both mp3 files.
Thank you.
EDIT: I'm analyzing a page which plays the song making.mp3. After debugging it I realized that there is a value which is passed to the player, and it goes like this: 0, 0, 2664, 7576, ... 344370; these values are shown as 00:00, 01:05, 02:14, 03:58, .... 04:13 as the download progresses. Knowing where this data comes from would solve my problem; initially I thought it could be obtained through the length property, but this only worked well for the magneto.mp3 file, not for both songs.
On the whole I want to show:
00:00, 00:23, 01:23 ... 03:57 (where 03:57 is the duration of any given song) as the download progresses.
Thank you for helping me. Cheers :)
Your code has no problems and your technique is correct.
You only need to fetch the total duration at the end of the download. The value of the file length changes as more data is retrieved. If your download stops before reaching the end, you will only have the length of the incomplete file. Possibly add another handler to check for errors and, if it fires, let the user know that the file download is still incomplete.
Update: I figured it out. Add a call to convertTime() in the soundLoaded() method.
What is happening is that the file length is being updated in the PROGRESS event handler, but the final length is often only available in the COMPLETE event, because the PROGRESS handler is called only while the download is incomplete, not after it has finished.
Keep the convertTime() call in the PROGRESS event handler as you do presently.
private function soundLoaded(e:Event):void
{
    soundClip.play(0, 0, sTransform);
    trace(convertTime(soundClip.length));
}
This should do it.
Update 2: This is a known issue reported on many forums. The length of any sound file sampled at less than 44 kHz is reported incorrectly while the download is in progress; only after the download completes is the correct duration reported. This only affects SWF files of version 9 or less.
Changing the output SWF to version 10+ fixes the issue.