Intermittently: Couchbase Save Not Happening - couchbase

I am using the Couchbase Java SDK client 2.7.11 with Couchbase 6.0 Community Edition. Performing an upsert gives me a success response, but when I fetch the document or look through the Couchbase UI, it's not available.
// getClient() returns a com.couchbase.client.java.Bucket instance
private static final RetryWhenFunction RETRY_POLICY =
RetryBuilder.anyOf( TimeoutException.class,
TemporaryFailureException.class,
RequestCancelledException.class,
BackpressureException.class,
CASMismatchException.class)
.delay(Delay.exponential(TimeUnit.MILLISECONDS, 50))
.max(3)
.build();
// getEpochSecond() returns a long, so cast to int for the document expiry
int expiryTime = (int) (Instant.now().getEpochSecond() + (10 * 60));
StringDocument document = StringDocument.create("ABC_Test", expiryTime, "SomeValue");
StringDocument savedDocument = getClient().async().upsert(document)
    .retryWhen(RETRY_POLICY)
    .doOnError(exception -> {
        String msg = "Unable to update a document = " + exception.getMessage();
        LOGGER.error(() -> msg);
    })
    .doOnCompleted(() -> LOGGER.debug(() -> "Successfully saved document with key \"" + key + "\""))
    .doAfterTerminate(() -> LOGGER.debug(() -> "Processing save of document with key \"" + key + "\" completed."))
    .toBlocking()
    .singleOrDefault(null);
if(savedDocument==null) {
LOGGER.error(()-> "Document with id couldn't be saved: " + key);
} else {
LOGGER.debug(()-> "Saved document: \n" + savedDocument);
}
I faced a similar issue when trying to use queuePush. The queuePush call gave me a success response, but queuePop says the queue itself doesn't exist. I intend to use both saves within, say, the next 5 seconds. I do not have any load test running that could point towards async-delay behaviour.
// expirationTime is well in the future.
getClient().async()
.queuePush(queueName, queueElement, MutationOptionBuilder.builder().createDocument(true).expiry(expirationTime))
.retryWhen(RETRY_POLICY)
.doOnError(exception -> LOGGER.error(() -> "Unable to add element '" + queueElement + "' in queue '" + queueName +
    "' Exception = " + exception.getMessage()))
.doOnCompleted(() -> LOGGER.debug(() -> "Successfully saved document in queue '" + queueName + "'"))
.doAfterTerminate(latch::countDown).subscribe();
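For reference, the intent of the latch::countDown above is that the calling code blocks on the latch before attempting the pop. A minimal sketch of that gating (an assumption on my part for illustration; the latch is a single-count CountDownLatch created just before the push, which is not shown above):
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
CountDownLatch latch = new CountDownLatch(1);
// ... the queuePush(...) chain above, ending in .doAfterTerminate(latch::countDown).subscribe();
// Wait for doAfterTerminate to fire so the push has finished (or failed) before popping.
boolean pushFinished;
try {
    pushFinished = latch.await(5, TimeUnit.SECONDS);
} catch (InterruptedException e) {
    Thread.currentThread().interrupt();
    pushFinished = false;
}
if (!pushFinished) {
    LOGGER.error(() -> "queuePush did not terminate within 5 seconds");
}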
Both of the above scenarios have been noticed intermittently. Could you please suggest how to diagnose this? Does the Community Edition have a way to enable document-level auditing?
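One thing I plan to try while diagnosing (a minimal sketch using the synchronous API of the same 2.x SDK with PersistTo durability, rather than the async chain above) is to require persistence on the upsert and read the document straight back, so a mutation that never reaches the active node's disk surfaces as an exception or a null get:
import com.couchbase.client.java.Bucket;
import com.couchbase.client.java.PersistTo;
import com.couchbase.client.java.document.StringDocument;
Bucket bucket = getClient();
// Blocks until the active node has persisted the mutation; throws DurabilityException otherwise.
StringDocument persisted = bucket.upsert(document, PersistTo.MASTER);
// Read the same key back immediately; null here would confirm the "saved but not visible" symptom.
StringDocument readBack = bucket.get("ABC_Test", StringDocument.class);
if (readBack == null) {
    LOGGER.error(() -> "Upsert reported success but the document is not readable");
}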
I have posted a similar question on the Couchbase forum too, to bring it to a bigger audience and get pointed in the right direction: https://forums.couchbase.com/t/intermittently-couchbase-save-not-happening/28006
Thank you in advance.
Regards

Related

How do I get output to the webpage while still inside a cfscript?

Sorry for the longer post; I'm trying to be specific. I'm a bit of a newb at ColdFusion and Lucee, so forgive me if I have missed something fundamental here. I'm just trying to do a quick POC, but can't get it working. What I am trying to do is make a page call, write to the web page, sleep for a while, and repeat: kind of a heartbeat. What I can't get to happen is the write to the web page... until all the sleep(s) have happened and the page's cfm file completes processing. I've looked extensively for the past couple of days and tried numerous things, but to no avail. From my index.cfm Lucee page, I have a link that launches a new tab and calls my cfm file.
<a href="./pinger2.cfm" target="_blank"><img class="ploverride" src="/assets/img/Ping.png" alt="Ping Test" width="125" height="75"></a>
No problem here, a new tab opens and pinger2.cfm starts processing.
What I'm hoping for is the table heading to almost immediately print to the page, then make the first call out, print the results to the page, sleep, make the next call out, print to the page...but it no workey. Anyone have a clue? The code in the pinger2.cfm file is:
<cfscript>
public struct function pinger( required string url, required string verb, required numeric timeout, struct body )
{
var result = {
success = false,
errorMessage = ""
};
var httpService = new http();
httpService.setMethod( arguments.verb );
httpService.setUrl( arguments.url );
httpService.setTimeout( arguments.timeout );
if( arguments.verb == "post" || arguments.verb == "put" )
{
httpService.addParam(type="body", value=SerializeJSON(arguments.body));
}
try {
callStart = getTickCount();
var resultObject = httpService.send().getPrefix();
callEnd = getTickCount();
callLength = (callEnd-callStart)/1000;
if( isDefined("resultObject.status_code") && resultObject.status_code == 200 )
{
result.success = true;
logMessage = "Pinger took " & toString( callLength ) & " seconds.";
outLine = "<tr><td>" & resultObject.charset & "</td><td>" & resultObject.http_version & "</td><td>" & resultObject.mimetype & "</td><td>" & resultObject.status_code & "</td><td>" & resultObject.status_text & "</td><td>" & resultObject.statuscode & "</td><td>" & logMessage & "</td></tr>";
writeOutput( outLine );
getPageContext().getOut().flush();
return result;
}
else
{
throw("Status Code returned " & resultObject.status_code);
}
}
catch(Any e) {
// something went wrong with the request
writeDump( e );
abort;
}
}
outLine = "<table><tr><th>charset</th> <th>http_version</th> <th>mimetype</th> <th>status_code</th> <th>status_text</th> <th>statuscode</th> <th>time</th> </tr>";
writeOutput( outLine );
getPageContext().getOut().flush();
intCounter = 0;
while(intCounter LT 2)
{
theResponse = pinger(
url = "https://www.google.com",
verb = "GET",
timeout = 5
);
intCounter = intCounter + 1;
getPageContext().getOut().flush();
sleep(2000);
}
outLine = "</table>";
writeOutput( outLine );
</cfscript>
NOTE: I'm sure there are other "less than best" practices in there, but I'm just trying to do this quick and dirty.
I thought the getPageContext().getOut().flush(); would do the trick, but no bueno.
EDIT: If it matters, I'm using CF version 10,0,0,0 and Lucee version 4.5.2.018.
I do something similar to generate ETags by hand (using Lucee 4.5). I stick a simple
GetPageContext().getOut().getString()
in the onRequestEnd function in Application.cfc. This returns a string of HTML just like it's sent to the browser.
You could store that in the appropriate scope (APPLICATION, SESSION, etc) and use it later, or whatever you need. Obviously, all processing needs to be completed, but it shouldn't require any flushes. In fact, flushing may or may not alter its behavior.

ffmpeg azure function consumption plan low CPU availability for high volume requests

I am running an Azure queue function on a consumption plan; my function starts an FFmpeg process and accordingly is very CPU intensive. When I run the function with fewer than 100 items in the queue at once it works perfectly: Azure scales up, gives me plenty of servers, and all of the tasks complete very quickly. My problem is that once I start doing more than 300 or 400 items at once, it starts fine, but after a while the CPU slowly goes from 80% utilisation to only around 10% utilisation, and my functions can't finish in time with only 10% CPU.
Does anyone know why the CPU usage goes lower the more instances my function creates? Thanks in advance, Cuan
Edit: the function is set to only run one at a time per instance, but the problem persists when set to 2 or 3 concurrent processes per instance in host.json.
Edit: the CPU drops become noticeable at 15-20 servers and start causing failures at around 60. After that the CPU bottoms out at an average of 8-10%, with individual instances reaching 0-3%, and the server count seems to increase without limit (which would be more helpful if I got some CPU with the servers).
Thanks again, Cuan.
I've also added the function code to the bottom of this post in case it helps.
using System.Net;
using System;
using System.Diagnostics;
using System.ComponentModel;
public static void Run(string myQueueItem, TraceWriter log)
{
log.Info($"C# Queue trigger function processed a request: {myQueueItem}");
//Basic Parameters
string ffmpegFile = @"D:\home\site\wwwroot\CommonResources\ffmpeg.exe";
string outputpath = @"D:\home\site\wwwroot\queue-ffmpeg-test\output\";
string reloutputpath = "output/";
string relinputpath = "input/";
string outputfile = "video2.mp4";
string dir = @"D:\home\site\wwwroot\queue-ffmpeg-test\";
//Special Parameters
string videoFile = "1 minute basic.mp4";
string sub = "1 minute sub.ass";
//guid tmp files
// Guid g1=Guid.NewGuid();
// Guid g2=Guid.NewGuid();
// string f1 = g1 + ".mp4";
// string f2 = g2 + ".ass";
string f1 = videoFile;
string f2 = sub;
//guid output - we will now do this at the caller level
string g3 = myQueueItem;
string outputGuid = g3+".mp4";
//get input files
//argument
string tmp = subArg(f1, f2, outputGuid );
//String.Format("-i \"" + @"input/tmp.mp4" + "\" -vf \"ass = '" + sub + "'\" \"" + reloutputpath + outputfile + "\" -y");
log.Info("ffmpeg argument is: "+tmp);
//startprocess parameters
Process process = new Process();
process.StartInfo.FileName = ffmpegFile;
process.StartInfo.Arguments = tmp;
process.StartInfo.UseShellExecute = false;
process.StartInfo.RedirectStandardOutput = true;
process.StartInfo.RedirectStandardError = true;
process.StartInfo.WorkingDirectory = dir;
//output handler
process.OutputDataReceived += new DataReceivedEventHandler(
(s, e) =>
{
log.Info("O: "+e.Data);
}
);
process.ErrorDataReceived += new DataReceivedEventHandler(
(s, e) =>
{
log.Info("E: "+e.Data);
}
);
//start process
process.Start();
log.Info("process started");
process.BeginOutputReadLine();
process.BeginErrorReadLine();
process.WaitForExit();
}
public static void getFile(string link, string fileName, string dir, string relInputPath){
using (var client = new WebClient()){
client.DownloadFile(link, dir + relInputPath+ fileName);
}
}
public static string subArg(string input1, string input2, string output1){
return String.Format("-i \"" + @"input/" + input1 + "\" -vf \"ass = '" + @"input/" + input2 + "'\" \"" + @"output/" + output1 + "\" -y");
}
When you use the D:\home directory you are writing to storage that is shared across all of the function instances, which means every instance continually tries to write to the same spot as the functions run, and that causes the massive I/O block. Writing to D:\local instead and then sending the finished file somewhere else solves that issue: rather than each instance constantly writing to a shared location, they only write when completed, and they write to a location designed to handle high throughput.
The easiest way I could find to manage the input and output after writing to D:\local was to hook the function up to an Azure storage container and handle the ins and outs that way. Doing so kept the average CPU at 90-100% for upwards of 70 concurrent instances.

[Qt][QMYSQL] Deployed app - Driver not loaded

First of all, many thanks for those who will take time to help me on this topic. I've searched a lot on many different forums before posting here but it seems I'm missing something.
Well, I'm working on Windows 7 (64 bits) with Qt5.5 / MySQL Server 5.6.
And I use MinGW 5.5.0 32 bits on Qt Creator (auto detected).
It's not a matter of building the drivers; that's done and works perfectly in development! :-)
I can reach my DB and run any query I want to retrieve/insert data.
I'm facing an issue with deploying my application to other computers.
I know that I have to put qsqlmysql.dll in a folder "sqldrivers" placed in my app directory, as well as placing libmysql.dll next to the executable.
So I have something like the following
App directory
App.exe
libmysql.dll
Qt5Core.dll
Qt5Gui
Qt5Sql
Qt5Widget
libwinpthread-1.dll
libstdc++-6.dll
libgcc_s_dw2-1.dll
platforms
qwindow.dll
sqldrivers
qsqlmysql.dll
BUT when I release the application and try to run it on a computer other than the one I used for development, I get a "Driver not loaded" error...
So far, I really have no idea what I've missed...
So please, if anyone could give me some ideas, it would be really, really appreciated!
Here is the part of the code that is relevant, just in case...
main.cpp
QApplication a(argc, argv);
Maintenance w;
w.show();
return a.exec();
Maintenance.cpp
void Maintenance::login(){
int db_select = 1;
this->maint_db = Database(db_select);
/* All that follow is linked to the login of user... */
}
Database.cpp
Database::Database(int default_db)
{
this->db = QSqlDatabase::addDatabase("QMYSQL");
switch(default_db){
case 0:
this->db.setHostName("XXX.XX.XXX.XX");
this->db.setDatabaseName("maintenance_db");
this->db.setUserName("USERNAME");
this->db.setPassword("PASSWORD");
this->db.setPort(3306);
break;
// Only to make some trials in local
case 1:
this->db.setHostName("127.0.0.1");
this->db.setDatabaseName("maintenance_db");
this->db.setUserName("USERNAME");
this->db.setPassword("PASSWORD");
break;
}
/* I've added the following code to try to solve the problem.
   It tells me the available drivers are QMYSQL / QMYSQL3,
   but all the information about the DB is empty (due to the unloaded driver, I assume),
   and the error from lastError() is "Driver not loaded".
*/
QString my_drivers;
for(int i = 0; i < QSqlDatabase::drivers().length(); i++){
my_drivers = my_drivers + " / " + QSqlDatabase::drivers().at(i);
}
QString lib_path;
for(int i = 0; i < QApplication::libraryPaths().length(); i++){
lib_path = lib_path + " / " + QApplication::libraryPaths().at(i);
}
QString start = QString::number(QCoreApplication::startingUp());
QMessageBox::information(0, "BDD init",
"Drivers available: " + my_drivers
+ " \nHostname: " + this->db.hostName()
+ "\nDB name: " + this->db.databaseName()
+ "\nUsername: " + this->db.userName()
+ "\nPW: " + this->db.password()
+ "\n\n" + lib_path + "\n" + start
);
if(this->db.isOpen()){
QMessageBox::information(0, "BDD init", "Already open.");
}
else{
if(this->db.open()){
QMessageBox::information(0, "BDD init", "Opened.");
}
else{
QMessageBox::critical(0, "BDD init", "Not opened.\n" + this->db.lastError().text());
}
}
}
There are at least 3 possible solutions:
Check that all .dll paths are correct with your favourite process monitor
Make sure every .dll has the same architecture as your .exe, which is x86 (32-bit)
Debug with QPluginLoader
The simplest way to create a "deploy" folder for your Windows Qt5 application is to use the windeployqt tool.
Create an empty directory, copy your app.exe into it, and then run windeployqt app.exe
Check out the docs: http://doc.qt.io/qt-5/windows-deployment.html#the-windows-deployment-tool

How to deal with information received in two packets

Here is the situation: I want to make a game, with the client written in Flash and the server in Java. On the server side, the first byte I write to the stream is the protocol ID, like this:
try
{
Output.writeByte(LOGIN);
Output.writeByte((byte)ID);
Output.writeByte(new_position.x);
Output.writeByte(new_position.y);
Output.flush();
}
After the 'onResponse' event is triggered, the socket is read like this:
type:int = socket_client.readByte();
if (type == 0x1)
FP.console.log("You are logged as " + socket_client.readByte() + " in x:" + socket_client.readByte() + " y:" + socket_client.readByte() );
else if (type == 0x2)
FP.console.log("You are now in x:" + socket_client.readByte() + " y:" + socket_client.readByte());
As you probably have guessed by now, this gives me some problems. Sometimes the server sends the information split in two, so the above code throws an EOF exception. Tracing the following code sometimes gives me this result:
trace("SIZE: " + socket_client.bytesAvailable);
//var type:int = socket_client.readByte();
var values:String = "";
while (socket_client.bytesAvailable > 0)
values += socket_client.readByte() + " ";
trace(values);
Values:
SIZE: 1
2
SIZE: 2
2 6
The first '2' is the protocol ID; the second and third stand for the x and y values.
Now, the question is: how can I prevent this from happening? How could I 'wait' until I have all the information I need?
Btw, on the Java side this never happens, but I have no more control there than I do in AS3.
Add BufferedOutputStream in output initialization like this:
Output = new DataOutputStream(new BufferedOutputStream(connection.getOutputStream()));
Basically you need to switch your message format from [type, data] to [type, length, data]. Then, wait to process the data until bytesAvailable >= length, otherwise put it into a buffer.
Here is an example SOCKET_DATA handler that uses this logic:
https://github.com/magicalhobo/Flash-CS5-mobile-proxy/blob/master/com/magicalhobo/mobile/proxy/MobileClient.as#L110
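A minimal sketch of the server side of that [type, length, data] framing (names are illustrative; it assumes the payload length fits in two bytes and is read with readUnsignedShort() on the Flash side):
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
// Frame every message as [type (1 byte), length (2 bytes), payload (length bytes)].
static void sendMessage(DataOutputStream out, byte type, byte[] payload) throws IOException {
    out.writeByte(type);
    out.writeShort(payload.length); // length prefix
    out.write(payload);
    out.flush();
}
// Example: the LOGIN message from the question, with id, x and y packed into the payload.
static void sendLogin(DataOutputStream out, byte id, byte x, byte y) throws IOException {
    ByteArrayOutputStream buf = new ByteArrayOutputStream();
    buf.write(id);
    buf.write(x);
    buf.write(y);
    sendMessage(out, (byte) 0x1, buf.toByteArray());
}
On the AS3 side the SOCKET_DATA handler reads the type and length, and if socket_client.bytesAvailable is still smaller than length it simply returns and waits for the next event, which is the buffering logic the linked example implements.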

How to trigger manual clean of Hudson workspaces

We have a Hudson cluster with eight nodes. When a particular code branch is no longer in active use, we disable the build job, but the workspaces for that job still hang around taking up space on all the nodes.
I am looking for a way to trigger workspace cleanup across all nodes. Note that I am not looking for a "clean workspace before build" solution.
You do not need to write a plugin. You can write a job that uses the Groovy plugin to run a Groovy system script. The job would run, say, nightly. It would identify disabled projects and erase their workspaces. Your script taps into the Hudson Model API. There is a Groovy script console at http://<hudson-server>/script that is very useful for debugging.
Here is a code snippet that should be of direct benefit to you. Run it in the script console and examine the output:
def hi = hudson.model.Hudson.instance
hi.getItems(hudson.model.Job).each {
job ->
println(job.displayName)
println(job.isDisabled())
println(job.workspace)
}
You may also find code snippets in this answer useful. They refer to Jenkins API, but on this level I do not think there is a difference between Jenkins and Hudson.
Update:
Here's how you can do it on multiple slaves: create a multi-configuration job (also called "matrix job") that runs on all the slaves. On each slave the following system Groovy script will give you for every job its workspace on that slave (as well as enabled/disabled flag):
def hi = hudson.model.Hudson.instance
def thr = Thread.currentThread()
def build = thr?.executable
def node = build.executor.owner.node
hi.getItems(hudson.model.Job).each {
job ->
println("---------")
println(job.displayName)
println(job.isDisabled())
println(node.getWorkspaceFor(job))
}
As the script runs on the slave itself, you can wipe out the workspace directly from it. Of course, the workspace may not exist, but that's not a problem. Note that you write the script only once - Jenkins will run it on all the slaves you specify in the matrix job automatically.
I have tried the following script and it works for a single node:
def hi = hudson.model.Hudson.instance
hi.getItems(hudson.model.Job).each {
job ->
if(job.isDisabled())
{
println(job.displayName)
job.doDoWipeOutWorkspace()
}
}
The following Groovy script wipes workspaces of certain jobs on all nodes. Execute it from "Jenkins host"/computer/(master)/script
In the TODO part, change the job name to the one that you need.
import hudson.model.*
// For each job
for (item in Hudson.instance.items)
{
jobName = item.getFullDisplayName()
// check that job is not building
if (!item.isBuilding())
{
// TODO: Modify the following condition to select which jobs to affect
if (jobName == "MyJob")
{
println("Wiping out workspaces of job " + jobName)
customWorkspace = item.getCustomWorkspace()
println("Custom workspace = " + customWorkspace)
for (node in Hudson.getInstance().getNodes())
{
println(" Node: " + node.getDisplayName())
workspacePath = node.getWorkspaceFor(item)
if (workspacePath == null)
{
println(" Could not get workspace path")
}
else
{
if (customWorkspace != null)
{
workspacePath = node.getRootPath().child(customWorkspace)
}
pathAsString = workspacePath.getRemote()
if (workspacePath.exists())
{
workspacePath.deleteRecursive()
println(" Deleted from location " + pathAsString)
}
else
{
println(" Nothing to delete at " + pathAsString)
}
}
}
}
}
else
{
println("Skipping job " + jobName + ", currently building")
}
}
It's a bit late, but I ran into the same problem. My script checks whether at least 2 GB of space is available; if not, all workspaces on the node are cleared to free up space.
import hudson.FilePath.FileCallable
import hudson.slaves.OfflineCause
for (node in Jenkins.instance.nodes) {
computer = node.toComputer()
if (computer.getChannel() == null) continue
rootPath = node.getRootPath()
size = rootPath.asCallableWith({f, c -> f.getUsableSpace()} as FileCallable).call()
roundedSize = size / (1024 * 1024 * 1024) as int
println("node: " + node.getDisplayName() + ", free space: " + roundedSize + "GB")
if (roundedSize < 2) {
computer.setTemporarilyOffline(true, [toString: {"disk cleanup"}] as OfflineCause)
for (item in Jenkins.instance.items) {
jobName = item.getFullDisplayName()
if (item.isBuilding()) {
println(".. job " + jobName + " is currently running, skipped")
continue
}
println(".. wiping out workspaces of job " + jobName)
workspacePath = node.getWorkspaceFor(item)
if (workspacePath == null) {
println(".... could not get workspace path")
continue
}
println(".... workspace = " + workspacePath)
customWorkspace = item.getCustomWorkspace()
if (customWorkspace != null) {
workspacePath = node.getRootPath().child(customWorkspace)
println(".... custom workspace = " + workspacePath)
}
pathAsString = workspacePath.getRemote()
if (workspacePath.exists()) {
workspacePath.deleteRecursive()
println(".... deleted from location " + pathAsString)
} else {
println(".... nothing to delete at " + pathAsString)
}
}
computer.setTemporarilyOffline(false, null)
}
}
I was recently also looking to clean up my Jenkins workspaces, but with a little twist: I wanted to remove only the workspaces of jobs that no longer exist. This is because Jenkins does not get rid of workspaces when a job is deleted, which is pretty annoying.
And we only use a master at the moment, no separate nodes.
I found a script somewhere (can't find the link anymore) and tweaked it a bit for our usage, putting it in a Jenkins job with an 'Execute system Groovy script' build step, running daily:
import hudson.FilePath
import jenkins.model.Jenkins
import hudson.model.Job
def deleteUnusedWorkspace(FilePath root, String path) {
root.list().sort{child->child.name}.each { child ->
String fullName = path + child.name
def item = Jenkins.instance.getItemByFullName(fullName);
println "Checking '$fullName'"
try{
if (item.class.canonicalName == 'com.cloudbees.hudson.plugins.folder.Folder') {
println "-> going deeper into the folder"
deleteUnusedWorkspace(root.child(child.name), "$fullName/")
} else if (item == null) {
// this code is never reached, non-existing projects generate an exception
println "Deleting (no such job): '$fullName'"
child.deleteRecursive()
} else if (item instanceof Job && !item.isBuildable()) {
// don't remove the workspace for disabled jobs!
//println "Deleting (job disabled): '$fullName'"
//child.deleteRecursive()
}
} catch (Exception exc) {
println " Exception happened: " + exc.message
println " So we delete '" + child + "'!"
child.deleteRecursive()
}
}
}
println "Beginning of cleanup script."
// loop over possible slaves
for (node in Jenkins.instance.nodes) {
println "Processing $node.displayName"
def workspaceRoot = node.rootPath.child("workspace");
deleteUnusedWorkspace(workspaceRoot, "")
}
// do the master itself
deleteUnusedWorkspace(Jenkins.instance.rootPath.child("workspace"), "")
println "Script has completed."
Might need some individual tweaking though.
Obviously you should run this script with all delete statements commented out first, and make sure you have a backup before doing an actual run.
It sounds like you are looking for a "delete workspace when disabling build" solution. You could write a Hudson plugin to do this, which is probably overkill.
If I had to do this (which I wouldn't, as we don't have a disk space shortage), I would write a Unix script to find all disabled jobs under the hudson directory. A job is represented by an XML file. Then I'd have the script delete the workspace for any matches. And I'd probably set it up in cron so it runs nightly or weekly or whatever is appropriate for the environment.
If I had to do this (which I wouldn't as we don't have a disk space shortage), I would write a unit script to find all disabled jobs under the hudson directory. A job is represented by an XML file. Then I'd have the script delete the workspace for any matches. And I'd probably set it up in cron so it runs nightly or weekly or whatever is appropriate in the environment.