K6 Stress Test with shuffled images

Context
I am building the JavaScript file to be loaded and executed by the k6 tool.
It will be used for both stress and spike tests.
My POST requests will contain 1 image and 1 id.
I want to use a random image from 7 known options.
I want to randomly generate the id.
Question
Where should I randomize both the image and the id used in the requests? In the "init context" or the "VU context"?
Code considering "init context"
import http from "k6/http";
import { group } from "k6";

// helper assumed by the question: random integer in [min, max]
function getRandomInt(min, max) {
    return Math.floor(Math.random() * (max - min + 1)) + min;
}

let rand_id = getRandomInt(10000, 99999);
let image = open("face" + getRandomInt(0, 6) + ".jpg", "b");

export default function () {
    group("post_request", function () {
        http.post("https://my_api", {
            "id": rand_id,
            "image": http.file(image),
        });
    });
}
Code considering "vu context"
let images = [];
for (var i = 0; i <= 6; i++) {
    images.push(open("face" + i + ".jpg", "b"));
}

export default function () {
    group("post_request", function () {
        http.post("https://my_api", {
            "id": getRandomInt(10000, 99999),
            "image": http.file(open(images[getRandomInt(0, 6)], "b")),
        });
    });
}

tl;dr Given that you want it to be random - "VU context"
As explained in the k6 test lifecycle documentation, the init context is executed once per VU (and at least once more before the test starts).
This means that if you do your random number generation in the init context, you will get the same "random" number for every iteration of a given VU. Different VUs will still have different random values, but those values won't change between iterations; if that is okay for your use case, that is perfectly fine.
But I guess what you want is to generate a new random id on each iteration and use the corresponding id and image. This means you will need to build the array of images in the init context, as open() isn't available in VU code; so instead of open(....getRandomInt...) in the VU code you should have images[getRandomInt(0,6)].
Also, for the record, each VU gets its OWN copy of the images, so this might be a problem if the images are large or you simply don't have enough memory for the number of VUs you want to use.
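Putting that together, a minimal sketch of the corrected script (the getRandomInt helper is assumed from the question and implemented here the usual Math.random way):

import http from "k6/http";
import { group } from "k6";

// assumed helper: random integer in [min, max]
function getRandomInt(min, max) {
    return Math.floor(Math.random() * (max - min + 1)) + min;
}

// init context: open() only works here, so load all 7 images up front
let images = [];
for (let i = 0; i <= 6; i++) {
    images.push(open("face" + i + ".jpg", "b"));
}

export default function () {
    group("post_request", function () {
        http.post("https://my_api", {
            "id": getRandomInt(10000, 99999),               // fresh id per iteration
            "image": http.file(images[getRandomInt(0, 6)]), // pre-loaded random image
        });
    });
}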

Related

Set instanceTree to a custom node in Forge 3D viewer

Let's say I'm working with a 3D file which is the combination of one Architectural model and one Structural model.
The instance tree or Model Browser looks like this
root/
Arch/
Level 01/
Level 02/
...
Str/
Level 01/
Level 02/
...
I want to display only the Level 01.
So I:
Followed the steps in the Viewer tutorial
Added an event listener to both Autodesk.Viewing.GEOMETRY_LOADED_EVENT & Autodesk.Viewing.OBJECT_TREE_CREATED_EVENT
When the 2 are fired, I use the code in this article to display only the Level 01 without ghosting.
I have 2 problems with this approach:
I have to wait until the entire model is loaded before I can filter the level.
After filtering the level, if I click on Model Browser, I can still see the entire model structure (with everything hidden except Level 01). How can I set the instance tree to only contain what's below?
root/
Arch/
Level 01/
Str/
Level 01/
EDIT
At what point am I supposed to override the shouldInclude() function?
I've tried the following and set a breakpoint, but it seems it never gets called... I also tried moving it around, but in vain.
const start = Date.now();

Autodesk.Viewing.UI.ModelStructurePanel.shouldInclude = (node) => {
    Logger.log(node);
    return true;
};

Autodesk.Viewing.Initializer(options, () => {
    Logger.log(`Viewer initialized in ${Date.now() - start}ms`);
    const config = {};
    // prettier-ignore
    Autodesk.Viewing.theExtensionManager.registerExtension('MyAwesomeExtension', MyAwesomeExtension);
    viewerApp = new Autodesk.Viewing.ViewingApplication('MyViewerDiv');
    viewerApp.registerViewer(viewerApp.k3D, Autodesk.Viewing.Private.GuiViewer3D, config);
    loadDocumentStart = Date.now();
    // prettier-ignore
    viewerApp.loadDocument(documentId, onDocumentLoadSuccess, onDocumentLoadFailure);
});
Regarding #1: the object tree is stored in the file's internal database which - for performance reasons - is only loaded after the actual geometry.
Regarding #2: you can subclass the ModelStructurePanel class and add your own behavior, for example, by overriding the ModelStructurePanel#shouldInclude method.
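For illustration, a rough sketch of that subclassing idea (untested; the panel constructor arguments and the filtering condition are assumptions, though GuiViewer3D does expose setModelStructurePanel):

class MyStructurePanel extends Autodesk.Viewing.UI.ModelStructurePanel {
    shouldInclude(node) {
        // hypothetical filter: return true only for the nodes you want listed
        return true;
    }
}

// hypothetical wiring: replace the default Model Browser panel with the custom one
viewer.setModelStructurePanel(new MyStructurePanel(viewer.container, 'MyStructurePanel', 'Model'));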
Since I wasn't able to understand how to use ModelStructurePanel, I overrode Autodesk.Viewing.ViewingApplication.selectItem to modify only the options that are passed to either loadDocumentNode or startWithDocumentNode, as below:
const options = {
    ids: leafIDs.length > 0 ? leafIDs : null, // changed this line
    acmSessionId: this.myDocument.acmSessionId,
    loadOptions,
    useConsolidation: this.options.useConsolidation,
    consolidationMemoryLimit: this.options.consolidationMemoryLimit || 100 * 1024 * 1024, // 100 MB
};
With leafIDs being an array of objectIDs to display. I was able to build it by:
querying the Model Derivative API using GET :urn/metadata/:guid
going through the tree to find the ids I am interested in (see the sketch below).
There's probably a more elegant way to do this, but that's the best I could do so far.
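A hypothetical sketch of that traversal, assuming the object-tree shape returned by GET :urn/metadata/:guid (nodes with objectid, name, and a child objects array):

function collectLeafIDs(node, insideTarget, out) {
    const isTarget = node.name === 'Level 01';       // hypothetical level filter
    if ((insideTarget || isTarget) && !node.objects) {
        out.push(node.objectid);                     // leaf node: keep its objectid
    }
    (node.objects || []).forEach((child) =>
        collectLeafIDs(child, insideTarget || isTarget, out));
    return out;
}

// usage: const leafIDs = collectLeafIDs(metadata.data.objects[0], false, []);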

Limit rendered image size in icepdf

While rendering a bunch of PDFs to images, icepdf seemingly randomly bombs out with an OutOfMemoryError. Trying to track this down I find two things:
Close to the OOM it rendered an A0 page or similarly large document pages
With Eclipse Memory Analyzer I find 1/2 GB images in memory.
This suggests limiting the output image size to something manageable. I wonder what the easiest way to do this is.
I looked at icepdf's Page object, but there it is strongly recommended to just always use Page.BOUNDARY_CROPBOX, and other uses seem not to be documented in the Javadoc.
How can I limit the output image size of Document.getPageImage, or what other measure could I use to prevent the OOM (other than just increasing the Xmx, which I can't)? Reducing image quality is an option, but it should apply only to "oversize" images, not to all.
I already tried to use a predefined image via Document.paintPage(), but this was not sufficient.
Debugging finally allowed me to zoom in on a document that is problematic. I get a log like:
2016-12-09T14:23:35Z DEBUG class org.icepdf.core.pobjects.Document 1 MEMFREE: 712484296 of 838860800
2016-12-09T14:23:35Z DEBUG class org.icepdf.core.pobjects.Document 1 LOADING: ..../F1-2.pdf
2016-12-09T14:23:37Z WARN class org.icepdf.core.pobjects.graphics.ScaledImageReference 1 Error loading image: 9 0 R Image stream= {Type=XObject, Length=8 0 R, Filter=FlateDecode, ColorSpace=DeviceGray, Decode=[1, 0], Height=18676, Width=13248, Subtype=Image, BitsPerComponent=1, Name=Im1} 9 0 R
so this would be Height=18676, Width=13248, which is really huge (decoded as a 4-bytes-per-pixel raster, that is 18676 x 13248 x 4, roughly 1 GB).
I guess that the OOM happens already during loading of the image, so later scaling does not help. It also seems that the property org.icepdf.core.imageReference=scaled does not take effect early enough.
For me it would be fine to just ignore oversized images like this. Any chance?
Image loading is by far the most memory-expensive task when decoding PDF content. At this time there isn't an easy way to turn off image loading for really large images, but I'll give you a few code hints if you want to implement this yourself.
The ImageReferenceFactory.java class is the factory behind the system property org.icepdf.core.imageReference; you'll see that the default for getImageReference() is ImageStreamReference. You can create a new ImageReference type like this:
public static org.icepdf.core.pobjects.graphics.ImageReference
getImageReference(ImageStream imageStream, Resources resources, GraphicsState graphicsState,
                  Integer imageIndex, Page page) {
    switch (scaleType) {
        case SCALED:
            return new ScaledImageReference(imageStream, graphicsState, resources, imageIndex, page);
        case SMOOTH_SCALED:
            return new SmoothScaledImageReference(imageStream, graphicsState, resources, imageIndex, page);
        case MIP_MAP:
            return new MipMappedImageReference(imageStream, graphicsState, resources, imageIndex, page);
        case SKIP_LARGE:
            return new SkipLargeImageReference(imageStream, graphicsState, resources, imageIndex, page);
        default:
            return new ImageStreamReference(imageStream, graphicsState, resources, imageIndex, page);
    }
}
Next you can extend the ImageStreamReference class with your new SkipLargeImageReference class. Then override the call() method as follows, and it will skip the loading of any image over the defined MAX_SIZE.
public BufferedImage call() {
    BufferedImage image = null;
    if (imageStream.getWidth() < MAX_SIZE && imageStream.getHeight() < MAX_SIZE) {
        long start = System.nanoTime();
        try {
            image = imageStream.getImage(graphicsState, resources);
        } catch (Throwable e) {
            logger.log(Level.WARNING, "Error loading image: " + imageStream.getPObjectReference() +
                    " " + imageStream.toString(), e);
        }
        long end = System.nanoTime();
        notifyImagePageEvents((end - start));
        return image;
    }
    return null;
}
On a side note: to minimize the amount of memory needed to decode an image, make sure you are using org.icepdf.core.imageReference=default, as this will decode the image only once. org.icepdf.core.imageReference=scaled will actually decode the image at full size and then do the scaling, which can create a very large memory spike. We are experimenting with NIO's direct ByteBuffers, which looks promising for moving the decode memory usage off the heap, so hopefully this will get better in the future.
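For reference, that property can be supplied when launching the JVM, e.g. java -Dorg.icepdf.core.imageReference=default -jar yourapp.jar (the jar name is hypothetical), or set programmatically via System.setProperty before the first document is opened.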

Push JSON File to Firebase

I've got a large JSON file that I'd like to push to my Firebase. It's currently in a specific format that I'd like to slightly change when pushed.
My current JSON file looks a bit like this:
"item": [
{
"title": "Hernia Repair",
"dc:creator": "realph",
"content:encoded": "A hernia occurs when an internal part of the body pushes through a weakness in the muscle or surrounding tissue wall. Your muscles are usually strong and tight enough to keep your intestines and organs in place, but a hernia can develop if there are any weak spots.",
},
...
]
While my Firebase items look like this:
"services" : {
"-JfTLQsxlZr6W2JWwMMd" : {
"description" : "Hernia repair refers to a surgical operation for the correction of a hernia (a bulging of internal organs or tissues through the wall that contains itself.",
"title" : "Hernia Repair",
},
...
}
I'm trying to push each one of these items to the services object that's already set up in my Firebase. But I'd like to push each item under a new unique id (i.e. -JfTLQsxlZr6W2JWwMMd), just like in my Firebase object (above). I also want to push the title to title and content:encoded to description.
Is this even possible? Doing this would potentially save me a lot of time going forward.
Any help from someone that is familiar with this sort of thing would be appreciated. Thanks in advance!
Update
This is what I was thinking of doing, but I don't believe it's wired up correctly. I'm getting back a unique key with the console.log, but no item is being added to the services object:
$scope.convertItems = function() {
    for (var i = 0; i < $scope.items.length; i++) {
        var newService = {
            title: title,
            description: 'content:encoded'
        };
    }
    var promise = ServiceService.add(newService);
    promise.then(function(data) {
        console.log(data.name());
    });
};
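The thread doesn't include an accepted answer here, but as a minimal sketch of the intended mapping, assuming the legacy Firebase JS API of that era (the URL and the looping over $scope.items are hypothetical; push() generates the unique key, and name() returns it in that SDK version, as in the question's code):

var ref = new Firebase('https://my-app.firebaseio.com/services'); // hypothetical URL

$scope.convertItems = function() {
    $scope.items.forEach(function(item) {
        var newRef = ref.push({
            title: item.title,                   // title -> title
            description: item['content:encoded'] // content:encoded -> description
        });
        console.log(newRef.name());              // the generated unique id
    });
};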

Load a single record from a JSON API using Sencha Touch 2

I am having a horrible time understanding Sencha Touch 2's architecture. I'm finding even the most basic things I do in other languages and frameworks to be incredibly painful.
Currently, I just want to do a standard Master/Detail view. I load a store into a list view and would like to click on each list item to slide in a detail view. Since my initial list view can contain quite a lot of items, I'm only loading a little bit of the data with this method in my controller:
viewUserCommand: function(list, record) {
    // console.log(record);
    var profileStore = Ext.getStore("Profiles");
    profileStore.setProxy({
        url: 'http://localhost:8000/profile/' + record.data.user_id
    });
    profileStore.load();
    // console.log(profileStore);
    Ext.Viewport.animateActiveItem(Ext.getCmp('profileview'), this.slideLeftTransition);
}
First, modifying the url property for each tap event seems a bit hacky. Isn't there a way to specify "this.id" or something along those lines, and then pass that to my store? Or would that require loading the entire DB table into an object?
I can console.log the return from this method, and it's exactly what I want. How do I populate the detail view? I've tried utilizing a DataView component, but it doesn't show any data. The examples on Sencha's website are fairly sparse and relatively contextless, which means that even copying and pasting their examples is likely to fail. (Any examples I've tried using Ext.modelMgr.getModel() have failed.)
I know it's partly that this framework is new and there's probably a huge gaping hole in my understanding of it, but does anyone have any clue?
I would suggest you check out the docs; there's an example of loading a single model:
http://docs.sencha.com/touch/2-0/#!/api/Ext.data.Model
Ext.define('User', {
    extend: 'Ext.data.Model',
    config: {
        fields: ['id', 'name', 'email'],
        proxy: {
            type: 'rest',
            url : '/users'
        }
    }
});

// get a reference to the User model class
var User = Ext.ModelManager.getModel('User');

// Uses the configured RestProxy to make a GET request to /users/123
User.load(123, {
    success: function(user) {
        console.log(user.getId()); // logs 123
    }
});
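Applied to the question's controller, a hedged sketch (assuming a 'Profile' model configured with a rest proxy at /profile, and that the detail view renders a record via a tpl, so setRecord applies it):

viewUserCommand: function(list, record) {
    var Profile = Ext.ModelManager.getModel('Profile');
    Profile.load(record.get('user_id'), {   // GET /profile/<user_id>
        scope: this,
        success: function(profile) {
            var view = Ext.getCmp('profileview');
            view.setRecord(profile);        // assumes the view uses a tpl
            Ext.Viewport.animateActiveItem(view, this.slideLeftTransition);
        }
    });
}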

Looping through JSON with node.js

I have a JSON file which I need to iterate over, as shown below...
{
    "device_id": "8020",
    "data": [{
        "Timestamp": "04-29-11 05:22:39 pm",
        "Start_Value": 0.02,
        "Abstract": 18.60,
        "Editor": 65.20
    }, {
        "Timestamp": "04-29-11 04:22:39 pm",
        "End_Value": 22.22,
        "Text": 8.65,
        "Common": 1.10,
        "Editable": "true",
        "Insert": 6.0
    }]
}
The keys in data will not always be the same (I've just used examples; there are 20 different keys), and as such, I cannot set up my script to statically reference them to get the values.
Otherwise I could state:
var value1 = json.data.Timestamp;
var value2 = json.data.Start_Value;
var value3 = json.data.Abstract;
etc
In the past I've used a simple foreach loop on the data node (in PHP)...
foreach ($json->data as $key => $val) {
    switch ($key) {
        case 'Timestamp':
            //do this
            break;
        case 'Start_Value':
            //do this
            break;
    }
}
But I don't want to block the script. Any ideas?
You can iterate through JavaScript objects this way:
for (var attributename in myobject) {
    console.log(attributename + ": " + myobject[attributename]);
}
myobject could be your json.data (note that data is an array, so here you would apply the loop to each of its elements).
I would recommend taking advantage of the fact that nodeJS will always be ES5. Remember, this isn't the browser; you can depend on the language's implementation being stable. That said, I would recommend against ever using a for-in loop in nodeJS, unless you really want to do deep recursion up the prototype chain. For simple, traditional looping I would recommend making good use of the Object.keys method in ES5. If you view the following JSPerf test, especially if you use Chrome (since it has the same engine as nodeJS), you will get a rough idea of how much more performant using this method is than using a for-in loop (roughly 10 times faster). Here's a sample of the code:
var keys = Object.keys(obj);
for (var i = 0, length = keys.length; i < length; i++) {
    obj[keys[i]];
}
You may also want to use hasOwnProperty in the loop.
for (var prop in obj) {
    if (obj.hasOwnProperty(prop)) {
        switch (prop) {
            // obj[prop] has the value
        }
    }
}
node.js is single-threaded, which means your script will block whether you want it or not. Remember that V8 (Google's JavaScript engine that node.js uses) compiles JavaScript into machine code, which means that most basic operations are really fast; looping through an object with 100 keys would probably take a couple of nanoseconds.
However, if you do a lot more inside the loop and you don't want it to block right now, you could do something like this
switch (prop) {
    case 'Timestamp':
        setTimeout(function() { ... }, 5);
        break;
    case 'Start_Value':
        setTimeout(function() { ... }, 10);
        break;
}
If your loop is doing some very CPU-intensive work, you will need to spawn a child process to do that work or use web workers; see the sketch below.
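For illustration, a hedged sketch of the child-process route using node's child_process.fork (worker.js and the message shape are hypothetical):

var fork = require('child_process').fork;

var worker = fork(__dirname + '/worker.js');  // hypothetical worker script
worker.send(json.data);                       // hand the heavy data to the child
worker.on('message', function(result) {       // child posts back when done
    console.log('worker finished:', result);
});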
If you want to avoid blocking, which is only necessary for very large loops, then wrap the contents of your loop in a function called like this: process.nextTick(function(){<contents of loop>}), which will defer execution until the next tick, giving an opportunity for pending calls from other asynchronous functions to be processed.
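A small sketch of that pattern, processing one key per tick (the handle function is hypothetical):

var item = json.data[0];
var keys = Object.keys(item);

(function next(i) {
    if (i >= keys.length) return;            // all keys processed
    handle(keys[i], item[keys[i]]);          // hypothetical per-key work
    process.nextTick(function() {
        next(i + 1);                         // defer the next key to the next tick
    });
})(0);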
My preferred way is:
var objectKeysArray = Object.keys(yourJsonObj);
objectKeysArray.forEach(function(objKey) {
    var objValue = yourJsonObj[objKey];
});
If we are using nodeJS, we should definitely take advantage of the different libraries available for it. Functions like each(), map(), reduce() and many more from underscoreJS reduce our effort. Here's a sample:
var _ = require("underscore");
var fs = require("fs");

var jsonObject = JSON.parse(fs.readFileSync('YourJson.json', 'utf8'));

_.map(jsonObject, function(content) {
    _.map(content, function(data) {
        if (data.Timestamp)
            console.log(data.Timestamp);
    });
});
A little late but I believe some further clarification is given below.
You can iterate through a JSON array with a simple loop as well, like:
for (var i = 0; i < jsonArray.length; i++) {
    console.log(jsonArray[i].attributename);
}
If you have a JSON object and you want to loop through all of its inner objects, then you first need to get all the keys in an array and loop through the keys to retrieve objects using the key names, like:
var keys = Object.keys(jsonObject);
for (var i = 0; i < keys.length; i++) {
    var key = keys[i];
    console.log(jsonObject[key].attributename); // bracket notation, not jsonObject.key
}
Not sure if it helps, but it looks like there might be a library for async iteration in node, hosted here: https://github.com/caolan/async
Async is a utility module which provides straight-forward, powerful functions for working with asynchronous JavaScript. Although originally designed for use with node.js, it can also be used directly in the browser.
Async provides around 20 functions that include the usual 'functional' suspects (map, reduce, filter, forEach…) as well as some common patterns for asynchronous control flow (parallel, series, waterfall…). All these functions assume you follow the node.js convention of providing a single callback as the last argument of your async function.
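For instance, a small hedged sketch with async's each (the per-key task is hypothetical):

var async = require('async');

var item = json.data[0];
async.each(Object.keys(item), function(key, callback) {
    doSomethingAsync(key, item[key], callback); // hypothetical async task per key
}, function(err) {
    if (err) console.error('iteration failed:', err);
    else console.log('all keys processed');
});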
Take a look at Traverse. It will recursively walk an object tree for you, and at every node you have a number of different objects you can access - the key of the current node, the value of the current node, the parent of the current node, the full key path of the current node, etc. https://github.com/substack/js-traverse. I've used it to good effect on objects that I wanted to scrub circular references from and when I needed to do a deep clone while transforming various data bits. Here's some code pulled from their samples to give you a flavor of what it can do.
var traverse = require('traverse');

var id = 54;
var callbacks = {};
var obj = { moo : function () {}, foo : [2, 3, 4, function () {}] };

var scrubbed = traverse(obj).map(function (x) {
    if (typeof x === 'function') {
        callbacks[id] = { id : id, f : x, path : this.path };
        this.update('[Function]');
        id++;
    }
});

console.dir(scrubbed);
console.dir(callbacks);