I want to cache a lot of textures in a running scene without blocking it. In theory, cc.textureCache.addImage does the caching asynchronously if you call it with three arguments, but that is not happening.
Code:
rainOfArrows: {
    _animFrames: [],
    loaded: function () {
        console.log('>>>>>>>>>>>>>>LOAD!')
    },
    load: function () {
        /* Load all animation arrow sprites in memory & generate animation */
        var str = '';
        for (var i = 0; i < 40; i++) {
            str = 'arrow00' + (i < 10 ? ('0' + i) : i) + '.png';
            var texture = cc.textureCache.addImage('res/SkilllsAnimations/arrowRain/' + str, this.loaded, this);
            var spriteFrame = new cc.SpriteFrame(texture, cc.rect(0, 0, winsize.width, winsize.height));
            cc.spriteFrameCache.addSpriteFrame(spriteFrame, str);
            this._animFrames.push(cc.spriteFrameCache.getSpriteFrame(str));
        }
    },
    run: function () { /* ----- */ }
}
The loaded function is executed and the string '>>>>>>>>>>>>>>LOAD!' is printed asynchronously, but the whole scene freezes until all textures are loaded.
Before v3.0 this was done with addImageAsync and worked fine. In 3.2 that functionality was merged into addImage, but I can't get it working. Am I missing something?
By the way, I'm not using textures packed into a single image and plist because they're too big.
A fix for this has been submitted by pandamicro and will be available in 3.4.
Here it is in case you need it before 3.4: https://github.com/pandamicro/cocos2d-js/commit/2a124b5
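Once the patched addImage is in place, one way to build the sprite frames only after each texture is actually loaded is sketched below. This is not the committed code; it assumes the callback receives the loaded texture, and it reuses the names (winsize, _animFrames) from the question:
load: function () {
    var self = this;
    for (var i = 0; i < 40; i++) {
        // an IIFE so each callback keeps its own frame name
        (function (frameName) {
            cc.textureCache.addImage('res/SkilllsAnimations/arrowRain/' + frameName, function (texture) {
                // the texture is in the cache at this point, so the frame can be built safely
                var spriteFrame = new cc.SpriteFrame(texture, cc.rect(0, 0, winsize.width, winsize.height));
                cc.spriteFrameCache.addSpriteFrame(spriteFrame, frameName);
                self._animFrames.push(cc.spriteFrameCache.getSpriteFrame(frameName));
            }, self);
        })('arrow00' + (i < 10 ? ('0' + i) : i) + '.png');
    }
}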
While crawling a webpage, its structure keeps changing (it's dynamic), which leads to a scenario where my crawler stops working. Is there a mechanism to identify structural changes to the webpage before running the full crawler, so I can tell whether the structure has changed or not?
If you can run your own JavaScript code in the webpage, you can use a MutationObserver, which provides the ability to watch for changes being made to the DOM tree.
Something like:
waitForDomStability(timeout: number) {
  return new Promise(resolve => {
    const waitResolve = observer => {
      observer.disconnect();
      resolve();
    };

    let timeoutId;

    const observer = new MutationObserver((mutationList, observer) => {
      for (let i = 0; i < mutationList.length; i += 1) {
        // we only care if new nodes have been added
        if (mutationList[i].type === 'childList') {
          // restart the countdown timer
          window.clearTimeout(timeoutId);
          timeoutId = window.setTimeout(waitResolve, timeout, observer);
          break;
        }
      }
    });

    timeoutId = setTimeout(waitResolve, timeout, observer);

    // start observing document.body
    observer.observe(document.body, { attributes: true, childList: true, subtree: true });
  });
}
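Illustrative usage from injected scraping code (the timeout and selector below are made up):
// inside an async function: wait until the DOM has been quiet for 1 second, then read the page
await waitForDomStability(1000);
const titles = Array.from(document.querySelectorAll('h2')).map(el => el.textContent);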
I'm using this approach in the open source scraping extension get-set-fetch. For the full code, look at /packages/background/src/ts/plugins/builtin/FetchPlugin.ts in the repo.
You can certainly use "snapshots" for comparing two versions of the same page. I've implemented something similar to Java's String hashCode to achieve this.
Code in JavaScript:
/*
returns a dom element snapshot as innerText hash code
starting point is java String hashCode: s[0]*31^(n-1) + s[1]*31^(n-2) + ... + s[n-1]
keep everything fast: only work with a 32 bit hash, remove exponentiation
custom implementation: s[0]*31 + s[1]*31 + ... + s[n-1]*31
*/
function getSnapshot() {
  const snapshotSelector = 'body';
  const nodeToBeHashed = document.querySelector(snapshotSelector);
  if (!nodeToBeHashed) return 0;

  const { innerText } = nodeToBeHashed;

  let hash = 0;
  if (innerText.length === 0) {
    return hash;
  }

  for (let i = 0; i < innerText.length; i += 1) {
    // an integer between 0 and 65535 representing the UTF-16 code unit
    const charCode = innerText.charCodeAt(i);
    // multiply by 31 and add current charCode
    hash = ((hash << 5) - hash) + charCode;
    // convert to 32 bits as bitwise operators treat their operands as a sequence of 32 bits
    hash |= 0;
  }

  return hash;
}
If you can't run JavaScript code in the page, you can use the entire HTML response as the content to be hashed, in your favorite language.
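For example, a minimal Node.js sketch of that idea (it assumes Node 18+ for the global fetch; the URL and stored hash are placeholders):
const crypto = require('crypto');

async function getPageHash(url) {
  const res = await fetch(url);        // fetch the page server-side
  const html = await res.text();       // the whole HTML response is the content to hash
  return crypto.createHash('sha256').update(html).digest('hex');
}

// a changed hash between two crawls suggests the page has changed:
// const changed = (await getPageHash('https://example.com')) !== previousHash;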
I've created a function to pipe data into a file and output individual files for each line of data. However, there are 35K lines of data to compile, and although the code below signals "done" after about 5 minutes, the files are still processing. I can't start the next function until this one is completed, so I need to know when it has actually finished.
function createLocations(done) {
  if (GENERATE_LOCATIONS) {
    for (var i = 0; i < locations.length; i++) {
      var location = locations[i],
          fileName = 'location-' + location.area.replace(/ +/g, '-').replace(/'+/g, '').replace(/&+/g, 'and').toLowerCase() + '-' + location.town.replace(/ +/g, '-').replace(/'+/g, '').replace(/`+/g, '').replace(/&+/g, 'and').toLowerCase();

      gulp.src('./templates/location-template.html')
        .pipe($.rename(fileName + ".html"))
        .pipe(gulp.dest('./tmp/'));
    }
  }
  done();
  console.log('Creating ' + locations.length + ' Files Please Wait...');
}
The function runs for a further 15 minutes after it has signalled done. I'd appreciate any help detecting the actual completion of the function.
Many thanks!
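For reference, one possible way to report real completion (a sketch, not the asker's code; it assumes the same globals as above and listens for each gulp.dest stream's 'finish' event before calling done):
function createLocations(done) {
  if (!GENERATE_LOCATIONS || locations.length === 0) return done();

  var pending = locations.length;

  locations.forEach(function (location) {
    var fileName = 'location-' + slug(location.area) + '-' + slug(location.town);

    gulp.src('./templates/location-template.html')
      .pipe($.rename(fileName + '.html'))
      .pipe(gulp.dest('./tmp/'))
      .on('finish', function () {      // fires once this file has been flushed to disk
        if (--pending === 0) done();   // signal done only after the last file
      });
  });

  console.log('Creating ' + locations.length + ' Files Please Wait...');
}

// hypothetical helper mirroring the original replace chain
function slug(s) {
  return s.replace(/ +/g, '-').replace(/['`]+/g, '').replace(/&+/g, 'and').toLowerCase();
}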
I'm currently stuck on the following problem.
I'm trying to access the layer inside the scene, and thereby the elements that I set in that layer. In this case I want access to the conv_label in the layer so I can set its text.
I'm doing this via a ConversationClass which extends cc.Class.
When I try to access the layer via the variable, or via getChildByName or getChildByTag, it doesn't work (the value is always null).
Below is a method inside the ConversationClass. I can console.log the current scene without any problem, but no variable I set appears on the scene object; in this case the name was "conv_layer". I can access the children with plain array indexing, but that doesn't seem like a good way and is quite confusing.
This is what I tried:
currentscene.children[0].children[3] will give me the right element.
currentscene.conv_layer.getChildByName("text") says conv_layer does not exist
currentscene.children[0].getChildByName("text") returns null
Does anyone know how to solve this issue or can tell me where my thinking went wrong?
Not sure if it matters, but I call the scene (for now) in the following way.
cc.LoaderScene.preload(conversation_load, function () {
    cc.director.runScene(new ConversationScene());
    this.startGame();
}, this);
This is where I want access:
startConversation: function (conversation) {
    this._conversationObject = conversation;
    this._currentScene = cc.director.getRunningScene();
    console.log(this._currentScene); // shows the current scene object (doesn't have a conv_layer property)

    if (this._currentScene !== null)
        this._currentConversationLayer = this._currentScene.conv_layer; // returns null
},
This is my scene:
var ConversationScene = cc.Scene.extend({
    conv_layer: null,

    onEnter: function () {
        this._super();
        this.conv_layer = new ConversationLayer();
        this.conv_layer.setName('conversation');
        this.conv_layer.setTag(1);
        this.addChild(this.conv_layer);
    }
});
and this is my layer:
var ConversationLayer = cc.Layer.extend({
    ctor: function () {
        this._super();
        this.init();
    },

    init: function () {
        this._super();
        var winSize = cc.director.getWinSize();

        GD.current_conversation = conversation1;

        this.background = new cc.Sprite();
        this.background.anchorX = 0.5;
        this.background.anchorY = 0.5;
        this.background.setPositionX(winSize.width / 2);
        this.background.setPositionY(winSize.height / 2);
        this.addChild(this.background);

        this.girl = new cc.Sprite();
        this.girl.anchorX = 0;
        this.girl.anchorY = 0;
        this.addChild(this.girl);

        this.text_background = new cc.Sprite(resources.conversation_interactive_assets, cc.rect(0, 0, 1920 / GD.options.scale, 320 / GD.options.scale));
        this.text_background.anchorX = 0.5;
        this.text_background.anchorY = 0;
        this.text_background.setPositionX(winSize.width / 2);
        this.text_background.setPositionY(0);
        this.addChild(this.text_background);

        // Left
        this.conv_label = new cc.LabelBMFont("", resources.font);
        this.conv_label.anchorX = 0;
        this.conv_label.anchorY = 0;
        this.conv_label.setPositionX((winSize.width - this.text_background.width) / 2 + 20);
        this.conv_label.setPositionY(this.text_background.height - 30);
        this.conv_label.color = cc.color.BLACK;
        this.conv_label.setName('text');
        this.addChild(this.conv_label);
    }
});
The issue was the loading order of everything.
It seems scenes are loaded asynchronously, so the next function was called before any layer existed at that point.
I solved it by doing the creation inside the class itself and calling onSceneEnter.
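A minimal sketch of that idea (not the author's exact code; conversationManager and its onSceneEnter hook are hypothetical names): anything that needs conv_layer is triggered from the scene's own onEnter instead of being read from the director right after runScene().
var ConversationScene = cc.Scene.extend({
    conv_layer: null,

    onEnter: function () {
        this._super();
        this.conv_layer = new ConversationLayer();
        this.addChild(this.conv_layer);

        // the layer exists only from this point on, so kick off the conversation here
        conversationManager.onSceneEnter(this);   // hypothetical call into the ConversationClass instance
    }
});

// inside the ConversationClass (hypothetical method):
onSceneEnter: function (scene) {
    this._currentConversationLayer = scene.conv_layer;   // no longer null, the layer was just added
    this._currentConversationLayer.getChildByName('text').setString('Hello'); // conv_label was named 'text'
},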
Hi everybody,
I'm working with dc.js and I think it's a genius tool! However, I have an issue I can't solve.
I'm using a dc.js bar chart and I want to launch a function of mine after a click on one bar, but I need to wait for the bar chart to finish redrawing.
Order:
- my bar chart is displayed
- I click on one bar
-> the bar chart is redrawn
-> only after the complete redraw, my function is launched
Where can I put my callback ? I can't find the corresponding code.
_chart.onClick = function (d) {
    var filter = _chart.keyAccessor()(d);
    dc.events.trigger(function () {
        _chart.filter(filter);
        _chart.redrawGroup();
        alert("here is not working");
    });
};

(...)

dc.redrawAll = function (group) {
    var charts = dc.chartRegistry.list(group);
    for (var i = 0; i < charts.length; ++i) {
        charts[i].redraw();
    }
    alert("neither here");
    if (dc._renderlet !== null)
        dc._renderlet(group);
};

dc.events.trigger = function (closure, delay) {
    if (!delay) {
        closure();
        alert("neither neither here");
        return;
    }
    dc.events.current = closure;
    setTimeout(function () {
        if (closure == dc.events.current)
            closure();
    }, delay);
};
Any idea? I'm completely blocked right now :(
Thanks a lot for your help,
vanessa
If _chart is the name of your chart and you want to execute some function named my_function after the drawing has finished, add the following line after the declaration of the chart itself:
_chart.on("postRedraw", my_function);
Hope this is what you were looking for.
I brought this up in my last post but since it was off topic from the original question I'm posting it separately. I'm having trouble with getting my transmitted audio to play back through Web Audio the same way it would sound in a media player. I have tried 2 different transmission protocols, binaryjs and socketio, and neither make a difference when trying to play through Web Audio. To rule out the transportation of the audio data being the issue I created an example that sends the data back to the server after it's received from the client and dumps the return to stdout. Piping that into VLC results in a listening experience that you would expect to hear.
To hear the results when playing through vlc, which sounds the way it should, run the example at https://github.com/grkblood13/web-audio-stream/tree/master/vlc using the following command:
$ node webaudio_vlc_svr.js | vlc -
For whatever reason, though, when I try to play this same audio data through Web Audio it fails miserably. The result is random noise with large gaps of silence in between.
What is wrong with the following code that is making the playback sound so bad?
window.AudioContext = window.AudioContext || window.webkitAudioContext;
var context = new AudioContext();
var delayTime = 0;
var init = 0;
var audioStack = [];

client.on('stream', function (stream, meta) {
    stream.on('data', function (data) {
        context.decodeAudioData(data, function (buffer) {
            audioStack.push(buffer);
            if (audioStack.length > 10 && init == 0) { init++; playBuffer(); }
        }, function (err) {
            console.log("err(decodeAudioData): " + err);
        });
    });
});

function playBuffer() {
    var buffer = audioStack.shift();
    setTimeout(function () {
        var source = context.createBufferSource();
        source.buffer = buffer;
        source.connect(context.destination);
        source.start(context.currentTime);
        delayTime = source.buffer.duration * 1000; // make the next buffer wait the length of the last buffer before being played
        playBuffer();
    }, delayTime);
}
Full source: https://github.com/grkblood13/web-audio-stream/tree/master/binaryjs
You really can't just call source.start(audioContext.currentTime) like that.
setTimeout() has a long and imprecise latency - other main-thread stuff can be going on, so your setTimeout() calls can be delayed by milliseconds, even tens of milliseconds (by garbage collection, JS execution, layout...) Your code is trying to immediately play audio - which needs to be started within about 0.02ms accuracy to not glitch - on a timer that has tens of milliseconds of imprecision.
The whole point of the web audio system is that the audio scheduler works in a separate high-priority thread, and you can pre-schedule audio (starts, stops, and audioparam changes) at very high accuracy. You should rewrite your system to:
1) track when the first block was scheduled in audiocontext time - and DON'T schedule the first block immediately, give some latency so your network can hopefully keep up.
2) schedule each successive block received in the future based on its "next block" timing.
e.g. (note I haven't tested this code, this is off the top of my head):
window.AudioContext = window.AudioContext || window.webkitAudioContext;
var context = new AudioContext();
var delayTime = 0;
var init = 0;
var audioStack = [];
var nextTime = 0;

client.on('stream', function (stream, meta) {
    stream.on('data', function (data) {
        context.decodeAudioData(data, function (buffer) {
            audioStack.push(buffer);
            if ((init != 0) || (audioStack.length > 10)) { // make sure we put at least 10 chunks in the buffer before starting
                init++;
                scheduleBuffers();
            }
        }, function (err) {
            console.log("err(decodeAudioData): " + err);
        });
    });
});

function scheduleBuffers() {
    while (audioStack.length) {
        var buffer = audioStack.shift();
        var source = context.createBufferSource();
        source.buffer = buffer;
        source.connect(context.destination);
        if (nextTime == 0)
            nextTime = context.currentTime + 0.05; // add 50ms latency to work well across systems - tune this if you like
        source.start(nextTime);
        nextTime += source.buffer.duration; // make the next buffer wait the length of the last buffer before being played
    }
}