My game seems to run too quickly in some scenes, so I want to cap the frame rate at 30 FPS.
I'm using pure C++ with Direct3D.
I found that SetWaitableTimer, which I use to limit FPS on desktop, is not available on WP8.
SleepConditionVariableCS is available on WP8. Here is my sleep code:
struct WaitTimeData
{
    CONDITION_VARIABLE conditionVariable;
    CRITICAL_SECTION criticalSection;

    WaitTimeData()
    {
        InitializeConditionVariable(&conditionVariable);
        // InitializeCriticalSection is not in the WP8 API surface,
        // but InitializeCriticalSectionEx is, so there is no need
        // to poke the struct's fields by hand.
        InitializeCriticalSectionEx(&criticalSection, 0, 0);
    }
};

int WPSleep(unsigned int ms)
{
    static WaitTimeData s_wtd;
    // SleepConditionVariableCS must be called with the critical
    // section held. Nothing ever signals the condition variable,
    // so the call simply waits out the full timeout.
    EnterCriticalSection(&s_wtd.criticalSection);
    BOOL ok = SleepConditionVariableCS(
        &s_wtd.conditionVariable,
        &s_wtd.criticalSection,
        ms);
    if (!ok && GetLastError() == ERROR_TIMEOUT)
    {
        SetLastError(0);  // a timeout is the expected outcome here
    }
    LeaveCriticalSection(&s_wtd.criticalSection);
    return 0;
}
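With a millisecond sleep available, capping the frame rate comes down to sleeping away whatever remains of each 1/30 s slice. Below is a minimal sketch of that pacing logic; `FrameLimiter` is a hypothetical name, and it uses `std::this_thread::sleep_for` so the sketch is portable, whereas on WP8 you would call `WPSleep` above instead:

```cpp
#include <chrono>
#include <thread>

// Sketch of a fixed-FPS frame limiter built on a millisecond sleep.
class FrameLimiter {
public:
    explicit FrameLimiter(int fps)
        : frameDuration_(std::chrono::milliseconds(1000 / fps)),
          nextFrame_(std::chrono::steady_clock::now()) {}

    // Call once per frame, after rendering: sleeps away whatever is
    // left of the current frame's time slice.
    void wait() {
        nextFrame_ += frameDuration_;
        auto now = std::chrono::steady_clock::now();
        if (nextFrame_ > now)
            std::this_thread::sleep_for(nextFrame_ - now);
        else
            nextFrame_ = now;  // running behind: don't try to catch up
    }

private:
    std::chrono::steady_clock::duration frameDuration_;
    std::chrono::steady_clock::time_point nextFrame_;
};
```

In the game loop this would be `FrameLimiter limiter(30);` once, then `limiter.wait();` at the end of every frame. Pacing against an absolute deadline rather than sleeping a fixed 33 ms keeps the average rate at 30 FPS even when rendering itself takes time.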
I've made an application with Blazor WebAssembly, with a 5-minute timer in a BackgroundService on my SERVER.
Now, every time the second changes, I would like to notify my client so it updates the timer on the CLIENT page (it's a component).
I was wondering if there is a way to call a CLIENT C# method from my SERVER by using a SERVICE shared between the two?
Do you have any suggestions?
I've already created the shared SERVICE, but I don't know how to call my CLIENT C# method from the service automatically.
PS: I've already tried passing my values from my SERVER to a SERVICE that calls a JavaScript function (with IJSRuntime), which in turn calls my C# function to update the timer. But my JS function doesn't seem to be working.
[My Background Task]
// A 5-minute timer that creates a Draw every 5 minutes and updates it when the count reaches 0
public async Task ClockCountDownAsync()
{
    if (secTime == 0 && minTime != 0)
    {
        if (minTime == 5)
        {
            Draw newDraw = new Draw();
            _unitOfWork.Draws.Add(newDraw);
            await _unitOfWork.Complete();
        }
        minTime = minTime - 1;
        secTime = 59;
    }
    else if (secTime != 0)
    {
        secTime = secTime - 1;
    }
    else if (minTime == 0 && secTime == 0)
    {
        await DrawAndAddNumbers();
        minTime = 5;
        secTime = 0;
    }

    int[] timeTab = new int[] { minTime, secTime };
    _timerService.GetTimeFromCounter(timeTab);
}
[My Client Function]
public async Task UpdateTimer(int[] timeTab)
{
    minTime = timeTab[0];
    secTime = timeTab[1];
    await InvokeAsync(StateHasChanged);
}
[The function in the shared SERVICE, called by the SERVER, which needs to call my CLIENT function]
public void GetTimeFromCounter(int[] timeTab)
{
    // UpdateTimer(timeTab);
}
public useAudio(base64EncodedAudio: any, loop: boolean, volume: number) {
    let _this = this;
    let audioFromString = this.base64ToBuffer(base64EncodedAudio);
    this._context.decodeAudioData(audioFromString, function (buffer) {
        _this._audioBuffer = buffer;
        _this.PlaySound(loop, volume);
    }, function (error) {
        TelemetryClient.error(EventType.BASE64_DECODE_ERROR, "Error decoding sound string");
    });
}

private PlaySound(loop: boolean, volume: number) {
    this._source = this._context.createBufferSource();
    let gainNode = this._context.createGain();
    gainNode.gain.value = +(volume / 100).toFixed(2);
    gainNode.connect(this._context.destination);
    this._source.buffer = this._audioBuffer;
    this._source.loop = loop;
    this._source.connect(gainNode);
    this._source.start(0);
}
gainNode.gain.value doesn't seem to work properly: values of 1 and 0.15 both play the sound at full volume.
Am I missing anything here?
The gain AudioParam accepts very large values, but to lower the volume its value has to be between 0 and 1. At 0 no sound is audible, and a value of 1 effectively bypasses the GainNode. Any larger value amplifies the sound, though depending on the browser and operating system you may not hear that, since there may be a limiter in place before the sound reaches your speakers.
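To illustrate that 0-to-1 range, here is a small sketch that maps a 0–100 volume percentage onto a valid gain value (the helper name `percentToGain` is hypothetical, not part of the Web Audio API):

```typescript
// Map a 0-100 volume percentage to a gain value in [0, 1].
// 0 mutes the sound, 1 passes audio through unchanged, and anything
// above 1 would amplify, so the result is clamped to that range.
function percentToGain(volumePercent: number): number {
    const gain = volumePercent / 100;
    return Math.min(1, Math.max(0, gain));
}
```

With this, `gainNode.gain.value = percentToGain(15)` sets the gain to 0.15, which should be audibly quieter than a gain of 1.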
What is the best strategy for choosing a camera for QR code scanning?
Modern devices often have several back cameras; for example, the Huawei Mate 20 has 4 cameras (3 physical and 1 virtual based on the physical ones).
Currently my algorithm just selects the first camera with "back" in its label.
Is there a better strategy to get the best QR code readability?
Here is my code:
this.qrScannerComponent.getMediaDevices().then(devices => {
    // this.info = devices.map((dev, i) => `${i}. ${dev.label}`).join('\n');
    const videoDevices: MediaDeviceInfo[] = [];
    for (const device of devices) {
        if (device.kind.toString() === 'videoinput') {
            videoDevices.push(device);
        }
    }
    if (videoDevices.length > 0) {
        let choosenDev;
        for (const dev of videoDevices) {
            if (dev.label.includes('back')) {
                choosenDev = dev;
                break;
            }
        }
        if (choosenDev) {
            this.qrScannerComponent.chooseCamera.next(choosenDev);
        } else {
            this.qrScannerComponent.chooseCamera.next(videoDevices[0]);
        }
    }
});
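For what it's worth, one possible refinement of the label heuristic above is to try several keywords before falling back to the first video input. This is a sketch with a hypothetical `pickBackCamera` helper; camera labels vary by browser, OS, and locale, so it is still best-effort rather than a guaranteed way to find the main back camera:

```typescript
// Pick a likely back camera from a MediaDeviceInfo-like list by
// checking a few common label keywords in order of preference.
function pickBackCamera(
    devices: { kind: string; label: string }[]
): { kind: string; label: string } | undefined {
    const video = devices.filter(d => d.kind === 'videoinput');
    const keywords = ['back', 'rear', 'environment'];
    for (const kw of keywords) {
        const match = video.find(d => d.label.toLowerCase().includes(kw));
        if (match) return match;
    }
    return video[0]; // no keyword matched: fall back to the first video input
}
```

Another option, where the scanning library allows it, is to skip device enumeration entirely and request `facingMode: 'environment'` in the getUserMedia constraints, letting the browser choose a suitable back camera.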
I've seen questions about the LWJGL window flickering during rendering, but I'm talking about the whole screen flickering here, not just the window: roughly 1.5 seconds of the normal computer screen, then roughly 1.5 seconds of full black.
I'm not trying to run the app in full screen.
I'm using the code from this tutorial, which I rewrote in Kotlin:
fun main(args: Array<String>) {
    SharedLibraryLoader.load()
    thread {
        Main.run()
    }
}

object Main : Runnable {
    val window: Long

    init {
        if (glfwInit() != GL_TRUE)
            throw RuntimeException("Couldn't initialize GLFW...")
        glfwWindowHint(GLFW_RESIZABLE, GL_TRUE)
        window = glfwCreateWindow(800, 600, "Test", NULL, NULL)
        if (window == NULL)
            throw RuntimeException("Couldn't create the window...")
        val videoMode = glfwGetVideoMode(glfwGetPrimaryMonitor())
        glfwSetWindowPos(window, 100, 100)
        glfwMakeContextCurrent(window)
        glfwShowWindow(window)
    }

    fun update() {
        glfwPollEvents()
    }

    fun render() {
        glfwSwapBuffers(window)
    }

    override fun run() {
        while (glfwWindowShouldClose(window) != GL_TRUE) {
            update()
            render()
        }
    }
}
with the Gradle dependencies:
val lwjglVersion = "3.0.0a"
compile("org.lwjgl:lwjgl:${lwjglVersion}")
compile("org.lwjgl:lwjgl-platform:${lwjglVersion}:natives-windows")
compile("org.lwjgl:lwjgl-platform:${lwjglVersion}:natives-linux")
compile("org.lwjgl:lwjgl-platform:${lwjglVersion}:natives-osx")
I'm on a Debian 9 machine (Linux). Is this a graphics card problem? Or is the code just wrong?
I am able to successfully encode data as H264 using a Media Foundation Transform (MFT), but unfortunately I get very high CPU usage (when I comment out the call to this function, CPU usage is low). The encoding only takes a few steps, so is there anything I can do to improve it? Any ideas would help.
HRESULT MFTransform::EncodeSample(IMFSample *videosample, LONGLONG llVideoTimeStamp, MFT_OUTPUT_STREAM_INFO &StreamInfo, MFT_OUTPUT_DATA_BUFFER &encDataBuffer)
{
    HRESULT hr = S_OK;
    DWORD mftEncFlags = 0, processOutputStatus = 0;
    // used to hold the output sample
    IMFSample *mftEncodedSample = NULL;
    // used to hold the output buffer
    IMFMediaBuffer *mftEncodedBuffer = NULL;

    memset(&encDataBuffer, 0, sizeof encDataBuffer);

    if (videosample)
    {
        // 1 - set the timestamp for the sample
        hr = videosample->SetSampleTime(llVideoTimeStamp);
#ifdef _DEBUG
        // %lld, not %i: llVideoTimeStamp is a LONGLONG
        printf("Passing sample to the H264 encoder with sample time %lld.\n", llVideoTimeStamp);
#endif
        if (SUCCEEDED(hr))
        {
            hr = MFT_encoder->ProcessInput(0, videosample, 0);
        }
        if (SUCCEEDED(hr))
        {
            hr = MFT_encoder->GetOutputStatus(&mftEncFlags);
        }
        if (SUCCEEDED(hr) && mftEncFlags == MFT_OUTPUT_STATUS_SAMPLE_READY)
        {
            hr = MFT_encoder->GetOutputStreamInfo(0, &StreamInfo);
            // create an empty encoded sample
            if (SUCCEEDED(hr))
            {
                hr = MFCreateSample(&mftEncodedSample);
            }
            if (SUCCEEDED(hr))
            {
                hr = MFCreateMemoryBuffer(StreamInfo.cbSize, &mftEncodedBuffer);
            }
            if (SUCCEEDED(hr))
            {
                hr = mftEncodedSample->AddBuffer(mftEncodedBuffer);
            }
            if (SUCCEEDED(hr))
            {
                encDataBuffer.dwStatus = 0;
                encDataBuffer.pEvents = 0;
                encDataBuffer.dwStreamID = 0;
                // after this, both pointers refer to the same sample
                encDataBuffer.pSample = mftEncodedSample;
                hr = MFT_encoder->ProcessOutput(0, 1, &encDataBuffer, &processOutputStatus);
            }
        }
    }

    SafeRelease(&mftEncodedBuffer);
    return hr;
}
The first key is to ensure you have configured the sink with MF_READWRITE_ENABLE_HARDWARE_TRANSFORMS. I also set the MF_LOW_LATENCY attribute.
// error checking omitted for brevity
hr = attributes->SetUINT32(MF_READWRITE_ENABLE_HARDWARE_TRANSFORMS, TRUE);
hr = attributes->SetUINT32(MF_SINK_WRITER_DISABLE_THROTTLING, TRUE);
hr = attributes->SetUINT32(MF_LOW_LATENCY, TRUE);
The other key is to ensure you are selecting the native format for the output of the source. Otherwise, you will remain very disappointed. I describe this in detail here.
I should also mention that you should consider creating the transform sample and memory buffer once at the beginning, instead of recreating them on each sample received.
Good luck. I hope this helps.
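To illustrate that last point, here is a platform-neutral sketch of the allocate-once pattern. `ReusableOutputBuffer` is a hypothetical stand-in: in the real encoder you would cache the IMFSample/IMFMediaBuffer created with MFCreateSample and MFCreateMemoryBuffer, recreating them only when StreamInfo.cbSize grows, rather than on every call to EncodeSample.

```cpp
#include <cstddef>
#include <vector>

// Sketch of the "allocate once, reuse" pattern: keep one output
// buffer alive across frames and grow it only when the required
// size increases, instead of allocating a fresh one per sample.
class ReusableOutputBuffer {
public:
    // Returns a buffer of at least `requiredSize` bytes, reallocating
    // only when the size requirement grows beyond the current capacity.
    std::vector<unsigned char>& acquire(std::size_t requiredSize) {
        if (buffer_.size() < requiredSize) {
            buffer_.resize(requiredSize);  // reallocates only on growth
            ++allocations_;
        }
        return buffer_;
    }

    int allocationCount() const { return allocations_; }

private:
    std::vector<unsigned char> buffer_;
    int allocations_ = 0;
};
```

At 30 frames per second, moving the allocation out of the per-sample path removes thousands of heap allocations (and releases) per minute, which is one of the cheaper wins when chasing encoder-side CPU usage.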