Using "tic" without "toc" in Octave - octave

Quick background: the Octave GUI, version 6.2.0, takes at least 20-30 seconds to initialize on one computer, but 1-2 seconds on another computer running the same version and the same OS. I wanted to figure out where the holdup was, so I used the tic and toc timer functions in various places within the octaverc startup file. At some point, I noticed that merely having tic at the very end of the startup file completely fixed the slow initialization time.
So while my main issue is resolved (though I don't know why that worked), I wanted to see if using tic without ever using toc in an Octave session can cause any issues, e.g., does this leave some timer process running in the background? Thanks in advance.

tic stores the current clock time in an internal variable (see tic_toc_timestamp here; the source code for tic starts right after that line), and toc compares the current clock time to the stored variable (see its source code here).
So, no, there’s no timer running in the background after you call tic.
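For intuition, here is a minimal sketch in C of the same mechanism. This is not Octave's actual source, just an illustration of why a stored timestamp needs no background process:

#include <stdio.h>
#include <time.h>

/* The only state tic keeps: a timestamp (cf. Octave's tic_toc_timestamp).
   Nothing runs in the background between tic and toc. */
static struct timespec tic_toc_timestamp;

void tic(void)
{
    clock_gettime(CLOCK_MONOTONIC, &tic_toc_timestamp);  /* store and return */
}

double toc(void)
{
    struct timespec now;
    clock_gettime(CLOCK_MONOTONIC, &now);
    return (now.tv_sec  - tic_toc_timestamp.tv_sec)
         + (now.tv_nsec - tic_toc_timestamp.tv_nsec) / 1e9;
}

int main(void)
{
    tic();                              /* writes one variable; no timer */
    /* ... do some work ... */
    printf("Elapsed: %f s\n", toc());
    return 0;
}

Calling tic without toc simply leaves that one variable holding a stale timestamp, which costs nothing.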

Related

External FLASH slow verification with STM32CubeProgrammer

I'm working with an STM32F469 chip with a Micron MT25Q Quad-SPI flash. Programming the flash requires developing an external loader program. That's all working, but the problem is that verification of the QSPI flash is extremely slow.
Looking in the log file, it shows that the flash is being programmed in 150K-byte blocks. However, the verification is being done in 1K-byte blocks. In addition, the chip is re-initialized before each block check. I've tried this both through STM32CubeIDE and in STM32CubeProgrammer directly.
The external loader program includes the correct chip-configuration information and specifies a 64K page size. I don't see how to get the programmer to use a larger block size. It looks like it understands what part of the SRAM is used and uses the balance of the on-board 256K of SRAM for programming the QSPI flash. It could use the same size for reading the data back, or it could use the Verify() function in the external loader; instead, it calls Read() and then checks the data itself.
Any thoughts or hints?
Let me add some observations on creating a new external loader. The first observation is: don't. If you can pick a supported external chip and pin it out to use an existing loader, do that. ST provides just four example programs, but they must have fifty external loaders. If your hardware design copies the schematic of a demo board that has an external loader, you should be fine and can avoid the development work.
The external loader is not a complete executable. It provides a set of functions to do basic operations like Init(), Erase(), Read() and Write(). The trick is that there is no main() and no start-up code is run when the program starts.
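For orientation, the set of entry points typically looks like the sketch below. This is a hedged outline modeled on ST's example loaders; check the Loader_Src.c that ships with your tools for the exact names and signatures. The convention in the examples is to return 1 for success and 0 for failure:

#include <stdint.h>

/* No main(), no startup code: the programmer jumps straight into these. */

int Init(void)
{
    /* Configure clocks and the QSPI peripheral by hand; nothing ran first. */
    return 1;                               /* 1 = success, 0 = failure */
}

int SectorErase(uint32_t StartAddress, uint32_t EndAddress)
{
    /* Erase every sector overlapping [StartAddress, EndAddress]. */
    return 1;
}

int Write(uint32_t Address, uint32_t Size, uint8_t *Buffer)
{
    /* Program Size bytes from Buffer into external flash at Address. */
    return 1;
}

int Read(uint32_t Address, uint32_t Size, uint8_t *Buffer)
{
    /* Copy Size bytes from external flash at Address into Buffer. */
    return 1;
}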
The external loader is an ELF file, renamed to "*.stldr". The programming tool looks into the debug information to find the locations of the functions. It then sets the registers to provide the parameters, sets the PC to run the function, and lets it run. There's some super-clever work going on to make this work. The programmer looks at the returned value (R0) to see whether things passed or not. It can also figure out if the function has crashed the core or otherwise timed out.
What makes writing the external loader super fun is that the debugger is busy running the program, so there's no debugger available to see what the code is doing. I settled on returning errors, and encoded information, from the called functions to give hints as to what was happening.
The external loader isn't a "full" program. Without the startup code, a lot of on-chip hardware isn't set up, and some of it just isn't going to work; at least I couldn't figure it out. I'm not sure whether it wasn't configured right or the debugger was blocking its use. Looking at the example external loaders, they are written in a very simple way: they do not call the HAL or use interrupts. You'll need to provide core setup functions to configure the clock chains. HAL_Delay() will never return, as the timers and/or interrupts aren't working; I could never make them work and suspect the NVIC was somehow being disabled. I ended up replacing HAL_Delay() with a for loop that spun based on the core clock rate and the instruction cycles per loop.
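As an illustration, that HAL_Delay() replacement can be a calibrated busy loop like the sketch below. The clock and cycles-per-iteration constants are assumptions you would tune for your core, compiler, and optimization level:

#include <stdint.h>

#define CORE_CLOCK_HZ   180000000u  /* assumed core clock after setup */
#define CYCLES_PER_LOOP 4u          /* rough estimate; calibrate by measuring */

/* Busy-wait delay: needs no SysTick, timers, or interrupts. */
static void loader_delay_ms(uint32_t ms)
{
    volatile uint32_t n = (CORE_CLOCK_HZ / 1000u / CYCLES_PER_LOOP) * ms;
    while (n--)
        ;
}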
The app note suggests developing a stand-alone program to debug the basic capabilities. That's a good idea, but a challenge. Before starting the external loader, I had the QSPI doing the needed operations, but from a C++ application calling the HAL. Creating an external loader from that was a long exercise in stripping out and replacing functionality. A hint: the examples are written at the register level. I'm not good enough to deal directly with the QuadSPI peripheral and the chip's instruction set at the same time.
The normal start-up of a program is eliminated. Everything that's normally done before main() is called (e.g., in startup_stm32f469nihx.s) is up to you. This includes setting the clock chains to boost the core clock and get the peripheral buses working. The program runs in on-chip SRAM, so initialized variables are loaded correctly and no data moving is needed, but the stack and uninitialized data areas could/should still be zeroed.
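For example, zeroing the uninitialized data region is one of those start-up chores that is now yours. A sketch using linker-provided section symbols (the symbol names are assumptions; they vary with the linker script):

#include <stdint.h>

extern uint32_t _sbss, _ebss;   /* bounds of .bss from the linker script */

/* The startup code normally does this before main(); in a loader you
   call it yourself, e.g., at the top of Init(). */
static void zero_bss(void)
{
    for (uint32_t *p = &_sbss; p < &_ebss; p++)
        *p = 0;
}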
I hope this helps someone!
Today I faced the same issue.
I was able to improve the Verify speed with two simple steps, but Verify is still much slower than programming, which is strange...
If anyone finds a way to change STM32CubeProgrammer's 1 KB read-block size, I would like to know =).
Here are the changes I made to improve performance a bit.
Add a kind of lock in the Init function to avoid multiple initializations. This was the most significant change, because I'm checking the flash ID in my initialization process. Other approaches could be safer, but this simple code snippet worked for me.
int Init(void)
{
    /* Persists across calls while the loader stays loaded in SRAM. */
    static uint32_t lock;

    if (lock != 0x43213CA5)
    {
        lock = 0x43213CA5;
        /* Init procedure (including the slow flash-ID check) goes here;
           it now runs only once instead of before every block. */
    }

    return 1;  /* 1 = success */
}
Cache a page instead of reading the external memory on each call, as in the sketch below. This helps most when each external-memory page read carries a lot of overhead; otherwise it won't give relevant results.
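A minimal sketch of that cache, where QSPI_ReadPage() stands in for whatever low-level read your loader already has (the helper name and the 4 KB cache size are assumptions). Since CubeProgrammer verifies sequentially in 1 KB blocks, one cached page serves several Read() calls:

#include <stdint.h>
#include <string.h>

#define PAGE_SIZE 4096u                      /* assumed cache granularity */

static uint8_t  page_cache[PAGE_SIZE];
static uint32_t cached_page = 0xFFFFFFFFu;   /* nothing cached yet */

extern int QSPI_ReadPage(uint32_t page_addr, uint8_t *buf, uint32_t size);

int Read(uint32_t Address, uint32_t Size, uint8_t *Buffer)
{
    while (Size > 0) {
        uint32_t page   = Address & ~(PAGE_SIZE - 1u);
        uint32_t offset = Address - page;
        uint32_t chunk  = PAGE_SIZE - offset;
        if (chunk > Size)
            chunk = Size;

        /* Touch the external memory only when the request leaves the page. */
        if (page != cached_page) {
            if (!QSPI_ReadPage(page, page_cache, PAGE_SIZE))
                return 0;                    /* 0 = failure */
            cached_page = page;
        }

        memcpy(Buffer, page_cache + offset, chunk);
        Address += chunk;
        Buffer  += chunk;
        Size    -= chunk;
    }
    return 1;                                /* 1 = success */
}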

tkzinc bbox command taking hours to complete

I have a Tcl script that creates a Tkzinc canvas with multiple compartments and other components associated with each compartment.
The code runs perfectly fine on 32-bit CentOS 5.6 and takes less than a minute, but on 64-bit CentOS 7 it takes around 25-28 hours to load the canvas.
Any help on this would be appreciated.

Why is my game running slow in libGDX?

I am making a racing game in libGDX. My game's APK size is 9.92 MB, and I am using four texture packs with a total size of 9.92 MB. My game runs fine on desktop, but it runs very slowly on an Android device. What is the reason behind this?
There are a few pitfalls we tend to neglect while programming.
Desktop processors are way more powerful, so the game may run smoothly on desktop but slowly on a mobile device.
Here are some key notes you should follow for optimal game flow:
No I/O operations in the render method.
Avoid creating objects in the render method.
Reuse objects (for instance, if your game has 1000 platforms but the current screen can only display 3, then instead of making 1000 objects, make 5 or 6 and reuse them). You can use the Pool class provided by libGDX for object pooling; see the pooling sketch after this list.
Try to load only those assets that are necessary for the current screen.
Check your logcat to see whether the garbage collector is being called. If so, use the finalize method of Object to find out which classes' objects are being collected as garbage, and try to improve on that.
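The pooling technique itself is language-agnostic; libGDX's Pool class implements it for Java. A minimal sketch of the idea in C (all names are illustrative):

#include <stddef.h>

#define POOL_SIZE 6                  /* enough for what is visible at once */

typedef struct {
    float x, y;
    int   in_use;
} Platform;

static Platform pool[POOL_SIZE];

/* Hand out a free slot instead of allocating a new object each time. */
Platform *platform_obtain(void)
{
    for (size_t i = 0; i < POOL_SIZE; i++) {
        if (!pool[i].in_use) {
            pool[i].in_use = 1;
            return &pool[i];
        }
    }
    return NULL;                     /* pool exhausted */
}

/* Releasing just marks the slot reusable; nothing is freed or collected. */
void platform_free(Platform *p)
{
    p->in_use = 0;
}

In libGDX, Pool.obtain() and Pool.free() play exactly these two roles, and pooled objects implement Poolable.reset() to clear their state on release.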
Good luck.
I've got some additional tips for improving performance:
Try to minimize texture bindings (or bindings in general, when you're making a 3D game, for example) in your render loop. Use texture atlases, and after binding a texture, use it for as many draws as possible before binding another one.
Don't display things that are not in the frustum/viewport. First calculate whether the drawn object can even be seen by the active camera. If it can't be seen, just don't send it to your GPU when rendering! There's a minimal culling sketch after this list.
Don't call spritebatch.begin() or spritebatch.end() too often in the render loop, because every begin/end pair flushes the batch and uploads its data to the GPU.
Do NOT load assets while rendering, unless you're doing it once in another thread.
The latest versions of libGDX also provide a GLProfiler, where you can measure how many draw calls, texture bindings, vertices, etc. you have per frame. I'd strongly recommend this, since there can always be situations where you hit memory or computational overhead you would not expect.
Use libGDX's Poolable interface and Pool class for pooling objects and minimizing object-creation time, since creating objects can cause tiny but noticeable stutters in your game's render loop.
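To illustrate the culling point above, the visibility test can be as simple as a 2D overlap check between the object's bounds and the camera's visible rectangle (a minimal sketch; the types are illustrative):

typedef struct { float x, y, w, h; } Rect;

/* True when the two axis-aligned rectangles overlap at all. */
static int is_visible(const Rect *obj, const Rect *viewport)
{
    return obj->x < viewport->x + viewport->w &&
           obj->x + obj->w > viewport->x      &&
           obj->y < viewport->y + viewport->h &&
           obj->y + obj->h > viewport->y;
}

/* In the render loop: if (is_visible(&bounds, &cam_rect)) draw the object. */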
By the way, without any additional information, no one's going to give you a good or precise answer. If you think it's not worth it to write enough text or information for your question, why should it be worth it to answer it?
To really understand why your game is running slowly, you need to profile your application.
There are free tools available for this.
On desktop you can use VisualVM.
On Android you can use Android Monitor.
With profiling you will find exactly which methods are taking up the most time.
A likely cause of slowdowns is texture binding. Do you switch between different pages of packed textures often? Try to draw everything from one page before switching to another page.
The answer is likely a little more than just "computer fast; phone slow". Rather, it's important to note that your computer's Java VM is likely Oracle's very nicely optimized JVM, while your phone's Java VM is likely Dalvik, which, to say nothing else of its performance, does not have the same optimizations for object creation and management.
As others have said, libGDX provides a Pool class for just this reason. Take a look here: https://github.com/libgdx/libgdx/wiki/Memory-management
One very important thing in libGDX is to make sure that asset loading never happens in the render() method. Make sure you load assets at the right times and that they never end up in the render path.
Another very important thing: make your game math independent of rendering, in the sense that the next frame should not have to wait for calculations to finish!
These are the two major things I encountered when I was making the Snake game tutorial.
One thing I have found is that drawing is expensive. If you are drawing offscreen items, you are wasting a lot of resources. If you just check whether items are onscreen before drawing them, your performance improves surprisingly much.
Points to ponder (From personal experience)
DO NOT keep calling a function in the render method that updates something like the time or score on the HUD. Make these updates only when required, e.g., when the score increases, and ONLY then update it (see the sketch after this list).
Make calls conditional: perform updates on specific conditions, not all the time.
E.g., calling/updating in the render method at 60 FPS means you update the time 60 times a second when it only needs to be updated once per second.
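A sketch of the update-only-on-change idea (the HUD call is hypothetical):

#include <stdio.h>

static int  last_score = -1;        /* sentinel forces the first update */
static char score_text[32];

extern void hud_set_score_label(const char *text);  /* hypothetical HUD API */

/* Called every frame at 60 FPS, but the formatting and HUD update run
   only on the frames where the score actually changed. */
void update_score_display(int score)
{
    if (score != last_score) {
        last_score = score;
        snprintf(score_text, sizeof score_text, "Score: %d", score);
        hud_set_score_label(score_text);
    }
}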
These points will have a huge effect on performance (thumbs up).
Check the image sizes in your game. If your images are large, reduce their size using http://tinypng.org/.
It will help you.

Decrease CPU load during a long calculation in an AIR mobile application?

I am facing the following problem: a complex math calculation runs during application loading, and it takes considerably long (about 20 seconds), during which the CPU sits at nearly 100% and the application looks frozen.
Since this is a mobile application, that must be prevented, even at the cost of extending the initial loading time; however, there is no direct access to the calculating code, since it is inside a 3rd-party library.
Is there a general way to keep an AIR application from using most of the CPU?
On desktop, you would use the Workers API. It's pretty new; I'd recommend it for AS3-only projects. If you use Flex, it's better to wait a few months.
Workers is a multi-threading API that lets you have a UI thread and a working thread. This will still use 100% of the CPU, but the UI won't get stuck. Here are some links to get you started:
Thibault Imbert - sneak peek,
Intro to AS3 workers,
AS3 Workers livedocs
However, on mobile you can't use Workers, so you'd have to break your function apart and insert some delays, like callLater or setTimeout. It's hard to restructure a function like that, but if it has a loop, you can insert a callLater call after every x iterations. You can parametrize both x and the delay of the callLater function to achieve the perfect balance. After callLater is called, the UI will be rendered and events will be generated and caught. If you don't need them, remove their listeners or stop their propagation with a higher-priority handler. If you need it, I can post some example source of callLater in a loop; the general pattern is sketched below.
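The pattern behind callLater-in-a-loop is general time slicing: keep the loop state outside the function, process x iterations per tick, and return so the UI can run. A language-agnostic sketch in C, with call_later() standing in for callLater/setTimeout:

#include <stdint.h>

#define CHUNK 1000u                  /* "x": iterations per tick; tune it */

typedef struct {
    uint32_t i, n;                   /* loop state survives between ticks */
    double   acc;
} Job;

/* Stand-in for callLater/setTimeout: run fn(job) on a later tick. */
extern void call_later(void (*fn)(Job *), Job *job);

/* Do one slice of the work, then yield so the UI can render. */
void job_step(Job *job)
{
    uint32_t stop = job->i + CHUNK;
    if (stop > job->n)
        stop = job->n;

    for (; job->i < stop; job->i++)
        job->acc += job->i * 0.5;    /* placeholder for the real math */

    if (job->i < job->n)
        call_later(job_step, job);   /* not finished: reschedule */
    /* else: done; publish job->acc */
}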

Why does Flash Builder 4.6 Profiler seem to leak Strings, whereas Debug mode GC's as expected

While unit-profiling my classes, I noticed that the String class endlessly accumulates memory (eating up over 90% of the memory in my sizable app). Luckily, this happens only while running in Profiler mode in Flash Builder 4.6. In debug or deployment (as AIR), memory usage levels off as expected, as measured with an embedded on-screen memory profiler (Mr. Doob's Stats).
To verify, I made a test app that was simply a URLLoader continuously loading a text file. When running in Profiler mode using URLLoaderDataFormat.String, the String data is never GC'd and grows continuously, whereas with URLLoaderDataFormat.BINARY the data is GC'd almost immediately and memory stays level.
I hesitate to call this a bug, because it's possibly a necessary part of the way the Profiler works… but perhaps this is abnormal for the Profiler? That is the essence of my Stack Overflow inquiry.
At any rate, this burned up a couple of work-days for me, so if you're Googling, wondering why the String class is growing like crazy and never getting GC'd, consider measuring your app's memory usage outside the Profiler to verify. In my case, I was misled into thinking I had run into some problem with Master Strings; though it's good to understand Master Strings and their impact on memory (see:), don't get misled like I did.