How to set up resource dependencies in alt:V - separation-of-concerns

I have seen that it is possible to have dependencies between multiple resources, so that resource1 uses functionality of resource2. How does the communication between resources work?
When should I separate scripts into different resources? Is it better to stick to one resource for a whole gamemode, or to split it up?
Cheers

Serverside
Each resource is isolated from the others; resources communicate with each other through the cpp-sdk. How strict the isolation is depends on the script runtime: some runtimes, like C#, support sharing memory between resources, while Node.js doesn't support sharing memory between resources that run on different threads.
You always have to explicitly tell the runtime which functions and data you want to expose to other resources.
This means there is a small runtime overhead when calling a function or accessing data in another resource, because the data needs to be serialized into unmanaged C++ memory and then deserialized into the memory of the other resource.
When the runtime supports sharing the same memory, this overhead doesn't occur between resources of the same type.
When sticking to a single resource you don't have this runtime overhead, but you can't swap out parts of your gamemode individually.
Clientside
It's basically the same as for serverside, with the exception that currently only a V8 JavaScript module exists, and it doesn't support sharing memory between resources.
On clientside the overhead of calling into other resources most likely matters less than on serverside, especially since you want to keep CPU-intensive work off the server's main thread. Multiple clientside resources also reduce the amount of data the client has to download, because when you change something in a resource the client has to re-download that whole resource.
tl;dr
Serverside
When performance matters most, stick to a single resource on serverside. When you need to swap out resources from time to time, use multiple resources.
Clientside
Use multiple resources when you use resources from other people or want a modular gamemode. Split your assets (mods, images, ...) across as many resources as make sense, to reduce the amount of data clients have to re-download when something changes.

Related

Can a CUDA program finish without cudaStreamDestroy()?

In our large code base, I found multiple cudaStreamCreate() calls. However, I could not find cudaStreamDestroy() anywhere. Is it important to destroy streams after the program is complete, or does one not need to worry about this? What is good programming practice in this regard?
Is it important to destroy streams after the program is complete, or does one not need to worry about this?
The runtime API will clean up all resources allocated (streams, memory, events, etc) by the context owned by the process during normal process termination. It isn't necessary to explicitly destroy streams in most situations.
While talonmies' answer is correct, it is still often important to destroy your streams, and other entities created in CUDA:
If you're writing a library, you may finish your work well before the application exits (although in that case you might be working in a different CUDA context, and maybe you'll simply destroy the whole context).
If the code which creates streams might be called many times, undestroyed streams will accumulate.
Also, if you don't synchronize your streams after completing all work on them, you might be missing some errors (and the results of your last bits of work); and if you do have a "last sync", that is often a natural opportunity to also destroy the stream.
Finally, if you use C++-flavored wrappers, like mine, then streams get destroyed when you leave the scope in which they were created, and you don't have to worry about it (but you pay the overhead of stream destruction API calls).
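To illustrate the explicit create/sync/destroy pattern and the scope-based cleanup mentioned in the last point, here is a minimal sketch using the CUDA runtime API. The ScopedStream wrapper is invented for this example; it is not the API of any particular wrapper library.

#include <cuda_runtime.h>
#include <cstdio>

// Tiny RAII wrapper: the stream is destroyed when the object leaves scope.
// (Illustrative only; not the actual API of any existing wrapper library.)
class ScopedStream {
public:
    ScopedStream()  { cudaStreamCreate(&stream_); }
    ~ScopedStream() { cudaStreamDestroy(stream_); }
    ScopedStream(const ScopedStream&) = delete;
    ScopedStream& operator=(const ScopedStream&) = delete;
    cudaStream_t get() const { return stream_; }
private:
    cudaStream_t stream_;
};

int main() {
    {
        ScopedStream s;
        // ... enqueue kernels / async copies on s.get() here ...

        // A final sync surfaces any errors from the last bits of work
        // and is a natural point to let the stream be destroyed.
        cudaStreamSynchronize(s.get());
        cudaError_t err = cudaGetLastError();
        if (err != cudaSuccess)
            std::printf("CUDA error: %s\n", cudaGetErrorString(err));
    }   // cudaStreamDestroy happens here, when s leaves scope

    return 0;   // remaining context resources are released at process exit
}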

What affects GCP Cloud Function memory usage

I recently redeployed a handful of Python GCP Cloud Functions and noticed they are using about 50 MB more memory, triggering memory limit errors (I had to increase the memory allocation from 256 MB to 512 MB to get them to run). Unfortunately, that is 2x the cost.
I am trying to figure out what caused the memory increase. The only thing I can think of is a recent Python package upgrade, so I pinned all package versions in requirements.txt based on my local virtual env, which has not changed lately. The memory usage increase remained.
Are there other factors that would lead to a memory utilization increase? The Python runtime is still 3.7, and the data the functions process has not changed. It also doesn't seem to be a change GCP has made to Cloud Functions in general, because it has only happened to functions I have redeployed.
I can point out a few possible causes of memory limit errors:
One reason for running out of memory in Cloud Functions is discussed in the documentation:
Files that you write consume memory available to your function, and sometimes persist between invocations. Failing to explicitly delete these files may eventually lead to an out-of-memory error and a subsequent cold start.
As mentioned in this StackOverflow answer, anything you allocate in the global scope without deallocating it stays allocated and is counted against future invocations. To minimize memory usage, only allocate objects locally so they get cleaned up when the function completes. Memory leaks are often difficult to detect.
Also, Cloud Functions need to send a response when they're done; if they don't respond, their allocated resources won't be freed. Any unhandled exception in a function can therefore contribute to memory limit errors.
You may also want to check Auto-scaling and Concurrency, which mentions another relevant point:
Each instance of a function handles only one concurrent request at a time. This means that while your code is processing one request, there is no possibility of a second request being routed to the same instance. Thus the original request can use the full amount of resources (CPU and memory) that you requested.
Lastly, this may be caused by issues with logging. If you are logging objects, that can prevent them from being garbage collected. You may need to make the logging less verbose and log string representations instead, to see if memory usage improves. Either way, you could try using the Profiler to get more information about what's going on with your Cloud Function's memory.

Architecture advice for EventMachine and MySQL

We are writing a real-time game in EventMachine/Ruby. We're using ActiveRecord with MySQL for storing the game objects.
When we start the server we plan to load all the game objects into memory. This will allow us to avoid any blocking/slow SQL queries with ActiveRecord.
However, we still need to persist the data in the database in case the server crashes, of course.
What are our options for doing so? I could use EM.defer, but I have no idea how many concurrent players that could handle, since the thread pool is limited to 20.
Currently I'm thinking using Resque with Redis would be the best bet. Do everything with the objects in memory, and whenever there is a save that needs to occur for the database, fire off a job and add it to the Resque queue.
Any advice?
Threadpool size can be tweaked - see EventMachine.threadpool_size
Each server process (apache...) will spawn its own EventMachine reactor and its own EM.defer threadpool, so if you use a forking server (a mongrel farm, passenger, ...) you don't need to go crazy on the threadpool size
See EM-Synchrony by Ilya Grigorik (https://github.com/igrigorik/em-synchrony) - you should be able to simplify your code with it
AFAIK, MySQL has a non-blocking driver that you can use freely with EM, and EM::Synchrony supports it (http://www.igvita.com/2010/04/15/non-blocking-activerecord-rails/) - this means you may not need EM.defer at all!
Take a look at Thin - https://github.com/macournoyer/thin/ - it's a non-blocking EM-based webserver that supports Rails
Having said all this, writing evented code is a bitch - forget about stack traces, and make sure you run benchmark tests often, as anything blocking your reactor will block the entire application.
Also, this all applies to MRI Ruby ONLY. If you mean to use JRuby, you're bound to get into trouble, as the thread-safety of EventMachine seems to rely largely on the GIL of MRI Ruby and the standard patterns don't work (many aspects of it can be made to work with this fork, https://github.com/WebtehHR/eventmachine/tree/v1.0.3_w_fix, which fixes some issues EM has with JRuby)
Unfortunately, the maintainers at https://github.com/eventmachine/eventmachine are not very active; the project currently has 200+ issues and almost 60 open pull requests, which is why I've had to use a separate fork to continue with my current project. EM is still an awesome project, just don't expect the problems you encounter to be fixed quickly, so do your best not to stray from the well-trodden path of EM use.
Another problem with JRuby is that EM::Synchrony imposes a heavy performance penalty, because JRuby doesn't implement fibers as of 1.7.8 but rather maps them to native Java threads, which are MUCH slower
Also, have you considered messaging with something like RabbitMQ (it has a synchronous driver, https://github.com/ruby-amqp/bunny, and an evented driver, https://github.com/ruby-amqp/amqp) as a way to communicate game objects between clients and perhaps reduce the load on the database / distributed memory store you had in mind?
Redis/Resque seem good, but if all the jobs need to do is simple persistence, and there will be A LOT of such calls, you might want to consider beanstalkd - it has a much faster, though simpler, queue than Resque, and you can probably make this even faster if you don't really need ActiveRecord to dump attribute hashes into the database; see delayed_jobs vs resque vs beanstalkd?
A couple years and a failed project later, some thoughts:
Avoid EventMachine if at all possible; nowadays there are plenty of ways to fully load your CPU with YARV/MRI Ruby on an IO-constrained application without wasting memory.
My favorite approach for a web application at this time is to use Puma with multiple processes and threads.
Keep in mind that the GIL in YARV only affects Ruby interpreter code, not IO operations, meaning that on an IO-constrained application you can add threads and see better utilization of a single core,
and add more processes to see better utilization of many cores :) On a Heroku 1x worker we run 2 processes with 4 threads each, and in benchmarks this pegs our CPU, meaning the application is no longer IO-bound but CPU-bound, without unacceptable memory cost.
When we needed super-fast responses, we were troubled by DB write times that did not affect the response to the client, so we moved those database writes into asynchronous jobs using Sidekiq/Resque.
In hindsight you could even use Celluloid or concurrent-ruby for asynchronous IO reads/writes (think DB writes, cache visits, etc.); it's less overhead and infrastructure, but harder to debug and troubleshoot in production - my worst nightmare being an async operation failing silently with no error trace in our errors console (an exception inside the exception handling, for example)
The end result is that your application gets the same sort of benefits you used to get from EventMachine (elimination of the IO bottleneck, full utilization of the CPU without a huge memory footprint, parallel non-blocking IO) without resorting to writing reactor code, which is a complete bitch to do, as explained in my 2013 post

Is a cache miss a kind of interrupt/fault?

We know that a page miss in memory causes a page fault, and the page fault handler must load the page into physical memory. Here I wonder whether a miss in a cache is also a system fault? If not, what's the difference between a memory fault and a cache fault? Thanks a lot.
By "cache fault" do you mean a cache miss in the L1/L2/L3 caches of the processor? If so, then no, it does not generate a fault, at least on every processor architecture that I've ever heard of.
The reason for this is that a page fault requires software intervention to decide whether the access was invalid, whether the access was to a page that was swapped out to disk, and so on. In contrast, a cache miss can by definition be handled by the processor itself - since it didn't cause a page fault, the data must already be stored in main memory or a lower-level cache, which is directly accessible to the processor. The processor mechanically translates the address being accessed from virtual to physical and then asks the lower-level cache or main memory for the data.
The same idea applies on shared-memory multiprocessors, where a cache line might be invalidated by one core writing to it even though another core has it in its cache. The processor implements a cache coherency protocol to ensure that the stale copy will not be read, usually either by forcing the core with the invalidated cache line to refresh it from a lower-level cache, or by requiring it to snoop a shared write bus where all processors can see the values being written.
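To make the invalidation point concrete, here is a small sketch of "false sharing": two threads update adjacent counters that land on the same cache line, so each write invalidates the other core's copy of the line. No fault or interrupt is ever raised; the only observable effect is that the padded version typically runs noticeably faster. The struct names and iteration count are made up for the example.

#include <atomic>
#include <chrono>
#include <cstdio>
#include <thread>

// Two counters that (very likely) share one 64-byte cache line.
struct Shared {
    std::atomic<long> a{0};
    std::atomic<long> b{0};
};

// The same counters padded onto separate cache lines.
struct Padded {
    alignas(64) std::atomic<long> a{0};
    alignas(64) std::atomic<long> b{0};
};

template <typename T>
double run() {
    T data;
    auto work = [](std::atomic<long>& c) {
        for (long i = 0; i < 20'000'000; ++i)
            c.fetch_add(1, std::memory_order_relaxed);
    };
    auto start = std::chrono::steady_clock::now();
    std::thread t1(work, std::ref(data.a));
    std::thread t2(work, std::ref(data.b));
    t1.join();
    t2.join();
    return std::chrono::duration<double>(std::chrono::steady_clock::now() - start).count();
}

int main() {
    // No fault is ever raised; the coherency protocol just makes the
    // shared-line version noticeably slower on typical multicore CPUs.
    std::printf("same cache line : %.3f s\n", run<Shared>());
    std::printf("separate lines  : %.3f s\n", run<Padded>());
}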
No, it simply causes a processor stall. Perhaps an appropriate mental image is of one or more NOP instructions getting inserted into the pipeline, also called a "bubble". I'm not sure this is an accurate model of what modern processors actually do, but the effect is certainly the same: the processor stops executing instructions until the data becomes available.
A "cache fault" is when a core is blocked from reading/writing because another core intends to read/write the very same data at the same time. This is an issue of multicore parallelism. For instance, consider that two cores (0 and 1) require a variable x from RAM: a copy of x is placed in the highest-level cache (L2 or L3), which is shared by all the cores, and then a second copy of x is placed in the innermost cache (L1) of core 0, while core 1 requests the very same variable to operate on. Core 1 must be blocked while the conflicting update of the variable from core 0 is resolved. That blocking is what is here called a cache fault.
Nobody else has mentioned the TLB so far. Some CPUs (e.g. MIPS) have a software-filled TLB, and a TLB miss actually triggers execution of a dedicated exception handler, which then needs to supply the CPU with the sought virtual-to-physical mapping. In other words, some cache misses/faults may not be handled automatically by hardware.

In-memory function calls

What are in-memory function calls? Could someone please point me to a resource discussing this technique and its advantages? I need to learn more about them and at the moment do not know where to look. Google does not seem to help, as it takes me to the domain of cognition and the nervous system, etc.
Assuming your explanatory comment is correct (I'd have to see the original source of your question to know for sure), it's probably a matter of either (a) function binding times or (b) demand paging.
Function Binding
When a program starts, the linker/loader finds all function references in the executable file that aren't resolvable within the file. It searches the linked libraries to find the missing functions, and then iterates. At least on Linux, the ld.so(8) linker/loader supports two modes of operation: LD_BIND_NOW forces all symbol references to be resolved at program start-up, which is excellent for finding errors and means there's no penalty for the first use of a function versus repeated use, but it can drastically increase application load time. Without LD_BIND_NOW, functions are resolved as they are needed. This is great for small programs that link against huge libraries, since only the few functions actually needed get resolved; for larger programs, though, this might require re-loading libraries from disk over and over during the lifetime of the program, and that can drastically influence response time while the application is running.
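The same now-versus-lazy trade-off shows up with the dlopen(3) interface, where the caller picks the binding mode explicitly. A minimal sketch follows; the library name libexample.so and the function do_work are hypothetical, chosen purely for illustration (link with -ldl):

#include <dlfcn.h>
#include <cstdio>

int main() {
    // RTLD_NOW resolves every undefined symbol in the library immediately
    // (like running the whole program with LD_BIND_NOW=1); RTLD_LAZY defers
    // resolution until each function is first called.
    void* handle = dlopen("libexample.so", RTLD_LAZY);   // hypothetical library
    if (!handle) {
        std::fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }

    // Look up a (hypothetical) function by name; the cast below is the usual
    // idiom for converting the void* returned by dlsym.
    using fn_t = int (*)(int);
    fn_t do_work = reinterpret_cast<fn_t>(dlsym(handle, "do_work"));
    if (!do_work) {
        std::fprintf(stderr, "dlsym failed: %s\n", dlerror());
    } else {
        std::printf("do_work(21) = %d\n", do_work(21));
    }

    dlclose(handle);
    return 0;
}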
Demand Paging
Modern operating system kernels juggle more virtual memory than physical memory. Each application thinks it has access to an entire machine's worth of memory: 4 gigabytes for 32-bit applications, or vastly more for 64-bit applications, regardless of the actual amount of physical memory installed in the machine. Each page of memory needs a backing store, drive space that will be used to hold that page if it must be pushed out of physical memory under memory pressure. If it is purely data, then it gets stored in a swap partition or swap file. If it is executable code, it is simply dropped, because it can be reloaded from the file on disk if it is needed again. Note that this doesn't happen on a function-by-function basis; instead, it happens on pages, which are a hardware-dependent feature. Think 4096 bytes on most 32-bit platforms, perhaps more or less on other architectures, and with huge-page support upwards of 2 or 4 megabytes. If there is a reference to a missing page, the memory management unit signals a page fault, and the kernel loads the missing page from disk and resumes the process.
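As a small, hedged illustration of demand paging, the following sketch reserves a large anonymous mapping with POSIX mmap and then touches one byte per page; each first touch triggers a page fault that the kernel services transparently by wiring in a physical page. The sizes chosen here are arbitrary, and MAP_ANONYMOUS is the Linux spelling.

#include <sys/mman.h>
#include <unistd.h>
#include <cstdio>

int main() {
    const size_t size = 1UL << 30;   // reserve 1 GiB of virtual address space

    // MAP_ANONYMOUS | MAP_PRIVATE: no file backing; pages are zero-filled on
    // first access. The mmap call itself allocates no physical memory.
    char* region = static_cast<char*>(
        mmap(nullptr, size, PROT_READ | PROT_WRITE,
             MAP_PRIVATE | MAP_ANONYMOUS, -1, 0));
    if (region == MAP_FAILED) {
        std::perror("mmap");
        return 1;
    }

    const long page = sysconf(_SC_PAGESIZE);   // typically 4096 bytes

    // Touch one byte per page in the first 16 MiB. Each first touch raises a
    // (minor) page fault that the kernel services by mapping in a physical
    // page, invisible to the program except as a small delay.
    for (size_t off = 0; off < (16UL << 20); off += static_cast<size_t>(page))
        region[off] = 1;

    const size_t touched = (16UL << 20) / static_cast<size_t>(page);
    std::printf("page size: %ld bytes; touched %zu pages\n", page, touched);

    munmap(region, size);
    return 0;
}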