I'm looking to buy a Chromebook and install either Ubuntu 14 or Ubuntu 16 on it. I looked at the Unity specs and did some research, but it doesn't appear certain that Unity will run.
I'm wondering what specs a Chromebook will need to run the Unity GUI so that I can do some light development work on it.
Further, is a dual-core processor enough to run Unity, or do I need a quad-core CPU? Do I need 4 GB of RAM, or more?
Also, can you recommend one that will work for this purpose?
Thank you
You'll need at least, and probably more than, 4 GB of RAM in order to use Unity effectively in a Linux environment.
A dual core should be enough for things to run; however, everything is going to be more responsive on a quad-core system.
You will need the best graphics hardware you can find. Intel HD may work, but I would be more optimistic about a Tegra GPU being capable of running Unity. Graphics drivers will probably be a hurdle here.
A Chromebook is going to run out of disk space very quickly. Unity itself takes around 2.5 GB once installed, and each game project, depending on its graphics and audio resources, will eat further into the disk. A 32 GB drive would be the absolute minimum, and I can still foresee the inevitably full drive causing issues.
Ultimately I would suggest finding a laptop with higher specs than a typical Chromebook if you're serious about using Unity on it.
My best advice here, though, is don't buy a Chromebook for this purpose unless you're confident in the retailer being open-minded about returns.
I'm successfully compiling my unit tests with arm-none-eabi-gcc and running them in qemu-system-arm with the lm3s6965evb machine... But for some of the unit tests I need more than the 64 KB of RAM that the lm3s6965evb MCU/machine has.
The IAR simulator apparently has no hard limit in the 'machine', so I just made a phony linker file that allows the unit-test program to use e.g. 512 KB of RAM. This works (surprisingly) fine, but QEMU doesn't play like that (it hangs the moment I change the RAM section in the linker file). So I need another machine...
But thinking about it, I think I just need something that executes ARMv7 Thumb(-2?) code, like the Cortex-M3. It could also be a Cortex-M33, which is ARMv8-M...
I don't care about hardware registers or interrupts, etc. I do need, however, printf() to work via semihosting or other means (UART, etc.) to print out unit-test status (successes/failures).
What are my best candidates?
Modify the lm3s6965evb somehow?
Take an A7?
Take one of the ARM VHDL/FPGA machines (mps2..., musca...)?
(The 'virt' machine does not support Cortex-M3/M4, according to the error message.)
Thanks
/T
(It turns out that I misread the "mps2-an385" documentation and tutorials; it wasn't complicated at all.)
It works if I just use the "mps2-an385" machine and modify the linker file to use more flash and RAM. Currently I have beefed it up to 4x the RAM and flash, which is enough for now. (I haven't found out what the exact limits are.)
Still, I would like to hear if there are other solutions.
QEMU's lm3s6965evb model follows the real hardware, which does not have much RAM. If you want more RAM and you don't specifically want to have a model of those Stellaris boards, pick a board model type which has more RAM. If you need to use an M-profile core, try one of the MPS2 boards. If you are happy with an A-profile core, then the "virt" board with a Cortex-A15 may be a good choice.
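For completeness, here is a minimal sketch of how the test status could be reported over semihosting on such a board. It assumes an arm-none-eabi-gcc build linked with --specs=rdimon.specs (newlib's semihosting support) and QEMU started with something like "qemu-system-arm -M mps2-an385 -semihosting -nographic -kernel tests.elf"; run_all_tests() is a hypothetical stand-in for your test runner.

    /* Minimal semihosting test harness sketch (assumptions as noted above). */
    #include <stdio.h>
    #include <stdlib.h>

    extern void initialise_monitor_handles(void); /* provided by newlib's rdimon */
    extern int run_all_tests(void);               /* hypothetical: returns number of failures */

    int main(void)
    {
        initialise_monitor_handles();  /* route stdio through semihosting */

        int failures = run_all_tests();
        printf("unit tests finished: %d failure(s)\n", failures);

        /* exit() goes through the semihosting exit call, which terminates QEMU;
         * exact exit-code behaviour varies between QEMU versions. */
        exit(failures ? EXIT_FAILURE : EXIT_SUCCESS);
    }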
I'm building a one-off smart-home data collection box. It's expected to run on a Raspberry Pi-class machine (~1 GB RAM), handling about 200K data points per day (each a 64-bit int). We've been working with vanilla MySQL, but performance is starting to crumble, especially for queries on the number of entries in a given time interval.
As I understand it, this is basically exactly what time-series databases are designed for. If anything, the unusual thing about my situation is that the volume is relatively low, and so is the amount of RAM available.
A quick look at Wikipedia suggests OpenTSDB, InfluxDB, and possibly BlueFlood. OpenTSDB suggests 4 GB of RAM, though that may be for high-volume settings. InfluxDB actually mentions sensor readings, but I can't find much information on what kind of resources are required.
Okay, so here's my actual question: are there obvious red flags that would make any of these systems inappropriate for the project I describe?
I realize that this is an invitation to flame, so I'm counting on folks to keep it on the bright and helpful side. Many thanks in advance!
InfluxDB should be fine with 1 GB RAM at that volume. Embedded sensors and low-power devices like Raspberry Pis are definitely a core use case, although we haven't done much testing with the latest betas beyond compiling on ARM.
InfluxDB 0.9.0 was just released, and 0.9.x should be available in our Hosted environment in a few weeks. The low end instances have 1 GB RAM and 1 CPU equivalent, so they are a reasonable proxy for your Pi performance, and the free trial lasts two weeks.
If you have more specific questions, please reach out to us at influxdb@googlegroups.com or support@influxdb.com and we'll see how we can help.
Try VictoriaMetrics. It should run on systems with low RAM such as Raspberry Pi. See these instructions on how to build it for ARM.
VictoriaMetrics has the following additional benefits for small systems:
It is easy to configure and maintain since it has zero external dependencies and all the configuration is done via a few command-line flags.
It is optimized for low CPU usage and low persistent storage IO usage.
It compresses data well, so it uses a small amount of persistent storage space compared to other solutions.
Did you try OpenTSDB? We are using OpenTSDB for almost 150 houses to collect smart-meter data, where data is collected every 10 minutes, i.e. a lot of data points in one day. But we haven't tested it on a Raspberry Pi. For a Raspberry Pi, OpenTSDB might be quite heavy, since it needs to run a web server, HBase, and Java.
Just as a suggestion: you can use the Raspberry Pi as a collection hub for the smart home, send the data from the Pi to a server, and store all the points there. On the server you can then do whatever you want, like aggregation or statistical analysis, and send the results back to the smart hub.
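As a rough illustration of that hub-and-server split, the Pi side could be as simple as pushing each reading over TCP to the collection server. This is only a hypothetical sketch: the host name "collector.local", port 9009, and the "sensor value timestamp" line format are made-up placeholders, not any particular database's protocol.

    /* Hypothetical sketch: send one timestamped reading from the Pi to a server. */
    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>
    #include <netdb.h>
    #include <sys/socket.h>

    static int send_reading(const char *sensor, long long value)
    {
        struct addrinfo hints = { .ai_socktype = SOCK_STREAM }, *res;
        if (getaddrinfo("collector.local", "9009", &hints, &res) != 0)
            return -1;

        int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
        int ok = fd >= 0 && connect(fd, res->ai_addr, res->ai_addrlen) == 0;
        freeaddrinfo(res);
        if (!ok) {
            if (fd >= 0)
                close(fd);
            return -1;
        }

        char line[128];
        int n = snprintf(line, sizeof line, "%s %lld %lld\n",
                         sensor, value, (long long)time(NULL));
        int sent = (int)write(fd, line, (size_t)n);
        close(fd);
        return sent == n ? 0 : -1;
    }

    int main(void)
    {
        return send_reading("meter.kwh", 4242) == 0 ? 0 : 1;
    }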
ATSD supports the ARM architecture and can be installed on a Raspberry Pi 2 to store sensor data. Currently, Ubuntu or Debian OS is required. Be sure that the device has at least 1 GB of RAM and an SD card with a high write speed (60 MB/s or more). The size of the SD card depends on how much data you want to store and for how long; we recommend at least 16 GB, and you should plan ahead. Backup battery power is also recommended, to protect against crashes and ungraceful shutdowns.
Here you can find an in-depth guide on setting up a temperature/humidity sensor paired with an Arduino device. Using the guide you will be able to stream the sensor data into ATSD using the MQTT or TCP protocol. Open-source sketches are included.
I need a remote PC/server which has a decent 3D card in it, to perform real-time 3D rendering... imagine running a 3D game on a remote server and that's a good comparison.
Most VPS and dedicated servers do not have good graphics capabilities for obvious reasons but Amazon do have special GPU instances. They're sold for GPGPU computation, using the GPU for data-crunching using tools like CUDA, but I wondered if they could also be used for real-time 3D rendering.
Can anyone provide a solid answer to that?
Edit: I should add it's my own 3d code and I want to know the capabilities of EC2 for this purpose, not a generic EC2 question
Amazon GPU servers are equipped with NVIDIA Tesla GPUs. While these are best at GPGPU work, they also have more-than-average capabilities for real-time graphics rendering, though in this respect they are inferior to NVIDIA GTX cards (see the GPU specs on the NVIDIA website).
Now, you can use Amazon for real-time rendering, but your bottleneck will be the network bandwidth. Tesla cards can be used with OpenGL to render graphics into offscreen buffers very fast, but then you have to find a way to read back the pixels of each rendered frame and stream them to the client at an acceptable frame rate. Reading pixels back from the GPU with OpenGL is already quite slow (though you can do some tricks with PBO ping-ponging), and I don't really think you can stream pixel packets, even at modest resolutions (800x600 or less), from a remote server so that the client receives them at a minimally acceptable frequency. I do believe it will be possible in the future :)
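To make the "PBO ping-pong" trick concrete, here is a hedged sketch of asynchronous readback with two pixel buffer objects: each frame you start a GPU-to-PBO copy of the current frame and map the PBO that was filled on the previous frame, so glReadPixels never stalls on the frame that was just rendered. It assumes an OpenGL 2.1+ context is already current and that something like GLEW provides the buffer-object entry points; stream_to_client() is a hypothetical callback, and error checking is omitted.

    #include <GL/glew.h>

    #define WIDTH  800
    #define HEIGHT 600

    static GLuint pbo[2];

    void init_readback(void)
    {
        glGenBuffers(2, pbo);
        for (int i = 0; i < 2; ++i) {
            glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[i]);
            glBufferData(GL_PIXEL_PACK_BUFFER, WIDTH * HEIGHT * 4, NULL, GL_STREAM_READ);
        }
        glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
    }

    /* Call once per frame, after rendering. */
    void read_frame(int frame, void (*stream_to_client)(const void *rgba, int w, int h))
    {
        int cur  = frame % 2;        /* PBO receiving this frame's copy */
        int prev = (frame + 1) % 2;  /* PBO holding the previous frame  */

        /* Kick off an asynchronous GPU->PBO copy of the current frame. */
        glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[cur]);
        glReadPixels(0, 0, WIDTH, HEIGHT, GL_RGBA, GL_UNSIGNED_BYTE, 0);

        /* Map the PBO filled last frame; by now that copy has usually completed. */
        glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[prev]);
        void *pixels = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
        if (pixels) {
            stream_to_client(pixels, WIDTH, HEIGHT);
            glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
        }
        glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
    }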
P.S. My answer is based on personal experience with Amazon EC2.
Yes, Amazon EC2 is well suited for rendering. I've been doing this at large scale for over 3 years for a mobile application. Throughput has been fine for short animations, which I move from EC2 to S3/CloudFront.
I want to stress-test my application under a slow CPU and low memory, because it seems to be failing on some very old client machines.
At first I read a bit about QEMU and thought about hardware emulation, but it seems a long shot. I asked at Super User, but didn't get much feedback (yet).
So I'm turning to you guys... How do you do this kind of testing?
I'm not sure about slowing down a CPU, but if you use a virtual machine, like VMware, you can control how much RAM is actually available. I run it on an MBP at home with 8 GB, and my WinXP VM is capped at 1.5 GB of RAM.
EDIT: I just checked my version of VMware and I can control the number of cores it can use. It's definitely not the same as a slower CPU, but it might highlight some issues for you.
Since it's not entirely clear whether your app is failing because of the old hardware or the old OS, a VM client should allow you to test various versions of OSes rather quickly. It came in handy for me a few years back when I was trying to get a .NET 2.0 app to run on Win98 (it can be done, though I don't remember how I got it working...).
VirtualBox is a free virtual machine similar to VMware. It also has the ability to reduce available memory. It can restrict how many CPUs are available, but not how fast those CPUs are.
Try cpulimit; most distros include it (Ubuntu does): http://www.digipedia.pl/man/doc/view/cpulimit.1
If you want to lower the effective speed of your CPU, you can do this by modifying a fork-bomb program:
#include <unistd.h>      /* fork() */
#include <sys/types.h>   /* pid_t  */

int main(void){
    int x = 0;
    int limit = 10;              /* number of busy-looping children to spawn */
    while( x < limit ){
        pid_t pid = fork();
        if( pid == 0 )
            while( 1 ){}         /* child: spin forever, consuming CPU */
        else
            x++;                 /* parent: count and spawn the next child */
    }
    return 0;
}
This will slow down your computer quite quickly; you may want to change the limit variable to a higher number. I must warn you, though, that this can be dangerous: if implemented wrong, you could fork-bomb your system, leaving it unusable until you restart it. Read this first if you don't understand what this code will do.
On POSIX (Unix) systems you can apply run limits to processes (that is, to executions of a program). The system call to do this is called setrlimit(), and most shells enable you to use the ulimit built-in to set them from the command-line (plain POSIX ulimit is not very useful). Using these you can run a program with low limits to simulate a smaller computer.
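To make that concrete, a small launcher along these lines applies the limit and then execs the program under test; it is only a sketch, and the 256 MB cap is an arbitrary example (roughly what "ulimit -v" would give you from a shell).

    /* Sketch: cap the address space, then exec the application under test.
     * Usage: ./limitrun ./your_app arg1 arg2 ...                            */
    #include <stdio.h>
    #include <sys/resource.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        if (argc < 2) {
            fprintf(stderr, "usage: %s program [args...]\n", argv[0]);
            return 1;
        }

        struct rlimit rl = { .rlim_cur = 256UL * 1024 * 1024,
                             .rlim_max = 256UL * 1024 * 1024 };
        if (setrlimit(RLIMIT_AS, &rl) != 0) {   /* limit total virtual memory */
            perror("setrlimit");
            return 1;
        }

        execvp(argv[1], &argv[1]);              /* the limit is inherited across exec */
        perror("execvp");
        return 1;
    }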
POSIX systems also provide the nice command for running a program at lower CPU priority, which can simulate a slower CPU if you also ensure there is another CPU-intensive program running at the same time.
I think it's pretty unlikely that CPU speed is going to exercise very many bugs; on the other hand, it's much more likely for different CPU features to matter. Many VM implementations provide ways of toggling certain CPU features on and off; QEMU in particular permits a high level of control over what the emulated CPU provides.
Think outside the box. Which application of the ones you use regularly does this?
A debugger, of course! But how can you achieve such behaviour to emulate a slow CPU?
The secret to your question is the asm int 3 instruction. This is the assembly "pause me" instruction that is sent from the attached debugger to the application you are debugging.
More about int 3 in this question.
You can use the code from this tool to pause/resume your process continuously. You can add an interval and make that tool pause your application for that amount of time.
The emulated CPU speed would be (YourCPU / Interval) - 0.00001% or so, because of the signaling and the other processes running on your machine, but it should do the trick.
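If attaching a debugger is more than you need, the same pause/resume idea can be approximated on POSIX with SIGSTOP/SIGCONT duty cycling, which is essentially what the cpulimit tool mentioned in another answer does. A rough sketch (the 20 ms / 80 ms split, i.e. roughly 20% CPU, is an arbitrary example):

    /* Rough sketch: throttle an already-running process by letting it run
     * 20 ms out of every 100 ms.  Usage: ./throttle <pid>                  */
    #include <signal.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        if (argc != 2) {
            fprintf(stderr, "usage: %s pid\n", argv[0]);
            return 1;
        }
        pid_t pid = (pid_t)atoi(argv[1]);

        for (;;) {
            kill(pid, SIGCONT);   /* let the target run...      */
            usleep(20 * 1000);    /* ...for 20 ms                */
            kill(pid, SIGSTOP);   /* then pause it...            */
            usleep(80 * 1000);    /* ...for 80 ms (~20% "speed") */
        }
    }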
About low-memory emulation:
You can create a wrapper class that allocates memory for the application and replace each allocation with a call to this class. That way you can set exactly how much memory your application can use before it fails to allocate more.
Something such as: MyClass* foo = AllocWrapper(new MyClass(arguments or whatever));
Then you can have AllocWrapper allocate/deallocate memory for you.
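A hedged sketch of the same idea at the malloc level (the names limited_malloc/limited_free and the 64 MB quota are made up for illustration; alignment handling is kept simple for brevity):

    #include <stdlib.h>

    static size_t g_quota = 64UL * 1024 * 1024;  /* pretend we only have 64 MB */
    static size_t g_used  = 0;

    void *limited_malloc(size_t size)
    {
        if (g_used + size > g_quota)
            return NULL;                      /* simulate out-of-memory */
        /* store the size in front of the block so limited_free can account for it */
        size_t *block = malloc(size + sizeof(size_t));
        if (!block)
            return NULL;
        *block = size;
        g_used += size;
        return block + 1;
    }

    void limited_free(void *ptr)
    {
        if (!ptr)
            return;
        size_t *block = (size_t *)ptr - 1;
        g_used -= *block;
        free(block);
    }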
On Linux, you can use ulimit as Raedwald said. On Windows, you can use the SetProcessWorkingSetSize system call. But these only set a limit on a per-process basis. In reality, parts of the system will start to fail in a stressed environment. I would suggest using the Sysinternals Testlimit tool to stress the entire machine.
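For reference, a minimal sketch of that Windows call, with arbitrary example values for the working-set bounds:

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        SIZE_T min_ws = 16 * 1024 * 1024;   /* minimum working set: 16 MB */
        SIZE_T max_ws = 64 * 1024 * 1024;   /* maximum working set: 64 MB */

        if (!SetProcessWorkingSetSize(GetCurrentProcess(), min_ws, max_ws)) {
            fprintf(stderr, "SetProcessWorkingSetSize failed: %lu\n", GetLastError());
            return 1;
        }
        /* ... now run the memory-hungry part of the test ... */
        return 0;
    }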
See https://serverfault.com/questions/36309/throttle-down-cpu-speed-of-vmware-image
where it is claimed that the free-as-in-beer VMware vSphere Hypervisor™ (ESXi) allows you to select the virtual CPU speed on top of setting the memory size of the virtual machine.
Obviously, 64-bit probably has some (or many) advantages over 32-bit that I'm clearly not aware of. So, what are they?
I just don't get it; so many things still aren't supported on x64 PCs. For example, the 64-bit versions of Internet Explorer 8 and 9 don't support Flash, and when I manage to get it working, it then un-works and brings up a message telling me that 64-bit IE doesn't currently support Flash, or that Flash isn't available on 64-bit browsers.
I have a 64-bit PC now with Windows 7 and am still writing 32-bit apps, and they all work perfectly (minus a few bugs here and there, which would appear whether you're on 32-bit or 64-bit). Why should/would one want to develop for 64-bit systems? I don't see how they are any different, and if I were to learn more about developing for 64-bit, where would you recommend I start?
The most commonly cited reason for 64-bit applications is access to more memory. Database servers, for one obvious example, can benefit tremendously when most (or all) the data they're working with is available in memory instead of being stored on disk.
You can also gain extra speed, especially for floating-point-intensive applications (I've seen a 3x speed-up fairly routinely, though it also depends somewhat on the CPU).
Some other applications, however, gain little or even lose a bit by moving to 64 bits. The CPU still has the same bandwidth to memory, but all your pointers double in size, so if you're using pointers a lot, it can end up a net loss.
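A tiny illustration of that pointer-size cost: the same struct grows when the pointers in it go from 4 to 8 bytes (exact sizes depend on the ABI and padding).

    #include <stdio.h>

    struct node {
        struct node *next;
        struct node *prev;
        int          value;
    };

    int main(void)
    {
        printf("sizeof(void *)      = %zu\n", sizeof(void *));      /* typically 4 on x86, 8 on x64    */
        printf("sizeof(struct node) = %zu\n", sizeof(struct node)); /* typically 12 vs 24 with padding */
        return 0;
    }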
It depends what you are doing.
If you're writing a standalone app that doesn't talk to anything else, isn't going to need a huge amount of memory, and wouldn't benefit from the extra registers x64 provides, then you won't get much (except bloated structure sizes :)) from making an x64 version.
OTOH, for code that runs in-process, x64 is kinda viral. The shell itself is 64-bit now so if you want to plug into it you have to be 64-bit as well. (Or at least provide an adapter which can talk to the 64-bit world.) As a result, it's often easier to compile everything as 64-bit so you don't have the hassle of marshalling calls between the two worlds.
(While still having a 32-bit build for 32-bit OS, of course.)
Edit: Forgot to say, it's also useful to target x64 if you want to present the "real" view of a machine. 64-bit Windows "lies" to 32-bit processes about various things for compatibility reasons. You can disable/bypass the lies but doing so without breaking things (e.g. 3rd party DLLs) can be tricky and it's best avoided.
64-bit software can directly address more than 4 GB of memory (for 32-bit software the practical limit is around 3 GB), and it uses additional hardware (extra registers, etc.) available on modern CPUs, thus improving performance. These are the two major reasons for the migration to 64-bit.
Normally you would develop cross-platform software and your compiler would take care of using all the 64-bit features.