Do you violate any law/patent/license if you implement an open-source processor for the ARM ISA? [closed] - open-source

I am considering an HDL implementation of the ARM Instruction Set Architecture as an open-source project, as part of a course project, and supporting it with the GCC compiler / QEMU. Will I be violating any law/patent/license by implementing an ARM ISA and distributing the source as an open core?

Arm Ltd has at least 706 US Patents (search). While ISAs are generally not protected, ISA implementations most certainly are, so you will need to ensure that your implementation does not violate any of Arm's patents.

Related

How to update to CUDA 10.2 on Xavier AGX? [closed]

I am trying to use onnx-trt, but that requires TENSORRT_LIBRARY_MYELIN.
TENSORRT_LIBRARY_MYELIN requires TensorRT 7, TensorRT 7 requires CUDA 10.2, and CUDA 10.2 can only be installed through the JetPack SDK Manager. But I only see CUDA 10.0 there. How do I update to CUDA 10.2?
TensorRT 7 hasn't been released for Jetson yet; stay tuned.
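Until then, it may help to confirm which CUDA toolkit JetPack actually installed before trying to build onnx-trt. Here is a minimal sketch, assuming only the CUDA runtime is available, that prints the runtime and driver versions (compile with nvcc):

```cuda
// Minimal sketch: query the CUDA versions that JetPack actually installed,
// so you can confirm whether 10.0 or 10.2 is active before building onnx-trt.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int runtimeVersion = 0;
    int driverVersion  = 0;

    // Both calls are part of the CUDA runtime API.
    cudaRuntimeGetVersion(&runtimeVersion);
    cudaDriverGetVersion(&driverVersion);

    // Versions are encoded as 1000*major + 10*minor (e.g. 10020 for 10.2).
    printf("CUDA runtime: %d.%d\n", runtimeVersion / 1000, (runtimeVersion % 100) / 10);
    printf("CUDA driver : %d.%d\n", driverVersion / 1000, (driverVersion % 100) / 10);
    return 0;
}
```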

Understanding HPC Linpack (CUDA edition) [closed]

I want to know what role the CPUs play when HPC Linpack (the CUDA version) is running. They receive data from other cluster nodes and perform CPU-GPU data exchange, don't they? So their work doesn't influence performance, does it?
In typical usage both GPU and CPU are contributing to the numerical calculations. The host code will use MKL or another BLAS implementation for host-generated numerical results, and the device code will use CUBLAS or something related for device numerical results.
A version of HPL is available to registered developers in source code format, so you can inspect all this yourself.
And, as you say, the CPUs are also involved in various other administrative activities, such as internode data exchange in a multi-node setting.
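To make that division of labour concrete, here is a small illustrative sketch, not taken from HPL itself: the host_dgemm helper is just a stand-in for the MKL/BLAS call the host code would make, while the GPU computes the same matrix product through CUBLAS (compile with nvcc and link with -lcublas):

```cuda
// Illustrative sketch (not HPL): the CPU does a DGEMM with a plain host loop
// (HPL would call MKL/OpenBLAS here), while the GPU does the same operation
// through cuBLAS. Compile with: nvcc sketch.cu -lcublas
#include <cstdio>
#include <vector>
#include <cublas_v2.h>
#include <cuda_runtime.h>

// Host-side stand-in for the BLAS dgemm HPL would use (C = A * B, column-major).
static void host_dgemm(int n, const double* A, const double* B, double* C) {
    for (int j = 0; j < n; ++j)
        for (int i = 0; i < n; ++i) {
            double sum = 0.0;
            for (int k = 0; k < n; ++k) sum += A[i + k * n] * B[k + j * n];
            C[i + j * n] = sum;
        }
}

int main() {
    const int n = 256;
    std::vector<double> A(n * n, 1.0), B(n * n, 2.0), C_cpu(n * n), C_gpu(n * n);

    // CPU share of the numerical work.
    host_dgemm(n, A.data(), B.data(), C_cpu.data());

    // GPU share of the numerical work, through cuBLAS.
    double *dA, *dB, *dC;
    cudaMalloc(&dA, n * n * sizeof(double));
    cudaMalloc(&dB, n * n * sizeof(double));
    cudaMalloc(&dC, n * n * sizeof(double));
    cudaMemcpy(dA, A.data(), n * n * sizeof(double), cudaMemcpyHostToDevice);
    cudaMemcpy(dB, B.data(), n * n * sizeof(double), cudaMemcpyHostToDevice);

    cublasHandle_t handle;
    cublasCreate(&handle);
    const double alpha = 1.0, beta = 0.0;
    cublasDgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N, n, n, n,
                &alpha, dA, n, dB, n, &beta, dC, n);
    cudaMemcpy(C_gpu.data(), dC, n * n * sizeof(double), cudaMemcpyDeviceToHost);

    printf("C_cpu[0] = %f, C_gpu[0] = %f\n", C_cpu[0], C_gpu[0]);

    cublasDestroy(handle);
    cudaFree(dA); cudaFree(dB); cudaFree(dC);
    return 0;
}
```

The CPU-GPU memcpy calls also show the data-exchange role you mention; in real HPL runs that traffic, plus MPI communication between nodes, does affect overall performance.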

Platform vs Software Framework [closed]

CUDA advertises itself as a parallel computing platform. However, I'm having trouble seeing how it's any different from a software framework (a collection of libraries used for some functionality). I am using CUDA in class, and all I'm seeing is that it provides C libraries with functions that help with parallel computing on the GPU, which fits my definition of a framework. So tell me, how is a platform like CUDA different from a framework? Thank you.
CUDA, the hardware platform, is the actual GPU and its scheduler (the "CUDA architecture"). However, CUDA is also a programming language, very close to C. To work with software written in CUDA you also need an API for calling these functions, allocating memory, etc. from your host language. So CUDA is a platform, a language, and a set of APIs.
If the latter (a set of APIs) matches your definition of a software framework, then the answer is simply yes: both descriptions are true.
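Here is a minimal sketch of those three faces (the file and function names are just placeholders): the __global__ kernel is the language extension, cudaMalloc/cudaMemcpy are the runtime API, and the <<<...>>> launch hands the work to the GPU platform. Compile with nvcc add.cu and run; every element should come back incremented by one.

```cuda
// Minimal sketch of the three faces of CUDA mentioned above:
// the language extension (__global__ kernel), the runtime API
// (cudaMalloc/cudaMemcpy), and the platform (the GPU that runs the kernel).
#include <cstdio>
#include <cuda_runtime.h>

// "Language" part: a kernel, written in CUDA C++, executed on the GPU.
__global__ void add_one(float* data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] += 1.0f;
}

int main() {
    const int n = 1024;
    float host[n];
    for (int i = 0; i < n; ++i) host[i] = static_cast<float>(i);

    // "API" part: allocating device memory and moving data from the host.
    float* dev = nullptr;
    cudaMalloc(&dev, n * sizeof(float));
    cudaMemcpy(dev, host, n * sizeof(float), cudaMemcpyHostToDevice);

    // "Platform" part: the <<<blocks, threads>>> launch hands the work to
    // the GPU's scheduler.
    add_one<<<(n + 255) / 256, 256>>>(dev, n);
    cudaMemcpy(host, dev, n * sizeof(float), cudaMemcpyDeviceToHost);

    printf("host[0] = %f, host[n-1] = %f\n", host[0], host[n - 1]);
    cudaFree(dev);
    return 0;
}
```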

What is the absolutely fastest way to output a signal to external hardware in modern PC? [closed]

I was wondering, what is the absolutely fastest way (lowest latency) to produce an external signal (for example, a CMOS state change from 0 to 1 on an electrical wire connected to another device) from a PC, counting from the moment the CPU assembler program knows that the signal must be produced.
I know that network devices, USB, and VGA monitor output have large latency compared to other interfaces (SATA, PCI-E). Which interface, or what hardware modification, can provide near-zero output latency from, let's say, an assembler program?
I don't know if it is really the fastest interface available, because that also depends on your definition of "external", but http://en.wikipedia.org/wiki/InfiniBand certainly comes close to what your question aims at. Latency is 200 nanoseconds and below in certain scenarios...

Can you eventually close an opensource program? [closed]

I am slowly starting to build myself a small game engine using OpenGL and C++, and I thought it would be nice to make it open source for the time being. The problem is that I may eventually want to market it once I add more unique or detailed features. I know most licenses for open-source software state that future versions must also be open source, but given that it would be my program, would I be allowed to eventually stop making it open source?
It depends on the open-source license you use and the way you set up your project. You could use the BSD/MIT license, and then you don't have the viral problem of the GPL/LGPL (but not the advantages either). You can also leave your main engine open and free and just sell your unique features.
There are many ways!