Nand2tetris: the built-in gate chips behave strangely

I am working through the nand2tetris projects, in which you build a virtual computer from basic logic gates using the simulator provided on the project's website. I have run into the problem shown here:
The outputs of the Not and And gates are the inverse of what is expected (the red circle in the picture). For the Not gate, if its input is 1, its output should be 0, but it is 1. For And, if both inputs are 1, the output should be 1, but it is 0. In the previous projects I finished there were no such errors, which is strange. Both are built-in chips, not ones I wrote myself, yet their outputs are wrong, which confuses me a lot. If I don't solve this, I don't think I can do the rest of the nand2tetris projects. It would be really nice if someone could help me find the reason.

I can see that you're testing the multiplexor (Mux) chip.
If you didn't move your Mux.hdl file out of its original project directory, then by default the simulator uses your own implementations of the And and Not gates. Check the And.hdl and Not.hdl files that are present in the same directory as your Mux.hdl.
You can be sure that the built-in chips are implemented correctly if you got them from the project's webpage; they are thoroughly tested.
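As a quick sanity check, here is a minimal Python sketch (not part of the course, just an illustration) of how Not and And are typically built from Nand, together with the truth tables the simulator should reproduce. If your own Not.hdl or And.hdl disagrees with these tables, that is where the inversion comes from:

    def nand(a, b):
        # Primitive gate: output is 0 only when both inputs are 1.
        return 0 if (a == 1 and b == 1) else 1

    def not_gate(x):
        # Not is a Nand with both inputs tied together.
        return nand(x, x)

    def and_gate(a, b):
        # And is a Nand followed by a Not.
        return not_gate(nand(a, b))

    for a in (0, 1):
        print(f"Not({a}) = {not_gate(a)}")             # Not(1) should be 0
    for a in (0, 1):
        for b in (0, 1):
            print(f"And({a},{b}) = {and_gate(a, b)}")  # And(1,1) should be 1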


Problem loading "decomposable-attention-elmo" with `Predictor.from_path`

I'm trying to load the decomposable attention model proposed in this paper, i.e. the decomposable attention model (Parikh et al., 2017) combined with ELMo embeddings, trained on SNLI. I used the code suggested on the demo website:
    from allennlp.predictors.predictor import Predictor

    predictor = Predictor.from_path(
        "https://storage.googleapis.com/allennlp-public-models/decomposable-attention-elmo-2020.04.09.tar.gz",
        "textual_entailment",
    )
    predictor.predict(
        hypothesis="Two women are sitting on a blanket near some rocks talking about politics.",
        premise="Two women are wandering along the shore drinking iced tea."
    )
I found this in the log:
Did not use initialization regex that was passed: .*token_embedder_tokens\._projection.*weight
and the prediction was also different from what I got on the demo website (which is the output I intended to reproduce). Did I miss anything here?
Also, I tried two other versions of the pretrained model, decomposable-attention-elmo-2018.02.19.tar.gz and decomposable-attention-elmo-2020.02.10.tar.gz. Neither of them works, and I got this error:
ConfigurationError: key "token_embedders" is required at location "model.text_field_embedder."
What do I need to do to get exactly the output presented on the demo website?
ELMo is a bit difficult in this respect: it keeps internal state, so you don't get the same output if you call it twice; the result depends on what you processed beforehand. In general, ELMo should be warmed up with a few queries before you use it seriously.
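A minimal sketch of such a warm-up, using the same model as above (the number of warm-up calls is an arbitrary choice, and the output key may vary between model versions):

    from allennlp.predictors.predictor import Predictor

    predictor = Predictor.from_path(
        "https://storage.googleapis.com/allennlp-public-models/decomposable-attention-elmo-2020.04.09.tar.gz",
        "textual_entailment",
    )

    example = {
        "premise": "Two women are wandering along the shore drinking iced tea.",
        "hypothesis": "Two women are sitting on a blanket near some rocks talking about politics.",
    }

    # A few throwaway calls to warm up ELMo's internal state,
    # so the prediction below is less sensitive to call order.
    for _ in range(5):
        predictor.predict(**example)

    result = predictor.predict(**example)
    print(result["label_probs"])  # per-label probabilities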
If you're still seeing large discrepancies in the output, let us know and we'll look into it.
The old versions of the model don't work with the new code. That's why we published the new model versions.

How does a computer actually convert decimal numbers into binary?

We know that a computer performs all of its operations in binary only; it cannot work with decimal or any other base.
If the computer cannot perform any operation on decimals, then how does it convert them to binary? I think there are different stages during the conversion at which addition and multiplication are required. How can the computer add or multiply a number before it even has its binary equivalent?
I have searched for this in many places but couldn't find a convincing answer.
Note: this Stack Exchange site is not the right place to ask this question. I am answering it anyway, but you should move it to the appropriate site, or delete it once you have your answer.
Well, the computer doesn't care what input you supply to it. Think of it like your TV switch: when you switch it on, the TV starts to work, because it got exactly the current flow it requires. Similarly, in a computer there is a particular threshold voltage, say 5 V: anything below it is treated as a '0', and anything at or above it as a '1'. You may have seen AND, OR, and other gates: if you supply '1' to both inputs of an AND gate, it outputs '1', otherwise '0'. There are many such digital circuits; some examples are binary adders, latches, and flip-flops. They all work with these current signals (characterized as 0 or 1 as explained above). A computer is a combination of millions of such circuits.
When you talk about "converting decimal to binary", it doesn't actually work like that. Every program (spreadsheets, games, etc.) is written in some language, and the most common kinds are compiled or interpreted. Languages that get compiled include C and Java; interpreted ones include Python and Ruby. The job of a compiler or interpreter is to translate the code you wrote into assembly code according to the rules of that language. The assembly code is then converted into machine code when it has to run. Machine code is pure zeros and ones, and those zeros and ones simply define what to execute and when.
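The key point is that a decimal number never reaches the machine as "decimal": when you type 42, the computer receives the characters '4' and '2', which are already stored as binary codes (ASCII 0x34 and 0x32). "Converting to binary" is then just ordinary binary arithmetic on those codes, as in this minimal Python sketch (real programs use library routines such as C's atoi or Python's int, which do essentially this):

    def decimal_string_to_int(s):
        # Parse a decimal string digit by digit. Every operation
        # here is carried out by the hardware in binary.
        value = 0
        for ch in s:
            digit = ord(ch) - ord('0')   # '4' -> 4: character codes are already binary
            value = value * 10 + digit   # binary multiply and add
        return value

    print(decimal_string_to_int("42"))        # 42
    print(bin(decimal_string_to_int("42")))   # 0b101010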
Don't confuse this with what you see on the screen. The desktop that displays the data to you is a secondary thing, made specifically to make things easy for us.
Inside a computer, a clock keeps running. You have probably heard of a 2.5 GHz processor or something similar: that is the frequency at which instructions are executed. It may seem odd, but whether you are doing any work or not, a running computer executes instructions continuously, and if you are not doing anything it keeps checking for interaction.
Picture it like this:
1) You turn on your PC; the hardware gets ready for your commands and keeps checking for interaction.
2) You open a folder. To open a folder you obviously need to touch the keyboard or mouse, or use some voice interaction. Your computer follows that interaction: pressing the down arrow produces a zero-or-one signal at the right place, and only then is the result displayed to you. It is not that whatever is displayed is being done; rather, what is being done is displayed so that you can follow it easily.

AVR-Studio: how to see a program's output?

I do not have experience with microcontrollers, but I am working on something related to them. Here is an explanation of my issue:
I have an algorithm, and I want to calculate how many cycles it would cost on a specific AVR microcontroller.
To do that I downloaded AVR Studio 6 and used the simulator, and I succeeded in obtaining the number of cycles for my algorithm. What I want to know is how I can make sure that my algorithm is working as it should. AVR Studio lets me debug using the simulator, but I am not able to see the output of my algorithm.
To simplify my question: I would like some help implementing the hello world example in AVR Studio, that is, I want to see "hello world" in the output window, if that is possible.
My question is not how to program the microcontroller; it is how I can see the output of a program in AVR Studio.
Many thanks
As Hanno Binder suggested in his comment:
Atmel Studio still does not provide any means to display debug messages sent by the simulated program. Your only option is to place breakpoints at appropriate locations and then inspect the state of the device in the simulator: for example the locations in RAM where your result is stored, or the registers in which it may reside; maybe set a 'watch' on a variable or expression.
I think this is the best answer: watch variables and memory while in debug mode.
Note: turn off optimization when you want to debug, or some variables will be optimized away.
The best way to test whether an algorithm works is to run it in a regular PC program, feed it data, and compare the results with ground truth.
Clearly, to be able to do this, a good programming style is necessary, one that separates hardware-related tasks from the actual data processing. Additionally, you have to keep architectural differences in mind (e.g. int = 16 bit on AVR vs. int = 32 bit on a PC; use the fixed-width types from inttypes.h).
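As a toy illustration of that last point (a hypothetical algorithm, sketched in Python for brevity rather than C): a 16-bit accumulator can wrap around where a PC's wider int would not, which is exactly the kind of difference a PC-side test should be designed to catch:

    def average_16bit(samples):
        # Hypothetical filter that mimics AVR-style 16-bit unsigned
        # arithmetic by masking the accumulator after every addition.
        acc = 0
        for s in samples:
            acc = (acc + s) & 0xFFFF   # wraps around like a 16-bit unsigned int
        return acc // len(samples)

    samples = [40000, 30000, 20000]
    ground_truth = sum(samples) // len(samples)   # Python ints never overflow
    print(ground_truth, average_16bit(samples))   # results differ: the accumulator overflowed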

LabVIEW: Icon identification

I'm entirely new to LabVIEW, and as a pet project I'm trying to recreate a pulse detector. The thing is, the .VI was saved in LabVIEW 2010, and I can't open it in LabVIEW 2009, so we are trying to remake it by looking at the module. I do, however, have an image of it, but since I'm pretty new I can't identify some of the components used. Below is an image of the .VI, with the parts I don't know circled in red and numbered. What exactly are these? Thanks!
To make a shift register, right-click on the edge of the while loop and place a shift register. The Wait (ms) node is found in the timing functions palette. #1 and #3 are found in the waveform generation palette. And #2 is a waveform graph that is bound to the output of the filter; just right-click on the output of the filter and create an indicator.
I only have limited experience with the specific features in this code, so I don't have exact names, but it should point you in the right direction:
2 is a dynamic data indicator.
1 converts it to a waveform (it probably appears automatically if you hook a DDT wire up to a WF function).
3 unbundles the data from the waveform. It should be in the waveform palette.
4 is a shift register.
5 is a wait function.
In general, I would recommend that you keep learning, as you will need to understand these things at least to that level before you can be proficient.
Also, the NI forums are much more suitable for this type of question, and they have many more users. If you have such questions that you can't answer yourself, I would suggest posting them there.

Reverse engineering a QuickBASIC 3.0 program

I have a program (I own the rights) written in QuickBASIC 3.0, though I no longer have the source code.
Does anyone know of a decompiler that I can use to see what the program does?
Basically it takes some numbers as input, performs some calculations, and shows some results. Nothing too complicated.
Thanks
I haven't seen any publicly available tools, but there's a page from a guy who claims to have made one. You could try contacting him.
I wouldn't recommend trying it on your own if you don't have any experience in reversing DOS programs. It seems QuickBASIC 3.0 was compiled into some kind of p-code. I've never seen any research on DOS-era p-code, but it might bear some relation to the p-code eventually used in Visual Basic 6.0, which has been investigated quite a lot.
If you vaguely remember the idea but not the details (e.g. the actual values of the coefficients in the formula), one thing you could try is to enter some numbers, read off the results, and save the pairs in an Excel sheet. Repeat that a couple of times and try to plot the data. It's not much, but it might help.
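A sketch of that idea in Python rather than Excel, assuming purely for illustration that the program computes a polynomial of one input (the sample data below is made up):

    import numpy as np

    # Hypothetical (input, output) pairs read off the running program.
    xs = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
    ys = np.array([3.2, 7.2, 13.2, 21.2, 31.2])

    # Fit low-degree polynomials and see which degree matches:
    # a near-zero residual suggests the recovered coefficients.
    for degree in (1, 2, 3):
        coeffs = np.polyfit(xs, ys, degree)
        residual = np.max(np.abs(np.polyval(coeffs, xs) - ys))
        print(degree, np.round(coeffs, 3), residual)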
Alternatively, use the debugger that ships with Borland C++ 3.1, but you are going to need knowledge of assembler...