How to understand a particular tcpdump field [closed] - tcpdump

I have the two tcpdump lines below. What do the "syslog.info" and "local6.info" fields mean, and what do they represent?
06:56:07.533143 IP 10.10.40.10.52126 > 10.18.40.58.514: SYSLOG **syslog.info**, length: 189
06:56:07.669902 IP 10.10.40.15.37866 > 10.18.40.58.514: SYSLOG **local6.info**, length: 292

The relevant C API is void openlog(const char *ident, int option, int facility); and void syslog(int priority, const char *format, ...); where priority is facility | level. In the string form above, the facilities syslog and local6 correspond to LOG_SYSLOG and LOG_LOCAL6, and info is the level, which corresponds to LOG_INFO. As for the semantics:
LOG_SYSLOG: messages generated internally by syslogd(8)
LOG_USER (default): generic user-level messages
LOG_LOCAL0 through LOG_LOCAL7: reserved for local use (local6 is LOG_LOCAL6)
LOG_INFO: informational message
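On the wire, each syslog packet begins with a numeric <PRI> value where PRI = facility * 8 + severity, and tcpdump prints that back as facility.level (so syslog.info is PRI 46 and local6.info is PRI 182). A minimal Python sketch of that decoding, with abbreviated (not exhaustive) name tables of my own:
# Decode a syslog <PRI> value the way tcpdump renders it, e.g. "local6.info".
# PRI = facility * 8 + severity (RFC 3164); the tables below are abbreviated.
FACILITIES = {0: "kern", 1: "user", 5: "syslog", 16: "local0", 22: "local6"}
SEVERITIES = {0: "emerg", 3: "err", 4: "warning", 6: "info", 7: "debug"}

def decode_pri(pri):
    facility, severity = divmod(pri, 8)
    return f"{FACILITIES.get(facility, facility)}.{SEVERITIES.get(severity, severity)}"

print(decode_pri(46))   # syslog.info  (5 * 8 + 6)
print(decode_pri(182))  # local6.info  (22 * 8 + 6)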

Related

What is the Mathematical formula for sparse categorical cross entropy loss? [closed]

Can anyone help me with the mathematics of the sparse categorical cross-entropy loss function? I have searched for the derivation and a mathematical explanation but couldn't find any.
I know this is not the right place to ask a question like this, but I am helpless.
It is just cross-entropy loss. The "sparse" refers to the representation it expects, for efficiency reasons. E.g. in Keras the label provided is expected to be an integer i*, the index for which target[i*] = 1.
CE(target, pred) = -1/n SUM_k [ SUM_i target_ki log pred_ki ]
and since we have sparse target, we have
sparse-CE(int_target, pred) = -1/n SUM_k [ log pred_k{int_target_k} ]
So instead of summing over label dimension we just index, since we know all remaining ones are 0s either way.
And overall as long as targets are exactly one class we have:
CE(target, pred) = CE(onehot(int_target), pred) = sparse-CE(int_target, pred)
The only reason for this distinction is efficiency. For regular classification with ~10-100 classes it does not really matter, but imagine word-level language models where we have thousands of classes.
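A small numpy sketch of the equivalence (the array names are mine): averaging -log of the predicted probability at the true class gives the same value whether the target is one-hot or an integer index.
import numpy as np

# Toy batch: n = 3 samples, 4 classes; each row of pred is a probability distribution.
pred = np.array([[0.7, 0.1, 0.1, 0.1],
                 [0.2, 0.5, 0.2, 0.1],
                 [0.1, 0.1, 0.1, 0.7]])
int_target = np.array([0, 1, 3])      # sparse integer labels
onehot = np.eye(4)[int_target]        # the same labels, one-hot encoded

ce = -np.mean(np.sum(onehot * np.log(pred), axis=1))            # CE(onehot, pred)
sparse_ce = -np.mean(np.log(pred[np.arange(3), int_target]))    # index instead of summing
print(ce, sparse_ce)                  # identical up to floating-point rounding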

Nvidia Tesla T4 tensor core benchmark [closed]

I am using the code given here to measure the TFLOPS of mixed-precision ops on an Nvidia Tesla T4. Its theoretical value is given as 65 TFLOPS, however the code reports about 10 TFLOPS. Is there any explanation that can justify this?
This might be more of an extended comment, but hear me out ...
As pointed out in the comments, CUDA Samples are not meant as performance-measuring tools.
The second benchmark you provided does not actually use tensor cores, but just normal instructions executed on the FP32 or FP64 cores:
// Each mad() is a fused multiply-add, i.e. 2 floating-point operations.
for (int i = 0; i < compute_iterations; i++) {
    tmps[j] = mad(tmps[j], tmps[j], seed);
}
On a Turing T4 this gives me, for single-precision operations, a peak of 7.97 TFLOPS, very close to the theoretical limit of 8.1 TFLOPS.
For half-precision operations I get 16.09 TFLOPS, as expected about double the single-precision figure.
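For reference, those peak figures follow from the T4's specifications (2560 CUDA cores, 320 tensor cores, boost clock around 1.59 GHz; the clock value is an approximation on my part). A quick back-of-the-envelope check in Python:
# Rough theoretical peaks for a Tesla T4; the boost clock is approximate.
cuda_cores = 2560
tensor_cores = 320
boost_clock_hz = 1.59e9

fp32_peak = cuda_cores * 2 * boost_clock_hz                 # one FMA (2 FLOPs) per core per clock
fp16_tensor_peak = tensor_cores * 64 * 2 * boost_clock_hz   # 64 FP16 FMAs per tensor core per clock

print(fp32_peak / 1e12)          # ~8.1 TFLOPS
print(fp16_tensor_peak / 1e12)   # ~65 TFLOPS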
Now, on to Tensor cores. As the previously mentioned benchmark does not use them, let's look for something that does.
CUTLASS (https://github.com/NVIDIA/cutlass) is a high performance Matrix-Matrix Multiplication library from NVIDIA.
They provide a profiling application for all the kernels provided. If you run this on a T4, you should get output like this:
Problem ID: 1
Provider: CUTLASS
OperationKind: gemm
Operation: cutlass_tensorop_h1688gemm_256x128_32x2_nt_align8
Status: Success
Verification: ON
Disposition: Passed
reference_device: Passed
cuBLAS: Passed
Arguments: --gemm_kind=universal --m=1024 --n=1024 --k=1024 --A=f16:column --B=f16:row --C=f16:column --alpha=1 \
--beta=0 --split_k_slices=1 --batch_count=1 --op_class=tensorop --accum=f16 --cta_m=256 --cta_n=128 \
--cta_k=32 --stages=2 --warps_m=4 --warps_n=2 --warps_k=1 --inst_m=16 --inst_n=8 --inst_k=8 --min_cc=75 \
--max_cc=1024
Bytes: 6291456 bytes
FLOPs: 2149580800 flops
Runtime: 0.0640419 ms
Memory: 91.4928 GiB/s
Math: 33565.2 GFLOP/s
As you can see, we are now actually using tensor cores and half-precision operations, with a performance of 33.5 TFLOPS. This might not be 65 TFLOPS, but for an application you can use in the real world, that is pretty good.
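As a sanity check on the profiler's numbers (the FLOP-count formula below is my reading of the output, not something the profiler states): the reported FLOP count matches 2·m·n·k for the GEMM plus 2·m·n for the epilogue, and dividing by the runtime reproduces the Math line.
# Reproduce the cutlass_profiler "FLOPs" and "Math" figures from the output above.
m = n = k = 1024
flops = 2 * m * n * k + 2 * m * n    # multiply-adds of the GEMM plus the alpha/beta epilogue
runtime_s = 0.0640419e-3             # reported runtime, converted from ms to s

print(flops)                      # 2149580800, matches the FLOPs line
print(flops / runtime_s / 1e9)    # ~33565 GFLOP/s, matches the Math line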

Octave - How to plot an "infinite" (= defining the function on [0:35916] for me) sawtooth function [closed]

I know how to plot a sawtooth function (thanks to another forum), but only on the domain [0:10], using the following code, which works:
t=0:0.04:10;
A=1;
T=1;
rho= mod(t * A / T, A);
plot(t,rho)
Here A is the amplitude, T the period, and t the time vector.
The problem is that I need the same function on the domain [0:35916], but when I try to adapt this code to do so (e.g. by extending the time interval), I get an error and I don't understand why.
error: plt2vv: vector lengths must match
error: called from
plt>plt2vv at line 487 column 5
plt>plt2 at line 246 column 14
plt at line 113 column 17
plot at line 222 column 10
Simply modifying the original upper limit of your interval from 10 to 35916 should do the trick (the "vector lengths must match" error usually means the two vectors passed to plot ended up with different lengths, e.g. because t was changed but rho was not recomputed):
t=0:0.04:35916;
A=1;
T=1;
rho= mod(t * A / T, A);
plot(t,rho)
The code above yields the expected sawtooth plot over [0, 35916].
Of course it is up to you to adjust A and T to suit your needs.

Does any programming language support expressions like "12.Pounds.ToKilograms()"? [closed]

Converting units and measurements from one system to another can be achieved in most programming languages in one way or another. But can we express something like "12.Pounds.ToKilograms()" in any programming language?
Not exactly in that syntax, but you may want to take a look at Frink: https://frinklang.org
Frink's syntax is similar to Google Calculator or Wolfram Alpha, but not exactly the same. Whereas Google and Wolfram Alpha use the in keyword to trigger unit conversion, Frink uses the -> operator. So in Frink, the following is valid source code:
// Calculate length of Wifi antenna:
lightspeed / (2.4GHz) / 4 -> inches
As I mentioned, this syntax is similar to Google's. For reference, the same calculation in Google syntax is speed of light / 2.4GHz / 4 in inches. Frink predates both Google Calculator and Wolfram Alpha; I first became aware of Frink sometime in the early 2000s.
Frink is unit-aware. A number in Frink always has a unit, even if that unit is simply "scalar" (no units). So to declare a variable that is 12 pounds you'd write:
var x = 12 pounds
To convert you'd do:
x -> kg
Or you can simply write the expression:
12 pounds -> kg
In Smalltalk you could express this as
12 pounds inKilograms
Note however that it is up to you to implement both messages, pounds and inKilograms (there are libraries that do that kind of thing too). But the key point is that the expression above is syntactically valid in Smalltalk (even if these messages do not exist yet).
I can't say I've ever seen this as a valid expression. 12 would have to have a type; let's assume it is an integer. An integer type only knows that it is an integer, and in most languages there are only built-in functions/methods for integers. For this to be a valid expression you would need to define a weight type, and within that object you could define conversion methods or derive child types with conversion methods; a rough sketch of that idea is below.
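As a rough illustration of that "define a weight type" idea in Python (all names here are invented for the example; Python cannot attach methods to the literal 12 itself, so a small wrapper class stands in for it):
# Hypothetical sketch of a tiny weight type with a conversion method.
class Pounds:
    def __init__(self, value):
        self.value = value

    def to_kilograms(self):
        # 1 lb is defined as exactly 0.45359237 kg.
        return self.value * 0.45359237

print(Pounds(12).to_kilograms())  # 5.44310844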
You could do it in Ruby because of its syntactic sugar.
Actually there's a Ruby gem, alchemist, that you should check out.
For example you could write:
10.miles.to.meters

google maps saving geocoded point infringement [closed]

I know that saving the geocoded lat/lng in a DB (or other media) is against the Terms of Use, but what if I "take" that point (while displaying it on the map, of course) and, instead of saving that lat/lng in the DB, I save:
lat + C, lng + C
where C is some constant.
Later, when I query the DB, I query lat - C, lng - C.
Is this legal? Currently I don't have the $10k needed, and I do want to use their geocoder.
Thanks
When you steal a wallet and put your own dollar into it, is it legal then?
However: it's not illegal to store the latLngs somewhere, as long as you store them there only to use them later inside a Maps API application.
https://developers.google.com/maps/faq?hl=en#geocoder_exists