I've been trying to design a tool for per-process tracing, which means I need a key for each process so that I can store key-value map pairings per process. I instinctively don't like using structs or strings as keys, and for a while I considered accessing inode values to use as keys. However, there are numerous examples that use structures or strings as hashmap keys, and Alexei suggested that process names will commonly be used as a key. That said, I am unable to get a basic implementation of such a hashmap to work: within the BPF program, the tracepoint isn't able to find the value associated with the process_name key. Perhaps I'm comparing memory locations and not the string contents as intended? Is there something going on under the hood with ctypes that creates a mismatch between the keys?
from bcc import BPF
from bcc.utils import printb
from bcc.syscall import syscall_name, syscalls
from ctypes import *

b = BPF(text = """
struct procName {
    char name[16];
};

BPF_HASH(attempt, struct procName, u32);

TRACEPOINT_PROBE(raw_syscalls, sys_exit)
{
    u32 *val;
    struct procName hKey;
    bpf_get_current_comm(hKey.name, 16);
    val = attempt.lookup(&hKey);
    if (val)
    {
        bpf_trace_printk("Hello world, I have value %d!\\n", *val);
    }
    return 0;
}
""")

class procName(Structure):
    _fields_ = [("name", (c_char_p*16))]

trialUpload = b["attempt"]
myFirst = procName(('p','y','t','h','o','n','\0'))
trialUpload[myFirst] = c_int(10)

while 1:
    try:
        (task, pid, cpu, flags, ts, msg) = b.trace_fields()
    except KeyboardInterrupt:
        print("Detaching")
        exit()
    print("%-18.9f %-16s %-6d %s" % (ts, task, pid, msg))
The error in the original code has nothing to do with BCC & BPF; it lies within my implementation of ctypes. For starters --
class procName(Structure):
    _fields_ = [("name", (c_char_p*16))]

creates a structure with the field "name". In the above definition, name will be an array of sixteen char pointers (char *[16]) when I want an array of sixteen chars (char[16]). Second, while this
myFirst = procName(('p','y','t','h','o','n','\0'))

might work, it's not a best-practice initialization. This is the correct approach --
class procName(Structure):
    _fields_ = [("name", (c_char*16))]

s = b"python"  # bytes, since ctypes char arrays expect bytes under Python 3
mySecond = procName()
mySecond.name = s
Thus, the full program incorporating a process_name-based key, and passing data into it from Python, is:
from bcc import BPF
from bcc.utils import printb
from bcc.syscall import syscall_name, syscalls
import ctypes
from ctypes import *

b = BPF(text = """
#include <linux/string.h>

struct procName {
    char name[16];
};

BPF_HASH(attempt, struct procName, u32);

TRACEPOINT_PROBE(raw_syscalls, sys_exit)
{
    u32 *myVal;
    struct procName key;
    bpf_get_current_comm(&(key.name), 16);
    myVal = attempt.lookup(&key);
    if (myVal)
    {
        bpf_trace_printk("values: %d\\n", *myVal);
    }
    return 0;
}
""")

# Mirror of the BPF-side struct: a plain 16-byte char array, not char pointers
class procName(Structure):
    _fields_ = [("name", (c_char*16))]

trialUpload = b["attempt"]

s = b"python"  # bytes, since ctypes char arrays expect bytes under Python 3
mySecond = procName()
mySecond.name = s
trialUpload[mySecond] = c_int(5)

while 1:
    try:
        (task, pid, cpu, flags, ts, msg) = b.trace_fields()
    except KeyboardInterrupt:
        print("Detaching")
        exit()
    print("%-18.9f %-16s %-6d %s" % (ts, task, pid, msg))
I am trying to pass some random integers (which I have stored in an array) to my hardware as an input through the poke method in PeekPokeTester. But I am getting this error:
chisel3.internal.ChiselException: Error: Not in a UserModule. Likely cause: Missed Module() wrap, bare chisel API call, or attempting to construct hardware inside a BlackBox.
What could be the reason? I don't think I need a module wrap here as this is not hardware.
class TesterSimple(dut: DeviceUnderTest)(parameter1: Int)(parameter2: Int) extends PeekPokeTester(dut) {
  var x = Array[Int](parameter1)
  var y = Array[Int](parameter2)
  var z = 1
  poke(dut.io.IP1, z.asUInt)
  for (i <- 0 until parameter1) { poke(dut.io.IP2(i), x(i).asUInt) }
  for (j <- 0 until parameter2) { poke(dut.io.IP3(j), y(j).asUInt) }
}

object TesterSimple extends App {
  implicit val parameter1 = 2
  implicit val parameter2 = 2
  chisel3.iotesters.Driver(() => DeviceUnderTest(parameter1: Int, parameter2: Int)) { c =>
    new TesterSimple(c)(parameter1, parameter2)
  }
}
I'd suggest a couple of things.
Main problem: I think you are not initializing your arrays properly.
Try using Array.fill or Array.tabulate to create and initialize arrays
val rand = scala.util.Random
var x = Array.fill(parameter1)(rand.nextInt(100))
var y = Array.fill(parameter2)(rand.nextInt(100))
You don't need the .asUInt in the poke; it accepts Ints or BigInts.
When defining hardware constants, use .U instead of .asUInt; the latter is a way of casting other Chisel types. It does work, but it is a backward-compatibility thing.
It's better to not start variables or methods with capital letters
I suggest using class DutName(val parameter1: Int, val parameter2: Int), or class DutName(val parameter1: Int)(val parameter2: Int) if you prefer.
This will allow you to use the DUT's parameters when you are writing your test.
E.g. for(i <- 0 until dut.parameter1){poke(dut.io.IP2(i), x(i))}
This will save you having to duplicate parameter values between your DUT and your Tester.
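Putting these suggestions together, a sketch of the tester might look like the following (it assumes the DUT exposes parameter1 and parameter2 as vals and has the io fields from your snippet):

import chisel3.iotesters.{Driver, PeekPokeTester}

class TesterSimple(dut: DeviceUnderTest) extends PeekPokeTester(dut) {
  val rand = scala.util.Random
  // Arrays actually filled with random values, rather than one-element arrays
  val x = Array.fill(dut.parameter1)(rand.nextInt(100))
  val y = Array.fill(dut.parameter2)(rand.nextInt(100))
  poke(dut.io.IP1, 1) // poke accepts Int/BigInt, so no .asUInt is needed
  for (i <- 0 until dut.parameter1) { poke(dut.io.IP2(i), x(i)) }
  for (j <- 0 until dut.parameter2) { poke(dut.io.IP3(j), y(j)) }
}

object TesterSimple extends App {
  Driver(() => new DeviceUnderTest(2, 2)) { c => new TesterSimple(c) }
}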
Good luck!
Could you also share your DUT?
I believe the most likely case is your DUT does not extend Module
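For reference, a minimal DUT shape that extends Module and lines up with the tester sketch above might be (the io names and widths are assumptions taken from your snippet):

import chisel3._

class DeviceUnderTest(val parameter1: Int, val parameter2: Int) extends Module {
  val io = IO(new Bundle {
    val IP1 = Input(UInt(8.W))
    val IP2 = Input(Vec(parameter1, UInt(8.W)))
    val IP3 = Input(Vec(parameter2, UInt(8.W)))
    val out = Output(UInt(8.W))
  })
  io.out := io.IP1 // placeholder logic so the module elaborates
}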
I'm starting to develop with Play 2.2.1 and Scala 2.10.2. I'm developing a CRUD application, following an example from the book "Play for Scala: Covers Play 2", but I've run into a problem.
In the book there is this code in the model:
import play.api.Play.current
import play.api.db.DB

def getAll: List[Product] = DB.withConnection { implicit connection =>
  sql().map( row =>
    Product(row[Long]("id"), row[Long]("ean"), row[String]("name"), row[String]("description"))
  ).toList
}
But when I try to run it, I receive this error:
value map is not a member of anorm.SqlQuery
Why doesn't .map work?
Thank you!
SqlQuery doesn't have a map function. I'm not sure how the example in the book is supposed to look, but I'm a little wary of it if it's using that clunky syntax for anorm. I think it should always be preferred to use result set parsers defined separately from the function itself -- that way you'll be able to reuse them elsewhere.
import anorm._
import anorm.SqlParser._
import play.api.Play.current
import play.api.db.DB

case class Product(id: Long, ean: Long, name: String, description: String)

object Product {

  /** Describes how to transform a result row to a `Product`. */
  val parser: RowParser[Product] = {
    get[Long]("products.id") ~
    get[Long]("products.ean") ~
    get[String]("products.name") ~
    get[String]("products.description") map {
      case id ~ ean ~ name ~ description => Product(id, ean, name, description)
    }
  }

  def getAll: List[Product] = {
    DB.withConnection { implicit connection =>
      SQL("SELECT * FROM products").as(parser *)
    }
  }
}
I've made the assumption that your table is named products. It's best to use the full column names in parsers (products.id instead of id): if you later need to combine parsers (using joined results), anorm won't get confused by multiple tables using a similar column name like id. The getAll function now looks much cleaner, and we can re-use the parser for other functions:
def getById(id: Long): Option[Product] = {
  DB.withConnection { implicit connection =>
    SQL("SELECT * FROM products WHERE id = {id}")
      .on("id" -> id)
      .as(parser.singleOpt)
  }
}
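Combining parsers for joined results is where the prefixed column names pay off. A hypothetical sketch (it assumes a companies table with its own Company case class and parser, and a products.company_id column, none of which appear in your code):

def getAllWithCompany: List[(Product, Company)] = {
  DB.withConnection { implicit connection =>
    SQL("SELECT * FROM products JOIN companies ON products.company_id = companies.id")
      .as((parser ~ Company.parser map { case p ~ c => (p, c) }) *)
  }
}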
In the tutorial it is mentioned that,
Before we can get any results, we have to create a query. With Anorm, you call
anorm.SQL with your query as a String parameter:
import anorm.SQL
import anorm.SqlQuery
val sql: SqlQuery = SQL("select * from products order by name asc")
Is this missing from your code?
SqlQuery is being made private (pending PR), so it should not be used directly. In its place, the SQL("...") or SQL"..." functions can be used, in a safer way.
Best
Good day everyone,
I have a file of strings corresponding to the fields of my SQLAlchemy object. Some fields are floats, some are ints, and some are strings.
I'd like to be able to coerce my string into the proper type by interrogating the column definition. Is this possible?
For instance:
class MyClass(Base):
    ...
    my_field = Column(Float)
It feels like one should be able to say something like MyClass.my_field.column.type and either ask the type to coerce the string directly or write some conditions and int(x), float(x) as needed.
I wondered whether this would happen automatically if all the values were strings, but I received Oracle errors because the type was incorrect.
Currently I naively coerce -- if it's float()able, that's my value, else it's a string, and I trust that integral floats will become integers upon inserting because they are represented exactly. But the runtime value is wrong (e.g. 1.0 vs 1) and it just seems sloppy.
Thanks for your input!
SQLAlchemy 0.7.4
You can iterate over columns of the mapped Table:
for col in MyClass.__table__.columns:
    print(col, repr(col.type))
... so you can check the type of each field by its name like this:
def get_col_type(cls_, fld_):
    for col in cls_.__table__.columns:
        if col.name == fld_:
            return col.type  # this contains the instance of the SA type

assert Float == type(get_col_type(MyClass, 'my_field'))
If your file is large, though, I would cache the results to save the for-loop on every row imported from the file.
Type coercion for sqlalchemy prior to committing to some database.
How can I verify Column data types in the SQLAlchemy ORM?
from sqlalchemy import (
    Column,
    Integer,
    String,
    DateTime,
)
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import event
import datetime

Base = declarative_base()

type_coercion = {
    Integer: int,
    String: str,
    DateTime: datetime.datetime,
}

# this event is called whenever an attribute
# on a class is instrumented
@event.listens_for(Base, 'attribute_instrument')
def configure_listener(class_, key, inst):
    if not hasattr(inst.property, 'columns'):
        return
    # this event is called whenever a "set"
    # occurs on that instrumented attribute
    @event.listens_for(inst, "set", retval=True)
    def set_(instance, value, oldvalue, initiator):
        desired_type = type_coercion.get(inst.property.columns[0].type.__class__)
        coerced_value = desired_type(value)
        return coerced_value

class MyObject(Base):
    __tablename__ = 'mytable'
    id = Column(Integer, primary_key=True)
    svalue = Column(String)
    ivalue = Column(Integer)
    dvalue = Column(DateTime)

x = MyObject(svalue=50)
assert isinstance(x.svalue, str)
I'm not sure if I'm reading this question correctly, but I would do something like:
class MyClass(Base):
    some_float = Column(Float)
    some_string = Column(String)
    some_int = Column(Integer)
    ...

    def __init__(self, some_float, some_string, some_int, ...):
        if isinstance(some_float, float):
            self.some_float = some_float
        else:
            try:
                self.some_float = float(some_float)
            except (TypeError, ValueError):
                pass  # do something intelligent
        if isinstance(some_string, str):
            ...
And I would repeat the checking process for each column. I wouldn't trust anything to do it "automatically". I also expect your file of strings to be well structured; otherwise something more complicated would have to be done.
Assuming your file is a CSV (I'm not good with file reads in python, so read this as pseudocode):
while not EOF:
    thisline = readline('thisfile.csv', separator=',')  # this line is an ordered list of strings
    thisthing = MyClass(some_float=thisline[0], some_string=thisline[1], ...)
    DBSession.add(thisthing)
What are the implications of using def vs. val in Scala to define a constant, immutable value? I obviously can write the following:
val x = 3;
def y = 4;
var a = x + y; // 7
What's the difference between those two statements? Which one performs better / is the recommended way / more idiomatic? When would I use one over the other?
Assuming these are class-level declarations:
The compiler will make a val final, which can lead to better-optimised code by the VM.
A def won't store the value in the object instance, so will save memory, but requires the method to be evaluated each time.
For the best of both worlds, make a companion object and declare constants as vals there.
i.e. instead of
class Foo {
  val MyConstant = 42
}

this:

class Foo {}

object Foo {
  val MyConstant = 42
}
The val is evaluated once and stored in a field. The def is implemented as a method and is reevaluated each time, but does not use memory space to store the resulting value.
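A quick way to see this difference is to give each definition a side effect (a toy sketch, not from the question itself):

class Demo {
  val v = { println("evaluating v"); 3 } // runs once, when the instance is constructed
  def d = { println("evaluating d"); 4 } // runs again on every access
}

val demo = new Demo() // prints "evaluating v"
demo.v + demo.v       // prints nothing more: the stored field is just read twice
demo.d + demo.d       // prints "evaluating d" twice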
I'm implementing an actor-based app in Scala, and I'm trying to pass functions between the actors so that they are processed only when some message is received by the actor.
import actors.Actor
import java.util.Random
import scala.Numeric._
import Implicits._

class Constant(val n: Number) extends Actor {
  def act() {
    loop {
      receive {
        case "value" => reply( { n } )
      }
    }
  }
}

class Arithmetic[T: Numeric](A: () => T, B: () => T) extends Actor {
  def act() {
    receive {
      case "sum" => reply( A() + B() )
      /* case "mul" => reply( A * B )
      */
    }
  }
}

object Main extends App {
  val c5 = new Constant(5)
  c5.start
  val a = new Arithmetic({ c5 !! "value" }, { c5 !! "value" })
  a.start
  println(a !? "sum")
  println(a !? "mul")
}
In the example code above I would expect the output to be both 5+5 and 5*5. The issue is that reply is not a typed function, and as such I'm unable to have the operators (+, *) operate over the results from A and B.
Can you provide any help on how to better design/implement such system?
Edit: Code updated to better reflect the problem. Error in:
error: could not find implicit value for evidence parameter of type Numeric[Any]
val a = new Arithmetic({c5 !! "value"}, {c5!!"value"} )
I need to be able to pass the function to be evaluated in the actor whenever I call it. This example uses static values, but I'll be using dynamic values in the future, so passing the value won't solve the problem. Also, I would like to receive different types (Int/Long/Double) and still be able to use the same code.
The error is could not find implicit value for evidence parameter of type Numeric[Any]. Look at the definition of !!:

def !! (msg: Any): Future[Any]
So the T that Arithmetic is getting is Any. There truly isn't a Numeric[Any].
I'm pretty sure that is not your problem. First, A and B are functions, and functions don't have + or *. If you called A() and B(), then you might stand a chance... except for the fact that they are java.lang.Number, which also does not have + or * (or any other method you'd expect it to have).
Basically, there's no "Number" type that is a superclass or interface of all numbers for the simple reason that Java doesn't have it. There's a lot of questions touching this subject on Stack Overflow, including some of my own very first questions about Scala -- investigate scala.math.Numeric, which is the best approximation for the moment.
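For instance, a minimal sketch of the scala.math.Numeric approach (the import is what brings + and * into scope for any T that has a Numeric instance):

import scala.math.Numeric.Implicits._

def sumAndProduct[T: Numeric](a: T, b: T): (T, T) = (a + b, a * b)

sumAndProduct(5, 5)     // (10, 25)
sumAndProduct(2.5, 4.0) // (6.5, 10.0)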
Method vs Function and lack of parentheses
Methods and functions are different things -- see the tons of related questions here -- and the rule regarding dropping parentheses is different as well. I'll let the REPL speak for me:
scala> def f: () => Int = () => 5
f: () => Int
scala> def g(): Int = 5
g: ()Int
scala> f
res2: () => Int = <function0>
scala> f()
res3: Int = 5
scala> g
res4: Int = 5
scala> g()
res5: Int = 5