Function composed of the derivative of a function

I have a function (an array of functions) and I would like to calculate its derivative, defining it via def. In other words, I have
import sympy as sym
from sympy import *
import numpy as np
import math
def g(r, theta):
    return np.array([[1 + r + theta, 0], [0, 1 - r + theta]])

def Drg(r, theta):
    r = symbols('r')
    theta = symbols('theta')
    f = np.array([[1 + r + theta, 0], [0, 1 - r + theta]])
    Df = diff(f, r)
    return lambdify((r, theta), Df, "numpy")
print(Drg(1,np.pi))
The function Drg is not working; how can I solve this issue?
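One possible reading of the intended behaviour, as a minimal sketch (not an authoritative fix): build the matrix as a SymPy Matrix, differentiate it with respect to r, and only then lambdify and evaluate it. Note that Drg as written shadows its r and theta arguments with new symbols and returns the lambdified function without ever calling it.

import numpy as np
import sympy as sym

r, theta = sym.symbols('r theta')
f = sym.Matrix([[1 + r + theta, 0], [0, 1 - r + theta]])  # symbolic 2x2 matrix
Df = f.diff(r)                                            # elementwise derivative w.r.t. r

# lambdify turns the symbolic result into a NumPy-callable function of (r, theta)
Drg = sym.lambdify((r, theta), Df, "numpy")
print(Drg(1, np.pi))  # evaluates the derivative at r=1, theta=pi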

Related

How can I create an empty dataset from on a PySpark schema in Palantir Foundry?

I have a PySpark schema that describes columns and their types for a dataset (which I could write by hand, or get from an existing dataset by going to the 'Columns' tab, then 'Copy PySpark schema').
I want an empty dataset with this schema, for example that could be used as a backing dataset for a writeback-only ontology object. How can I create this in Foundry?
To do this in Python, you can create an empty dataset by using the Spark session from the context to create a DataFrame with the schema, for example:
from pyspark.sql import types as T
from transforms.api import transform_df, configure, Output

SCHEMA = T.StructType([
    T.StructField('entity_name', T.StringType()),
    T.StructField('thing_value', T.IntegerType()),
    T.StructField('created_at', T.TimestampType()),
])

# Given there is no work to do, save on compute by running it on the driver
@configure(profile=["KUBERNETES_NO_EXECUTORS_SMALL"])
@transform_df(
    Output("/some/dataset/path/or/rid"),
)
def compute(ctx):
    return ctx.spark_session.createDataFrame([], schema=SCHEMA)
To do this in Java, you can create a transform using the Spark session on the TransformContext:
package myproject.datasets;

import com.palantir.transforms.lang.java.api.Compute;
import com.palantir.transforms.lang.java.api.Output;
import com.palantir.transforms.lang.java.api.TransformProfiles;
import com.palantir.transforms.lang.java.api.TransformContext;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.types.*;
import java.util.List;

public final class MyTransform {
    private static final StructType SCHEMA = new StructType()
            .add(new StructField("entity_name", DataTypes.StringType, true, Metadata.empty()))
            .add(new StructField("thing_value", DataTypes.IntegerType, true, Metadata.empty()))
            .add(new StructField("created_at", DataTypes.TimestampType, true, Metadata.empty()));

    @Compute
    // Given there is no work to do, save on compute by running it on the driver
    @TransformProfiles({ "KUBERNETES_NO_EXECUTORS_SMALL" })
    @Output("/some/dataset/path/or/rid")
    public Dataset<Row> myComputeFunction(TransformContext context) {
        return context.sparkSession().createDataFrame(List.of(), SCHEMA);
    }
}

SQLModel: sqlalchemy.exc.ArgumentError: Column expression or FROM clause expected,

I am using the SQLModel library to do a simple select(), as described on their official website. However, I am getting a Column expression or FROM clause expected error message.
from typing import Optional
from sqlmodel import Field, Session, SQLModel, create_engine, select
from models import Hero

sqrl = f"mysql+pymysql:///roo#asdf:localhost:3306/datab"
engine = create_engine(sqrl, echo=True)

def create_db_and_tables():
    SQLModel.metadata.create_all(engine)

def select_heroes():
    with Session(engine) as session:
        statement = select(Hero)
        results = session.exec(statement)
        for hero in results:
            print(hero)

def main():
    select_heroes()

if __name__ == "__main__":
    main()
this is my models/Hero.py code:
from datetime import datetime, date, time
from typing import Optional
from sqlmodel import Field, SQLModel

class Hero(SQLModel, table=True):
    id: Optional[int] = Field(default=None, primary_key=True)
    name: str
    secret_name: str
    age: Optional[int] = None
    created: datetime
    lastseen: time
When I run app.py, I get the message sqlalchemy.exc.ArgumentError: Column expression or FROM clause expected, got <module 'models.Hero' from '/Users/dev/test/models/Hero.py'>.
The error message Column expression or FROM clause expected, got <module 'models.Hero' from '/Users/dev/test/models/Hero.py'> tells us:
- that SQLModel / SQLAlchemy unexpectedly received a module object named models.Hero, and
- that you have a module named Hero.py.
The import statement from models import Hero only imports the module Hero. Either
- change the import to import the model*:
  from models.Hero import Hero
- or change the code in select_heroes to reference the model†:
  statement = select(Hero.Hero)
* It's conventional to use all lowercase for module names; following this convention will help you distinguish between modules and models.
† This approach is preferable in my opinion: accessing the object via the module namespace eliminates the possibility of name collisions (of course it can be combined with lowercase module names).
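For illustration, a minimal sketch of the second option, keeping the original from models import Hero and reusing the engine defined in the question's code:

from sqlmodel import Session, select
from models import Hero  # Hero here is the module models/Hero.py, not the model

def select_heroes():
    with Session(engine) as session:      # engine as defined in the question
        statement = select(Hero.Hero)     # model accessed via the module namespace
        results = session.exec(statement)
        for hero in results:
            print(hero)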

Random number generation with pyCUDA

I want to generate random numbers with pyCUDA.
To this end, I'm using the following code, which I'm running on the Kaggle virtual machine:
import numpy as np
import time
import pycuda.gpuarray as gpuarray
import pycuda.driver as cuda
import pycuda.autoinit
from pycuda.compiler import SourceModule
N = 10
from pycuda.curandom import XORWOWRandomNumberGenerator
rng = XORWOWRandomNumberGenerator()
d_x = rng.gen_uniform((N,), dtype = np.float32)
My question is: how do I feed the random number generator with a seed?
The pyCUDA documentation page says:
class pycuda.curandom.XORWOWRandomNumberGenerator(seed_getter=None, offset=0)
Parameters:
- seed_getter – a function that, given an integer count, will yield an int32 GPUArray of seeds.
- offset – Starting index into the XORWOW sequence, given seed.
What is an example of the seed_getter function?
The curandom module has two built-in functions for generating random seeds:
- seed_getter_uniform, which will return an array of length N initialized with a single random seed, and
- seed_getter_unique, which will return an array initialized with N different random seeds.
Use one or the other depending on whether you want all internal generator instances to use the same seed or a unique seed.
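A minimal sketch of how the two helpers (and a custom seed_getter) might be passed to the generator; the fixed_seed_getter function is hypothetical and only illustrates the documented signature (an integer count in, an int32 GPUArray of seeds out):

import numpy as np
import pycuda.autoinit
import pycuda.gpuarray as gpuarray
from pycuda.curandom import (XORWOWRandomNumberGenerator,
                             seed_getter_uniform, seed_getter_unique)

N = 10

# All internal generator instances share one random seed:
rng_same = XORWOWRandomNumberGenerator(seed_getter=seed_getter_uniform)

# Each internal generator instance gets its own random seed:
rng_unique = XORWOWRandomNumberGenerator(seed_getter=seed_getter_unique)

# Hypothetical custom seed_getter for a fixed, reproducible seed:
def fixed_seed_getter(count, seed=12345):
    return gpuarray.to_gpu(np.full(count, seed, dtype=np.int32))

rng_fixed = XORWOWRandomNumberGenerator(seed_getter=fixed_seed_getter)
d_x = rng_fixed.gen_uniform((N,), dtype=np.float32)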

No instance of play.api.libs.json.Format is available for scala.Iterable[java.lang.String]

I'm trying to map a simple class using Play version 2.6.2 and Scala 2.11.11:
import play.api.libs.json._
import play.api.libs.json.util._
import play.api.libs.json.Reads._
import play.api.libs.json.Writes._
import play.api.libs.json.Format._
import play.api.libs.functional.syntax._
case class ObjectInfo(
  names: Iterable[String],
  info: Iterable[String]
)

object ObjectInfo {
  /**
   * Mapping to and from JSON.
   */
  implicit val documentFormatter = Json.format[ObjectInfo]
}
getting:
No instance of play.api.libs.json.Format is available for
scala.Iterable[java.lang.String], scala.Iterable[java.lang.String] in
the implicit scope (Hint: if declared in the same file, make sure it's
declared before)
I was expecting Play to automatically map these fields since they're not complex object types but simple collections of strings.
You provide "too much" implicit stuff with your imports. If you remove all imports but the first one, it will compile and do what you want.
If you enable implicit parameter logging via the scalac option -Xlog-implicits, you will see various "ambiguity" and "diverging implicit expansion" errors. The conflicting imports are import play.api.libs.json.Reads._/import play.api.libs.json.Writes._ and import play.api.libs.json.Format._. Maybe someone else can explain this conflict in more detail.

Why can't cython memory views be pickled?

I have a Cython module that uses memoryview arrays, that is...
double[:,:] foo
I want to run this module in parallel using multiprocessing. However I get the error:
PicklingError: Can't pickle <type 'tile_class._memoryviewslice'>: attribute lookup tile_class._memoryviewslice failed
Why can't I pickle a memory view, and what can I do about it?
Maybe passing the actual array instead of the memory view can solve your problem.
If you want to execute a function in parallel, all of its parameters have to be picklable, if I recall correctly. At least that is the case with Python multiprocessing. So you could pass the array to the function and create the memoryview inside your function.
def some_function(matrix_as_array):
    cdef double[:,:] matrix = matrix_as_array
    ...
I don't know if this helps you, but I encountered a similar problem. I use a memoryview as an attribute in a cdef class. I had to write my own __reduce__ and __setstate__ methods to correctly unpickle instances of my class. Pickling the memory view as an array by using numpy.asarray and restoring it in __setstate__ worked for me. A reduced version of my code:
import numpy as np

cdef class Foo:
    cdef double[:,:] matrix

    def __init__(self, matrix):
        '''Assign a passed array to the typed memory view.'''
        self.matrix = matrix

    def __reduce__(self):
        '''Define how instances of Foo are pickled.'''
        d = dict()
        d['matrix'] = np.asarray(self.matrix)
        return (Foo, (d['matrix'],), d)

    def __setstate__(self, d):
        '''Define how instances of Foo are restored.'''
        self.matrix = d['matrix']
Note that __reduce__ returns a tuple consisting of a callable (Foo), a tuple of parameters for that callable (i.e. what is needed to create a 'new' Foo instance, in this case the saved matrix) and the dictionary with all values needed to restore the instance.
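A minimal usage sketch, assuming the Foo extension type above has been compiled into a (hypothetical) module named foo_module:

import pickle
import numpy as np
from foo_module import Foo   # hypothetical compiled Cython module

f = Foo(np.eye(3))
data = pickle.dumps(f)            # __reduce__ supplies the constructor args and state dict
restored = pickle.loads(data)     # __setstate__ reassigns the memoryview
print(np.asarray(restored.matrix))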