Set module name in ACL

Is it possible to define a module name in a 'Resource'?
For example:
// Define the "Customers" resource in "Backend" module
$customersResource = new \Phalcon\Acl\Resource("Backend\Customers");
// Add "customers" resource with a couple of operations
$acl->addResource($customersResource, "search");
$acl->addResource($customersResource, array("create", "update"));

Predictor.from_archive failed

from allennlp.models.archival import load_archive
from allennlp.predictors.predictor import Predictor

archive = load_archive("elmo-constituency-parser-2018.03.14.tar.gz")
predictor = Predictor.from_archive(archive, 'constituency-parser')
predictor.predict_json({"sentence": "This is a sentence to be predicted!"})
Loading the elmo-constituency-parser throws this error:
allennlp.common.checks.ConfigurationError: ptb_trees not in acceptable choices for
dataset_reader.type: ['babi', 'conll2003', 'interleaving', 'multitask', 'sequence_tagging', 'sharded', 'text_classification_json', 'multitask_shim']. You should either use the --include-package flag to make sure the correct module is loaded, or use a fully qualified class name in your config file like {"model": "my_module.models.MyModel"} to have it imported automatically.
It seems the load_archive function returned the type name "ptb_trees", but a fully qualified name containing "." is expected, such as {"model": "my_module.models.MyModel"}.
Any suggestion? Thanks!
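For context, AllenNLP resolves the type field in a config through a name-to-class registry that is populated at import time, which is why the error suggests either --include-package (so the module that registers "ptb_trees" actually gets imported) or a fully qualified "module.Class" name. A minimal sketch of that pattern, not AllenNLP's actual code:

```python
import importlib

REGISTRY = {}

def register(name):
    """Decorator that records a class under a short name."""
    def decorator(cls):
        REGISTRY[name] = cls
        return cls
    return decorator

def resolve(type_name):
    # Registered short names win; otherwise treat "a.b.C" as module path + attribute,
    # mirroring the "fully qualified class name" fallback the error message describes.
    if type_name in REGISTRY:
        return REGISTRY[type_name]
    if "." in type_name:
        module_name, _, attr = type_name.rpartition(".")
        return getattr(importlib.import_module(module_name), attr)
    raise KeyError("{0!r} not in acceptable choices: {1!r}".format(type_name, sorted(REGISTRY)))
```

If the package that registers the short name is never imported, only the fully qualified form can succeed.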

Google Drive API v3 update an object if one exists with same name: list 'q' parameter does not work as documented?

I'm trying to update a file if it already exists in a particular folder with a specific name. In this instance the object in question is in a Team Drive. I followed the documentation to compose the q parameter for the list call, and even tried switching back to v2. As far as I can tell the query is composed correctly. That said, even though I can see multiple objects in the target folder, the list call fails to find them. I've tried both name = '' and name contains ''. There seems to be enough input validation in place by the Google team, since the API errors out when I get creative. Any pointers?
def import_or_replace_csv_to_td_folder(self, folder_id, local_fn, remote_fn, mime_type):
    DRIVE = build('drive', 'v3', http=creds.authorize(Http()))
    query = "'{0}' in parents and name = '{1}'".format(folder_id, remote_fn)  # closing quote was missing
    print("Searching for previous versions of this file : {0}".format(query))
    check_if_already_exists = DRIVE.files().list(q=query, fields="files(id, name)").execute()
    name_and_location_conflict = check_if_already_exists.get('files', [])
    if not name_and_location_conflict:
        body = {'name': remote_fn, 'mimeType': mime_type, 'parents': [folder_id]}
        out = DRIVE.files().create(body=body, media_body=local_fn,
                                   supportsTeamDrives=True, fields='id').execute().get('id')
        return out
    else:
        if len(name_and_location_conflict) == 1:
            file_id = name_and_location_conflict[0]['id']  # index into the list, not the list itself
            DRIVE.files().update(fileId=file_id, supportsTeamDrives=True,
                                 media_body=local_fn).execute()  # .execute() was missing
            return file_id
        else:
            raise MultipleConflictsError("There are multiple documents matching parent folder and file name. Unclear which requires a version update")
When I tried replacing the 'name' parameter with 'title' (which used to work in v2, based on some answers I reviewed), the API barfed:
googleapiclient.errors.HttpError: <HttpError 400 when requesting https://www.googleapis.com/drive/v3/files?q=%27xxxxxxxxxxxxxxxx%27+in+parents+and+title+%3D+%27Somefile_2018-09-27.csv%27&fields=files%28id%2C+name%29&alt=json returned "Invalid Value">
Thanks @tehhowch,
Indeed, extra measures are necessary when the target is in a Team Drive: the includeTeamDriveItems option needs to be set, otherwise Team Drive locations are not included by default:
check_if_already_exists = DRIVE.files().list(
    q=query,
    fields="files(id, name)",
    supportsTeamDrives=True,
    includeTeamDriveItems=True
).execute()
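As an aside, the original bug was a missing closing quote in the query string; building the query in a small helper makes that harder to miss. The helper name is mine; the backslash escape for single quotes follows the Drive API search-query syntax:

```python
def build_drive_query(folder_id, file_name):
    # Drive API v3 search syntax: the field is 'name' (v2 used 'title'),
    # and a literal single quote inside a value is backslash-escaped.
    escaped = file_name.replace("'", "\\'")
    return "'{0}' in parents and name = '{1}'".format(folder_id, escaped)
```

This also guards against file names like bob's file.csv silently breaking the query.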

Retrieve custom attribute from user profile in Google Apps Script - Google Admin Directory

This is about G Suite users. The following works in the Google Admin Directory using the Google Admin SDK. It retrieves the email address and full name of the user.
var myemail = Session.getActiveUser().getEmail();
var mycontact = AdminDirectory.Users.get(myemail);
var myname = mycontact.name.fullName;
There is a custom attribute in the user profile named "Department". The following does NOT retrieve anything; it returns null:
var mydept = mycontact.Department;
How can one retrieve custom attribute from user profile in G suite?
According to Directory API - Users: get, you need to set the projection to "custom".
projection - What subset of fields to fetch for this user.
Acceptable values are:
"basic": Do not include any custom fields for the user. (default)
"custom": Include custom fields from schemas requested in customFieldMask.
"full": Include all fields associated with this user.
Then you should define a Schema for the custom data
customFieldMask (string) A comma-separated list of schema names. All fields from these schemas are fetched. This should only be set when projection=custom.
So something like:
var mycontact = AdminDirectory.Users.get({
  "userKey": myemail,
  "projection": "full",
  "customFieldMask": "Define Schema Here"
});
You can then use Logger.log(mycontact); to see how the returned custom fields are structured.
For a custom schema, you can just use the full projection to get all custom schema fields.
For the standard department field, see user.organizations[0].department
https://developers.google.com/admin-sdk/directory/v1/reference/users
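The user resource comes back as plain JSON: the standard department lives under organizations, while custom attributes hang off a customSchemas key, keyed by schema name. A shape-only sketch (the schema name "EmployeeData" and its "Department" field are assumptions for illustration):

```python
# Sketch of reading fields off a Directory API user resource.
# The custom schema name "EmployeeData" is hypothetical.
user = {
    "primaryEmail": "someone@example.com",
    "name": {"fullName": "Some One"},
    "organizations": [{"department": "Engineering"}],
    "customSchemas": {"EmployeeData": {"Department": "Engineering"}},
}

standard_dept = user["organizations"][0]["department"]
custom_dept = user["customSchemas"]["EmployeeData"]["Department"]
```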
If you get the error:
Resource Not Found: userKey
try this:
mycontact = AdminDirectory.Users.get(
  myemail, {
    projection: 'full'
  });

How to create a JSON file from a NDB database using Python

I would like to generate a simple JSON file from an NDB database.
I am not an expert in parsing JSON files with Python, nor in the NDB database engine or GQL.
What is the right query to fetch the data? See https://developers.google.com/appengine/docs/python/ndb/queries
How should I write the code to generate the JSON using the same schema as the json described here below?
Many thanks for your help
Model Class definition using NDB:
# coding=UTF-8
from google.appengine.ext import ndb
import logging

class Albums(ndb.Model):
    """Models an individual Event entry with content and date."""
    SingerName = ndb.StringProperty()
    albumName = ndb.StringProperty()
Expected output:
{
"Madonna": ["Madonna Album", "Like a Virgin", "True Blue", "Like a Prayer"],
"Lady Gaga": ["The Fame", "Born This Way"],
"Bruce Dickinson": ["Iron Maiden", "Killers", "The Number of the Beast", "Piece of Mind"]
}
For consistency, model names should be singular (Album, not Albums), and property names should be lowercase_with_underscores:
class Album(ndb.Model):
    singer_name = ndb.StringProperty()
    album_name = ndb.StringProperty()
To generate the JSON as described in your question:
1) Query the Album entities from the datastore:
albums = Album.query().fetch(100)
2) Iterate over them to form a python data structure:
albums_dict = {}
for album in albums:
    if album.singer_name not in albums_dict:
        albums_dict[album.singer_name] = []
    albums_dict[album.singer_name].append(album.album_name)
3) Use the json.dumps() method to encode to JSON (this needs import json):
import json

albums_json = json.dumps(albums_dict)
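The grouping in step 2 can also be sketched end-to-end without the datastore, using collections.defaultdict. The albums_to_json helper and the stand-in objects below are mine, assuming only that each item has singer_name and album_name attributes:

```python
import json
from collections import defaultdict

def albums_to_json(albums):
    # Group album names under each singer, matching the expected output shape.
    grouped = defaultdict(list)
    for album in albums:
        grouped[album.singer_name].append(album.album_name)
    # defaultdict is a dict subclass, so json.dumps serializes it directly.
    return json.dumps(grouped)
```

The defaultdict removes the explicit "key not yet present" check from the loop.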
Alternatively, you could use the built-in to_dict() method, though note this yields a JSON list of per-entity dicts rather than the grouped mapping shown above:
albums = Album.query().fetch(100)
albums_json = json.dumps([a.to_dict() for a in albums])

web2py: Grid CSV export shows ids, not values, for reference fields

Table structure:
db.define_table('parent',
    Field('name'),
    format='%(name)s')

db.define_table('children',
    Field('name'),
    Field('mother', 'reference parent'),
    Field('father', 'reference parent'))

db.children.mother.requires = IS_IN_DB(db, db.parent.id, '%(name)s')
db.children.father.requires = IS_IN_DB(db, db.parent.id, '%(name)s')
Controller:
grid = SQLFORM.grid(db.children, orderby=[db.children.id],
                    csv=True,
                    fields=[db.children.id, db.children.name,
                            db.children.mother, db.children.father])
return dict(grid=grid)
Here the grid shows the proper values, i.e. the names of the mother and father from the parent table.
But when I export it via the CSV link, the resulting spreadsheet shows the ids, not the names of mother and father.
Please help!
The CSV download just gives you the raw database values without first applying each field's represent attribute. If you want the "represented" values of each field, you have two options. First, you can choose the TSV (tab-separated-values) download instead of CSV. Second, you can define a custom export class:
import cStringIO

class CSVExporter(object):
    file_ext = "csv"
    content_type = "text/csv"

    def __init__(self, rows):
        self.rows = rows

    def export(self):
        if self.rows:
            s = cStringIO.StringIO()
            self.rows.export_to_csv_file(s, represent=True)
            return s.getvalue()
        else:
            return ''
grid = SQLFORM.grid(db.mytable, exportclasses=dict(csv=(CSVExporter, 'CSV')))
The exportclasses argument is a dictionary of custom download types that can be used to override existing types or add new ones. Each item is a tuple including the exporter class and the label to be used for the download link in the UI.
We should probably add this as an option.