How to translate words from $this->Time->timeAgoInWords? - cakephp-3.0

Here are some translations in default.po:
/src/Locale/pt_BR/default.po
msgid "{0} minute"
msgid_plural "{0} minutes"
msgstr[0] "minuto"
msgstr[1] "{0} minutos"
msgid "January"
msgstr "Janeiro"
...
The months are being translated in the date input, but "minute" from $this->Time->timeAgoInWords is not. Are they in a different domain?

Yes, they are in a different domain, the cake domain; the months are in that domain too. Take a look at the source: all translatable core messages are in that domain.
I'd suggest that you extract them using the I18N shell (back up your existing translations first in case you accidentally overwrite them), it won't get simpler than that.
See Cookbook Console & Shells > I18N Shell > Generating POT Files
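Since the core messages live in the cake domain, they are looked up in a cake.po file next to your default.po. A minimal sketch (the msgids come from the core; the Portuguese strings are only an illustration):
/src/Locale/pt_BR/cake.po
msgid "{0} minute"
msgid_plural "{0} minutes"
msgstr[0] "{0} minuto"
msgstr[1] "{0} minutos"
You can generate the full template with the I18N shell (bin/cake i18n extract), as described in the Cookbook link above.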

How do I package all of my binaries in bazel?

There is a plethora of BUILD files scattered throughout the hierarchy of my mono repo.
Some of these files contain cc_binary rules.
I know they are all built into bazel-bin, but I'd like to get easy access to them all.
How can I package them all up, and put them all into ~/.bin/ for example?
I see the packaging rules, but it's not clear to me how to write a rule that captures every single program and packages them together.
It may not be the most elegant solution (and I hope I got the question right), but this is how we do it: by packaging/"tarring" each binary in its own Bazel package / BUILD file:
load("@bazel_tools//tools/build_defs/pkg:pkg.bzl", "pkg_tar")

cc_binary(
    name = "hello",
    ...
)

pkg_tar(
    name = "hello_pkg",
    srcs = [":hello"],
    mode = "0755",
    package_dir = "/usr/bin",
)
And then we'd collect all of those into one overall tarball/package in the project root:
pkg_tar(
    name = "mypkg",
    extension = "tar.gz",
    deps = [
        "//hello:hello_pkg",
        ...
    ],
)
Sometimes we'd actually have multiple such rules for hello, for instance to collect executables under bin and libraries under lib with intermediary hello_bin and hello_lib targets. These would, in the same fashion as mypkg above, first be aggregated into hello_pkg, and that in turn would be used in mypkg.
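A rough sketch of that bin/lib split, assuming the package also builds a cc_library called libhello (the names here are illustrative, not from the original setup):
load("@bazel_tools//tools/build_defs/pkg:pkg.bzl", "pkg_tar")

pkg_tar(
    name = "hello_bin",
    srcs = [":hello"],  # the cc_binary from above
    mode = "0755",
    package_dir = "/usr/bin",
)

pkg_tar(
    name = "hello_lib",
    srcs = [":libhello"],  # assumed cc_library target
    mode = "0644",
    package_dir = "/usr/lib",
)

pkg_tar(
    name = "hello_pkg",
    extension = "tar.gz",
    deps = [
        ":hello_bin",
        ":hello_lib",
    ],
)
After bazel build //:mypkg the tarball ends up under bazel-bin, from where you can extract it into ~/.bin or wherever you need the binaries.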

How to upload multiple JSON files into CouchDB

I am new to CouchDB. I need to get 60 or more JSON files in a minute from a server.
I have to upload these JSON files to CouchDB individually as soon as I receive them.
I installed CouchDB on my Linux machine.
I hope someone can help me with my requirement, if possible with pseudocode.
My idea:
Write a Python script that uploads all the JSON files to CouchDB. Each JSON file should become its own document, and the data in the JSON file should be inserted into CouchDB unchanged (the same format and values as in the file).
Note:
These JSON files are transactional; one file is generated every second, so I need to read each file, upload it to CouchDB in the same format, and on successful upload archive the file to a different folder on the local system.
Python program to parse the JSON and insert it into CouchDB:
import sys
import glob
import errno, time, os
import couchdb, simplejson
import json
from pprint import pprint

couch = couchdb.Server()  # Assuming localhost:5984
# couch.resource.credentials = (USERNAME, PASSWORD)
# If your CouchDB server is running elsewhere, set it up like this:
couch = couchdb.Server('http://localhost:5984/')
db = couch['mydb']

path = 'C:/Users/Desktop/CouchDB_Python/Json_files/*.json'
# dirPath = 'C:/Users/VijayKumar/Desktop/CouchDB_Python'
files = glob.glob(path)
for name in files:  # 'file' is a builtin type, 'name' is a less-ambiguous variable name.
    try:
        with open(name) as f:  # No need to specify 'r': this is the default.
            data = json.load(f)
            db.save(data)  # each JSON file becomes one document
            pprint(data)
        # time.sleep(2)
    except IOError as exc:
        if exc.errno != errno.EISDIR:  # Do not fail if a directory is found, just ignore it.
            raise  # Propagate other kinds of IOError.
I would use the CouchDB bulk API, even though you have specified that you need to send them to the db one by one. For example, a simple queue that gets flushed every, say, 5-10 seconds via a bulk doc call will greatly increase the performance of your application.
There is one quirk: for bulk reads you need to know the IDs of the docs you want to get from the DB. But for the PUTs it is perfect. (And even that is not entirely true: you can get ranges of docs with a bulk operation if the IDs you use for your docs sort nicely.)
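A minimal sketch of that queue idea, using the couchdb module from the question (the flush interval and function names are my own assumptions, not an existing API):
import time
import couchdb

couch = couchdb.Server('http://localhost:5984/')
db = couch['mydb']

queue = []
FLUSH_INTERVAL = 5  # seconds; 5-10 s as suggested above
last_flush = time.time()

def flush_queue():
    global last_flush
    if queue:
        db.update(queue)  # one _bulk_docs request instead of one save() per doc
        queue.clear()
    last_flush = time.time()

def enqueue(doc):
    queue.append(doc)
    if time.time() - last_flush >= FLUSH_INTERVAL:
        flush_queue()
Each incoming JSON document goes through enqueue(), and anything still queued can be flushed explicitly on shutdown.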
From my experience working with CouchDB, I have a hunch that you are dealing with transactional documents in order to compile them into some sort of sum result and act on that data accordingly (maybe creating the next transactional doc in the series). For that you can rely on CouchDB by using 'reduce' functions on the views you create. It takes a little practice to get a reduce function working properly, and it depends heavily on what you actually want to achieve and what data you are prepared to emit from the view, so I can't give you more detail on that.
So in the end the app logic would go something like that:
get _design/someDesign/_view/yourReducedView
calculate new transaction
add transaction to queue
onTimeout
send all in transaction queue
If I got that first part wrong, about why you are using transactional docs, all that would really change is the part of the app logic where you get those transactional docs.
Also, before writing your own 'reduce' function, have a look at the built-in ones (they are a lot faster than anything outside of the db engine can do):
http://wiki.apache.org/couchdb/HTTP_Bulk_Document_API
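As an illustration of a built-in reduce, a design document can be saved straight from Python; the view name matches the pseudo-logic above, while the amount field is only an assumption about your data:
design_doc = {
    "_id": "_design/someDesign",
    "views": {
        "yourReducedView": {
            "map": "function(doc) { if (doc.amount) { emit(doc._id, doc.amount); } }",
            "reduce": "_sum",  # built-in reduce, runs inside the db engine
        }
    }
}
db.save(design_doc)

# Later, read the reduced result (or the raw rows with reduce=False):
for row in db.view('someDesign/yourReducedView'):
    print(row.key, row.value)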
EDIT:
Since you are just starting out, I strongly recommend having a look at the CouchDB Definitive Guide.
NOTE FOR LATER:
Here is one pitfall (maybe not so much hidden as non-obvious to a newcomer): when you write a reduce function, make sure it does not produce too much output for an unbounded query. That will slow the entire view down dramatically, even when you pass reduce=false when getting stuff from it.
So you need to get JSON documents from a server and send them to CouchDB as you receive them. A Python script would work fine. Here is some pseudo-code:
loop (until no more docs)
get new JSON doc from server
send JSON doc to CouchDB
end loop
In Python, you could use requests to send the documents to CouchDB and probably to get the documents from the server as well (if it is using an HTTP API).
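A hedged sketch of the upload step with requests (the URL and database name are placeholders): CouchDB creates a new document when you POST JSON to the database and returns its id and rev.
import requests

COUCH_URL = 'http://localhost:5984/mydb'  # assumed database name

def send_doc(doc):
    # POST to the database creates a document and lets CouchDB assign the _id
    resp = requests.post(COUCH_URL, json=doc)
    resp.raise_for_status()
    return resp.json()  # {'ok': True, 'id': ..., 'rev': ...} on success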
You might want to check out the pycouchdb module for Python 3. I've used it myself to upload lots of JSON objects into a CouchDB instance. My project does pretty much the same as you describe, so you can take a look at my project Pyro on GitHub for details.
My class looks like this:
import sys
import pycouchdb

class MyCouch:
    """ COMMUNICATES WITH COUCHDB SERVER """

    def __init__(self, server, port, user, password, database):
        # ESTABLISHING CONNECTION
        self.server = pycouchdb.Server("http://" + user + ":" + password + "@" + server + ":" + port + "/")
        self.db = self.server.database(database)

    def check_doc_rev(self, doc_id):
        # CHECKS REVISION OF SUPPLIED DOCUMENT
        try:
            rev = self.db.get(doc_id)
            return rev["_rev"]
        except Exception as inst:
            return -1

    def update(self, all_computers):
        # UPDATES DATABASE WITH JSON STRING
        try:
            result = self.db.save_bulk(all_computers, transaction=False)
            sys.stdout.write(" Updating database")
            sys.stdout.flush()
            return result
        except Exception as ex:
            sys.stdout.write("Updating database")
            sys.stdout.write("Exception: ")
            print(ex)
            sys.stdout.flush()
            return None
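A minimal usage sketch (the connection details, database name and documents are placeholders, not from the original project):
couch = MyCouch("localhost", "5984", "admin", "secret", "mydb")
docs = [{"_id": "host1", "os": "linux"}, {"_id": "host2", "os": "windows"}]
couch.update(docs)  # bulk upload via save_bulk
print(couch.check_doc_rev("host1"))  # current revision, or -1 if the doc is missing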
Let me know in case of any questions - I will be more than glad to help if you find some of my code usable.

Most "popular" Python repos on GitHub

Based on the v3 documentation I would have thought that this:
$ curl https://api.github.com/legacy/repos/search/python?language=Python&sort=forks&order=desc
would return the top 100 Python repositories in descending order of number of forks. It actually returns an empty (json) list of repositories.
This:
$ curl https://api.github.com/legacy/repos/search/python?language=Python&sort=forks
returns a list of repositories (in json), but many of them are not listed as Python repositories.
So, clearly I have misunderstood the Github API. What is the accepted way of retrieving the top N repositories for a particular language?
As pengwynn said -- currently this is not easily doable via GitHub's API alone. However, have a look at this alternative way of querying using the GitHub Archive project: How to find the 100 largest GitHub repositories for a past date?
In essence, you can query GitHub's historical data using an SQL-like language. So, if having real-time results is not something that is important for you, you could execute the following query on https://bigquery.cloud.google.com/?pli=1 to get the top 100 Python repos as on April 1st 2013 (yesterday), descending by the number of forks:
SELECT MAX(repository_forks) as forks, repository_url
FROM [githubarchive:github.timeline]
WHERE (created_at CONTAINS "2013-04-01" and repository_language = "Python")
GROUP BY repository_url
ORDER BY forks DESC
LIMIT 100
I've put the results of the query in this Gist in CSV format, and the top few repos are:
forks repository_url
1913 https://github.com/django/django
1100 https://github.com/facebook/tornado
994 https://github.com/mitsuhiko/flask
...
The intent of Repository Search API is to find Repositories by keyword and then further filter those results by the other optional query string parameters.
Since you're missing a ?, you're passing the entire intended query string as the :keyword. I'm sorry, we do not support your intended search via the GitHub API at this time.
Try this:
https://api.github.com/search/repositories?q=language:Python&sort=forks&order=desc
This searches over repositories.
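A small Python sketch against that endpoint (unauthenticated requests are rate-limited, so treat this as an illustration rather than a production query):
import requests

resp = requests.get(
    "https://api.github.com/search/repositories",
    params={"q": "language:Python", "sort": "forks", "order": "desc", "per_page": 100},
)
resp.raise_for_status()
for item in resp.json()["items"]:
    print(item["forks_count"], item["full_name"])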

Retrieving multiple objects from LDAP by DN at once

I have a list of DNs, and for performance reasons I want to retrieve the attributes of every DN in the list in a single trip to the LDAP server.
Seems like searching by DN, i.e., using DN as a filter search, is not possible
Using DN in Search Filter
http://www.openldap.org/lists/openldap-software/200503/msg00520.html
....is there any alternative?
Sure you can.
ldapsearch -h <ldaphost> -b "cn=joe,dc=yourdomain,dc=com" -s base -D cn=admin,dc=yourdomain,dc=com -W "(objectclass=*)" "*"
Will retrieve all user attributes for the DN: cn=joe,dc=yourdomain,dc=com.
But, for the list, you would need to repeat the search for each one.
We often do this in a bash script.
Can you use a filter to identify which DNs you need?
-jim
It seems this is only possible in Active Directory. All I had to do was filter by the distinguishedName attribute; however, in my tests there was no performance gain.
Active Directory includes the distinguishedName attribute on every object; the value is the object's DN. The following example elaborates the previous example to include a value of distinguishedName on each object.
http://msdn.microsoft.com/en-us/library/cc223167.aspx
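A rough sketch of that approach with python-ldap (the host, bind credentials and DN list are placeholders; as noted above, filtering on distinguishedName only works against Active Directory):
import ldap
import ldap.filter

dns = [
    "CN=joe,DC=yourdomain,DC=com",
    "CN=jane,DC=yourdomain,DC=com",
]

conn = ldap.initialize("ldap://yourldaphost")
conn.simple_bind_s("cn=admin,dc=yourdomain,dc=com", "secret")

# One OR filter over distinguishedName, so all entries come back from a single search.
filt = "(|" + "".join(
    "(distinguishedName=%s)" % ldap.filter.escape_filter_chars(dn) for dn in dns
) + ")"
results = conn.search_s("dc=yourdomain,dc=com", ldap.SCOPE_SUBTREE, filt, ["*"])
for dn, attrs in results:
    print(dn, attrs)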

Mediawiki and databases

Is there a way I can create a database from which to pull data into my mediawiki table? Or is there a way to have a database like drupal and place a mediawiki type interface on it?
There is no way to do this directly in stock MediaWiki, although you can fake it somewhat with templates. For example, you could create a template something like this:
{{#switch:{{{key}}}
| key1 = value1
| key2 = value2
| key3 = value3
...
}}
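For instance, if that template were saved as Template:Data (a name I'm assuming for illustration), a page could call {{Data|key=key2}} and it would expand to value2.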
Template:NUMBEROF/data on the English Wikipedia is an example of this style (with two levels of keys).
Or you can create a set of templates, one for each "record", that each take an "output formatter" template as a parameter and pass that output formatter a named parameter for each column in the record. The Country data templates on the English Wikipedia are an example of this pattern.
Or you could combine the above two styles, with one parameter to select the row (as in the first style) and a second to provide the output formatter (as in the second).
If you don't mind installing extensions, you could use the Labeled Section Transclusion extension to transclude sections of a data page. Or you could install the Semantic MediaWiki extension, which I hear allows all sorts of querying of data from the wiki's pages. Or you could install one of the many Database extensions that may allow you to do what you want. Or you could write your own database extension.
You could also have a look at http://www.mediawiki.org/wiki/Extension:Data_Transfer, which does not require Semantic MediaWiki even though it's written for use with SMW. (If you use SMW there are, as noted in an earlier reply, plenty of extensions and built-in options.)