I found these instructions on enabling MySQL logging on Amazon RDS:
http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_LogAccess.Concepts.MySQL.html
But the Edit Parameters button is disabled.
Am I missing something?
Given the name "default.mysql5.6," it looks like you are trying to edit the default parameter group.
You cannot modify the parameter settings of a default DB parameter group; you must create your own DB parameter group to change parameter settings from their default value.
http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithParamGroups.html
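If you prefer the CLI, here is a rough sketch of creating a custom parameter group, enabling the slow query log in it, and attaching it to an instance (the group and instance names are placeholders; pick the parameters for whichever log you need):

# create a custom parameter group for the MySQL 5.6 family
aws rds create-db-parameter-group \
    --db-parameter-group-name my-mysql56-params \
    --db-parameter-group-family mysql5.6 \
    --description "custom group with logging enabled"

# turn on the slow query log and write logs to files
aws rds modify-db-parameter-group \
    --db-parameter-group-name my-mysql56-params \
    --parameters "ParameterName=slow_query_log,ParameterValue=1,ApplyMethod=immediate" \
                 "ParameterName=log_output,ParameterValue=FILE,ApplyMethod=immediate"

# attach the custom group to your instance
aws rds modify-db-instance \
    --db-instance-identifier my-db-instance \
    --db-parameter-group-name my-mysql56-params

After switching parameter groups, the instance typically reports the new group as pending-reboot until you reboot it.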
I need help with triggers and using user macros in them. I am using Zabbix 3.4. I have a host, and it has a macro called '{$CLASS_A}'.
I want to set up a trigger that fires when {$CLASS_A} = "HUGE" and free memory is less than 5G.
{my_test_server.vm.memory.size[available].last()}<5G
Can I not just do:
{$CLASS_A} = "HUGE" AND {my_test_server.vm.memory.size[available].last()}<5G
I cannot see what I should be doing to get this to work. Any help would be great.
The "and" operator is case-sensitive and should be lowercase.
The macro usage is incorrect as well: you can use a user macro on the right-hand side of the expression (see the Zabbix documentation on user macros for more), like:
{ca_001:system.cpu.load[,avg1].last()}>{$MAX_CPULOAD}
You can modify your current trigger to:
{my_test_template:vm.memory.size[available].last()}<{$MAX_MEMORY}
then define {$MAX_MEMORY} on both the template and the host: the template macro value acts as the default, and you can override it with a host macro.
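For example (the values here are purely illustrative):
{$MAX_MEMORY} = 8G   defined on the template (the default)
{$MAX_MEMORY} = 5G   defined on my_test_server (overrides the template value for this host)
With that in place, the trigger {my_test_template:vm.memory.size[available].last()}<{$MAX_MEMORY} fires below 5G on my_test_server and below 8G on any other host linked to the template.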
I am using Taleo Connect Client to export data from Taleo, and I have two questions:
How can I add blank columns to an output CSV file?
For example, I want to add ColumnBlank1 between Column_FirstName and Column_LastName:
Column_FirstName|ColumnBlank1|Column_LastName
John||Lee
Adam||Jackson
How can I set a default value, such as "N", for one field?
DBaluke Huang's answer is correct, but it leaves out some details, so here is the full solution for anyone else who needs it.
To export a blank or a fixed string value in a column using Taleo Connect Client (TCC), do the following:
Open your export.
Click the Projections tab.
Click the Add button.
Click Projection Function.
Choose the Replace function.
Click OK.
In the First Parameter section, in the Value box, add any string field from the list on the Entity tab. The Data Type should be Field.
In the Second Parameter section, in the Value box, add the same field you used for the first parameter. The Data Type should be Field.
In the Third Parameter section, in the Value box, enter no value for a blank column, or enter the fixed string you want in all records. Change the Data Type in this section to String.
For those unfamiliar with the Replace function: it searches for the string in Parameter 1's value within Parameter 2's value and replaces every instance it finds with Parameter 3's value. Because Parameters 1 and 2 reference the same field, the entire value matches, so every record's output for this projection becomes Parameter 3's value.
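Conceptually (this is just an illustration of the three parameters, not literal TCC syntax):
Replace(FirstName, FirstName, "")    =>  ""     every record gets a blank value
Replace(FirstName, FirstName, "N")   =>  "N"    every record gets the fixed value N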
You can export a blank field with <quer:string/>.
<quer:projection alias="Blank" xmlns:quer="http://www.taleo.com/ws/integration/query">
<quer:string/>
</quer:projection>
Steps
Open your export in Taleo Connect Client.
Open the General tab and set the Export mode to "CSV-report".
Open the Projections tab.
Click Add.
Select Add a complex projection and click OK.
Under Complex projection, enter the following:
<quer:projection alias="Blank" xmlns:quer="http://www.taleo.com/ws/integration/query">
<quer:string/>
</quer:projection>
Save your changes.
Example:
<quer:query productCode="RC1704" model="http://www.taleo.com/ws/tee800/2009/01" projectedClass="Candidate" locale="en" mode="CSV" csvheader="true" csvdelimiter="|" largegraph="true" preventDuplicates="false" xmlns:quer="http://www.taleo.com/ws/integration/query">
<quer:subQueries/>
<quer:projections>
<quer:projection>
<quer:field path="FirstName"/>
</quer:projection>
<quer:projection alias="Blank">
<quer:string/>
</quer:projection>
<quer:projection>
<quer:field path="LastName"/>
</quer:projection>
</quer:projections>
<quer:projectionFilterings/>
<quer:filterings/>
<quer:sortings/>
<quer:sortingFilterings/>
<quer:groupings/>
<quer:joinings/>
</quer:query>
Results:
FirstName|Blank|LastName
John||Lee
Adam||Jackson
Jane||Doe
Notes:
If you get a SAX parsing error when running the export, make sure your Export mode is set to "CSV-report". (Appears as mode="CSV" in source)
When adding a complex projection in TCC, you must include xmlns:quer="http://www.taleo.com/ws/integration/query", or else TCC will call your source "invalid". However, it is not required when editing your export's source directly outside of TCC.
I resolved the issue as follows:
For the blank column: add a function projection under Projections, set your alias, set the first parameter's value to any available field, set the second parameter's value to the same field as the first, leave the third parameter's value blank, and set its data type to String.
For the default value: same steps as for the first question, but set the third parameter's value to "N".
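For illustration, using the sample names from the question and a hypothetical alias Column_Default for the new projection, the second variant produces:
Column_FirstName|Column_Default|Column_LastName
John|N|Lee
Adam|N|Jackson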
SnappyData documentation give an example on how to submit a jar to a cluster:
https://snappydatainc.github.io/snappydata/howto/run_spark_job_inside_cluster/
But what if I need to submit the jar with the same class CreatePartitionedRowTable
multiple times, but with different parameters, say a different suffix to append to the names of the tables created? How do I do that?
UPDATE:
To be more precise, say I want to pass different parameters when I submit the jar, something like this:
bin/snappy-job.sh submit \
  --app-name CreatePartitionedRowTable \
  --class org.apache.spark.examples.snappydata.CreatePartitionedRowTable \
  --app-jar examples/jars/quickstart.jar \
  --lead localhost:8090 \
  --CustomeParam suffix
The additional --CustomeParam suffix would be passed to the job, and the code could pick up this suffix and append it to the names of the tables being created, so that I don't have to modify my code every time I want to submit the jar with a different suffix.
Update 2:
I just went through the examples and found an example usage:
https://github.com/SnappyDataInc/snappydata/blob/master/examples/src/main/scala/org/apache/spark/examples/snappydata/CreateColumnTable.scala
so basically run like this:
bin/snappy-job.sh submit \
  --app-name CreateColumnTable \
  --class org.apache.spark.examples.snappydata.CreateColumnTable \
  --app-jar examples/jars/quickstart.jar \
  --lead [leadHost:port] \
  --conf data_resource_folder=../../quickstart/src/main/resources
and use the job's Config object to get the custom parameter.
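As a minimal sketch of the job side (assuming the standard SnappyData job API used by the quickstart examples; the parameter name table_suffix and the DDL below are made up for illustration), the class could read a value passed as --conf table_suffix=<value> like this:

import com.typesafe.config.Config
import org.apache.spark.sql.{SnappyJobValid, SnappyJobValidation, SnappySQLJob, SnappySession}

import scala.util.Try

object CreatePartitionedRowTableWithSuffix extends SnappySQLJob {

  override def isValidJob(sns: SnappySession, config: Config): SnappyJobValidation =
    SnappyJobValid()

  override def runSnappyJob(sns: SnappySession, jobConfig: Config): Any = {
    // pick up the value submitted as: --conf table_suffix=<value>; default to "" if absent
    val suffix = Try(jobConfig.getString("table_suffix")).getOrElse("")

    // hypothetical table definition: the suffix is appended to the table name
    sns.sql(s"DROP TABLE IF EXISTS PARTSUPP_$suffix")
    sns.sql(
      s"""CREATE TABLE PARTSUPP_$suffix (
         |  PS_PARTKEY  INTEGER NOT NULL PRIMARY KEY,
         |  PS_SUPPKEY  INTEGER NOT NULL,
         |  PS_AVAILQTY INTEGER NOT NULL
         |) USING ROW OPTIONS (PARTITION_BY 'PS_PARTKEY')""".stripMargin)

    s"created table PARTSUPP_$suffix"
  }
}

The same jar can then be submitted repeatedly with, say, --conf table_suffix=v1 and later --conf table_suffix=v2, without touching the code.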
Each time you submit your app jar with snappy-job.sh, it creates a new job and runs it. It can be the same jar with different content. Do you see an exception, or is the modified class (CreatePartitionedRowTable) not getting picked up?
I am running OpenLDAP 2.4.31. Based on Reverse Group Membership Maintenance:
The memberof overlay updates an attribute (by default memberOf) whenever changes occur to the membership attribute (by default member) of entries of the objectclass (by default groupOfNames) configured to trigger updates.
I would like to change these defaults so that the overlay is based on the objectClass groupOfUniqueNames and the attribute uniqueMember. I did not find any mention of how to do this in the documentation, and I also did not find any default settings for this in cn=config. What settings do I have to add here to make the desired changes?
I have already added the memberof and referential integrity configuration to cn=config based on this article.
Use the following to change the memberof behaviour. I'm showing the solution here for a traditional slapd.conf configuration.
memberof-group-oc groupOfUniqueNames
memberof-member-ad uniqueMember
As for the referential integrity, you can use the memberof overlay's own setting to do this, which is much easier:
memberof-refint true
For cn=config, you probably therefore want the following:
olcMemberOfRefInt: TRUE
olcMemberOfGroupOC: groupOfUniqueNames
olcMemberOfMemberAD: uniqueMember
The example provided on www.schenkels.nl (your link) almost gets you there. You can append the following to the block dn: olcOverlay={0}memberof,olcDatabase={1}hdb,cn=config:
olcMemberOfGroupOC: groupOfNames
olcMemberOfMemberAD: member
olcMemberOfMemberOfAD: memberOf
The above shows the defaults that you already mentioned; it should be possible to change them to the objectClass and attribute you want to use. Check out the slapo-memberof(5) man page for a description of the configuration options.
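For a live cn=config tree, a minimal sketch of the corresponding ldapmodify LDIF (reusing the overlay DN from the example above; adjust the database index to match your setup) would be:

dn: olcOverlay={0}memberof,olcDatabase={1}hdb,cn=config
changetype: modify
replace: olcMemberOfGroupOC
olcMemberOfGroupOC: groupOfUniqueNames
-
replace: olcMemberOfMemberAD
olcMemberOfMemberAD: uniqueMember
-
replace: olcMemberOfRefInt
olcMemberOfRefInt: TRUE

Apply it with something like ldapmodify -Y EXTERNAL -H ldapi:/// -f memberof-uniquemember.ldif.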
Let's say I want to provide a CActiveDataProvider for a CGridView. I need to put a SUM(invitesCount) AS invites into the provider's result. How do I retrieve it? I guess I cannot just use $dataProvider->invites?
You need to specify the following invites relation in your model's relations():
'invites' => array(self::BELONGS_TO, 'CampaignFund', 'campaign_id', 'select' => 'SUM(invitesCount)'),
and use this relation in your criteria.
Several other options:
Use a CStatRelation:
'invitesCount' => array(self::STAT, 'Invites', 'foreign_key_field'),
Adding a public property can also work. However, the field will only be set if you alter the default find query to select this extra column. This can be done by overriding defaultScope() or by creating a named scope and applying it whenever invitesCount is required (see the sketch after this list).
Another option would be to create a database view from the required query and create a new Model from that database view.
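As a rough sketch of the public-property approach mentioned above (the model, table, and column names — Campaign, campaign_fund, invitesCount, campaign_id — are assumptions based on the snippets in this thread; adjust them to your schema):

class Campaign extends CActiveRecord
{
    // not a table column; populated from the aggregate selected below
    public $invites;

    public function scopes()
    {
        return array(
            'withInvites' => array(
                'select' => array(
                    '*',
                    // correlated subquery so the grid query stays a single statement
                    '(SELECT SUM(cf.invitesCount) FROM campaign_fund cf WHERE cf.campaign_id = t.id) AS invites',
                ),
            ),
        );
    }
}

// usage: each model returned by the provider now has $model->invites set,
// so a CGridView column named 'invites' can display it
$dataProvider = new CActiveDataProvider(Campaign::model()->withInvites());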