Is there a method for Zotonic CMS to replace some defined text parts with links?

Is there a method in Zotonic CMS, like the one in Subtext CMS (an ASP.NET CMS, by the way) called Term Expansion (or Keyword Expansion, I don't remember exactly), to replace some defined text parts with links? I mean: the word animal should be replaced with a link to animal.
Please help me out; I will be grateful.

You could use a custom filter to do this. Here's a simple example I use for mailing lists which replaces #NAME with the name of the recipient:
-module(filter_inject_firstname).
-export([inject_firstname/3]).

-include("zotonic.hrl").

inject_firstname(Body, undefined, _Context) ->
    Body;
inject_firstname(Body, Recipient, _Context) ->
    Val = case proplists:get_value(props, Recipient, []) of
              Props when is_list(Props) ->
                  [{email, proplists:get_value(email, Recipient)},
                   {name_first, proplists:get_value(name_first, Props)},
                   {name_surname, proplists:get_value(name_surname, Props)},
                   {name_surname_prefix, proplists:get_value(name_surname_prefix, Props)}];
              _ ->
                  [{email, proplists:get_value(email, Recipient)}]
          end,
    Name = proplists:get_value(name_first, Val),
    iolist_to_binary(re:replace(Body, "#NAME", Name, [global])).
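The same substitution idea behind such a filter, applied to the original question (defined terms expanded into links), can be sketched outside Zotonic. A minimal Python version; the term map and URL are illustrative assumptions, not part of Zotonic:

```python
import re

# Hypothetical term map: each defined term and the URL it should link to
TERMS = {"animal": "https://example.com/animal"}

def expand_terms(text):
    # Build one alternation of all defined terms, matched as whole words
    pattern = re.compile(r"\b(" + "|".join(map(re.escape, TERMS)) + r")\b")
    # Replace each occurrence with an HTML link to its mapped URL
    return pattern.sub(
        lambda m: '<a href="%s">%s</a>' % (TERMS[m.group(1)], m.group(1)), text
    )

print(expand_terms("I saw an animal today"))
```

In Zotonic itself this would live in a custom filter module like the one above, so templates could apply it to body text.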


How do I access a JSON file after it has been decoded?

I can't figure out how to access this output from JSON
The decoded output (a map with two entries), written out as JSON:
{
  "message": "ok",
  "results": [
    {
      "uid": "1",
      "name": "TallyMini",
      "camera": "2",
      "Version": "0.1.0"
    }
  ]
}
lst.add(convertDataToJson[1]['value']['name']);
I have tried various combinations of indexes: 'result', 'list', 'value', 'name'.
I guess you are using http and getting a JSON object from the server. In that case, call json.decode(); Flutter then understands the JSON as a Map (Map<String, dynamic> json).
So, to get the value of the name field, you first need to access the results field: json['results']. It is a list, so index into it and then continue accessing the name field.
You can try it like this:
String name = jsonData['results'][0]['name'];
If my answer is not helpful for you, please show me more of your code. Please let me know whether it works.
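The same nested access, sketched in Python for comparison (the payload mirrors the decoded map shown in the question):

```python
import json

payload = '''{"message": "ok",
              "results": [{"uid": "1", "name": "TallyMini",
                           "camera": "2", "Version": "0.1.0"}]}'''

data = json.loads(payload)          # decode to a dict, like json.decode() in Dart
name = data["results"][0]["name"]   # "results" is a list; index it, then take "name"
print(name)
```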

How can I improve the ease of working with JSON in Haskell?

Haskell has become useful as a web language (thanks Servant!), and yet JSON is still so painful for me so I must be doing something wrong (?)
I hear JSON mentioned as a pain point enough, and the responses I've heard revolve around "use PureScript", "wait for Sub/Row Typing", "use esoterica, like Vinyl", "Aeson + just deal with the explosion of boilerplate data types".
As an (unfair) reference point, I really enjoy the ease of Clojure's JSON "story" (of course, it's a dynamic language, and has its tradeoffs, for which I still prefer Haskell).
Here's an example I've been staring at for an hour.
{
"access_token": "xxx",
"batch": [
{"method":"GET", "name":"oldmsg", "relative_url": "<MESSAGE-ID>?fields=from,message,id"},
{"method":"GET", "name":"imp", "relative_url": "{result=oldmsg:$.from.id}?fields=impersonate_token"},
{"method":"POST", "name":"newmsg", "relative_url": "<GROUP-ID>/feed?access_token={result=imp:$.impersonate_token}", "body":"message={result=oldmsg:$.message}"},
{"method":"POST", "name":"oldcomment", "relative_url": "{result=oldmsg:$.id}/comments", "body":"message=Post moved to https://workplace.facebook.com/{result=newmsg:$.id}"},
{"method":"POST", "name":"newcomment", "relative_url": "{result=newmsg:$.id}/comments", "body":"message=Post moved from https://workplace.facebook.com/{result=oldmsg:$.id}"}
]
}
I need to POST this to FB workplace, which will copy a message to a new group, and comment a link on both, linking to each other.
My first attempt looked something like:
data BatchReq = BatchReq
  { method :: Text
  , name :: Text
  , relativeUrl :: Text
  , body :: Maybe Text
  }

data BatchReqs = BatchReqs
  { accessToken :: Text
  , batch :: [BatchReq]
  }

softMove tok msgId = BatchReqs tok
  [ BatchReq "GET" "oldmsg" (msgId `append` "?fields=from,message,id") Nothing
  ...
  ]
That's painfully rigid, and dealing with Maybes all over is uncomfortable. Is Nothing a JSON null? Or should the field be absent? Then I worried about deriving the Aeson instances, and had to figure out how to convert e.g. relativeUrl to relative_url. Then I added an endpoint, and now I have name clashes. DuplicateRecordFields! But wait, that causes so many problems elsewhere. So update the data type to use e.g. batchReqRelativeUrl, and peel that off when deriving instances using Typeables and Proxys. Then I needed to add endpoints, and/or massage the shape of that rigid data type, for which I added more datapoints, trying not to let the "tyranny of small differences" bloat my data types too much.
At this point, I was largely consuming JSON, so decided a "dynamic" thing would be to use lenses. So, to drill into a JSON field holding a group id I did:
filteredBy :: (Choice p, Applicative f) => (a -> Bool) -> Getting (Data.Monoid.First a) s a -> Optic' p f s s
filteredBy cond lens = filtered (\x -> maybe False cond (x ^? lens))

-- the group to which to move the message
groupId :: AsValue s => s -> AppM Text
groupId json = maybe (error500 "couldn't find group id in json.") pure (json ^? l)
  where l = changeValue . key "message_tags" . values . filteredBy (== "group") (key "type") . key "id" . _String
That's rather heavy to access fields. But I also need to generate payloads, and I'm not skilled enough to see how lenses will be nice for that. Circling around to the motivating batch request, I've come up with a "dynamic" way of writing these payloads. It could be simplified with helper fns, but, I'm not even sure how much nicer it'll get with that.
softMove :: Text -> Text -> Text -> Value
softMove accessToken msgId groupId = object
  [ "access_token" .= accessToken
  , "batch" .=
      [ object [ "method" .= String "GET", "name" .= String "oldmsg"
               , "relative_url" .= String (msgId `append` "?fields=from,message,id") ]
      , object [ "method" .= String "GET", "name" .= String "imp"
               , "relative_url" .= String "{result=oldmsg:$.from.id}?fields=impersonate_token" ]
      , object [ "method" .= String "POST", "name" .= String "newmsg"
               , "relative_url" .= String (groupId `append` "/feed?access_token={result=imp:$.impersonate_token}")
               , "body" .= String "message={result=oldmsg:$.message}" ]
      , object [ "method" .= String "POST", "name" .= String "oldcomment"
               , "relative_url" .= String "{result=oldmsg:$.id}/comments"
               , "body" .= String "message=Post moved to https://workplace.facebook.com/{result=newmsg:$.id}" ]
      , object [ "method" .= String "POST", "name" .= String "newcomment"
               , "relative_url" .= String "{result=newmsg:$.id}/comments"
               , "body" .= String "message=Post moved from https://workplace.facebook.com/{result=oldmsg:$.id}" ]
      ]
  ]
I'm considering having JSON blobs in code or reading them in as files and using Text.Printf to splice in variables...
I mean, I can do it all like this, but would sure appreciate finding an alternative. FB's API is a bit unique in that it can't be represented as a rigid data structure like a lot of REST APIs; they call it their Graph API which is quite a bit more dynamic in use, and treating it like a rigid API has been painful thus far.
(Also, thanks to all the community help getting me this far with Haskell!)
Update: Added some comments on the "dynamic strategy" at the bottom.
In similar situations, I've used single-character helpers to good effect:
json1 :: Value
json1 = o[ "batch" .=
             [ o[ "method" .= s"GET", "name" .= s"oldmsg",
                  "url" .= s"..." ]
             , o[ "method" .= s"POST", "name" .= s"newmsg",
                  "url" .= s"...", "body" .= s"..." ]
             ]
         ]
  where o = object
        s = String
Note that the non-standard syntax (no space between one-character helper and argument) is intentional. It's a signal to me and others reading my code that these are technical "annotations" to satisfy the type checker rather than a more usual kind of function call that's actually doing something.
While this adds a little clutter, the annotations are easy to ignore while reading the code. They're also easy to forget while writing code, but the type checker catches those, so they're easy to fix.
In your particular case, I think some more structured helpers do make sense. Something like:
softMove :: Text -> Text -> Text -> Value
softMove accessToken msgId groupId = object
  [ "access_token" .= accessToken
  , "batch" .=
      [ get "oldmsg" (msgId <> "?fields=from,message,id")
      , get "imp" "{result=oldmsg:$.from.id}?fields=impersonate_token"
      , post "newmsg" (groupId <> "...") "..."
      , post "oldcomment" "{result=oldmsg:$.id}/comments" "..."
      , post "newcomment" "{result=newmsg:$.id}/comments" "..."
      ]
  ]
  where
    get name url = object $ req "GET" name url
    post name url body = object $ req "POST" name url <> ["body" .= s body]
    req method name url = [ "method" .= s method, "name" .= s name
                          , "relative_url" .= s url ]
    s = String
Note that you can tailor these helpers to the specific JSON you're generating in a particular case and define them locally in a where clause. You don't need to commit to some big chunk of ADT and function infrastructure that covers all JSON use-cases in your code, as you might do if the JSON was more unified in structure across the application.
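For contrast, the helper-based construction above is essentially what a dynamic language gives for free (cf. the Clojure remark in the question). A rough Python equivalent of the same helpers, with the placeholder IDs kept as-is:

```python
def req(method, name, url, body=None):
    # One batch entry; "body" is included only when present,
    # sidestepping the "is Nothing a null or an absent field?" question
    d = {"method": method, "name": name, "relative_url": url}
    if body is not None:
        d["body"] = body
    return d

def soft_move(access_token, msg_id, group_id):
    return {
        "access_token": access_token,
        "batch": [
            req("GET", "oldmsg", msg_id + "?fields=from,message,id"),
            req("GET", "imp", "{result=oldmsg:$.from.id}?fields=impersonate_token"),
            req("POST", "newmsg",
                group_id + "/feed?access_token={result=imp:$.impersonate_token}",
                body="message={result=oldmsg:$.message}"),
        ],
    }

payload = soft_move("TOKEN", "<MESSAGE-ID>", "<GROUP-ID>")
```

The Haskell where-clause helpers recover most of this terseness while keeping the payload a well-typed Value.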
Comments on the "Dynamic Strategy"
With respect to whether or not using a "dynamic strategy" is the right approach, it probably depends on more context than can realistically be shared in a Stack Overflow question. But, taking a step back, the Haskell type system is useful to the extent that it helps clearly model the problem domain. At its best, the types feel natural and assist you with writing correct code. When they stop doing this, you need to rethink your types.
The pain you encountered with a more traditional ADT-driven approach to this problem (rigidity of the types, proliferation of Maybes, and the "tyranny of small differences") suggests that these types were a bad model at least for what you were trying to do in this case. In particular, given that your problem was one of generating fairly straightforward JSON directives/commands for an external API, rather than doing lots of data manipulation on structures that also happened to allow JSON serialization/deserialization, modeling the data as Haskell ADTs was probably overkill.
My best guess is that, if you really wanted to properly model the FB Workplace API, you wouldn't want to do it at the JSON level. Instead, you'd do it at a higher level of abstraction with Message, Comment, and Group types, and you'd end up wanting to generate the JSON dynamically anyway, because your types wouldn't directly map to the JSON structures expected by the API.
It might be insightful to compare your problem to generating HTML. Consider first the lucid (blaze-based) or shakespeare templating packages. If you look at how these work, they don't try to build up HTML by generating a DOM with ADTs like data Element = ImgElement ... | BlockquoteElement ... and then serializing them to HTML. Presumably the authors decided that this abstraction wasn't really necessary, because the HTML just needs to be generated, not analyzed. Instead they use functions (lucid) or a quasiquoter (shakespeare) to build up a dynamic data structure representing an HTML document. The chosen structure is rigid enough to ensure certain sorts of validity (e.g., proper matching of opening and closing element tags) but not others (e.g., no one stops you from sticking a <p> child in the middle of your <span> element).
When you use these packages in a larger web app, you model the problem domain at a higher level of abstraction than HTML elements, and you generate the HTML in a largely dynamic fashion because there's not a clear one-to-one mapping between the types in your problem domain model and HTML elements.
On the other hand, there's a type-of-html package that does model individual elements, so it's a type error to try to nest a <tr> inside a <td> and so on. Developing these types probably took a lot of work, and there's a lot of inflexibility "baked in", but the trade-off is a whole other level of type safety. That said, this seems easier to do for HTML than for a particular finicky JSON API.

Replace template smart tags <<tag>> with [tag] in MySQL

I have a table named templateType; it has a column named Template_Text.
The template text contains many smart tags, and I need to replace << with [ and >> with ] using MySQL.
Edit from OP's comments:
It is a template with a large text and multiple smart tags. For example: " I <<Fname>> <<Lname>>, <<UserId>> <<Designation>> of xyz organization, Proud to announce...."
Here I need to replace these << with [ and >> with ], so it will look like
" [Fname] [Lname], [UserId] ...."
Based on your comments, your MySQL version does not support the REGEXP_REPLACE() function, so a generic solution is not feasible.
Assuming that your string does not contain additional << and >> other than those following the <<%>> format, we can use the REPLACE() function.
I have also added a WHERE condition so that we only pick rows matching the given substring criteria.
Update templateType
SET Template_Text = REPLACE(REPLACE(Template_Text, '<<', '['), '>>', ']')
WHERE Template_Text LIKE '%<<%>>%'
In case the problem is further complex, you may get some ideas from this answer: https://stackoverflow.com/a/53286571/2469308
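The nested-REPLACE logic is easy to sanity-check outside MySQL; the same two substitutions applied to the example string, in Python:

```python
# Same transformation as REPLACE(REPLACE(Template_Text, '<<', '['), '>>', ']')
template = "I <<Fname>> <<Lname>>, <<UserId>> <<Designation>> of xyz organization"
result = template.replace("<<", "[").replace(">>", "]")
print(result)  # I [Fname] [Lname], [UserId] [Designation] of xyz organization
```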
A couple of replace calls should work:
SELECT REPLACE(REPLACE(template_text, '<<', '['), '>>', ']')
FROM template_type

How do I match a CSV-style quoted string in nom?

A CSV style quoted string, for the purposes of this question, is a string in which:
The string starts and ends with exactly one ".
Two double quotes inside the string are collapsed to one double quote. "Alo""ha"→Alo"ha.
"" on its own is an empty string.
Error inputs, such as "A""" e", cannot be parsed. It's an A", followed by junk e".
I've tried several things, none of which have worked fully.
The closest I've gotten, thanks to some help from user pinkieval in #nom on the Mozilla IRC:
use std::error as stderror; /* Avoids needing nightly to compile */

named!(csv_style_string<&str, String>, map_res!(
    terminated!(tag!("\""), not!(peek!(char!('"')))),
    csv_string_to_string
));

fn csv_string_to_string(s: &str) -> Result<String, Box<stderror::Error>> {
    Ok(s.to_string().replace("\"\"", "\""))
}
This does not catch the end of the string correctly.
I've also attempted to use the re_match! macro with r#""([^"]|"")*""#, but that always results in an Err::Incomplete(1).
I've determined that the given CSV example for Nom 1.0 doesn't work for a quoted CSV string as I'm describing it, but I do know implementations differ.
Here is one way of doing it:
use nom::types::CompleteStr;
use nom::*;

named!(csv_style_string<CompleteStr, String>,
    delimited!(
        char!('"'),
        map!(
            many0!(
                alt!(
                    // Eat a " delimiter and the " that follows it
                    tag!("\"\"") => { |_| '"' }
                  | // Normal character
                    none_of!("\"")
                )
            ),
            // Make a string from a vector of chars
            |v| v.iter().collect::<String>()
        ),
        char!('"')
    )
);

fn main() {
    println!(r#""Alo\"ha" = {:?}"#, csv_style_string(CompleteStr(r#""Alo""ha""#)));
    println!(r#""" = {:?}"#, csv_style_string(CompleteStr(r#""""#)));
    println!(r#"bad format: {:?}"#, csv_style_string(CompleteStr(r#""A""" e""#)));
}
(I wrote it in full nom, but a solution like yours, based on an external function instead of map!()-ing each character, would work too, and may be more efficient.)
The magic here, that would also solve your regexp issue, is to use CompleteStr. This basically tells nom that nothing will come after that input (otherwise, nom assumes you're doing a streaming parser, so more input may follow).
This is needed because we need to know what to do with a " if it is the last character fed to nom. Depending on the character that comes after it (another ", a normal character, or EOF), we have to take a different decision -- hence the Incomplete result, meaning nom does not have enough input to make the decision. Telling nom that EOF comes next solves this indecision.
Further reading on Incomplete on nom's author's blog: http://unhandledexpression.com/general/2018/05/14/nom-4-0-faster-safer-simpler-parsers.html#dealing-with-incomplete-usage
You may note that this parser does not actually reject the invalid input, but parses the beginning and returns the rest. If you use this parser as a subparser in another parser, the latter would then feed the remainder to the next subparser, which would fail as well (because it would expect a comma), causing the overall parser to fail.
If you don't want that, you could make csv_style_string check that it is followed by peek!(alt!(char!(',') | char!('\n') | eof!())).
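For comparison, the grammar this parser implements — a '"', then zero or more of ('""' or any non-quote character), then a closing '"' — can be sketched with a regex in Python, including the "parse a prefix and return the rest" behaviour noted above:

```python
import re

# '"' then zero or more of ("" | non-quote) then '"'
QUOTED = re.compile(r'"((?:""|[^"])*)"')

def parse_csv_quoted(s):
    # Returns (parsed value, remaining input), or None when no quoted prefix matches
    m = QUOTED.match(s)
    if m is None:
        return None
    # Collapse doubled quotes, like csv_string_to_string does
    return m.group(1).replace('""', '"'), s[m.end():]

print(parse_csv_quoted('"Alo""ha"'))   # ('Alo"ha', '')
print(parse_csv_quoted('""'))          # ('', '')
print(parse_csv_quoted('"A""" e"'))    # ('A"', ' e"') -- junk left over, as with the nom parser
```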

JSON - renaming name/value pairs

Hey all,
I am using Kohana 3 and am attempting to integrate the jQuery fullcalendar plugin. The naming convention used by this plugin is "title" for the event, "start" for the start date, "allDay" for a boolean, and so on.
After querying I generated a json string like
[{"eventdate":"2011-02-05 06:15:35","name":"EBS, Heriot Watt Graduation Ceremony"},{"eventdate":"2011-02-05 06:16:20","name":"Heriot Watt University Edinburgh Business School Graduation Ceremony 2011"}]
Is there a way to do something like
DB::select('start'=>'simpleevent.eventdate', 'title'=>'simpleevent.name')
->from('simpleevent')
->where('YEAR("eventdate")', '=', $todayasarray[0])
Basically after the query I get an array of arrays in PHP which is then used in
json_encode($myArray)
So can I rename the "name" for each name/value pair?
DB::select(array('simpleevent.eventdate', 'start'), array('simpleevent.name', 'title'))
->from('simpleevent')
->where( /*condition*/)
I tried this: after creating the JSON as a string in my action, I used the PHP str_replace() function as follows.
$oldnames = array("name","eventdate");
$newnames = array("title","start");
$v->jsonData = str_replace($oldnames, $newnames, $jsondata);
This is an option only if you cannot make the alias change as shown above by Dusan.
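Note that str_replace() on the encoded string also rewrites any event text that happens to contain "name" or "eventdate". If the SQL alias is not available, renaming the keys on the decoded rows before re-encoding avoids that. A sketch of the idea in Python (the row data is the example from the question):

```python
import json

rows = [
    {"eventdate": "2011-02-05 06:15:35",
     "name": "EBS, Heriot Watt Graduation Ceremony"},
]

# Rename each row's keys to what fullcalendar expects, then re-encode
renamed = [{"start": r["eventdate"], "title": r["name"]} for r in rows]
print(json.dumps(renamed))
```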