JSON format in WordPress database

I need to import JSON data into a WP database. I found the correct table of the database, but the example data present in the table is not in normal JSON format.
I need to import:
{"nome": "Pippo","cognome": "Paperino"}
but the example data in the table is:
a:2:{s:4:"nome";s:5:"Pippo";s:7:"cognome";s:8:"Paperino";}
How can I convert my JSON to this "WP JSON" format?

The data is serialized; that's why it looks weird. You can use maybe_unserialize() in WordPress; this function will unserialize the data if it was serialized.
https://developer.wordpress.org/reference/functions/maybe_unserialize/
Some functions serialize data before saving it in WordPress, and some will also unserialize it when pulling from the DB. So depending on how you save the data and how you later extract it, you might end up with a serialized string.
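For the conversion the question asks about, note that the target format is PHP's serialize() output, not a JSON variant; in PHP itself, serialize(json_decode($json, true)) produces it. Below is a minimal sketch of the same mapping written in Scala, for consistency with the other code on this page; it assumes a flat string-to-string object and is an illustration, not WordPress code:

def phpSerialize(m: Map[String, String]): String = {
  // PHP's serialize() records string lengths in bytes, hence getBytes.
  def str(v: String) = "s:" + v.getBytes("UTF-8").length + ":\"" + v + "\";"
  m.map { case (k, v) => str(k) + str(v) }.mkString("a:" + m.size + ":{", "", "}")
}

// phpSerialize(Map("nome" -> "Pippo", "cognome" -> "Paperino"))
// => a:2:{s:4:"nome";s:5:"Pippo";s:7:"cognome";s:8:"Paperino";}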

Related

MongoDB newbie - Storing Sql Server Table Data as Json Data in a Collection

Does this use of MongoDB make sense?
I'm very new to MongoDB (2 days) and I need advice on this design.
I want to export/convert SQL Server table data (< 1000 rows each) to MongoDB collections. I want to convert the SQL data to JSON strings and then save each one as a single string "column/object", where the collection name = SQL Server table name.
Given this design, I would retrieve all the JSON data at once using the collection name and then return the JSON string to my ASP.NET MVC form, which can render the data.
On the client, changes can be made, and then I would save all the JSON data back to MongoDB.
This design is almost like using MongoDB as a text/flat file for each "table".
Does this make sense?
Any links on doing this type of thing?
Thanks, LA Guy
EDIT: this shows how to convert a SQL Server table to BSON documents:
Convert SQL table to mongoDB document
I will review it and see if it makes sense to do this.
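The linked answer stores each row as its own BSON document rather than one big string per table. A minimal sketch of that row-per-document approach, assuming a JDBC connection to SQL Server and the MongoDB Java driver; the connection strings and table name are placeholders:

import java.sql.DriverManager
import com.mongodb.client.MongoClients
import org.bson.Document

val sqlConn = DriverManager.getConnection(
  "jdbc:sqlserver://localhost;databaseName=mydb", "user", "pass")
val mongo = MongoClients.create("mongodb://localhost:27017")
// Collection name = SQL Server table name, as in the design above.
val collection = mongo.getDatabase("mydb").getCollection("MyTable")

val rs = sqlConn.createStatement().executeQuery("SELECT * FROM MyTable")
val meta = rs.getMetaData
while (rs.next()) {
  // One document per row; column names become field names.
  val doc = new Document()
  for (i <- 1 to meta.getColumnCount)
    doc.append(meta.getColumnName(i), rs.getObject(i))
  collection.insertOne(doc)
}

If you do want the single-string design from the question, you could instead serialize the whole result set to one JSON string and call collection.insertOne(new Document("json", jsonString)), at the cost of not being able to query individual fields inside MongoDB.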

How to parse Nested Json messages from Kafka topic to hive

I'm pretty new to Spark Streaming and Scala. I have JSON data and some other random log data coming in from a Kafka topic. I was able to filter out just the JSON data like this:
val messages = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](ssc, kafkaParams, topicsSet).map(_._2).filter(x => x.matches("^[{].*"))
My JSON data looks like this:
{"time":"125573","randomcol":"abchdre","address":{"city":"somecity","zip":"123456"}}
I'm trying to parse the JSON data and put it into a Hive table.
Can someone please point me in the right direction?
Thanks
There are multiple ways to do this.
You could create an external Hive table with the required columns, pointing at this data location. When you create the table, you could use the default JSON SerDe and then use the get_json_object Hive function to load this raw data into a final table. Refer to this for the function details.
OR
You could try the Avro SerDe and specify an Avro schema matching your JSON message when creating the Hive table. Refer to this for an Avro SerDe example.
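For the first approach, once each raw line sits in a single string column, fields (including nested ones) can be pulled out like this; the table and column names are placeholders:
select get_json_object(raw_json, '$.time') as time,
       get_json_object(raw_json, '$.address.city') as city,
       get_json_object(raw_json, '$.address.zip') as zip
from raw_json_events;
Since the messages already arrive in a Spark Streaming DStream, a further option not covered in this answer is to parse the JSON in Spark itself and append it to a Hive table. A minimal sketch against the Spark 1.x APIs the question uses; json_events is a hypothetical table:

import org.apache.spark.sql.hive.HiveContext

val hiveContext = new HiveContext(ssc.sparkContext)

messages.foreachRDD { rdd =>
  if (!rdd.isEmpty()) {
    // Schema is inferred from the JSON; nested fields such as address.city
    // become nested columns and can be flattened with
    // df.select("time", "randomcol", "address.city", "address.zip").
    val df = hiveContext.read.json(rdd)
    df.write.mode("append").saveAsTable("json_events")
  }
}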
Hope it helps

hadoop - Validate json data loaded into hive warehouse

I have JSON files, approximately 500 TB in volume. I have loaded the complete set into the Hive data warehouse.
How would I validate or test the data that was loaded into the Hive warehouse? What should my testing strategy be?
The client wants us to validate the JSON data: is the data loaded into Hive correct or not? Is anything missing? If so, which field is it?
Please help.
How is your data being stored in Hive tables?
One option is to create a Hive UDF that receives the JSON string, validates the data, and returns another string with the error message, or an empty string if the JSON string is well formed.
Here is a Hive UDF tutorial: http://blog.matthewrathbone.com/2013/08/10/guide-to-writing-hive-udfs.html
With the Hive UDF in place you can execute queries like:
select strjson, validateJson(strjson) from jsonTable where validateJson(strjson) != "";
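A minimal sketch of such a UDF, written in Scala against the classic Hive UDF API (the tutorial above uses Java); Jackson is assumed to be on the classpath, and the class and package names are placeholders:

import org.apache.hadoop.hive.ql.exec.UDF
import com.fasterxml.jackson.databind.ObjectMapper

// Returns "" when the string parses as JSON, otherwise the parser's
// error message, matching the where clause in the query above.
class ValidateJson extends UDF {
  private val mapper = new ObjectMapper()
  def evaluate(json: String): String =
    if (json == null) "null input"
    else try { mapper.readTree(json); "" }
    catch { case e: Exception => e.getMessage }
}

After packaging it into a jar, register it with ADD JAR and CREATE TEMPORARY FUNCTION validateJson AS 'your.package.ValidateJson'. Bear in mind that scanning 500 TB with such a query is expensive, so you may want to run it per partition or on a sample first.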

Apache Drill Query PostgreSQL Json

I am trying to query a jsonb field from PostgreSQL in Drill and read it as if it were coming from a JSON storage type, but I am running into trouble. I can convert from text to JSON but cannot seem to query the JSON object. At least I think I can convert to JSON. My goal is to avoid reading through millions of uneven JSON objects from PostgreSQL, and to perform joins and other operations with text files such as CSV and XML files. Is there a way to query the text field as if it were coming from a JSON storage type without writing large files to disk?
The goal is to generate results implicitly, which neither PostgreSQL nor Pentaho does, and to integrate these data sets with others of any format.
Attempt:
SELECT * FROM (SELECT convert_to(json_field,'JSON') as data FROM postgres.mytable) as q1
Sample Result:
[B@7106dd
Attempt to query an existing field that should be in any JSON object:
SELECT data[field] FROM (SELECT convert_to(json_field,'JSON') as data FROM postgres.mytable) as q1
Result:
null
Attempting to do anything with the jsonb type directly results in a null pointer error.
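One direction worth checking (not from the original thread): in Drill, CONVERT_TO encodes a value to VARBINARY, which is why the first attempt prints a Java byte-array reference like [B@7106dd; CONVERT_FROM is what reads JSON text into a Drill map. A sketch, where city is a hypothetical field name:
SELECT q1.data.city FROM (SELECT CONVERT_FROM(json_field, 'JSON') AS data FROM postgres.mytable) q1;
The jsonb column may also need to be cast to text on the PostgreSQL side first, since Drill's JDBC storage plugin does not appear to understand the jsonb type, which would explain the null pointer error.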

Converting Hive Map to Redshift JSON

Can anyone recommend a code snippet, script, or tool for converting a Hive map field to a Redshift JSON field?
I have a Hive table that has two map fields, and I need to move the data to Redshift. I can easily move the data as strings but then lose some functionality. I would prefer to have the maps ported to JSON to maintain the key/value pairs.
Thanks.
You might want to try the Brickhouse to_json UDF:
http://brickhouseconfessions.wordpress.com/2014/02/07/hive-and-json-made-simple/
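A sketch of how that UDF is typically wired up in Hive; the jar path is a placeholder, and the class name is taken from the Brickhouse project, so verify it against your build:
ADD JAR /path/to/brickhouse.jar;
CREATE TEMPORARY FUNCTION to_json AS 'brickhouse.udf.json.ToJsonUDF';
SELECT to_json(map_col_1), to_json(map_col_2) FROM my_hive_table;
The resulting JSON strings can then be loaded into a Redshift VARCHAR column and queried there with functions such as JSON_EXTRACT_PATH_TEXT.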