AS3. Store Array in Local SQL Database

I am developing an application in Adobe Flash Builder 4.6 (an AIR app) in which a user fills in forms and stores data like name, phone, and address (along with some report data) in a local SQL database.
My code is all okay, but I have just one problem. Along with that data there is an array which I am trying to store in the local SQL database, and all my attempts have failed.
I store the array in a TEXT column in the database, and when I access the stored array it gives an error:
Cannot convert "[object Object]" to Array.
I don't know what the problem is. Please help, thanks.
I use an array with nested arrays, like this.
My array is
var ar:Array=["report_name", {label: report_label, number: 5}, [{label: Label1, data: Data1}, {label: Label2, data: Data2}, {label: Label3, data: Data3}]];
Now I store this array in the database like this:
sqlstatement.text ="INSERT INTO table (id, data) VALUES ('', '"+ar+"')";
In this statement, the data column has the TEXT datatype.
Now when I access it like this:
var array:Array = [];
array = Array(sqlstatement.getResult().data[0].data);
It gives a coercion error.

You get that result because an SQL query is basically a string, so your statement is equal to
"INSERT INTO table (id, data) VALUES ('', '" + ar.toString() + "')"
where you lose all the generic objects {...} and the data structure (each object is rendered as the literal text "[object Object]").
MESepehr is absolutely correct: what you need is the JSON data format, which will work fine for you as long as you keep only Numbers, ints, Strings, Booleans, Arrays, Objects and nulls in there.
Then, it is as easy as the following.
Store:
sqlstatement.text = "INSERT INTO table (id, data) VALUES ('', '" + JSON.stringify(ar) + "')";
Restore:
var array:Array = JSON.parse(sqlstatement.getResult().data[0].data) as Array;
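As a side note (not part of the answer above), if the stored text might contain single quotes, a safer variant is to bind the JSON string as a statement parameter instead of concatenating it into the SQL text. A minimal sketch, assuming an open SQLConnection named conn and the same table and column names as in the question:
// Hypothetical variation: bind the JSON text as a parameter (AIR SQLStatement API).
var stmt:SQLStatement = new SQLStatement();
stmt.sqlConnection = conn; // assumes an open SQLConnection named conn
stmt.text = "INSERT INTO table (id, data) VALUES ('', :data)";
stmt.parameters[":data"] = JSON.stringify(ar);
stmt.execute();
// Restoring is unchanged: JSON.parse(result.data[0].data) as Array.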

Related

Can I submit a JSON to an Oracle Insert like I do with MySQL using Node?

I have the following code that I used for inserting into MySQL (MariaDB)....
import mysql from "mysql";
const INSERT_QUERY = "INSERT INTO CALL_DATE SET ? ON DUPLICATE KEY UPDATE MADE_DATE = VALUES(MADE_DATE)";
insertCallDate(callId, server, date){
    const callDate = {
        ...
    };
    return connection.query(
        INSERT_QUERY,
        callDate
    );
}
When I move to oracleDB I would like to do something like that again but the closest I can find is something like...
const INSERT_QUERY = "INSERT INTO CALL_DATE SET (ID, ...) values (:1, ...)";
Is there something similar to MySQL so I can pass a prestructured JSON object to Oracle? Specifically using the Node JS oracledb library?
There's a short section on JSON in the node-oracledb documentation. To quote an example:
const data = { "userId": 1, "userName": "Chris", "location": "Australia" };
const s = JSON.stringify(data); // change JavaScript value to a JSON string
const result = await connection.execute(
    `INSERT INTO j_purchaseorder (po_document) VALUES (:bv)`,
    [s] // bind the JSON string
);
There are also two runnable examples: selectjson.js and selectjsonblob.js.
Most of the JSON technology in Oracle is not specific to node-oracledb, so the Oracle manual Database JSON Developer’s Guide is a good resource.
You may be interested in SODA, which is also documented for node-oracledb and has an example, soda1.js. It lets you store 'documents' in the DB. These documents can be anything, but by default JSON documents are used.
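For illustration only (a rough sketch, not a quote from the documentation), SODA usage looks roughly like this, assuming an oracledb module, an open connection, node-oracledb 3.0+, and a database release that supports SODA; the collection name and content here are made up:
// Minimal SODA sketch (assumes: const oracledb = require('oracledb'); and an open `connection`).
oracledb.autoCommit = true; // or commit explicitly after the writes
const soda = connection.getSodaDatabase();
const collection = await soda.createCollection("purchaseorders"); // opens it if it already exists
await collection.insertOne({ userId: 1, userName: "Chris", location: "Australia" });
// Read the documents back as JavaScript objects:
const docs = await collection.find().getDocuments();
docs.forEach(doc => console.log(doc.getContent()));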

azure ADF - Get field list of .csv file from lookup activity

Context: Azure ADF. Brief process description:
Get a list of the fields defined in the first row of a .csv (blobbed) file. This is the first step: detect the fields.
The second step would be a kind of compare with the actual columns of an SQL table.
The third is a stored procedure execution to do the ALTER TABLE task, finishing with a (customized) table containing all the fields needed to successfully load the .csv file into the SQL table.
To begin my ADF pipeline, I set up a Lookup activity that queries the first line of my blobbed file, with the "First row only" flag ON. As a second pipeline activity, an "Append Variable" task, I would like to get all the .csv fields (first row) retrieved from the Lookup activity, as a list.
Here is where it becomes a nightmare.
As far as I know, with dynamic content I can get an object with all the values (in a format like {"field1_name":"field1_value_1st_row", "field2_name":"field2_value_1st_row", etc.})
with something like #activity('Lookup1').output.firstrow,
or any single element with #activity('Lookup1').output.firstrow.<element_name>,
but I can't figure out how to get a list of all the field names (keys?) of that object.
I would appreciate any advice, many thanks!
I would keep the Lookup activity part, since it seems that you are familiar with it.
You could use an Azure Function (HTTP trigger) to get the key list of the firstrow JSON object. For example, suppose your JSON object is the one you mentioned in your question:
{"field1_name":"field1_value_1st_row", "field2_name":"field2_value_1st_row"}
Azure Function code:
module.exports = async function (context, req) {
    context.log('JavaScript HTTP trigger function processed a request.');
    var array = [];
    for (var key in req.body) {
        array.push(key);
    }
    context.res = {
        body: {"keyValue": array}
    };
};
Test output (screenshot not reproduced): with that input, the function responds with a body like {"keyValue": ["field1_name", "field2_name"]}.
Then use Azure Function Activity to get the output:
#activity('<AzureFunctionActivityName>').keyValue
Use Foreach Activity to loop the keyValue array:
#item()
Still based on the above sample input data, please refer to my sample code:
dct = {"field1_name": "field1_value_1st_row", "field2_name": "field2_value_1st_row"}
list = []
for key in dct.keys():
    list.append(key)
print(list)
dicOutput = {"keys": list}
print(dicOutput)
Have you considered doing this in ADF data flow? You would map the incoming fields to a SQL dataset without a target schema. Define a new table name in the dataset definition and then map the incoming fields from your CSV to a new target table schema definition. ADF will write the rows to a new table using that file's schema.

How to access JSON schema info in the Spark Dataset API when using plain map/reduce functions?

Given there is a dataset of messages, defined by following code:
case class Message(id: Int, value: String)
var messages = Seq(
  (0, """{"action":"update","timestamp":"2017-10-05T23:01:19Z"}"""),
  (1, """{"action":"update","timestamp":"2017-10-05T23:01:19Z"}""")
).toDF("id", "value").as[Message]
var schema = new StructType().add("action", StringType).add("timestamp", TimestampType)
var res = messages.select(
  from_json(col("value").cast("string"), schema)
)
+------------------------------------+
|jsontostructs(CAST(value AS STRING))|
+------------------------------------+
| [update,2017-10-0...|
| [update,2017-10-0...|
What is the best way to access the schema information in a plain map function? The function itself returns a Row which has lost all the type information. In order to reach the values one has to specify the type again, e.g.
res.head().getStruct(0).getValuesMap[TimestampType](Seq("timestamp"))
=> Map[String,org.apache.spark.sql.types.TimestampType] = Map(timestamp -> 2017-10-06 01:01:19.0)
or
res.head().getStruct(0).getString(0)
=> res20: String = update
Is there some better way to access the raw json data without spark sql aggregation functions?
As a rule of thumb:
To use the collection API (map, flatMap, mapPartitions, groupByKey, etc.), use the strongly typed API: define a record type (a case class works best) which reflects the schema and use Encoders to convert things back and forth:
case class Value(action: String, timestamp: java.sql.Timestamp)
case class ParsedMessage(id: Int, value: Option[Value])
messages.select(
  $"id", from_json(col("value").cast("string"), schema).alias("value")
).as[ParsedMessage].map(???)
With Dataset[Row], stay with the high-level SQL / DataFrame API (select, where, agg, groupBy).
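For illustration, here is a minimal sketch of that untyped route, reusing the messages and schema values defined above; the alias "value" and the output column names are choices made for this example:
// Parse the JSON once and keep working with columns instead of mapping over Rows.
val parsed = messages.select(
  $"id",
  from_json(col("value").cast("string"), schema).alias("value")
)

// Nested struct fields are addressed by name, no getStruct/getString needed:
parsed
  .select($"id", $"value.action".as("action"), $"value.timestamp".as("timestamp"))
  .where($"action" === "update")
  .groupBy($"action")
  .count()
  .show()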

how to insert values into mysql table from another bigquery response

My Python program connects to BigQuery and fetches data which I want to insert into a MySQL table. It successfully fetches the results from BigQuery, and it also successfully connects to the MySQL DB, but it does not insert the data. I see it complaining about row[1].
What's the right way to insert the values from BigQuery response into MySQL table columns?
query_data = {mybigquery}
query_response = query_request.query(projectId='myprojectid',body=query_data).execute()
for row in query_response['rows']:
    cursor.execute ("INSERT INTO database.table VALUES ('row[0]','row[1]','row[2]','row[3]','row[4]');")
Also, I tried to use
cursor.execute ("INSERT INTO database.table VALUES (%s,%s,%s,%s,%s);")
or
cursor.execute ("INSERT INTO database.table VALUES (row[0],row[1],row[2],row[3],row[4]);")
But all of them fail while inserting the values into the MySQL table.
String literals
Regarding the original question, the issue lies with quoting your variables. This causes the execute function to treat them as string literals rather than getting the values from them.
As suggested by @Herman, to properly execute the SQL statement with the values I think you intend, you would need something more like this:
query_data = {mybigquery}
statement = 'INSERT INTO database.table VALUE (%s, %s, %s);'
response = query_request.query(projectId='myprojectid', body=query_data).execute()
rows = response['rows']
for row in rows:
    values = (row[0], row[1], row[2])
    cursor.execute(statement, values)
BigQuery query JSON
Keep in mind, however, that the above will not work out of the box, as row in the code above does not conform to the response received from the BigQuery Jobs: query API.
In this API, rows is an array of row objects. Each row object has a property f which is an array of fields. Lastly, each field has a property v which is the value of this field.
To get the value of second field in a row, you should use row['f'][1]['v']. Since you require a tuple or list for the params argument of the cursor.execute() method, you could get a list of field values using list comprehension as follows:
for row in rows:
    values = [field['v'] for field in row['f']]
Sanitize values before inserting
The TypeError you get once you correctly read the field values may be raised because execute or str cannot convert a value to a string properly. One of the significant differences between BigQuery and MySQL is that a value in BigQuery can be a record with multiple values of its own. To ensure this gets inserted properly, you must sanitize those values yourself prior to inserting them. If the value is a list or dict, it cannot be stored in MySQL without being serialized in some way, for example with the str method.
Example
def sanitize(value):
    if type(value) is list:
        return str(value)
    if type(value) is dict:
        return str(value)
    # this may be required for other types
    return value

data = {mybigquery}
statement = 'INSERT INTO database.table VALUE (%s, %s, %s);'
response = request.query(projectId='projid', body=data).execute()
for row in response['rows']:
    values = [sanitize(field['v']) for field in row['f']]
    cursor.execute(statement, values)
This is very basic sanitation. You should really validate all field values and ensure that they will be properly converted to MySQL types and not simply insert an array of values.
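As an illustration of that point (not part of the original answer), the Jobs: query response also carries a schema describing each field, so one hypothetical refinement is to convert each value based on its declared type before handing it to MySQL. This sketch reuses response, statement, and cursor from the example above; the type names checked here are assumptions about a typical response:
# Hypothetical sketch: convert BigQuery's stringified field values using the
# response schema before inserting them into MySQL.
def convert(value, field_type):
    if value is None:
        return None
    if field_type == 'INTEGER':
        return int(value)
    if field_type == 'FLOAT':
        return float(value)
    if field_type == 'BOOLEAN':
        return value == 'true'
    if field_type == 'RECORD':
        return str(value)  # serialize nested records before storing as text
    return value  # STRING, TIMESTAMP, etc. are left as-is here

field_types = [f['type'] for f in response['schema']['fields']]
for row in response['rows']:
    values = [convert(field['v'], t) for field, t in zip(row['f'], field_types)]
    cursor.execute(statement, values)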
What is the error message? The statement should be something like:
cursor.execute(
    "INSERT INTO database.table VALUES (%s, %s, %s, %s, %s)", row[0:5])

How do I insert a mysql spatial point with a yii model?

I have a model type that was generated from a mysql table that has address data and also a spatial POINT field named "coordinates". When a model is created or updated I want to geocode the address and store the latitude and longitude coordinates in the POINT field.
My understanding is the way to do this is to geocode the address in the model's beforeSave method. I have done this and have the coordinates in an associative array. Now my question is how can I insert this data into my coordinates field? This is what I'm trying:
public function beforeSave()
{
    $singleLineAddress = $this->getSingleLineAddress();
    $coords = Geocoder::getCoordinates($singleLineAddress);

    // WORKS: using the following line works to insert POINT(0 0)
    //$this->coordinates = new CDbExpression("GeomFromText('POINT(0 0)')");

    // DOESN'T WORK: using the following line gives an error
    $this->coordinates = new CDbExpression("GeomFromText('POINT(:lat :lng)')",
        array(':lat' => $coords['lat'], ':lng' => $coords['lng'] ));

    return parent::beforeSave();
}
When I do this I get the following error:
CDbCommand failed to execute the SQL statement: SQLSTATE[HY093]:
Invalid parameter number: number of bound variables does not match
number of tokens. The SQL statement executed was: INSERT INTO place
(city, state, name, street, postal_code, phone, created,
coordinates) VALUES (:yp0, :yp1, :yp2, :yp3, :yp4, :yp5,
UTC_TIMESTAMP(), GeomFromText('POINT(:lat :lng)'))
A small edit to @dlnGd0nG's answer if you are using Yii 2:
$this->coordinates = new yii\db\Expression("GeomFromText(:point)",
    array(':point'=>'POINT('.$coords['lat'].' '.$coords['lng'].')'));
Try this instead:
$this->coordinates = new CDbExpression("GeomFromText(:point)",
    array(':point'=>'POINT('.$coords['lat'].' '.$coords['lng'].')'));