Azure Pipeline - Importing a Task Group via JSON always creates a new one instead of updating the existing one

I have created a Task Group in Azure Pipeline via the GUI.
Then, I exported the JSON.
Next, I changed the inputs in the JSON.
Afterward, I wanted to import this new JSON to change the existing Task Group.
Result:
It didn't update the existing Task Group; instead, it created a new Task Group with the same name but with the postfix " - Copy".
Analyzed:
When I downloaded the newly imported Task Group, I saw that the value of Id had changed.
However, I could not find a way to update the existing Task Group. What do I have to change in my JSON in order to alter the existing one instead of creating a new one?
Thanks!

Try using the Task Groups Update REST API.
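A minimal sketch of what that can look like in Python with requests, assuming a PAT with the right scope; the organization, project, task group id, file name and api-version below are placeholders you would adjust:

import json
import requests

organization = "my-org"                                   # placeholder
project = "my-project"                                    # placeholder
task_group_id = "00000000-0000-0000-0000-000000000000"    # id of the existing task group
pat = "your-personal-access-token"                        # placeholder

url = (f"https://dev.azure.com/{organization}/{project}"
       f"/_apis/distributedtask/taskgroups/{task_group_id}?api-version=6.0-preview.1")

# The body is your edited export; keep the original "id" (and usually the current
# "revision") so the service updates the existing task group instead of creating a copy.
with open("taskgroup.json") as f:
    body = json.load(f)

resp = requests.put(url, json=body, auth=("", pat))
resp.raise_for_status()
print(resp.json()["id"])  # should still be the original id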

Dynamically refer to Json value in Data Factory copy

I have an ADF CopyRestToADLS activity which correctly saves a complex JSON object to Data Lake storage. But I additionally need to pass one of the JSON values (myextravalue) to a stored procedure. I tried referencing it in the stored procedure parameter as @{activity('CopyRESTtoADLS').output.myextravalue} but I am getting the error
The actions CopyRestToADLS refernced by 'inputs' in the action ExectuteStored procedure1 are not defined in the template
{
"items": [1000 items],
"count": 1000,
"myextravalue": 15983444
}
I would like to reference this value dynamically because the CopyRestToADLS source REST dataset dynamically calls different REST endpoints, so the structure of the JSON object is different each time. But myextravalue is always present in each JSON response.
How is it possible to reference myextravalue and use it as a parameter?
You could create another Lookup activity on the REST data source to get the JSON value, then pass it to the Stored Procedure activity.
Yes, it will create a new REST request, but it seems to be an easy way to achieve your purpose: the Lookup activity just reads the content of the source and won't save it.
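For reference, the Stored Procedure activity's parameter could then use an expression along the lines of @activity('Lookup1').output.firstRow.myextravalue (the activity name Lookup1 and the firstRow property assume a "first row only" Lookup; adjust to your pipeline).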
Another solution may be to get the value from the Copy activity's output file after the Copy activity has completed.
I'm glad you solved it this way:
"I created a Data Flow to read from the folder where Copy Activity saves dynamically named output json filenames. After importing schema from sample file, I selected the myextravalue as the only mapping in the Sink Mapping section."

Import json files from s3 into postgres RDS

I want to make a script (maybe a Lambda?) so that every new JSON file uploaded to this S3 bucket is also loaded directly into a Postgres table in an RDS PostgreSQL instance.
The JSON is nested and contains lists of JSON objects inside, so it is not that simple to just parse it in Postgres. In addition, it has a changing number of columns, so a new file may add a new column to the table. (If a file has a column that hasn't appeared yet, I want to add it and put NULL values in that column for the existing rows.)
How can I do it efficiently?
As suggested, you can write a Lambda function that listens to S3 events and is triggered when a new file is uploaded.
https://n2ws.com/blog/aws-automation/lambda-function-s3-event-triggers
Once the event is triggered, you need to read and parse the file.
Then connect to the database and run SQL queries generated from the parsed object.
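A minimal sketch of such a handler in Python; it assumes psycopg2 is packaged with the Lambda, and the table name (my_table), environment variables and the flattening rule are placeholders you would adapt:

import json
import os

import boto3
import psycopg2
from psycopg2 import sql

s3 = boto3.client("s3")

def lambda_handler(event, context):
    # Locate the newly uploaded object from the S3 event.
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    key = record["s3"]["object"]["key"]

    obj = s3.get_object(Bucket=bucket, Key=key)
    data = json.loads(obj["Body"].read())

    # Flatten: keep scalars as columns, store nested lists/objects as JSON text.
    row = {k: (v if not isinstance(v, (dict, list)) else json.dumps(v))
           for k, v in data.items()}

    conn = psycopg2.connect(
        host=os.environ["DB_HOST"], dbname=os.environ["DB_NAME"],
        user=os.environ["DB_USER"], password=os.environ["DB_PASS"])
    with conn, conn.cursor() as cur:
        # Add any column that does not exist yet (text for simplicity);
        # existing rows get NULL for it.
        for col in row:
            cur.execute(sql.SQL(
                "ALTER TABLE my_table ADD COLUMN IF NOT EXISTS {} text"
            ).format(sql.Identifier(col)))
        cols = list(row)
        cur.execute(
            sql.SQL("INSERT INTO my_table ({}) VALUES ({})").format(
                sql.SQL(", ").join(map(sql.Identifier, cols)),
                sql.SQL(", ").join(sql.Placeholder() * len(cols))),
            [row[c] for c in cols])
    conn.close()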

How to use ReplaceRows from .NET Google.Apis.Fusiontables.v2 (stream csv)?

Goal: to update a Fusion Table by replacing old rows with new ones from a CSV file without headers, using ReplaceRows().
I am using the Google.Apis.Fusiontables.v2 library.
I have read and reread the documentation, but still can't get my code working.
Authentication is working and I am able to perform simple INSERTs without issue:
string sql = "INSERT INTO 11t9VLt3vzb46oGQMaS2LTSPWUyBYNcfi1shkmvag (rpu_id, NO_BAIL, 'Usage (description)', 'Use (description)', 'Sup. louable m2', 'Sup. Utilisable m2', 'SumTotal Lou', 'Percent Lou', 'SumTotal Util', 'Percent Util') VALUES (9999,1111,'Test','Test En',1,2,3,4,5,6)";
Sqlresponse sqlRspnse = service.Query.Sql(sql).Execute();
I have tried ReplaceRowsMediaUpload directly from the TableResource class without luck.
Calling the upload function from the service object doesn't error out, but I'm not sure what to do next that would actually replace the rows in the Fusion Table (service is a FusiontablesService):
StreamReader str = new StreamReader(Server.MapPath("~") + @"\sample2.csv");
service.Table.ReplaceRows("1X7JMLFy75uq20UnU6cLrGTTDfp6lLuD1Fc3vYYjQ", str.BaseStream, "text/csv").Upload();
I've tried:
service.Table.ReplaceRows("1X7JMLFy75uq20UnU6cLrGTTDfp6lLuD1Fc3vYYjQ").Execute()
following the upload, but this just puts the Fusion table in "stuck" mode.
Can someone please provide the lines required to make ReplaceRows work? (Explanations would be appreciated, but aren't necessary!).
You should change "text/csv" to "application/octet-stream". (See the accepted MIME type here: https://developers.google.com/fusiontables/docs/v2/reference/table/replaceRows)
StreamReader str = new StreamReader(Server.MapPath("~") + @"\sample2.csv");
service.Table.ReplaceRows("1X7JMLFy75uq20UnU6cLrGTTDfp6lLuD1Fc3vYYjQ", str.BaseStream, "application/octet-stream").Upload();
The call to Upload should be enough.
Also, try creating a new table to test it out, to be sure it is set up correctly.
You can use a REST API call to replace the rows in your Google Fusion Table directly instead of writing methods to do that. Here is an example:
POST https://www.googleapis.com/upload/fusiontables/v2/tables/tableId/replace
Please refer to this document for more details; it has a testing environment tool too.
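If you go that route, here is a rough sketch of the raw call in Python with requests; the table id, OAuth token and file path are placeholders, and the uploadType parameter is assumed to follow the usual Google media-upload convention:

import requests

table_id = "1X7JMLFy75uq20UnU6cLrGTTDfp6lLuD1Fc3vYYjQ"   # your table id (placeholder)
access_token = "ya29...."                                # a valid OAuth 2.0 access token

url = f"https://www.googleapis.com/upload/fusiontables/v2/tables/{table_id}/replace"
with open("sample2.csv", "rb") as f:
    # Stream the headerless CSV as the request body.
    resp = requests.post(
        url,
        params={"uploadType": "media"},
        headers={"Authorization": f"Bearer {access_token}",
                 "Content-Type": "application/octet-stream"},
        data=f)
resp.raise_for_status()
print(resp.json())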

How to avoid triggering unique validator on model edition?

I'm using Flask, SQLAlchemy and WTForms. I have a number of properties in my model object which are marked as unique and nullable=False. This works fine when creating a new row in the database, but when I try to edit an existing object, the WTForms validator fails with
{'aproperty': [u'Already exists.']}
How can I make this validation pass without having to change my data model?
Update
Following the documentation was of no use to me.
You need to associate the existing record with the form. Otherwise the validator has no way of knowing that you're updating an existing record instead of creating a new one. Something like the following should do the trick:
current_obj = ...
form = MyForm(request.form, obj=current_obj)
if form.validate_on_submit():
    form.populate_obj(current_obj)

How to populate database fields when the model changes in Django

I have a django app that is evolving. The model often changes and I use Django South to apply schema migrations.
Sometimes my changes involve populating newly added fields based on SQL logic.
For example, I added a new boolean flag for currently paying users. I have added the field and applied the migration, but now I want to populate the field based on data from another table to show who is paying.
I know I can do this with a simple SQL statement, but my environment is automated and uses CI. I want to push changes and have the flag populated automatically.
How can I accomplish this? With South? With Django?
There is a thing called a data migration; this is a perfect use case for it:
Data migrations are used to change the data stored in your database to
match a new schema, or feature.
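You can scaffold one with python manage.py datamigration yourapp populate_paying_flag (the app and migration names here are placeholders), which creates a skeleton along these lines: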
from south.v2 import DataMigration
from django.conf import settings

class Migration(DataMigration):

    def forwards(self, orm):
        # update your user's boolean flag here
        pass
See an example of a data migration here.
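Inside forwards() you work with the frozen orm rather than importing your models directly, e.g. something like orm['myapp.User'].objects.filter(subscription__active=True).update(paying=True), where the app label and the filter lookup are illustrative and would match your own schema.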
Or, alternatively, you can open your schema migration .py file and populate your field in the forwards() method, like this:
from south.db import db
from south.v2 import SchemaMigration

class Migration(SchemaMigration):

    def forwards(self, orm):
        # Adding field 'User.paying'
        db.add_column(u'user', 'paying',
                      self.gf('django.db.models.fields.BooleanField')(default=True),
                      keep_default=False)
        # update your user's boolean flag here

    def backwards(self, orm):
        # Deleting field 'User.paying'
        db.delete_column(u'user', 'paying')
You can add your code to the migration script created by South.
If you have updated a model and run schemamigration with South, it will create a script to apply that migration. It will be in appname/migrations/00N_some_name.py.
You can add your code at the end of the forwards() method in that script, after the schema alteration is done.