AOSP: How to verify OTA updates by their metadata

I'm building an OTA update for my custom Android 10 build as follows:
./build/make/tools/releasetools/ota_from_target_files \
--output_metadata_path metadata.txt \
target-files.zip \
ota.zip
The resulting ota.zip can be applied by extracting the payload.bin and payload_properties.txt according to the android documentation for update_engine_client.
update_engine_client --payload=file:///<wherever>/payload.bin \
--update \
--headers=<Contents of payload_properties.txt>
This all works, so I'm fairly confident the OTA is being created correctly. However, I'd like to be able to download just the metadata and verify that the payload can be applied before having the client download the entire payload.
Looking at the update_engine_client --help options, it appears one can verify the metadata as follows:
update_engine_client --verify --metadata=<path to metadata.txt from above>
This is where I fail to get the desired result: I get an error saying it failed to parse the payload header. It fails with kDownloadInvalidMetadataMagicString, which, from reading the source, appears to be a check on the first 4 bytes of the metadata. Apparently the metadata.txt I created isn't the right input for the verification tool.
So I'm hoping someone can point me in the right direction to either generate the metadata correctly or tell me how to use the tool correctly.

It turns out the metadata generated by the OTA tool is in a human-readable format, while the verify method expects a binary file. That binary file is not stored in the zip as a separate entry; instead, it is prepended to payload.bin. The first bytes of payload.bin are therefore payload_metadata.bin, and those bytes work correctly with update_engine_client's verify method to determine whether the payload is applicable.
I'm extracting the payload_metadata.bin in a makefile as follows:
$(DEST)/%.meta: $(DEST)/%.zip
	# Pull the human-readable metadata file out of the OTA zip
	unzip $< -d /tmp META-INF/com/android/metadata
	# Find the payload_metadata.bin offset:length entry and slice those
	# bytes out of the zip itself into the target file
	python -c 'import re; meta=open("/tmp/META-INF/com/android/metadata").read(); \
	m=re.search(r"payload_metadata\.bin:([0-9]*):([0-9]*)", meta); \
	s=int(m.groups()[0]); l=int(m.groups()[1]); \
	z=open("$<","rb").read(); \
	open("$@","wb").write(z[s:s+l])'
	rm -rf /tmp/META-INF
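For reference, here is a rough standalone Python version of the same extraction (a sketch only; the input and output file names are assumptions, everything else mirrors the makefile above):

# Sketch: read META-INF/com/android/metadata from the OTA zip, find the
# payload_metadata.bin:<offset>:<size> entry, and slice those bytes out of
# the zip file itself.
import re
import zipfile

OTA_ZIP = "ota.zip"                  # assumed input
OUT = "payload_metadata.bin"         # assumed output

with zipfile.ZipFile(OTA_ZIP) as z:
    meta = z.read("META-INF/com/android/metadata").decode()

m = re.search(r"payload_metadata\.bin:(\d+):(\d+)", meta)
if not m:
    raise SystemExit("no payload_metadata.bin entry found in metadata")

offset, size = int(m.group(1)), int(m.group(2))
with open(OTA_ZIP, "rb") as f:
    f.seek(offset)
    blob = f.read(size)

with open(OUT, "wb") as out:
    out.write(blob)

The extracted file is what update_engine_client --verify --metadata=<path> expects.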


Unrecognized content type parameters: format when serving model on Databricks experiment

I got this error when serving a model on Databricks using MLflow:
Unrecognized content type parameters: format. IMPORTANT: The MLflow Model scoring protocol has changed in MLflow version 2.0. If
you are seeing this error, you are likely using an outdated scoring
request format. To resolve the error, either update your request
format or adjust your MLflow Model's requirements file to specify an
older version of MLflow (for example, change the 'mlflow' requirement
specifier to 'mlflow==1.30.0'). If you are making a request using the
MLflow client (e.g. via mlflow.pyfunc.spark_udf()), upgrade your
MLflow client to a version >= 2.0 in order to use the new request
format. For more information about the updated MLflow Model scoring
protocol in MLflow 2.0, see
https://mlflow.org/docs/latest/models.html#deploy-mlflow-models.
I'm looking for the right format to use for my JSON input; the format I am using looks like this example:
[
{
"input1":12,
"input2":290.0,
"input3":'red'
}
]
I don't really know if it's related to my MLflow version (currently I'm using mlflow==1.24.0); I cannot update the version as I do not have the required privileges.
I have also tried the solution suggested here and got:
TypeError:spark_udf() got an unexpected keyword argument 'env_manager'
So far I have not found any documentation to solve this issue.
Thank you in advance for your help.
When you log the model, your MLflow version is 1.24, but when you serve it as an API in Databricks, a new environment is created for it, and that environment installs a 2.0+ version of MLflow. As the error message suggests, you can either specify the MLflow version or update the request format.
If you are using Classic Model Serving, you should specify the version; if you are using Serverless Model Serving, you should update the request format. If you must use Classic Model Serving and do not want to upgrade, scroll to the bottom.
Specify the MLflow version
When logging the model, you can specify a new Conda environment or add additional pip requirements that are used when the model is being served.
pip
# log model with mlflow==1.* specified
mlflow.<flavor>.log_model(..., extra_pip_requirements=["mlflow==1.*"])
Conda
# get default conda env
conda_env = mlflow.<flavor>.get_default_conda_env()
print(conda_env)
# specify mlflow==1.*
conda_env = {
"channels": ["conda-forge"],
"dependencies": [
"python=3.9.5",
"pip<=21.2.4",
{"pip": ["mlflow==1.*", "cloudpickle==2.0.0"]},
],
"name": "mlflow-env",
}
# log model with new conda_env
mlflow.<flavor>.log_model(..., conda_env=conda_env)
Update the request
An alternative is to update the JSON request format, but this will only work if you are using Databricks Serverless Model Serving.
The MLflow docs link at the end of the error message lists all the formats. From the data you provided, I would suggest dataframe_split or dataframe_records:
{
"dataframe_split": {
"columns": ["input1", "input2", "input3"],
"data": [[1, 2, "red"]]
}
}
{
"dataframe_records": [
{
"input1": 12,
"input2": 290,
"input3": "red"
}
]
}
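As a rough sketch (not from the original answer), sending the dataframe_records variant from Python could look like this; the endpoint URL and token are placeholders, and the basic auth mirrors the -u token:$DATABRICKS_TOKEN style used in the curl example further down:

# Sketch only: post the dataframe_records payload to a Databricks
# model-serving endpoint. URL and token are placeholders.
import requests

url = "https://<url>/model/<model>/<version>/invocations"
token = "<DATABRICKS_TOKEN>"

payload = {
    "dataframe_records": [
        {"input1": 12, "input2": 290.0, "input3": "red"}
    ]
}

# requests sets Content-Type: application/json by itself; note that no
# format=pandas-records parameter is added.
resp = requests.post(url, json=payload, auth=("token", token))
print(resp.status_code, resp.text)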
Classic model serving with MLflow 2.0+
If you are using Classic Model Serving, don't want to specify the MLflow version, and want to use the UI for inference, DO NOT log an input_example when you log the model. I know this does not follow MLflow "best practice", but after some investigation I believe there is an issue with Databricks when you do.
When you log an input_example, MLflow records information about the example, including its type and pandas_orient. This information is used to generate the inference recipe. As you can see in the generated curl command, it sets format=pandas-records (the JSON body itself is not generated for you). But this returns the Unrecognized content type... error.
curl \
-u token:$DATABRICKS_TOKEN \
-X POST \
-H "Content-Type: application/json; format=pandas-records" \
-d '{
"dataframe_split": {
"columns": ["input1", "input2", "input3"],
"data": [[12, 290, 3]]
}
}' \
https://<url>/model/<model>/<version>/invocations
For me, when I removed format=pandas-records entirely, everything worked as expected. Because of this, I believe that if you log an example and use the UI, Databricks adds this format parameter to the request for you, which results in an error even if you did everything else correctly. With serverless, the generated curl does not include this parameter at all.

Parse XML from not well formed page using xpath

Notice:
While writing this question, I noticed that there is a GitHub API that solves my problem without HTML parsing: https://api.github.com/repos/mozilla/geckodriver/releases/latest. I decided to ask anyway, since I'm interested in how to solve the described problem of parsing malformed HTML itself. So please don't downvote just because there is a GitHub API for this case! GitHub can be replaced by any other page that throws validation errors.
I want to download the latest version of geckodriver. By fetching the redirection target of the latest tag, I land on the releases page:
curl $(curl -s "https://github.com/mozilla/geckodriver/releases/latest" --head | grep -i location | awk '{print $2}' | sed 's/\r//g') > /tmp/geckodriver.html
The first asset matching geckodriver-vx.xxx-linux64.tar.gz is the required link. Since XML is strictly structured, it should be possible to parse the page properly, and tools like xmllint can query it using XPath. Since XPath is new to me, I tried a simple query on the header, but xmllint throws a lot of errors:
$ xmllint --xpath '//div[@class=Header]' /tmp/geckodriver.html
/tmp/geckodriver.html:51: parser error : Specification mandate value for attribute data-pjax-transient
<meta name="selected-link" value="repo_releases" data-pjax-transient>
^
/tmp/geckodriver.html:107: parser error : Opening and ending tag mismatch: link line 105 and head
</head>
^
/tmp/geckodriver.html:145: parser error : Entity 'nbsp' not defined
Sign up
^
/tmp/geckodriver.html:172: parser error : Entity 'rarr' not defined
es <span class="Bump-link-symbol float-right text-normal text-gray-light">→
...
There are a lot more. It seems that the GitHub page is not well formed, as the specification would require. I also tried xmlstarlet
xmlstarlet sel -t -v -m '//div[@class=Header]' /tmp/geckodriver.html
but the result is similar.
Is it not possible to extract some data using those tools when the HTML is not well formed?
curl $(curl -s "https://github.com/mozilla/geckodriver/releases/latest" --head | grep -i location | awk '{print $2}' | sed 's/\r//g') > /tmp/geckodriver.html
It may be simpler to use -L, and have curl follow the redirection:
curl -L https://github.com/mozilla/geckodriver/releases/latest
Then, xmllint accepts an --html argument, to use an HTML parser:
xmllint --html --xpath '//div[@class=Header]'
However, this doesn't match anything on that page, so perhaps you want to base your XPath on something like:
'string((//a[span[contains(.,"linux")]])[1]/@href)'
Which yields:
/mozilla/geckodriver/releases/download/v0.26.0/geckodriver-v0.26.0-linux32.tar.gz
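If you prefer to do this in Python, lxml's HTML parser tolerates the same malformed markup; this is just a sketch (not part of the original answer) that mirrors the XPath above:

# Sketch: fetch the releases page and run the same XPath with lxml's
# forgiving HTML parser instead of a strict XML one.
import requests
from lxml import html

# requests follows the redirect for us, like curl -L
page = requests.get("https://github.com/mozilla/geckodriver/releases/latest")
tree = html.fromstring(page.content)

# First link whose <span> mentions "linux", as in the XPath above.
hrefs = tree.xpath('(//a[span[contains(., "linux")]])[1]/@href')
print(hrefs[0] if hrefs else "no match")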

About Amster '--body' option (OpenAM command line)

There is a '--body' option for most of the Amster commands. This option allows you to send the body of a request in JSON syntax. However, if the body of the request is large, the --body argument gets large too and the Amster command becomes unwieldy in the terminal. Is there any way to specify this JSON text that is less awkward on the command line?
Perhaps there is an option that lets you point to the path of a JSON file, or something like that.
I will be very grateful for any answer.
My answer below is based on the latest available Amster (6.0.0)
You can use Amster in Script mode.
Essentially, you can write your Amster commands in a separate file; let's call it myscript.amster (the extension is not important).
You can then put your entire command, including the JSON, in the script. For example, to create a realm (note the use of \ to split the JSON across multiple lines):
create Realms --global --body '{ \
"name": "test", \
"active": false, \
"parentPath": "/", \
"aliases": [ "testing" ] \
}'
Now, you can run this script in two modes:
From within the amster shell:
am> :load <pathToYourScript>
Without having to enter the script mode:
amster/amster <pathToYourScript>
In this mode, remember to connect to your OpenAM server before running your commands, and to :quit at the end. You should find some more samples in the samples directory of your Amster installation.

Unable to import data from CouchDB

I am trying to import data and then export it to a remote machine. Here is the schema of the database; it is just a document that I got in JSON form.
{"docs":[
{"id":"702decba698fea7df3fa46fdd9000fa4","key":"702decba698fea7df3fa46fdd9000fa4","value":{"rev":"1-f8c63611d5bc7354cac42d2a697ad57a"},"doc":{"_id":"702decba698fea7df3fa46fdd9000fa4","_rev":"1-f8c63611d5bc7354cac42d2a697ad57a","contributors":null,"truncated":false,"text":"RT #Whistlepodu4Csk: : First time since 1987 World Cup no Asian teams in the WC final\nThis had occurred in 1975, 1979, 1987 and now #CWC15\n…","in_reply_to_status_id":null,"in_reply_to_user_id":null,"id":583090814735155201,"favorite_count":0,"author":{"py/object":"tweepy.models.User","py/state":{"follow_request_sent":false,"profile_use_background_image":true,"profile_text_color":"333333","id":3102321084,"verified":false,"profile_location":null,"profile_image_url_https":"https://pbs.twimg.com/profile_images/579460416977252352/weSzVnPF_normal.jpg","profile_sidebar_fill_color":"DDEEF6","is_translator":false,"geo_enabled":false,"entities":{"description":{"urls":[]}},"followers_count":1,"profile_sidebar_border_color":"C0DEED","id_str":"3102321084","default_profile_image":false,"location":"Chennai","is_translation_enabled":false,"utc_offset":null,"statuses_count":9,"description":"12/11","friends_count":23,"profile_link_color":"0084B4","profile_image_url":"http://pbs.twimg.com/profile_images/579460416977252352/weSzVnPF_normal.jpg","notifications":false,"profile_background_image_url_https":"https://abs.twimg.com/images/themes/theme1/bg.png","profile_background_color":"C0DEED","profile_background_image_url":"http://abs.twimg.com/images/themes/theme1/bg.png","name":"charandevaa","lang":"en","profile_background_tile":false,"favourites_count":7,"screen_name":"charandevaarg","url":null,"created_at":{"py/object":"datetime.datetime","__reduce__":[{"py/type":"datetime.datetime"},["B98DFgEtLgAAAA=="]]},"contributors_enabled":false,"time_zone":null,"protected":false,"default_profile":true,"following":false,"listed_count":0}},"retweeted":false,"coordinates":null,"entities":{"symbols":[],"user_mentions":[{"indices":[3,19],"id_str":"570379002","screen_name":"Whistlepodu4Csk","name":"Chennai Super Kings","id":570379002}],"hashtags":[{"indices":[132,138],"text":"CWC15"},{"indices":[139,140],"text":"IndvsAus"}],"urls":[]},"in_reply_to_screen_name":null,"id_str":"583090814735155201","retweet_count":9,"metadata":{"iso_language_code":"en","result_type":"recent"},"favorited":false,"retweeted_status":{"py/object":"tweepy.models.Status","py/state":{"contributors":null,"truncated":false,"text":": First time since 1987 World Cup no Asian teams in the WC final\nThis had occurred in 1975, 1979, 1987 and now 
#CWC15\n#IndvsAus\"","in_reply_to_status_id":null,"in_reply_to_user_id":null,"id":581059988317073409,"favorite_count":6,"author":{"py/object":"tweepy.models.User","py/state":{"follow_request_sent":false,"profile_use_background_image":true,"profile_text_color":"333333","id":570379002,"verified":false,"profile_location":null,"profile_image_url_https":"https://pbs.twimg.com/profile_images/460329225124188160/FgnIhlVM_normal.jpeg","profile_sidebar_fill_color":"DDEEF6","is_translator":false,"geo_enabled":false,"entities":{"url":{"urls":[{"indices":[0,22],"url":"http://t.co/Kx3erXpkEJ","expanded_url":"http://chennaisuperkings.com","display_url":"chennaisuperkings.com"}]},"description":{"urls":[{"indices":[138,160],"url":"http://t.co/yfitkkfz5D","expanded_url":"http://www.facebook.com/chennaisuperkingsofficialfansclub","display_url":"facebook.com/chennaisuperki…"}]}},"followers_count":13604,"profile_sidebar_border_color":"000000","id_str":"570379002","default_profile_image":false,"location":"Chennai","is_translation_enabled":false,"utc_offset":19800,"statuses_count":13107,"description":"Chennai super kings fans club:All about Mahi, Raina,Mccullum,Aswhin,Bravo. Updates about Suriya: Beleive in CSK: Whistlepodu!Suriya Rocks http://t.co/yfitkkfz5D","friends_count":11962,"profile_link_color":"CCC200","profile_image_url":"http://pbs.twimg.com/profile_images/460329225124188160/FgnIhlVM_normal.jpeg","notifications":false,"profile_background_image_url_https":"https://pbs.twimg.com/profile_background_images/518467484358164480/yUXQYv3m.jpeg","profile_background_color":"FFF04D","profile_banner_url":"https://pbs.twimg.com/profile_banners/570379002/1370113848","profile_background_image_url":"http://pbs.twimg.com/profile_background_images/518467484358164480/yUXQYv3m.jpeg","name":"Chennai Super Kings","lang":"en","profile_background_tile":true,"favourites_count":283,"screen_name":"Whistlepodu4Csk","url":"http://t.co/Kx3erXpkEJ","created_at":{"py/object":"datetime.datetime","__reduce__":[{"py/type":"datetime.datetime"},["B9wFAxUWFAAAAA=="]]},"contributors_enabled":false,"time_zone":"Chennai","protected":false,"default_profile":false,"following":false,"listed_count":23}},"retweeted":false,"coordinates":null,"entities":{"symbols":[],"user_mentions":[],"hashtags":[{"indices":[111,117],"text":"CWC15"},{"indices":[118,127],"text":"IndvsAus"}],"urls":[]},"in_reply_to_screen_name":null,"id_str":"581059988317073409","retweet_count":9,"metadata":{"iso_language_code":"en","result_type":"recent"},"favorited":false,"source_url":"http://twitter.com/download/android","user":{"py/id":13},"geo":null,"in_reply_to_user_id_str":null,"lang":"en","created_at":{"py/object":"datetime.datetime","__reduce__":[{"py/type":"datetime.datetime"},["B98DGgsvMwAAAA=="]]},"in_reply_to_status_id_str":null,"place":null,"source":"Twitter for Android"}},"source_url":"http://www.twitter.com","user":{"py/id":1},"geo":null,"in_reply_to_user_id_str":null,"lang":"en","doc_type":"tweet","created_at":{"py/object":"datetime.datetime","__reduce__":[{"py/type":"datetime.datetime"},["B98EAQIRJgAAAA=="]]},"in_reply_to_status_id_str":null,"place":null,"source":"Twitter for Windows Phone"}}]}
Approach 1:
Here is the command:
curl -d @db.json -H "Content-type: application/json" -X POST http://127.0.0.1:5984/cwc15/_bulk_docs
I get below error:
{"error":"not_found","reason":"no_db_file"}
I did follow the post below before posting this problem:
http://stackoverflow.com/questions/26264647/couchdb-exported-file-wont-import-back-into-database
And I did not get any help from Google; the last post I can find over there is from way back in 2012, and none of it helped much. Could someone please help me out? It would be a life saver for me.
Approach 2 -
curl -H 'Content-Type: application/json' -X POST http://localhost:5984/_replicate -d ' {"source": "http://example.com:5984/dbname/", "target": "http://localhost#:5984/dbname/"}'
I gave the source and the target where I wanted to copy; for the target I gave the IP address of that machine followed by the port number and /dbname/.
Got error: Connection timed out
Approach 3:
Exported the CouchDB database to a file named cwc15.couch.
Stored it on a flash drive.
Logged in as root and went to the location where this file is stored.
Command - cp cwc15.couch /var/lib/couchdb
I get this error -
Error:{{case_clause,{{badmatch,
{error,eaccess}},
[{couch_file,init,1,
[{file,"couch_file.erl"},{line,314}]},
{gen_server,init_it,6,
[{file,"gen_server.erl"},{line,304}]},
{proc_lib,init_p_do_apply,3,
[{file,"proc_lib.erl"},{line,
239}]}]}},
[{couch_server.handle_info,2,
[{file,couch_server.erl"},{line,442}]},
{gen_server,handle_msg,5,
[{file,"gen_server.erl"},{line,604}]},
{proc_lib,init_p_do_apply,3,
[{file,"proc_lib.erl"},{line,239}]}]}
{gen_server,call,
[couch_server,
{open,<<"cwc15>>,
[{user_ctx,
{user_ctx,null,
[<<"_admin">>],
<<"{couch_httpd_auth,
default_authentication_handler}">>}}]},
infinity]}
{"error":"not_found","reason":"no_db_file"} means the database doesn't exist; you need to create it first. Also, don't use curl's -d option for uploading files: that option sends data in text mode, while binary mode (-T or --data-binary) is what you really want. JSON is indeed a text format, but Unicode data may play a devilish role here.
The Connection timed out error happened because the source or target database isn't reachable at the URLs you specified. I'm not sure what they really were, but localhost#:5984 doesn't look like a valid one. Also, you didn't create the target database here either, so the initial error may occur again.
The {error,eaccess} in your logs means bad file permissions, which you accidentally broke by copying the file. Follow the install instructions to restore them and ensure that nothing else is broken.
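Putting that together, here is a minimal sketch of the fixed sequence (the database name and local URL are taken from the question; db.json is assumed to be the file you posted):

# Sketch: create the database first (fixes "no_db_file"), then bulk-load.
import json
import requests

base = "http://127.0.0.1:5984"
db = "cwc15"

# 1. Create the database.
r = requests.put(f"{base}/{db}")
print(r.status_code, r.text)   # 201 created, or 412 if it already exists

# 2. Bulk-load the documents; the library sends the body whole, which is
#    the same idea as curl --data-binary @db.json rather than -d.
with open("db.json", encoding="utf-8") as f:
    docs = json.load(f)

r = requests.post(f"{base}/{db}/_bulk_docs", json=docs)
print(r.status_code, r.text)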

Get list of repositories using gitweb for external use

For use in another external script, we need the list of repositories hosted on a Git server. We also have GitWeb enabled on the server.
Does anyone know if GitWeb exposes an API through which we can get the list of repositories? Something like GitBlit RPC (http://gitblit.com/rpc.html, e.g. https://your.gitblit.url/rpc?req=LIST_REPOSITORIES)?
Thanks.
No, from what I can see of the gitweb.cgi (from gitweb/gitweb.perl) implementation, there is no RPC API with JSON return messages.
This is only visible through the web page.
In the bottom right corner there is a small button that reads: TXT
You can get the list of projects there, for example:
For sourceware, the gitweb page: https://sourceware.org/git/
The TXT button links here: https://sourceware.org/git/?a=project_index
It should return a list of projects which are basically
<name of the git repository> <owner>
in plain text, which is easy to parse from a script.
But if you want JSON, you'd have to convert it with something like this:
$ wget -q -O- "https://sourceware.org/git/?a=project_index" \
| jq -R -n '[ inputs | split(" ")[0:2] | {"project": .[0], "owner": .[1]} ]'
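If you'd rather stay in Python than pipe through jq, a rough equivalent of the same conversion (a sketch only, using the same URL as above):

# Sketch: fetch gitweb's project_index and turn each "<repo> <owner>" line
# into a dict, mirroring the wget + jq pipeline above.
import json
import requests

text = requests.get("https://sourceware.org/git/?a=project_index").text

repos = []
for line in text.splitlines():
    if not line.strip():
        continue
    parts = line.split(None, 1)   # repo name, then owner (may be missing)
    repos.append({"project": parts[0],
                  "owner": parts[1] if len(parts) > 1 else None})

print(json.dumps(repos, indent=2))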