I've set up Apache Drill, created an HTTP storage plugin, and set its configuration as follows:
{
  "type": "http",
  "cacheResults": false,
  "connections": {
    "accounts": {
      "url": "https://my.datasource.url",
      "method": "GET",
      "headers": {
        "Authorization": "Bearer access_token...",
        "Accept": "application/json"
      },
      "authType": "none",
      "userName": null,
      "password": null,
      "postBody": null,
      "params": null,
      "dataPath": "QueryResponse/Account",
      "requireTail": false,
      "inputType": "json"
    }
  },
  "timeout": 0,
  "proxyHost": null,
  "proxyPort": 0,
  "proxyType": "direct",
  "proxyUsername": null,
  "proxyPassword": null,
  "enabled": true
}
I am able to run queries through a REST call (as well as from the Web UI and ODBC), like this:
{
  "queryType": "SQL",
  "query": "select * from myds.accounts"
}
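For reference, this is roughly how I post that payload from JavaScript (a minimal sketch; localhost:8047 is Drill's default web port, so adjust to your setup):
// Minimal sketch: submit a SQL query to Drill's REST API (default port 8047 assumed).
const drillUrl = "http://localhost:8047/query.json";

async function runDrillQuery(sql) {
  const response = await fetch(drillUrl, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ queryType: "SQL", query: sql }),
  });
  if (!response.ok) {
    throw new Error("Drill query failed: " + response.status);
  }
  return response.json(); // result rows come back in the response body
}

runDrillQuery("select * from myds.accounts").then(console.log).catch(console.error);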
The problem is that the access token is short-lived and multiple users need to access these data sources with their own access tokens, so saving the token within the connection doesn't work for me.
Is there any way I could send the access token from the client at query time? I have no preference between the REST API and ODBC; either would be fine as long as it solves my problem. Thanks.
It may be possible to specify some of the configuration at query time. The example below demonstrates, in the file system plugin, how to use the table() function to alter the configuration options at runtime. In this case, we're specifying which sheet to query in an Excel file.
SELECT *
FROM table(dfs.`excel/test_data.xlsx` (type => 'excel', sheetName =>'secondSheet'))
I don't know if this will work for the REST plugin or not, but it's worth a try. (It is admittedly a bit of a hack)
Another option, which would require modification to the plugin, would be to create special variables that could be specified at query time. For instance, we could create a _headers variable so that you could insert items into the headers at query time. Thus, a query might look like:
SELECT...
FROM ...
WHERE _headers="Authorization=1234"
I'm really wondering what the best way to accomplish this is. I'm sure you're not the only one with this issue.
I'm trying to save data to my MySQL db from a Node method. This includes a field called attachments.
console.log(JSON.stringify(post.acf.attachments[0])); returns:
{
  "ID": 4776,
  "id": 4776,
  "title": "bla",
  "filename": "bla.pdf",
  "filesize": 1242207,
  "url": "https://example.com/wp-content/uploads/bla.pdf",
  "link": "https://example.com/bla/",
  "alt": "",
  "author": "1",
  "description": "",
  "caption": "",
  "name": "bla",
  "status": "inherit",
  "uploaded_to": 0,
  "date": "2020-10-23 18:05:13",
  "modified": "2020-10-23 18:05:13",
  "menu_order": 0,
  "mime_type": "application/pdf",
  "type": "application",
  "subtype": "pdf",
  "icon": "https://example.com/wp-includes/images/media/document.png"
}
This is indeed the data I want to save to the db:
await existing_post.save({
  ...
  attachments: post.acf.attachments[0],
});
However, the attachments field produces a 422 server error (if I comment out this field, the other fields save to the db without a problem). I don't see what is causing this error. Any ideas?
I've also tried
await existing_post.save({
  ...
  attachments: post.acf.attachments,
});
but then it seems to just save "[object Object]" to the database.
The field in the database is defined as text. I've also tried it by defining the field as json, but that made no difference.
exports.up = function (knex, Promise) {
  return knex.schema.table("posts", function (table) {
    table.text("attachments", "longtext");
  });
};
The 422 error code means the server is unable to process the data you are sending to it. In your case, your table field is longtext while post.acf.attachments is an object. That's why it saves [object Object] to your db (that is the return value of the object's toString() method).
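A quick illustration of that string coercion (plain Node.js, nothing project-specific assumed):
// When an object is forced into a text column, its default toString() kicks in:
const attachments = { ID: 4776, title: "bla" };
console.log(String(attachments));         // "[object Object]"
console.log(JSON.stringify(attachments)); // '{"ID":4776,"title":"bla"}'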
Try using
await existing_post.save({
  ...
  attachments: JSON.stringify(post.acf.attachments),
});
MySQL and knex both support the JSON format, so I'd suggest you change the field to json (see the knex docs and the MySQL 8 docs). You'll still need to stringify your objects, though.
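For example, a migration along these lines (a sketch; the posts table and attachments column names are taken from your snippet) would switch the column to a JSON type:
// Sketch: convert the existing text column to a JSON column.
exports.up = function (knex) {
  return knex.schema.alterTable("posts", function (table) {
    table.json("attachments").alter();
  });
};

exports.down = function (knex) {
  return knex.schema.alterTable("posts", function (table) {
    table.text("attachments").alter();
  });
};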
EDIT: I just saw that Knex supports jsonInsert (and plenty of other neat stuff) as query builder methods, which should be useful for you.
MySQL also supports a large range of functions for handling JSON.
In addition, when you fetch the results from the database, you'll need to parse the JSON string to get an actual object:
const row = await knex('posts').select('attachments').first();
const attachments = JSON.parse(row.attachments);
Knex also provides jsonExtract, which should fill your needs (see also MySQL's json_extract).
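A rough sketch of what that could look like (jsonExtract only exists in newer knex releases, so treat the exact signature as an assumption and check your version's docs; the knex.raw fallback uses MySQL's JSON_EXTRACT directly):
// Sketch: pull one property out of the JSON column.
const viaBuilder = await knex("posts")
  .jsonExtract("attachments", "$.url", "attachment_url") // assumed signature: (column, path, alias)
  .first();

// Fallback using a raw MySQL expression if your knex version lacks jsonExtract.
const viaRaw = await knex("posts")
  .select(knex.raw("json_extract(attachments, '$.url') as attachment_url"))
  .first();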
I was wondering if I'm misunderstanding the expected behavior of usernameAttributeProvider or if it's a bug (and if there is an alternate solution).
I have the following service:
{
  "#class": "org.apereo.cas.support.oauth.services.OAuthRegisteredService",
  ...
  "usernameAttributeProvider": {
    "#class": "org.apereo.cas.services.PrincipalAttributeRegisteredServiceUsernameProvider",
    "usernameAttribute": "uidNumber",
    "canonicalizationMode": "NONE"
  },
  ...
}
The goal is to provide this service with a different identifier (uidNumber) than the one used for other services (the default identifier for other services is uid).
N.B.: both (uid and uidNumber) come from LDAP.
This works well (I can connect to the service), but I detected an unexpected behavior.
When a user connects to the above-mentioned service, in CAS logs, the WHO: is the uid for every ACTION except for SERVICE_TICKET_VALIDATED for which it is the uidNumber.
After connecting to the above-mentioned service (and only after connecting to this specific service), if the user accesses an OidcRegisteredService service, the sub in the OIDC response is the uidNumber, whereas the service expects to receive the uid (which makes it fail to authenticate the user). The OidcRegisteredService configuration doesn't provide any specific config for the usernameAttribute:
{
  "#class": "org.apereo.cas.services.OidcRegisteredService",
  "serviceId": "...",
  "name": "...",
  "id": ...,
  "clientId": "...",
  "clientSecret": "...",
  "bypassApprovalPrompt": true,
  "scopes": ["java.util.HashSet", ["openid", "profile", "email", "offline_access"]]
}
That means that, depending on whether or not the user has connected to the OAuthRegisteredService described above, they can or cannot connect to the OidcRegisteredService.
Note that it doesn't affect other RegexRegisteredService services or a SamlRegisteredService that specifies its own usernameAttributeProvider.
I also tried forcing the usernameAttributeProvider: it doesn't change what is received in the sub (still uid or uidNumber depending on whether the user connected to the OAuthRegisteredService before or not):
{
  "#class": "org.apereo.cas.services.OidcRegisteredService",
  "serviceId": "...",
  "name": "...",
  "id": ...,
  "clientId": "...",
  "clientSecret": "...",
  "bypassApprovalPrompt": true,
  "scopes": ["java.util.HashSet", ["openid", "profile", "email", "offline_access"]],
  "usernameAttributeProvider": {
    "#class": "org.apereo.cas.services.PrincipalAttributeRegisteredServiceUsernameProvider",
    "usernameAttribute": "uid"
  }
}
N.B.: CAS version is 5.2.7
Am I missing something?
Am I missing something?
You are not. Most likely, this is a bug that should be fixed, or you can patch the server yourself, since your CAS version has been EOL for many years.
Modify this line in your CAS overlay and use the service parameter to re-calculate the username/subject.
According to https://learn.microsoft.com/en-gb/azure/virtual-machines/windows/extensions-dsc-template, the latest method for passing credentials from an ARM template to a DSC extension is by placing the whole credential within the configurationArguments of the protectedSettings section, as shown below:
"properties": {
"publisher": "Microsoft.Powershell",
"type": "DSC",
"typeHandlerVersion": "2.24",
"autoUpgradeMinorVersion": true,
"settings": {
"wmfVersion": "latest",
"configuration": {
"url": "[concat(parameters('_artifactsLocation'), '/', variables('artifactsProjectFolder'), '/', variables('dscArchiveFolder'), '/', variables('dscSitecoreInstallArchiveFileName'))]",
"script": "[variables('dscSitecoreInstallScriptName')]",
"function": "SitecoreInstall"
},
"configurationArguments": {
"nodeName": "[parameters('CMCD VMName')]",
"sitecorePackageUrl": "[concat(parameters('sitecorePackageLocation'), '/', parameters('sitecoreRelease'), '/', parameters('sitecorePackageFilename'))]",
"sitecorePackageUrlSasToken": "[parameters('sitecorePackageLocationSasToken')]",
"sitecoreLicense": "[concat(parameters('sitecorePackageLocation'), '/', parameters('sitecoreLicenseFilename'))]",
"domainName": "[parameters('domainName')]",
"joinOU": "[parameters('domainOrgUnit')]"
},
"configurationData": {
"url": "[concat(parameters('_artifactsLocation'), '/', variables('artifactsProjectFolder'), '/', variables('dscArchiveFolder'), '/', variables('dscSitecoreInstallConfigurationName'))]"
}
},
"protectedSettings": {
"configurationUrlSasToken": "[parameters('_artifactsLocationSasToken')]",
"configurationDataUrlSasToken": "[parameters('_artifactsLocationSasToken')]",
"configurationArguments": {
"domainJoinCredential": {
"userName": "[parameters('domainJoinUsername')]",
"password": "[parameters('domainJoinPassword')]"
}
}
}
}
Azure DSC is supposed to handle the encrypting/decrypting of the protectedSettings for me. This does appear to work, as I can see that the protectedSettings are encrypted within the settings file on the VM; however, the operation ultimately fails with:
VM has reported a failure when processing extension 'dsc-sitecore-dev-install'.
Error message: "The DSC Extension received an incorrect input: Compilation errors
occurred while processing configuration 'SitecoreInstall'. Please review the errors
reported in error stream and modify your configuration code appropriately.
System.InvalidOperationException error processing property 'Credential' OF TYPE
'xComputer': Converting and storing encrypted passwords as plain text is not
recommended. For more information on securing credentials in MOF file, please refer
to MSDN blog: http://go.microsoft.com/fwlink/?LinkId=393729
At C:\Packages\Plugins\Microsoft.Powershell.DSC\2.24.0.0\DSCWork\dsc-sitecore-dev-install.0\dsc-sitecore-dev-install.ps1:103 char:3
+ xComputer Converting and storing encrypted passwords as plain text is not
recommended. For more information on securing credentials in MOF file, please refer
to MSDN blog: http://go.microsoft.com/fwlink/?LinkId=393729
Cannot find path 'HKLM:\SOFTWARE\Microsoft\PowerShell\3\DSC' because it does not exist.
Cannot find path 'HKLM:\SOFTWARE\Microsoft\PowerShell\3\DSC' because it does not exist.
Another common error is to specify parameters of type PSCredential without an
explicit type. Please be sure to use a typed parameter in DSC Configuration, for
example:
configuration Example {
    param([PSCredential] $UserAccount)
    ...
}.
Please correct the input and retry executing the extension.".
The only way that I can make it work is to add PsDscAllowPlainTextPassword = $true to my configurationData, but I thought I was using the protectedSettings section to avoid using plain text passwords...
Am I doing something wrong, or is it simply that my understanding is wrong?
The proper way of doing this:
"settings": {
"configuration": {
"url": "xxx",
"script": "xxx",
"function": "xx"
},
"configurationArguments": {
"param1": xxx,
"param2": xxx
etc...
}
},
"protectedSettings": {
"configurationArguments": {
"NameOfTheCredentialsParameter": {
"userName": "USERNAME",
"password": "PASSWORD!1"
}
}
}
This way you don't need PsDscAllowPlainTextPassword = $true.
Then you can receive the parameters in your Configuration with
Configuration MyConf {
    param (
        [PSCredential] $NameOfTheCredentialsParameter
    )
    # ... node and resource declarations go here ...
}
And use it in your resource:
Registry DoNotOpenServerManagerAtLogon {
    Ensure               = "Present"
    Key                  = "HKEY_CURRENT_USER\SOFTWARE\Microsoft\ServerManager"
    ValueName            = "DoNotOpenServerManagerAtLogon"
    ValueData            = 1
    ValueType            = "Dword"
    PsDscRunAsCredential = $NameOfTheCredentialsParameter
}
The fact that you still need to use PsDscAllowPlainTextPassword = $true is documented.
Here is the quoted section:
However, currently you must tell PowerShell DSC it is okay for credentials to be outputted in plain text during node configuration MOF generation, because PowerShell DSC doesn’t know that Azure Automation will be encrypting the entire MOF file after its generation via a compilation job.
Based on the above, it seems that it is an order of operations issue. The MOF is generated and THEN encrypted.
I have a potential client who wants to pull some data from a website via VBA. I am new to XML and JSON.
I found a link somewhere that provides the following code which uses MSXML to return data for a single item from that particular website.
Public Function GetItemSalePrice(item As String) As Double
    Dim dblItem As Long
    With CreateObject("MSXML2.XMLHTTP")
        ' Synchronous GET request for the item's JSON record
        .Open "GET", "http://www.gw2spidy.com/api/v0.9/json/item/" & item, False
        .send
        ' Crude parse: take the value that follows "min_sale_unit_price": in the response text
        dblItem = Split(Split(.responseText, "min_sale_unit_price"":")(1), ",")(0)
        ' Scale the raw price down by 100, as in the original snippet
        GetItemSalePrice = dblItem / 100
    End With
End Function
However, the data my client wants to return comes in pages of up to 500 records at a time. He indicates that he wants to pass in date ranges and a page number, similar to the following:
https://api.appfigures.com/v2/reviews?client_key=xxxxxxxf&start=2015-01-01&end=2016-01-21&page=1
But because this is an https site, it wants a userid and password. Can I simply reformat that string to include userid and password? Or is there another method or property of the MSXML object that can be set for authentication?
The client indicates that the return value looks like:
{
  "total": 140,
  "pages": 28,
  "this_page": 1,
  "reviews": [{
    "author": "DeveloperToDeveloper",
    "title": "Just Spectacular",
    "review": "Finally able to remove the ads! The description is hilarious!! Thanks!!!",
    "original_title": null,
    "original_review": null,
    "stars": "5.00",
    "iso": "US",
    "version": "1.2",
    "date": "2012-09-19T17:05:00",
    "product": 6567539,
    "weight": 0,
    "id": "5561747L7xnbsMRu8UbPvy7A71Dv6A=="
  }]
}
But with multiple records returned, as many as 500 at a time. Is there an efficient way of reading data in that format into a table? One or more records at a time? I can obviously write a text parser but I assume that someone has probably already done that leg work.
Thanks, Bob. I figured out the authentication issue: a popup dialog is used to enter the user ID and password, and the client was satisfied with that.
And given the structure of the returned string, I was able to parse the data myself without too much difficulty.
If I make this call to the server from a browser:
http://localhost:8080/api/items/number/all.json
Or from curl:
curl -G http://localhost:8080/api/items/number/all.json
I get back the following json:
{
  "language": null,
  "number": 10,
  "queryId": 0,
  "from": null,
  "to": null,
  "percentage": 33,
  "dataInfoSet": null
}
However, when I use the d3.json call:
d3.json("http://localhost:8080/api/items/number/all.json", function(jsondata) {
console.log(jsondata);
});
The output from console.log is null.
If, instead of the HTTP call, I save the JSON in a file (fileWithData.json) and do:
d3.json("fileWithData.json", function(jsondata) {
console.log(jsondata);
});
Everything works as expected. Does anyone know what might be the problem?
Solved with the help of the d3-js Google group.
The problem was that the page loading the JSON was not being served from localhost:8080, so there were cross-domain restrictions. I just deployed the file within the same application.
In case cross-domain calls have to be made, the group suggested the use of JSONP and especially CORS (http://www.nczonline.net/blog/2010/05/25/cross-domain-ajax-with-cross-origin-resource-sharing/).
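For illustration, a minimal sketch of the server side of CORS (this assumes a hypothetical Node/Express server standing in for the API on localhost:8080; the real fix is whatever mechanism your actual server uses to set the Access-Control-Allow-Origin header):
// Sketch: allow cross-origin requests to the JSON endpoint.
const express = require("express");
const app = express();

app.use(function (req, res, next) {
  // Lets d3.json() on pages served from other origins read the response.
  res.setHeader("Access-Control-Allow-Origin", "*");
  next();
});

app.get("/api/items/number/all.json", function (req, res) {
  res.json({ number: 10, percentage: 33 }); // stand-in payload
});

app.listen(8080);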