itgenoda074 SysAccessDenied on retrieval of binary blob in DocumentAttachmentFiles - exact-online

When I execute the following query on Exact Online:
use 450*** /* Replace anonymized *** by last three digits of division. */
set use-http-disk-cache false
set use-http-memory-cache false
select * from ExactOnlineREST..DocumentAttachmentFiles where id = to_guid('8b60ccb6-e89d-4397-a4ca-001d6f57a4eb')
I get an error:
itgenoda074: Uw sessie is verlopen. Meld je opnieuw aan. [Your session has expired. Log in again.]
itgenoda074: Not authenticated. Request to 'https://start.exactonline.nl/docs/SysAttachment.aspx?ID=8b60ccb6-e89d-4397-a4ca-001d6f57a4eb&_Division_=450***' gave response from 'https://start.exactonline.nl/docs/SysAccessDenied.aspx?Mode=128&_Division_=450***'.
This happens both directly after a long session and straight after logging on, so the token is still valid.
How can I access the document using Invantive SQL?
I have all privileges on the Exact Online environment for the owning company.

The following will not help, since the download is a recursive API call (from the file metadata to the actual blob):
set ignore-http-403-errors false
You can, however, ignore errors during the recursive retrieval of the actual binary blob:
set ignore-document-download-errors true
After that, run the following query to determine the document ID associated with the attachment:
select document from ExactOnlineREST..DocumentAttachmentFiles where id = to_guid('8b60ccb6-e89d-4397-a4ca-001d6f57a4eb')
And put the document GUID in the following query:
select * from exactonlinerest..documents where id=to_guid('VALUE FROM PREVIOUS QUERY')
You will probably retrieve 0 rows. Sometimes the document attachment files continue to exist while the document itself is no longer available.
If so, report it to Exact Online themselves; their APIs should return referentially correct data.
If not, please extend your question.

Related

Empty columns when querying

When using the Invantive Query Tool to request the table GLTransactionlines on Exact Online, my query times out.
When selecting a single column, the query returns no data. Specifically, I would like to know from which table I can request my Transaction Lines.
I have used the following query:
select division_code
, gltransaction_date
, gltransaction_journal_code_attr
, glaccount_code_attr
, amount_value
, glaccount_balancetype_attr
from gltransactionlines
where glaccount_balancetype_attr = 'W';
local export results as "${rptoutpath}\TransactionsPLlsc.xlsx" format xlsx
When I select *, the Invantive Query Tool returns that there are too many columns in GLTransactionLines.
The exact error is:
De externe server heeft een fout geretourneerd: (401) Niet gemachtigd. [The remote server returned an error: (401) Unauthorized.]
It occurs after ten minutes. When I run DebugView alongside, it shows that the following URL does not return:
Load Exact Online data using URL 'https://start.exactonline.nl/Docs/XMLDownload.aspx?Topic=gltransactions&Params_details=1&Params_documents=0&_Division_=1362280'
When I try to export another Exact Online table, it works, and sometimes fetching the GLTransactionLines works too.
It seems that the XML API of GL Transaction Lines is slow or malfunctioning in your environment. Please contact your supplier about this. As an alternative, you might want to switch to the REST API, which contains similar data, for example:
select *
from TransactionLines
where financialyear = 2016
and financialperiod = 12

Does Statement.RETURN_GENERATED_KEYS generate any extra round trip to fetch the newly created identifier?

JDBC allows us to fetch the value of a primary key that is automatically generated by the database (e.g. IDENTITY, AUTO_INCREMENT) using the following syntax:
PreparedStatement ps = connection.prepareStatement(
    "INSERT INTO post (title) VALUES (?)",
    Statement.RETURN_GENERATED_KEYS
);
ps.setString(1, "some title");
ps.executeUpdate();
// The generated key is exposed through a ResultSet:
ResultSet resultSet = ps.getGeneratedKeys();
while (resultSet.next()) {
    LOGGER.info("Generated identifier: {}", resultSet.getLong(1));
}
I'm interested in whether the Oracle, SQL Server, PostgreSQL, or MySQL driver uses a separate round trip to fetch the identifier, or whether there is a single round trip which executes the insert and fetches the ResultSet automatically.
It depends on the database and driver.
Although you didn't ask for it, I will answer for Firebird ;). In Firebird/Jaybird the retrieval itself doesn't require extra round trips, but using Statement.RETURN_GENERATED_KEYS or the integer array version will require three extra round trips (prepare, execute, fetch) to determine the columns to request (I still need to build a form of caching for it). Using the version with a String array will not require extra round trips (I would love to have RETURNING * like in PostgreSQL...).
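To make that concrete, here is a minimal sketch of the String-array variant against the question's post table; the generated column name ID is an assumption, not from the Jaybird documentation:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

static long insertPost(Connection connection, String title) throws SQLException {
    // The String[] form lets Jaybird append RETURNING for the named columns
    // directly, skipping the metadata round trips that
    // Statement.RETURN_GENERATED_KEYS needs to discover the column names.
    try (PreparedStatement ps = connection.prepareStatement(
            "INSERT INTO post (title) VALUES (?)", new String[] { "ID" })) {
        ps.setString(1, title);
        ps.executeUpdate();
        try (ResultSet keys = ps.getGeneratedKeys()) {
            keys.next();
            return keys.getLong(1);
        }
    }
}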
In PostgreSQL with PgJDBC there is no extra round-trip to fetch generated keys.
It sends a Parse/Describe/Bind/Execute message series followed by a Sync, then reads the results including the returned result-set. There's only one client/server round-trip required because the protocol pipelines requests.
However, batches that could otherwise be streamed to the server may sometimes be broken up into smaller chunks or run one by one if generated keys are requested. To avoid this, use the String[] form where you name the columns you want returned, and name only columns of fixed-width data types like integer. This only matters for batches, and it's due to a design problem in PgJDBC; see the sketch below.
(I posted a patch to add batch pipelining support in libpq that doesn't have that limitation, it'll do one client/server round trip for arbitrary sized batches with arbitrary-sized results, including returning keys.)
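A minimal sketch of that String[] advice for batches, again using the question's post table and assuming a generated integer id column:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.List;

static void insertPosts(Connection connection, List<String> titles) throws SQLException {
    // Naming a fixed-width generated column keeps PgJDBC able to stream the
    // whole batch; Statement.RETURN_GENERATED_KEYS could force it to split
    // the batch into smaller chunks.
    try (PreparedStatement ps = connection.prepareStatement(
            "INSERT INTO post (title) VALUES (?)", new String[] { "id" })) {
        for (String title : titles) {
            ps.setString(1, title);
            ps.addBatch();
        }
        ps.executeBatch();
        try (ResultSet keys = ps.getGeneratedKeys()) {
            while (keys.next()) {
                System.out.println("Generated identifier: " + keys.getLong(1));
            }
        }
    }
}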
MySQL receives the generated key(s) automatically in the OK packet of the protocol in response to executing a statement. There is no communication overhead when requesting generated keys.
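For illustration, a sketch against the question's post table: since the generated value travels in the OK packet of the INSERT response, reading it afterwards with Connector/J is a local operation.

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

static long insertAndGetKey(Connection connection) throws SQLException {
    try (Statement stmt = connection.createStatement()) {
        // Connector/J already holds last_insert_id from the OK packet of the
        // INSERT, so getGeneratedKeys() does not contact the server again.
        stmt.executeUpdate("INSERT INTO post (title) VALUES ('a title')",
                Statement.RETURN_GENERATED_KEYS);
        try (ResultSet keys = stmt.getGeneratedKeys()) {
            keys.next();
            return keys.getLong(1);
        }
    }
}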
In my opinion, even for such a trivial thing, a single approach working in all database systems will fail.
The only pragmatic solution is (in analogy to Hibernate) to find the best working solution for each target RDBMS, and call it a dialect of your one-for-all solution :)
Here is the information for Oracle.
I'm using a sequence to generate the key; the same behavior is observed for an IDENTITY column.
create table auto_pk (
  id  number,
  pad varchar2(100)
);
This works and uses only one round trip:
def stmt = con.prepareStatement("insert into auto_pk values(auto_pk_seq.nextval, 'XXX')",
        Statement.RETURN_GENERATED_KEYS)
def rowCount = stmt.executeUpdate()
def generatedKeys = stmt.getGeneratedKeys()
if (null != generatedKeys && generatedKeys.next()) {
    def id = generatedKeys.getString(1)
}
But unfortunately you get the ROWID as a result, not the generated key.
How is it implemented internally? You can see it if you activate a 10046 trace (BTW this is also the best way to see how many round trips were performed):
PARSING IN CURSOR
insert into auto_pk values(auto_pk_seq.nextval, 'XXX')
RETURNING ROWID INTO :1
END OF STMT
So you see that the JDBC 3.0 standard is implemented, but you don't get the requested result. Under the covers, the RETURNING clause is used.
The right approach to get the generated key in Oracle is therefore:
def stmt = con.prepareStatement("insert into auto_pk values(auto_pk_seq.nextval, 'XXX') returning id into ?")
stmt.registerReturnParameter(1, Types.INTEGER) // Oracle extension on oracle.jdbc.OraclePreparedStatement
def rowCount = stmt.executeUpdate()
def generatedKeys = stmt.getReturnResultSet() // Oracle-specific companion to registerReturnParameter
if (null != generatedKeys && generatedKeys.next()) {
    def id = generatedKeys.getLong(1)
}
Note:
Oracle Release 12.1.0.2.0
To activate the 10046 trace, use:
con.createStatement().execute "alter session set events '10046 trace name context forever, level 12'"
con.createStatement().execute "ALTER SESSION SET tracefile_identifier = my_identifier"
Depending on frameworks or libraries to do things that are perfectly possible in plain SQL is bad design IMHO, especially when working against a defined DBMS. (Statement.RETURN_GENERATED_KEYS is relatively innocuous, although it apparently does raise a question for you; but where frameworks are built on separate entities, do all sorts of joins and filters in code, or have custom-built transaction isolation logic, things get inefficient and messy very quickly.)
Why not simply:
PreparedStatement ps = connection.prepareStatement(
    "INSERT INTO post (title) VALUES (?) RETURNING id");
ps.setString(1, "some title");
ResultSet rs = ps.executeQuery(); // runs the INSERT and returns the generated id
Single trip, defined result.

JSON Queries - Failed to execute

So, I am trying to execute a query using the ArcGIS API, but the question should apply to any JSON query. I am kind of new to this query format, so I am pretty sure I must be missing something, but I can't figure out what it is.
This page allows for testing queries on the database before I actually implement them in my code. Features in this database have several fields, including OBJECTID and Identificatie. I would like to, for example, select the feature where Identificatie = 1. If I enter this (Identificatie = 1) in the Where field, though, an error "Failed to execute" appears. This happens for every field except OBJECTID; querying where OBJECTID = 1 returns the correct results. I am obviously doing something wrong, but I don't get why OBJECTID does work here. A brief explanation (or a link to a page documenting queries for JSON, which I haven't found) would be appreciated!
Identificatie, along with most other fields in the service you're using, is a string field. Therefore, you need to use single quotes in your WHERE clause:
Identificatie = '1'
Or to get one that actually exists:
Identificatie = '1714100000729432'
OBJECTID = 1 works without quotes because it's a numeric field.
Here's a link to the correct query. And here's a link to the query with all output fields included.
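As a sketch of how such a query could be issued from code once it works in the test page (hypothetical host and service path following the standard ArcGIS REST query-endpoint pattern; everything except the WHERE clause is an assumption):

import java.io.IOException;
import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

public class ArcGisQuery {
    public static void main(String[] args) throws IOException, InterruptedException {
        // Hypothetical layer query endpoint; replace with the service you use.
        String base = "https://example.com/arcgis/rest/services/MyService/MapServer/0/query";
        // String fields need single quotes inside the WHERE clause.
        String where = URLEncoder.encode("Identificatie = '1714100000729432'",
                StandardCharsets.UTF_8);
        URI uri = URI.create(base + "?where=" + where + "&outFields=*&f=json");
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(HttpRequest.newBuilder(uri).build(),
                      HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body()); // JSON with the matching features
    }
}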

Ruby MySQL output conflicting on different servers

I have coded a Ruby IRC bot, which is on GitHub (/ninjex/rubot) and which is producing conflicting output with MySQL on a dedicated server I just purchased.
First, we have the connection to the database in the MySQL folder (in .gitignore), which looks similar to the following code block:
@con = Mysql.new('localhost', 'root', 'pword', 'db_name')
Then we have the actual function to query the database:
def db_query
  que = get_message # Grabs the query from the user, e.g., ./db_query SELECT * FROM words
  results = @con.query(que) # Sends the query through the connection, e.g., @con.query("SELECT * FROM words")
  results.each { |x| chan_send(x) } # For each row returned, send it to the channel via chan_send
end
On my local machine, when running the command:
./db_query SELECT amount, user from words WHERE user = 'Bob' and word = 'hello'
I receive the output in IRC in an array-like fashion: ["17", "Bob"], where 17 is the amount and Bob is the user.
However, using this same function on my dedicated server results in output like 17Bob. I have attempted many changes in the code, as well as trying to parse the data into its own variable; however, it seems that 17Bob is coming out as a single value, making it impossible to parse into something like an array, which I could then use to send the data correctly.
This seems odd to me on both my local machine and the dedicated server, as I was expecting the output to first send 17 to IRC and then Bob, like:
17
Bob
For all the functions and source you can check my GitHub (/Ninjex/rubot); however, you may need to install some gems.
A few notes:
Make sure you are sanitizing the query in get_message, or you are opening yourself up to serious security problems.
Ensure you are using the same versions of the mysql gem, Ruby, and MySQL; differences in any of these may alter the expected output.
If you are at your wits' end and are unable to resolve the underlying issue, you can always send a custom delimiter and use it to split. Unfortunately, it will muck up the case that is actually working and will need to be stripped out there.
Here's how I would approach debugging the issue on the dedicated machine:
def db_query
  que = get_sanitized_message
  results = @con.query(que)
  require 'pry'
  binding.pry
  results.each { |x| chan_send(x) }
end
Add the pry gem to your Gemfile, or gem install pry.
Update your code to use pry, as shown above.
This will open a pry console when the binding.pry line is hit, and you can interrogate almost everything in your running application.
I would take a look at results and see if it's an array. Just type results in the console and it will print out the value. Also type out results.class. It's possible that query is returning some special result set object that is not an array, but that has a method to access the result array.
If results is an array, then the issue is most likely in chan_send. Perhaps it needs to be using something like puts vs print to ensure there's a new line after each message. Is it possible that you have different versions of your codebase deployed? I would also add a sleep 1 within the each block to ensure that this is not related to your handling of messages arriving at the same time.

Second RMySQL operation fails - why?

I am running a script that stores different datasets to a MySQL database. This works so far, but only sequentially, e.g.:
# write table1
replaceTable(con, tbl = "table1", dframe = dframe1)
# write table2
replaceTable(con, tbl = "table2", dframe = dframe2)
If I select both (I use StatET / Eclipse) and run the selection, I get an error:
Error in function (classes, fdef, mtable) :
unable to find an inherited method for function "dbWriteTable",
for signature "MySQLConnection", "data.frame", "data.frame".
I guess this has to do with the fact that my con is still busy or so when the second request is started. When I run the script line by line it works just fine. Hence I wonder: how can I tell R to wait until the first request is finished and only then go ahead? How can I make R scripts interactive (console-only, like the plot examples; no tcl/tk)?
EDIT:
require(RMySQL)

replaceTable <- function(con, tbl, dframe) {
  if (dbExistsTable(con, tbl)) {
    dbRemoveTable(con, tbl)
    dbWriteTable(con, tbl, dframe)
    cat("Existing database table updated / overwritten.")
  } else {
    dbWriteTable(con, tbl, dframe)
    cat("New database table created")
  }
}
dbWriteTable has two important arguments:
overwrite: a logical specifying whether to overwrite an existing table or not. Its default is ‘FALSE’.
append: a logical specifying whether to append to an existing table in the DBMS. Its default is ‘FALSE’.
For past projects I have successfully achieved appending, overwriting, creating, ... of tables with the proper combinations of these. In your replaceTable, for instance, dbWriteTable(con, tbl, dframe, overwrite = TRUE) would replace the dbExistsTable/dbRemoveTable steps with a single call.