AutoHotkey script opens Access file as read-only

Using AutoHotkey to launch a Microsoft Access application from an unknown path, the script works only when the path and file name are given literally. When the script instead uses a variable, Access opens the file, but as read-only.
Why? Is there a fix?
Does the script lack some permission?
The file opens normally using this:
acc := ComObjCreate("Access.Application")
acc.OpenCurrentDatabase("d:\MyDatabases\MyDB.accdb")
acc.Visible := true
acc := Nothing
The file opens read-only using this:
MyDB := A_ScriptDir "/MyDB.accdb"
StringReplace,MyDB,MyDB,\,/,All
acc := ComObjCreate("Access.Application")
acc.OpencurrentDatabase(MyDB)
acc.Visible := true
acc := Nothing
Edit:
I’ve found a substitute that works:
MyDB := A_ScriptDir "/MyDB.accdb"
StringReplace,MyDB,MyDB,\,/,All
acc := ComObjGet(MyDB)
acc := Nothing

Your problem is most likely the StringReplace call you are using.
First, StringReplace is a deprecated command; the current function is StrReplace.
Second, it looks like you are attempting to change all instances of / to \; however, that is not what the call accomplishes. In fact, you are doing the opposite. What you likely meant was StringReplace, MyDB, MyDB, /, \, All.
That would change all instances of / to \, instead of \ to /, which makes more sense in this scenario.
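For example, here is a minimal sketch of what I think you intended (untested, and assuming an AutoHotkey v1 build recent enough to have StrReplace):
MyDB := A_ScriptDir "\MyDB.accdb"
MyDB := StrReplace(MyDB, "/", "\") ; normalize any forward slashes to backslashes (a no-op if the path already uses backslashes)
acc := ComObjCreate("Access.Application")
acc.OpenCurrentDatabase(MyDB)
acc.Visible := true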


Encode String in XQuery

I wrote a function in my web application, based on eXist-db, to export some XML elements to CSV with XQuery. Everything works fine, but I have some umlauts like ü, ä or ß in my elements and they are displayed the wrong way in my CSV. I tried to encode the content using fn:normalize-unicode, but that is not working.
Here is a minimal example of my code:
let $input =
<root>
<number>1234</number>
<name>Aufmaß</name>
</root>
let $csv := string-join(
for $ta in $input
return concat($ta/number/text(), fn:normalize-unicode($ta/name/text())))
let $csv-ueber-string := concat($csv-ueber, string-join($massnahmen, $nl))
let $set-content-type := response:set-header('Content-Type', 'text/csv')
let $set-accept := response:set-header('Accept', 'text/csv')
let $set-file-name := response:set-header('Content-Disposition', 'attachment; filename="export.csv"')
return response:stream($csv, '')
It's very unlikely indeed that there's anything wrong with your query, or that there's anything you can do in your query to correct this.
The problem is likely to be either
(a) the input data being passed to your query is in a different character encoding from what the query processor thinks it is
(b) the output data from your query is in a different character encoding from what the recipient of the output thinks it is.
A quick glance at your query suggests that it doesn't actually have any external input other than the query source code itself. But the source code is one of the inputs, and that's a possible source of error. A good way to eliminate this possibility might be to see what happens if you replace
<name>Aufmaß</name>
by
<name>Aufma{codepoints-to-string(223)}</name>
If that solves the problem, then your query source text is not in the encoding that the query compiler thinks it is.
The other possibility is that the problem is on the output side, and frankly, this seems more likely. You seem to be producing an HTTP response stream as output, and constructing the HTTP headers yourself. I don't see any evidence that you are setting any particular encoding in the HTTP response headers. The response:stream() function is vendor-specific and I'm not familiar with its details, but I suspect that you need to ensure it encodes the content in UTF-8 and that the HTTP headers say it is in UTF-8; this may be by extra parameters to the function, or by external configuration options.
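For example (a guess on my part, so check it against the eXist-db documentation), declaring the charset in the header you already set might be enough:
let $set-content-type := response:set-header('Content-Type', 'text/csv; charset=utf-8')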
As you might expect, eXist is serializing the CSV as Unicode (UTF-8). But when you open the resulting export.csv file directly in Excel (i.e., via File > Open), Excel will try its best to guess the encoding of the CSV file. CSV files lack any way of declaring their encoding, so applications may well guess wrong, as it sounds like Excel did in your case. On my computer, Excel guesses wrong too, mangling Aufmaß as AufmaÃŸ. Here's the way to force Excel to use the encoding of a UTF-8 encoded CSV file such as the one produced by your query.
In Excel, start a new spreadsheet via File > New
Select File > Import to bring up a series of dialogs that let you specify how to import the CSV file.
In the first dialog, select "CSV file" as the type of file.
In the next dialog, titled "Text Import Wizard - Step 1 of 3", select "Unicode (UTF-8)" as the "File origin". (At least these are the titles and order in my copy of MS Excel for Mac 2016.)
Proceed through the remainder of the dialogs, keeping the default values.
Excel will then place the contents of your export.csv in the new spreadsheet.
Lastly, let me provide the query I used to test and confirm that the CSV file produced by eXist does open as expected when following the directions above. The query is essentially the same as yours, but it fixes some problems that prevented me from running yours directly. I saved this query at /db/csv-test.xq and called it via http://localhost:8080/exist/rest/db/csv-test.xq:
xquery version "3.1";
let $input :=
<root>
<number>1234</number>
<name>Aufmaß</name>
</root>
let $cell-separator := ","
let $column-headings := $input/*/name()
let $header-row := string-join($column-headings, $cell-separator)
let $body-row := string-join($input/*/string(), $cell-separator)
let $newline := "&#10;" (: newline character :)
let $csv := string-join(($header-row, $body-row), $newline)
return
response:stream-binary(
util:string-to-binary($csv),
"text/csv",
"export.csv"
)

Create a document on a network directory from a 4D database

I'd like to ask if it's possible for 4D to create a document on a network directory. For example:
vIP:="\\100.100.100.100" // this is a hypothetical IP
vPath:=vIP+"\storage\"
vDoc:=Create document(vPath+"notes.txt")
If(OK=1)
SEND PACKET(vDoc;"Hello World")
CLOSE DOCUMENT(vDoc)
End if
One way to do this is to map the second machine's drive on the machine where your 4D database is running; the mapped drive then behaves like a local drive.
For example, I have mapped a drive that is named "D" on the remote machine, and it becomes the "W" drive on the machine where the 4D database is running.
Then you can use this code:
c_Text(vPath)
vPath:="W:\var\www....." //temp path.....
vDoc:=Create document(vPath+"notes.txt")
If(OK=1)
SEND PACKET(vDoc;"Hello World")
CLOSE DOCUMENT(vDoc)
End if
I know this is an old question, but there aren't too many of us 4D coders floating around here, so I'll answer this for posterity!
Yes, you can create a document on a network share like this, assuming you have the appropriate permissions to do so.
In this case, I think you just need to be careful of how you escape the path. Ensure that you're doubling up your backslashes so that the code block looks like this (note the extra backslashes around the IP address and folder name):
vIP:="\\\\100.100.100.100" // this is a hypothetical IP
vPath:=vIP+"\\storage\\"
vDoc:=Create document(vPath+"notes.txt")
If (OK=1)
SEND PACKET(vDoc;"Hello World")
CLOSE DOCUMENT(vDoc)
End if
Hope this helps!
Yes, although undocumented, the CREATE DOCUMENT command does work with a valid UNC path, provided that you have sufficient privileges to create a document at the given path.
However, there is an issue with your sample code, and it comes down to your use of the backslash \ character.
The backslash \ is used for escape sequences in 4D, and because it escapes many other characters it must also be escaped itself. Simply doubling every backslash in your sample code from \ to \\ should correct the issue.
Your sample code:
vIP:="\\100.100.100.100" // this is a hypothetical IP
vPath:=vIP+"\storage\"
vDoc:=Create document(vPath+"notes.txt")
If(OK=1)
SEND PACKET(vDoc;"Hello World")
CLOSE DOCUMENT(vDoc)
End if
Should be written like this:
vIP:="\\\\100.100.100.100" // this is a hypothetical IP
vPath:=vIP+"\\storage\\"
vDoc:=Create document(vPath+"notes.txt")
If(OK=1)
SEND PACKET(vDoc;"Hello World")
CLOSE DOCUMENT(vDoc)
End if
Your code could be further improved by using Test path name to confirm that the path is valid and that the file does not already exist. If it does exist, you could even use Open document and Set document position to append to the document, like this:
vIP:="\\\\100.100.100.100"
vPath:=vIP+"\\storage\\"
vDocPath:=vPath+"notes.txt"
If (Test path name(vPath)=Is a folder)
    // is a valid path
    If (Not(Test path name(vDocPath)=Is a document))
        // document does not exist
        vDoc:=Create document(vDocPath)
        If (OK=1)
            SEND PACKET(vDoc;"Hello World")
            CLOSE DOCUMENT(vDoc)
        End if
    Else
        // file already exists at location!
        vDoc:=Open document(vDocPath)
        If (OK=1)
            SET DOCUMENT POSITION(vDoc;0;2) // position 0 bytes from EOF
            SEND PACKET(vDoc;"\rHello Again World") // new line prior to Hello
            CLOSE DOCUMENT(vDoc)
        End if
    End if
Else
    // path is not valid!
    ALERT(vPath+" is invalid")
End if

How to get a bash-like environment variable value?

There is a string like this: set mystring "$ENV_NAME/c/a/b/c"
How to get the full path?
To get the full path, you will need to use file join. To get to the environment variable, you will need to access the global env array:
set fullPath [file join $env(ENV_NAME) c a b c]
If ENV_NAME=/usr/bin, then the above will return fullPath as /usr/bin/c/a/b/c. You will get similar results on Windows.
Unfortunately, your question is not very clear.
You say that you have the string set mystring "$ENV_NAME/c/a/b/c", which could mean either that this is some user-supplied input, or that it is part of your program/Tcl script.
If this is indeed user-supplied input, I suggest you use eval:
proc substEnv {input} {
    set __ENV [array get ::env]
    dict with __ENV {}
    eval $input
}
puts [substEnv {set mystring "$ENV_NAME/c/a/b/c"}]
If you can trust the user, this is fine; otherwise I suggest using a safe interpreter. Note that a safe interpreter does not have access to the ::env array, so you have to pass its contents to the safe interpreter yourself.
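Something along these lines (an untested sketch; substEnvSafe and the variable names are mine) would evaluate the input in a safe interpreter while passing the environment in explicitly:
proc substEnvSafe {input} {
    # Create a safe slave interpreter; it has no access to ::env or the file system.
    set safe [interp create -safe]
    # Copy each environment variable into the slave as an ordinary variable.
    foreach {name value} [array get ::env] {
        $safe eval [list set $name $value]
    }
    set result [$safe eval $input]
    interp delete $safe
    return $result
}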
But if this is part of your program, I suggest you use file join instead
set path [file join $::env(ENV_NAME) c a b c]
file join also deals with edge cases such as $::env(ENV_NAME) being the root directory (/ on *nix or C:\ on Windows, both of which already end with the path separator).
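For instance (hypothetical values, using Tcl's internal forward-slash form):
file join / c a b c      ;# -> /c/a/b/c, no doubled separator
file join C:/ c a b c    ;# -> C:/c/a/b/c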

Vim function to copy a code function to clipboard

I want a keyboard shortcut in Vim to copy a whole function from a PowerShell file to the Windows clipboard. Here is the command for it:
1) va{Vok"*y - visual mode, select {} block, visual line mode, go to selection top, include header line, yank to Windows clipboard.
But it works only for functions without an inner {} block. Here is a workaround:
2) va{a{a{a{a{a{a{Vok"*y - the same as (1), but the {} block selection is repeated, so it works for code blocks with up to 7 nested {} levels.
The thing is, the (1) command works fine when called from a Vim function, but (2) misbehaves and selects the wrong code block when called from a Vim function:
function! CopyCodeBlockToClipboard()
    let cursor_pos = getpos('.')
    execute "normal" 'va{a{a{a{a{a{a{Vok"*y'
    call setpos('.', cursor_pos)
endfunction
" Copy code block to clipboard
map <C-q> :call CopyCodeBlockToClipboard()<CR>
What am I doing wrong here in the CopyCodeBlockToClipboard?
The (2) command works as expected when executed directly in vim.
UPDATE:
I've noticed that if there are more a{ repetitions than there are enclosing blocks around the cursor, then Vim won't execute the V.
It looks like Vim handles errors differently here. The extra a{ produces an error; direct command execution just ignores it, but execution from within a function via :normal fails and won't run V (or probably any command that follows the error).
Any workaround for this?
Try this function:
function! CopyCodeBlockToClipboard()
    let cursor_pos = getpos('.')
    let i = 1
    let done = 0
    while !done
        call setpos('.', cursor_pos)
        execute "normal" 'v' . i . 'aBVok"*y'
        if mode() =~ "^[vV]"
            let done = 1
        else
            let i = i + 1
        endif
    endwhile
    execute "normal \<ESC>"
    call setpos('.', cursor_pos)
endfunction
This performs an execute command to select blocks until it fails to select a larger block ([count]aB selects [count] enclosing blocks). It seems that when the selection fails we end up stuck in visual mode, so we can use mode() to check for this.
When this function exits you should be back in normal mode, with the cursor restored to where you started, and the yanked function will be in the * register.
This macro should come close to what you want to achieve:
?Function<CR> jump to first Function before the cursor position
v enter visual mode
/{<CR> extend it to next {
% extend it to the closing }
"*y yank into the system clipboard

Regex match of apostrophe in AutoHotkey script

I have an AutoHotkey script that looks up a word in a bilingual dictionary when I double-click any word on a webpage. If I click on something like "l'homme", the l' is copied into the clipboard along with homme. I want the AutoHotkey script to strip out everything up to and including the apostrophe.
I can't get AutoHotkey to match the apostrophe. Below is a sample script that prints out the character codes of the first four characters. If I double-click "l'homme" on this page, it prints out: 108, 8217, 104, 111. The second character is clearly not the ASCII code for an apostrophe. I think it's most probably something to do with the HTML representation of an apostrophe, but I haven't been able to get to the bottom of it. I've tried using AutoHotkey's Transform, HTML command without any luck.
I've tried both the Unicode and non-Unicode versions of AutoHotkey. I've saved the script in UTF-8.
#Persistent
return
OnClipboardChange:
;debugging info:
c1 := Asc(SubStr(clipboard,1,1))
c2 := Asc(SubStr(clipboard,2,1))
c3 := Asc(SubStr(clipboard,3,1))
c4 := Asc(SubStr(clipboard,4,1))
Msgbox 0,info, char1: %c1% `nchar2: %c2% `nchar3: %c3% `nchar4: %c4%
;the line below is what I want to use, but it doesn't find a match
stripToApostrophe:= RegExReplace(clipboard,".*’")
There is the standard straight quote ' and there is the typographic ("curly") quote ’.
Your regex might have to be
.*['’]
to cover both cases.
You may also want to make it non-greedy, in case a word contains more than one apostrophe and you only want to strip up to the first one:
.*?['’]
EDIT:
Interesting. I tried this:
w1 := "l’homme"
w2 := "l'homme"
c1 := Asc(SubStr(w1,2,1))
c2 := Asc(SubStr(w2,2,1))
v1 := RegExReplace(w1, ".*?['’]")
v2 := RegExReplace(w2, ".*?['’]")
MsgBox 0,info, %c1% - %c2% - %v1% - %v2%
return
And got back 146 - 39 - homme - homme. I'm editing from Notepad. Is it possible that our regex, while we think we're typing character 8217, actually contains character 146 after pasting?
EDIT:
Apparently Unicode support was added only in AutoHotkey_L. Using it, I believe the correct regex should be either
".*?[\x{0027}\x{0092}\x{2019}]"
or
".*?(" Chr(0x0027) "|" Chr(0x0092) "|" Chr(0x2019) ")"