Using web2py (Version 2.8.2-stable+timestamp.2013.11.28.13.54.07) on 64-bit Windows, I have the following problem:
There is an exe program that is started on user request (first a txt file is created, then p is triggered):
p = subprocess.Popen(['woshi_engine.exe', scriptId], shell=True, stdout=subprocess.PIPE, cwd=path_1)
While the exe file is running, it creates a txt file.
The program is stopped on user request by deleting the file the program needs as input.
While the exe is running, there are other requests the user can trigger. The request does come to the server (I used Microsoft Network Monitor to check that), but the function is not triggered.
I tried using the scheduler, but with no success; same problem.
I am really stuck on this problem.
Thank you for your help.
With the help of the web2py Google group, here is the solution.
I used the scheduler. I created a scheduler.py file with the following code:
def runWoshiEngine(scriptId, path):
    import os, sys
    import time
    import subprocess
    p = subprocess.Popen(['woshi_engine.exe', scriptId], shell=True, stdout=subprocess.PIPE, cwd=path)
    return dict(status=1)

from gluon.scheduler import Scheduler
scheduler = Scheduler(db)
In my controller function:
task = scheduler.queue_task(runWoshiEngine, [scriptId, path])
You also have to import the scheduler there (from gluon.scheduler import Scheduler).
Then I run the scheduler from the command prompt with the following (so, if I understood correctly, you have two instances of web2py running: one for the web server, one for the scheduler):
web2py.py -K woshiweb -D 0 (-D 0 enables verbose logging, so it can be removed)
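If you need to report progress back to the user, a minimal sketch (assuming the task id was saved somewhere such as the session when the task was queued) is to poll the scheduler from another controller action:

def engine_status():
    # Sketch: session.engine_task_id is an assumed place where task.id was stored.
    status = scheduler.task_status(session.engine_task_id, output=True)
    # status.scheduler_task holds the task record; status.result the return value
    return dict(state=status.scheduler_task.status, result=status.result)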
I need to mount Azure File Storage to Linux pools when they are being spun up. I am following the instructions given here to achieve that: mounting Azure-File Storage to Batch. Specifically, in my Azure CLI script, under the pool's start-task commands, I am inserting something which looks like this:
--start-task-command-line="apt-get update && apt-get install cifs-utils && mkdir -p {} && mount -t cifs {} {} -o vers=3.0,username={},password={},dir_mode=0777,file_mode=0777,serverino".format(_COMPUTE_NODE_MOUNT_POINT, _STORAGE_ACCOUNT_SHARE_ENDPOINT, _COMPUTE_NODE_MOUNT_POINT, _STORAGE_ACCOUNT_NAME, _STORAGE_ACCOUNT_KEY)
but when I run the tasks with the auto-user that Batch uses by default, I get an error in the stderr.txt file mentioning that it was unable to create the "/mnt/MyAzureFileshare" directory, so my guess is the mounting didn't occur during the pool creation process. I saw a very similar question to the one I am facing: setting custom user identity for tasks, and even the official Microsoft documentation goes over this in detail: Run Tasks under User accounts in Batch, but none of them shed light on how to achieve this using the Azure CLI.
Installing the specific packages needed so that Azure File Storage can be mounted requires sudo privileges, and I am unable to get those through the Azure CLI. In order to recreate the error, I would recommend having a look at this: app to replicate the issue.
What I want to achieve is:
1) Create a pool with Azure File Storage mounted on it, and elevate the privileges of the auto-user to the admin level, using the Azure CLI
2) Run tasks as the same auto-user with admin privileges, using the Azure CLI
Update 1:
I was able to mount Azure File Storage with Batch using the Azure CLI. However, I am still not able to populate the Azure File Storage with the output files of the app that I deployed on the Batch nodes. I get no error in the stderr.txt files.
The output of the stderr.txt file is:
WARNING: In "login" auth mode, the following arguments are ignored: --account-key
Alive[################################################################] 100.0000%
Finished[#############################################################] 100.0000%
pdf--->png: 0%| | 0/1 [00:00<?, ?it/s]
pdf--->png: 100%|##########| 1/1 [00:00<00:00, 1.16it/s]WARNING: In "login" auth mode, the following arguments are ignored: --account-key
WARNING: uploading /mnt/batch/tasks/workitems/pdf-processing-job-2018-10-29-15-36-15/job-1/mytask-0/wd/png_files-2018-10-29-15-39-25/akronbeaconjournal_20180108_AkronBeaconJournal_0___page---0.png
Alive[################################################################] 100.0000%
Finished[#############################################################] 100.0000%
The Python app that was deployed on the Batch nodes is:
import os
import fitz
import subprocess
import argparse
import time
from tqdm import tqdm
import sentry_sdk
import sys
import datetime

def azure_active_directory_login(azure_username, azure_password, azure_tenant):
    try:
        azure_login_output = subprocess.check_output(["az", "login", "--service-principal", "--username", azure_username, "--password", azure_password, "--tenant", azure_tenant])
    except subprocess.CalledProcessError:
        sentry_sdk.capture_message("Invalid Azure Login Credentials")
        sys.exit("Invalid Azure Login Credentials")

def download_from_azure_blob(azure_storage_account, azure_storage_account_key, input_azure_container, file_to_process, pdf_docs_path):
    file_to_download = os.path.join(input_azure_container, file_to_process)
    try:
        subprocess.check_output(["az", "storage", "blob", "download", "--container-name", input_azure_container, "--file", os.path.join(pdf_docs_path, file_to_process), "--name", file_to_process,
                                 "--account-key", azure_storage_account_key, "--account-name", azure_storage_account, "--auth-mode", "login"])
    except subprocess.CalledProcessError:
        sentry_sdk.capture_message("unable to download the pdf file")
        sys.exit("unable to download the pdf file")

def pdf_to_png(input_folder_path, output_folder_path):
    pdf_files = [x for x in os.listdir(input_folder_path) if x.endswith((".pdf", ".PDF"))]
    pdf_files.sort()
    for pdf in tqdm(pdf_files, desc="pdf--->png"):
        doc = fitz.open(os.path.join(input_folder_path, pdf))
        page_count = doc.pageCount
        for f in range(page_count):
            page = doc.loadPage(f)
            pix = page.getPixmap()
            if pdf.endswith(".pdf"):
                png_filename = pdf.split(".pdf")[0] + "___" + "page---" + str(f) + ".png"
                pix.writePNG(os.path.join(output_folder_path, png_filename))
            elif pdf.endswith(".PDF"):
                png_filename = pdf.split(".PDF")[0] + "___" + "page---" + str(f) + ".png"
                pix.writePNG(os.path.join(output_folder_path, png_filename))

def upload_to_azure_blob(azure_storage_account, azure_storage_account_key, output_azure_container, png_docs_path):
    try:
        subprocess.check_output(["az", "storage", "blob", "upload-batch", "--destination", output_azure_container, "--source", png_docs_path,
                                 "--account-key", azure_storage_account_key, "--account-name", azure_storage_account, "--auth-mode", "login"])
    except subprocess.CalledProcessError:
        sentry_sdk.capture_message("Unable to upload file to the container")

def upload_to_fileshare(png_docs_path):
    try:
        subprocess.check_output(["cp", "-r", png_docs_path, "/mnt/MyAzureFileShare/"])
    except subprocess.CalledProcessError:
        sentry_sdk.capture_message("unable to upload to azure file share")

if __name__ == "__main__":
    # Credentials
    sentry_sdk.init("<Sentry Creds>")
    azure_username = <azure_username>
    azure_password = <azure_password>
    azure_tenant = <azure_tenant>
    azure_storage_account = <azure_storage_account>
    azure_storage_account_key = <azure_account_key>
    try:
        parser = argparse.ArgumentParser()
        parser.add_argument("input_azure_container", type=str, help="Location to download files from")
        parser.add_argument("output_azure_container", type=str, help="Location to upload files to")
        parser.add_argument("file_to_process", type=str, help="file link in azure blob storage")
        args = parser.parse_args()
        timestamp = time.time()
        timestamp_humanreadable = datetime.datetime.fromtimestamp(timestamp).strftime('%Y-%m-%d-%H-%M-%S')
        task_working_dir = os.getcwd()
        file_to_process = args.file_to_process
        input_azure_container = args.input_azure_container
        output_azure_container = args.output_azure_container
        pdf_docs_path = os.path.join(task_working_dir, "pdf_files" + "-" + timestamp_humanreadable)
        png_docs_path = os.path.join(task_working_dir, "png_files" + "-" + timestamp_humanreadable)
        os.mkdir(pdf_docs_path)
        os.mkdir(png_docs_path)
    except Exception as e:
        sentry_sdk.capture_exception(e)
    azure_active_directory_login(azure_username, azure_password, azure_tenant)
    download_from_azure_blob(azure_storage_account, azure_storage_account_key, input_azure_container, file_to_process, pdf_docs_path)
    pdf_to_png(pdf_docs_path, png_docs_path)
    upload_to_azure_blob(azure_storage_account, azure_storage_account_key, output_azure_container, png_docs_path)
    upload_to_fileshare(png_docs_path)
The upload_to_fileshare() function in the Python app above should initiate the upload, but in my case nothing happens, and there is no error from the copy operation in the stderr.txt files.
Please let me know a way to troubleshoot this issue.
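One way to make such a silent failure visible (a sketch, not part of the original app) is to capture the copy command's stderr instead of discarding it, so the actual error text ends up in Sentry and in the task output:

import subprocess
import sentry_sdk

def upload_to_fileshare(png_docs_path):
    try:
        # Redirect stderr into the captured output so a failure leaves a message
        subprocess.check_output(["cp", "-r", png_docs_path, "/mnt/MyAzureFileShare/"],
                                stderr=subprocess.STDOUT)
    except subprocess.CalledProcessError as e:
        # e.output now contains cp's own error text (e.g. permission denied,
        # or a missing mount point on this particular node)
        sentry_sdk.capture_message("fileshare copy failed: %s" % e.output.decode(errors="replace"))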
It does not look like the run-elevated parameter is exposed via a command-line argument in the CLI. You can, however, pass a JSON file to the --json argument, formatted as the REST API object, to get the full functionality.
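As a sketch of what that JSON might look like (the property names follow the Batch REST API pool object; the pool id, VM size, image, and mount command below are placeholders, not values from the question), you could generate the file from Python and hand it to the CLI:

import json

# Sketch: a Batch REST API pool object whose start task runs as an
# elevated (admin) auto-user. Adjust ids, image, and command to your setup.
pool = {
    "id": "my-pool",
    "vmSize": "STANDARD_A1",
    "virtualMachineConfiguration": {
        "imageReference": {
            "publisher": "Canonical",
            "offer": "UbuntuServer",
            "sku": "18.04-LTS",
        },
        "nodeAgentSKUId": "batch.node.ubuntu 18.04",
    },
    "startTask": {
        # Same cifs mount command as in the question, elided here
        "commandLine": "/bin/bash -c 'apt-get update && apt-get install -y cifs-utils && mkdir -p ... && mount -t cifs ...'",
        "userIdentity": {
            "autoUser": {"scope": "pool", "elevationLevel": "admin"}
        },
        "waitForSuccess": True,
    },
}

with open("pool.json", "w") as f:
    json.dump(pool, f, indent=2)

The pool can then be created with the CLI's JSON option (e.g. az batch pool create --json-file pool.json; the exact flag name depends on your CLI version).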
We recently installed a server dedicated to unit tests, which deploys updates automatically via Jenkins when commits are made, and sends mails when a regression is noticed. This requires our database to always be up to date.
Since the database schema reference is our MWB file, we added some scripts during deploy which export the .mwb to a .sql (using Python). This worked fine... but still has some issues.
Our main concern is that the functions attached to the schema are not exported at all, which makes the DB unusable.
We'd like to hook into the Python code to make it export those scripts... but didn't find enough information about it.
Here is the only piece of documentation we found. It's not very clear to us, and we didn't find any information about exporting scripts.
All we found is that a db_Script class exists. We don't know where we can find its instances in our execution context, nor whether they can be exported easily. Did we miss something?
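Since the GRT tree is introspectable from --run-python, one option is a small exploratory sketch to locate where the db_Script instances live; note that 'scripts' below is a guess at the attribute name to probe, not documented API:

# Exploratory sketch: probe the GRT tree for attached scripts.
import grt

model = grt.root.wb.doc.physicalModels[0]
print(dir(model))  # list the model's members to spot a scripts-like collection

if hasattr(model, 'scripts'):
    for s in model.scripts:
        # Each entry should be a db_Script; dump its members to find the one
        # holding the SQL text, so it can be appended to the OUTPUT file.
        print(s.name, dir(s))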
For reference, here is the script we currently use for the mwb to sql conversion (mwb2sql.sh).
It calls MySQL Workbench from the command line (we use a dummy X server to flush graphical output).
What we need to complete is the Python part passed in our command-line call of Workbench.
#!/bin/sh
# generate sql from mwb
# usage: sh mwb2sql.sh {mwb file} {output file}
# prepare: set env MYSQL_WORKBENCH
set -e
if [ "$MYSQL_WORKBENCH" = "" ]; then
  export MYSQL_WORKBENCH="/usr/bin/mysql-workbench"
fi
export INPUT=$(cd $(dirname $1);pwd)/$(basename $1)
export OUTPUT=$(cd $(dirname $2);pwd)/$(basename $2)
"$MYSQL_WORKBENCH" \
  --open "$INPUT" \
  --run-python "
import os
import grt
from grt.modules import DbMySQLFE as fe
c = grt.root.wb.doc.physicalModels[0].catalog
fe.generateSQLCreateStatements(c, c.version, {})
fe.createScriptForCatalogObjects(os.getenv('OUTPUT'), c, {})" \
  --quit-when-done
I am trying to start a new Python 3.3 app on OpenShift. Due to the changes they made a couple of weeks ago, I am having trouble getting the new app started. I copied a known working app into a new OpenShift app. I am now getting this error:
Target WSGI script '/var/lib/openshift/<masked>/app-root/runtime/repo/wsgi.py' does not contain WSGI application 'application'.
My wsgi.py code is the same in both the working and non-working apps:
import os
from myapp import main_production_no_pserve

if __name__ == '__main__':
    ip = os.environ['OPENSHIFT_PYTHON_IP']
    port = int(os.environ['OPENSHIFT_PYTHON_PORT'])
    app = main_production_no_pserve(global_config=None)
    from waitress import serve
    print("Starting Waitress Server on http://{0}:{1}".format(ip, port))
    serve(app, host=ip, port=port, threads=50)
The working app was created before the mid-March OpenShift change. Something is different now, since the exact same code does not work.
It seems OpenShift is now looking for an application callable entry point within wsgi.py. In the old method of the working app, wsgi.py was actually executed (rather than searched for an application callable). For kicks, I tried this, based on the default wsgi.py created for a new OpenShift app:
import os
from myapp import main_production_no_pserve

def application(environ, start_response):
    ip = os.environ['OPENSHIFT_PYTHON_IP']
    port = int(os.environ['OPENSHIFT_PYTHON_PORT'])
    app = main_production_no_pserve(global_config=None)
    from waitress import serve
    print("Starting Waitress Server on http://{0}:{1}".format(ip, port))
    serve(app, host=ip, port=port, threads=50)
    return
With that change, I get the dreaded OSError: [Errno 98] Address already in use, and rhc app-force-stop does nothing.
I am at a loss why one app works and another does not. I would appreciate some help. :)
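For what it's worth, the error pattern suggests the new cartridge imports wsgi.py under its own WSGI server, which already owns the port; starting waitress inside application() then collides with it. A minimal sketch of a wsgi.py matching what the new setup appears to expect (assuming main_production_no_pserve() returns a standard WSGI callable) would expose the app object and start no server by hand:

# Sketch: let OpenShift's own WSGI server run the app.
from myapp import main_production_no_pserve

application = main_production_no_pserve(global_config=None)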
I have managed to create an instance and SSH into it. However, I have a couple of questions regarding Google Compute Engine.
I understand that I will be charged for the time my instance is running, that is, until I exit out of the instance. Is my understanding correct?
I wish to run a batch job (a Java program) on my instance. How do I make my instance stop automatically after the job is complete (so that I don't get charged for the additional time it might run)?
If I start the job and disconnect my PC, will the job continue to run on the instance?
Correct; instances are charged for the time they are running (to the minute, with a 10-minute minimum). Instances run from the time they are started via the API until they are stopped via the API. It doesn't matter whether any user is logged in via SSH or not. For most automated use cases, users never log in; programs are installed and started via startup scripts.
You can view your running instances via the Cloud Console, to confirm if any are currently running.
If you want to stop your instance from inside the instance, the easiest way is to start the instance with the compute-rw Service Account Scope and use gcutil.
For example, to start your instance from the command line with the compute-rw scope:
$ gcutil --project=<project-id> addinstance <instance name> --service_account_scopes=compute-rw
(this is the default when manually creating an instance via the Cloud Console)
Later, after your batch job completes, you can remove the instance from inside the instance:
$ gcutil deleteinstance -f <instance name>
You can put a halt command at the end of your batch script (assuming that you write your results to persistent disk).
After halt, the instance will be in the TERMINATED state and you will not be charged.
See https://developers.google.com/compute/docs/pricing and scroll down to "Instance uptime".
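For example, if the batch job is driven from a Python wrapper, a minimal sketch (assuming the results have already been written to persistent disk) could be:

import subprocess

# ... run the batch job and write results to persistent disk first ...

# Power the VM off; the instance then moves to the TERMINATED state
# and uptime charges stop.
subprocess.call(["sudo", "halt"])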
You can auto-shutdown the instance after model training: just run a few extra lines of code once the training is complete.
from googleapiclient import discovery
from oauth2client.client import GoogleCredentials
credentials = GoogleCredentials.get_application_default()
service = discovery.build('compute', 'v1', credentials=credentials)
# Project ID for this request.
project = 'xyz' # Project ID
# The name of the zone for this request.
zone = 'xyz' # Zone information
# Name of the instance resource to stop.
instance = 'xyz' # instance id
request = service.instances().stop(project=project, zone=zone, instance=instance)
response = request.execute()
Add this to your model training script. When the training is complete, the GCP instance automatically shuts down.
More info on official website:
https://cloud.google.com/compute/docs/reference/rest/v1/instances/stop
If you want to stop the instance using a Python script, you can do it this way:
from google.cloud.compute_v1.services.instances import InstancesClient
from google.oauth2 import service_account
instance_client = InstancesClient.from_service_account_file(<location-path>)
zone = <zone>
project = <project>
instance = <instance_id>
instance_client.stop(project=project, instance=instance, zone=zone)
In the above script, I have assumed you are using a service account for authentication. For documentation of the libraries used, see:
https://googleapis.dev/python/compute/latest/compute_v1/instances.html
Is there a way to connect to a network drive that requires a different username/password than the username/password of the user running the package?
I need to copy files from a remote server. Right now I map the network drive in Windows Explorer, then do a File System task. However, eventually this package will be run automatically from a different machine and will need to map the network drive on its own. Is this possible?
You can use the Execute Process task with the "net use" command to create the mapped drive. Here's how the properties of the task should be set:
Executable: net
Arguments: use \\Server\SomeShare YourPassword /user:Domain\YourUser
Any File System tasks following the Execute Process will be able to access the files.
Alternative Method
This SQL Server Select article covers the steps in detail, but the basics are:
1) Create an "Execute Process Task" to map the network drive (this maps it to Z:)
Executable: cmd.exe
Arguments: /c "NET USE Z: "\\servername\shareddrivename" /user:mydomain\myusername mypassword"
2) Then run a "File System Task" to perform the copy. Remember that the destination "Flat File Connection" must have "DelayValidation" set to True as z:\suchandsuch.csv won't exist at design time.
3) Finally, unmap the drive when you're done with another "Execute Process Task"
Executable: cmd.exe
Arguments: /c "NET USE Z: /delete"
Why not use an FTP task to GET the files over to the local machine? Run SSIS on the local machine. When transferring using FTP in binary mode, it's really fast. Just remember that the row delimiter for SSIS should be LF, not CRLF, as binary FTP does not convert LF (Unix) to CRLF (Windows).
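If you want to sanity-check the binary transfer outside SSIS first, a quick sketch with Python's ftplib (hypothetical host, credentials, and file name) shows the idea; RETR in binary mode copies bytes verbatim, so LF line endings survive:

from ftplib import FTP

# Hypothetical connection details, for illustration only
ftp = FTP("ftp.example.com")
ftp.login(user="myuser", passwd="mypassword")

# retrbinary transfers the file byte-for-byte (no LF -> CRLF translation),
# which is why the SSIS flat file's row delimiter must be LF.
with open("datafile.csv", "wb") as f:
    ftp.retrbinary("RETR datafile.csv", f.write)
ftp.quit()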
You have to map the network drive; here's an example that I'm using now:
profile = "false"
landingPadDir = Dts.Variables("strLandingPadDir").Value.ToString
resultsDir = Dts.Variables("strResultsDir").Value.ToString
user = Dts.Variables("strUserName").Value.ToString
pass = Dts.Variables("strPassword").Value.ToString
driveLetter = Dts.Variables("strDriveLetter").Value.ToString

objNetwork = CreateObject("WScript.Network")
CheckDrive = objNetwork.EnumNetworkDrives()

If CheckDrive.Count > 0 Then
    For intcount = 0 To CheckDrive.Count - 1 Step 2 ' if drive is already mapped, then disconnect it
        If CheckDrive.Item(intcount) = driveLetter Then
            objNetwork.RemoveNetworkDrive(driveLetter)
        End If
    Next
End If

objNetwork.MapNetworkDrive(driveLetter, landingPadDir, profile, user, pass)
From there, just use that driveLetter and access the file via the mapped drive.
I'm having one issue (which led me here) with a new script that accesses two share drives and performs some copy/move operations between the drives; I get an error from SSIS that says:
This network connection has files open or requests pending.
at Microsoft.VisualBasic.CompilerServices.LateBinding.InternalLateCall(Object o, Type objType, String name, Object[] args, String[] paramnames, Boolean[] CopyBack, Boolean IgnoreReturn)
at Microsoft.VisualBasic.CompilerServices.NewLateBinding.LateCall(Object Instance, Type Type, String MemberName, Object[] Arguments, String[] ArgumentNames, Type[] TypeArguments, Boolean[] CopyBack, Boolean IgnoreReturn)
at ScriptTask_3c0c366598174ec2b6a217c43470f581.ScriptMain.Main()
This happens only on the second run of the process; if I run it a third time, it all works fine. My guess is the connection isn't being properly closed, or it is not waiting for the copy/move to complete before moving forward, but I'm unable to find a "close" or "flush" command that prevents this error. If you have any solution, please let me know; in any case, the above code should get the drive mapped using your alternate credentials and let you access that share.
To make the package more robust, you can do the following:
In the first Execute Process Task, set FailTaskIfReturnCodeNotSuccessValue = False.
This will let the package run even if the last disconnect has not worked.
This is an older question, but more recent versions of SQL Server with SSIS allow you to use a proxy to execute SQL Server Agent jobs:
1) In SSMS, under Security > Credentials, set up a credential in the database mapped to the AD account you want to use.
2) Under SQL Server Agent, create a new proxy, giving it the credential from step 1 and permission to execute SSIS packages.
3) Under SQL Server Agent jobs, create a new job that executes your package.
4) Select the step that executes the package and click Edit. In the Run As dropdown, select the proxy you created in step 2.