I am trying to update a text file stored in S3 while keeping its permissions. Here is my code:
def update(key, str):
    s3 = boto.connect_s3()
    bucket = s3.get_bucket('bucket')
    k = bucket.get_key(key)
    acl = k.get_acl()            # capture the existing ACL
    k.set_contents_from_string(str)
    k.set_acl(acl)               # re-apply it after the overwrite
The error I get is as follows.
S3ResponseError: S3ResponseError: 403 Forbidden
<?xml version="1.0" encoding="UTF-8"?>
<Error>
<Code>AccessDenied</Code>
<Message>Access Denied</Message>
<RequestId>...</RequestId>
<HostId>...</HostId>
</Error>
I have also tried
k.set_contents_from_string(str, policy=acl)
What is the correct way to update/replace a file but keep the original file's permissions?
key.set_contents_from_string(str, policy='public-read', headers={'Content-Type': "something"})
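The line above applies a canned ACL ('public-read') at upload time. If the goal is instead to keep whatever ACL the object already had, the get_acl/set_acl pattern from the question is essentially right, and a 403 on it usually points at the caller lacking ACL permissions (s3:GetObjectAcl / s3:PutObjectAcl) rather than at the code. A minimal sketch with boto 2, assuming those permissions are granted (the function name is only illustrative):
```
import boto

def update_preserving_acl(bucket_name, key_name, new_contents):
    conn = boto.connect_s3()
    bucket = conn.get_bucket(bucket_name)
    key = bucket.get_key(key_name)

    original_acl = key.get_xml_acl()            # capture the current ACL as XML
    key.set_contents_from_string(new_contents)  # a plain overwrite resets the ACL to the default (private)
    key.set_xml_acl(original_acl)               # restore the captured grants
```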
I am new to Ruby on Rails and I want to read data from a JSON file in a specified directory, but I constantly get an error in chap3 (the file name):
Errno::ENOENT in TopController#chap3. No such file or directory # rb_sysopen - links.json.
In the console, I get the message:
Failed to load resource: the server responded with a status of 500 (Internal Server Error)
How can I fix that?
Code:
require "json"
class TopController < ApplicationController
def index
#message = "おはようございます!"
end
def chap3
data = File.read('links.json')
datahash = JSON.parse(data)
puts datahash.keys
end
def getName
render plain: "名前は、#{params[:name]}"
end
def database
#members = Member.all
end
end
JSON file:
{ "data": [
{"link1": "http://localhost:3000/chap3/a.html"},
{"link2": "http://localhost:3000/chap3/b.html"},
{"link3": "http://localhost:3000/chap3/c.html"},
{"link4": "http://localhost:3000/chap3/d.html"},
{"link5": "http://localhost:3000/chap3/e.html"},
{"link6": "http://localhost:3000/chap3/f.html"},
{"link7": "http://localhost:3000/chap3/g.html"}]}
I would change these two lines
data = File.read('links.json')
datahash = JSON.parse(data)
in the controller to
datahash = JSON.parse(Rails.root.join('app/controllers/links.json').read)
Note: I would consider moving this kind of configuration file into the /config folder and creating a simple Ruby class to handle it. Additionally, you might want to consider paths instead of URLs with a host because localhost:3000 might work in the development environment but in production, you will need to return non-localhost URLs anyway.
To use the content of the file in a Rails controller:
@data = File.read("#{Rails.root}/app/controllers/links.json")
I want to create nodes from a CSV file:
LOAD CSV FROM 'file:///Downloads/template_algorithmes.csv' AS line
MATCH (p:Person {name:'Raf'})
CREATE (al:Algorithm {name: line[1], project:line[2], description:line[3], input:line[4], output:line[5], remark:line[9]}), (p)-[:WORKED_ON]->(al)
But it responds with:
Couldn't load the external resource at: file:/var/lib/neo4j/import/Downloads/template_algorithmes_TEITGEN_raphael.csv
Indeed, the file is in /Downloads/, not in /var/lib/, which does not even contain a neo4j folder:
bash-5.1$ cd /var/lib/
abrt/ cni/ dnf/ games/ initramfs/ misc/ PackageKit/ rpm-state/ tpm2-tss/
AccountsService/ color/ dnsmasq/ gdm/ iscsi/ mlocate/ plymouth/ samba/ udisks2/
alsa/ colord/ docker/ geoclue/ kdump/ net-snmp/ polkit-1/ selinux/ unbound/
alternatives/ containerd/ fedora-third-party/ gssproxy/ libvirt/ NetworkManager/ portables/ sss/ upower/
authselect/ containers/ flatpak/ hp/ lockdown/ nfs/ power-profiles-daemon/ systemd/ xkb/
bluetooth/ dbus/ fprint/ httpd/ logrotate/ openvpn/ private/ texmf/
chrony/ dhclient/ fwupd/ hyperv/ machines/ os-prober/ rpm/ tpm/
You can change the import directory in the neo4j.conf configuration file located in <NEO4J_HOME>/conf. Then make sure to restart/bounce your server.
#default
#dbms.directories.import=import
dbms.directories.import=<new location>
For my application, each new file uploaded to storage is read and its data is appended to a main file. The new file contains two lines: a header and an array whose values are separated by commas. The main file will be at most 265 MB; the new files will be at most 30 MB. (A read-back sketch of this two-line format follows the code below.)
def write_append_to_ecg_file(filename, ecg, patientdata):
    # Two-line format: ':'-joined patient metadata, then ','-joined ECG samples
    file1 = open('/tmp/' + filename, "w+")
    file1.write(":".join(patientdata))
    file1.write('\n')
    file1.write(",".join(ecg.astype(str)))
    file1.close()
def storage_trigger_function(data, context):
    # Download the segment file
    download_files_storage(bucket_name, new_file_name, storage_folder_name=blob_path)
    # Read the segment file
    data_from_new_file, meta = read_new_file(new_file_name, scale=1, fs=125, include_meta=True)
    print("Length of ECG data from segment {} file {}".format(segment_no, len(data_from_new_file)))
    os.remove(new_file_name)
    # Check if the main ECG file exists
    file_exists = blob_exists(bucket_name, blob_with_the_main_file)
    print("File status {}".format(file_exists))
    data_from_main_file = []
    if file_exists:
        download_files_storage(bucket_name, main_file_name, storage_folder_name=blob_with_the_main_file)
        data_from_main_file, meta = read_new_file(main_file_name, scale=1, fs=125, include_meta=True)
        print("ECG data from main file {}".format(len(data_from_main_file)))
        os.remove(main_file_name)
        data_from_main_file = np.append(data_from_main_file, data_from_new_file)
        print("data after appending {}".format(len(data_from_main_file)))
        write_append_to_ecg_file(main_file, data_from_main_file, meta)
        token = upload_files_storage(bucket_name, main_file, storage_folder_name=main_file_blob, upload_file=True)
    else:
        write_append_to_ecg_file(main_file, data_from_new_file, meta)
        token = upload_files_storage(bucket_name, main_file, storage_folder_name=main_file_blob, upload_file=True)
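For reference, the files written by write_append_to_ecg_file above use the two-line layout described earlier: a ':'-joined metadata header and a ','-joined ECG array. The real read_new_file helper is not shown, so the following is only a sketch, under that assumption, of how such a file could be read back (read_two_line_ecg_file is a hypothetical name):
```
import numpy as np

def read_two_line_ecg_file(path):
    # Line 1: patient metadata joined by ':'
    # Line 2: ECG samples joined by ','
    with open(path) as f:
        meta = f.readline().strip().split(":")
        ecg = np.array(f.readline().strip().split(","), dtype=float)
    return ecg, meta
```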
The GCF is deployed with:
gcloud functions deploy storage_trigger_function --runtime python37 --trigger-resource patch-us.appspot.com --trigger-event google.storage.object.finalize --timeout 540s --memory 8192MB
For the first file, I was able to read the file and write the data to the main file. But after uploading the 2nd file, it gives Function execution took 70448 ms, finished with status: 'connection error'. On uploading the 3rd file, it gives Function invocation was interrupted. Error: memory limit exceeded. Despite deploying the function with 8192 MB memory, I am getting this error. Can I get some help on this?
I've been using LaTeX for years, but I'm new to embedded Lua code (with LuaLaTeX). Below you can see a simplified example:
\begin{filecontents*}{data.json}
[
{"firstName":"Max", "lastName":"Möller"},
{"firstName":"Anna", "lastName":"Smith"}
]
\end{filecontents*}
\documentclass[11pt]{article}
\usepackage{fontspec}
%\setmainfont{Carlito}
\usepackage{tikz}
\usepackage{luacode}
\begin{document}
\begin{luacode}
require("lualibs.lua")
local file = io.open('data.json','rb')
local jsonstring = file:read('*a')
file:close()
local jsondata = utilities.json.tolua(jsonstring)
tex.print('\\begin{tabular}{cc}')
for key, value in pairs(jsondata) do
tex.print(value["firstName"] .. ' & ' .. value["lastName"] .. '\\\\')
end
tex.print('\\hline\\end{tabular}')
\end{luacode}
\end{document}
When executing LuaLaTeX, the following error occurs:
LuaTeX error [\directlua]:6: attempt to index field 'json' (a nil value) [\directlua]:6: in main chunk. \end{luacode}
When commenting out the line \usepackage{fontspec}, the output is produced. Alternatively, the error can be avoided by commenting out utilities.json.tolua(jsonstring) and all following Lua code lines.
So the question is: how can I use both the "fontspec" package and JSON data without generating an error message? Apart from this, I have another question: how do I enable German umlauts in the output of luacode (see the first "lastName" in the example: Möller)?
Ah, I'm using TeX Live 2015/Debian on Ubuntu 16.04.
Thank you,
Jerome
I'm trying to download the following file. The code works just fine in RStudio when I run it in the console, but when I try to compile a Markdown file (to either HTML or PDF), it gives an error. Why can't Markdown communicate with the CSV zip file?
```
library(readr)  # provides read_csv()
temp = tempfile()
download.file("http://www.cms.gov/Research-Statistics-Data-and-Systems/Statistics-Trends-and-Reports/Medicare-Provider-Charge-Data/Downloads/Inpatient_Data_2013_CSV.zip", temp)
temp2 = unz(temp, "Medicare_Provider_Charge_Inpatient_DRG100_FY2013.csv")
medData = read_csv(temp2)
```
This gives the following error (I had to remove the URLs in the error because I don't have enough reputation points):
trying URL
Quitting from lines 41-49 (medicare.Rmd)
Error in download.file("..", : cannot open URL '...'
Calls: ... withCallingHandlers -> withVisible -> eval -> eval -> download.file