I have used the copy function for copying the file. My code works without using an S3 bucket, both locally and on IIS, but when I use an S3 bucket I get this error.
Error executing "PutObject" on
"{bucket_path}/local-iportalDocuments/96/building.jpeg"; AWS HTTP
error: Client error: PUT {{bucket_path}}/local-iportalDocuments/96/building.jpeg resulted in a
403 Forbidden response:\n\nAccessDeniedAccess
Denied3BGXVF (truncated...)\n AccessDenied
(client): Access Denied - \nAccessDeniedAccess
Denied3BGXVF8J5RST47DW9tUMIzQGZyiKdb4+Vwj10EWFxUiMYzmdaCCNVVfzuSRzAhj4YvVssE0+OA12IeW3WTu2K+POr0s=",
"exception": "Aws\S3\Exception\S3Exception
I use this line of code for the copy:
$mediaItem = $oldModel->getMedia('document_files')->find($file['id']);
$copiedMediaItem = $mediaItem->copy($model, 'document_files', config('app.media_disk'));
Need help.
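Before changing anything on the Laravel side, it can help to confirm that the access key configured for the media disk is even allowed to write under that prefix. Below is a minimal boto3 sketch of that check; the bucket name, credentials, and payload are placeholders, not values from the question.

import boto3
from botocore.exceptions import ClientError

# Placeholder credentials: use the same key pair the media disk is configured with.
s3 = boto3.client(
    "s3",
    aws_access_key_id="YOUR_ACCESS_KEY",
    aws_secret_access_key="YOUR_SECRET_KEY",
)

try:
    # Attempt the same kind of PutObject the media library performs.
    s3.put_object(
        Bucket="your-bucket-name",
        Key="local-iportalDocuments/96/building.jpeg",
        Body=b"test",
    )
    print("PutObject allowed")
except ClientError as err:
    # A 403 here points at the IAM user or bucket policy rather than the Laravel code.
    print("PutObject denied:", err.response["Error"]["Code"])

If this sketch is also denied, the problem is in the bucket policy or IAM permissions rather than in the copy call itself.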
Related
Trying to configure lookup from the timezone gem:
https://github.com/panthomakos/timezone
I get a 403 when I use the Google configuration.
I am adding this code:
Timezone::Lookup.config(:google) do |c|
c.api_key = 'your_google_api_key_goes_here'
c.client_id = 'your_google_client_id' # if using 'Google for Work'
end
To a file in initializers called timezone.rb. I can see in the Google Cloud console that the Time Zone API is enabled, and I have restarted the application.
I am getting:
*** Timezone::Error::Google Exception: 403 Forbidden
when I try:
Timezone.lookup(-34.92771808058, 138.477041423321)
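One way to narrow this down is to check whether the key itself is accepted by the Time Zone API, independently of the gem. A minimal Python sketch (assuming the requests library; the key is a placeholder and the coordinates are the ones from the failing lookup):

import time
import requests

# Same coordinates as the failing Timezone.lookup call.
lat, lng = -34.92771808058, 138.477041423321

resp = requests.get(
    "https://maps.googleapis.com/maps/api/timezone/json",
    params={
        "location": f"{lat},{lng}",
        "timestamp": int(time.time()),
        "key": "your_google_api_key_goes_here",  # placeholder key
    },
)

# A status of REQUEST_DENIED (rather than OK) usually means the key is
# restricted, billing is not enabled, or the Time Zone API is not enabled for it.
print(resp.status_code, resp.json().get("status"), resp.json().get("errorMessage"))

If this call is also rejected, the 403 is coming from the API key configuration rather than from the gem.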
I use WinSCP 5.17 to retrieve files from an FTP server. When I retrieve files from the root folder of the FTP everything works, but as soon as I try to retrieve files from a sub-folder it doesn't work.
Here is the instruction I use:
get /Clients/Folder2/Folder3/*.* F:\folder1\folder2\
and this is the error message:
Error listing directory '/Clients/Folder2/Folder3'.
Bad message (badly formatted packet or protocol incompatibility).
Error code: 5
Error message from server: Bad message
Thanks for your help.
Finally I solved the problem: I generated the script from the WinSCP GUI and it works.
open ftps://username:password@ftp-adresse.azure.com/ -certificate="ee:5f:af:7c:26:6b:bb:6f:cd:86:6a:2c:03:1e:8f:ab:e7:63:fd:43" -rawsettings FollowDirectorySymlinks=1
cd /Clients/folder2/folder3
lcd "F:\folder1\folder2"
get "*.xlsx"
exit
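If the same transfer needs to run unattended, the working commands above can be saved to a script file and run through winscp.com. A minimal Python sketch; the script path, log path, and WinSCP install location are assumptions, not values from the question:

import subprocess

# Path to a text file containing the working WinSCP commands shown above
# (open ... / cd ... / lcd ... / get ... / exit). Placeholder path.
script_path = r"F:\scripts\fetch_files.txt"

result = subprocess.run(
    [r"C:\Program Files (x86)\WinSCP\winscp.com",
     "/script=" + script_path,
     "/log=" + r"F:\scripts\winscp.log"],
    capture_output=True,
    text=True,
)

# winscp.com exits with 0 on success and a non-zero code if any command failed.
print(result.returncode)
print(result.stdout)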
I am trying to read JSON files from a subdirectory called world in an S3 bucket named hello. When I list all the objects of that directory using boto3, I can see several part files (which were possibly created by a Spark job) like below.
world/
world/_SUCCESS
world/part-r-00000-....json
world/part-r-00001-....json
world/part-r-00002-....json
world/part-r-00003-....json
world/part-r-00004-....json
world/part-r-00005-....json
world/part-r-00006-....json
world/part-r-00007-....json
I have written the following code to read all these files.
from pyspark import SparkConf
from pyspark.sql import SparkSession

# Build the session from the application's Spark configuration.
spark_session = SparkSession \
    .builder \
    .config(conf=SparkConf().setAll(spark_config).setAppName(app_name)) \
    .getOrCreate()

# Configure the S3A connector on the underlying Hadoop configuration.
hadoop_conf = spark_session._jsc.hadoopConfiguration()
hadoop_conf.set("fs.s3a.server-side-encryption-algorithm", "AES256")
hadoop_conf.set("fs.s3a.aws.credentials.provider",
                "org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider")
hadoop_conf.set("fs.s3a.access.key", "my-aws-access-key")
hadoop_conf.set("fs.s3a.secret.key", "my-aws-secret-key")
hadoop_conf.set("com.amazonaws.services.s3a.enableV4", "true")

df = spark_session.read.json("s3a://hello/world/")
and I am getting the following error:
py4j.protocol.Py4JJavaError: An error occurred while calling o98.json.
: com.amazonaws.services.s3.model.AmazonS3Exception: Status Code: 403, AWS Service: Amazon S3, AWS Request ID: , AWS Error Code: null, AWS Error Message: Forbidden, S3 Extended Request ID:
at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:798)
at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:421)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:232)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3528)
at com.amazonaws.services.s3.AmazonS3Client.getObjectMetadata(AmazonS3Client.java:976)
at com.amazonaws.services.s3.AmazonS3Client.getObjectMetadata(AmazonS3Client.java:956)
at org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:892)
at org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:77)
at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1426)
at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$org$apache$spark$sql$execution$datasources$DataSource$$checkAndGlobPathIfNecessary$1.apply(DataSource.scala:557)
at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$org$apache$spark$sql$execution$datasources$DataSource$$checkAndGlobPathIfNecessary$1.apply(DataSource.scala:545)
at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
at scala.collection.immutable.List.foreach(List.scala:392)
at scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:241)
at scala.collection.immutable.List.flatMap(List.scala:355)
at org.apache.spark.sql.execution.datasources.DataSource.org$apache$spark$sql$execution$datasources$DataSource$$checkAndGlobPathIfNecessary(DataSource.scala:545)
at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:359)
at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:223)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:211)
at org.apache.spark.sql.DataFrameReader.json(DataFrameReader.scala:392)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:282)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:238)
at java.base/java.lang.Thread.run(Thread.java:834)
I have tried with "s3a://hello/world/*" and "s3a://hello/world/*.json" as well, but I am still getting the same error.
FYI, I am using the following versions of the tools:
pyspark 2.4.5
com.amazonaws:aws-java-sdk:1.7.4
org.apache.hadoop:hadoop-aws:2.7.1
org.apache.hadoop:hadoop-common:2.7.1
Can anyone help me with this?
It seems that the credentials you are using to access the bucket/folder don't have the required access rights.
Please check the following things:
Credentials or role specified in your application code
Policy attached to the Amazon Elastic Compute Cloud (Amazon EC2) instance profile role
Amazon S3 VPC endpoint policy
Amazon S3 source and destination bucket policies
A few things you can use to debug quickly:
On the master node of the cluster, try to access the bucket using
aws s3 ls s3://hello/world/
If this throws an error, try to resolve the access control issue by following this link:
https://aws.amazon.com/premiumsupport/knowledge-center/emr-s3-403-access-denied/
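If the AWS CLI is not available on the node, the same check can be done from Python with boto3, using exactly the key pair that is passed to the Spark configuration. A minimal sketch; the bucket and prefix come from the question, the credentials are the placeholders used there:

import boto3
from botocore.exceptions import ClientError

# Use the same values passed to fs.s3a.access.key / fs.s3a.secret.key.
s3 = boto3.client(
    "s3",
    aws_access_key_id="my-aws-access-key",
    aws_secret_access_key="my-aws-secret-key",
)

try:
    # Equivalent of: aws s3 ls s3://hello/world/
    listing = s3.list_objects_v2(Bucket="hello", Prefix="world/")
    for obj in listing.get("Contents", []):
        print(obj["Key"])

    # The stack trace fails in getObjectMetadata, i.e. a HEAD request,
    # so also check that HeadObject works on one of the listed keys.
    s3.head_object(Bucket="hello", Key="world/_SUCCESS")
    print("HeadObject allowed")
except ClientError as err:
    print("Access problem:", err.response["Error"]["Code"])

If either call fails with AccessDenied, the problem is in the credentials or policies listed above, not in the Spark code.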
I have been trying to download a zip file from my cPanel server, but I was getting a 403 Forbidden error, so I added a .htaccess file with the following directive:
Require all granted
For the time being, my 403 Forbidden error was solved, but now I am getting a 500 Internal Server Error.
Here is the link to the directory where I am getting the error:
http://huntedhunter.com/backups/
And I want to download this file:
http://huntedhunter.com/backups/backup_hunter.zip
I am getting the 500 Internal Server Error when downloading the file.
I would appreciate any help.
As discussed, the issue is with the file permissions for backup_hunter.zip, so make sure the file permissions are set accordingly.
An example is shown in the linked File Permissions Tutorial (image taken from that tutorial); read the same for more info.
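If you have shell or script access to the account, here is a minimal sketch of setting the usual 644 permissions (owner read/write, everyone else read-only) on the archive. The path below is a guess at a typical cPanel layout, not a value from the question:

import os
import stat

# Hypothetical path; adjust to wherever backup_hunter.zip actually lives.
archive = "/home/youraccount/public_html/backups/backup_hunter.zip"

# 644: readable by the web server, writable only by the owner.
os.chmod(archive, stat.S_IRUSR | stat.S_IWUSR | stat.S_IRGRP | stat.S_IROTH)

print(oct(os.stat(archive).st_mode & 0o777))  # expect 0o644

The same result can be achieved from the cPanel file manager by setting the permission bits to 644 on that file.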
Hi, I am stuck with the same error.
I am trying to call a specific folder from Drive.
// Retrieve files from the Drive root folder
Children.List request = service.children().list(root_ID);
// Set the query to get the file with title "Books"
request.setQ("title = 'Books'");
// Execute the request to retrieve the file list
ChildList children = request.execute(); // the error occurs on this line
ERROR:com.google.api.client.googleapis.json.GoogleJsonResponseException: 500 OK
Coding language: JAVA
OS: Windows 7
Server: Apache Tomcat 7.0
Could you provide info on how you were able to solve this?