Why does google drive api create a File with a null name - google-drive-api

I'm getting a NullPointerException from uploadedFile.getName() in the following code, which uses the Google Drive API. If I replace that call with a plain String fileName, the code works fine.
public static void downloadFile(boolean useDirectDownload, File uploadedFile, java.io.File parentDir) throws IOException {
    OutputStream out = new FileOutputStream(new java.io.File(parentDir, uploadedFile.getName()));
    DRIVE.files().get(uploadedFile.getId()).executeMediaAndDownloadTo(out);
}
I understand what a null pointer is, but I'm not sure why I'm getting one here: before calling downloadFile I set the file name in the code below, and the file is uploaded with the name I set.
public static File uploadFile(boolean useDirectUpload, String uploadFilePath, String fileType, String fileName) throws IOException {
    File fileMetadata = new File();
    fileMetadata.setName(fileName);
    fileMetadata.setMimeType(fileType);
    java.io.File filePath = new java.io.File(uploadFilePath);
    FileContent mediaContent = new FileContent(fileType, filePath);
    return DRIVE.files().create(fileMetadata, mediaContent)
            .setFields("id")
            .execute();
}
The two methods above are called like this:
java.io.File parentDir = P.createDirectory("C:\\DIR");
File uploadedFile = P.uploadFile(true, "C:\\DIR\\AA.pdf", "application/pdf", "BB.pdf");
P.downloadFile(true, uploadedFile, parentDir);
To ask a more specific question: why does the File returned by the following statement have null for its name?
return DRIVE.files().create(fileMetadata, mediaContent)
        .setFields("id")
        .execute();
Maven dependency for drive services:
<dependency>
    <groupId>com.google.apis</groupId>
    <artifactId>google-api-services-drive</artifactId>
    <version>v3-rev40-1.22.0</version>
</dependency>

You're only getting id in the partial response from the create call because it's the only field specified in setFields; see https://developers.google.com/resources/api-libraries/documentation/drive/v3/java/latest/com/google/api/services/drive/Drive.Files.Create.html.
Add the other fields you need (name, plus any related fields) and the call should work.

why does the File returned by the following statement have null for the name?
Because you haven't requested the file name to be included in the REST response. Change .setFields("id") to .setFields("id, name").
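For illustration, a sketch of the upload method's return statement with the extra field requested (same names as in the question; only the setFields value changes):

return DRIVE.files().create(fileMetadata, mediaContent)
        .setFields("id, name")   // ask the API to include both id and name in the response
        .execute();

With that change, uploadedFile.getName() inside downloadFile should return "BB.pdf" instead of null; you can widen the list further (for example "id, name, mimeType") if you need more metadata later.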

Related

Get thymeleaf html file with all includes

Hi guys, I'm new to Thymeleaf. My goal is to have an endpoint that returns a String with the content of an HTML file. It should be easy, but that HTML file contains Thymeleaf code and uses fragments from other files, so Files.readString(path) does not reach the goal.
How can I include them? (I only want to include the fragments; I don't have to process the file.)
This is what I've done so far:
@GetMapping(path = "/get-template-html")
public String getTemplateHTMLEndpoint() {
    String templateHtml = "Problems reading template.html";
    try {
        String stringPath = new ClassPathResource("templates/template.html").getFile().getPath();
        Path path = Path.of(stringPath);
        templateHtml = Files.readString(path);
    } catch (IOException e) {
        e.printStackTrace();
    }
    return templateHtml;
}
But in another method, where I process the file by passing a context, I get an error about the fragment "footer::footer":
String templateHtml = callGetTemplateHTMLEndPoint();
StringTemplateResolver stringTemplateResolver = new StringTemplateResolver();
stringTemplateResolver.setTemplateMode(TemplateMode.HTML);
TemplateEngine templateEngine = new TemplateEngine();
templateEngine.setTemplateResolver(stringTemplateResolver);
Context context = new Context();
context.setVariable("expenseReportPdf", expenseReportPdf);
context.setVariable("expensePdf", expensePdfList);
context.setVariable("expenseImg", jpgFile);
context.setVariable("amountCompanyCurrency", amountCompanyCurrency);
context.setVariable("expenseIncurredList", expenseIncurredList);
context.setVariable("expenseRiepiloghiList",expenseRiepiloghiList);
context.setVariable("advancePayBigDecimal", advancePayBigDecimal);
context.setVariable("dailyAllowanceList", dailyAllowanceList);
context.setVariable("dailyAllowanceFlag", dailyAllowanceFlag);
context.setVariable("logo",logo);
context.setVariable("logoSmartex",logoSmartex);
String renderedHtmlContent = templateEngine.process(templateHtml, context);
If you created a "standard" Spring Boot with Thymeleaf project (e.g. via https://start.spring.io), then the templates are automatically resolved if you put them in src/main/resources/templates.
So if you created src/main/resources/templates/templates.html, then you can create a controller that will use that template like this:
@Controller
@RequestMapping("/")
public class MyController {

    @GetMapping("/get-template-html")
    public String getTemplateHtml() {
        return "templates"; // name of your template file without the .html extension
    }
}
If you start the application and access http://localhost:8080/get-template-html, you should see the HTML of the template.
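If you do need the rendered HTML as a String (as in the question's second snippet), one option, sketched below under the assumption that the templates live in src/main/resources/templates and that footer::footer refers to a fragment in footer.html, is to swap the StringTemplateResolver for a ClassLoaderTemplateResolver so fragment includes can be located on the classpath:

// Sketch only; the classes are org.thymeleaf.templateresolver.ClassLoaderTemplateResolver,
// org.thymeleaf.TemplateEngine, org.thymeleaf.context.Context and org.thymeleaf.templatemode.TemplateMode.
ClassLoaderTemplateResolver resolver = new ClassLoaderTemplateResolver();
resolver.setPrefix("templates/");            // classpath folder holding template.html, footer.html, ...
resolver.setSuffix(".html");
resolver.setTemplateMode(TemplateMode.HTML);
resolver.setCharacterEncoding("UTF-8");

TemplateEngine templateEngine = new TemplateEngine();
templateEngine.setTemplateResolver(resolver);

Context context = new Context();
// set your variables as in the question ...
String renderedHtmlContent = templateEngine.process("template", context); // template name, not its contents

Because the engine is given the template name rather than its raw contents, the same resolver can also load footer.html when it encounters footer::footer.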

USACO Code Submission Problem - Output File Missing

I'm practicing some past USACO problems, but whenever I submit my code for grading I receive the error:
Your output file (FILENAME.out):
[File missing!]
I tested every problem using this simple code, but still receive the same error:
import java.util.*;
import java.io.*;

public class Test
{
    public static void main(String[] args) throws IOException
    {
        PrintWriter out = new PrintWriter(new BufferedWriter(new FileWriter(FILENAME)));
        out.println("Hello world.");
        out.close();
        System.exit(0);
    }
}
Why would this code not create an output file?
The USACO grading system has the output file already made in the same directory as your java solution, so all you need to do is just write to it.
In your line
PrintWriter out = new PrintWriter(new BufferedWriter(new FileWriter(FILENAME)));
you should change this to
PrintWriter out = new PrintWriter(new BufferedWriter(new FileWriter("FILENAME.out")));
since this is the name of the file (note that it is a quoted String, and FILENAME is the problem's name). This does not create a new file, it just writes to the one that already exists on the USACO grading system.
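For reference, a minimal sketch of the usual USACO file I/O pattern; the problem name "test" is a placeholder for whatever name the problem statement specifies:

import java.io.*;

public class Test
{
    public static void main(String[] args) throws IOException
    {
        // Read from test.in and write to test.out; "test" is a placeholder problem name.
        BufferedReader in = new BufferedReader(new FileReader("test.in"));
        PrintWriter out = new PrintWriter(new BufferedWriter(new FileWriter("test.out")));

        String firstLine = in.readLine();   // read whatever input the problem specifies
        out.println(firstLine);             // write the answer in the required format

        in.close();
        out.close();
    }
}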

How to uncompress GZipped data with Apache Spark in Java

I have a sequence file in which each value is a JSON file compressed with GZip. My problem: how do I read these gzipped JSON files with Apache Spark?
Here is my code:
JavaSparkContext jsc = new JavaSparkContext("local", "sequencefile");
JavaPairRDD<String, byte[]> file = jsc.sequenceFile("file:\\E:\\part-00004", String.class, byte[].class);
JavaRDD<String> map = file.map(new Function<Tuple2<String, byte[]>, String>() {
    public String call(Tuple2<String, byte[]> stringTuple2) throws Exception {
        byte[] uncompress = uncompress(stringTuple2._2);
        return uncompress.toString();
    }
});
But this code is not working.
Have a nice day
When creating the Spark context, use the constructor that also takes a SparkConf as its third parameter.
In that configuration, set the value for the Hadoop compression codecs key "io.compression.codecs"
as below:
"org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.BZip2Codec"
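Alternatively, since each value is itself a gzip-compressed blob rather than a SequenceFile compressed by a codec, you can decompress the bytes yourself with java.util.zip.GZIPInputStream. A sketch along the lines of the question's code, assuming the values come back as BytesWritable (the question's uncompress helper is replaced by explicit streaming, and the path is a placeholder based on the one in the question):

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.util.zip.GZIPInputStream;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.Text;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class SequenceFileGzipExample {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setMaster("local").setAppName("sequencefile");
        JavaSparkContext jsc = new JavaSparkContext(conf);

        // SequenceFile values written as raw bytes are normally read back as BytesWritable.
        JavaPairRDD<Text, BytesWritable> file =
                jsc.sequenceFile("file:///E:/part-00004", Text.class, BytesWritable.class);

        // Decompress each gzipped value into its JSON string.
        JavaRDD<String> json = file.map(t -> gunzip(t._2().copyBytes()));
        json.take(5).forEach(System.out::println);

        jsc.close();
    }

    // Inflate one gzipped value into a UTF-8 string.
    private static String gunzip(byte[] compressed) throws Exception {
        try (GZIPInputStream gis = new GZIPInputStream(new ByteArrayInputStream(compressed));
             ByteArrayOutputStream bos = new ByteArrayOutputStream()) {
            byte[] buffer = new byte[4096];
            int n;
            while ((n = gis.read(buffer)) > 0) {
                bos.write(buffer, 0, n);
            }
            return bos.toString("UTF-8");
        }
    }
}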

JSON to CSV conversion on HDFS

I am trying to convert a JSON file into CSV.
I have Java code which does this perfectly on the Unix file system and on the local file system.
I have written the main class below to perform this conversion on HDFS.
public class ClassMain {
    public static void main(String[] args) throws IOException {
        String uri = args[1];
        String uri1 = args[2];
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(URI.create(uri), conf);
        FSDataInputStream in = null;
        FSDataOutputStream out = fs.create(new Path(uri1));
        try {
            in = fs.open(new Path(uri));
            JsonToCSV toCSV = new JsonToCSV(uri);
            toCSV.json2Sheet().write2csv(uri1);
            IOUtils.copyBytes(in, out, 4096, false);
        }
        finally {
            IOUtils.closeStream(in);
            IOUtils.closeStream(out);
        }
    }
}
json2Sheet and write2csv are the methods which perform the conversion and the write operation.
I am running this jar using the command below:
hadoop jar json-csv-hdfs.jar com.nishant.ClassMain /nishant/large.json /nishant/output
The problem is that it does not write anything at /nishant/output; it creates a zero-sized /nishant/output file.
Maybe the usage of copyBytes is not a good idea here.
How can I achieve this on HDFS when it works fine on the Unix FS and the local FS?
Here I am trying to convert the JSON file to CSV, not trying to map JSON objects to their values.
FileSystem needs only one configuration key to successfully connect to HDFS.
conf.set(key, "hdfs://host:port"); // where key="fs.default.name"|"fs.defaultFS"
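For illustration, a minimal sketch of how that key could be set in the question's main method; the NameNode host and port are placeholders, and the final comment marks where the converted CSV (rather than the original JSON copied by copyBytes) would need to be written:

// assumes org.apache.hadoop.conf.Configuration and org.apache.hadoop.fs.{FileSystem, FSDataOutputStream, Path}
Configuration conf = new Configuration();
conf.set("fs.defaultFS", "hdfs://namenode-host:8020"); // placeholder address; "fs.default.name" is the older key
FileSystem fs = FileSystem.get(conf);                  // now resolves paths such as /nishant/output against HDFS

FSDataOutputStream out = fs.create(new Path("/nishant/output"));
// write the converted CSV bytes into 'out' here, then close it
out.close();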

How to read a CSV file from HDFS?

I have my data in a CSV file. I want to read this CSV file from HDFS.
Can anyone help me with the code?
I'm new to Hadoop. Thanks in advance.
The classes required for this are FileSystem, FSDataInputStream and Path. The client should look something like this:
public static void main(String[] args) throws IOException {
    // TODO Auto-generated method stub
    Configuration conf = new Configuration();
    conf.addResource(new Path("/hadoop/projects/hadoop-1.0.4/conf/core-site.xml"));
    conf.addResource(new Path("/hadoop/projects/hadoop-1.0.4/conf/hdfs-site.xml"));
    FileSystem fs = FileSystem.get(conf);
    FSDataInputStream inputStream = fs.open(new Path("/path/to/input/file"));
    System.out.println(inputStream.readChar());
}
FSDataInputStream has several read methods. Choose the one that suits your needs.
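For a CSV file specifically, it is often more convenient to wrap the stream in a BufferedReader and read it line by line. A sketch with a placeholder path and a naive comma split (quoted fields would need a real CSV parser):

// assumes org.apache.hadoop.conf.Configuration, org.apache.hadoop.fs.{FileSystem, FSDataInputStream, Path},
// java.io.BufferedReader, java.io.InputStreamReader and java.nio.charset.StandardCharsets
Configuration conf = new Configuration();
FileSystem fs = FileSystem.get(conf);
try (FSDataInputStream in = fs.open(new Path("/path/to/input/file.csv"));
     BufferedReader reader = new BufferedReader(new InputStreamReader(in, StandardCharsets.UTF_8))) {
    String line;
    while ((line = reader.readLine()) != null) {
        String[] fields = line.split(",");  // naive split; use a CSV library if fields can contain commas or quotes
        System.out.println(fields.length + " columns: " + line);
    }
}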
If it is MapReduce, it's even easier:
public static class YourMapper extends
        Mapper<LongWritable, Text, Your_Wish, Your_Wish> {

    public void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // Framework does the reading for you...
        String line = value.toString(); // line contains one line of your csv file.
        // do your processing here
        ....................
        ....................
        context.write(Your_Wish, Your_Wish);
    }
}
If you want to use MapReduce, you can use TextInputFormat to read line by line and parse each line in the mapper's map function (a minimal driver sketch is shown at the end of this answer).
The other option is to develop (or find an existing) CSV input format for reading data from the file.
There is an old tutorial here: http://hadoop.apache.org/docs/r0.18.3/mapred_tutorial.html, but the logic is the same in newer versions.
If you are using a single process to read data from the file, it is the same as reading a file from any other file system. There is a nice example here: https://sites.google.com/site/hadoopandhive/home/hadoop-how-to-read-a-file-from-hdfs
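For the TextInputFormat option, a minimal driver sketch; YourDriver, YourMapper, the output key/value classes and the paths are placeholders:

// assumes the usual org.apache.hadoop.mapreduce.* and lib.input/lib.output imports
Configuration conf = new Configuration();
Job job = Job.getInstance(conf, "read-csv");
job.setJarByClass(YourDriver.class);             // placeholder driver class
job.setMapperClass(YourMapper.class);            // the mapper from the answer above
job.setInputFormatClass(TextInputFormat.class);  // hands map() one line of the CSV at a time
job.setOutputKeyClass(Text.class);               // placeholder output types
job.setOutputValueClass(Text.class);
FileInputFormat.addInputPath(job, new Path("/path/to/input"));   // placeholder HDFS paths
FileOutputFormat.setOutputPath(job, new Path("/path/to/output"));
System.exit(job.waitForCompletion(true) ? 0 : 1);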
HTH