Using Scala 2.8 and Lift 2.2.
I'm calling the GitHub API and requesting repositories for a user. When the user has fewer than 30 repos, one call is made and there is no need to concatenate JValues. However, when the user has more than 30 repos, multiple calls are made. I would like to concatenate the results of these calls and then "flatten" them, i.e. the "repositories" name on a JValue should return all the repos, not just the first 30.
The code below returns the following: Array(List(JObject(List(JField(repositories,JArray(...JObject(List(JField(repositories,JArray...))))))))
What I want is: Array(List(JObject(List(JField(repositories,JArray(....))))) where the repositories name points to all of the repos.
I've wrestled with this for a bit and can't seem to get it.
import java.io._
import net.liftweb.json.JsonAST._
import net.liftweb.json.JsonParser._
import org.apache.http.client.methods.HttpGet
import org.apache.http.impl.client.{ DefaultHttpClient }
object Github extends Application {
implicit val formats = net.liftweb.json.DefaultFormats
val client = new DefaultHttpClient()
var repos = JArray(List[JValue]())
//Pick on mojombo since he has 30+ repos, which requires two calls to the API
var method = new HttpGet("http://github.com/api/v2/json/repos/show/" + "mojombo" + "?page=1")
var response = client.execute(method)
var instream = response.getEntity.getContent();
var reader = new BufferedReader(new InputStreamReader(instream))
var line1 = reader.readLine
method = new HttpGet("http://github.com/api/v2/json/repos/show/" + "mojombo" + "?page=2")
response = client.execute(method)
instream = response.getEntity.getContent();
reader = new BufferedReader(new InputStreamReader(instream))
val line2 = reader.readLine
println(parse(line1) ++ parse(line2))
}
Lift's 'merge' function should merge those JSONs like you described - it deep-merges the JObjects, so the two "repositories" arrays are combined into one:
parse(line1) merge parse(line2)
Or more generically:
List(json1, json2, ...).foldLeft(JNothing: JValue)(_ merge _)
The following Groovy code creates a new CSV file (in this example testfile.csv) and writes JSON data into it. I don't want to create a new CSV file; I just want to append a few more lines to the existing testfile.csv without overwriting the file. Could someone please tell me what to change in the following code so it appends to the file instead of writing a new one? I have heard about StandardOpenOption.APPEND but have no idea where to put it. Thanks
import groovy.json.JsonSlurper;
import org.apache.commons.csv.CSVFormat;
import org.apache.commons.csv.CSVPrinter;
import com.oracle.e1.common.OrchestrationAttributes;
import java.text.SimpleDateFormat;
HashMap<String, Object> main(OrchestrationAttributes orchAttr, HashMap inputMap) {
HashMap<String, Object> returnMap = new HashMap<String, Object>();
returnMap.put("CSVComplete", "false");
// Write the view number after jsonIn.fs_DATABROWSE_
def jsonIn = new JsonSlurper().parseText(inputMap.get("Vendor Data"));
def jsonData = jsonIn.fs_DATABROWSE_GettingJsonDataFromSomewhere.data.gridData.rowset;
if (jsonData.size() == 0) {
returnMap.put("CSVComplete", "empty");
return returnMap;
}
def fileName = orchAttr.getTempFileName("testfile.csv");
returnMap.put("CSVOutFileName", fileName);
//StringWriter to build the CSV contents in memory
def sw = new StringWriter();
//build the CSV writer with a header
//def csv = new CSVPrinter(sw, CSVFormat.DEFAULT.withHeader("Business Unit", "Document Number", "LT", "SUB","Amount","HardcodedTHREAD","ApprovedBudget","fromview003"));
def csv = new CSVPrinter(sw, CSVFormat.DEFAULT); //No header
// create output file
def fileCsvOut = new File(fileName);
def count=0;
// build the CSV
def an8Map = new ArrayList();
for (int i = 0; i < jsonData.size(); i++) {
def businessunit = jsonData[i].table1_column1;
if (an8Map.contains(businessunit)) {
continue;
}
an8Map.add(businessunit);
count++;
csv.printRecord(businessunit, jsonData[i].table_column,
jsonData[i].table1_column1, jsonData[i].table1_column2, jsonData[i].table1_column3, "Fixed text1", "Fixed text2", "Fixedtext3");
}
csv.close();
//writing csv to file
fileCsvOut.withWriter('UTF-8') {
writer ->
writer.write(sw.toString())
}
orchAttr.writeDebug(sw.toString());
returnMap.put("csv", sw.toString());
returnMap.put("CSVComplete", "true");
returnMap.put("CSVcount", Integer.toString(count));
return returnMap;
}
Use withWriterAppend instead of withWriter:
https://docs.groovy-lang.org/latest/html/groovy-jdk/java/io/File.html#withWriterAppend(java.lang.String,%20groovy.lang.Closure)
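If you would rather go the StandardOpenOption.APPEND route the question mentions, that works too. A minimal java.nio sketch (an alternative I am adding for illustration, callable from Groovy as well; it assumes the fileName and sw variables from the script above):
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;
// append the in-memory CSV to the file, creating it on the first run
Files.write(Paths.get(fileName),
sw.toString().getBytes(StandardCharsets.UTF_8),
StandardOpenOption.CREATE, StandardOpenOption.APPEND);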
I have a problem that I have been trying to solve for a long time:
I have a fragment with buttons; when I press a button, the variable dato is set and passed into the URL I want to open.
val button01 = view.findViewById(R.id.tv_01) as Button
button01.setOnClickListener{
dato = "01"
miTexto.setText("Jornada - 01")
requestJsonObject()
}
My URL:
val url = Directions.JORNADAS + Directions.CMP + "&jor=$dato&tmp=" + Directions.TMP
This URL opens fine: it returns JSON, which I parse and pass to the adapter, and the results are displayed.
From the adapter I pass the data that comes in the JSON, via intent extras (putExtra), to the activity Detalles.kt, which shows the data of another URL depending on the item pressed.
val intent = Intent(context, Detalles::class.java)
holder.itemView.setOnClickListener{
intent.putExtra("nomLocal", jornada.nomLocal)
context.startActivity(intent)
// ......
Up to here, everything works.
My problem: I need to pass the variable dato to the activity Detalles.kt to be able to build the URL there, since dato is the piece of the URL that I am going to parse in Detalles.
I had thought about adding an item to the JSON:
private fun requestJsonObject() {
val queue = newRequestQueue(activity)
//http://www.ffcv.es/ncompeticiones/server.php?action=getResultados&cmp=328&jor=1&tmp=2018/2019
val url = Directions.JORNADAS + Directions.CMP + "&jor=$dato&tmp=" + Directions.TMP
val stringRequest = StringRequest(Request.Method.GET, url, Response.Listener { response ->
val builder = GsonBuilder()
val mGson = builder.create()
val items: MutableList<ModelJor> =
Arrays.asList(*mGson.fromJson(response, Array<ModelJor>::class.java)).toMutableList()
items.add(ModelJor("\"jornada\":" + dato)) // dato shows in red here
Log.d("Resultado", items.toString())
recyclerView!!.layoutManager = GridLayoutManager(activity!!, 1)
val adapter = AdapJor(activity!!, items)
recyclerView!!.adapter = adapter
}, Response.ErrorListener { error -> Log.d(TAG, "Error " + error.message) })
queue.add(stringRequest)
}
Any solution?
I am trying to add shapefile data to PostGIS using C#:
string path = browse_path.Text;
ProcessStartInfo startInfo = new ProcessStartInfo("CMD.exe");
Process p = new Process();
startInfo.RedirectStandardInput = true;
startInfo.UseShellExecute = false;
startInfo.RedirectStandardOutput = true;
startInfo.RedirectStandardError = true;
p = Process.Start(startInfo);
string chgdir = @"chdir " + @"C:\Program Files\PostgreSQL\9.4\bin\";
p.StandardInput.WriteLine(chgdir);
string pass = @"set PGPASSWORD=postgres";
p.StandardInput.WriteLine(pass);
string cmd = @"shp2pgsql -I -s 4326 " + path + " public.states | psql -U postgres -d postgres";
p.StandardInput.WriteLine(cmd);
p.WaitForExit();
p.Close();
After waiting almost 7-8 minutes it is still not working, and my shp file is only 160 KB. The command works fine if I run it in cmd rather than through code.
This is a function I wrote to import shapefiles into PG. It uses the NuGet packages CliWrap and Npgsql, and I just copied shp2pgsql and its dependencies to a project folder 'Tools' so it can run on a machine that doesn't have PostgreSQL installed. It's a bit messy and you might need to add error handling, but it worked for my needs.
public async static Task<bool> OutputSHPtoPSGLAsync(string shpfile, string host, string user, string pwd, string db, string schema = "public", bool dropIfExists = true, string table = "[SHPNAME]")
{
FileInfo shp = new FileInfo(shpfile);
if (!shp.Exists) return false;
if (table == "[SHPNAME]") table = Path.GetFileNameWithoutExtension(shpfile).ToLower();
string args = string.Format("{0} {1}.{2}", shpfile, schema, table);
Command cli = Cli.Wrap(Path.Combine(AppDomain.CurrentDomain.BaseDirectory, @"tools\shp2pgsql.exe")).WithArguments(args);
ExecutionResult eo = await cli.ExecuteAsync();
string sql = eo.StandardOutput;
if (dropIfExists) sql = sql.Replace("CREATE TABLE", string.Format("DROP TABLE IF EXISTS \"{0}\".\"{1}\";\r\nCREATE TABLE", schema, table));
string constring = string.Format("Host={0};Username={1};Password={2};Database={3}", host, user, pwd, db);
using (NpgsqlConnection connection = new NpgsqlConnection(constring))
{
connection.Open();
new NpgsqlCommand(sql, connection).ExecuteNonQuery();
}
return true;
}
I was looking at NetTopologySuite, which has type definitions compatible with Npgsql and PostGIS, but it's all still pre-release and I couldn't be bothered working it out.
Please keep in mind this is an open question and I am not looking for a specific answer, just approaches and routes I can take.
Essentially I am getting a CSV file from my AWS S3 bucket. I am able to get it successfully using:
AmazonS3 s3Client = new AmazonS3Client(new ProfileCredentialsProvider());
S3Object object = s3Client.getObject(
new GetObjectRequest(bucketName, key));
Now I want to populate a DynamoDB table using this JSON file.
I was confused, as I found all sorts of stuff online.
Here is one suggestion - this approach, however, only reads the file; it does not insert anything into the DynamoDB table.
Here is another suggestion - this approach is a lot closer to what I am looking for: it populates a table from a JSON file.
However, I was wondering: is there a generic way to read any JSON file and populate a DynamoDB table based on it? And which approach is best for my case?
Since I originally asked the question, I have done more work.
What I have done so far
I have a CSV file sitting in S3 that looks like this:
name,position,points,assists,rebounds
Lebron James,SF,41,12,11
Kyrie Irving,PG,41,7,5
Stephen Curry,PG,29,8,4
Klay Thompson,SG,31,5,5
I am able to successfully pick it up as an S3Object by doing the following:
AmazonS3 s3client = new AmazonS3Client(/**new ProfileCredentialsProvider()*/);
S3Object object = s3client.getObject(
new GetObjectRequest("lambda-function-bucket-blah-blah", "nba.json"));
InputStream objectData = object.getObjectContent();
Now I want to insert this into my DynamoDB table, so I am attempting the following:
AmazonDynamoDBClient dbClient = new AmazonDynamoDBClient();
dbClient.setRegion(Region.getRegion(Regions.US_BLAH_1));
DynamoDB dynamoDB = new DynamoDB(dbClient);
//DynamoDB dynamoDB = new DynamoDB(client);
Table table = dynamoDB.getTable("MyTable");
// after this point I have tried many JSON parsers etc. and did table.put(item) etc., but nothing has worked. I would appreciate kind help
For CSV parsing, you can use a plain reader, as your file looks quite simple:
AmazonS3 s3client = new AmazonS3Client(/**new ProfileCredentialsProvider()*/);
S3Object object = s3client.getObject(
new GetObjectRequest("lambda-function-bucket-blah-blah", "nba.json"));
InputStream objectData = object.getObjectContent();
AmazonDynamoDBClient dbClient = new AmazonDynamoDBClient();
dbClient.setRegion(Region.getRegion(Regions.US_BLAH_1));
DynamoDB dynamoDB = new DynamoDB(dbClient);
//DynamoDB dynamoDB = new DynamoDB(client);
Table table = dynamoDB.getTable("MyTable");
String line = "";
String cvsSplitBy = ",";
try (BufferedReader br = new BufferedReader(
new InputStreamReader(objectData, "UTF-8"))) {
br.readLine(); // skip the CSV header row
while ((line = br.readLine()) != null) {
// use comma as separator
String[] elements = line.split(cvsSplitBy);
try {
table.putItem(new Item()
.withPrimaryKey("name", elements[0])
.withString("position", elements[1])
.withInt("points", Integer.parseInt(elements[2]))
.....);
System.out.println("PutItem succeeded: " + elements[0]);
} catch (Exception e) {
System.err.println("Unable to add user: " + elements);
System.err.println(e.getMessage());
break;
}
}
} catch (IOException e) {
e.printStackTrace();
}
Depending on the complexity of your CSV, you can use a third-party library such as Apache Commons CSV or opencsv; a short Commons CSV sketch follows.
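For illustration, a minimal header-aware sketch with Apache Commons CSV (this assumes the commons-csv dependency is on the classpath and reuses the objectData and table variables from the setup above):
import java.io.BufferedReader;
import java.io.InputStreamReader;
import org.apache.commons.csv.CSVFormat;
import org.apache.commons.csv.CSVParser;
import org.apache.commons.csv.CSVRecord;
// objectData and table come from the setup shown above
try (CSVParser parser = CSVFormat.DEFAULT.withFirstRecordAsHeader()
.parse(new BufferedReader(new InputStreamReader(objectData, "UTF-8")))) {
for (CSVRecord record : parser) {
// columns are addressed by header name instead of index
table.putItem(new Item()
.withPrimaryKey("name", record.get("name"))
.withString("position", record.get("position"))
.withInt("points", Integer.parseInt(record.get("points"))));
}
}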
I leave the original answer for parsing JSON below.
I would use the Jackson library and, following your code, do the following:
AmazonS3 s3client = new AmazonS3Client(/**new ProfileCredentialsProvider()*/);
S3Object object = s3client.getObject(
new GetObjectRequest("lambda-function-bucket-blah-blah", "nba.json"));
InputStream objectData = object.getObjectContent();
AmazonDynamoDBClient dbClient = new AmazonDynamoDBClient();
dbClient.setRegion(Region.getRegion(Regions.US_BLAH_1));
DynamoDB dynamoDB = new DynamoDB(dbClient);
//DynamoDB dynamoDB = new DynamoDB(client);
Table table = dynamoDB.getTable("MyTable");
JsonParser parser = new JsonFactory()
.createParser(objectData);
JsonNode rootNode = new ObjectMapper().readTree(parser);
Iterator<JsonNode> iter = rootNode.iterator();
ObjectNode currentNode;
while (iter.hasNext()) {
currentNode = (ObjectNode) iter.next();
String lastName = currentNode.path("lastName").asText();
String firstName = currentNode.path("firstName").asText();
int minutes = currentNode.path("minutes").asInt();
// read all attributes from your JSon file
try {
table.putItem(new Item()
.withPrimaryKey("lastName", lastName, "firstName", firstName)
.withInt("minutes", minutes));
System.out.println("PutItem succeeded: " + lastName + " " + firstName);
} catch (Exception e) {
System.err.println("Unable to add user: " + lastName + " " + firstName);
System.err.println(e.getMessage());
break;
}
}
parser.close();
Inserting the records into your table will depend on your schema; I just put an arbitrary example, but either way this gives you the reading of your file and the way to insert into the DynamoDB table.
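On the "generic" part of the question, one more option worth noting: the document API can build an Item directly from a JSON string with Item.fromJSON, so a JSON array of objects can be loaded without naming each attribute. A rough sketch, assuming the stream holds a top-level JSON array and every object contains the table's key attributes:
import com.amazonaws.services.dynamodbv2.document.Item;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
// objectData and table are set up exactly as in the snippets above
JsonNode rootNode = new ObjectMapper().readTree(objectData);
for (JsonNode node : rootNode) {
// Item.fromJSON maps every JSON attribute onto a DynamoDB attribute;
// putItem fails if the table's key attributes are missing from an object
table.putItem(Item.fromJSON(node.toString()));
}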
As you asked about different approaches, another possibility is to set up an AWS Data Pipeline.
I'm trying to figure out if there's a way of accessing column headings in a CSV using OpenCSV in Groovy. This is what I have:
#GrabConfig( systemClassLoader=true )
#Grab( 'mysql:mysql-connector-java:5.1.27' )
#Grab(group = 'net.sf.opencsv', module = 'opencsv', version = '2.3')
import groovy.sql.Sql
import au.com.bytecode.opencsv.CSVReader
import au.com.bytecode.opencsv.CSVParser
def sql = Sql.newInstance("jdbc:mysql://localhost:3306/nid", "developer","whatever", "com.mysql.jdbc.Driver")
def notes = sql.dataSet("vdc_notifications")
def TEST_FILE_NAME = 'C:\\Users\\me\\Desktop\\test.csv'
List<String[]> rows = new CSVReader(new FileReader(new File(TEST_FILE_NAME)), CSVParser.DEFAULT_SEPARATOR, CSVParser.DEFAULT_ESCAPE_CHARACTER, CSVParser.DEFAULT_QUOTE_CHARACTER).readAll()
What I'm trying to get to is being able to do:
rows.each() { row -> println row.some_column_name }
Usually the first row of a CSV file contains the column names, so you can either use a simple parsing approach and keep the first row as the header:
#Grab('net.sf.opencsv:opencsv:2.3')
import au.com.bytecode.opencsv.CSVReader
import au.com.bytecode.opencsv.bean.CsvToBean;
import au.com.bytecode.opencsv.bean.HeaderColumnNameMappingStrategy;
def csv = """\
name,age
Charlie,23
Billy,64"""
// read the rows and keep the first one as the header
def csvr = new CSVReader(new StringReader(csv))
def header
while ((line = csvr.readNext()) != null) {
if (!header) {
header = line
} else {
// create a map from the header and the line
println([header,line].transpose().collectEntries())
}
}
Or you can use the CsvToBean and the HeaderColumnNameMappingStrategy to create beans:
class Person {
String name
Integer age
}
// use the mapper
def ctb = new CsvToBean<Person>()
def hcnms = new HeaderColumnNameMappingStrategy<Person>()
hcnms.type = Person
println ctb.parse(hcnms, new StringReader(csv))