Migrate Dynamo Document.Item to aws-sdk-java-v2 - json

Does anyone know where "com.amazonaws.services.dynamodbv2.document.Item", Item.fromJSON(), and item.toJSON() are in the aws-sdk-java-v2?
I'm looking to migrate the following code:
-- From v1 --
AmazonDynamoDB dynamoDB = AmazonDynamoDBClientBuilder
...
DynamoDB dynamo = new DynamoDB(dynamoDB);
Table dbtable = dynamo.getTable(table);
dbtable.putItem(Item.fromJSON(jsonString));
Item item = dbtable.getItem(spec);
String jsonString = item.toJSON();
-- To v2 --
DynamoDbClient ddbClient = DynamoDbClient
.builder()
...
.build();
??? dbtable.putItem(Item.fromJSON(jsonString))
??? jsonString = dbtable.getItem(spec).toJSON()

Not sure I fully understand your question, but if you are just looking for the code, it is here:
https://github.com/aws/aws-sdk-java/blob/master/aws-java-sdk-dynamodb/src/main/java/com/amazonaws/services/dynamodbv2/document/Item.java
starting at line 1236.

Found the answer. The DynamoDB Document API HAS NOT been implemented in the aws-sdk-java-v2 libraries.
Github feature request can be found here: https://github.com/aws/aws-sdk-java-v2/issues/36
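Until that lands, one workaround is to do the JSON conversion yourself and call the low-level v2 client directly. Below is a minimal sketch, assuming Jackson is on the classpath; the JsonItemBridge class and its helper names are hypothetical, and nested objects/arrays are left out:
import java.util.HashMap;
import java.util.Map;

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

import software.amazon.awssdk.services.dynamodb.DynamoDbClient;
import software.amazon.awssdk.services.dynamodb.model.AttributeValue;
import software.amazon.awssdk.services.dynamodb.model.PutItemRequest;

public class JsonItemBridge {
    private static final ObjectMapper MAPPER = new ObjectMapper();

    // Parse a flat JSON object into the Map<String, AttributeValue>
    // shape that the low-level v2 client expects.
    static Map<String, AttributeValue> fromJson(String json) throws Exception {
        JsonNode root = MAPPER.readTree(json);
        Map<String, AttributeValue> item = new HashMap<>();
        root.fields().forEachRemaining(
                e -> item.put(e.getKey(), toAttribute(e.getValue())));
        return item;
    }

    // Map JSON scalars onto DynamoDB attribute types (N, BOOL, S).
    static AttributeValue toAttribute(JsonNode node) {
        if (node.isNumber()) {
            return AttributeValue.builder().n(node.asText()).build();
        }
        if (node.isBoolean()) {
            return AttributeValue.builder().bool(node.asBoolean()).build();
        }
        return AttributeValue.builder().s(node.asText()).build();
    }

    static void putJson(DynamoDbClient ddbClient, String table, String json)
            throws Exception {
        ddbClient.putItem(PutItemRequest.builder()
                .tableName(table)
                .item(fromJson(json))
                .build());
    }
}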

Related

RDF4J SPARQL query to JSON

I am trying to move data from a SPARQL endpoint to a JSONObject. Using RDF4J.
The RDF4J documentation does not address this directly (there is some info about using endpoints, less about converting to JSON, and nothing where these two cases meet up).
So far I have:
SPARQLRepository repo = new SPARQLRepository(<My Endpoint>);
Map<String, String> headers = new HashMap<String, String>();
headers.put("Accept", "SPARQL/JSON");
repo.setAdditionalHttpHeaders(headers);
try (RepositoryConnection conn = repo.getConnection())
{
String queryString = "SELECT * WHERE {GRAPH <urn:x-evn-master:mwadata> {?s ?p ?o}}";
GraphQuery query = conn.prepareGraphQuery(queryString);
debug("Mark 2");
try (GraphQueryResult result = query.evaluate())
This fails because "Server responded with an unsupported file format: application/sparql-results+json".
I figured a SPARQLGraphQuery should take the place of GraphQuery, but RepositoryConnection does not have a relevant prepare statement.
If I exchange
try (RepositoryConnection conn = repo.getConnection())
with
try (SPARQLConnection conn = (SPARQLConnection)repo.getConnection())
I run into the problem that SPARQLConnection does not generate a SPARQLGraphQuery. The closest I can get is:
SPARQLGraphQuery query = (SPARQLGraphQuery)conn.prepareQuery(QueryLanguage.SPARQL, queryString);
which gives a runtime error, as these types cannot be cast to each other.
I do not know how to proceed from here. Any help or advice is much appreciated. Thank you.
In RDF4J, SPARQL SELECT queries are tuple queries, so named because each result is a set of bindings, which are tuples of the form (name, value). In contrast, CONSTRUCT (and DESCRIBE) queries are graph queries, so called because their result is a graph, that is, a collection of RDF statements.
Furthermore, setting additional headers for the response format, as you have done here, is not necessary (except in rare circumstances); the RDF4J client handles this for you automatically, based on the registered set of parsers.
So, in short, simplify your code as follows:
SPARQLRepository repo = new SPARQLRepository(<My Endpoint>);
try (RepositoryConnection conn = repo.getConnection()) {
String queryString = "SELECT * WHERE {GRAPH <urn:x-evn-master:mwadata> {?s ?p ?o}}";
TupleQuery query = conn.prepareTupleQuery(queryString);
debug("Mark 2");
try (TupleQueryResult result = query.evaluate()) {
...
}
}
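Inside the inner try block you can then iterate over the bindings and read each variable by name, along these lines (a minimal sketch; needs org.eclipse.rdf4j.query.BindingSet imported):
while (result.hasNext()) {
    BindingSet bindings = result.next();
    // Each binding set holds one row of the SELECT result.
    System.out.println(bindings.getValue("s") + " "
            + bindings.getValue("p") + " "
            + bindings.getValue("o"));
}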
If you want to write the result of the query in JSON format, you could use a TupleQueryResultHandler, for example the SPARQLResultsJSONWriter, as follows:
SPARQLRepository repo = new SPARQLRepository(<My Endpoint>);
try (RepositoryConnection conn = repo.getConnection()) {
String queryString = "SELECT * WHERE {GRAPH <urn:x-evn-master:mwadata> {?s ?p ?o}}";
TupleQuery query = conn.prepareTupleQuery(queryString);
query.evaluate(new SPARQLResultsJSONWriter(System.out));
}
This will write the result of the query (in this example to standard output) using the SPARQL Query Results JSON format. If you have a non-standard format in mind, you could of course also create your own TupleQueryResultHandler implementation.
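And since the question mentions moving the data into a JSONObject: you can point the same writer at an in-memory stream and parse the captured string. A small sketch, assuming the org.json library is available:
// import java.io.ByteArrayOutputStream;
// import org.json.JSONObject;
ByteArrayOutputStream buffer = new ByteArrayOutputStream();
query.evaluate(new SPARQLResultsJSONWriter(buffer));
// The buffer now holds the SPARQL Query Results JSON document.
JSONObject json = new JSONObject(buffer.toString("UTF-8"));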
For more details on the various ways in which you can process the result (including iterating, streaming, adding to a List, or just directly sending to a result handler), see the documentation on querying a repository. As an aside, the javadoc on the RDF4J APIs is pretty extensive too, so if your Java editing environment has support for displaying that, I'd advise you to make use of it.

Json marshalling in dart

I have a small question about JSON serialization in Dart. I am using the "exportable" lib.
https://pub.dartlang.org/packages/exportable
Here a small code:
Condition t = new Condition();
Configurator configurator1 = new Exportable(Configurator);
configurator1.alias = conf_b_1;
configurator1.value = conf1;
Configurator configurator2 = new Exportable(Configurator);
configurator2.alias = conf_b_2;
configurator2.value = conf2;
t.configurators.add(configurator1);
ntp.condition = t;
print("________________toString_______________");
print(""+t.toString());
print("_______________________________________");
print("________________toJson_______________");
print(""+t.toJson());
print("_______________________________________");
and the result is:
________________toString_______________
{"ref":"4","logicalRefId":"41","value":"1","alias":false,"configurators":["{\"alias\":\"1\",\"value\":\"10\"}"]}
________________toJson_______________
{"ref":"4","logicalRefId":"41","value":"1","alias":false,"configurators":["{\"alias\":\"1\",\"value\":\"10\"}"]}
The configurator part is totally broken: each configurator is serialized as an escaped JSON string instead of a nested object. Where am I going wrong?
Thanks in advance for reading,
Nerio.

Couchbase Custom Reduce behaving inconsistently

I am using couchbase version 2.0.1 - enterprise edition (build-170) and java-client version 1.2.2
I have a custom reduce function to get the last activity of a user.
The response from the Java client is inconsistent: at times I get the correct response, and most of the time I get null values for valid keys. Even Stale.FALSE doesn't help!
The view holds around 1 million records, and the result set for the query is around 1K key-value pairs.
I am not sure what the issue could be here. It would be great if someone could help.
The reduce function is as below:
function (key, values, rereduce) {
  // Keep the [timestamp, activity] pair with the largest timestamp.
  // Each value is a [timestamp, activity] pair on both the reduce and
  // rereduce passes, so no special rereduce handling is needed.
  var currDate = 0;
  var activity = "";
  for (var idx in values) {
    if (currDate < values[idx][0]) {
      currDate = values[idx][0];
      activity = values[idx][1];
    }
  }
  return [currDate, activity];
}
View Query:
CouchbaseClient cbc = Couchbase.getConnection();
Query query = new Query();
query.setIncludeDocs(false);
query.setSkip(0);
query.setLimit(10000);
query.setReduce(true);
query.setGroupLevel(4);
query.setRange(startKey, endKey);
View view = cbc.getView(designDocName, viewName); // design document and view names
ViewResponse response = cbc.query(view, query);
It looks like there was a compatibility issue between java-client 1.2.2 and Google Gson 1.7.1, which my application was using. I switched to java-client 1.2.3 and Google Gson 2.2.4, and things are working great now.
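For what it's worth, here is a minimal sketch of consuming the reduced rows once the versions agree (assuming Gson 2.2.4; each row value is the [timestamp, activity] pair the reduce returns, and the variable names are illustrative):
// import com.google.gson.JsonArray;
// import com.google.gson.JsonParser;
JsonParser parser = new JsonParser();
for (ViewRow row : response) {
    // Row values come back as JSON text, e.g. [1390000000, "login"].
    JsonArray pair = parser.parse(row.getValue()).getAsJsonArray();
    long lastSeen = pair.get(0).getAsLong();
    String lastActivity = pair.get(1).getAsString();
    System.out.println(row.getKey() + " -> " + lastActivity + " @ " + lastSeen);
}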

sqlalchemy: Error with commit on session - 'SessionMaker' object has no attribute '_model_changes'

I'm new to SQLAlchemy. We were working primarily with Flask, but in a particular case I needed a manual database connection. So I launched a new DB connection with something like this:
write_engine = create_engine("mysql://user:pass@localhost/db?charset=utf8")
write_session = scoped_session(sessionmaker(autocommit=False,
                                            autoflush=False, bind=write_engine))
nlabel = write_session.query(Label).filter(Label.id == label.id).first() # Works
# Later in code
ms = Message("Some message")
write_session.add(ms) # Works fine
write_session.commit() # Errors out
Error looks like "AttributeError: 'SessionMaker' object has no attribute '_model_changes'"
What am I doing wrong?
From the documentation I think you might be missing the initialization of the Session object.
Try:
Session = scoped_session(sessionmaker(autocommit=False, autoflush=False,bind=write_engine))
write_session = Session()
It's a shot in the dark; I'm not intimately familiar with SQLAlchemy. Best of luck!
Your issue is that you are missing this line before the commit:
write_session._model_changes = {}
Flask-SQLAlchemy's session signalling code expects a _model_changes attribute on the session, so a plain SQLAlchemy session created outside of it needs that attribute set manually.

Retrieving column mapping info in T4

I'm working on a T4 file that generates .cs classes based on an entity model, and one of the things I'm trying to get at is the mapping info in the model. Specifically, for each field in the model I'm trying to retrieve the database field name it is mapped to.
I've found that the mapping info is apparently stored in StorageMappingItemCollection, but am having an impossible time figuring out how to query it and retrieve the data I need. Has anyone worked with this class and can maybe provide guidance?
The code I have so far goes something like this (I've pasted everything up to the problematic line):
<#
System.Diagnostics.Debugger.Launch();
System.Diagnostics.Debugger.Break();
#>
<#@ template language="C#" debug="true" hostspecific="true"#>
<#@ include file="EF.Utility.CS.ttinclude"#>
<#@ output extension=".cs"#><#
CodeGenerationTools code = new CodeGenerationTools(this);
MetadataLoader loader = new MetadataLoader(this);
CodeRegion region = new CodeRegion(this, 1);
MetadataTools ef = new MetadataTools(this);
string inputFile = @"MyModel.edmx";
EdmItemCollection ItemCollection = loader.CreateEdmItemCollection(inputFile);
StoreItemCollection storeItemCollection = null;
loader.TryCreateStoreItemCollection(inputFile, out storeItemCollection);
StorageMappingItemCollection storageMappingItemCollection = null;
loader.TryCreateStorageMappingItemCollection(
inputFile, ItemCollection, storeItemCollection, out storageMappingItemCollection);
var item = storageMappingItemCollection.First();
storageMappingItemCollection has methods like GetItem() and such, but I can't for the life of me get it to return data on fields that I know exist in the model.
Thanks in advance!
Parsing the MSL isn't really that hard with LINQ to XML:
// GetMslName is a helper (not shown) that resolves the name of the
// embedded MSL resource from the connection string.
string mslManifestResourceName = GetMslName(ConfigurationManager.ConnectionStrings["Your Connection String"].ConnectionString);
var stream = Assembly.GetExecutingAssembly().GetManifestResourceStream(mslManifestResourceName);
XmlReader xreader = new XmlTextReader(stream);
XDocument doc = XDocument.Load(xreader);
XNamespace xmlns = "http://schemas.microsoft.com/ado/2009/11/mapping/cs";
// Walk the EntitySetMapping elements to pair each entity type with its table.
var items = from entitySetMap in doc.Descendants(xmlns + "EntitySetMapping")
            let entityTypeMap = entitySetMap.Element(xmlns + "EntityTypeMapping")
            let mappingFragment = entityTypeMap.Element(xmlns + "MappingFragment")
            select new
            {
                EntitySet = entitySetMap.Attribute("Name").Value,
                TypeName = entityTypeMap.Attribute("TypeName").Value,
                TableName = mappingFragment.Attribute("StoreEntitySet").Value
            };
It may be easier to parse the EDMX file as XML rather than using the StorageMappingItemCollection.