I am using IcePDF 6.3.2 in Java for showing documents. How can I read the page layout from an org.icepdf.core.pobjects.Catalog?

I upgraded IcePDF, but the layout handling seems to have changed. How do I read the page layout in IcePDF 6.3.2?
Thanks for any help or input, regards Per
In version 4.2.2 I used to read it untyped like this:
Catalog catalog = document_.getCatalog();
Object tmp = catalog.getObject(Catalog.PAGELAYOUT_KEY);
if (tmp instanceof Name) { // instanceof already covers the null check
    String pageLayout = ((Name) tmp).getName();
    // etc...
}
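For 6.3.2 the Catalog is still a PDF dictionary, so an equivalent read should look roughly like the sketch below. This is an untested sketch against the 6.3.2 API: it assumes Catalog still exposes a getObject(Name) lookup and a PAGELAYOUT_KEY constant; if the constant is gone, looking the value up under the raw PDF key, new Name("PageLayout"), should work in its place.

```java
// Untested sketch for IcePDF 6.3.2; see the assumptions above.
Catalog catalog = document_.getCatalog();
Object tmp = catalog.getObject(Catalog.PAGELAYOUT_KEY); // or: new Name("PageLayout")
if (tmp instanceof Name) {
    // e.g. "SinglePage", "OneColumn", "TwoColumnLeft", ...
    String pageLayout = ((Name) tmp).getName();
}
```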

Related

In Android 10 getDeviceID value is null

In Android 10, I couldn't get the device ID using the "READ_PHONE_STATE" permission. While trying, I got the error "The user 10296 does not meet the requirements to access device identifiers". I checked the developer site but couldn't find a proper solution, and the "READ_PRIVILEGED_PHONE_STATE" permission is not accessible either.
Please suggest a solution for getting the device ID on Android 10 devices and help me resolve this issue.
Thanks in advance
getDeviceId() has been deprecated since API level 26.
"READ_PRIVILEGED_PHONE_STATE" is only accessible to privileged system apps. The best practices suggest that you should "avoid using hardware identifiers" for unique identifiers. You can use an instance ID from Firebase, e.g. FirebaseInstanceId.getInstance().getId();.
Or you can generate a custom globally unique ID (GUID) to uniquely identify the app instance, e.g.
String uniqueID = UUID.randomUUID().toString();
or, the least recommended way:
String deviceId = android.provider.Settings.Secure.getString(
        context.getContentResolver(), android.provider.Settings.Secure.ANDROID_ID);
You can also use this one, which gets a token from Firebase:
String deviceId = FirebaseInstanceId.getInstance().getToken();
and this way also works:
String deviceId = UUID.randomUUID().toString();
As per the latest release, Android 10 restricts access to non-resettable device identifiers:
Apps must have the READ_PRIVILEGED_PHONE_STATE privileged permission in order to access the device's non-resettable identifiers, which include both IMEI and serial number.
Check the documentation on privacy changes in Android 10.
To avoid such scenarios, use UUID.randomUUID().toString(), which yields an immutable universally unique identifier (UUID). A UUID represents a 128-bit value.
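Note that UUID.randomUUID() returns a different value on every call, so a random GUID only works as an app-instance identifier if you generate it once and persist it. Below is a minimal sketch of that generate-once pattern; on Android you would back it with SharedPreferences, while plain java.util.prefs is used here only as a desktop stand-in, and the key name is made up:

```java
import java.util.UUID;
import java.util.prefs.Preferences;

public class InstanceId {
    // Hypothetical storage key; on Android this would live in SharedPreferences.
    private static final String KEY = "app_instance_id";

    // Returns the same identifier on every call, generating it on first use.
    public static String get() {
        Preferences prefs = Preferences.userNodeForPackage(InstanceId.class);
        String id = prefs.get(KEY, null);
        if (id == null) {
            id = UUID.randomUUID().toString(); // random 128-bit UUID
            prefs.put(KEY, id);
        }
        return id;
    }
}
```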
This is the definitive solution!
String myuniqueID;
int myversion = android.os.Build.VERSION.SDK_INT; // Build.VERSION.SDK is deprecated; SDK_INT is the int form
if (myversion < 23) {
    WifiManager manager = (WifiManager) getApplicationContext().getSystemService(Context.WIFI_SERVICE);
    WifiInfo info = manager.getConnectionInfo();
    myuniqueID = info.getMacAddress();
    if (myuniqueID == null) {
        TelephonyManager mngr = (TelephonyManager) getSystemService(Context.TELEPHONY_SERVICE);
        if (ActivityCompat.checkSelfPermission(this, android.Manifest.permission.READ_PHONE_STATE) != PackageManager.PERMISSION_GRANTED) {
            return;
        }
        myuniqueID = mngr.getDeviceId();
    }
} else if (myversion < 29) { // the original tested myversion > 23 and silently skipped API 23
    TelephonyManager mngr = (TelephonyManager) getSystemService(Context.TELEPHONY_SERVICE);
    if (ActivityCompat.checkSelfPermission(this, android.Manifest.permission.READ_PHONE_STATE) != PackageManager.PERMISSION_GRANTED) {
        return;
    }
    myuniqueID = mngr.getDeviceId();
} else {
    // API 29+: non-resettable identifiers are off limits, fall back to ANDROID_ID
    myuniqueID = Settings.Secure.getString(this.getContentResolver(), Settings.Secure.ANDROID_ID);
}

What causes facet errors after Hibernate Search upgrade from version 4 to 5?

Since upgrading (described below), the facet search throws this exception:
HSEARCH000268: Facet request 'groupArchiv' tries to facet on field
'facetfieldarchiv' which either does not exists or is not configured
for faceting (via @Facet). Check your configuration.
Migrating from hibernate.search.version 4.4.4 to hibernate.search.version 5.5.2
lucene-queryparser 5.3.1
JDK 1.8xx
All the indexing is done via a ClassBridge.
The field facetfieldarchiv is in the index.
All other searches are working fine.
protected List<FacetBean> searchFacets(String searchQuery, String defaultField,
        String onField, String facetGroupName)
{
    List<FacetBean> results = new ArrayList<FacetBean>();
    FullTextSession ftSession = getHibernateFulltextSession();
    org.apache.lucene.analysis.Analyzer analyzer = getAnalyzer(Archiv.class);
    QueryParser parser = new QueryParser(defaultField, analyzer);
    try
    {
        Query query = parser.parse(searchQuery);
        QueryBuilder builder = ftSession.getSearchFactory().buildQueryBuilder().forEntity(Item.class).get();
        FacetingRequest gruppeFacetingRequest = builder.facet()
                .name(facetGroupName)
                .onField(onField).discrete()
                .orderedBy(FacetSortOrder.COUNT_DESC)
                .includeZeroCounts(false)
                .maxFacetCount(99999)
                .createFacetingRequest();
        org.hibernate.search.FullTextQuery hibQuery = ftSession.createFullTextQuery(query, Item.class);
        FacetManager facetManager = hibQuery.getFacetManager();
        facetManager.enableFaceting(gruppeFacetingRequest);
        // The error occurs here:
        Iterator<Facet> itf1 = facetManager.getFacets(facetGroupName).iterator();
        while (itf1.hasNext())
        {
            FacetBean bean = new FacetBean();
            Facet facetgruppe = itf1.next();
            bean.setFacetName(facetgruppe.getFacetingName());
            bean.setFacetFieldName(facetgruppe.getFieldName());
            bean.setFacetValue(facetgruppe.getValue());
            bean.setFacetCount(facetgruppe.getCount());
            results.add(bean);
        }
    } catch (Exception e)
    {
        logger.error("Fehler FacetSuche: " + e);
    }
    return results;
}
The faceting API went through an overhaul between Hibernate Search 4 and 5. In the 4.x series one could facet on any (single-valued) field without special configuration; the implementation was based on a custom Collector.
In Hibernate Search 5.x the implementation changed and native Lucene faceting support is used. For this to work, though, the faceted fields need to be known at index time. To that end the @Facet annotation was introduced, which needs to be placed on fields used for faceting. You can find more information in the Hibernate Search online docs, or check this blog post, which gives a short summary of the changes.
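For reference, a minimal sketch of what the 5.x mapping could look like; the entity and field names here are guesses taken from the question, not the actual model:

```java
@Entity
@Indexed
public class Item {
    // In 5.x a faceted field must be indexed un-analyzed and carry @Facet.
    @Field(analyze = Analyze.NO)
    @Facet
    private String facetfieldarchiv;

    // ...
}
```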
Thank you for answering.
I hadn't caught that change in 5.x.
My facets are made up of several fields.
Is there a way to build the facets in a ClassBridge using pure Lucene? Something like:
FacetField f = new FacetField(fieldName, fieldValue);
document.add(f);
indexWriter.addDocument(document);
Thank you
pe

JSON validator validates the JSON but my left hand tags are not appearing

I am developing an iOS app that fetches data from a MySQL database as JSON. I have configured the JSON properly so far: the right-hand $output values are OK, and a JSON validator confirms the formatting is completely fine, but somehow my left-hand tags (the keys) are not appearing. The app's publication has been on hold for a long time because of this. Here is the code I used:
//fetch data from current month table
$q = mysql_query("SELECT * FROM $monthyear where news_date = '$date'");
if (!$q) {
    die('Invalid query executed: ' . mysql_error());
}
while ($e = mysql_fetch_row($q)) {
    $output[] = $e;
    $responce[$e]['news_id'] = $output['news_id'];
    $responce[$e]['news_title'] = $output['news_title'];
    $responce[$e]['news_reporter'] = $output['news_reporter'];
    $responce[$e]['news_details'] = $output['news_details'];
    $responce[$e]['photo'] = $output['photo'];
    $responce[$e]['path'] = 'admin/'.str_replace('\\','',$output['path']);
    $responce[$e]['menu_id'] = $output['menu_id'];
    $responce[$e]['menu_type'] = $output['menu_type'];
    $responce[$e]['news_publish_status'] = $output['news_publish_status'];
    $responce[$e]['news_order'] = $output['news_order'];
    $responce[$e]['news_date'] = $output['news_date'];
    $responce[$e]['news_time'] = $output['news_time'];
    $responce[$e]['added_by'] = $output['added_by'];
    $responce[$e]['directory'] = $output['directory'];
    $responce[$e]['read_number'] = $output['read_number'];
    $responce[$e]['comment_number'] = $output['comment_number'];
    $responce[$e]['news_comment_table_name'] = $output['news_comment_table_name'];
}
echo(json_encode($output));
echo(json_encode($output));
I cannot find a way to get the left-hand tags to show even though they exist in the script. Can anyone help by pointing me in the right direction or giving an example based on the code above? TIA
Ishtiaque
You are using $e, which is an array, as the array key in $responce[$e], and this is not allowed in PHP:
Arrays and objects can not be used as keys. Doing so will result in a warning: Illegal offset type.
I think you want to use the ID as the key. Note also that mysql_fetch_row() returns a numerically indexed row, so string lookups like $output['news_id'] will not work either; mysql_fetch_assoc() returns the row keyed by column name.

Couchbase Custom Reduce behaving inconsistently

I am using Couchbase 2.0.1 Enterprise Edition (build-170) and java-client 1.2.2.
I have a custom reduce function to get the last activity of a user.
The response from the Java client is inconsistent: at times I get the correct response, but most of the time I get a null value for valid keys. Even Stale.FALSE doesn't help!
The number of records in the view is around 1 million, and the result set for the query is around 1K key-value pairs.
I am not sure what the issue could be; it would be great if someone could help.
Reduce Function is as below:
function (key, values, rereduce) {
    var currDate = 0;
    var activity = "";
    for (var idx in values) {
        if (currDate < values[idx][0]) {
            currDate = values[idx][0];
            activity = values[idx][1];
        }
    }
    return [currDate, activity];
}
View Query:
CouchbaseClient cbc = Couchbase.getConnection();
Query query = new Query();
query.setIncludeDocs(false);
query.setSkip(0);
query.setLimit(10000);
query.setReduce(true);
query.setGroupLevel(4);
query.setRange(startKey,endKey);
View view = cbc.getView(document, view);
ViewResponse response = cbc.query(view, query);
It looks like there was a compatibility issue between java-client 1.2.2 and Google Gson 1.7.1, which my application was using. I switched to java-client 1.2.3 and Gson 2.2.4, and things are working great now.

What is the best way to migrate data from BigTable/GAE Datastore to RDBMS?

Now that Google has announced availability of Cloud SQL storage for app engine, what will be the best way to migrate existing data from BigTable/GAE Datastore to MySQL?
To respond to the excellent questions brought up by Peter:
In my particular scenario, I kept my data model very relational.
I am fine with taking my site down for a few hours to make the transition, or at least warning people that any changes they make for the next few hours will be lost due to database maintenance, etc.
My data set is not very large: the main dashboard for my app says 0.67 GB, and the datastore statistics page says it's more like 200 MB.
I am using python.
I am not using the blobstore (although I think that is a separate question from a pure datastore migration - one could migrate datastore usage to MySql while maintaining the blobstore).
I would be fine with paying a reasonable amount (say, less than $100).
I believe my application uses the Master/Slave datastore; it was created during the preview period of App Engine, but I can't seem to find an easy way to verify that.
It seems like the bulk uploader should be able to download the data into a text format that could then be loaded with mysqlimport, but I don't have any experience with either tool. Also, it appears that Cloud SQL only supports importing mysqldump files, so I would have to install MySQL locally, mysqlimport the data, then dump it, then import the dump?
An example of my current model code, in case it's required:
class OilPatternCategory(db.Model):
    version = db.IntegerProperty(default=1)
    user = db.UserProperty()
    name = db.StringProperty(required=True)
    default = db.BooleanProperty(default=False)

class OilPattern(db.Model):
    version = db.IntegerProperty(default=2)
    user = db.UserProperty()
    name = db.StringProperty(required=True)
    length = db.IntegerProperty()
    description = db.TextProperty()
    sport = db.BooleanProperty(default=False)
    default = db.BooleanProperty(default=False)
    retired = db.BooleanProperty(default=False)
    category = db.CategoryProperty()

class League(db.Model):
    version = db.IntegerProperty(default=1)
    user = db.UserProperty(required=True)
    name = db.StringProperty(required=True)
    center = db.ReferenceProperty(Center)
    pattern = db.ReferenceProperty(OilPattern)
    public = db.BooleanProperty(default=True)
    notes = db.TextProperty()

class Tournament(db.Model):
    version = db.IntegerProperty(default=1)
    user = db.UserProperty(required=True)
    name = db.StringProperty(required=True)
    center = db.ReferenceProperty(Center)
    pattern = db.ReferenceProperty(OilPattern)
    public = db.BooleanProperty(default=True)
    notes = db.TextProperty()

class Series(db.Model):
    version = db.IntegerProperty(default=3)
    created = db.DateTimeProperty(auto_now_add=True)
    user = db.UserProperty(required=True)
    date = db.DateProperty()
    name = db.StringProperty()
    center = db.ReferenceProperty(Center)
    pattern = db.ReferenceProperty(OilPattern)
    league = db.ReferenceProperty(League)
    tournament = db.ReferenceProperty(Tournament)
    public = db.BooleanProperty(default=True)
    notes = db.TextProperty()
    allow_comments = db.BooleanProperty(default=True)
    complete = db.BooleanProperty(default=False)
    score = db.IntegerProperty(default=0)

class Game(db.Model):
    version = db.IntegerProperty(default=5)
    user = db.UserProperty(required=True)
    series = db.ReferenceProperty(Series)
    score = db.IntegerProperty()
    game_number = db.IntegerProperty()
    pair = db.StringProperty()
    notes = db.TextProperty()
    entry_mode = db.StringProperty(choices=entry_modes, default=default_entry_mode)
Have you considered using the MapReduce framework?
You could write mappers that store the datastore entities in Cloud SQL.
Do not forget to add a column for the datastore key; this might help you avoid duplicate rows or identify missing rows.
You might have a look at https://github.com/hudora/gaetk_replication for inspiration on the mapper functions.