Stop App Maker from creating duplicate relations

When I change the relations between two data sources using a Multi Select widget, App Maker duplicates the relations.
Exported data:
The product - bundles relation is many-to-many.
This relation is managed from the product data source via a Multi Select widget for the bundles.
Multi Select settings:
Data source: product (inherited).
Values: bound to #datasource.item.bundles.
Options: bound to #datasources.bundles.items.
Has anyone encountered this before, discovered why, or found a solution?
I have tried Googling everything I can think of to find a solution, but I have not found anyone else with this issue.
I have observed this in more than one of my relations that are managed by a Multi Select widget. I have also observed it in one of my relations that is managed via a client-side script triggered in the UI (but I'm going to assume I am at fault on that one until I find out why the Multi Selects create duplicates).
Documentation I have read to try to understand why:
Modify associations with a data binding, i.e. Multi Select widgets
Many-to-many export
EDIT
Logging the related records to the console shows that while I'm editing the relations, there is only one instance of each record. After saving the product record and reloading the app, logging to the console shows that both relations have been duplicated.
Also, the product data source is in manual-save mode and the bundle data source is not.
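For reference, the logging I mention is roughly the following client-side script (a simplified sketch; the datasource and relation names match the ones above, and the manual-save call is only shown in a comment):
// Client-side App Maker script used for the check described above.
var productDs = app.datasources.product;

// While editing the relations: each related bundle record is logged once.
productDs.item.bundles.forEach(function(bundle) {
  console.log('related bundle:', bundle);
});

// After saving the product (the product datasource is in manual-save mode,
// so changes only go to the server on saveChanges()) and reloading the app,
// the same loop logs each bundle record twice.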

Related

Preferred way of breaking up AJAX updates to multiple database tables in NodeJS

This should be a pretty common issue: let's say I'm updating a users table as well as a users_organizations table. From the UI perspective, there is only one button "Save".
I can either:
1) Create a single API route
2) Create one API route for each resource (one for users, one for users_organizations)
And then, suppose I choose 1). Should I update both tables in a single database call or should I split it up into 2 database calls?
In general I'm never sure how to approach these problems. Sometimes there's an action that affects more than 2 database tables at once. How do I ensure robustness, proper error handling, and keep my code sane all at once?
Definitely a problem I struggle with as well.
From what I've seen in the past, most operations that go along with a UI action are related, and can be given a common action name like update-user when clicking "Save". I'd have a single API endpoint to update the user, such as PUT /api/users/123 in a REST API. The body of that request would contain updated fields and new organizations the user belongs to.
Then on the server side I would make 2 database calls, one to update the user table and one to update the user_organization table.
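As a rough sketch of that approach (assuming Express and node-postgres; the table names come from the question, while the column names and route shape are just illustrative, and here the membership rows are replaced rather than diffed):
const express = require('express');
const { Pool } = require('pg');

const app = express();
app.use(express.json());
const pool = new Pool(); // connection settings come from environment variables

// One endpoint for the single "Save" action: update the user row and
// replace the user's organization memberships in the same transaction.
app.put('/api/users/:id', async (req, res) => {
  const { name, email, organizationIds } = req.body;
  const client = await pool.connect();
  try {
    await client.query('BEGIN');
    await client.query(
      'UPDATE users SET name = $1, email = $2 WHERE id = $3',
      [name, email, req.params.id]
    );
    await client.query('DELETE FROM users_organizations WHERE user_id = $1', [req.params.id]);
    for (const orgId of organizationIds) {
      await client.query(
        'INSERT INTO users_organizations (user_id, organization_id) VALUES ($1, $2)',
        [req.params.id, orgId]
      );
    }
    await client.query('COMMIT');
    res.sendStatus(204);
  } catch (err) {
    await client.query('ROLLBACK');
    res.status(500).json({ error: 'update failed' });
  } finally {
    client.release();
  }
});
Wrapping both updates in one transaction keeps the single "Save" action atomic even though it touches two tables, which also helps with the robustness and error-handling concerns raised above.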
If you feel 2 operations are so different that it's difficult to come up with a common API endpoint name, or if they need to be called independently in other parts of the app, I would argue that they should be 2 different API endpoints.
At the end of the day I try to ask, if a new developer were to try to understand this code, what would be the simplest approach?

One codebase, two clients, two versions of a Doctrine ORM entity

I have an app that collects data. It's a survey of sorts. The questions for the survey can be managed by a GUI tied to database tables in the app. But the actual answers to the questions get stored in a single table: observations. I've considered an EAV model instead, but let's set that aside for the moment. The Observation entity has over 900 properties because the survey has around that many questions. This has worked ok so far, even if it is a bit ugly in spots. But now I'm working on making this app power a new survey from a new client. It's key that I maintain the same codebase and the same git repository, but the app needs to accommodate another 700 observation properties. I added them to my entity and attempted to do a migration to create the new database columns. But alas, I hit an error telling me that the row size is too large. Too many columns!
The workaround I'd like to explore is to have multiple versions of the Observation entity. I could have one for each survey and use a config file to select the right one. But I want the selected entity to sit in the same spot in the ORM hierarchy. So, for example, if I call
$subscription->getObservation()
I want it to return the right kind of observation based on the config. It's OK if each install ends up having a table for each survey, because all but one of those tables would have 0 rows.
As mentioned above, another option would be to abandon the wide-table design and use EAV. But that approach has some major downsides.

Data migration for Couchbase documents (i.e. changing an existing field type)?

I am coming from an object-relational database background. I understand Couchbase is schema-less, but data migrations will still happen as the application develops.
In SQL we have management tools to alter tables, or I can write a migration script in SQL to migrate from a version 1 table to a version 2 table.
But with documents, say we have a JSON document UserProfile:
UserProfile
{
"Owner": "Rich guy!",
"Car": "Cool car"
}
We might want to add a last-visit field and allow a user to have multiple cars, so the new, updated document would look as follows:
UserProfile
{
"Owner": "Rich guy!",
"Car": ["Cool car", "Another car"],
"LastVisit": "2015-09-29"
}
But for easier maintenance, I want all other UserProfile documents to follow the same format, having the "Car" field as an array.
From my experience in SQL, I could write a migration script that supports migrating between different versions of a table: from a version 1 table to a version 2...N table.
So how should I write such migration code? Will I really just have to write an app (executable) using the Couchbase SDK to migrate all the documents each time?
What would be a good way to do a migration like this?
Essentially, your problem breaks down into two parts:
Finding all the documents that need to be updated.
Retrieving and updating said documents.
You can do this in one of two ways: using a view that gives you the document ids, or using a DCP stream to get all the documents from the bucket. The view only gives you the ids of the documents, so you basically iterate over all the ids, and then retrieve, update and store each one using regular key-value methods. The DCP protocol, on the other hand, gives you the actual documents.
The advantage of using a view is that it's very simple to implement, works with any language SDK, and it lets you write your own logic around the process to make it more robust and safe. The disadvantage is having to build a view just for this, and also that if the data keeps changing, you must retrieve the ENTIRE view result at once, because if you try to page over the view with offsets, the ordering of results can change, thus giving you an inconsistent snapshot of the data.
The advantage of using DCP to stream all documents is that you're guaranteed to get a consistent snapshot of your data even if it's constantly changing, and also that you get the whole document directly as part of the stream, so you don't need to retrieve it separately - just update and store back to the database. The disadvantage is that it's currently only implemented in the Java SDK and is considered an experimental feature. See this blog for a simple implementation.
The third - and most convenient for an SQL user - way to do this is through the N1QL query language that's introduced in Couchbase 4. It has the same data manipulation commands as you would expect in SQL, so you could basically issue a command along the lines of UPDATE myBucket SET prop = {'field': 'value'} WHERE condition = 'something'. The advantage of this is pretty clear: it both finds and updates the documents all at once, without writing a single line of program code. The disadvantage is that the DML commands are considered "beta" in the 4.0 release of Couchbase, and that if the data set is too large, then it might not actually work due to timing out at some point. And of course, that fact that you need Couchbase 4.0 in the first place.
I don't know of any official tool currently to help with data model migrations, but there are some helpful code snippets depending on the SDK you use (see e.g. bulk updates in java).
For now you will have to write your own script. The basic process is as follows:
Make sure all your documents have a model_version attribute that you increment after each migration.
Before a migration, update your application code so it can handle both the old and the new model_version, and so that new documents are written in the new model.
Write a script that iterates through all the old-model documents in your bucket (you need a view that emits the document key), makes the update you want, increments model_version, and saves the document back.
In a high-concurrency environment it's important to have good error handling and monitoring; for example, you could have a view that counts how many documents are in each model_version.
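A minimal sketch of such a script, assuming the Couchbase Node.js SDK 3.x and a bucket named UserProfile; instead of a view, this version finds the old-model document keys with a N1QL query (which needs an index on the bucket), but the update loop is the same either way:
const couchbase = require('couchbase');

async function migrate() {
  const cluster = await couchbase.connect('couchbase://localhost', {
    username: 'admin',
    password: 'password',
  });
  const collection = cluster.bucket('UserProfile').defaultCollection();

  // Find the keys of documents still on the old model.
  const result = await cluster.query(
    'SELECT META(p).id AS id FROM `UserProfile` AS p ' +
    'WHERE p.model_version IS MISSING OR p.model_version < 2'
  );

  for (const row of result.rows) {
    const got = await collection.get(row.id);
    const doc = got.content;

    // Apply the model changes: Car becomes an array, LastVisit is added,
    // and the version is bumped.
    if (!Array.isArray(doc.Car)) doc.Car = [doc.Car];
    if (!('LastVisit' in doc)) doc.LastVisit = null;
    doc.model_version = 2;

    // Pass the CAS value from the get so a concurrent update is not overwritten.
    await collection.replace(row.id, doc, { cas: got.cas });
  }
  await cluster.close();
}

migrate().catch(console.error);
The CAS check on replace is what protects you if another process updates a document while the migration is running.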
You can use Couchmove, which is a Java migration tool that works like Flyway DB.
You can execute N1QL queries with this tool to migrate your documents and keep track of your changes.
If I understood correctly, the crux here is getting and then updating every Couchbase document. This can be done with a view, provided that you understand that views are only 'eventually consistent' (unlike read/write operations, which are strongly consistent).
If (at migration time) no new documents are added to your bucket, then your view would be up to date and should return the entire set of documents to be migrated. Easy.
On the other hand, if new documents continue to be written into your bucket, and these documents need to be migrated, then you will have to run your migration code continually to catch all these new docs (since the view won't return them until it is updated, a few seconds later).
In this 2nd scenario, while migration is happening, your bucket will contain a heterogeneous collection of docs: some that have been migrated already, some that are about to be migrated and some that your view has not 'seen' yet (because they were recently added) and would only be migrated once you re-run the migration code.
To make the migration process efficient, you'll need to find a way to differentiate between already-migrated items and yet-to-be-migrated items. You can add a field to each doc with its 'version number' and update it during the migration. Your view should be defined to only select documents with older 'version number' and ignore already-migrated items.
I suggest you read more about Couchbase views - here and on their site.
Regarding your migration: There are two aspects here: (1) getting the list of document ids that need to be updated and (2) the actual update.
The actual update is simple: you retrieve the doc and save it again with the new format. There's no explicit schema. Where once you added a column in SQL and populated it, you now just add a field to the JSON doc (in all the docs). All migrated docs should have this field. Side note: things get a little more complicated if (while you're migrating) a document can be updated by another process. This requires special handling (read about CAS if that's the case).
Getting all the relevant doc keys requires that you define a view and query it. It's beyond the scope of this answer (and is very well documented). Once you have all the keys, you simply iterate over them one by one and update them.
With N1QL, Couchbase provides the same schema migration capabilities as you have in an RDBMS or object-relational database. For the example in your question, you can place the following query in a migration script:
UPDATE UserProfile
SET Car = TO_ARRAY(Car),
LastVisit = NOW_STR();
This will migrate all the documents in your bucket to your new schema. Note that update statements in Couchbase provide document-level atomicity, not statement-level atomicity. But since this update is idempotent (repeatable), you can run it multiple times if you run into errors. Note: similar to the last paragraph of David's answer above.

Making a unique composite index in SharePoint 2013

I am developing an Access database front-end where the database resides in a SharePoint list. There is an Attendance table with AttDate and StaffID columns, among other columns.
What I want to achieve: only one record is added per staff member per day, i.e. only one attendance is recorded in a day. When a user tries to enter attendance for the same staff member again on the same day, they should get an error.
When the back-end was an Access file, I had created an index on the 2 columns and set the index to "allow unique values only". The screen looks like this.
Now that I am moving my back-end to SharePoint, I was expecting the same functionality. But moving the tables to SharePoint using the Access 2013 wizard did not create the index. Hence I thought creating it manually would solve the problem, so I created an index with the 2 columns. See screenshot below.
When I entered data, it still allowed multiple values; see the screenshot below.
Please help with what the solution to this problem could be. I am allowed to change the existing table structure if the solution so demands. Any workaround will also be helpful.
SharePoint indexing is more about making it faster to retrieve and search for items in SharePoint. It has nothing to do with unique constraints.
You're going to have to add something to your SharePoint instance that will perform this check for you.
You haven't mentioned whether you're using SharePoint Online or on-premises. You do say that you're using an Access front-end. This typically means you'll need to use an event receiver, which will involve C# (or VB.NET) programming.
Workflows wouldn't prevent the duplicate row from being saved
JavaScript would help if users go through the SharePoint UI, but it won't prevent writes coming through services (see the sketch below)
You do mention that you're using an Access front-end. Maybe you can add some business logic in your Access file?
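To illustrate the JavaScript option: a rough sketch of a client-side duplicate check against the list's REST API before saving from a SharePoint 2013 page. The list and column names are taken from the question, the internal column names and date-filter format may need adjusting for your site, and this only guards the SharePoint UI, not the Access front-end or other services.
// Rough client-side check: is there already an Attendance item for this
// staff member on this date? (Column names are assumed from the question.)
function hasAttendanceForDay(staffId, attDateIso, callback) {
  var url = _spPageContextInfo.webAbsoluteUrl +
    "/_api/web/lists/getbytitle('Attendance')/items?$top=1&$filter=" +
    encodeURIComponent("StaffID eq " + staffId +
      " and AttDate eq datetime'" + attDateIso + "'");

  var xhr = new XMLHttpRequest();
  xhr.open('GET', url, true);
  xhr.setRequestHeader('Accept', 'application/json;odata=verbose');
  xhr.onload = function () {
    var data = JSON.parse(xhr.responseText);
    callback(data.d.results.length > 0);
  };
  xhr.send();
}

// Usage: warn the user and skip the save if a record already exists.
hasAttendanceForDay(42, '2015-09-29T00:00:00Z', function (exists) {
  if (exists) {
    alert('Attendance for this staff member has already been recorded for this day.');
  }
});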
Hope this helps

Link SharePoint List to Access 2010 - User Information Lookup

I have a lookup list that is used in a custom solution to provide information about a specific location. This list includes columns of type People and Groups.
Given the number of locations that will be available, I'd be very keen for the list to be maintained in, and imported from, an Access database. I do something similar with my configurations list, which works great. It just means I can rapidly deploy all configurations across different environments.
The problem I have encountered is that it doesn't seem to handle columns of the People and Groups data type. The lookup is not available in Access. From what little I have found online, I'm not even sure if this is possible. This article suggests that Access automatically creates a link to the UserInfo table. Even with this link, I cannot look up values.
Can someone please let me know if this is possible or a limitation and cannot be achieved when linking a list to Access?
This is absolutely possible, and MS Access should automatically link any dependent lists when you import a parent list that has lookups.
Check to make sure you don't have multiple UserInfo lists linked, like UserInfo1, UserInfo2, etc. If so, delete all your linked SP lists and relink.