How to get the current temporary "anonymous" active user key? - google-apps-script

The reference guide for Session.getTemporaryActiveUserKey() says the method returns "a temporary key that is unique to the active user" and that this key "rotates every 30 days and is unique to the script". In practice the method also returns a temporary key for "anonymous" sessions, indicating that no user is currently logged in. Is this temporary "anonymous user" key stored somewhere? Since it changes every 30 days, is there any method to get the current "anonymous" key so it can be compared against the one returned by getTemporaryActiveUserKey()? Thanks!

I've recently built something using this very parameter, so I think I just might know what you need.
To your question, though:
Is this temporary "anonymous user" key stored somewhere?
The answer is no: the key returned by Session.getTemporaryActiveUserKey() is not stored anywhere automatically.
You can, however, store it yourself using PropertiesService (or practically any other store you're comfortable accessing, such as Sheets, Notes, etc.), and one implementation of this can be found here.
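As a sketch of that store-and-compare pattern (in Python, with a plain dict standing in for PropertiesService; get_temporary_active_user_key is a stub for the real Apps Script API, and the function name is made up for illustration):

```python
# Sketch of the store-and-compare idea: remember the key you saw for an
# anonymous session, then compare later keys against it. In Apps Script
# the store would be PropertiesService; here a dict stands in.
store = {}

def get_temporary_active_user_key():
    # Stand-in for Session.getTemporaryActiveUserKey(): an opaque token
    # that rotates every 30 days.
    return "anon-key-abc123"

def is_known_anonymous_key():
    """Return True if the current key matches the previously stored one."""
    key = get_temporary_active_user_key()
    known = store.get("ANON_KEY")
    if known is None:
        store["ANON_KEY"] = key   # first sighting: remember it
        return False
    return key == known

print(is_known_anonymous_key())  # False: first sighting, key gets stored
print(is_known_anonymous_key())  # True: matches the stored key
```

Note the stored copy goes stale when the key rotates, so you would re-store it whenever the comparison fails.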
Hope this helps.

Related

MySQL: possible to add a constraint that prevents a one-to-many relation from having fewer than a certain number of related rows?

I have a user table with many related rows in, say, a user_property table, where the foreign key user_id is stored in user_property.
Is it possible to add a constraint so that a user must have at least one user property? When a user has five properties, he can delete them one by one, but when there is only one property left, he cannot delete it. I tried Googling, but I am not even sure what the search keyword for this is.
The reason is that I want to avoid checking from the application layer whether a user has only one property remaining: the application reads from a replica, so reads and writes might not be synchronized, and under certain conditions a user might accidentally delete all of his properties.
Any suggestions or different approaches are appreciated.
I don't think you can do this with a constraint. The problem is handling new users. You cannot insert a new user, because it has no properties. You cannot insert a new property, because the user reference is not valid. Ouch!
One solution involves triggers. The idea is the following:
Add to the users table a column for the current number of properties.
Add to the users table a column for the maximum number of properties ever reached.
Default both values to 0 for new users.
Add a check constraint (or trigger) so that whenever the maximum is > 0, the current number must also be > 0.
In any database, you need triggers (on user_property) to maintain the first two counts. MySQL (before version 8.0.16) parses but does not enforce CHECK constraints, so the last condition also requires a trigger.
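The counter-plus-trigger idea above can be sketched in SQLite via Python's sqlite3 (MySQL's trigger syntax differs, and all table and column names here are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users (
    id        INTEGER PRIMARY KEY,
    n_props   INTEGER NOT NULL DEFAULT 0,  -- current property count
    max_props INTEGER NOT NULL DEFAULT 0   -- highest count ever reached
);
CREATE TABLE user_property (
    user_id INTEGER NOT NULL REFERENCES users(id),
    name    TEXT NOT NULL
);
-- Keep the counters up to date on insert.
CREATE TRIGGER prop_ins AFTER INSERT ON user_property
BEGIN
    UPDATE users
       SET n_props = n_props + 1,
           max_props = MAX(max_props, n_props + 1)
     WHERE id = NEW.user_id;
END;
-- Refuse a delete that would leave an established user with zero properties.
CREATE TRIGGER prop_del BEFORE DELETE ON user_property
BEGIN
    SELECT RAISE(ABORT, 'user must keep at least one property')
     WHERE (SELECT n_props FROM users WHERE id = OLD.user_id) <= 1;
    UPDATE users SET n_props = n_props - 1 WHERE id = OLD.user_id;
END;
""")

conn.execute("INSERT INTO users (id) VALUES (1)")
conn.execute("INSERT INTO user_property VALUES (1, 'a')")
conn.execute("INSERT INTO user_property VALUES (1, 'b')")
conn.execute("DELETE FROM user_property WHERE name = 'b'")  # fine: one left
try:
    conn.execute("DELETE FROM user_property WHERE name = 'a'")
except sqlite3.IntegrityError as e:
    print(e)  # the trigger aborted the delete of the last property
```

Note that a brand-new user (max_props = 0) can still be inserted with no properties, which is exactly the bootstrapping problem the answer describes.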
There is no constraint in SQL that does what you describe.
A foreign key constraint would ensure that every row in user_property must reference a row that exists in the user table.
But there is no constraint in SQL that does the reverse: ensure every user is referenced by at least one row in user_property.
A CHECK constraint has been mentioned by some other comments and answers. But a CHECK constraint can reference only columns of the same row. It can't reference other rows of the same table or different tables.
The most straightforward solution is to handle this in application code. That is:
Implement a function that INSERTs into user while making sure there's also an INSERT of the first row into user_property.
Implement a function that DELETEs from user_property, but first checks whether the delete would leave zero properties for the given user_id. If so, return an error instead of deleting the row.
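A minimal sketch of those two functions, using Python's sqlite3 (function and table names are made up; note the check-then-delete still has a race window unless you serialize it, e.g. with SELECT ... FOR UPDATE in MySQL/InnoDB):

```python
import sqlite3

def create_user(conn, user_id, first_property):
    """Insert a user and their first property in one transaction."""
    with conn:  # commits on success, rolls back on error
        conn.execute("INSERT INTO users (id) VALUES (?)", (user_id,))
        conn.execute("INSERT INTO user_property (user_id, name) VALUES (?, ?)",
                     (user_id, first_property))

def delete_property(conn, user_id, name):
    """Delete a property, refusing to remove the user's last one."""
    with conn:
        (count,) = conn.execute(
            "SELECT COUNT(*) FROM user_property WHERE user_id = ?",
            (user_id,)).fetchone()
        if count <= 1:
            raise ValueError("cannot delete a user's last property")
        conn.execute("DELETE FROM user_property WHERE user_id = ? AND name = ?",
                     (user_id, name))

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users (id INTEGER PRIMARY KEY);
CREATE TABLE user_property (user_id INTEGER REFERENCES users(id), name TEXT);
""")
create_user(conn, 1, "a")
conn.execute("INSERT INTO user_property VALUES (1, 'b')")
delete_property(conn, 1, "b")       # fine: one property remains
try:
    delete_property(conn, 1, "a")   # refused: it's the last one
except ValueError as e:
    print(e)
```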
Implementing such data integrity rules in application code comes with a risk, of course. What if you have multiple apps that access the same database? You need to implement the same rules in different apps. Perhaps even in different programming languages. Sounds like a PITA.
Nevertheless, not all business rules can be implemented with simple SQL declarative constraints.

Specific format for user ID as primary in SQL database

I am in the process of creating a users table and would like each user to have a random user ID that will be the primary key, assigned automatically in an auto-increment/auto-generated sort of way.
The format will be something like 246b6fe6-d1f2-4961-ae07-08f3057e0a13, and each newly created user would need to be auto-assigned an internal "Person ID" in this format.
I don't think MySQL is able to handle this type of unique ID (but I might be wrong).
Should I have a script generate random strings until one is found that isn't already assigned, and then use it to create the entry in the table, or is there an easier way that I'm just not yet aware of?
Thanks for your help!
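Not an authoritative answer, but the format in the question is a version-4 UUID, and MySQL can store those in a CHAR(36) or BINARY(16) column (MySQL also has a UUID() function, and 8.0 adds UUID_TO_BIN). Random collisions are negligible, so a retry loop is mostly a safety net. A sketch with Python's uuid and sqlite3 (table name is made up):

```python
import sqlite3
import uuid

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (person_id TEXT PRIMARY KEY, name TEXT)")

def create_user(conn, name):
    """Generate a random UUID, retrying on the (vanishingly rare) collision."""
    while True:
        person_id = str(uuid.uuid4())  # same shape as 246b6fe6-d1f2-4961-ae07-08f3057e0a13
        try:
            with conn:
                conn.execute("INSERT INTO users VALUES (?, ?)", (person_id, name))
            return person_id
        except sqlite3.IntegrityError:
            continue  # key already taken: generate another

pid = create_user(conn, "alice")
print(len(pid))  # 36 characters: 32 hex digits plus 4 hyphens
```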

Using Database Primary Key in HTML ID

Just wanted to ask.
I have a site where each user is linked to an ID in the database, and this primary key appears in many tables. The fastest way for me to pull a user's information is to have this ID.
Would it be considered bad practice to put this ID in the website's HTML, e.g. id="theIDnumber"?
Otherwise I can just use the username and look up the ID in the database, which is fine, but using the ID directly would be faster, I believe.
Thoughts?
I'd advise against it if your keys are predictable. A trivial example: if you are using sequentially incrementing primary keys, users can extract information that could be a privacy concern, e.g. they can infer which accounts were created before their own. Life also becomes easy for those trying to systematically leech information from your site.
Some related reading
https://stackoverflow.com/a/7452072/781695
You give your end users the opportunity to mess with those variables and pass any data that they like. The counter measure to mitigate this vulnerability is to create indirect object references instead. This may sound like a big change, but it does not necessarily have to be. You don't have to go and rekey all your tables or anything, you can do it just by being clever with your data through the use of an indirect reference map.
https://security.stackexchange.com/a/33524/37949
Hiding database keys isn't exactly required, but it does make life more difficult if an attacker is trying to reference internal IDs in an attack. Direct references to file names and other such internal identifiers can allow attackers to map the internal structure of the server, which might be useful in other attacks. This also invites path injection and directory traversal problems.
https://www.owasp.org/index.php/Insecure_Direct_Object_Reference_Prevention_Cheat_Sheet
An object reference map is first populated with a list of authorized values which are temporarily stored in the session. When the user requests a field (ex: color=654321), the application does a lookup in this map from the session to determine the appropriate column name. If the value does not exist in this limited map, the user is not authorized. Reference maps should not be global (i.e. include every possible value), they are temporary maps/dictionaries that are only ever populated with authorized values.
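A rough sketch of such an indirect reference map (class and method names are invented for illustration):

```python
import secrets

class ReferenceMap:
    """Per-session map from random, opaque tokens to internal database IDs."""
    def __init__(self):
        self._by_token = {}

    def token_for(self, db_id):
        """Hand out an unguessable token to embed in the page instead of db_id."""
        token = secrets.token_urlsafe(8)
        self._by_token[token] = db_id
        return token

    def resolve(self, token):
        """Map a token from the request back to the real ID, or refuse."""
        if token not in self._by_token:
            # Unknown token: this session was never given that reference.
            raise PermissionError("unauthorized object reference")
        return self._by_token[token]

session_map = ReferenceMap()
tok = session_map.token_for(42)      # emit tok in the HTML instead of 42
print(session_map.resolve(tok))      # 42
```

The map lives in the user's session, so it is only ever populated with IDs that user is authorized to see, matching the OWASP description above.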

A Never Delete Relational DB Schema Design

I am considering designing a relational DB schema for a DB that never actually deletes anything (sets a deleted flag or something).
1) What metadata columns are typically used to accommodate such an architecture? Obviously a boolean IsDeleted flag can be set. Or maybe just a timestamp in a Deleted column works better, or possibly both. I'm not sure which method will cause me more problems in the long run.
2) How are updates typically handled in such architectures? If you mark the old value as deleted and insert a new one, you will run into PK unique constraint issues (e.g. if you have PK column id, then the new row must have the same id as the one you just marked as invalid, or else all of your foreign keys in other tables for that id will be rendered useless).
If your goal is auditing, I'd create a shadow table for each table you have. Add some triggers that get fired on update and delete and insert a copy of the row into the shadow table.
Here are some additional questions that you'll also want to consider
How often do deletes occur, and what's your performance budget like? This can affect your choices. The answer will be different depending on whether a user deletes a single row (say, an answer on a Q&A site) or records are deleted on an hourly basis from a feed.
How are you going to expose the deleted records in your system? Is it only for administrative purposes, or can any user see deleted records? This makes a difference because you'll probably need a filtering mechanism that depends on the user.
How will foreign key constraints work? Can one table reference a deleted record in another table?
When you add or alter existing tables, what happens to the deleted records?
Typically, the systems that care a lot about audit use shadow tables as Steve Prentice mentioned. These often have every field from the original table with all the constraints turned off, an action field to distinguish updates from deletes, and a date/timestamp of the change along with the user who made it.
For an example see the PostHistory Table at https://data.stackexchange.com/stackoverflow/query/new
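The shadow-table-with-triggers approach from the answer above, sketched in SQLite via Python's sqlite3 (MySQL trigger syntax differs; table and column names are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE widget (id INTEGER PRIMARY KEY, name TEXT);
-- Shadow table: same columns, constraints off, plus audit metadata.
CREATE TABLE widget_shadow (
    id         INTEGER,
    name       TEXT,
    action     TEXT,                          -- 'U' = update, 'D' = delete
    changed_at TEXT DEFAULT (datetime('now'))
);
-- Copy the pre-update image of the row into the shadow table.
CREATE TRIGGER widget_upd AFTER UPDATE ON widget
BEGIN
    INSERT INTO widget_shadow (id, name, action) VALUES (OLD.id, OLD.name, 'U');
END;
-- Copy the row being deleted into the shadow table.
CREATE TRIGGER widget_del AFTER DELETE ON widget
BEGIN
    INSERT INTO widget_shadow (id, name, action) VALUES (OLD.id, OLD.name, 'D');
END;
""")

conn.execute("INSERT INTO widget VALUES (1, 'old name')")
conn.execute("UPDATE widget SET name = 'new name' WHERE id = 1")
conn.execute("DELETE FROM widget WHERE id = 1")
for row in conn.execute("SELECT id, name, action FROM widget_shadow"):
    print(row)  # (1, 'old name', 'U') then (1, 'new name', 'D')
```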
I think what you're looking for here is typically referred to as "knowledge dating".
In this case, your primary key would be your regular key plus the knowledge start date.
Your end date might either be null for a current record or an "end of time" sentinel.
On an update, you'd typically set the end date of the current record to "now" and insert a new record that starts at the same "now" with the new values.
On a "delete", you'd just set the end date to "now".
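That scheme can be sketched like this (SQLite via Python's sqlite3; table and column names are illustrative, and a NULL end date stands in for the "end of time" sentinel):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
CREATE TABLE account (
    id         INTEGER,
    valid_from TEXT,
    valid_to   TEXT,              -- NULL marks the current record
    email      TEXT,
    PRIMARY KEY (id, valid_from)  -- regular key plus knowledge start date
)""")

def update_email(conn, acct_id, email, now):
    """Close the current record at `now` and start a new one at `now`."""
    with conn:
        conn.execute(
            "UPDATE account SET valid_to = ? WHERE id = ? AND valid_to IS NULL",
            (now, acct_id))
        conn.execute("INSERT INTO account VALUES (?, ?, NULL, ?)",
                     (acct_id, now, email))

update_email(conn, 1, "a@example.com", "2020-01-01")
update_email(conn, 1, "b@example.com", "2020-02-01")
current = conn.execute(
    "SELECT email FROM account WHERE id = 1 AND valid_to IS NULL").fetchone()
print(current)  # ('b@example.com',) -- the old value survives as history
```

A "delete" would be the UPDATE step alone, with no new record inserted.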
I've done that.
2.a) A version number solves the unique-constraint issue, although that's really just relaxing the uniqueness, isn't it?
2.b) You can also archive the old versions into another table.

I am getting persistent but intermittent "Violation of PRIMARY KEY constraint" Errors

I am the person in my company who tries to solve ColdFusion errors and bugs. We get daily emails with full details of ColdFusion errors, and we also store this information in our database.
A few different ColdFusion applications seem to sporadically generate "Violation of PRIMARY KEY constraint" errors.
In the code we always check for the existence of a row in the database before we try to do an insert, and it still generates that error.
So my thinking is that we need a cftransaction around each of the check/insert/update blocks, but I am not sure this will truly solve the problem.
These are coded in standard ColdFusion style/framework. Here is an example in pseudo-code (the WHERE clause and #someID# are illustrative; also note the branch: insert only when the check finds no row):

<cfquery name="check_sometable" datasource="#dsn#">
    SELECT id
    FROM sometable
    WHERE id = <cfqueryparam value="#someID#" cfsqltype="cf_sql_integer">
</cfquery>

<cfif check_sometable.recordcount EQ 0>
    <!--- do insert --->
<cfelse>
    <!--- do update --->
</cfif>
So why would this intermittently cause primary key violations?
Is this a SQL Server problem? Are we missing a configuration option?
Are we getting all of this because we are not on the latest hotfixed version of ColdFusion 8 Standard?
Do we need to upgrade our JDBC/ODBC drivers?
Thank you.
Sounds like a race condition to me. Two connections check for the next available id at the same time, get the same one, and then the insert fails for the second one. Why are you not using an identity field to create the PK if it is a surrogate key?
If you have a PK that is a natural key, then the violation is a good thing, you have two users trying to insert the same record which you do not want. I would try to fail it gracefully though, with an error that says someone else has created the same record. And then ask if they want to update it after loading the new values to their screen. I'm not sure I would want it to set up so that the data is automatically updated by the second person without them seeing what the first person put into the database.
Further, this might be an indication that your natural key is not as unique as you think it is. I'm not sure what this application does, but how likely is it that two people would want to be working with the same data at the same time? If your natural key is something like a company name, be aware that company names are not guaranteed to be unique, and you might already have users overwriting good data for one company with data for another. I've found that in practice there are truly very few really unique, never-changing natural keys. So if your natural key really isn't unique, you may already have bad data, and the PK violations are just a symptom of a different problem, not the real problem.
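One way to remove the race entirely is to make the check-and-insert a single atomic statement: MySQL has INSERT ... ON DUPLICATE KEY UPDATE, and SQL Server has MERGE (with appropriate locking hints). The same idea, sketched with SQLite's INSERT ... ON CONFLICT (SQLite 3.24+) via Python's sqlite3; the table mirrors the pseudo-code's sometable:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sometable (id INTEGER PRIMARY KEY, val TEXT)")

def upsert(conn, row_id, val):
    """One atomic statement instead of a separate existence check."""
    with conn:
        conn.execute("""
            INSERT INTO sometable (id, val) VALUES (?, ?)
            ON CONFLICT(id) DO UPDATE SET val = excluded.val
        """, (row_id, val))

upsert(conn, 7, "first")
upsert(conn, 7, "second")  # no PK violation: the row is updated in place
print(conn.execute("SELECT val FROM sometable WHERE id = 7").fetchone())
```

Because the database resolves the conflict itself, two concurrent callers can no longer both pass the existence check and then collide on the insert.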